Mark Zuckerberg, chief executive officer of Meta Platforms Inc., wears a pair of Meta Ray-Ban Display AI glasses during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.
David Paul Morris | Bloomberg | Getty Images
The real surprise of the new $799 Meta Ray-Ban Display glasses isn’t the augmented reality itself, but the accompanying neural interface: a fuzzy, gray wristband that promises intuitive control unlike anything seen before in consumer AR devices.
At its annual Connect event on Wednesday, Meta unveiled its next-generation smart glasses. As the company's first consumer-ready glasses with a built-in display, they represent a tangible step toward CEO Mark Zuckerberg's vision of a future in which headsets and glasses supersede smartphones as the primary computing platform.
While the display on these new glasses remains relatively simple compared with past prototypes such as last year's Orion glasses, which could overlay complex 3D visuals onto the real world, the Meta Ray-Ban Display marks a shift from purely experimental technology to a commercially available product. The Orion glasses required a separate computing unit, were designed primarily for demonstration purposes and were never intended for consumer release.
The Meta Ray-Ban Display, in contrast, will be available to the public, launching in the U.S. on Sept. 30.
Featuring a small digital display in the right lens, the glasses introduce a set of visual functions, including reading messages, previewing photos and viewing live captions during conversations. This targeted approach prioritizes utility over immersive entertainment, signaling a strategic decision to focus on practical applications of AR technology.
However, the user experience hinges heavily on the EMG (electromyography) sensor wristband, which detects the electrical signals produced by muscle activity in the wrist and translates them into precise hand-gesture control of the glasses. While the wristband feels similar to a standard watch, the subtle electrical pulse upon activation creates an immediate awareness of the technology at play, hinting at the complex neuro-computer interface underpinning its functionality.
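Meta has not published details of the Neural Band's signal pipeline, but a generic surface-EMG gesture recognizer gives a feel for what such a device must do: filter the raw muscle signals, window them, extract simple features and classify the result. The sketch below is purely illustrative; the sample rate, channel count and nearest-template classifier are assumptions, not Meta's implementation.

```python
# Illustrative sketch only: Meta has not published the Neural Band's actual
# pipeline. This shows the generic shape of surface-EMG gesture recognition:
# band-pass filter raw muscle signals, window them, extract simple features,
# and feed them to a classifier. All names and parameters are assumptions.
import numpy as np

SAMPLE_RATE = 2000          # Hz, plausible for surface EMG (assumed)
WINDOW = 200                # samples per classification window (100 ms)
CHANNELS = 8                # hypothetical electrode count around the wrist

def bandpass(signal, low=20.0, high=450.0, rate=SAMPLE_RATE):
    """Crude FFT band-pass: EMG energy mostly lives between ~20-450 Hz."""
    spectrum = np.fft.rfft(signal, axis=0)
    freqs = np.fft.rfftfreq(signal.shape[0], d=1.0 / rate)
    spectrum[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spectrum, n=signal.shape[0], axis=0)

def features(window):
    """Per-channel RMS amplitude: a standard, cheap EMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def classify(feat, templates):
    """Nearest-template matching stands in for Meta's (unknown) model."""
    names = list(templates)
    dists = [np.linalg.norm(feat - templates[n]) for n in names]
    return names[int(np.argmin(dists))]

# Synthetic demo: pretend 'pinch' lights up channels 0-1, 'swipe' channels 4-5.
templates = {
    "pinch": np.array([1.0, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]),
    "swipe": np.array([0.1, 0.1, 0.1, 0.1, 1.0, 0.8, 0.1, 0.1]),
}
raw = np.random.randn(WINDOW, CHANNELS) * 0.05
raw[:, :2] += np.sin(np.linspace(0, 60, WINDOW))[:, None]  # fake pinch burst
print(classify(features(bandpass(raw)), templates))        # expected: "pinch"
```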
The glasses themselves are comfortable to wear, and the miniature display, positioned toward the lower right of the wearer's field of view, presents a unique viewing experience. It simulates a miniaturized smartphone screen but maintains translucency to avoid obstructing the user's view of the surrounding environment.
Despite the display's high resolution, icons can lose some clarity where they overlay the real-world environment, and letters can appear slightly blurred. The visual experience prioritizes functionality over high-fidelity immersion, catering to simple actions like activating the camera or selecting songs on Spotify. The glasses are more about practical assistance than full-fledged immersive entertainment.
The Meta Ray-Ban Display AI glasses with the Meta Neural Band wristband at Meta headquarters in Menlo Park, California, US, on Tuesday, Sept. 16, 2025.
David Paul Morris | Bloomberg | Getty Images
The most engaging aspect of the demonstration involved experimenting with hand gestures to navigate the display and launch applications. Clenching a fist and swiping a thumb across the index finger emulates a touchpad, enabling scrolling through applications.
Initially, launching the camera app required multiple attempts. The pinching motion between the index finger and thumb, intended to activate the app, proved to be an exercise in precision. The natural reaction to double-pinch, mimicking "double-clicking" a mouse, revealed the learning curve associated with this new form of input. The fine motor skills needed for consistent app activation are not as immediately intuitive as a mouse.
The process of continually pinching fingers to interact with the screen can look somewhat comical to onlookers, but it becomes a visible translation of intent into action, a bridge between human thought and machine response.
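The double-pinch confusion hints at a classic input-handling trade-off: a decoder that waits to see whether a second pinch follows (as mouse drivers do for double-clicks) adds latency to every single pinch, while one that fires immediately misreads a reflexive second pinch as a separate command. A minimal sketch of that timing logic, with an assumed 350 ms window rather than anything Meta has disclosed, might look like this:

```python
# Hypothetical sketch of the single-vs-double pinch trade-off. The window
# length is an assumption; Meta has not disclosed its gesture timing.
import time

DOUBLE_WINDOW = 0.35  # seconds within which a 2nd pinch counts as a double

class PinchDecoder:
    def __init__(self):
        self.last_pinch = None  # timestamp of an undecided first pinch

    def on_pinch(self, now):
        """Returns a resolved gesture, or None while still ambiguous."""
        if self.last_pinch is not None and now - self.last_pinch <= DOUBLE_WINDOW:
            self.last_pinch = None
            return "double_pinch"
        self.last_pinch = now
        return None

    def poll(self, now):
        """Call periodically: promotes a lone pinch once the window closes."""
        if self.last_pinch is not None and now - self.last_pinch > DOUBLE_WINDOW:
            self.last_pinch = None
            return "single_pinch"
        return None

decoder = PinchDecoder()
t = time.monotonic()
assert decoder.on_pinch(t) is None                  # first pinch: wait and see
assert decoder.on_pinch(t + 0.2) == "double_pinch"  # quick follow-up: double
assert decoder.on_pinch(t + 1.0) is None            # new lone pinch
assert decoder.poll(t + 1.5) == "single_pinch"      # window closed: single
```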
Once the camera app is activated, the display previews the field of view, providing a picture-in-picture representation of the final image or video. This function brings practical value, allowing users to frame shots with greater confidence.
However, the constant presence of the display demands cognitive effort as the eyes continually adjust their focus to perceive both the physical world and the digital overlay. Users may experience a split between reality and the digital interface.
Aside from gestures, the Meta Ray-Ban Display glasses can also be controlled with Meta AI voice commands, an interaction method carried over from their predecessors.
During a demonstration, an attempt to use Meta AI to analyze the paintings decorating the demo room faltered, though the assistant was still able to pull up a description of the Bauhaus-style works.
Furthermore, the live caption feature demonstrated its potential utility in noisy environments, transcribing speech in real time even amid the loud music of the Connect event.
The neural wristband emerged as the most compelling feature of the experience when paired with Spotify. Adjusting the volume by rotating the thumb and index finger, as if turning an invisible knob, lends a tactile quality to digital control.
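Meta hasn't documented how the knob gesture maps to playback volume, but a natural implementation would accumulate the rotation angle reported by the gesture decoder into a clamped level. In this hypothetical sketch, the degrees-per-full-scale constant is an assumption:

```python
# Illustrative only: Meta hasn't documented the knob gesture's mapping. A
# natural approach integrates rotation deltas from the gesture decoder and
# clamps them to a volume range. The scale constant is an assumption.
DEGREES_PER_FULL_SCALE = 270.0  # assumed: ~3/4 turn spans silence to max

class VolumeKnob:
    def __init__(self, volume=0.5):
        self.volume = volume  # 0.0 .. 1.0

    def on_rotation(self, delta_degrees):
        """Accumulate rotation deltas into a clamped volume level."""
        self.volume += delta_degrees / DEGREES_PER_FULL_SCALE
        self.volume = min(1.0, max(0.0, self.volume))
        return self.volume

knob = VolumeKnob()
for step in (30, 30, -90):        # twist up twice, then back down
    print(round(knob.on_rotation(step), 3))
# prints 0.611, 0.722, 0.389
```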
The Meta Ray-Ban Display glasses make clear where Meta believes this technology is headed. While the $799 price may deter some consumers, the device's unique functionality could attract developers looking to expand the ecosystem and build intuitive augmented reality applications.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/9680.html