Meta’s Mark Zuckerberg on Wednesday unveiled a prototype for lightweight augmented reality glasses dubbed Orion, as the Big Tech race to build the next computing platform intensifies.
During a presentation at Meta’s annual Connect conference, Zuckerberg said the glasses were the “most advanced” in the world, marking the “culmination of decades of breakthrough inventions” and one of the hardest challenges the tech industry has ever seen.
With holographic displays, the glasses can overlay 2D and 3D content on the real world, and use artificial intelligence to analyse the wearer’s surroundings and “proactively” offer suggestions in the display, Meta said. Its shares rose more than 2 per cent following the announcement.
The prototype will only be available for use internally and for some developers to build on. Zuckerberg conceded that, to be ready for consumers, the glasses needed to be “smaller” and “more fashionable”, with a sharper display and a lower price, adding: “we have line of sight to all those things”.
A Meta video showcasing the Orion technology featured praise from several Silicon Valley stars, including Nvidia founder and chief executive Jensen Huang and Reddit chief executive Steve Huffman.
Meta bought virtual reality headset maker Oculus in 2014 and continues to develop fully immersive VR headsets under the Quest brand. Five years ago it announced it was aiming to build AR glasses.
The race to build AR smart “glasses” is heating up. Evan Spiegel’s Snap, a smaller rival to Meta, revealed its latest version of AR glasses last week. Meta and Snap are hoping to bypass Apple’s operating system in a bet that immersive glasses will one day replace smartphones. Tech companies have for years been trying to develop AR wearables, such as the now-discontinued Google Glass, but these have so far failed to excite consumers.
Zuckerberg touted Orion’s AI capabilities, saying the rapid development of large language models, including Meta’s own Llama, had led to “a new AI-centric device category”. As well as having Meta AI and hand-tracking embedded, he said the Orion glasses would use a wristband that reads signals from the body, including those sent by the brain, acting as a neural interface.
“Voice is great, but the thing is sometimes public, and you don’t want to say what you’re trying to do . . . I think that you need a device that allows you to just send a signal from your brain to the device,” Zuckerberg said.
Meta has previously invested significant resources in electromyography, or EMG, a technique that uses sensors to translate the motor nerve signals travelling through the wrist to the hand into digital commands that can control a device. Recent research has used EMG in attempts to decode how the brain signals the hands to perform actions such as typing and swiping.
In 2019, Meta acquired CTRL-labs, a US start-up developing technology to let people control electronic devices with their brains, for about $1bn.
The company on Wednesday also released new product updates powered by its improvements in generative AI, as tech companies jockey to deploy the fast-developing technology. These included the latest iteration of its large language model, Llama 3.2, its “first major vision model”, which can understand charts, graphs and documents.
“Now that Llama is at the frontier in terms of capabilities, I think we’ve reached an inflection point in the industry where it’s starting to become something of an industry standard, or sort of like the Linux of AI,” Zuckerberg said of the new model.
He said its Meta AI chatbot, which uses Llama, was “on track to being the most-used AI assistant” with 500mn monthly active users.
Meta also announced an updated version of its Ray-Ban smart glasses, which do not have AR displays but now allow users to analyse photos or videos in real time through a voice interface powered by Meta AI. Additional features included setting reminders, such as remembering where a car is parked; calling numbers found in documents; scanning QR codes; and translating live conversations between languages.