Meta unveils new mixed reality Quest 3S headset, updated AI models and next-gen AR glasses
Meta Platforms Inc. took the stage today at Meta Connect 2024 to announce innovations in mixed reality and artificial intelligence with the introduction of the Quest 3S headset, AR glasses and updated AI models.
The new device has the same mixed reality capabilities and features as the Meta Quest 3, but at a lower price of $299.99. It’s designed as a midrange upgrade for users looking for something better than the Quest 2, and it has the same processor, hand tracking and Touch Plus controllers as the Quest 3.
It features 1,832-by-1,920-pixel resolution per eye using Fresnel lenses, a switch from the pancake lens stack used in the Quest 3, and Meta says it delivers 4.5 times the resolution of the Quest 2 for better clarity and color. Average battery life is 2.5 hours, and there are two storage options: 128 and 256 gigabytes.
“In the past year we have made so many improvements and optimizations to the technical stack to the effective resolution, latency, mixed reality and hand tracking software, it is actually better in Quest 3S today at $299 than it was in Quest 3 when we launched it a year ago,” said Meta Chief Executive Mark Zuckerberg (pictured). “And, of course, all of these software improvements have flowed through to the Quest 3. The bottom line here is Quest 3 is the best family of mixed reality devices out there, period.”
Meta’s social media apps, including Facebook and Instagram, are being updated for mixed reality so that users will soon be able to interact with them more freely using headset hand gestures and controllers.
Zuckerberg also said the company is working with Microsoft to bring Windows 11 desktop and laptop PCs into mixed reality. Soon it will be possible to pair any Windows 11 machine with a Quest headset, making the headset a natural extension of a PC.
“This is the path to building a general computing platform,” Zuckerberg said. “It’s not just games, although it’s really good at that. You’ll also be able to use it for apps and watching videos and doing all the things that you do on a general-purpose computer. Quest is the full package.”
New features for Ray-Ban Meta glasses
The company’s Ray-Ban Meta smart glasses are getting AI-powered upgrades that will make them a better companion for users.
With improvements to Meta AI, users will no longer need to invoke the assistant with “Hey Meta” when asking follow-up questions; they can simply keep asking. This will allow for longer and more natural conversations.
Meta is also adding the ability for the glasses to remember things, such as where a car is parked, or to set a reminder to text or call someone in a few hours to arrange dinner plans. The glasses can also be asked to call or text a number visible on a billboard. Users can additionally ask Meta AI to record and send voice messages via WhatsApp or Messenger, making it easier to get things done when a phone isn’t within reach or hands are full.
Soon, the glasses will also be able to do real-time language translation. When a user is talking with someone speaking Spanish, French or Italian, the glasses will translate what that person says into English through the open-ear speakers – and vice versa: a wearer listening to someone speaking English will hear a translation in their own language. Meta said it plans to add more languages in the future.
Meta AI updates and Llama 3.2 multimodal AI
Meta AI is the company’s assistant that uses generative AI technology to answer questions and generate images, among other useful tasks. At Connect, Meta said users can now use their voice to talk to Meta AI on Messenger, Facebook, WhatsApp and Instagram DM.
The assistant will respond out loud, providing answers to questions and explaining anything the user wants to know. As the feature continues to roll out, users will be able to choose among different voices, including the AI voices of celebrities such as Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell.
Users can now share photos in chats with Meta AI, and it can understand what it’s looking at and answer questions about what it sees. For example, a user could share a photo of a birthday cake and ask for instructions on how to make it, and the assistant would provide a recipe and a step-by-step baking guide.
Continuing the company’s open-source AI trajectory, Meta also announced the release of its latest AI large language model, Llama 3.2, with multimodal capabilities. It comes in two large variants, with 11 billion and 90 billion parameters, that the company says deliver performance competitive with leading closed-source models and can be used as drop-in replacements for its text-only Llama 3.1 8B and 70B models.
The company said the new models bridge the gap between vision and language, extracting details from images and understanding what is visible in a scene. They can then craft sentences to caption images or tell stories based on what they see.
Meta also released two lightweight, text-only models, with 1 billion and 3 billion parameters, designed to fit onto select edge and mobile devices and built to support Arm, MediaTek and Qualcomm processors from day one. At those sizes, they can run locally on devices such as smartphones, with a context length of 128,000 tokens, for use cases such as summarization, instruction following and function calling.
“Now that Llama is at the frontier in terms of capabilities, I think we’ve reached an inflection point and it’s starting to become something of an industry standard — sort of like the Linux of AI,” Zuckerberg said. “We’ve seen closed-source labs try to react by slashing their prices. Look, I think the trend and trajectory is pretty clear. I think open-source is going to be the most cost-effective, customizable, trustworthy and performant option for developers to use.”
Orion: Meta’s first true AR glasses
Meta announced five years ago that it planned to produce actual augmented reality glasses, then known as Project Nazare, which would involve see-through lenses that would project holograms onto a user’s vision. Today, Meta unveiled Orion, a prototype of those glasses that represents the culmination of that project.
Augmented reality differs from mixed reality in that it doesn’t rely on cameras and screens, instead letting a user look directly at the world through a fully transparent lens. The Orion glasses are exactly that: glasses. They look like extremely thick eyeglasses and appear to have a fairly wide field of view. Zuckerberg said they also have a bright display capable of rendering holograms across varying light levels.
According to Meta, Orion is a feat of miniaturization: its components had to be packed into a very small form factor while remaining light enough to sit comfortably on a person’s face. The glasses let users view the world while providing access to an AI assistant that can see what they see, display holograms and respond to gestures, and they can also be used for hands-free calls.
The glasses also work with Meta’s neural wristband, which detects the neural signals the brain sends to the hand.
“Voice is great, but sometimes you’re in public and you don’t want to speak to your computer out loud,” said Zuckerberg. “I think that you need a device that just allows you to send a signal from your brain to the device.”
Zuckerberg said Meta is not yet ready to ship Orion but believes the company has gotten most of the way there. The device will serve as a developer kit, used mostly internally, to prepare the hardware to become the first consumer AR holographic glasses. He added that the company also intends to work with a select number of partners to have a diversity of content ready for the AR glasses at launch.
Images: Meta