Meta’s Ray-Ban Glasses Added AI That Can See What You’re Seeing

“Hey, Meta. Take a look at this and tell me which of these teas is caffeine-free.”

I spoke these words while wearing a pair of Meta Ray-Bans at the tech giant’s New York headquarters, staring at a table set with four tea packets whose caffeine labels had been blacked out with a Magic Marker. A little click sounded in my ears, followed by Meta’s AI voice telling me that the chamomile tea was likely caffeine-free. It was reading the labels and making judgments using generative AI.

I was demoing a feature that’s rolling out to Meta’s second-generation Ray-Ban glasses starting today, one that Meta CEO Mark Zuckerberg had already promised in September when the new glasses were announced. The AI features, which use the glasses’ onboard cameras to capture images and interpret them with generative AI, were originally supposed to launch in 2024. Meta has introduced them a lot faster than I expected, although the early-access mode is still very much a beta. Along with a new update that adds Bing-powered search to the Ray-Bans, boosting the glasses’ already available voice-enabled capabilities, Meta’s glasses are quickly gaining a number of new abilities.

I was pretty wowed by the demo because I had never seen anything quite like it as a whole. I had seen pieces of it before: Google Lens and other on-phone tools already use cameras and AI together, and Google Glass had some translation tools a decade ago. That said, the easy-access way Meta’s glasses invoke AI to identify things in the world around me feels pretty advanced. I’m excited to try it a lot more.

A photo of grilling, with captions asking an AI assistant for cooking help

I didn’t try Meta’s glasses while cooking — yet.

Meta

It could also have uses for assistive purposes. I wore a test pair of Meta glasses that didn’t have my prescription, and I asked the AI what I was looking at. Answers can vary in detail and accuracy, but they can give a heads-up. It knew I was showing it my own glasses, which it said had bluish-tinted lenses (they have blue-black frames, so pretty close).

Sometimes it can hallucinate. I asked the glasses about fruit in a bowl in front of me, and it said there were oranges, bananas, dragonfruit, apples and pomegranates. It was correct, except for the pomegranates. (There were none of those.) I was asked to have it make a caption for a big stuffed panda in front of a window. It made some cute ones, but one was about someone being lonely and looking at a phone, which didn’t match.

I looked at a menu in Spanish and asked the glasses to show me spicy dishes. It read off some dishes and translated some key ingredients for me, but when I followed up with a question about dishes with meat, it read everything back to me in Spanish.

The possibilities here are wild and fascinating, and possibly incredibly useful. Meta admits that this early launch will be about discovering bugs and helping evolve how the on-glasses AI works. I found there were too many “Hey, Meta, look at this” moments, although that process might change over time. Once the glasses are in the middle of analyzing an image, direct follow-up questions can work without saying “look at this” again, but I’m sure results will vary.
