
As AR heads to Google search, Lens learns to translate, add tips, and more



Computer vision puts the camera to use when you're at a loss for words, but Google Lens will soon do more than reverse-search for similar items or surface details about what's in a photo. During the I/O 2019 keynote on Tuesday, May 7, Google demonstrated new search capabilities powered by the camera and expanded Lens skills for calculating tips, translating text, and more.

During the keynote, Aparna Chennapragada, Google's vice president of camera and augmented reality products, demonstrated how Google Search can use AR to bring 3D models into the room with you without leaving the results page. A new "View in 3D" button pops up in the results whenever 3D content is available.

Besides letting users examine a 3D object from every angle, the update also brings the model into AR, overlaying it on the live camera feed so the object appears in the room in front of you. Chennapragada said the tool will be helpful for tasks such as research and shopping.

The AR search feature is expected to arrive later in May. Partners including NASA, New Balance, Samsung, Target, Visible Body, Volvo, and Wayfair will be among the first to have their 3D content appear in search results.

As search becomes more camera-heavy, Google Lens is moving beyond simply searching with the camera. At a restaurant, Lens will soon be able to scan the menu, highlight the most popular dishes, and bring up photos and reviews from other diners via Google Maps. The camera first has to differentiate between the menu items before matching the text with relevant results online. At the end of the meal, Lens will calculate the tip or split the bill with friends when you point the camera at the receipt, as in the sketch below.
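
The receipt step is ultimately simple arithmetic. Here is a minimal sketch in Python, with an illustrative function name and default tip rate that are not Google's (Lens would supply the total via text recognition on the receipt):

# Hypothetical sketch of the arithmetic behind the tip and bill-split
# feature; in Lens, bill_total would come from OCR on the receipt.
def tip_and_split(bill_total: float, tip_percent: float = 18.0,
                  diners: int = 1) -> tuple[float, float]:
    """Return (tip amount, per-person share including the tip)."""
    tip = round(bill_total * tip_percent / 100, 2)
    per_person = round((bill_total + tip) / diners, 2)
    return tip, per_person

tip, share = tip_and_split(84.50, tip_percent=20, diners=4)
print(f"Tip: ${tip:.2f}, each diner pays ${share:.2f}")
# Tip: $16.90, each diner pays $25.35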

Google Lens is also gaining the ability to translate text and read it aloud. While earlier versions could use smart text selection to highlight text to copy or translate, Lens will soon be able to read text out loud or overlay the translation on the original image in more than 100 languages. Alternatively, Lens can use text-to-speech in the original language, a feature that could help users with vision or reading difficulties.

The text-to-speech feature is launching first inside Google Go, a lightweight app designed for first-time smartphone users. Chennapragada says the team managed to fit all of those languages into just over 100KB of space, allowing the app to run on budget phones.
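
Google hasn't detailed the on-device pipeline behind this, but the translate-then-speak flow can be approximated with its public Cloud Translation and Cloud Text-to-Speech client libraries. A minimal sketch follows, assuming the google-cloud-translate and google-cloud-texttospeech packages are installed and credentials are configured; it illustrates the flow, not how Lens or Google Go actually implement it:

# Illustration only: Lens and Google Go run this kind of pipeline
# on-device, but the same translate-then-speak flow can be approximated
# with Google's public Cloud APIs. Assumes application-default
# credentials are already set up.
from google.cloud import texttospeech
from google.cloud import translate_v2 as translate

def translate_and_speak(text: str, target: str = "en") -> None:
    # Step 1: translate the recognized text (Lens gets `text` from OCR).
    translated = translate.Client().translate(
        text, target_language=target
    )["translatedText"]

    # Step 2: synthesize speech for the translated string.
    tts = texttospeech.TextToSpeechClient()
    audio = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=translated),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open("translation.mp3", "wb") as out:
        out.write(audio.audio_content)

translate_and_speak("¿Dónde está la estación de tren?")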

“Seeing is often understanding,” Chennapragada said. “With computer vision and AR, the camera is turning into a powerful visual tool to understand the world around you.”

Lens will also gain a handful of new features through partnerships. Readers of Bon Appétit, for example, can scan a recipe page to see a video of the dish being prepared. In June, Lens will uncover hidden details about paintings at San Francisco's de Young Museum.

The updates join a growing list of features for Google Lens like the ability to look up the artist behind a piece of artwork, shop for similar styles, or find the name of that flower you spotted. Google Lens, which has now been used more than a billion times, is available inside Google Assistant, Photos, and directly in the native camera app on a number of Android devices.
