GOOGLE BRINGS AR AND LENS CLOSER TO THE FUTURE OF SEARCH

Posted by Alena o.
Dec 7, 2020

Google also announced a range of new updates to Google Lens, its image-recognition software, which is accessible as a standalone interface and integrated into the camera on Google Pixel phones. In taking AR and Lens forward to the future of search, Google is making AR even more useful on everyday devices. Google Lens is built into Google Search and works like this: you aim your camera at a wall of text, and Lens can automatically begin reading the text out loud.

Blog Contents

  • What is Google AR, and how does it work?
  • The Real Challenge
  • Would it work on low-end phones as well?
  • What’s in the future?

What is Google AR, and how does it work?

When searching for “tiger,” you see a clickable link that launches an animated 3D model, complete with roaring sounds. You can then place it in the room in AR: a life-size tiger, right in front of you. You can also drop a scale model of NASA’s Mars Curiosity rover into your space, or an anatomical model of a human arm bone and its musculature.

This year, Google is adding AR to Search, and this is how it works: 

In Search, users with compatible Android and iOS devices can see 3D object links, which pull up 3D models that can then be dropped into the real world in AR at the proper scale. Unlike Apple’s USDZ format used by ARKit in iOS 12, Google Search will incorporate 3D files using the glTF format. According to Google, developers need to add only a few lines of code to make 3D assets appear in Google Search.

Anyone with 3D assets, and it turns out that many retail partners do, people like Wayfair or Lowe’s, has to add only about three lines of code. Beyond that, there’s nothing the content providers have to do.
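Google hasn’t shown those three lines in this announcement, but on Android the documented way to hand a glTF/GLB asset to its AR viewer is a simple intent. Here’s a minimal Kotlin sketch, with a hypothetical model URL standing in for a real asset:

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Hand a glTF/GLB model to Google's Scene Viewer so it renders in AR.
// The model URL is a hypothetical placeholder, not a real Google asset.
fun launchSceneViewer(activity: Activity) {
    val modelUri = Uri.parse("https://arvr.google.com/scene-viewer/1.0")
        .buildUpon()
        .appendQueryParameter("file", "https://example.com/models/tiger.glb")
        .appendQueryParameter("mode", "ar_preferred") // plain 3D fallback without ARCore
        .build()

    val intent = Intent(Intent.ACTION_VIEW).apply {
        data = modelUri
        // Scene Viewer is hosted by the Google app on Android.
        setPackage("com.google.android.googlequicksearchbox")
    }
    activity.startActivity(intent)
}
```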

To integrate 3D assets into Google Search, Google is already collaborating with NASA, New Balance, Samsung, Target, Visible Body, Volvo, and Wayfair. The AR effects launch through a new Android feature called Scene Viewer, so for users it feels like a natural extension of Google Search. If AR-enabled Search is eventually applied to a pair of AR glasses, it could mean conjuring objects into the real world seamlessly, without launching any applications at all.

Augmented reality experts who have had a first-hand look at the newest capabilities of Google Lens say it is beginning to feel like a means of understanding reality as much as shaping it. Lens can now overlay translations from other languages directly onto signs or objects, pinning the translated text in space as if it were really there. It’s an extension of what was already in Google Translate, but now, beginning with restaurant menus, Google Lens can evaluate the meaning of whole passages of text.
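Lens’s own translation pipeline is proprietary, but Google’s public ML Kit library gives a feel for the building blocks. A rough Kotlin sketch of just the translation step, assuming French-to-English and a string already extracted by text recognition:

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Translate one line of a menu on-device; ML Kit downloads the
// language model the first time it is needed.
fun translateMenuLine(line: String, onResult: (String) -> Unit) {
    val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.FRENCH)
            .setTargetLanguage(TranslateLanguage.ENGLISH)
            .build()
    )
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(line)
                .addOnSuccessListener { translated -> onResult(translated) }
        }
}
```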

New Shopping, Dining, Translate, and Text filters, alongside the do-it-all Auto mode, let Lens know what to look for in context.

For example, the Shopping filter helps recognize a plant on a table and find places to buy it, rather than simply identifying what sort of plant it is.

The Real Challenge: if you hold up a magic lens that sees everything, how does Lens know what you need?

Google Lens also has some quirky new AR tricks. A recipe page from Bon Appetit magazine suddenly turns and animates on screen as cooking instructions are revealed. Hold up a phone running Google Lens to a real poster of Paris, and the poster animates on the display, clouds drifting across it. Google is exploring test cases where it might collaborate with partners on animated images like these. Where and how this will surface remains to be seen.

For now, Google is performing these tricks with 2D images, not 3D, but pulling these improvements off without advanced marker codes seems like a glimpse into what a world full of augmented reality could be: signs that come alive at a glance. 
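Lens’s tracking stack is Google’s own, but ARCore’s public Augmented Images API hints at how a flat poster can be recognized and tracked without marker codes. A minimal Kotlin sketch, assuming the poster artwork is already available as a bitmap:

```kotlin
import android.graphics.Bitmap
import com.google.ar.core.AugmentedImageDatabase
import com.google.ar.core.Config
import com.google.ar.core.Session

// Register a 2D poster so ARCore can detect and track it in the camera
// feed; detected images can then anchor animated AR content.
fun configurePosterTracking(session: Session, posterBitmap: Bitmap) {
    val imageDatabase = AugmentedImageDatabase(session).apply {
        // The name is arbitrary; it identifies the image at detection time.
        addImage("paris_poster", posterBitmap)
    }
    val config = Config(session).apply {
        augmentedImageDatabase = imageDatabase
    }
    session.configure(config)
}
```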

Would it work on low-end phones as well?

Perhaps the best news is that Google Lens is coming to low-end phones running Android Go software. Instant translation and reading assistance work on phones that aren’t powerful enough for ARCore, by relying on cloud services instead. You take a picture of a sign, and the phone reads back what it sees, highlighting each word as it goes. A tap, and the text can be translated into another language.
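The Android Go version of Lens does that work in the cloud, and its pipeline isn’t public; as a rough on-device illustration, here is how a recognize-and-read-aloud step could be wired together with ML Kit’s text recognizer and Android’s built-in TextToSpeech:

```kotlin
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Recognize the text in a photo of a sign and read it back aloud.
// Assumes `tts` was initialized earlier (TextToSpeech needs an init callback).
fun readSignAloud(photo: Bitmap, tts: TextToSpeech) {
    val image = InputImage.fromBitmap(photo, 0) // 0 = no rotation
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { result ->
            // Lens highlights each word as it speaks; here we just speak it all.
            tts.speak(result.text, TextToSpeech.QUEUE_FLUSH, null, "sign-utterance")
        }
}
```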

What’s in the future?

Google acknowledges that it is in a “deep R&D” process on emerging technology beyond mobile, but for now, its aim is to solve for phone users first. There are, however, hints of other types of hardware on the horizon.

Think of voice search: that functionality translated very well to the Assistant. Google says it is taking the same approach here, asking which capabilities are genuinely helpful on the smartphone and would carry over well to future form factors. This year, Lens is shifting to become an “AR browser.” Think of these capabilities as two or three more Lego bricks toward those other form factors.

With no new VR or AR hardware appearing at Google I/O this year, Google’s emphasis on services and utilities could mean that Lens develops into a reality browser for other platforms.

For now, it’s a mystery, but you don’t really have to squint to see what these things have in common. Google has a history of building platforms and services that are widely available, trying to deliver helpfulness and utility as broadly as possible, and it is aiming for a similar strategy here.

In the meantime, all these Google AR applications, like Google’s new AR navigation features in Maps, feel like ongoing experiments. Even the I/O app incorporated AR this year to direct attendees to sessions. That functionality could be a glimpse of where AR guidance evolves next. Or maybe some of these features won’t succeed. It’s Darwinian, perhaps. And maybe that’s what’s needed to find out how AR, on phones and beyond, will flourish.
