Google will soon be applying a layer of artificial intelligence to all of its products.
Google Lens was among the new features introduced at the long-awaited Google I/O developer conference.
It is a set of vision-based computing capabilities that can understand what the camera is looking at and help users take action based on that information.
The new feature will first appear in Google Assistant and Google Photos and will come to other products later.
How does Google Lens work?
Essentially, users can perform searches through their phone's camera. You can get information from Google Assistant simply by pointing your phone at an object you want to learn more about.
For example, point the phone at a Wi-Fi router's network label and it will ask whether you want to connect to that network. Take a photo of a flower and Google will identify what kind of flower it is. Point it at a restaurant's storefront and it will pull up the relevant data available, such as customer ratings and other business information.
Once Google knows where you are and what you are looking at, it connects those details with its Knowledge Graph and surfaces the right information in a meaningful way.
The overall picture from the Google I/O developer conference is that this marks the beginning of a search engine that understands images and video. Google's reputation today is built on its understanding of text and web pages, so the fact that new algorithms can now understand images and videos has profound implications for Google's core mission.
Google said it has been building the technology behind Google Lens for many years, and that the new initiative is a grown-up version of Google Goggles, the visual search app introduced in 2010.
Google Lens is set to launch later this year.