Tuesday, May 8, 2018

Google Lens Combines AI and AR in a Phone



During Google’s I/O developer conference keynote, artificial intelligence was once again the defining theme and Google’s guiding light for the future. AI is now interwoven into everything Google does, and nowhere are the benefits of CEO Sundar Pichai’s AI-first approach more apparent than in Google Lens. The Lens platform combines the company’s most cutting-edge advances in computer vision and natural language processing with the power of Google Search. In doing so, Google makes a compelling argument that its way of developing AI will generate more immediately useful software than the approaches of its biggest rivals, such as Amazon and Facebook. It also gives AI naysayers a concrete example of what the technology can do for consumers, rather than just for under-the-hood systems like data centers and advertising networks, or for more limited hardware use cases like smart speakers.

Lens is effectively Google’s engine for seeing, understanding, and augmenting the real world. It lives in the camera viewfinder of Google-powered software like Assistant and, following an announcement at this year’s I/O, within the native camera app of top-tier Android smartphones. For Google, anything a human can recognize is fair game for Lens: objects and environments, people and animals (even photos of animals), and any scrap of text as it appears on street signs, screens, restaurant menus, and books. From there, Google draws on the expansive knowledge base of Search to surface actionable information, such as purchase links for products and Wikipedia descriptions of famous landmarks. The goal is to give users context about their surroundings and every object within them.
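Lens itself has no public API, but Google exposes much the same recognize-then-look-up pipeline through its Cloud Vision API. Here is a minimal sketch of that loop, assuming the google-cloud-vision Python client (v2 or later) and a configured service-account credential; the image path is a placeholder:

```python
# A rough sketch of a Lens-style recognize-then-search pipeline using
# Google's Cloud Vision API (Lens itself is not a public API).
# Assumes google-cloud-vision is installed and the
# GOOGLE_APPLICATION_CREDENTIALS env var points at a service-account key.
from google.cloud import vision


def annotate(path: str) -> None:
    """Run label, text, and web detection on one image, Lens-style."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Step 1: "see" the scene -- objects and any scraps of text.
    labels = client.label_detection(image=image).label_annotations
    texts = client.text_detection(image=image).text_annotations

    # Step 2: "understand" it against Google's knowledge of the web,
    # roughly how Lens links a landmark to a Wikipedia description.
    web = client.web_detection(image=image).web_detection

    for label in labels[:5]:
        print(f"object: {label.description} ({label.score:.0%})")
    if texts:
        # The first text annotation holds the full recognized text block.
        print(f"text:   {texts[0].description.strip()}")
    for entity in web.web_entities[:3]:
        print(f"entity: {entity.description}")


if __name__ == "__main__":
    annotate("photo.jpg")  # hypothetical local image
```

The split mirrors the two halves of the paragraph above: detection answers "what is in the frame," and web detection stands in for the Search lookup that turns recognition into actionable information.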

See the source link below for a video of the live demo.


Credits:
https://www.theverge.com/2018/5/8/17333154/google-lens-ai-ar-live-demo-hands-on-io-2018