
The most important AI projects from Google you can see today

Google has been a pioneer in the field of artificial intelligence and is continuously researching new applications for AI and tools that, in its own words, "ensure that everyone can access AI". In this article, we highlight the most important artificial intelligence projects from Mountain View.

Google Assistant

We start with the Google AI that's probably closest to you right now: the ubiquitous and seemingly all-knowing Google Assistant. Google Assistant is built on natural language processing, a technique in which AI recognizes speech and maps sounds to words and meaning. When you speak a command, Assistant records your speech and sends it to Google's servers, which analyze the audio, pick out key words and phrases to understand your request, and send back the result (fast, right?). Assistant also learns from your behavior, so it will first check answers related to your contacts, daily schedule, search history, local area, and so on.
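To get a feel for the "pick out key words and phrases" step, here's a deliberately tiny sketch of keyword-based intent matching. Everything here (the intent names, the keyword lists) is invented for illustration; Google's real pipeline uses far more sophisticated machine-learned models.

```python
# Toy intent matcher: score each intent by how many of its keywords
# appear in the spoken request, and pick the highest-scoring one.
# Intent names and keywords are made up for this example.

INTENTS = {
    "weather": {"weather", "rain", "temperature", "forecast"},
    "alarm": {"alarm", "wake", "remind"},
    "calendar": {"meeting", "schedule", "appointment"},
}

def match_intent(utterance: str) -> str:
    """Return the intent whose keywords overlap most with the utterance."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)  # count of shared words
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(match_intent("What's the weather forecast for tomorrow?"))  # weather
```

A real assistant would combine something like this with context (your contacts, calendar, location) to disambiguate requests, which is why Assistant's answers feel personalized.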

Google Assistant isn't just in smartphones; it's also in many smart home devices, such as the Mountain View company's own Google Home, Home Mini, Home Max, and Home Hub, and it's a popular choice for many third-party devices as well. While Google Assistant is increasingly something we take for granted on our devices, that doesn't mean its development has stalled. See our guide to the most useful Google Assistant commands to make the most of your AI helper.

Google Lens

If Google Assistant is the AI with ears, then Google Lens has the eyes. This program saw its debut on the 2017 Pixel phones, but reached other Android smartphones and finally iPhones in 2018. Google Lens looks through your smartphone camera and uses neural networks to detect and identify objects, text and landmarks.

The neural networks are trained on huge amounts of data, which helps them sort out visual elements and put them in context. For example, you can point the camera at a label containing the name and password of a Wi-Fi network, and your device will then automatically connect to that network. You can also scan a famous landmark for information on its meaning and history, or look up a restaurant menu item to find out its ingredients. Lens is also integrated into the Google Photos and Google Assistant apps.
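The Wi-Fi example above really has two halves: a neural network reads the text off the label (OCR), and then a simpler step extracts the network name and password from that text. Here's a hedged sketch of that second step; the label format and field names are assumptions for illustration, not Lens's actual logic.

```python
import re

def parse_wifi_label(ocr_text: str):
    """Extract (ssid, password) from OCR'd label text, or None if absent.

    Assumes a hypothetical label layout with "Network:" and "Password:"
    fields -- real labels vary widely, which is part of why a learned
    model is needed for the OCR step itself.
    """
    ssid = re.search(r"Network:\s*(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"Password:\s*(\S+)", ocr_text, re.IGNORECASE)
    if ssid and password:
        return ssid.group(1), password.group(1)
    return None

label = "Guest Wi-Fi\nNetwork: CoffeeShop-Guest\nPassword: espresso123"
print(parse_wifi_label(label))  # ('CoffeeShop-Guest', 'espresso123')
```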

Want Google Lens on your device? Get it on the Play Store.

DeepMind

DeepMind is a UK-based AI company that Google bought back in 2014, and it now works under the aegis of Mountain View to produce machine-learning solutions for various applications. DeepMind started out training AI to play video games, and that's still a big part of what it does. Famously, DeepMind's AlphaGo agent beat human Go world champion Ke Jie in a landmark match, and more recently its AlphaStar agent triumphed over professional human players at StarCraft II.

Success at games helps develop more powerful AI that can then be applied to practical tasks in the real world. DeepMind's AI agents also assist in medical research, disease diagnosis, and the organization of patient records.

With regard to the latter, DeepMind has taken some heat over the protection of patient data during its work with the UK's National Health Service. In response to that criticism, the company has emphasized its commitment to ethical and socially beneficial uses of AI, founding the DeepMind Ethics & Society group to help steer the use of AI in a socially responsible direction.

DeepMind has also contributed subtle conveniences to your smartphone, such as the Google Play Store's app recommendations and the Adaptive Battery and Adaptive Brightness features of Android 9.0 Pie.

Google Duplex

Google Duplex is an AI tool that makes phone calls for you, using natural human speech patterns to do things like book a table at a restaurant or an appointment at a salon, order items, and schedule meetings. On the business side, Duplex is also meant to handle customer support interactions.

Initially demoed at Google I/O 2018, Duplex is enjoying a limited rollout in the United States as of the time of writing, although different legal situations regarding consent to record calls prevent it from being available in Indiana, Kentucky, Louisiana, Minnesota, Montana, Nebraska and Texas. In the other states, Duplex is integrated as a function of Google Assistant.

As an AI effectively posing as human, Duplex has raised a few thorny ethical issues, addressed in part by having the Assistant identify itself at the beginning of the call so that the person on the other end isn't deceived. But just as Google Assistant records your voice when you give it a command, Duplex also relies on voice recording, this time of the person on the other end of the line.

Do you use any of Google's AI tools regularly? What would you like to see from the big G in the future? Let us know in the comments below.
