Google is integrating AI directly into the Android operating system, changing how people interact with their devices.
Circle to Search, built into the Android user experience, lets users search anything they see on their phone with a simple gesture. It now includes full-screen translation and is available on more Pixel and Samsung devices. It can also help students with homework, offering step-by-step instructions for solving physics and math problems. Circle to Search is available on over 100 million devices today, with plans to double that by the end of the year.
Gemini, Google's generative AI assistant integrated into Android, helps users be more creative and productive. It understands the context of what's on the user's screen and which app is in use, and users will soon be able to bring up Gemini as an overlay on top of the app they're in. This update will roll out to hundreds of millions of devices over the coming months.
Gemini Nano, Android's built-in, on-device foundation model, will soon gain multimodal capabilities, meaning it will understand not just text input but also more contextual information such as sights, sounds, and spoken language. This will arrive first on Pixel later this year.
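The announcement doesn't detail the developer interface for Gemini Nano's on-device multimodality, so as a rough illustration of what a multimodal (image plus text) prompt looks like in Android code, here is a minimal Kotlin sketch using the Google AI client SDK, which calls the cloud-hosted Gemini models rather than the on-device Nano model; the model name and API-key wiring are assumptions for illustration only.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Sketch only: this uses cloud-hosted Gemini via the Google AI client SDK,
// not the on-device Gemini Nano API, which is not described in the source.
suspend fun describeImage(screenshot: Bitmap): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",      // assumed model name
        apiKey = BuildConfig.GEMINI_API_KEY  // hypothetical build-time key
    )

    // A multimodal prompt mixes an image and a text instruction in one request.
    val prompt = content {
        image(screenshot)
        text("Describe what is happening in this image in one sentence.")
    }

    return model.generateContent(prompt).text
}
```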
TalkBack, Android's screen reader for people who are blind or have low vision, will soon use Gemini Nano's multimodal capabilities to provide richer, clearer descriptions of what's happening in an image.
Google is also testing a feature that uses Gemini Nano to provide real-time alerts during a phone call if it detects conversation patterns commonly associated with scams, such as a "bank representative" asking for an urgent transfer of funds. Because Gemini Nano runs on-device, the conversation is processed locally and stays private.
Google continues to build AI into every part of the smartphone experience with Pixel, Samsung, and other partners. Developers can build with the latest AI models and tools, such as Gemini Nano and Gemini in Android Studio. More news on Android 15 and the broader ecosystem is expected soon.
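For developers who want a sense of what "building with Gemini" can look like in an Android app today, here is a short Kotlin sketch using the Google AI client SDK's streaming API; the model name and API-key handling are placeholders, and Gemini Nano's on-device developer surface is not covered by this example.

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Minimal sketch, assuming the Google AI client SDK for Android (Kotlin);
// the model name and API-key plumbing below are illustrative placeholders.
suspend fun draftReply(userMessage: String) {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",      // assumed model name
        apiKey = BuildConfig.GEMINI_API_KEY  // hypothetical build-time key
    )

    // Stream chunks as they arrive instead of waiting for the full response,
    // which keeps the UI responsive for longer generations.
    model.generateContentStream("Draft a friendly reply to: $userMessage")
        .collect { chunk -> print(chunk.text) }
}
```

Streaming is a design choice rather than a requirement; a single blocking call to generateContent works equally well for short prompts.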