Google has announced the rollout of Gemini Live's camera and screenshare functions, a notable upgrade to the chatbot's conversational abilities. The features let Gemini answer questions about real-time visual content, such as objects captured by the device's camera. The update is currently available on Pixel 9 series smartphones and Samsung Galaxy S25 models, with a broader Android rollout expected soon. Taking full advantage of the features, however, requires a subscription to the paid Gemini Advanced service.

Once Gemini Live is activated on a compatible device, users can start the live video feature with a single button press and ask the chatbot questions about whatever the camera is capturing at that moment. In a demonstration from Google's April Pixel Drop video, for instance, a user points the camera at an aquarium tank and asks about the specific types of fish inside. The capability is designed to make obtaining information more intuitive and visually oriented.

Alongside the live camera feature, Gemini Live also adds a new screenshare button. Users can share what is on their screen, such as a shopping website, and ask the AI assistant to compare products or offer personalized styling recommendations. The aim is to make the assistant more dynamic and helpful for tasks that involve visual elements or decision-making.

The rollout of the features began last month, as confirmed by Google spokesperson Alex Joseph. Reports on platforms like Reddit indicate that some users, including owners of Xiaomi smartphones, have already accessed the Gemini Live functions on their devices. The combination of video and screensharing capabilities in Gemini Live was first shown at Google's I/O developer conference in May as part of the initiative known as "Project Astra," which aims to enrich user interactions through AI-driven communication and information retrieval.