3 things we learned from this interview with Google DeepMind's CEO, and why Astra could be the key to great AI smart glasses

Eric Hal Schwartz | 21 April 2025

(Image credit: Google DeepMind)

Google has been hyping up its Project Astra as the next generation of AI for months. That set some high expectations when 60 Minutes sent Scott Pelley to experiment with Project Astra tools provided by Google DeepMind.

He was impressed with how articulate, observant, and insightful the AI turned out to be throughout his testing, particularly when the AI not only recognized Edward Hopper's moody painting "Automat," but also read into the woman's body language and spun a fictional vignette about her life. All this through a pair of smart glasses that barely seemed different from a pair without AI built in. The glasses serve as a delivery system for an AI that sees, hears, and can understand the world around you. That could set the stage for a new smart wearables race, but that's just one of many things we learned during the segment about Project Astra and Google's plans for AI.

(Video: Google DeepMind CEO demonstrates world-building AI model Genie 2 - YouTube)

Astra's understanding

Of course, we have to begin with what we now know about Astra. Firstly, the AI assistant continuously processes video and audio from connected cameras and microphones in its surroundings. The AI doesn't just identify objects or transcribe text; it also purports to spot and explain emotional tone, extrapolate context, and carry on a conversation about the topic, even when you pause for thought or talk to someone else.

During the demo, Pelley asked Astra what he was looking at. It instantly identified Coal Drops Yard, a retail complex in King's Cross, and offered background information without missing a beat. When shown a painting, it didn't stop at "that's a woman in a cafe." It said she looked "contemplative." And when nudged, it gave her a name and a backstory.

According to DeepMind CEO Demis Hassabis, the assistant's real-world understanding is advancing even faster than he expected, noting it is better at making sense of the physical world than the engineers thought it would be at this stage.

Veo 2 views

But Astra isn't just passively watching. DeepMind has also been busy teaching AI how to generate photorealistic imagery and video. The engineers described how, two years ago, their video models struggled to understand that legs are attached to dogs. Now, they showcased how Veo 2 can conjure a flying dog with flapping wings.
The implications for visual storytelling, filmmaking, advertising, and, yes, augmented reality glasses are profound. Imagine your glasses not only telling you what building you're looking at, but also visualizing what it looked like a century ago, rendered in high definition and seamlessly integrated into the present view.

And then there's Genie 2, DeepMind's new world-modeling system. If Astra understands the world as it exists, Genie builds worlds that don't. It takes a still image and turns it into an explorable environment visible through the smart glasses. Walk forward, and Genie invents what lies around the corner. Turn left, and it populates the unseen walls. During the demo, a waterfall photo turned into a playable video game level, dynamically generated as Pelley explored.

DeepMind is already using Genie-generated spaces to train other AIs, which learn to navigate a world made up by another AI, in real time. One system dreams, another learns. That kind of simulation loop has huge implications for robotics. In the real world, robots have to fumble their way through trial and error. But in a synthetic world, they can train endlessly without breaking furniture or risking lawsuits.

Google is trying to get Astra-style perception into your hands (or onto your face) as fast as possible, even if it means giving it away. Just weeks after launching Gemini's screen-sharing and live camera features as a premium perk, Google reversed course and made them free for all Android users. That wasn't a random act of generosity. By getting as many people as possible to point their cameras at the world and chat with Gemini, Google gets a flood of training data and real-time user feedback.

There is already a small group of people wearing Astra-powered glasses out in the world. The hardware reportedly uses micro-LED displays to project captions into one eye and delivers audio through tiny directional speakers near the temples. Compared to the awkward sci-fi visor of the original Google Glass, this feels like a step forward.

Sure, there are issues with privacy, latency, and battery life, and the not-so-small question of whether society is ready for people walking around with semi-omniscient glasses without mocking them mercilessly. Whether or not Google can make that magic feel ethical, non-invasive, and stylish enough to go mainstream is still up in the air. But the sense that 2025 is the year smart glasses go mainstream seems more accurate than ever.

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and every other synthetic media tool.
His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.