I really enjoyed this interview with Sundar Pichai, conducted by John Collison of Stripe and Elad Gil.
Here are the five most interesting things I learned.
1. Search will still exist in the future, but much of it will be agent-based
Sundar was asked if agents will replace search. He said:
“If I fast forward, a lot of the pure information-seeking queries in search are running as an agent. They’re going to be doing tasks. There’s going to be a lot of threads running.”
Search will also change so that we come to see it as an agent manager.
“It’s evolving. Search is going to be an agent manager where you do a lot of things. I think in some ways I’m using Antigravity today, and you know, you have a lot of agents doing things, and I imagine Search is doing versions of those things, and you’re doing a lot of things.”
He said that in AI Mode, people already do thorough research, and that completing tedious tasks will soon become the norm. He also said that the form factor of devices will change.
2. Google uses Antigravity internally
Boy, do I love Google’s IDE and agent manager, Antigravity. I’ve built so many things with it, including my own RSS feed reader, a screenshot and annotation tool, workflows for publishing things I write in a Google Doc to my WordPress site, and a set of tools for completing agent tasks with GSC and GA4 data. Although I think Claude Cowork and Claude Code are incredible, I really prefer using Antigravity.
It turns out that Google uses Antigravity heavily internally, except they don’t call it Antigravity. They call it “Jet Ski.”
Sundar said that Google DeepMind and Google software engineers use it:
“I can see groups, and in particular I would say that GDM and some of the SWE groups are really changing their workflows. They use, we call it for some strange reason, we have a different name internally than externally for the same product, but internally it’s Jet Ski, which is Antigravity. You live by it, you live in an agent-manager world. You have workflows and you work in this new way.”
He also uses it himself.
“I would ask in Antigravity, in our internal version of Antigravity, ‘Hey, we started this thing. What did people think about it? Tell me the five worst things people are talking about?’ and that’s what I type. Now that brings it back. Has my life become easier? Yes. I used to have to spend a lot more time getting a feel for it. Now an AI agent is helping me on this journey.”
Also, just last week, the Google Search team started using Antigravity.
“Just last week we introduced it (Antigravity) to the search team. We’re constantly pushing this forward. In a large organization, I think change management is a difficult aspect of distributing this technology, which might be easy for a small company. They can pivot quickly.”
If you want to learn how to use Antigravity, I’ve created a complete guide that shows you how it works and how I not only program with it, but also create full agent workflows that I actually use in my daily work. It is available in the paid part of my community, The Search Bar. And next Thursday the Search Bar Pro crew is hosting an event where we will split into two teams, Team Claude Code and Team Antigravity, and see who can develop the better SEO tool.
I know it’s a bit of a hassle to bring something new into your workflows, but I truly believe that those who learn how to use Antigravity today will have a huge advantage as AI improves and things really take off.
3. Robotics is growing rapidly
Sundar admitted that Google had previously entered robotics too early. AI has become the missing ingredient for ideas that were developed 10 to 15 years ago. The Gemini Robotics models have achieved state-of-the-art results in spatial reasoning, and Google has partnered again with Boston Dynamics and Agile, as well as a few other companies.
The most interesting part for me was the discussion of Wing, Google’s drone delivery service.
“I think we’re building out Wing so that in a reasonable period of time, 40 million Americans will have access to Wing delivery service. I’m not talking years or anything like that.”
When asked if Google would do more to build hardware, Sundar said it was important to have first-party hardware for robotics and AI.
“I think we would keep a very open mind. My lesson from Waymo and on the AI side with TPUs, etc. is that I need to push the curve really well, especially in areas where it’s safety, regulations and everything. You want to experience the product feedback cycle firsthand. I think having first-party hardware will be very important in the end.”
4. Agentic OpenClaw-like systems are the future
There’s a reason OpenClaw (originally Clawdbot) went viral a few weeks ago. I still haven’t set up an OpenClaw system because I feel like I don’t know enough about security to set one up safely.
When Sundar was asked if something similar to OpenClaw would come from Google, he said he believes it is the future.
“I think you want to give users the ability to perform persistent, long-running tasks in a reliable and secure way. You have to think about things like identity, access, etc. But I think that’s the future. That’s the future of agents. And creating that for consumers is an exciting challenge that we’re tackling. It’s one of the things I’m excited about.
“I think consumer interfaces will actually have complete coding models underlying them, with the right systems, the right capabilities, and the ability to run things securely somewhere, on-premises or in the cloud. All these primitives come together.
“Today I feel like 1% of the world, maybe not 1%, 0.1% of the world lives in this future. They build things for themselves, but bringing them to mass adoption? I think it’s a very exciting frontier.”
As I write this, Google DeepMind just tweeted instructions on how to use its new local open model Gemma 4 with OpenClaw. A new way of communicating with our machines is beginning to unfold!
5. AI and AI agents will improve dramatically in 2027
Sundar was asked when he thought agent systems would be able to function completely without a human being involved. He said twice that 2027 would likely be a big year.
“I definitely expect ’27 to be a major turning point for certain things in some of these areas. Even the people making it, this is the workflow that they would produce it with. Maybe you would look at it the traditional way for a while, but you switch, a crossover. But I expect ’27 to be a big year with some of these changes happening pretty profoundly.”
The interview ended with Sundar talking about what he was most excited about. He did mention that building data centers in space was very exciting, but the last part was super interesting.
“I literally spent time yesterday with someone who was explaining some post-training improvements they’re making. When I listen to that, I’m like, ‘Oh, that’s really going to turn out to be a nice jump.’ That is the constant power of this moment. I don’t want to say any more about the second one, but we will publish it one day, I’m sure.”
It sounds to me like he’s talking about agent self-improvement.
We are currently learning how to let AI build and do things for us. I remember learning to code for the first time with ChatGPT as a partner. It would give me code to paste into VS Code. Then I would run it and feed the errors back into ChatGPT. We went back and forth until something actually worked. I felt like the unnecessary part of the process: the copy-and-paste robot. In fact, today’s systems like Antigravity, Claude Code, and ChatGPT Codex run the code, check the errors, and fix things without much human intervention.
It makes sense to me that the next step in this process is for AI systems to learn to improve their own usefulness without us having to specifically ask them to. If this happens, I expect we will see an even faster evolution of AI capabilities and utility!

