agi games: the last lab standing
i’ve been reading and listening to a lot of stuff regarding ai and google lately. this is my attempt, with help from my fierce ai assistants, to compile it all into a single thesis that reflects how i feel about the company right now.
larry page said over 25 years ago: “artificial intelligence would be the ultimate version of google.” it’s never been more real than it is today.
the popular framing of the ai race focuses on benchmarks. who’s got the best model this week? the leaderboards reshuffle every couple of months, and the crown passes from one lab to another. but the real game is being played on the balance sheet, and when you look at it through that lens, one company stands apart in a way that’s hard to argue with.
the thesis, simply put: google can win ai by destroying margins. it can afford to price ai products below cost indefinitely because its core business prints cash independent of ai revenue. openai and anthropic cannot survive that, because ai revenue is their entire business. if google plays this hand aggressively, the standalone ai labs bleed out, google absorbs the talent, and eventually reaches agi as perhaps the only remaining frontier lab with both the resources and the research breadth to get there.
let’s walk through why this isn’t as crazy as it sounds.
the frontier ai business model is broken. this is the starting point that makes the whole thesis possible. openai generated $13 billion in revenue in 2025 and is projecting $14 billion in losses in 2026, with cumulative losses of $115 billion through 2029 before hoped-for profitability. they just closed a $110 billion funding round at a $730 billion valuation, the largest private financing in history, for a company that has never turned a profit. anthropic raised $30 billion at a $380 billion valuation. run-rate revenue is $14 billion, but they're still cash-flow negative.
dylan patel of semianalysis argued on the a16z show that openai is not even capturing 10% of the value it has already created, and that the same value-capture problem applies to anthropic and others. one developer consumed $50k worth of anthropic tokens while paying for a $200/month subscription. as mbi deep dives noted, this is a business model problem. the consumer surplus from these models is undeniable, but value capture is broken.
epoch ai’s analysis of model economics is even more damning. the gpt-5 bundle generated $6.1 billion in lifetime revenue and roughly $2.9 billion in gross profit. but training r&d alone cost $5 billion. as mbi put it in “lipstick on frontier ai pigs”: “the gpt-5 bundle gross profit didn’t even cover the cost of training, let alone employee compensation and sales and marketing. the entire opex base was basically being funded by investors.”
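the arithmetic is worth making explicit. a back-of-the-envelope sketch using only the figures cited above (epoch ai / mbi estimates; the numbers are theirs, the calculation is mine):

```python
# unit economics of the gpt-5 bundle, per the epoch ai / mbi figures above
revenue_b = 6.1        # lifetime revenue, $B
gross_profit_b = 2.9   # gross profit after inference costs, $B
training_cost_b = 5.0  # training r&d, $B

gross_margin = gross_profit_b / revenue_b          # ~48% -- before training costs
shortfall_b = training_cost_b - gross_profit_b     # training alone exceeds gross profit

print(f"gross margin: {gross_margin:.0%}")         # gross margin: 48%
print(f"training shortfall: ${shortfall_b:.1f}B")  # training shortfall: $2.1B
```

in other words, even with a roughly 48% gross margin, the bundle's entire gross profit falls about $2 billion short of paying back the training run, with opex, comp, and marketing still to come.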
meanwhile chatgpt plus was introduced at $20/month in february 2023 and is still listed at $20/month despite capabilities improving maybe 10x. why? competition. according to similarweb data cited here, openai’s web traffic share in genai eroded from 87% to 74% in a single year while gemini rose from 6.4% to 12.9%. in that environment, raising prices is self-sabotage.
now look at google. $80 billion in net cash. the only hyperscaler with a net cash position exceeding $50 billion. generating over $400 billion annually in revenue from a diversified base that has nothing to do with selling ai subscriptions. capex trajectory: $32 billion (2023), $52 billion (2024), $85-93 billion (2025), $175-185 billion (2026). google recently raised $32 billion in debt including a rare 100-year bond, but that’s financial optimization, not survival. they’re sitting on $125 billion in cash. openai and anthropic raise capital because they have to. google raises debt because it’s cheap.
the late-2022 google bear case was genai-driven margin compression and search monetization pressure. instead, google services margins kept rising, from roughly 30% when chatgpt launched to 42% by 2025. that’s not a one-off either: eleven consecutive quarters of incremental margins above 50% (more than $0.50 of operating profit for each additional $1 of revenue), helped by a roughly 90% reduction in cost per ai query over 18 months.
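to be concrete about what "incremental margin above 50%" means: it's the change in operating profit divided by the change in revenue between two periods. the numbers below are illustrative, not alphabet's actual quarterly figures:

```python
# incremental margin = delta operating profit / delta revenue.
# hypothetical quarter-over-quarter numbers, for illustration only:
rev_prev, rev_now = 80.0, 90.0   # revenue, $B
op_prev, op_now = 26.0, 32.0     # operating profit, $B

incremental_margin = (op_now - op_prev) / (rev_now - rev_prev)
print(f"incremental margin: {incremental_margin:.0%}")  # incremental margin: 60%
```

each additional dollar of revenue in this hypothetical carries 60 cents of operating profit, which is what eleven straight quarters above 50% looks like in practice: the new revenue is dramatically more profitable than the existing base.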
the vertical integration moat is the key to understanding why. google built its first tpu in just 15 months back in 2013-2014, when it realized ml workloads could require doubling data center capacity. today, an estimated 70-80% of google’s internal compute runs on tpus, bypassing what the industry calls the “nvidia tax.” the latest generation, trillium, delivers 4.7x the compute per chip versus its predecessor and is 67% more energy efficient. in a world where power access is becoming the bottleneck for data center expansion, performance-per-watt may matter more than raw flops.
even google’s competitors are buying its silicon. anthropic signed a deal for up to one million tpus. meta entered similar negotiations. midjourney has migrated workloads to tpus. google profits whether its competitors use nvidia or switch to tpus. thomas kurian, google cloud ceo, put it plainly: “google is the only hyperscaler that offers our own systems and our own models. we’re not just reselling other people’s stuff.”
google also has a hedge that standalone ai labs don’t: if external cloud demand softens, google’s own products consume the compute. search, youtube, waymo, gmail, android. google is its own biggest customer. there’s less overcapacity risk than there is for a company whose only product is the ai itself.
this is what makes the margin destruction strategy rational. charlie munger observed that in some industries, participants behave like rational oligopolists (coca-cola), while in others they act like “demented kellogg,” destroying each other’s margins. mbi deep dives made the key observation: “if you believe in agi and that not getting there first may have profoundly negative implications for your business, you should try to kill your competitors.”
google doesn’t need ai to be a profit center. search and youtube and cloud already generate massive profits. google can price gemini api calls, cloud ai, and consumer products below cost indefinitely. amodei’s profitability model for anthropic assumes oligopolistic pricing with 67% gross margins, but as mbi noted, “alphabet has $80 billion in net cash and doesn’t need profits from ai inference to fund training. if alphabet actually acts rationally, that might be amodei’s nightmare.”
google has run this playbook before. google cloud went from -47% operating margin in 2020 to +30% in 2025. youtube lost $1 billion a year on $30 million in revenue before becoming a $60 billion business. sustain losses, crush competition, flip to profitability once the market consolidates.
distribution compounds the advantage. google doesn’t need to win the chatbot wars. it has seven products with over two billion users each, and it’s embedding ai into all of them. ai overviews already has 2 billion monthly active users and monetizes at approximately the same rate as traditional search. the gemini app hit 650-750 million mau. youtube generates $60 billion in revenue and receives 500 hours of video uploads per minute, and as mbi noted: “as soon as our models get efficient enough, google is going to start training models on youtube. they own the thing; it would be silly not to use the data to their advantage.” data from the antitrust ruling revealed that google receives 9x more queries daily than all rivals combined, and 93% of unique search phrases are seen solely by google. 13 months of google search data equals 17.5 years of bing data. that distribution is extremely difficult to replicate.
deepmind isn’t an llm shop. openai builds gpt models, dall-e, sora, codex. they dissolved their robotics team in 2021. anthropic builds claude, does safety and interpretability research, and has a growing enterprise platform. as far as we know, both are betting everything on language models being the path to agi.
google deepmind is doing something fundamentally different. it’s running active research programs across virtually every domain of ai simultaneously. the breadth is staggering when you actually list it out:

- gemini (foundation models)
- alphafold (protein folding, nobel prize, used by 3 million researchers)
- alphaproof and alphageometry (mathematical reasoning, international math olympiad silver medal level)
- alphaevolve (algorithm design)
- alphagenome (dna analysis)
- alphaqubit (quantum error correction)
- genie 3 (photorealistic world models)
- sima 2 (embodied agents in 3d worlds)
- gemini robotics (physical robot control)
- veo 3 (video generation with synchronized audio)
- imagen 3 (image generation)
- lyria (music)
- weathernext 2 (weather forecasting, cyclone prediction 15 days out)
- a new automated materials science lab opening in 2026 for superconductor discovery
- waymo (15 million real-world autonomous trips per year)
this matters because agi isn’t just better chatbots. intelligence is perception, spatial reasoning, physical interaction, scientific discovery, mathematical proof, and self-improvement. deepmind is building capabilities in all of these simultaneously. no other lab is attempting this breadth at the same scale.
the sima + genie pipeline deserves special attention because it points to something that could be genuinely unprecedented: an infinite synthetic data flywheel for physical ai. genie 3 generates photorealistic, interactive 3d environments from text prompts. sima 2, powered by gemini, operates as an embodied agent within those worlds. crucially, sima 2 uses self-generated training data: a separate gemini model creates tasks, a reward model scores the agent’s attempts, and the agent learns from its own mistakes. the loop is: genie generates worlds, sima trains in them, performance data improves the world model, richer worlds generate better training data. repeat. scale with compute.
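the loop described above can be sketched as a closed feedback cycle. none of the function names below are real google apis; they're hypothetical stand-ins for the components the source describes (genie as world generator, sima as agent, a reward model as scorer):

```python
# toy sketch of the genie/sima synthetic-data flywheel described above.
# all function names are hypothetical; the point is the feedback loop.

def generate_world(world_model, prompt):
    """genie's role: produce an interactive environment from a text prompt."""
    return {"prompt": prompt, "fidelity": world_model["fidelity"]}

def run_agent(world):
    """sima's role: attempt self-generated tasks; a reward model scores them."""
    return {"score": 0.5 * world["fidelity"]}

def improve_world_model(world_model, performance):
    """agent performance data feeds back into the world model."""
    world_model["fidelity"] += 0.1 * performance["score"]
    return world_model

world_model = {"fidelity": 1.0}
for step in range(3):  # in the thesis, this repeats indefinitely, scaled by compute
    world = generate_world(world_model, "a cluttered kitchen")
    performance = run_agent(world)
    world_model = improve_world_model(world_model, performance)
```

the toy numbers are arbitrary; what matters is the shape: each pass through the loop raises world fidelity, which yields better training data, which raises fidelity again. the only external input is compute.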
deepmind has already shown this transfers to reality. they proved video generation models can serve as virtual simulators for robotics with 0.88 correlation to real-world success. waymo closes the loop with 15 million real-world trips per year providing validation data. as mbi pointed out, other companies need more physical robots to train. google just needs more compute.
no other lab has this pipeline. openai shut down robotics. anthropic never started.
robotics might be the most underpriced part of the google story. musk claims 80% of tesla’s value will come from optimus, and analysts are projecting $2-3 trillion market caps on that promise. optimus could deliver real value in structured environments, but its approach relies on imitation learning from video demonstrations, skill by skill. every new task needs curated training data. it’s hard to see how that scales to the kind of general-purpose autonomy the valuation implies.
gemini robotics is doing something more interesting and far less discussed. it’s a vision-language-action foundation model that aims to be the software layer for any humanoid robot, not just google’s own. it already powers apptronik’s apollo, backed by google and mercedes-benz. give it a natural language instruction and a camera feed, and it reasons through the task, plans the steps, and executes, with zero or few-shot generalization across different robot bodies. three major releases in nine months. it’s the same philosophy as the sima/genie work: build broad embodied intelligence that transfers across form factors, rather than programming one task at a time. if the software layer eats robotics the way android ate mobile, the hardware becomes commoditized and the intelligence provider wins.
the talent endgame follows logically. bret taylor and clay bavor at sierra noted that “talent is the primary constraint. there’s a small set of people who know how to architect these models, do the pre-training runs, do post-training rl runs.” openai has lost plenty of marquee talent over the past three years. anthropic was itself a defection from openai. if standalone labs face financial stress, google is the natural destination: stability, resources, research freedom, and compensation that cash-burning startups can’t sustain. google absorbed deepmind in 2014 and it became the crown jewel. the same playbook could repeat at scale.
so where does this end? if google wins the attrition war, it may be the only entity with the resources, talent, and research breadth to push toward agi. scaling now requires optimizing across pre-training compute, rl compute, inference compute, and data quality simultaneously. as mbi noted, “we’ve moved from a single-variable equation to a multi-variable optimization problem.” only full-stack owners can optimize across all variables at once. the compute wall favors the rich: estimates suggest 1,000,000x the rl compute is needed to match a 100x pre-training boost. only companies generating hundreds of billions in annual revenue can fund that.
altman himself, on conversations with tyler, described the endgame: “robots that can build other robots, data centers that can build other data centers, chips that can design their own next generation.” google is the company best positioned to build all three. it designs its own chips and it runs its own data centers.
to quote mbi one last time: “if more evidence of recursive self-improving models emerge in the next couple of years, alphabet will likely be the most valuable company in the world.”
this is the strongest version of the argument. here’s what could break it.
- the whole thesis assumes google will play aggressively, but google invented the transformer and sat on it, needed a “code red” to react to chatgpt, and has institutional DNA that favors caution over killer instinct.
- the margin destruction strategy also has a legal problem: pricing below cost to kill competitors is textbook predatory pricing, and google is already under antitrust pressure from the search ruling. the eu has fined companies for less.
- meta has the same structural advantage as google: a cash-printing ad business, $115 billion-plus in capex, and 3.5 billion users, plus the open-source llama strategy that could commoditize frontier models from below. so far meta has disappointed on model quality, but it doesn’t need to build the best model if it can make the best model not matter.
- deepseek could disrupt cost structures from china. anthropic just accused deepseek, moonshot, and minimax of distilling claude through 16 million queries.
- the $175 billion capex creates a depreciation time bomb if demand disappoints.
- openai and anthropic have massive user bases, enterprise contracts, and investors willing to fund losses for years.
but the structural logic is hard to dismiss. the ai race isn’t won by the smartest model on today’s benchmark. it’s won by the last company standing when the bills come due. and right now, google might be the only company that can afford to keep playing forever.