Inside Google’s AI Gambit and the Specter of a Bubble

Google’s chief executive, Sundar Pichai, is offering a rare glimpse into the heart of the tech behemoth at its California headquarters, the Googleplex. A sunny walkway traverses the campus, passing a giant dinosaur skeleton, a vibrant beach volleyball court, and scores of employees enjoying lunch under a hazy November sun.

But the real destination, the site of what Google believes is its strategic ace, is tucked away, a nondescript laboratory hidden behind a curtain of trees at the back of the campus.

This is where the Tensor Processing Unit (TPU) is being forged. The chip itself is unassuming, but its ambition is anything but: Mr. Pichai says this modest piece of silicon is poised to power every single A.I. query processed by Google. In the evolving landscape of the global economy, this makes the TPU potentially one of the world’s most critical objects.

“AI is the most profound technology humanity [has ever worked] on,” Mr. Pichai told the BBC. “It has potential for extraordinary benefits – we will have to work through societal disruptions.”

Yet, beneath the starry-eyed promise of A.I., a confounding question lingers: Is the industry’s unprecedented surge merely a spectacular bubble in the making? If so, its bursting could be on the scale of the dotcom crash at the turn of the century, with far-reaching consequences.

The Bank of England has already issued a stern caution about a “sudden correction” in global financial markets, noting that “market valuations appear stretched” for A.I. tech firms. The chief executive of OpenAI, Sam Altman, has also speculated that “there are many parts of AI that I think are kind of bubbly right now.”

When asked if Google could be insulated from a potential collapse, Mr. Pichai suggested the company could weather the storm, but tempered his excitement with a warning: “I think no company is going to be immune, including us.”

So why, then, is Google plowing more than $90 billion a year into its A.I. build-out, a three-fold increase in just four years, precisely as these risks are being discussed?

The Unprecedented Surge and the Scale of the Risk

The A.I. boom is, in purely financial terms, the biggest market surge the world has ever witnessed.

The scale of the numbers is extraordinary. Collectively, Google and four other giants, all headquartered within a short drive of one another in Silicon Valley, account for $15 trillion in market value. Chipmaker-turned-A.I.-systems pioneer Nvidia in Santa Clara is now worth more than $5 trillion. A 10-minute drive south, Apple H.Q. in Cupertino hovers around $4 trillion. Fifteen minutes west, Meta (formerly Facebook) is valued at $1.9 trillion. In San Francisco, OpenAI was recently valued at $500 billion. Google’s parent firm, Alphabet, is worth about $3.3 trillion, having almost doubled in value since last April.

The financial repercussions of this trend are significant. The value of these companies’ shares, along with a few others such as Microsoft in Seattle, has served as a massive cushion for the U.S. economy against trade wars and has kept global retirement plans buoyant.

But the risk is baked into the numbers. The incredible dependence of U.S. stock market growth on a handful of technology titans is stark. The “Magnificent 7” (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla) collectively make up one third of the total valuation of the S&P 500. According to the I.M.F., market value is now more highly concentrated in a few firms than it was during the height of the dotcom bubble in 1999.

Mr. Pichai maintains that history is simply repeating itself, pointing to the “inflection points” that emerge every decade or so: the personal computer, the late-1990s internet, followed by mobile and cloud. “Now it’s clearly the era of Artificial Intelligence.”

But is it a bubble? Mr. Pichai sees two sides to the question. First, the progress in A.I. services used by people and companies is “palpably exciting.”

Yet, he conceded: “It’s also true when we go through these investment cycles, there are moments we overshoot collectively as an industry… So I think it’s both rational and there are elements of irrationality through a moment like this.”

A new distinction is emerging in the markets: between businesses that rely on borrowed money and complicated deals to access the crucial A.I. chips, and the largest players, such as Google, Microsoft, and Amazon, which can fund their massive chip and data center investments from their own pockets. 

This distinction leads directly to Google’s “restricted” silicon lab and its prized TPUs.

‘Restricted’: Inside the Chip Lab

The testing lab for Google’s chips is the size of a small soccer field, laced with a mesmerizing mesh of multi-colored wires and deep blue blinking lights. Signs around the facility are unambiguous: “restricted.”

The most striking feature is the sheer, deafening noise, the roar of the cooling systems required to manage the immense heat generated by the chips as they crunch trillions of calculations.

The TPUs are a specialist type of ASIC (application-specific integrated circuit), custom-built by Google for A.I. algorithms. They differ from the general-purpose CPU (central processing unit) and the parallel-processing GPU (graphics processing unit) made famous by Nvidia. The latest version is called Ironwood.

The development of the TPU cluster is part of Mr. Pichai’s overall strategy: to own the entire scientific supply chain, from the silicon and the data to the A.I. models and everything in between.

The core frenzy of the A.I. boom has been the “mad dash” to amass top-performing chips and pack them into data centers, which Nvidia’s chief executive, Jensen Huang, famously termed “A.I. factories.”

Stories of this frenzied competition abound. At a recent dinner at Nobu in Palo Alto, Elon Musk and Larry Ellison, the head of Oracle, were said to have aggressively courted Mr. Huang to secure more GPUs.

As Mr. Ellison recounted: “I would describe the dinner as me and Elon begging Jensen for GPUs. Please take our money – no, no take more. You’re not taking enough. We need you to take more, please!”

This desperate race to procure and scale up high-performance chips is what is driving the A.I. boom, fueled by a perception that the only way to win is through relentless spending.

The Chip Race

The terrace of the Rosewood Sand Hill hotel, the de facto deal-making hub near the Santa Cruz Mountains, is abuzz with whispered rumors about who will be the next to announce their own custom A.I. chips to compete with Google and Nvidia.

The investment plans of OpenAI, co-founded by Elon Musk, have recently caused a stir, involving a complex web of cross-investments to secure the necessary hardware for A.I. processing.

While few doubt the phenomenal user growth of its chatbot, ChatGPT, some have questioned how the company’s spending commitments tally with its revenues. Co-founder Sam Altman, who has ambitions to design custom A.I. chips, shot back at an investor questioning its revenue figures: “If you want to sell your shares, I’ll find you a buyer. Enough.”

Mr. Altman has since committed to about $1.4 trillion in spending over the next eight years, arguing that now is the time to invest in scaling up technology. He maintains that while “I do not think the government should be writing insurance policies for AI companies,” he also suggested: “What we do think might make sense is governments building (and owning) their own AI infrastructure.”

Recent market jitters saw a supplier to OpenAI, CoreWeave, lose 26% of its share value earlier this month, amid broader reactions to perceived credit risk among A.I. firms.

ChatGPT Versus Gemini 3.0

None of this has tempered the industry’s excitement. Google’s consumer A.I. model, Gemini 3.0, launched this week to great fanfare, setting up a direct and costly battle with OpenAI’s dominant ChatGPT for market share.

The key question remains: Will the end result of this fantastic investment be less reliable information, chatbots recommending glue as a pizza ingredient, for instance?

“I think if you only construct systems standalone and you only rely on that, [that] would be true,” Mr. Pichai told me. “Which is why I think we have to make the information ecosystem much richer than just having AI technology being the sole product in it.”

Truth matters, the reporter pressed. “Truth matters,” he agreed.

The Energy Question

Another major challenge for advancing A.I. is simply: how to power it?

According to the I.M.F., by 2030, global data centers will consume roughly the same amount of electricity as all of India did in 2023. This comes at a time when governments are committing to ambitious climate change targets.

The reporter put it to Mr. Pichai: Is it coherent for a government, like the U.K., to aim to generate 95% of its electricity from low-carbon sources by 2030, and simultaneously aspire to be an A.I. superpower?

“I think it’s possible. But I think for every government, including the U.K., it’s important to figure out how to scale up infrastructure, including energy infrastructure,” he said. “You don’t want to constrain an economy based on energy. I think that will have consequences.”

Lessons From the Past

Through coverage of the 2000 dotcom bubble, one lesson became clear: even in the toughest of crashes, catastrophe is not guaranteed for all.

Take Amazon. Its share price once slumped to $6, and its market capitalization fell to $4 billion during that crash. Yet, some 25 years later, the company is valued at $2.4 trillion.

The same may be true for some of the companies shaken by a potential A.I. bubble burst.

Another looming factor may explain why Silicon Valley is pressing on regardless: the glittering prize of achieving Artificial General Intelligence (AGI), the point at which machines match human intelligence, which many believe is within reach.

However, a thought-provoking perspective from one Silicon Valley figure suggests that the bubble-or-no-bubble question is secondary. Step back, and the bigger picture is a global battle for A.I. supremacy, with the U.S. pitted against China.

In this free-market free-for-all, the U.S. currently maintains superiority in silicon. Companies like Nvidia, with their GPUs, and Google, with their TPUs, can accelerate into the storm. Others will surely fail, and spectacularly so.

But the physical footprint left behind, containing sheer computing firepower for mass A.I. deployment, will inevitably shape our economy, how we work and learn, and ultimately, who dominates the world for the rest of the 21st Century.

Stay ahead in the world of AI, business, and technology by visiting Impact AI News for the latest news and insights that drive global change.

Got a story to share? Pitch it to us at info@impactnews-wire.com and reach the right audience worldwide!

