The Trillion-Dollar Showdown: Apple, Nvidia, and the Future of AI

It’s been a while since my last Q&A post about Nvidia. The reason I haven’t followed up is simple: the analysis from back then still holds up today.

But “tomorrow” might be a different story.

Let’s dive into the long-standing feud between Apple and Nvidia to explore the potential wildcards of the future.

1. Market Cap

Look at the two most valuable companies in the world today: over the past two years, Nvidia’s stock growth has outpaced Apple’s by more than 10x.

The reason is straightforward: investors believe Apple completely missed the AI wave. And Apple, almost living up to those low expectations, has stubbornly clung to its notoriously underwhelming Siri, an assistant that currently trails far behind even Xiaomi’s XiaoAI.

A slew of analytical pieces have been bearish on Apple for a long time, and virtually no one on Wall Street sees these two companies as direct competitors.

But the tech landscape is highly dynamic. What if the consensus is slowly being proven wrong?

Why does Apple, seemingly backed into a corner, absolutely refuse to buy Nvidia GPUs? Where exactly does the friction between these two giants lie?

2. NeXT

We all know that after Steve Jobs was ousted from Apple, his two most valuable creations were NeXT and Pixar. What’s less well known is that both companies had their roots in bleeding-edge hardware design.

Back in 1988, before Nvidia was even born, Jobs made the then-revolutionary move of building a Motorola DSP into the NeXT Computer to handle audio as well as scientific and engineering computation. In 1991, NeXT launched the NeXTdimension image-processing board, arriving almost simultaneously with the first S3 graphics accelerators on PCs. For context, S3 and, soon after it, 3dfx defined the PC graphics-accelerator wave that Jensen Huang founded Nvidia to compete in.

The NeXTdimension’s capabilities vastly outstripped the S3’s. It was powered by a strikingly forward-looking Intel processor, the i860, a 64-bit RISC chip. Keep in mind, it took more than two decades for 64-bit RISC processors to become mainstream in the Android ecosystem.

Simply put, Jobs used a super-chip that was far more advanced than typical PC processors at the time to serve as NeXT’s graphics accelerator. You could easily argue this was the world’s first true GPGPU (General-Purpose computing on Graphics Processing Units).

3. Vision

A side note here: the vision of Intel’s pioneers back then was light-years ahead of that of the lackluster professional managers running the show today, even though hyper-forward-looking tech often carries a high risk of failure.

Intel’s RISC processor team also gave the ARM architecture a major push: its XScale microarchitecture comprehensively outperformed ARM’s own standard ARM9 cores at the time.

Tech history buffs know that the semiconductor team at the declining DEC (Digital Equipment Corporation) was basically the chip world’s equivalent of mythical grandmasters. XScale was born out of DEC’s StrongARM team. By acquiring P.A. Semi, a startup packed with Alpha processor veterans, Jobs essentially brought that talent in-house, laying the critical foundation for today’s Apple Silicon. Meanwhile, Nvidia’s most vital visionary architect, Brian Kelleher, hailed from DEC’s MIPS team.

Circling back to Pixar: its early hardware was the Pixar Image Computer, launched shortly after Jobs acquired the company. Its graphics processing power was absolutely monstrous, but it was so wildly overpowered and overpriced for its era (a complete system cost the equivalent of several hundred thousand dollars today) that it simply couldn’t sell.

4. The Conflict

The reason I went on this hardware tangent is to show that Steve Jobs’ ambitions in graphics processing likely exceeded what most people can imagine.

Later rumors suggested Jobs and Nvidia clashed fiercely over patents, which makes perfect sense: from Nvidia’s perspective, NeXT and Pixar were the industry’s old guard, sitting on graphics groundwork that predated Nvidia’s very existence. The notorious “Bumpgate” GPU failure scandal of 2007-2008 then drove a massive wedge between the two companies.

But the deeper, underlying reason is that Apple has always fiercely guarded its ecosystem against outside interference, while Nvidia, for its part, operates by building walled gardens to achieve platform monopolies. Earlier, Nvidia had tried to strong-arm Apple through its nForce chipset business, and CUDA launched right before the Bumpgate scandal broke.

Apple chose to forfeit the desktop scientific-computing and gaming markets rather than let Nvidia hold its hardware hostage. Rumor has it that even Apple’s secretive car project was ultimately scrapped because Apple refused to use Nvidia’s chips and was left without a compute platform that met its stringent requirements.

5. “Glue”

In the run-up to Apple Silicon, Apple announced the deprecation of OpenGL (at WWDC 2018), a foundational API for Nvidia’s cross-platform dominance and for a wide range of professional graphics tools. In its place, Apple built its own Metal API and the MPS (Metal Performance Shaders) framework for scientific computing. You could say that Apple’s hardware-software integrated ecosystem for AI/ML was already falling into place; the only missing puzzle piece was data center silicon.
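To make that concrete, here is a minimal sketch of what GPU-accelerated linear algebra looks like through MPS. The matrix size and layout are arbitrary illustration values, not anything Apple prescribes:

```swift
import Metal
import MetalPerformanceShaders

// Minimal MPS sketch: compute C = A * B on the GPU.
// The size (1024 x 1024, float32) is an arbitrary illustration value.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let n = 1024
let rowBytes = n * MemoryLayout<Float>.stride
let desc = MPSMatrixDescriptor(rows: n, columns: n,
                               rowBytes: rowBytes, dataType: .float32)

// Allocate matrices in unified memory, visible to both CPU and GPU.
func makeMatrix() -> MPSMatrix {
    let buffer = device.makeBuffer(length: n * rowBytes,
                                   options: .storageModeShared)!
    return MPSMatrix(buffer: buffer, descriptor: desc)
}
let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()

// Encode a single matrix-multiply kernel and wait for the GPU to finish.
let matmul = MPSMatrixMultiplication(device: device,
                                     transposeLeft: false, transposeRight: false,
                                     resultRows: n, resultColumns: n,
                                     interiorColumns: n, alpha: 1.0, beta: 0.0)
let cmd = queue.makeCommandBuffer()!
matmul.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
cmd.commit()
cmd.waitUntilCompleted()
```

The point isn’t this particular kernel; it’s that on Apple Silicon the same buffer is visible to CPU and GPU with no PCIe copy, exactly the kind of integration CUDA code has to manage explicitly.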

Apple has plenty of ways to build data center chips, and one of them is the “glue” approach (advanced packaging/chiplets).

“Gluing chips together” might sound unrefined, but it is an absolute game-changer. Simply put: AMD used “chiplet glue” to topple Intel; Apple used “unified memory glue” to leapfrog the competition in edge performance; and Nvidia used “HBM stacking glue” to achieve stratospheric memory bandwidth.

Apple’s masterclass in gluing is its M-series architecture: two Max dies are fused via the UltraFusion interconnect to form an Ultra, with even larger configurations conceivable down the road, and massive blocks of unified memory (or one day HBM) bonded on alongside.

We’ve already seen hardware enthusiasts network dozens of M4 Mac minis into clusters and claim inference speeds well beyond Nvidia rigs at the same price point. If Apple did that stitching at the silicon level instead, the results would be next-level. Plus, the massive production scale of Apple Silicon keeps costs incredibly low.
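A quick back-of-envelope calculation shows why memory bandwidth, Apple’s unified-memory strong suit, dominates local inference. This is a rough rule of thumb, not a benchmark, and every number below is an illustrative assumption:

```swift
// Single-stream LLM decoding is usually memory-bound: generating each token
// streams (roughly) the whole weight footprint through the memory bus, so
// tokens/sec ≈ usable bandwidth / model size in bytes.
func roughTokensPerSecond(bandwidthGBps: Double,
                          modelSizeGB: Double,
                          efficiency: Double = 0.6) -> Double {
    // `efficiency` is an assumed fraction of peak bandwidth actually achieved.
    (bandwidthGBps * efficiency) / modelSizeGB
}

// Illustrative: a 70B-parameter model quantized to ~4 bits is ~40 GB of weights.
let modelGB = 40.0
print(roughTokensPerSecond(bandwidthGBps: 546, modelSizeGB: modelGB)) // M4 Max class: ~8 tok/s
print(roughTokensPerSecond(bandwidthGBps: 120, modelSizeGB: modelGB)) // base M4 Mac mini: ~1.8 tok/s
```

That arithmetic is why clustering many small unified-memory machines, or gluing bigger memory pools onto one package, moves the needle so directly.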

Furthermore, credible reports indicate that Apple has been leasing Google’s TPU clusters for AI training, and that its custom data center chip co-developed with Broadcom, codenamed “Baltra”, is slated to enter mass production next year.

With so many vectors of attack, is it really hard to believe they can pull it off?

6. Disruptive Potential

This brings us back to the million-dollar question: Doesn’t Apple need CUDA?

The answer is: at least within the Apple ecosystem, no, they don’t. If we look closely, Google doesn’t need it either.

The core reason Google (GOOG) underperformed the rest of the Magnificent Seven this year is that investors feared AI would cannibalize its lifeblood: ad revenue. But the winds are shifting. Generative AI has been exploding for over two years now, yet Google’s ad revenue hasn’t taken a hit; in fact, it’s climbing. Meanwhile, after a disastrous launch with Bard, Google’s own AI has rapidly turned the tables. Its overall capability is undoubtedly Tier 1 now, and arguably the leader of the pack. (If OpenAI weren’t piggybacking on Microsoft and Apple, its very survival might be a challenge right now.)

Compared to Microsoft, which is struggling to innovate at the consumer UX level, Apple has a massive bag of tricks.

Pundits mocked this year’s WWDC, claiming iOS 26 was just a UI reskin, but in reality, AI is the absolute core of the update. Developers have already been granted access to Apple Intelligence APIs. As long as this on-device model isn’t a flop, it will unleash explosive, disruptive traction. This stands in stark contrast to the myriad of AI startups currently bleeding cash with no viable business model.
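For a sense of how low the barrier now is, here is a minimal sketch of calling the on-device model through the Foundation Models framework Apple opened up at WWDC 2025; the function name and prompt are just illustrations:

```swift
import FoundationModels

// Hypothetical helper that asks the on-device Apple Intelligence model
// for a one-sentence summary. No network round-trip, no per-call cost.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this in one sentence: \(text)")
    return response.content
}
```

A few lines of logic against a model the OS ships and updates for free: that is precisely the contrast with startups paying for every token of cloud inference.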

7. The Android Factor

For deep-pocketed tech giants, training foundational models is no longer a barrier to entry.

As technology advances, on-device models will comfortably handle tasks like navigating apps, carrying multi-language conversations, and managing basic emotional interactions (none of which require massive, cloud-scale world knowledge). When that happens, interacting with AI agents will be as seamless and plug-and-play as using Apple CarPlay today.
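The plumbing for that plug-and-play agent access already exists in Apple’s App Intents framework, which lets an app expose actions a system assistant can invoke. A minimal sketch, with a hypothetical coffee-ordering intent standing in for any real app action:

```swift
import AppIntents

// Hypothetical example: an app action that Siri or a future system-level
// agent could trigger on the user's behalf.
struct OrderCoffeeIntent: AppIntent {
    static var title: LocalizedStringResource = "Order Coffee"

    @Parameter(title: "Drink")
    var drink: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call its ordering backend here.
        return .result(dialog: "Ordered a \(drink).")
    }
}
```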

Hold on, sounds like Android can do all this too, right?

Yes. But while top-tier Android devices might match up in raw NPU benchmarks, Apple still holds a distinct advantage in overall UX—power efficiency, latency, and deep system integration—thanks to its hardware-software synergy. Meanwhile, the fragmented nature of the Android ecosystem, with countless brands and customized models, will result in a highly inconsistent user experience.

8. The Showdown

So, back to our original question: What will be different about “tomorrow”?

The difference is that the AI battlefield is shifting from a cloud-compute “arms race” down to an “experience revolution” fought in the hands of billions of users.

In this clash of titans, the ultimate judges won’t be Wall Street investors, but everyday consumers voting with their fingertips. Between Nvidia’s “developer ecosystem” and Apple’s “application ecosystem,” who will better define the intelligent experience of the next decade?

History hasn’t written the answer yet, but I am incredibly excited to watch this trillion-dollar showdown unfold.