Semiconductor and AI computing company
Appears in 13 stories
Dominant supplier of autonomous vehicle computing hardware
Nvidia and Uber announced a plan to deploy 100,000 Level 4 autonomous robotaxis across 28 cities on four continents by 2028, using Nvidia's new DRIVE Hyperion 10 computing platform and an open-source reasoning model called Alpamayo. Five automakers—BYD, Geely, Stellantis, Lucid, and Mercedes-Benz—will manufacture vehicles with Nvidia's hardware pre-installed. Commercial rides begin in Los Angeles and San Francisco in the first half of 2027.
Updated 1 hour ago
Secures photonics supply chain with $4B investments amid hyperscaler capex surge
The four largest cloud providers—Microsoft, Meta, Alphabet, and Amazon—guided to over $650 billion in combined AI infrastructure spending for 2026 in their latest earnings reports, up sharply from $410 billion in 2025, and have begun tapping debt markets to fund the buildout. Microsoft and Meta reported on January 28-29 to divergent market reactions: Microsoft shares plunged 12% on $37.5 billion in quarterly capex, while Meta surged on $115-135 billion in 2026 guidance. Alphabet stunned investors on February 4 with $175-185 billion in capex plans—double last year's spend—while Amazon topped all on February 5 with a $200 billion pledge, 50% above 2025 and $50 billion over expectations, prompting a share selloff despite strong revenue beats.
Updated Yesterday
Dominant AI chip supplier, expanding into enterprise AI software
Nvidia has spent four years on an annual architecture cadence that no semiconductor company has sustained before. At GTC 2026, chief executive Jensen Huang unveiled the Vera Rubin platform—a system built around a single graphics processing unit that delivers 50 petaflops of inference compute, roughly five times the performance of its Blackwell predecessor, while claiming to cut the cost of generating each AI token by a factor of ten. In the same keynote, Huang launched NemoClaw, an open-source software platform that lets any company deploy autonomous AI agents across its operations without being locked into a specific cloud provider's hardware.
Infrastructure partner and co-developer of Evo 2
It took thirteen years and $2.7 billion to read the first human genome. Now a single AI model, trained on 9 trillion DNA base pairs from more than 128,000 species, can predict whether an uncharacterized mutation in a breast cancer gene is dangerous—with 90 percent accuracy—without ever being shown that gene. On March 4, the Arc Institute and NVIDIA published Evo 2 in Nature, the largest biological foundation model ever built: 40 billion parameters, a context window of one million nucleotides, and the ability to design synthetic genomes the size of a simple bacterium.
Updated Mar 5
Strategic investor in Ayar Labs; developing own co-packaged optics platforms
Every time engineers double the data rate on a copper wire, electrical noise doubles too, cutting the usable cable length in half. That physics problem is now strangling the AI industry. As graphics processing units (GPUs) push toward 224 gigabits per second per lane, passive copper cables inside data centers are limited to less than one meter before the signal degrades. Ayar Labs, a startup born from research at the Massachusetts Institute of Technology (MIT) and the University of California, Berkeley, just closed $500 million in Series E funding at a $3.75 billion valuation to mass-produce chips that replace those copper links with light.
Updated Mar 3
Technology partner powering Caterpillar's edge AI and autonomy systems
For more than three decades, giant autonomous trucks have hauled billions of tonnes of rock out of mines with no one behind the wheel. Now Caterpillar, the world's largest construction equipment manufacturer, is moving that technology to ordinary job sites. At CONEXPO-CON/AGG 2026 in Las Vegas, the company ran a 26,000-pound CS12 soil compactor through live compaction passes with an empty cab—the first time a major equipment maker has demonstrated this level of autonomous operation on a construction machine in a public setting.
Updated Mar 2
Finalized $30B equity investment and committed to providing 5GW of Vera Rubin capacity to OpenAI
In October 2024, OpenAI raised $6.6 billion at a $157 billion valuation. Seventeen months later, on February 27, 2026, the maker of ChatGPT closed a record $110 billion funding round at a $730 billion pre-money valuation ($840 billion post-money)—the largest private capital raise in history. Amazon led with a $50 billion commitment ($15 billion upfront, $35 billion contingent on OpenAI achieving AGI or completing an IPO by year-end), while Nvidia and SoftBank each committed $30 billion. The round remains open for additional investors. The deal includes expanded infrastructure partnerships: Amazon will provide $100 billion in additional AWS compute services over eight years (on top of the existing $38 billion commitment), while Nvidia will supply 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity using its Vera Rubin systems.
Updated Feb 27
Dominant AI GPU supplier facing Meta's diversification to AMD; $51.2B data-center revenue in Q3 2025
ChatGPT's November 2022 launch triggered the fastest infrastructure buildout in tech history. Data-center construction spending tripled from $15 billion to $45 billion annually in just two years. Hyperscalers are now on track to spend over $1 trillion in 2026—more than the GDP of all but 10 countries—racing to secure power, land, and cooling systems before their rivals. Alphabet shocked markets on February 4, 2026, with guidance of $175-185 billion in 2026 capex, 55-65% above the Wall Street estimate of $119.5 billion. Amazon escalated the spending war on February 5 with $200 billion in 2026 capex guidance after Q4 revenue of $213.4 billion and AWS growth of 24% to $35.6 billion. Microsoft reported $37.5 billion in capex for Q2 FY2026 alone (a single quarter). Meta, meanwhile, committed $6 billion to Corning for fiber-optic cables in late January, secured 6.6 gigawatts of nuclear power through three partnerships announced in early January 2026, confirmed a multi-billion-dollar Nvidia chip deal, and on February 24 announced a $60-100 billion, 6-gigawatt AMD GPU deal—reducing its dependence on Nvidia.
Updated Feb 24
Primary driver of AI chip testing demand
Advantest, a Japanese company most people have never heard of, just posted record quarterly sales—and its stock now moves in near-lockstep with NVIDIA's. The reason: every advanced AI chip must pass through test equipment before it ships, and Advantest controls nearly 60% of the global market for the machines that do this. As AI spending explodes, chip testing has quietly become one of the supply chain's tightest chokepoints. Yet the company faces intensifying competition: U.S. rival Teradyne is gaining ground in memory testing, and the entire semiconductor equipment sector is experiencing unprecedented demand as chipmakers race to expand capacity for AI accelerators and high-bandwidth memory.
Updated Feb 4
Subject to new semiconductor export rules
China posted a $1.2 trillion trade surplus for 2025—the largest any country has ever recorded, roughly equivalent to the GDP of Indonesia, the world's 16th-largest economy. It comes after seven years of U.S. tariffs designed to shrink that very surplus, and eight days after Canada struck a deal with Beijing slashing Chinese EV tariffs from 100% to 6.1%. That dramatic shift in Western trade policy toward China prompted Trump to threaten 100% retaliatory tariffs on Canadian goods.
Updated Jan 30
Primary customer driving HBM demand; claimed exclusive HBM4 access through 2026
For decades, chip packaging was the unglamorous final step—stacking and connecting silicon dies after the real engineering was done. Now it's the constraint holding back AI. SK Hynix announced a $12.9 billion investment to build the world's largest advanced packaging facility in South Korea, a bet that the company controlling 61% of the high-bandwidth memory market can't afford to lose its lead as competitors circle. At CES 2026, the company unveiled the first 16-layer, 48GB HBM4 module—double the capacity of current generation memory—requiring silicon wafers thinned to just 30 micrometers, thinner than a human hair.
Updated Jan 15
Acquiring Groq's assets and team for $20B
On Christmas Eve 2025, Nvidia paid $20 billion for Groq's assets—nearly triple the AI chip startup's $6.9 billion valuation from three months earlier. The deal brings Groq's founder Jonathan Ross, who created Google's original Tensor Processing Unit, and his breakthrough inference technology into Nvidia's fold. It's Nvidia's largest acquisition ever, nearly three times bigger than its $7 billion Mellanox purchase. By structuring the deal as a "non-exclusive licensing agreement" rather than an outright acquisition, Nvidia bypasses Hart-Scott-Rodino Act merger review requirements that trigger automatic FTC scrutiny—following Microsoft's 2024 playbook with Inflection AI. The deal's unusual structure has drawn immediate analyst warnings about "the fiction of competition" as Groq's leadership and technical talent move to Nvidia while the company nominally continues independently. Adding to the intrigue: 1789 Capital, where Donald Trump Jr. serves as partner, was among Groq's September investors who saw their stake nearly triple in just three months.
Updated Dec 27, 2025
Seller of the H200 chip; lobbying for access to China while navigating export-control swings
The Trump administration just did the thing Washington has spent years swearing it wouldn’t do: let China buy a near-top-tier Nvidia AI chip again. Now a key China hawk in Congress is demanding the Commerce Department explain, in detail, why this isn’t a strategic own-goal.
Updated Dec 13, 2025