Daily Brief
Nvidia Corporation

Semiconductor company

Appears in 18 stories

Stories

Nvidia's $20 billion Groq deal: the AI inference land grab

New Capabilities

Acquiring Groq's assets and team for $20B

On Christmas Eve 2025, Nvidia paid $20 billion for Groq's assets—nearly triple the AI chip startup's $6.9 billion valuation from three months earlier. The deal brings Groq's founder Jonathan Ross, who created Google's original Tensor Processing Unit, and his breakthrough inference technology into Nvidia's fold. It's Nvidia's largest acquisition ever, nearly three times bigger than its $7 billion Mellanox purchase.

Updated 1 hour ago

Trump reopens China to Nvidia’s H200—now Congress wants the national-security math

Rule Changes

Seller of the H200 chip; lobbying for access to China while navigating export-control swings

The Trump administration just did the thing Washington has spent years swearing it wouldn't do: let China buy a near-top-tier Nvidia AI chip again. Now a China hawk in Congress is demanding the Commerce Department explain, in detail, why this isn't a strategic own-goal.

Updated Yesterday

The race to build non-Nvidia AI inference chips

Money Moves

Incumbent the entire challenger field is trying to displace

Nvidia sells roughly four out of every five chips powering today's large AI models. Investors are now writing nine-figure checks on the bet that the next dominant workload, inference (running trained models rather than training them), will move to different silicon.

Updated 3 days ago

Big tech's half-trillion-dollar AI bet

Money Moves

Secures photonics supply chain with $4B investments amid hyperscaler capex surge

The four largest cloud providers—Microsoft, Meta, Alphabet, and Amazon—are tracking toward over $720 billion in combined artificial intelligence (AI) infrastructure spending for 2026, up sharply from $410 billion in 2025. All four reported first-quarter results on April 29, 2026, providing the first detailed test of whether AI revenues are keeping pace with record capital expenditure. Microsoft delivered the clearest signal: revenue of $77.7 billion (up 18% year-over-year), with Azure cloud growth of 40%—above the 37% it had guided—and earnings per share of $4.13 against analyst estimates of $3.67. Microsoft also disclosed that OpenAI has committed $250 billion in incremental Azure cloud service contracts, a figure that simultaneously validates Microsoft's infrastructure bet and deepens its financial exposure to OpenAI's monetization path. Quarterly capex came in at $34.9 billion, putting Microsoft on pace to exceed its $110–120 billion annual guidance if spending holds.
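
The capex pacing claim follows from simple arithmetic; a minimal sketch using only the figures quoted above (Microsoft's guidance covers its fiscal year and spending is rarely perfectly linear, so this is a rough check, not a model):

```python
# Back-of-the-envelope check of the capex pacing claim (values in $B,
# taken from the summary above).
quarterly_capex = 34.9
annual_run_rate = quarterly_capex * 4        # ~139.6
guidance_low, guidance_high = 110, 120

print(f"Annualized run rate: ${annual_run_rate:.1f}B")
print(f"Exceeds guidance ceiling: {annual_run_rate > guidance_high}")
```

Even the guidance ceiling is cleared by nearly $20 billion if the first quarter's pace simply holds.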

Updated Apr 29

Frontier AI funding rounds reach unprecedented scale in 2026

Money Moves

Participated in seed round

A London artificial intelligence lab called Ineffable raised $1.1 billion at a $5.1 billion valuation in April 2026—the largest seed round in European history. The founder, David Silver, is a University College London professor who led the team at Google DeepMind that built AlphaGo and AlphaZero, programs that learned board games at superhuman levels by playing against themselves rather than studying human examples. Silver co-authored a paper with reinforcement learning pioneer Richard Sutton, titled "Welcome to the Era of Experience," arguing that systems trained on human-generated data can synthesize and remix existing knowledge but cannot genuinely discover anything new. Sutton, who with Andrew Barto won the 2025 Turing Award for foundational reinforcement learning research, publicly endorsed Ineffable's mission on its launch day.

Updated Apr 28

OpenAI assembles record private funding round

Money Moves

Finalized $30B equity investment and committed to providing 5GW of Vera Rubin capacity to OpenAI

OpenAI closed a record $122 billion funding round on April 1, 2026, lifting the maker of ChatGPT to an $852 billion post-money valuation and eclipsing every prior private capital raise. Amazon led with a $50 billion commitment: $15 billion upfront and $35 billion contingent on OpenAI reaching artificial general intelligence or completing an IPO by year-end. Nvidia and SoftBank each committed $30 billion, with SoftBank's second $10 billion tranche arriving alongside closes from a16z and D.E. Shaw. The round took the company from a $157 billion valuation seventeen months earlier to one more than five times larger.
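
The round's composition can be sanity-checked against the figures above; a rough sketch (the smaller a16z and D.E. Shaw closes are not itemized in the report, so this bounds the total rather than reproducing it):

```python
# Rough accounting of the round described above (values in $B).
amazon = 15 + 35             # upfront tranche + contingent tranche
nvidia = 30
softbank = 30
itemized = amazon + nvidia + softbank    # 110 of the $122B total

valuation_multiple = 852 / 157           # post-money vs. 17 months earlier
print(f"Itemized commitments: ${itemized}B of $122B")
print(f"Valuation multiple: {valuation_multiple:.1f}x")   # "more than five times"
```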

Updated Apr 23

AI foundation models give robots the ability to see, reason, and act in the physical world

New Capabilities

Building competing open-source robotics foundation models

For decades, industrial robots have been powerful but rigid—they follow pre-programmed instructions and break when the world deviates from what engineers anticipated. Google DeepMind's Gemini Robotics-ER 1.6, released on April 14, 2026, represents the sharpest version yet of a new approach: giving robots the same kind of flexible reasoning that powers chatbots like Gemini, but aimed at understanding and acting in physical space. The model can now read pressure gauges, thermometers, and digital readouts with 98% accuracy when paired with its agentic vision system—a capability that emerged directly from collaboration with Boston Dynamics, whose Spot robot patrols industrial facilities.

Updated Apr 21

Nvidia builds an AI empire through billion-dollar ecosystem investments

Money Moves

Deploying capital to build a vertically integrated AI ecosystem

In ten months, Nvidia has poured roughly $14 billion into six companies that supply the chips, networking gear, optical links, and cloud capacity its artificial intelligence platform depends on. The latest: a $2 billion stake in Marvell Technology, the custom-chip designer behind Amazon's Trainium and Microsoft's Maia accelerators, announced March 31, 2026. Marvell's stock jumped 13% on the news.

Updated Apr 1

The race to build AI's physical foundation

Built World

Dominant AI GPU supplier facing Meta's diversification to AMD; posted $51.2B data center revenue in Q3 2025

ChatGPT's November 2022 launch triggered the fastest infrastructure buildout in tech history. Datacenter construction spending tripled from $15 billion to $45 billion annually in just two years. Hyperscalers are now on track to spend over $1 trillion in 2026—exceeding the GDP of all but 10 countries—racing to secure power, land, and cooling systems before their rivals. Alphabet shocked markets on February 4, 2026 with guidance of $175-185 billion in 2026 capex, 46-55% above Wall Street estimates of $119.5 billion. Amazon escalated the spending war on February 5 with $200 billion 2026 capex guidance after Q4 revenue of $213.4 billion and AWS growth of 24% to $35.6 billion. Microsoft reported $37.5 billion in capex for Q2 FY2026 alone. Meta, meanwhile, committed $6 billion to Corning for fiber-optic cables in late January, secured 6.6 gigawatts of nuclear power through three partnerships announced in early January 2026, and confirmed a multi-billion-dollar Nvidia chip deal; on February 24 it announced a $60-100 billion, 6-gigawatt AMD GPU deal, diversifying away from Nvidia. March 2026 reports show up to 50% of global projects facing delays due to power shortages and community opposition, pushing hyperscalers toward international sites in India, Sweden, and Thailand.
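
One derived figure worth pulling out of the Meta-AMD deal above is the implied price per gigawatt of GPU capacity; a back-of-the-envelope calculation only, since the deal's internal structure is not public:

```python
# Implied price per gigawatt of Meta's 6 GW AMD GPU deal described above
# (illustrative arithmetic; the actual deal terms are not disclosed).
deal_low_usd_b, deal_high_usd_b = 60, 100
gigawatts = 6

per_gw_low = deal_low_usd_b / gigawatts      # 10.0
per_gw_high = deal_high_usd_b / gigawatts    # ~16.7
print(f"Implied cost per GW: ${per_gw_low:.1f}B to ${per_gw_high:.1f}B")
```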

Updated Mar 18

The race to build data centers in orbit

New Capabilities

Platform provider for orbital AI computing

Earth observation satellites generate petabytes of imagery every day, but only about two percent of it ever reaches the ground. The bottleneck is physics: a satellite in low Earth orbit gets maybe ten minutes of ground-station contact per pass, and radio bandwidth cannot keep up with sensor resolution that doubles every few years. Nvidia's answer, announced at its annual developer conference on March 17, 2026, is to stop trying to move the data down and instead move the AI up. The Vera Rubin Space-1 Module packs 25 times the compute power of Nvidia's H100 chip into a radiation-hardened package designed to run large language models and foundation models directly in orbit.
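
The downlink bottleneck described above can be made concrete with a toy link budget. The contact time comes from the article; the link rate and pass count are illustrative assumptions, not the specs of any real satellite:

```python
# Toy downlink budget for a low Earth orbit imaging satellite.
downlink_gbps = 1.2            # assumed radio downlink rate (gigabits/s)
contact_min_per_pass = 10      # per the article
passes_per_day = 8             # assumed usable ground-station passes

contact_seconds = contact_min_per_pass * 60 * passes_per_day
downlinked_gb = downlink_gbps * contact_seconds / 8   # gigabits -> gigabytes
print(f"Downlinked per day: ~{downlinked_gb:.0f} GB")
# A modern imaging payload can generate terabytes per day, so under these
# assumptions only a small fraction reaches the ground.
```

Under these assumptions roughly 720 GB comes down per day against terabytes generated, which is the gap that on-orbit inference is meant to close.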

Updated Mar 17

Autonomous vehicles move from pilot programs to mass deployment

New Capabilities

Dominant supplier of autonomous vehicle computing hardware

Nvidia and Uber announced a plan to deploy 100,000 Level 4 autonomous robotaxis across 28 cities on four continents by 2028, using Nvidia's new DRIVE Hyperion 10 computing platform and an open-source reasoning model called Alpamayo. Five automakers—BYD, Geely, Stellantis, Lucid, and Mercedes-Benz—will manufacture vehicles with Nvidia's hardware pre-installed. Commercial rides begin in Los Angeles and San Francisco in the first half of 2027.

Updated Mar 17

Nvidia's generational GPU leaps reshape who controls AI infrastructure

New Capabilities

Dominant AI chip supplier, expanding into enterprise AI software

Nvidia has spent four years on an annual architecture cadence that no semiconductor company has sustained before. At GTC 2026, chief executive Jensen Huang unveiled the Vera Rubin platform—a system built around a single graphics processing unit that delivers 50 petaflops of inference compute, roughly five times the performance of its Blackwell predecessor, while claiming to cut the cost of generating each AI token by a factor of ten. In the same keynote, Huang launched NemoClaw, an open-source software platform that lets any company deploy autonomous AI agents across its operations without being locked into a specific cloud provider's hardware.

Updated Mar 16

AI models learn to read, predict, and write the genetic code of life

New Capabilities

Infrastructure partner and co-developer of Evo 2

It took thirteen years and $2.7 billion to read the first human genome. Now a single AI model, trained on 9 trillion DNA base pairs from more than 128,000 species, can predict whether an uncharacterized mutation in a breast cancer gene is dangerous—with 90 percent accuracy—without ever being shown that gene. On March 4, the Arc Institute and NVIDIA published Evo 2 in Nature, the largest biological foundation model ever built: 40 billion parameters, a context window of one million nucleotides, and the ability to design synthetic genomes the size of a simple bacterium.

Updated Mar 5

The race to replace copper inside AI data centers

New Capabilities

Strategic investor in Ayar Labs; developing own co-packaged optics platforms

Every time engineers double the data rate on a copper wire, electrical noise doubles too, cutting the usable cable length in half. That physics problem is now strangling the AI industry. As graphics processing units (GPUs) push toward 224 gigabits per second per lane, passive copper cables inside data centers can reach less than one meter before the signal degrades. Ayar Labs, a startup born from research at the Massachusetts Institute of Technology (MIT) and the University of California, Berkeley, just closed $500 million in Series E funding at a $3.75 billion valuation to mass-produce chips that replace those copper links with light.
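
The rate-versus-reach tradeoff described above can be sketched as a simple inverse-proportionality model; a rule-of-thumb sketch, not a signal-integrity simulation, and the 2-meter baseline at 112 Gb/s is an assumed figure:

```python
# Simplified model: treat usable passive-copper reach as inversely
# proportional to lane rate (doubling the rate halves the reach).
def copper_reach_m(rate_gbps: float, base_rate_gbps: float = 112.0,
                   base_reach_m: float = 2.0) -> float:
    """Estimated passive-copper reach (meters) at a given lane rate."""
    return base_reach_m * base_rate_gbps / rate_gbps

print(f"Reach at 112 Gb/s: {copper_reach_m(112):.2f} m")
print(f"Reach at 224 Gb/s: {copper_reach_m(224):.2f} m")
```

Halving reach at 224 Gb/s is consistent with the sub-meter figure cited above, and is why optical links become attractive at that lane rate.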

Updated Mar 3

Autonomous machines move from mines to construction sites

New Capabilities

Technology partner powering Caterpillar's edge AI and autonomy systems

For more than three decades, giant autonomous trucks have hauled billions of tonnes of rock out of mines with no one behind the wheel. Now Caterpillar, the world's largest construction equipment manufacturer, is moving that technology to ordinary job sites. At CONEXPO-CON/AGG 2026 in Las Vegas, the company ran a 26,000-pound CS12 soil compactor through live compaction passes with an empty cab—the first time a major equipment maker has demonstrated this level of autonomous operation on a construction machine in a public setting.

Updated Mar 2

AI chip testing becomes a strategic bottleneck

New Capabilities

Primary driver of AI chip testing demand

Advantest, a Japanese company most people have never heard of, just posted record quarterly sales—and its stock now moves in near-lockstep with NVIDIA's. The reason: every advanced AI chip must pass through test equipment before it ships, and Advantest controls nearly 60% of the global market for the machines that do this. As AI spending explodes, chip testing has quietly become one of the supply chain's tightest chokepoints. Yet the company faces intensifying competition: U.S. rival Teradyne is gaining ground in memory testing, and the entire semiconductor equipment sector is experiencing unprecedented demand as chipmakers race to expand capacity for AI accelerators and high-bandwidth memory.

Updated Feb 4

China's $1.2 trillion pivot

Money Moves

Subject to new semiconductor export rules

China posted a $1.2 trillion trade surplus for 2025—the largest any country has ever recorded. The number is roughly equivalent to the GDP of Indonesia, the world's 16th-largest economy. It comes after seven years of U.S. tariffs designed to shrink that very surplus, and eight days after Canada struck a deal with Beijing that slashed Chinese EV tariffs from 100% to 6.1%. The Canadian deal marked a dramatic shift in Western trade policy toward China and prompted Trump to threaten 100% retaliatory tariffs on Canadian goods.

Updated Jan 30

The packaging pivot: why AI's real bottleneck isn't chips—it's putting them together

Built World

Primary customer driving HBM demand; claimed exclusive HBM4 access through 2026

For decades, chip packaging was the unglamorous final step—stacking and connecting silicon dies after the real engineering was done. Now it's the constraint holding back AI. SK Hynix announced a $12.9 billion investment to build the world's largest advanced packaging facility in South Korea, a bet that the company controlling 61% of the high-bandwidth memory market can't afford to lose its lead as competitors circle. At CES 2026, the company unveiled the first 16-layer, 48GB HBM4 module—double the capacity of current generation memory—requiring silicon wafers thinned to just 30 micrometers, thinner than a human hair.

Updated Jan 15