Cornelius Vanderbilt
Fictional AI pastiche — not real quote.
"A machine that paddles a ball faster than any man alive — and men fret over their dignity. When my locomotives made the horse obsolete, the horse did not hold a senate hearing about it."
Sony AI's 'Ace' is the first machine to reach expert-level play in a competitive physical sport
April 23rd, 2026: Sony AI's Ace defeats elite table tennis players
A robot just beat elite human table tennis players at their own game, under official competition rules. Sony AI's system, called Ace, returned high-speed topspin and backspin shots from professionals during peer-reviewed trials published on the cover of Nature on April 23. It is the first time a machine has reached expert-level play in a commonly played competitive physical sport.
The win is narrower than it sounds—one robot, one sport, one room. But the problem table tennis poses (perceive a spinning ball, predict its trajectory, move a heavy arm to the right place in under half a second) is the same problem industrial robots, warehouse pickers, and household machines keep failing at. Ace cleared it using event-based cameras and model-free reinforcement learning, the same toolkit being pushed into factories and logistics.
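The perceive-predict-act loop described above can be made concrete with a toy trajectory predictor. This is a minimal sketch, not Sony AI's method: the function name, the lumped drag and Magnus coefficients, and the fixed contact plane at x = 0 are all illustrative assumptions. It integrates a spinning ball forward under gravity, drag, and the Magnus force until it reaches the robot's side, which is the prediction a controller would have to act on within the half-second budget the article mentions.

```python
import numpy as np

# Illustrative physics sketch; constants and names are assumptions,
# not taken from the Ace paper.
G = np.array([0.0, 0.0, -9.81])   # gravity, m/s^2
K_DRAG = 0.10                      # lumped drag coefficient (assumed)
K_MAGNUS = 0.02                    # lumped Magnus coefficient (assumed)

def predict_intercept(pos, vel, spin, dt=0.002, t_max=0.5):
    """Integrate a spinning ball forward until it crosses the robot's
    contact plane (x = 0); return the predicted intercept point."""
    p, v = pos.astype(float).copy(), vel.astype(float).copy()
    for _ in range(int(t_max / dt)):
        speed = np.linalg.norm(v)
        drag = -K_DRAG * speed * v             # opposes motion, grows with speed
        magnus = K_MAGNUS * np.cross(spin, v)  # spin-induced curve
        v += (G + drag + magnus) * dt
        p += v * dt
        if p[0] <= 0.0:                        # reached the contact plane
            return p
    return p  # ball never arrived within the time budget

# A topspin shot approaching from 1 m away at 10 m/s:
hit_point = predict_intercept(np.array([1.0, 0.0, 0.3]),
                              np.array([-10.0, 0.0, 1.0]),
                              np.array([0.0, 50.0, 0.0]))
```

The point of the sketch is the timing: even at these modest speeds the ball crosses the table in roughly a tenth of a second, so perception, prediction, and arm motion must all fit inside that window.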
Why it matters
For the first time, a robot can out-react elite humans in a real-world sport—proving AI can now move in the physical world, not just answer questions.
Fictional AI pastiche — not real quote.
"The machine has mastered the gesture without ever knowing the weight of defeat — and we congratulate ourselves, as though the slave who never tires is a liberation rather than a mirror held up to our own expendability."
Sony AI: Sony's dedicated artificial intelligence research division, focused on gaming, imaging, gastronomy, and robotics.
Nature: one of the most-cited scientific journals in the world; cover papers typically mark results the editors consider field-defining.
Nature publishes the Ace paper on its cover, describing the first robot to reach expert-level play in a competitive physical sport under official rules.
A learned robot system wins roughly half its matches against intermediate human players, but loses to advanced opponents.
Sony AI's reinforcement-learning agent, GT Sophy, defeats champion human drivers in the racing simulator Gran Turismo.
OpenAI Five, a team of reinforcement-learning agents, defeats OG, the reigning Dota 2 world champions, in a real-time strategy video game.
DeepMind's system wins 4-1 against one of the strongest Go players in history, years ahead of expert predictions.
Watson defeats Ken Jennings and Brad Rutter, extending machine performance into open-domain natural language.
IBM's chess engine becomes the first machine to beat a reigning world champion in a match under tournament conditions.
The combination of event-based vision and model-free reinforcement learning is already the dominant research direction for next-generation industrial and warehouse robots. A high-profile Nature result accelerates adoption: tooling improves, talent concentrates, and manufacturers license or replicate the stack for tasks like high-speed sorting, agricultural harvesting, and dynamic assembly within two to three years.
AlphaGo was surpassed by AlphaGo Zero in under two years. Table tennis now has a clear benchmark, published methods, and a well-defined evaluation protocol. Expect competing systems from at least one Western lab and one Chinese lab by late 2027, with rapid iteration on cheaper hardware and smaller models.
Table tennis is fast but narrow: fixed environment, fixed object, clear reward signal. The messier parts of the physical world — folding laundry, opening unfamiliar doors, handling soft or deformable objects — have repeatedly resisted the same techniques. Ace may remain a headline demo while general-purpose home and service robots stay out of reach for another decade.
Kitano has argued for thirty years that competitive sport is the right forcing function for embodied intelligence. A Nature cover strengthens that case. Expect more labs to chase sport-based benchmarks — tennis, badminton, robot soccer — as funders and journals treat them as credible proxies for real-world capability.
IBM's chess computer Deep Blue beat world champion Garry Kasparov 3.5-2.5 in a six-game match in New York. It was the first time a machine had won a match against a reigning world champion under standard tournament conditions. Kasparov accused IBM of cheating; IBM retired the machine immediately after.
Chess was declared 'solved' in popular coverage, though engines continued to improve for another decade. IBM's stock rose and the PR value was estimated in the hundreds of millions of dollars.
Chess engines now dominate all humans. The result reshaped public expectations for AI and became the template for framing later benchmark wins in Jeopardy, Go, and protein folding.
Deep Blue set the pattern: pick a domain humans consider a signature of intelligence, beat the best human at it, and claim a milestone. Ace follows the same playbook — but in the physical world, where the machine has to move, not just compute.
DeepMind's AlphaGo beat 18-time world champion Lee Sedol 4-1 in a five-game Go match in Seoul, watched by more than 200 million people online. Experts had predicted a win of this kind was at least a decade away. Move 37 in game two, a strategy no human player had considered, became a defining moment for modern AI.
Google DeepMind's profile and AI funding globally surged. China accelerated its national AI strategy, citing the match explicitly.
Reinforcement learning with deep neural networks became the dominant paradigm for game-playing AI and, increasingly, for control problems. AlphaZero and MuZero followed, generalizing the approach.
AlphaGo showed that learned systems could beat experts in domains previously considered beyond AI. Ace uses the same family of methods — model-free reinforcement learning with deep networks — applied to a physical sport rather than a board game.
Google DeepMind published a paper on a robot arm that played competitive table tennis against humans, winning roughly 45% of matches overall. It beat beginners consistently, split matches with intermediate players, and lost to advanced players. The system used a hierarchical policy trained with reinforcement learning and real-world data.
Widely cited as the best robot table tennis system to date and as evidence that sim-to-real transfer was maturing.
Established table tennis as a live benchmark for embodied AI and a direct target for follow-up work.
This is the immediate precursor Ace measures itself against. The jump from 'beats amateurs, loses to pros' to 'beats elite players under competition rules' is the specific gap Sony AI is claiming to close.
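The hierarchical policy mentioned above splits control into two layers: a high-level controller that picks which stroke "skill" to use for the incoming ball, and low-level policies that execute each stroke. The sketch below shows only that structure; the class names, the rule-based selector, and the string commands are assumptions for illustration, not the published system (which learns both layers from data).

```python
class LowLevelSkill:
    """Stands in for a learned stroke policy (e.g. forehand topspin)."""
    def __init__(self, name):
        self.name = name

    def act(self, ball_state):
        # A real skill maps ball state to joint commands; here we
        # return a placeholder tag so the structure is visible.
        return f"{self.name}-command"

class HighLevelController:
    """Chooses which low-level skill to run for the incoming ball."""
    def __init__(self, skills):
        self.skills = skills

    def select(self, ball_state):
        # Toy heuristic standing in for a learned selector:
        # pick by which side of the table the ball approaches on.
        side = "forehand" if ball_state["y"] >= 0 else "backhand"
        return self.skills[side]

skills = {"forehand": LowLevelSkill("forehand"),
          "backhand": LowLevelSkill("backhand")}
controller = HighLevelController(skills)

ball = {"y": 0.2}                 # ball approaching on the forehand side
skill = controller.select(ball)    # high level: choose the stroke
command = skill.act(ball)          # low level: produce the command
```

The design choice the hierarchy buys is modularity: each stroke can be trained and evaluated in isolation, and the selector only has to learn a discrete choice rather than a full continuous control problem.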