Yoshua Bengio

Professor, University of Montreal; AI Safety Researcher

Appears in 2 stories

Notable Quotes

Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we're not allowed to shut them down.

These concerning behaviors increase with the reasoning capabilities of these systems.

I'm more and more confident that it can be done in a reasonable number of years, so that there might be an impact before AI systems become so powerful that misalignment causes terrible problems.

Stories

The recursive loop begins

New Capabilities

Led International AI Safety Report 2026; warns capabilities are outpacing safeguards while remaining optimistic about technical solutions at LawZero

In May 2025, DeepMind's AlphaEvolve became the first commercial AI to optimize its own training—shaving 23% off a critical computation kernel. The loop has tightened since. By April 2026, Anthropic's Claude agents were outperforming human alignment researchers on safety experiments, and GPT-5.5 had rewritten its own serving infrastructure to run 20% faster.

Updated 7 days ago

AI systems cross the creativity threshold

New Capabilities

Founder and Scientific Advisor of Mila; Founder of LawZero

For decades, creativity was considered AI's final frontier—the one domain where machines could never match human ingenuity. That assumption just cracked. A study published January 21, 2026 in Scientific Reports tested 100,000 humans against nine leading AI systems on standardized creativity measures. GPT-4 outscored the typical human participant. Google's Gemini Pro matched average human performance.

Updated Jan 27