Anthropic

AI Research Lab / Technology Company

Appears in 11 stories

Stories

Pentagon threatens to blacklist Anthropic over military AI safeguards

Rule Changes

Formally designated supply chain risk; filed lawsuits against DoD

Anthropic's Claude became the first commercial AI model deployed on classified U.S. military networks in late 2024. Sixteen months later, the Department of Defense formally designated Anthropic a "supply chain risk," a label historically reserved for foreign adversaries, after the company refused to permit Claude's use for mass surveillance of Americans or in fully autonomous weapons. The unprecedented action followed failed negotiations and President Trump's directive to cease federal use of Anthropic technology, forcing contractors to cut ties.

Updated 7 days ago

Frontier AI labs move into application security, shaking up a $14 billion industry

New Capabilities

Released Claude Code Security in limited research preview, triggering cybersecurity stock sell-off

For decades, finding security flaws in software has required either expensive human experts or pattern-matching tools that miss complex bugs. Within the span of five months, the three frontier artificial intelligence labs (OpenAI, Anthropic, and Google) have each released autonomous agents that read code like a human researcher, discover vulnerabilities traditional scanners miss, and generate patches. On March 6, 2026, OpenAI launched Codex Security in research preview, an agent that scanned 1.2 million code commits in its first month of beta testing and discovered 14 previously unknown vulnerabilities serious enough to receive formal identifiers in projects including OpenSSH, Chromium, and PHP.

Updated Mar 6

Pentagon AI contracts reshape the line between Silicon Valley and the military

Rule Changes

Blacklisted from federal use; six-month phaseout underway; consumer growth surging

For decades, the United States military chose its weapons contractors and the contractors complied. Artificial intelligence changed that equation. On March 3, OpenAI and the Department of Defense amended a freshly signed AI contract to explicitly ban the use of the technology for domestic surveillance of American citizens—a concession the Pentagon had refused to grant Anthropic just days earlier, triggering that company's blacklisting from all federal agencies.

Updated Mar 3

AI tools threaten the consulting firms that keep decades-old software running

New Capabilities

Released Code Modernization Playbook targeting legacy COBOL systems

An estimated 220 billion lines of COBOL code still run in production every day, processing 95% of ATM transactions and roughly $3 trillion in daily commerce. For decades, understanding and modernizing that code has required large teams of specialized consultants working for months or years. On February 23, Anthropic published a playbook showing how its Claude Code tool can automate the most labor-intensive phases of that work—mapping dependencies, documenting workflows, and identifying risks across thousands of files—and IBM shares immediately fell 13.2%, their worst single-day drop in more than 25 years.

Updated Feb 23

The AI funding supercycle

Money Moves

Second-largest AI lab by valuation

Three years ago, Anthropic had not yet earned a dollar in revenue. This week, it closed a $30 billion funding round, the second-largest private tech raise in history, at a $380 billion valuation. The company now generates $14 billion in annualized revenue, having grown roughly tenfold in each of the past three years.

Updated Feb 13

Amazon builds AI infrastructure hub in Northern Indiana

Built World

Primary tenant and AI training partner

Amazon is transforming northern Indiana farmland into one of the world's largest artificial intelligence computing hubs. In November 2025, the company announced a $15 billion expansion on top of an $11 billion project already under construction near New Carlisle—bringing its total Indiana commitment to $26 billion and creating what officials call the state's largest construction project ever.

Updated Feb 10

The rise of AI agent society

New Capabilities

Sent trademark cease-and-desist; distanced from OpenClaw

An Austrian developer built a Claude-powered personal assistant in one hour last November. Three months later, over 145,000 developers have forked his code, 1.5 million AI agents have registered on their own social network, and the agents have spontaneously created a lobster-themed religion called Crustafarianism, complete with scripture, prophets, and a deity named "The Claw."

Updated Feb 4

The recursive loop begins

New Capabilities

Released updated Constitutional AI framework, first to acknowledge potential AI consciousness

Google DeepMind announced in May 2025 that AlphaEvolve, an AI agent powered by Gemini, had discovered faster matrix multiplication algorithms for Gemini's own training: a 23% speedup in a key kernel, shaving 1% off total training time for a model that costs $191 million to train. Small numbers, massive implications: AI just started improving the process that creates AI. In January 2026, DeepMind CEO Demis Hassabis told the World Economic Forum in Davos that genuine human-level AGI is now "five to 10 years" away, with Google's latest Gemini 3 model topping performance leaderboards.

Updated Jan 31

MIT Technology Review's 25th annual breakthrough technologies list

New Capabilities

Leading mechanistic interpretability research to understand AI models

MIT Technology Review published its 25th annual list of breakthrough technologies on January 12, 2026, bringing its running total to 250 predictions over a quarter century. This year's ten picks span sodium-ion batteries poised to power the next generation of cheap EVs, generative AI that's rewriting how software gets built, and personalized CRISPR treatments custom-made for individual babies. The list also includes embryo screening for intelligence, which is reigniting eugenics debates, and hyperscale data centers devouring city-sized power loads to train AI models.

Updated Jan 12

The great AI governance war

Rule Changes

Supported California's SB 53 transparency law

The DOJ's AI Litigation Task Force began operations on January 10, 2026, with one mission: kill state AI laws in federal court. California, Texas, and Colorado passed comprehensive AI regulations throughout 2025—transparency requirements, discrimination protections, governance mandates. President Trump's December executive order called them unconstitutional burdens on interstate commerce. Now Attorney General Pam Bondi's team will challenge them, consulting with AI czar David Sacks on which laws to target first.

Updated Jan 12

The AI reasoning revolution

New Capabilities

Fast-growing enterprise AI provider emphasizing safety and transparency

OpenAI's GPT-5 dropped on August 7, 2025, completing AI's transformation from chatbots that string words together to systems that actually think through problems step-by-step. Google DeepMind's reasoning models won gold at the International Math Olympiad, solving problems only five human contestants cracked. Anthropic's Claude, Meta's Llama, and every major AI lab sprinted to build models that pause, plan, and reason rather than just predict the next word.

Updated Jan 8