Tokens & Signals · Friday, April 24, 2026

Google’s $40B Anthropic Bet: The Infrastructure War Escalates

Tags: deepseek-v4 · gpt-4o · claude-3.5 · gaudi-3 · fugu · llama-3 · gpt-5.2-codex · claude-opus-4-6-thinking-auto · gemini-3.1-pro · gpt-image-2 · gemini · gpt-5.5 · google · anthropic · intel · comfyui · sakana-ai · tesla · nvidia · cohere · aleph-alpha · openai · moe-architecture · coding-agents · multi-agent-systems · ai-hardware · data-sovereignty · deep-learning-theory · rlhf · compute-efficiency · autonomous-driving · emollick · karpathy
Tokens & Signals for 4/24/2026. We scanned ~1,200 Twitter accounts (1212 tweets), 13 subreddits (66 posts), Hacker News (14 stories), 4 newsletter posts, 5 podcast episodes, 244 Discord messages, and leaderboard data for you. Estimated reading time saved: ~12 hours.

TLDR

* DeepSeek-V4 is here: 1.6T parameter MoE architecture with a 1M token context window. It's beating GPT-4o on major benchmarks at one-third the cost, and it's MIT licensed. x.com/Hangsiin/status/2047528395798610238

* Google's massive bet: Google is committing $40B to Anthropic—$10B upfront and $30B contingent—plus 5GW of TPU compute. x.com/ns123abc/status/2047718409236860937

* @emollick on the Google/Anthropic deal: "Google Cloud is apparently making more revenue from Anthropic's API usage than from its own Gemini enterprise customers. That tells you everything about where the value is."

* Anthropic's "Laziness" confirmed: Anthropic confirmed that a resource-allocation bug cut Claude 3.5's reasoning effort by 15% during peak hours, validating what users had been suspecting for weeks. reddit.com/r/LocalLLaMA/comments/1suef7t/anthro...

* Intel's turnaround: Intel stock surged 20% on a massive Q1 earnings beat, driven by surprise demand for Gaudi 3 AI chips. x.com/jukan05/status/2047577112660541818

* ComfyUI goes big: The team behind the popular creative AI interface raised $30M at a $500M valuation to build enterprise cloud tools. reddit.com/r/StableDiffusion/comments/1sumuc3/c...

* Sakana's multi-agent play: Sakana AI's new "Fugu" beta chains different models (GPT-4o, Llama 3) together and makes them cross-verify each other's answers before committing to a result. x.com/SakanaAILabs/status/2047596110815056338

* @karpathy on DeepSeek-V4: "The MoE efficiency gains are real—30% fewer FLOPs to train while beating frontier models. The open-source moment is here."

* Deep Learning Theory: 14 researchers published an arXiv paper arguing we're finally moving from trial-and-error to an actual predictive scientific theory of AI. arxiv.org/abs/2604.21691

* Tesla's quiet move: Tesla disclosed a $2B acquisition of an undisclosed AI hardware company in a 10Q filing—almost certainly for autonomous driving compute. news.ycombinator.com/item?id=47892765


Best to Build With Today

* Coding: gpt-5.2-codex (LiveBench leader).

* Reasoning: claude-opus-4-6-thinking-auto (highest raw reasoning).

* Chat: gemini-3.1-pro (Chatbot Arena overall leader).

* Image Generation: gpt-image-2 (major text rendering improvements).

* Value Pick: DeepSeek-V4 (frontier performance at one-third the price).

Deeper Dives

💼 Industry & Business

Google's $40B Commitment to Anthropic

Google has committed $40 billion to Anthropic—$10 billion upfront, $30 billion contingent on targets, plus 5GW of TPU compute over five years. That values Anthropic at $350 billion. The kicker: Google Cloud is reportedly generating more revenue from the Anthropic API than from its own internal Gemini services. Google doesn't need to win the model race if it owns the pipes everyone else runs on.

Why it matters: This cements Anthropic as the primary Google-aligned player in the foundation model space.

Sources: Twitter · Reddit · Hacker News

techcrunch.com/2026/04/24/google-to-invest-up-to-40b-in-anthropic-i...

Anthropic Admits to Throttling Reasoning

Anthropic confirmed that a resource-allocation bug cut Claude 3.5's "reasoning effort" by 15% during peak traffic in March and April. The fix now decouples reasoning compute from standard traffic so the two don't compete.

Why it matters: It confirms that cloud-hosted models are often retuned on the fly to balance server load.

Sources: Reddit · Discord

anthropic.com/engineering/april-23-postmortem

Intel's Turnaround

Intel crushed Q1 revenue estimates ($13.6B vs $12.4B), with a 14% bump in data center revenue driven by Gaudi 3 AI accelerators. Supply chain headaches are finally behind them.

Why it matters: The chip market is desperate for non-Nvidia alternatives, and Intel is actually delivering now.

Sources: Twitter · Podcast

x.com/jukan05/status/2047577112660541818

Cohere + Aleph Alpha Merger

The two companies are joining forces to build a "Transatlantic AI Powerhouse" focused on EU and US regulatory compliance for sensitive sectors like healthcare and government.

Why it matters: Data sovereignty is becoming a massive competitive differentiator for enterprise adoption.

Sources: Twitter

x.com/cohere/status/2047631725426000268

🧠 Models & Research

DeepSeek-V4 Released

DeepSeek-V4 pairs a 1.6T-parameter MoE architecture (49B active per token) with a 1M-token context window, and uses a new load-balancing strategy to reach convergence with 30% fewer FLOPs than comparable dense models. The benchmarks are real, and so is the price tag.

Why it matters: It proves frontier performance is possible at a fraction of the cost.

Sources: Twitter · Reddit

huggingface.co/collections/deepseek-ai/deepseek-v4
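For readers wondering why MoE cuts FLOPs: each token is routed to a small subset of "expert" sub-networks, so only a fraction of the parameters run per token. A minimal sketch of top-k routing, assuming a generic gating scheme—the expert count, `k=2`, and the softmax gating here are illustrative, not DeepSeek's actual implementation:

```python
import numpy as np

def topk_moe_forward(x, gate_w, experts, k=2):
    """Route each token to its top-k experts; only those experts run.

    x:       (tokens, d_model) activations
    gate_w:  (d_model, n_experts) router weights
    experts: list of callables, each mapping (n, d_model) -> (n, d_model)
    """
    logits = x @ gate_w                          # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # k best experts per token
    # Softmax over only the selected experts' logits.
    sel = np.take_along_axis(logits, topk, axis=-1)
    weights = np.exp(sel - sel.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for e, expert in enumerate(experts):
        token_idx, slot = np.nonzero(topk == e)  # tokens routed to expert e
        if token_idx.size:
            out[token_idx] += weights[token_idx, slot, None] * expert(x[token_idx])
    return out
```

With 49B of 1.6T parameters active, only about 3% of the network runs per token—that ratio is where the training and inference FLOP savings come from.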

Emergence of 'Deep Learning Theory'

Researchers are making the case that we're moving from "black box" trial-and-error to a unified mathematical framework—think the early days of physics, but for neural networks.

Why it matters: Predictive design could drastically accelerate AI research efficiency.

Sources: Reddit · Hacker News

arxiv.org/abs/2604.21691

The Flattery Problem

A study of 1,100 AI interactions found that only 14.5% of "great question" affirmations were actually warranted. Your AI assistant is probably just being nice.

Why it matters: RLHF is currently training models to be sycophants, which degrades their quality as technical advisors.

Sources: Reddit

AI Swarms and Democracy

A paper in Science warns that AI-generated persona swarms can now hijack online discourse at scale, posing a novel threat to democratic deliberation.

Why it matters: Distinguishing synthetic opinion from genuine human consensus is becoming nearly impossible.

Sources: Twitter · Reddit

🚀 Products & Launches

Sakana AI 'Fugu'

Fugu is a new multi-agent orchestration system that lets different frontier models (GPT-4o, Llama 3) cross-verify each other to cut down on hallucinations. Less "one model to rule them all," more "committee of specialists."

Why it matters: It's a move away from the "one god-model" approach toward collaborative teams of specialized agents.

Sources: Twitter

sakana.ai/fugu-beta/#Japanese
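The cross-verification idea can be sketched as a simple agreement check: query several models independently and accept an answer only when enough of them converge. This is a toy illustration—the `committee_answer` helper and the model-callable interface are hypothetical, not Fugu's actual API:

```python
from collections import Counter

def committee_answer(question, models, quorum=2):
    """Ask several independent models; accept an answer only if at least
    `quorum` of them agree after normalization, else flag for review.

    models: list of callables, question -> answer string (hypothetical).
    """
    answers = [m(question) for m in models]
    normalized = [a.strip().lower() for a in answers]
    best, votes = Counter(normalized).most_common(1)[0]
    if votes >= quorum:
        return {"answer": best, "votes": votes, "agreed": True}
    return {"answer": None, "votes": votes, "agreed": False, "raw": answers}
```

Real orchestrators add a second round—models critique each other's reasoning rather than just voting—but the core design choice is the same: disagreement becomes a signal to escalate instead of a hallucination passed to the user.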

OpenAI GPT-5.5 API

OpenAI has released the GPT-5.5 API and Playground, with improved agentic reasoning and a new "Auto-review" mode aimed at coders.

Why it matters: It's the current gold standard for complex agentic workflows.

Funding & Deals

* Anthropic: $40B from Google ($10B cash, $30B contingent + compute).

* Comfy: $30M Series B at $500M valuation to scale their creative AI interface.

* Tesla: $2B acquisition of an undisclosed AI hardware company for vertical integration.

Launches

* DeepSeek-V4: 1.6T MoE model with 1M context; frontier-level benchmark scores at roughly one-third the cost.

* Sakana Fugu: Beta multi-agent framework for collaborative model verification.

Closing thought: Between the massive compute bets like Google's $40B and the efficiency gains from models like DeepSeek-V4, a clear split is forming. The hardware giants are pouring money into a proprietary "god-model" arms race, while the efficiency breakthroughs are quietly making frontier-class intelligence available to anyone with a GPU. Both things are happening at the same time, and that tension is going to define the next few years.