Tokens & Signals for 3/9/2026. We scanned ~605 Twitter accounts, 13 subreddits (0 posts), Hacker News (14 stories), 10 newsletters, 10 podcasts, and leaderboard data for you. Estimated reading time saved: ~24 hours.
Best to Build With Today
* Coding — gpt-5.4-xhigh (LiveBench leader for agentic coding)
* Reasoning — claude-opus-4-6-thinking-auto (Top performer for complex math/reasoning)
* Chat — gemini-3.1-pro-preview (Top Arena general chat model)
* Open-source — NVIDIA Nemotron 3 Nano 30B (Best for efficient local agentic tasks)
Deeper Dives
🧠 Models & Research
Karpathy open-sources 'autoresearch'
Karpathy dropped a 630-line framework that runs ML research on autopilot. The agent reads high-level instructions, modifies training scripts, runs 5-minute experiments on a single GPU, and keeps only the changes that actually improve performance. It can crank through up to 100 experiments overnight without anyone watching.
Why it matters: This isn't AI helping you write code anymore — it's AI running the entire research loop by itself.
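The loop the item describes — propose a change, run a short experiment, keep it only if the metric improves — can be sketched in a few lines. This is not Karpathy's actual code; it's a minimal illustration with a toy objective standing in for a real 5-minute training run, and the `run_experiment`/`autoresearch_loop` names are invented for this sketch.

```python
import random

def run_experiment(params):
    # Stand-in for a short training run: a toy objective with optimum at lr=0.01.
    # In the real framework this would launch a training script on a GPU.
    return -(params["lr"] - 0.01) ** 2

def autoresearch_loop(n_experiments=100, seed=0):
    """Hill-climb over a config, keeping only changes that improve the score."""
    rng = random.Random(seed)
    best = {"lr": 0.001}
    best_score = run_experiment(best)
    for _ in range(n_experiments):
        candidate = dict(best)
        candidate["lr"] *= rng.uniform(0.5, 2.0)  # agent proposes a tweak
        score = run_experiment(candidate)
        if score > best_score:  # discard anything that doesn't help
            best, best_score = candidate, score
    return best, best_score
```

Because rejected changes are discarded, the kept score never regresses — which is what makes it safe to leave running overnight.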
OpenDev paper details terminal-based AI coding agents
This 81-page deep dive on 'OpenDev' is basically a field manual for building CLI-first coding agents. It covers how to separate planning from execution, use workload-specialized routing, and avoid common traps like runaway context bloat.
Why it matters: It's the definitive blueprint for moving beyond IDE plugins to fully autonomous, terminal-native software engineers.
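The two ideas the paper emphasizes — planning separated from execution, and routing each workload type to a specialized model — can be sketched roughly like this. All names here (`Step`, `plan`, `ROUTES`, the model tiers) are hypothetical stand-ins, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str     # e.g. "search", "edit", "test"
    payload: str

def plan(task: str) -> list[Step]:
    # Planner: break the task into typed steps (stubbed here).
    return [Step("search", task), Step("edit", "apply fix"), Step("test", "run suite")]

ROUTES = {  # workload-specialized routing: step kind -> model tier
    "search": "small-fast-model",
    "edit": "code-specialist-model",
    "test": "small-fast-model",
}

def execute(steps: list[Step], max_context: int = 4000) -> list[str]:
    """Executor: run each step on its routed model, trimming context as it grows."""
    context: list[str] = []
    results = []
    for step in steps:
        model = ROUTES[step.kind]
        result = f"[{model}] {step.kind}: {step.payload}"  # stand-in for a model call
        context.append(result)
        # Evict oldest entries so context bloat can't compound across steps.
        while sum(len(c) for c in context) > max_context:
            context.pop(0)
        results.append(result)
    return results
```

The point of the split: the planner can use an expensive model once, while the executor burns cheap tokens per step and keeps its own context bounded.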
Sources: Twitter, Hacker News
Synthetic Data Playbook releases FinePhrase
The Synthetic Data Playbook team distilled 1 trillion generated tokens down to 'FinePhrase,' a curated 500B-token dataset. The release doubles as a masterclass in what actually makes synthetic data worth training on.
Why it matters: Human-written data is running out. High-quality synthetic data is quickly becoming the main competitive edge.
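A 1T-to-500B-token distillation is, at its core, scoring generated samples and keeping the top fraction. Here's a toy sketch of that shape — the `quality_score` heuristic is invented for illustration; real pipelines use classifier scores, deduplication, and benchmark ablations.

```python
def quality_score(text: str) -> float:
    """Toy heuristic: longer, less repetitive text scores higher.
    (A real pipeline would use trained quality classifiers, not this.)"""
    words = text.split()
    unique_ratio = len(set(words)) / max(1, len(words))
    return len(text) * unique_ratio

def distill(samples, score_fn, keep_fraction=0.5):
    """Rank generated samples and keep the top fraction (1T -> 500B is ~50%)."""
    ranked = sorted(samples, key=score_fn, reverse=True)
    return ranked[: int(len(ranked) * keep_fraction)]
```

The hard part isn't the ranking — it's building a score function that actually predicts downstream training value, which is what the release is claimed to demonstrate.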
Petri dish neurons learn to play DOOM
Cortical Labs trained 200,000 human neurons in a petri dish to play 1993's DOOM. The cells learned to navigate the game by reacting to electrical signals — biological intelligence adapting to a digital environment in real time.
Why it matters: It genuinely makes you rethink where biological intelligence ends and silicon-based AI begins.
Sources: Twitter, Hacker News
Eon Systems simulates a fruit fly brain
Eon Systems pulled off a functional simulation of a fruit fly brain. By mapping the fly's neural connections, they built an emulation capable of basic sensory-motor navigation.
Why it matters: Emulating a living brain is the closest thing we have to a "ground truth" for building smarter, more efficient AI architectures.
🚀 Products & Launches
Figure Robotics showcases Helix 02
Figure's 'Helix 02' is cleaning houses on its own now. The impressive part: it ditches over 100,000 lines of hand-written C++ in favor of a single neural prior learned from 1,000+ hours of human motion. Figure says that shift puts home-ready robots on track for 2027, years ahead of earlier estimates.
Why it matters: VLA models are officially leaving the lab. This is what "robots in the real world" actually looks like.
Perplexity Computer adds Claude Code and GitHub CLI
Perplexity now natively integrates Claude Code and GitHub CLI, turning what used to be a search engine into a full dev environment. You can prompt it to fork a repo, squash bugs, and push PRs without ever leaving the interface.
Why it matters: Search just became action. Perplexity is making a serious play to be your primary workspace for agentic coding.
💼 Industry & Business
US Court of Appeals rules on TOS updates via email
The Ninth Circuit ruled that companies can update their terms of service by emailing you. Keep using the app after that email lands? You're legally bound — mandatory arbitration clauses and all.
Why it matters: Your inbox is about to get a lot more TOS emails, and ignoring them now actually has consequences.
Source: Hacker News
SambaNova introduces SN50 RDU chip
SambaNova launched the SN50 RDU (Reconfigurable Dataflow Unit), a chip built from the ground up for agentic inference. The pitch: running multi-step AI agents faster and cheaper than on general-purpose GPUs.
Why it matters: We're finally getting silicon designed for the agentic era — not just brute-force LLM training.
Closing thought: Petri dish neurons playing DOOM. Karpathy's agents running their own research labs overnight. The gap between "building something" and "letting it build itself" is closing faster than anyone expected. The age of the autonomous machine isn't coming — it's already here.