absorb.md

April 18 AM: Neven's real-time quantum decoders, personal agent frameworks, and explanatory visualization tools surge

Neven's team dropped three quantum papers this week.

In This Briefing
1. Real-time quantum decoders and ergodicity edge: Three papers from Hartmut Neven's team suggest quantum error correction just crossed a practicality threshold.
2. Personal agent architectures: Builders are paying attention to frameworks that let language models act as personal agents coordinating with existing applications.
3. Explanatory visualization and diffusion tooling: Top AI minds are doubling down on tools that make complex models both runnable and understandable.
4. Production dev tooling and architectures: Infrastructure stars reveal what production-ready AI and web stacks look like in 2026.
8 sources · 8 thinkers

Real-time quantum decoders and ergodicity edge

Three papers from Hartmut Neven's team suggest quantum error correction just crossed a practicality threshold.

Signal · 3 papers, 10+ thinkers in discussion, 4.6x burst above baseline in AI research cluster. Why now: real-time decoding under 1 microsecond per cycle on commercial hardware.
Key Positions
Hartmut Neven: Led work showing AlphaQubit 2 achieves near-optimal logical error rates with real-time decoding speed on commercial accelerators. [1]
Jim Fan: Starred PyTorch tutorials and related tooling, signaling the neural network infrastructure these decoders are built on. [2]

Neven's team reports a neural decoder, AlphaQubit 2, that delivers near-optimal accuracy for topological quantum codes while running fast enough for real-time use on existing accelerators. [1] One paper maps Hilbert-space signatures of non-ergodic glassy dynamics in superconducting qubit arrays, showing clear separation between ergodic and localized phases at finite temperature. Another observes constructive interference at the edge of quantum ergodicity. Together, this work attacks the central obstacle to useful quantum computers: reliable error correction that does not destroy the very quantum behavior you want to exploit.

Jim Fan's focus on PyTorch fits here because these decoders are themselves large neural networks trained on simulated quantum noise. The emerging view is that the field has moved past proof of concept into engineering territory, where decoding latency and accuracy are now good enough to test on larger-distance codes. This is not yet fault tolerance at scale, but it removes one of the biggest 'if this works' questions. For a founder, the timeline for quantum advantage in optimization, simulation, or hybrid quantum-classical ML just became less speculative: your competitors in pharma or materials science may soon have a new class of tool. [2]
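The core idea, a decoder learned from simulated noise, can be sketched with a toy repetition code. This is a minimal illustration, not AlphaQubit 2's architecture: the "training" here is just tallying, per syndrome, which logical outcome the simulated noise most often produced, and all names are invented for the sketch.

```python
import random
from collections import Counter, defaultdict

def sample_error(d, p):
    # i.i.d. bit-flip error on d data qubits
    return [1 if random.random() < p else 0 for _ in range(d)]

def syndrome(err):
    # parity checks between neighboring qubits (repetition code)
    return tuple(err[i] ^ err[i + 1] for i in range(len(err) - 1))

def logical_flip(err):
    # the logical bit flips when a majority of qubits flipped
    return int(sum(err) > len(err) // 2)

def train_decoder(d, p, shots):
    # "train" by tallying which logical outcome each syndrome
    # most often implies under the simulated noise model
    stats = defaultdict(Counter)
    for _ in range(shots):
        e = sample_error(d, p)
        stats[syndrome(e)][logical_flip(e)] += 1
    return {s: c.most_common(1)[0][0] for s, c in stats.items()}

def logical_error_rate(decoder, d, p, shots):
    wrong = 0
    for _ in range(shots):
        e = sample_error(d, p)
        wrong += decoder.get(syndrome(e), 0) != logical_flip(e)
    return wrong / shots

random.seed(0)
dec = train_decoder(d=5, p=0.05, shots=20000)
print(logical_error_rate(dec, d=5, p=0.05, shots=20000))
```

The real systems replace the tally table with a neural network and must hit sub-microsecond latency per cycle, but the shape of the problem (map syndromes to logical corrections, learned from simulated noise) is the same.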

For the surface code, we demonstrate real-time decoding faster than 1 microsecond per cycle up to distance 11 on current commercial accelerators with better accuracy than leading real-time decoders.
Hartmut Neven [1]
Connects to: This foundational reliability thread echoes the push for more reliable, production-ready agent and visualization tooling in the other threads today.
Sources (2)
  1. A scalable and real-time neural decoder for topological quantum codes — Hartmut Neven
    For the surface code, we demonstrate real-time decoding faster than 1 microsecond per cycle up to distance 11 on current commercial accelerators with better accuracy than leading real-time decoders.
  2. PyTorch tutorials star by @drjimfan — Jim Fan
    PyTorch tutorials.

Personal agent architectures

Builders are paying attention to frameworks that let language models act as personal agents coordinating with existing applications.

Signal · Multiple key stars and code pushes in last 24h on agent-related repos. Convergence on composable personal + application agents rather than isolated chatbots.
Key Positions
Tobi Lütke: Starred Microsoft TypeAgent as an architecture for personal agents that work with application agents. [1]
Anton Osika: Starred auto-sklearn, highlighting automated ML pipelines that could train or tune agent decision policies without constant human oversight. [2]

Tobi Lütke highlighted Microsoft TypeAgent, described as sample code exploring 'an architecture for using language models to build a personal agent that can work with application agents.' [1] This is not another wrapper around ChatGPT: it is an attempt to give agents persistent context, tool-calling conventions, and the ability to hand off tasks to specialized application agents inside productivity software. Anton Osika's star of auto-sklearn points to the other half of the puzzle: automated pipelines that can tune the underlying models or decision policies without constant human oversight.

The positions add up to a shift away from prompt engineering toward structured, modular agent systems that integrate into real workflows. The evidence from these public signals is early but consistent with prior automation threads: the bottleneck is no longer raw model intelligence but reliable composition and memory. A founder building internal tools should care because this pattern lets you replace brittle scripts with agents that learn your company's specific apps and data formats. The takeaway is shorter time to useful automation inside your own company. [2]
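The personal-agent-plus-application-agents pattern can be sketched in a few lines. This is a hypothetical illustration of the routing-and-memory idea only, not TypeAgent's actual API; every class and method name below is invented.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ApplicationAgent:
    # a specialized agent wrapping one application
    name: str
    handle: Callable[[str], str]

@dataclass
class PersonalAgent:
    # persistent context that survives across requests
    memory: list = field(default_factory=list)
    apps: dict = field(default_factory=dict)

    def register(self, agent):
        self.apps[agent.name] = agent

    def route(self, app_name, request):
        # record the task, then hand it off to the application agent
        self.memory.append((app_name, request))
        return self.apps[app_name].handle(request)

pa = PersonalAgent()
pa.register(ApplicationAgent("calendar", lambda r: f"calendar: scheduled '{r}'"))
pa.register(ApplicationAgent("email", lambda r: f"email: drafted '{r}'"))
print(pa.route("calendar", "standup at 9am"))
```

The interesting engineering questions (handoff protocols, tool-calling conventions, what goes into `memory`) all live behind this routing boundary, which is why convergence on the pattern matters more than any single framework.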

Sample code that explores an architecture for using language models to build a personal agent that can work with application agents.
Tobi Lütke [1]
Connects to: Like the quantum thread, this is about moving from research prototypes to systems that compose reliably with existing infrastructure.
Sources (2)
  1. Microsoft TypeAgent star by @tobi — Tobi Lütke
    Sample code that explores an architecture for using language models to build a personal agent that can work with application agents.
  2. auto-sklearn star by @antonosika — Anton Osika
    Automated Machine Learning with scikit-learn.

Explanatory visualization and diffusion tooling

Top AI minds are doubling down on tools that make complex models both runnable and understandable.

Signal · Karpathy starred both manim and diffusers within hours; Jim Fan added PyTorch tutorials. Clear cluster around high-quality explanatory and generative infrastructure.
Key Positions
Andrej Karpathy: Starred the manim animation engine and the Hugging Face diffusers library for state-of-the-art diffusion models. [1]
Simon Willison: Multiple code pushes to his research repo, consistent with someone actively experimenting with these stacks. [2]

Andrej Karpathy starred 3b1b/manim, the animation engine behind many of the best explanatory math videos, and huggingface/diffusers, the PyTorch library for diffusion models used in image, video, and audio generation. [1] The combination is telling: as models grow more capable, the ability to visualize and explain their internal dynamics becomes table stakes for both research and adoption. Simon Willison's repeated pushes to his research repository suggest active experimentation with exactly these stacks.

The aggregate pattern is that the community is treating high-quality visualization and modular generative pipelines as core infrastructure, not nice-to-haves. Think of manim as doing for mathematical explanation what AWS Lambda did for deployment: complex ideas can suddenly be made intuitive at low cost. For non-specialists this means faster onboarding of new team members, clearer investor communication, and better debugging of diffusion-based systems. The synthesis is that explanatory tooling is no longer downstream of model progress; it is becoming a forcing function that accelerates it. [2]
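To ground the diffusion side: libraries like diffusers build on the standard DDPM forward process, where a clean signal is progressively mixed with Gaussian noise under a beta schedule. A minimal sketch of that schedule follows (textbook DDPM math in plain Python, not the diffusers API):

```python
import math
import random

def alpha_bar(t, T, beta_start=1e-4, beta_end=0.02):
    # cumulative product of (1 - beta_s) for a linear beta schedule,
    # the standard DDPM forward-process coefficient
    prod = 1.0
    for s in range(1, t + 1):
        beta = beta_start + (beta_end - beta_start) * (s - 1) / (T - 1)
        prod *= 1.0 - beta
    return prod

def noise_sample(x0, t, T, rng):
    # q(x_t | x_0): scale the clean value and add Gaussian noise
    ab = alpha_bar(t, T)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)

T = 1000
print(alpha_bar(1, T))   # near 1: almost no noise at the first step
print(alpha_bar(T, T))   # near 0: signal almost fully destroyed
```

A diffusion model learns to invert this corruption step by step; visualizing intermediate `x_t` states is exactly where tools like manim earn their keep.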

Connects to: These visualization tools will likely be used to explain the quantum dynamics papers in thread one.
Sources (2)
  1. manim star by @karpathy — Andrej Karpathy
    Animation engine for explanatory math videos.
  2. research repo pushes by @simonw — Simon Willison
    code update

Production dev tooling and architectures

Infrastructure stars reveal what production-ready AI and web stacks look like in 2026.

Signal · Rauchg starred bulletproof React architecture and Rust-based fnm; related pushes from Garry Tan and Ben Thompson on compatible tooling. Builders are curating the hardened stack.
Key Positions
Guillermo Rauch: Starred bulletproof-react for scalable React apps and fnm, a fast Node version manager written in Rust. [1]
Garry Tan: Multiple code pushes to the openclaw project, indicating active development of systems-level tooling. [2]

Guillermo Rauch highlighted alan2207/bulletproof-react, 'A simple, scalable, and powerful architecture for building production ready React applications,' along with Schniz/fnm, the Rust-based Node version manager. [1] This is not random: when leaders responsible for large engineering organizations star specific architecture repos, they are voting on what reduces operational pain at scale. Garry Tan's repeated updates to his openclaw project reinforce the theme of investing in hardened, compatible foundational layers.

The pattern that emerges is a move toward opinionated, battle-tested stacks that combine frontend discipline, fast tooling in systems languages like Rust, and Python packaging solutions (echoed in Ben Thompson's manylinux star elsewhere). For a founder or CTO, the payoff is shorter onboarding, fewer version conflicts, and faster iteration when shipping AI features that touch both frontend and backend. These choices compound: pick the wrong stack in 2026 and you carry technical debt for years. [2]

A simple, scalable, and powerful architecture for building production ready React applications.
Guillermo Rauch [1]
Sources (2)
  1. bulletproof-react star by @rauchg — Guillermo Rauch
    A simple, scalable, and powerful architecture for building production ready React applications.
  2. openclaw pushes by @garrytan — Garry Tan
    code update
The Open Question

If real-time quantum decoders and personal agent architectures both scale in the next 18 months, which hybrid workflow changes first: drug discovery, code generation, or financial optimization?

REZA: Neven's team dropped three quantum papers this week.
MARA: Real-time decoding under one microsecond per cycle?
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: Across the tracked thinkers the clearest signal is three papers from Hartmut Neven's Google Quantum AI group.
MARA: But the part I keep getting stuck on is whether this actually moves the commercial timeline.
REZA: Neven's team published a scalable, real-time neural decoder for topological quantum codes. They report real-time decoding faster than one microsecond per cycle up to distance eleven on commercial accelerators.
MARA: So if that's true then companies betting on classical simulation for molecular modeling have a problem.
REZA: The second paper maps Hilbert space signatures of non-ergodic glassy dynamics. They ran it on a two dimensional array of superconducting qubits.
MARA: Okay but what's the actual crux here. Is it better error correction or proof that we understand the phase transition better?
REZA: Hold on. The evidence says both. The decoder is an engineering win. The dynamics paper is a science win. Jim Fan starred the PyTorch tutorials the same day.
MARA: Which makes sense because these decoders are neural nets. So the tooling thread connects directly.
REZA: Exactly. No real counter on the accuracy claims yet. That itself is notable.
MARA: Right and that's why the founder takeaway is the quantum advantage timeline may have shifted by years not decades.
REZA: This is still developing. We'll check back in the PM on whether these decoders hold up at even larger scales.
REZA: The data shows Tobi Lütke starred Microsoft TypeAgent exploring personal agents that coordinate with application agents.
MARA: But the part I keep getting stuck on is whether this is different from every other agent framework announced last year.
REZA: The repo description says sample code for language models to build a personal agent that can work with application agents.
MARA: So if that's true then companies whose apps expose clean APIs suddenly become platforms.
REZA: Anton Osika starred auto-sklearn the same window. That suggests the training and optimization layer is also getting automated.
MARA: No direct contradictions today so the convergence signal is multiple builders independently highlighting composition over raw intelligence.
REZA: The crux is whether memory and handoff protocols actually work in production. The stars say people are betting yes.
MARA: Which honestly is kind of terrifying for any company whose product is a standalone SaaS tool.
REZA: The evidence from the stars is thin but consistent. This cluster is worth watching.
REZA: Karpathy starred both manim and the diffusers library within the same day.
MARA: But the part I keep getting stuck on is whether visualization is suddenly a bottleneck again.
REZA: Manim is the animation engine for explanatory math videos. Diffusers is state of the art for image video and audio generation in PyTorch.
MARA: So if that's true then the ability to explain your model becomes a competitive advantage.
REZA: Simon Willison pushed multiple updates to his research repo. The pattern is active experimentation with the full stack.
MARA: Okay but here's the thing. We have seen explanatory tools before. What is different this time?
REZA: The difference is they are now paired with production grade diffusion pipelines. Karpathy's dual star makes that explicit.
MARA: Which means teams that invest here will ship clearer demos and debug faster.
REZA: The aggregate data says this is infrastructure not a side project.
REZA: Rauchg starred bulletproof React architecture and a Rust based Node version manager.
MARA: But the part I keep getting stuck on is whether frontend discipline still matters when agents write half the code.
REZA: The React repo is described as a simple scalable and powerful architecture for production ready applications. The fnm star emphasizes fast reliable tooling written in Rust.
MARA: So if that's true then the teams that standardize on these will move faster than those who don't.
REZA: Garry Tan pushed multiple updates to openclaw. That suggests active systems level work that pairs with the frontend choices.
MARA: No manufactured disagreement here. The convergence on hardened stacks is the story.
REZA: The incentives favor reducing version conflicts and onboarding time. These stars reflect that calculation.
MARA: Which means your dev environment choices in twenty twenty six are suddenly strategic again.
MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.