absorb.md

April 12 AM: Suleyman calls AI history's biggest power shift & Agents get persistent memory & LLM routing breakthroughs

Mustafa Suleyman says AI gives godlike power to street hustlers.

In This Briefing
1
Persistent Agent Memory Systems
Garry Tan open-sourced Gbrain the same week productivity amnesia became a documented problem for AI output overload.
0:13
2
Bridging Neural Paradigms for Science
Aaron Levie and Balaji Srinivasan both argue current divisions in neural approaches for physics and graphs limit progress.
1:23
3
Routing and RL for LLM Reasoning
Meta and independent researchers released frameworks this week that dynamically route queries or apply reinforcement learning to fix traditional RAG failures on complex questions.
2:31
4
Is AI History's Greatest Power Shift?
Mustafa Suleyman and Sebastian Mallaby posted opposing views on whether AI democratizes power or intensifies an arms race among giants.
3:36
10 sources · 9 thinkers

Persistent Agent Memory Systems

Garry Tan open-sourced Gbrain the same week productivity amnesia became a documented problem for AI output overload.

Signal · 3 thinkers, 5 entries. Why now: agents have moved beyond chat interfaces into production with persistent loops. Continuing from 2026-04-11 AM: contested-gbrain-memory, with new dream cycle details and process fixes.
Key Positions
Garry Tan: Gbrain uses markdown as a git-backed source of truth, a retrieval layer, and nightly enrichment via a dream cycle. [1]
Chris Penn: Agentic AI creates productivity amnesia where humans forget completed machine-speed work. [2]

The positions add up to a consensus that the next leap for agents is not faster inference but operational memory that turns one-off interactions into compounding intelligence. Garry Tan describes a layered approach that starts with disciplined coding, then moves to specialized roles, cross-platform memory, and multi-agent dashboards, while keeping 'Git as the system of record' to avoid vendor lock-in. [1] Chris Penn reports users producing a prior month's output in days, overwhelming the cognitive limits that track it; the fix is treating AI as a capacity-augmenting tool with explicit logs rather than over-trusting it. [2] A founder should care because if your team deploys agents without these memory and review systems, output becomes invisible noise instead of institutional knowledge, flipping your operational leverage. This connects to the reasoning thread because persistent memory reduces retrieval drift in multi-hop tasks. Analogy: think of it like adding persistent storage to early serverless functions; the code ran, but nothing persisted across invocations until you added the database layer.
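To make the pattern concrete, here is a minimal sketch of a git-backed markdown memory layer with a completion log, in Python. This is an illustration of the general approach the thread describes, not Gbrain's actual code: the directory layout, function name, and log format are all assumptions, and it presumes it runs inside an already-initialized git repository.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical layout, not Gbrain's actual one

def log_completion(agent: str, task: str, outcome: str) -> None:
    """Append a completion-log entry to a per-agent markdown file and
    commit it, so git history, not human recall, is the system of record."""
    MEMORY_DIR.mkdir(exist_ok=True)
    log_file = MEMORY_DIR / f"{agent}-completions.md"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} | {task}: {outcome}\n")
    # Commit each entry so every unit of agent output is diffable and auditable.
    subprocess.run(["git", "add", str(log_file)], check=True)
    subprocess.run(["git", "commit", "-m", f"{agent}: completed {task}"], check=True)

# Called after every agent run: write down what happened instead of trusting recall.
log_completion("research-agent", "summarize Q1 churn data", "report at reports/q1-churn.md")
```

The design point is that memory lives in plain, reviewable files under version control rather than in an opaque vendor store, which is exactly the lock-in concern the 'Git as the system of record' line addresses.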

The architecture enables a compounding intelligence effect where agents continuously update the knowledge base after every interaction.
Garry Tan [1]
Connects to: the power-shift clash; persistent memory determines whether capability truly proliferates to individuals or stays locked in corporate knowledge bases.
Sources (2)
  1. GBrain: A Markdown-Centric Operational Memory Architecture for AI Agents — Garry Tan
    The architecture enables a compounding intelligence effect where agents continuously update the knowledge base after every interaction.
  2. Productivity Amnesia: Process Fixes for AI-Driven Output Overload — Chris Penn
    Productivity amnesia where completed tasks blur and are forgotten despite involvement. This stems from human cognitive limits failing to track machine-speed execution, necessitating process upgrades like AI agent completion logs.

Bridging Neural Paradigms for Science

Aaron Levie and Balaji Srinivasan both argue current divisions in neural approaches for physics and graphs limit progress.

Signal · 2 thinkers, 6 entries. Why now: multiple papers this week show faster training and unified views outperforming traditional PINNs or dense nets on PDEs and molecular dynamics.
Key Positions
Aaron Levie: The division between message-passing and spectral GNNs is largely artificial; both parameterize the same equivariant operators. [1]
Balaji Srinivasan: RBF-PIELM and GMM-PIELM train substantially faster than traditional PINNs with fewer parameters. [2]

These positions synthesize into a view that siloed intuitions in scientific ML are holding back better simulators. Levie shows that MPNNs and spectral GNNs have equivalent expressive power, and that dense ReLU networks under natural constraints fail to universally approximate certain Lipschitz functions, implying sparse or spectral connectivity is necessary. [1] Balaji's variants solve PDEs with single-shot least squares or probabilistic sampling that concentrates computation where error is high, cutting training time dramatically versus gradient-based PINNs. [2] A founder in drug discovery or materials should care because these techniques can slash simulation times from days to minutes on standard GPUs, and cheaper classical simulation raises the bar that quantum advantage claims must clear on real problems. Analogy: it's like the moment Kubernetes unified container orchestration; the underlying primitives were always compatible, but the split tooling slowed everyone until the bridge appeared. There is no real counter on the unification value itself, which is notable. [3]
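To ground the single-shot claim, here is a toy physics-informed extreme learning machine in the spirit of RBF-PIELM: random Gaussian RBF features, then one least-squares solve in place of a PINN's gradient-descent loop. The feature count, widths, and the 1D Poisson test problem are my assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Toy problem: u''(x) = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0.
# Exact solution: u(x) = sin(pi x).
rng = np.random.default_rng(0)
n_feat = 80
centers = rng.uniform(0.0, 1.0, n_feat)   # random, untrained feature params
widths = rng.uniform(0.05, 0.3, n_feat)

def phi(x):
    # Gaussian RBF features, shape (len(x), n_feat)
    return np.exp(-(x[:, None] - centers) ** 2 / (2 * widths ** 2))

def phi_xx(x):
    # Second derivative of each RBF with respect to x (computed analytically)
    d = x[:, None] - centers
    return (d ** 2 / widths ** 4 - 1.0 / widths ** 2) * phi(x)

# Collocation rows enforce the PDE; the last two rows enforce the boundary conditions.
xc = np.linspace(0.0, 1.0, 200)
A = np.vstack([phi_xx(xc), phi(np.array([0.0, 1.0]))])
b = np.concatenate([-np.pi ** 2 * np.sin(np.pi * xc), [0.0, 0.0]])

coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # the entire "training" step
xt = np.linspace(0.0, 1.0, 50)
err = np.max(np.abs(phi(xt) @ coef - np.sin(np.pi * xt)))
print(f"max abs error: {err:.2e}")  # typically small for this smooth problem
```

The only training here is the single lstsq call, which is the source of the speedup the thread describes, with the transcript's caveat that accuracy can still trail mesh solvers on oscillatory cases.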

The division between Message-Passing Neural Networks (MPNNs) and spectral Graph Neural Networks (GNNs) is largely artificial.
Aaron Levie [1]
Connects to: These efficiency gains in science nets parallel the routing efficiencies in the reasoning thread and could be powered by the persistent memory in agent systems.
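As a toy illustration of the equivalence behind that quote (my framing, not Levie's construction): a degree-K polynomial spectral filter p(A)x can be evaluated either in the eigenbasis of the graph operator, the spectral view, or by K rounds of neighbor aggregation, the message-passing view, and the two agree exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T                               # random undirected adjacency matrix
x = rng.normal(size=n)                    # one scalar feature per node
theta = np.array([0.5, -0.3, 0.1])        # polynomial filter coefficients

# Spectral view: p(A) x via the eigendecomposition A = U diag(lam) U^T
lam, U = np.linalg.eigh(A.astype(float))
p_lam = theta[0] + theta[1] * lam + theta[2] * lam ** 2
y_spectral = U @ (p_lam * (U.T @ x))

# Message-passing view: accumulate theta_k * A^k x by repeated aggregation
h = x.copy()
y_mp = theta[0] * x
for k in range(1, len(theta)):
    h = A @ h                             # one round of neighbor aggregation
    y_mp = y_mp + theta[k] * h

assert np.allclose(y_spectral, y_mp)      # identical outputs, two "paradigms"
```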
Sources (3)
  1. Unifying Message-Passing and Spectral Graph Neural Networks — Aaron Levie
    The division between Message-Passing Neural Networks (MPNNs) and spectral Graph Neural Networks (GNNs) is largely artificial.
  2. RBF-PIELM: A Faster Physics-Informed Neural Network for Higher-Order PDEs — Balaji Srinivasan
    RBF-PIELM... demonstrates significantly faster training times and fewer parameters compared to traditional PINNs.
  3. Dense Neural Networks Cannot Universally Approximate Functions — Aaron Levie
Dense ReLU networks under natural constraints fail to universally approximate certain Lipschitz functions.

Routing and RL for LLM Reasoning

Meta and independent researchers released frameworks this week that dynamically route queries or apply reinforcement learning to fix traditional RAG failures on complex questions.

Signal · 2 thinkers, 2 entries. Why now: multi-hop QA benchmarks show traditional retrieval still drifts; new methods report big gains in exact match scores.
Key Positions
AI at Meta: KunLunBaizeRAG is a reinforcement learning-driven framework that uses RDRA alongside search-think iteration to counter retrieval drift on multi-hop questions. [1]
Nick Turley: GPT-5 is a unified system with a smart, fast primary model and a deeper reasoning model, dynamically managed by a real-time router. [2]

Together they point to an emerging architecture where a lightweight router or RL signal decides when to retrieve, think, or escalate, rather than hoping one model does everything. The Meta team shows their RL-driven mechanisms deliver significant gains on exact-match and LLM-judged scores. [1] Turley highlights practical wins in reduced hallucinations via safe-completions and a precautionary high-capability classification for bio domains. [2] However, the counter on GPT-5 notes it is 'a discrete system with a router directing to different models rather than a truly unified architecture', and the continuous-training claim has limits. [3] A non-specialist founder should care because this directly changes how reliable your internal tools or customer agents become on complex queries; poor reasoning means expensive human fallback. In plain English, that means fewer 'let me check that' loops in your workflows. This connects to agent memory because good routers need persistent context to avoid repeating mistakes.
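For intuition, here is a minimal sketch of the route-then-escalate pattern described above. Every signal, threshold, and heuristic is hypothetical: GPT-5's real router and KunLunBaizeRAG's RL policy are not public at this level of detail, and a production router would learn this policy (for example with RL against an exact-match reward) rather than hand-code it.

```python
from dataclasses import dataclass

@dataclass
class Route:
    action: str  # "answer", "retrieve", or "escalate"
    reason: str

def route(query: str, retrieval_score: float, self_confidence: float) -> Route:
    """Hand-coded stand-in for a learned routing policy: decide whether to
    answer with the fast model, retrieve documents first, or escalate to a
    slower reasoning model. Thresholds are illustrative, not tuned."""
    if retrieval_score < 0.4:
        return Route("retrieve", "weak grounding, fetch supporting documents first")
    if self_confidence < 0.6 or len(query.split()) > 20:
        return Route("escalate", "likely multi-hop, hand off to the reasoning model")
    return Route("answer", "fast primary model is sufficient")

# Example: decent retrieval but low model confidence triggers escalation.
print(route("Which university did the founder of the lab behind AlphaFold attend?",
            retrieval_score=0.8, self_confidence=0.45))
```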

KunLunBaizeRAG is a novel reinforcement learning-driven framework improving Large Language Model (LLM) reasoning in complex multi-hop question-answering.
AI at Meta [1]
Connects to: Feeds into the power debate: better reasoning accelerates the capability curve that Mustafa and Matt Wolfe highlight.
Sources (3)
  1. Reinforcement Learning-Driven RAG for Enhanced LLM Reasoning in Multi-hop Q&A — AI at Meta
    KunLunBaizeRAG is a novel reinforcement learning-driven framework improving Large Language Model (LLM) reasoning in complex multi-hop question-answering.
  2. GPT-5: A Unified, Adaptive AI System with Enhanced Safety and Performance — Nick Turley
    GPT-5 introduces a unified AI system featuring a smart, fast primary model and a deeper reasoning model, dynamically managed by a real-time router.
  3. GPT-5: A Unified, Adaptive AI System with Enhanced Safety and Performance — Nick Turley
    The evidence describes a discrete system with a router directing to different models rather than a truly unified architecture.

Is AI History's Greatest Power Shift?

Mustafa Suleyman and Sebastian Mallaby posted opposing views on whether AI democratizes power or intensifies an arms race among giants.

Signal · 3 thinkers, 3 entries. Why now: capability claims are accelerating while safety realists adjust strategy.
Key Positions
Mustafa Suleyman: AI represents the greatest redistribution of power in history by proliferating capability to all individuals. [1]
Sebastian Mallaby: Hassabis shifted from enforcing singleton safety via Google acquisition conditions to prioritizing personal power within Google. [2]

The evidence shows a genuine split. Suleyman argues 'AI and related fields like synthetic biology enable unprecedented power—goal accomplishment across domains—for anyone', far surpassing the internet. [1] Mallaby details how Hassabis's early safety conditions (independent board, military bans) proved largely self-reported without binding enforcement once rivals like OpenAI launched the race; now Hassabis focuses on internal power. [2] The counters are direct. On Suleyman: 'This is hyperbolic; technologies like the printing press (democratizing knowledge and enabling the Reformation and scientific revolution), firearms (equalizing combat power and enabling revolutions), and the internet... arguably caused larger shifts. Moreover, AI power is concentrating among corporations.' On the capability claims tied to this (from Matt Wolfe's related entries): 'This overgeneralizes from selective demos... iterative refinement, prompt engineering, error correction, and human editing remain essential.' [3] Reza's crux: the empirical question that resolves it is whether real-world novel tasks show autonomous, days-long work or still require heavy human editing in 2026. A founder should care because if the proliferation thesis wins, your moat shifts to novel data and processes, since raw capability becomes something any hustler can replicate; if the counter holds, corporate concentration favors those with the best infra and memory systems. This is still developing; we'll check back in the PM.

AI represents the greatest redistribution of power in history by proliferating capability to all individuals.
Mustafa Suleyman [1]
Connects to: Ties every prior thread together: memory, scientific nets and reasoning are the exact mechanisms that will decide if power truly proliferates or concentrates.
Sources (3)
  1. AI Revolution Triggers Historic Proliferation of Power to Individuals — Mustafa Suleyman
    AI represents the greatest redistribution of power in history by proliferating capability to all individuals.
  2. Demis Hassabis Pursues Superintelligence Despite Existential Risks — Sebastian Mallaby
    His singleton vision for unified safe development collapsed when rivals like Elon Musk founded OpenAI, sparking an uncontrolled AI arms race. Now a realist, he prioritizes gaining personal power within Google over formal governance.
  3. AI Task Capability Doubles Every 7 Months — System Counter (from Matt Wolfe entry)
    This overgeneralizes from selective demos on simple tasks. Real-world use across diverse, complex, or novel problems consistently shows that iterative refinement, prompt engineering, error correction, and human editing remain essential.
The Open Question

If the empirical crux is whether capability compounds autonomously or needs constant human refinement, which side wins in the next 12 months?

REZA: Mustafa Suleyman says AI gives godlike power to street hustlers.
MARA: Counters call it hyperbolic, like the printing press.
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: Across the entries the pattern is clear on memory.
MARA: mm.
REZA: Garry Tan open-sourced Gbrain as a markdown living brain with a dream cycle for nightly enrichment.
MARA: So if that's true then chat interfaces are dead.
REZA: Hold on, not fully. He starts with disciplined coding then specialized roles and cross platform memory.
MARA: Right.
REZA: Chris Penn calls the flip productivity amnesia. You output a week's work in a day and forget it happened.
MARA: Wait so if that's true our entire review process is obsolete.
REZA: The crux is whether the dream cycle compounds without adding noise. Which is either brilliant or I honestly don't know.
MARA: But at some point we accept agents need persistent git backed memory or they stay reactive.
REZA: Yeah that tracks. For founders your moat just moved from model weights to memory architecture.
MARA: Which is kind of terrifying for traditional knowledge work.
REZA: Levie and Balaji are converging on scientific nets.
MARA: ooh.
REZA: Levie says the MPNN versus spectral GNN split is artificial. Both parameterize the same equivariant operators.
MARA: But the counter in his own paper says the practical divergence in researcher skill sets remains real.
REZA: Wait that's not quite right. He shows nonlinear spectral versions fix long range propagation in molecular dynamics.
MARA: mm.
REZA: Balaji's RBF PIELM does single shot least squares for PDEs. Trains way faster with fewer parameters.
MARA: So if that's true simulation timelines for materials just collapsed.
REZA: The evidence says accuracy still trails mesh solvers on oscillatory cases. But the direction is clear.
MARA: Hold on, that means drug discovery teams can run on laptops what used to need supercomputers.
REZA: Exactly. Who benefits if this generalizes? Not the old simulation vendors.
REZA: Meta dropped RL driven RAG. Nick Turley described GPT-5 routing.
MARA: mm.
REZA: KunLunBaizeRAG uses search-think iteration to beat retrieval drift on multi-hop questions.
MARA: But the counter says RL is a component, not necessarily the primary driver.
REZA: Turley claims the router continuously learns from user interactions to cut hallucinations.
MARA: The counter on that one is it's continuously trained but the architecture is still discrete models, not unified.
REZA: The crux is whether the router actually reduces sycophancy in production health apps.
MARA: Okay but if that's true then every RAG deployment needs this alignment layer yesterday.
REZA: Let me back up. The gains are real on benchmarks, but real-world novel tasks are the test.
MARA: Which honestly I find kind of terrifying for how fast products will improve.
REZA: This is our contradiction thread. Mustafa Suleyman wrote quote AI represents the greatest redistribution of power in history by proliferating capability to all individuals.
MARA: But the counter argument is this is hyperbolic. Technologies like the printing press arguably caused larger shifts. Moreover AI power is concentrating among corporations.
REZA: Sebastian Mallaby adds that Hassabis abandoned the singleton safety vision for personal influence once the arms race started.
MARA: The counter there is those acquisition conditions were largely self-reported without verifiable legal enforcement.
REZA: Matt Wolfe ties in saying capability doubles every seven months with finished work from single prompts without edits.
MARA: Counter is it overgeneralizes from selective demos. Iterative refinement remains essential in real world complex tasks.
REZA: The empirical crux is whether street hustlers actually get goal accomplishment power or if corps keep concentrating it.
MARA: So if the counter holds then professional classes get disrupted slower than claimed.
REZA: But what does that actually mean for governance norms? Wait that's not quite right, the evidence is mixed.
MARA: Right but at some point we accept the arms race is here and personal influence inside Google matters more than boards.
REZA: This is still developing. We'll check back in the PM when more deployment data lands.
MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.
Nick Turley
@nickaturley
Balaji Srinivasan
@balajis
Chris Penn
@chrispenn
Sebastian Mallaby
@sebastianmallaby
Mustafa Suleyman
@mustafasuleyman
Garry Tan
@garrytan
AI at Meta
@ai-at-meta
Aaron Levie
@levie