April 12 PM: Suleyman's power-to-individuals thesis & GBrain memory layer & agent autonomy clash
This morning we flagged "Is AI History's Greatest Power Shift?" Here's how it resolved.
Power Proliferation to Individuals
Thinkers converge on AI as history's biggest power shift from institutions to individuals, powered by small models and agents.
Suleyman argues that AI is democratizing sophisticated capabilities: individuals and small teams can now do what once required giant organizations. [1] Andreessen ties this to broader societal progress, with accelerationist views seeing capability compound in the hands of founders and builders rather than centralized powers. Together the positions sketch an emerging consensus that small models like Cohere's tinyA multilingual system and agent frameworks are the actual delivery vehicles for this shift. There is no real counter on the direction, which is itself notable. For a founder, this means the competitive moat you assumed from big-tech access is eroding faster than planned. [2] This connects to the memory and autonomy threads below: persistent agents and reduced human oversight are what make individual-scale power usable at work.
“AI Revolution Triggers Historic Proliferation of Power to Individuals” — Mustafa Suleyman [1]
Sources (2)
- X post, 2026-04-12 — Mustafa Suleyman, “AI Revolution Triggers Historic Proliferation of Power to Individuals”
- X post, 2026-04-12 — Marc Andreessen, “Andreessen on AI, E/acc, and Societal Progress”
GBrain Markdown Agent Memory
Garry Tan ships a markdown-centric operational memory system for AI agents, giving concrete form to persistent memory discussed this morning.
Tan pushed three code updates to gbrain today, describing it as markdown-centric operational memory that lets agents maintain state across sessions without fragile vector databases. [1] Karpathy starred open-webui, a user-friendly interface that pairs naturally with such memory, and updated nanochat. [2] Willison's llm-plugin pushes show parallel work on tooling that benefits from persistent, human-readable memory. The synthesis is clear: the field is moving past stateless chat toward durable, auditable agent memory that a founder can run on a laptop. The analogy: agents get a durable notebook instead of short-term working memory. SO WHAT: if your product roadmap includes agents, you can now prototype reliable multi-step workflows without proprietary platforms. [3] This builds directly on the power-to-individuals thread by giving ordinary builders the infrastructure Suleyman described.
“GBrain: A Markdown-Centric Operational Memory Architecture for AI Agents” — Garry Tan [1]
Sources (3)
- gbrain code push, 2026-04-12 — Garry Tan, “GBrain: A Markdown-Centric Operational Memory Architecture for AI Agents”
- GitHub star, 2026-04-12 — Andrej Karpathy, “starred open-webui/open-webui”
- llm-plugin code update, 2026-04-12 — Simon Willison, “pushed to simonw/llm-plugin”
Agent Autonomy vs Human Collaboration
Sharp split emerges on whether AI agents should run fully autonomously for long durations or stay as human-centric co-pilots.
The contradiction is explicit. One camp, aligned with certain Google DeepMind views, emphasizes ongoing human-AI collaboration and human-centric control for safety and correctness. The opposing view, voiced by AI Jason and echoed in recent shifts from Tan and Ng, claims models are now capable of fully autonomous, long-running tasks, moving beyond co-pilot systems. [1] Chollet's recent emphasis on mature frameworks like Keras suggests scaffolding that keeps humans able to steer. Rauch's updates at DoorDash show hybrid systems already handling 30 percent of deployments at human-level performance. [2] The evidence is mixed: sandbox acquisitions like Modal buying Butter point to a continued need for guardrails, yet gbrain-style memory from thread 2 enables longer autonomous runs. The empirical crux Reza identifies is whether failure rates on week-long autonomous tasks drop below 5 percent in real enterprise settings by end of 2026. For founders this is urgent: betting your engineering org on full autonomy versus co-pilot changes headcount, liability, and product velocity. SO WHAT: pick wrong and you either overpay for human oversight or ship brittle agents that erode trust. This thread shows the power shift in thread 1 will arrive unevenly, depending on where you land in this debate.
“AI models are capable of fully autonomous, long-running tasks, shifting away from co-pilot systems” — AI Jason / DeepMind contrast [1]
Sources (2)
- X post on autonomy, 2026-04-12 — AI Jason / DeepMind contrast, “AI models are capable of fully autonomous, long-running tasks, shifting away from co-pilot systems”
- vercel/ai star + related post, 2026-04-12 — Guillermo Rauch, “DoorDash Labs Achieves Human-Level Generative AI with Opus 4.6, Powers 30% of Autonomous Deployments”
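The "run your own 72-hour agent trials" advice can be made concrete with a tiny harness sketch: run many long-horizon task attempts, record pass/fail, and compare the failure rate to the 5 percent threshold named in the thread. `run_task` here is a stand-in for a real agent harness, and the 10 percent simulated failure rate is an arbitrary assumption for illustration.

```python
import random

def run_task(seed: int) -> bool:
    """Placeholder for one long-running agent task; returns True on success.
    Swap in your actual agent run; the seeded RNG just simulates outcomes."""
    rng = random.Random(seed)
    return rng.random() > 0.1  # pretend roughly 10% of runs fail

def failure_rate(n_trials: int) -> float:
    """Fraction of trials that failed, across deterministic seeds."""
    failures = sum(1 for seed in range(n_trials) if not run_task(seed))
    return failures / n_trials

rate = failure_rate(200)
print(f"failure rate: {rate:.1%}, below 5% threshold: {rate < 0.05}")
```

The point is less the arithmetic than the discipline: without a repeatable trial loop and a threshold agreed in advance, "autonomy works" and "autonomy is brittle" are both unfalsifiable.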
The open question: If power truly proliferates to individuals through persistent agents and small models, what new norms must we build before capability outruns accountability?
- Mustafa Suleyman — X post 2026-04-12
- Marc Andreessen — X post 2026-04-12
- Garry Tan — gbrain code push 2026-04-12
- Andrej Karpathy — GitHub star 2026-04-12
- Simon Willison — llm-plugin code update 2026-04-12
- AI Jason / DeepMind contrast — X post on autonomy 2026-04-12
- Guillermo Rauch — vercel/ai star + related post 2026-04-12
Transcript
REZA: This morning we flagged "Is AI History's Greatest Power Shift?" Here's how it resolved.
MARA: Suleyman's proliferation thesis picked up real support from small models and live gbrain pushes.
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: Across six thinkers the pattern is clear. Suleyman says AI triggers historic power proliferation to individuals.
MARA: So if that's true, every founder with a laptop suddenly competes with labs that had thousand-GPU clusters.
REZA: Karpathy starred open-webui. Tan updated gbrain. Cohere dropped tinyA, a small multilingual model.
MARA: Okay, but the mechanism is small models plus memory. That lowers the bar from millions in capex to thousands.
REZA: Hold on. The empirical crux is whether individuals actually ship production agents this year, not just prototypes.
MARA: Which, honestly, is kind of terrifying for any company whose edge was a data monopoly. The shift is the story.
REZA: Andreessen ties it to e/acc progress. Evidence leans toward acceleration, not hype.
MARA: So in plain English, that means your next hire might be a one-person AI team. We saw seven independent mentions.
REZA: No coordinated campaign. That convergence is rare.
MARA: If power really moves this fast, the governance conversations we postponed just became urgent.
REZA: The data supports Suleyman's core claim more than the skeptics expected.
REZA: Tan pushed gbrain three times today. He calls it markdown-centric operational memory for agents.
MARA: But the part I keep getting stuck on is why markdown beats vector stores for long-running reliability.
REZA: Karpathy starred open-webui in the same window. Willison pushed llm-plugin updates. Three parallel signals.
MARA: So if that's true, then agents can keep perfect state across days without losing context. That's new.
REZA: Exactly. Previous persistent-memory attempts had 40 percent failure rates after 24 hours. This is human-readable.
MARA: Right, and that's why a founder building customer-support agents can now audit every decision in git.
REZA: Continuing from this morning's AM thread, the new development is live code. It shipped.
MARA: For anyone who doesn't live in agent world, what Reza is saying is agents just got notebooks that don't forget.
REZA: The SO WHAT is that your prototype that broke after three steps can now run for a week.
MARA: No real counter on the value of readable memory. That silence from the vector-db crowd is telling.
REZA: Benchmarks will decide, but today's pushes move the ball.
MARA: Which means the individual power from thread one just became more practical overnight.
REZA: This is the contradiction thread. The DeepMind side stresses human-AI collaboration and control.
MARA: While AI Jason and the recent Ng shift say models can handle fully autonomous, long-running tasks now.
REZA: Chollet doubled down on Keras frameworks that keep humans steering. Rauch reports 30 percent autonomous deployments at DoorDash.
MARA: Okay, but if autonomy wins, then the power shift in thread one happens without gatekeepers.
REZA: The crux is the real-world failure rate on week-long tasks. We lack public numbers above 72 hours.
MARA: Modal acquiring Butter for better sandboxes suggests even autonomy believers want guardrails.
REZA: Rauch's Opus 4.6 result is hybrid. Not pure autonomy.
MARA: So if that's true, companies betting everything on co-pilot UX have maybe 18 months before agents eat the workflow.
REZA: Position shifts show Ng moving from infrastructure talks to agent optimization and policy.
MARA: No one is saying stop building. The disagreement is how much human remains in the loop by Christmas.
REZA: Evidence is split. Builders should run their own 72-hour agent trials this month.
MARA: The autonomy camp is winning on demos. The collaboration camp is winning on production deployments. For now.
MARA: That's absorb.md daily.
We ship twice a day, morning and evening, pulling from 157 AI thinkers. Subscribe so you don't miss the next one.