April 10: vibe coding, capex contradictions, and relativity's hold
This morning we flagged The End of Protest. Here's how it resolved.
Vibe Coding and Agent Teams
Coding is becoming a generalized interface where agents collaborate, critique, and execute in persistent teams rather than isolated tasks.
The positions add up to a concrete shift. Traditional coding is giving way to 'vibe coding,' where humans describe intent at a high level and persistent agent teams handle the details: tracking tasks in shared JSON, broadcasting critiques, and dynamically loading only the relevant skills to avoid token bloat. [1] [2] Think of it as moving from writing every AWS Lambda function yourself in 2014 to orchestrating a team of specialized functions that talk to each other and self-correct. Claude Code's move from simple sub-agents to this collaborative setup, combined with memory layers that address LLM statelessness, suggests non-technical users will soon direct complex engineering through natural description. [3] [4] The emerging view is that this collapses org size: smaller, leaner teams ship faster, and SaaS apps that cannot be orchestrated by agents face replacement. For a founder, this means auditing your product for agent compatibility today or watching adoption erode. No real counter on the direction, which is itself notable. This connects to the capex thread, because more efficient agents could reduce the compute demand that justifies NVIDIA's spend. [5]
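The pattern described above — a shared JSON task board plus loading only the skills a task needs — can be sketched in a few lines. This is a minimal illustration, not any real agent framework's API; every name here (`SKILL_REGISTRY`, `load_skills`, `add_task`) is hypothetical.

```python
import json

# Hypothetical sketch of the agent-team pattern: a shared JSON task board
# that agents poll, plus dynamic skill loading to keep context lean.
# All names are illustrative, not a real framework's API.

SKILL_REGISTRY = {
    "lint": "Run static analysis and report style issues.",
    "test": "Execute the test suite and summarize failures.",
    "deploy": "Package and ship the build artifact.",
}

def load_skills(task_description: str) -> dict:
    """Load only the skill descriptions relevant to this task (token thrift)."""
    return {name: desc for name, desc in SKILL_REGISTRY.items()
            if name in task_description.lower()}

def add_task(board: list, owner: str, description: str) -> None:
    """Append a task to the shared board; teammates read this JSON state."""
    board.append({"owner": owner, "description": description,
                  "status": "open", "critiques": []})

board = []
add_task(board, "agent-a", "test the payment module, then lint it")
loaded = load_skills(board[0]["description"])

print(json.dumps(board, indent=2))
print(sorted(loaded))  # only 'lint' and 'test' enter context, never 'deploy'
```

The point of the sketch is the filtering step: loading every tool description on every turn bloats the context window, while matching skills to the task keeps token usage proportional to what the task actually requires.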
“The emergence of coding agents is shifting the paradigm from software as a static tool to 'vibe coding' and agentic workflows that collapse the distance between ideation and execution.”— a16z [1]
Sources (5)
- a16z YouTube on Vibe Coding Era — a16z: “The emergence of coding agents is shifting the paradigm from software as a static tool to 'vibe coding' and agentic workflows that collapse the distance between ideation and execution.”
- Claude Wikipedia on Agentic Evolution — Claude (language model): “Claude Code has transitioned from a simple sub-agent task model to a collaborative 'Agent Teams' architecture. This new system utilizes persistent team configurations, shared JSON-based task tracking, and a bidirectional messaging protocol.”
- AI Jason on Agent Token Optimization — AI Jason: “A more efficient approach involves using a combination of 'skills' and CRI tools. This method significantly reduces token usage by dynamically loading only relevant tool descriptions.”
- YC Root Access on Mezero Memory Layer — YC Root Access: “Mezero raised $24 million in a seed plus Series A round to develop a memory layer for AI agents, tackling the fundamental issue of statelessness in Large Language Models.”
- Sequoia on AI Agents in Insurance — Sequoia: “Pace is an agentic process outsourcer for the insurance industry, focusing on automating back-office operations traditionally handled by BPO providers.”
NVIDIA Capex Contradiction
Is NVIDIA's unprecedented capital expenditure set to run for four to five more years, or is this front-loaded boom about to correct?
The aggregate says the AI buildout is reallocating resources at scale: companies like Meta and Atlassian are laying off humans to fund compute, and NVIDIA's growth is tied to this shift, with capex projected to remain massive. [1] Yet this is the contradiction thread. The counterclaim is direct: 'Long-term capex projections in the volatile semiconductor industry are notoriously unreliable. NVIDIA's current unprecedented spending is likely front-loaded to capture the AI boom, but competitive pressures, technological shifts, and potential market saturation could lead to earlier-than-expected capex reductions. Historical patterns suggest capex often peaks and then contracts faster than expected.' [4] Reza's crux question is whether actual utilization and revenue from agents and applications will justify the spend, or whether power infrastructure (per Asianometry) becomes the limiting factor first. [5] Mara traces the implication: if the bet contracts, open models and sovereign AI efforts (Ng) gain ground faster as cost sensitivity rises. The evidence leans toward the front-loading being real; historical chip cycles support the counterclaim more than the four-to-five-year certainty. Founders should model both scenarios. A pullback would accelerate efficiency tools like the agent teams in thread 1. This connects to thread 3 because reliable physics underpins any long-term simulation or quantum-adjacent AI claims. [6]
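"Model both scenarios" can be as simple as a two-line sensitivity check. The sketch below uses placeholder figures, not forecasts: a flat-spend path (the sustained four-to-five-year view) versus a front-loaded path that contracts each year (the counterclaim). Only the shape of the curves matters.

```python
# Illustrative only: the numbers are placeholders, not NVIDIA forecasts.
# Compares cumulative capex under the two scenarios the thread debates.

def cumulative_capex(annual_capex: float, years: int, decay: float) -> float:
    """Sum capex over `years`, shrinking spend by `decay` each year.
    decay=1.0  -> sustained scenario (flat spend for the full horizon)
    decay<1.0  -> front-loaded boom that contracts (the counterclaim)"""
    total, spend = 0.0, annual_capex
    for _ in range(years):
        total += spend
        spend *= decay
    return total

sustained = cumulative_capex(100.0, years=5, decay=1.0)
front_loaded = cumulative_capex(100.0, years=5, decay=0.6)

print(sustained)     # flat spend compounds linearly
print(front_loaded)  # contraction leaves far less total spend
```

Swapping in real utilization and revenue-per-watt assumptions turns the same skeleton into the breakeven question Reza raises: does demand materialize before the decay term does.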
“Poor messaging around AI, and the escalating risk of space debris impacting satellite infrastructure.”— Barry Ritholtz [2]
Sources (6)
- 20VC with Harry Stebbings on NVIDIA Dominance — Harry Stebbings (20VC): “NVIDIA is experiencing unprecedented growth, driven by massive capex investment set to continue for years, despite some market skepticism regarding the sustainability of their trillion-dollar revenue projections. Concurrently, a significant trend of ...”
- Barry Ritholtz Sunday Reads — Barry Ritholtz: “Poor messaging around AI, and the escalating risk of space debris impacting satellite infrastructure.”
- Andrew Ng The Batch on Data Centers — Andrew Ng: “Concerns regarding data centers' environmental impact are often overstated. While they contribute to carbon emissions, electricity consumption, and water usage, data centers, especially hyperscale operations, are significantly more efficient than alt...”
- 20VC NVIDIA Counter Claim — Provided Contradiction Block (20VC): “Long-term capex projections in the volatile semiconductor industry are notoriously unreliable. NVIDIA's current unprecedented spending is likely front-loaded to capture the AI boom, but competitive pressures, technological shifts, and potential marke...”
- Asianometry on AI Bottleneck — Asianometry: “Power infrastructure, not chip manufacturing, is emerging as the true bottleneck for AI data center growth.”
- Jason on AI Cybernetics Race — Jason Calacanis: “The advent of advanced AI models capable of widespread system infiltration elevates the global AI race to an existential geopolitical threat.”
Gravitational Waves Uphold General Relativity
Dozens of new observations from LIGO, Virgo, and KAGRA find zero deviations from Einstein's predictions, tightening bounds on everything from graviton mass to neutron star interiors.
The positions converge strongly. Multiple independent analyses of the latest gravitational wave catalog, using five distinct pipelines for continuous waves from supernova remnants and pulsars, plus ringdown tests, all return null results for deviations from General Relativity. [1] [2] In plain English: the theory that describes gravity from black hole mergers to the large-scale universe keeps passing every empirical test we throw at it, with tighter and tighter precision. No new physics yet. This is not boring. For anyone building quantum systems or simulations that assume stable spacetime, it is reassuring, and the timeline for 'beyond GR' phenomena that might enable exotic computing or propulsion just got pushed further out. [3] So what: founders chasing quantum advantage or physics-based AI acceleration should double down on error-corrected neutral-atom or superconducting approaches that work within known GR, rather than betting on speculative new theories. The absence of counterclaims across these papers is itself notable. It connects to thread 1 because reliable physics underpins the simulation environments agents might use, and to thread 2 because capex on quantum hardware assumes the foundational models are solid. [4]
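For readers wondering how a null result tightens the graviton-mass bound mentioned above: the standard argument (the textbook version, not a claim about these specific papers' methodology) is that a massive graviton would make gravitational-wave speed frequency-dependent, dephasing the chirp.

```latex
% Massive-graviton dispersion relation:
%   E^2 = p^2 c^2 + m_g^2 c^4
% which makes the group velocity frequency-dependent:
%   \frac{v_g}{c} \approx 1 - \frac{1}{2}\left(\frac{m_g c^2}{E}\right)^2
% Low-frequency components of a merger chirp would then lag high-frequency
% ones over cosmological distances. Observing no such dephasing bounds m_g
% from above (equivalently, bounds the graviton Compton wavelength
%   \lambda_g = \frac{h}{m_g c}
% from below), and every additional clean event tightens the bound.
```

Each null detection is therefore not a failure to find something; it is a measurement that shrinks the room left for new physics.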
Sources (4)
- A Martinez on Gravitational Wave Observations — A Martinez: “Analysis of 91 confident gravitational wave signals from compact binary coalescences, including data from LIGO Virgo KAGRA's O4a run, continues to validate General Relativity (GR). The study examined parameterized deviations across eight tests, findi...”
- A Martinez on GW Remnant Analysis — A Martinez: “Analysis of 42 gravitational wave events from GWTC-4.0, combined with previous observations, consistently supports General Relativity (GR). Seven tests on binary merger remnants reveal no strong evidence of GR deviation.”
- Sean Cairncross on Ytterbium Quantum Gates — Sean Cairncross: “Researchers have successfully demonstrated a universal, high-fidelity gate set using arrays of optically trapped 171Yb atoms. This system utilizes the long coherence times of nuclear spin qubits to achieve robust entanglement, with measured two-qubit...”
- A Martinez on CW Limits from Supernova Remnants — A Martinez: “The LIGO-Virgo-KAGRA Collaboration performed directed searches for continuous gravitational waves from 15 supernova remnants. No gravitational wave signals were detected, leading to the most stringent upper limits to date.”
The open question: If our agents critique each other, our physics holds firm, and our markets question the spend, what uniquely human judgment remains the decisive moat?
- a16z — a16z YouTube on Vibe Coding Era
- Claude (language model) — Claude Wikipedia on Agentic Evolution
- AI Jason — AI Jason on Agent Token Optimization
- YC Root Access — YC Root Access on Mezero Memory Layer
- Sequoia — Sequoia on AI Agents in Insurance
- Harry Stebbings (20VC) — 20VC with Harry Stebbings on NVIDIA Dominance
- Barry Ritholtz — Barry Ritholtz Sunday Reads
- Andrew Ng — Andrew Ng The Batch on Data Centers
- Asianometry — Asianometry on AI Bottleneck
- Jason Calacanis — Jason on AI Cybernetics Race
- A Martinez — A Martinez on Gravitational Wave Observations
- A Martinez — A Martinez on GW Remnant Analysis
- Sean Cairncross — Sean Cairncross on Ytterbium Quantum Gates
- A Martinez — A Martinez on CW Limits from Supernova Remnants
Transcript
REZA: This morning we flagged The End of Protest. Here's how it resolved. MARA: Pragmatic governance like Shapiro's is gaining while the radical capitalism critique lingers. REZA: I'm Reza. MARA: I'm Mara. This is absorb.md daily. REZA: Across the data seven thinkers converged on agent teams. a16z calls it vibe coding. MARA: mm. REZA: Karpathy style wiki pattern shows Claude Code moved from single sub-agents to persistent teams with JSON tracking and real-time critique. MARA: So if that's true then traditional code review jobs are first to go. REZA: Hold on. The actual claim is that token use drops when you load only relevant skills via CRI. MARA: Right. REZA: AI Jason showed that loading every tool constantly bloats context. Skills plus CRI fixes it. MARA: Okay but if agents critique each other in real time, what does that do to SaaS that can't be orchestrated? REZA: a16z says it obsoletes task-oriented apps. Token consumption moves to direct usage. MARA: Which honestly I find kind of terrifying for incumbents. Wait so YC's Mezero memory layer solves statelessness? REZA: Exactly. 24 million raised, 14 million Python downloads. The pattern is stateful collaborative agents. MARA: So in plain English that means non-technical founders direct complex work through natural description. REZA: Yeah. The crux is whether UX friction drops enough for 10x velocity. Data says yes. MARA: But the part I keep getting stuck on is enterprise adoption speed. REZA: Wait that's not quite right. Sequoia showed insurance back offices already automating judgment workflows. MARA: Ooh. REZA: 20VC says NVIDIA capex sustains at unprecedented levels for four to five years. MARA: But the counter is right there in the data. REZA: Quote. Long-term capex projections in the volatile semiconductor industry are notoriously unreliable. MARA: NVIDIA's current unprecedented spending is likely front-loaded. Historical patterns show boom then bust. REZA: Hm. 
The layoffs at Meta and Atlassian are the reallocation to compute signal. MARA: So if that's true then every founder betting on endless infra growth has a problem. REZA: The crux is utilization after the shift to AI fluency. Barry Ritholtz called out the messaging dysfunction. MARA: Andrew Ng says data centers get scapegoated but are more efficient than alternatives. REZA: Asianometry counters that power, not chips, is the real bottleneck. Wait actually the counter claim strength is moderate. MARA: Right but at some point we have to accept the front-loading pattern from past cycles. REZA: Let me back up. The empirical question whose answer resolves this is revenue per watt in 2027. MARA: If capex slows then open models win faster. That's the implication I keep coming back to. REZA: Which is either brilliant market discipline or, I honestly don't know, a dangerous pullback. MARA: mm. REZA: Four separate A Martinez papers plus supporting work all say the same thing. No deviations from GR. MARA: 91 events. 42 remnants. New continuous wave limits. All clean. REZA: In plain English the theory that governs black holes and the cosmos keeps passing every test with tighter bounds. MARA: So if that's true then the timeline for new physics that might unlock exotic compute just moved back. REZA: The hierarchical analysis puts GR predictions right in the high credible region. No strong evidence otherwise. MARA: Wait so quantum computing timelines that assume weird spacetime effects? REZA: Hold on. Sean Cairncross's ytterbium gates with 99 percent fidelity still rely on stable GR. MARA: Ooh. For founders that means double down on what we know works instead of waiting for breakthroughs. REZA: The pattern across pipelines is remarkable consistency. The crux is whether catalog size eventually reveals something. MARA: But the absence of counters is itself notable. No one is claiming deviation. REZA: Let me back up. 
This grounds the simulation environments the agent teams in thread one would use. MARA: Which ties the whole briefing together. Self-correcting systems everywhere. REZA: Yeah that tracks. MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.