May 1 AM: Resilient training claims scrutinized & Agent frameworks customize & Visibility tools balance & Code activity surges
Decoupled DiLoCo claims to keep training running through chip failures.
Resilient Multi-Region AI Training
Google DeepMind claims Decoupled DiLoCo keeps AI training alive despite hardware failures across regions and mixed TPUs.
Google DeepMind's Decoupled DiLoCo combines Pathways and DiLoCo to keep AI model training running continuously across multiple data centers, without halting for chip failures or synchronization issues [1]. The system is self-healing: it isolates disrupted units and reintegrates recovered ones without stopping the run. DeepMind demonstrated the approach by training a 12B Gemma model across four US regions over low-bandwidth networks, mixing TPUv5p and TPUv6e hardware without performance loss. Separately, Jim Fan starred the Mamba SSM architecture, a signal of interest in state space models that could offer efficiency gains to complement distributed resilience [2]. Together, the pattern points to a push for more fault-tolerant training systems. Counterarguments note that the tests relied on simulated hardware failures rather than unpredictable real-world ones, which can involve correlated failures, data corruption, or network issues the demo did not address. Continuous operation may also cost effective training throughput during degraded states, and reintegration could add synchronization overhead. For founders scaling AI products, the approach promises compute savings by avoiding full restarts after failures, but it warrants monitoring for those hidden overheads before committing production runs to it.
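The core DiLoCo-style idea, workers taking many local steps and only periodically averaging, with failed workers skipped and recovered ones rejoining from the current global state, can be sketched in a toy form. This is a hypothetical illustration of the pattern, not DeepMind's implementation; all function names and the scalar "model" are invented for clarity.

```python
# Toy sketch of DiLoCo-style fault-tolerant training (hypothetical, not
# DeepMind's code). The "model" is a single float optimized toward a target.

def local_steps(params, steps=10, lr=0.1, target=3.0):
    """Inner optimization on one worker: gradient steps on (p - target)^2."""
    p = params
    for _ in range(steps):
        p -= lr * 2 * (p - target)
    return p

def outer_round(global_params, workers_alive):
    """One outer round: healthy workers train locally, then their parameter
    deltas are averaged. Failed workers are simply skipped (isolated)."""
    deltas = []
    for alive in workers_alive:
        if not alive:
            continue  # isolate the failed worker from the sync step
        local = local_steps(global_params)
        deltas.append(local - global_params)
    if deltas:  # average only over surviving workers
        global_params += sum(deltas) / len(deltas)
    return global_params

params = 0.0
alive = [True, True, True, True]
for round_idx in range(20):
    if round_idx == 5:
        alive[2] = False  # simulate a chip/region failure mid-run
    if round_idx == 10:
        alive[2] = True   # recovered worker reintegrates: it simply
                          # resumes from the current global parameters
    params = outer_round(params, alive)

print(round(params, 3))  # converges to 3.0 despite the mid-run failure
```

The key property the sketch shows is that a failure never stalls the outer loop: averaging just runs over fewer workers until the lost one returns.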
Sources (2)
- X post on Decoupled DiLoCo — Google DeepMind: “Decoupled DiLoCo integrates Pathways and DiLoCo to enable continuous AI model training across multiple data centers without halting due to chip failures or synchronization issues. It features self-healing capabilities, isolating disruptions from arti...”
- GitHub star — Jim Fan: “drjimfan starred state-spaces/mamba: Mamba SSM architecture”
Agent Frameworks with Customization and Self-Review (continuing from 2026-04-25 am: agent-rebuild-frenzy)
Harrison Chase details DeepAgents for full customization and ListenLabs with self-reviewing subagents, while Karpathy points to LlamaIndex for document agents.
Harrison Chase positions DeepAgents as a comprehensive framework for building agents: batteries-included defaults alongside extensive hooks for customization [1]. He also notes that ListenLabs employs self-reviewing feedback subagents, sandboxed environments, and purpose-built abstractions for large-scale response analysis. Counterclaims suggest the 'it’s all you need' slogan is standard marketing language rather than evidence of a truly complete, standalone system, since agent frameworks inherently depend on external LLMs, tools, and infrastructure. Andrej Karpathy starred run-llama/llama_index, described as the leading document agent and OCR platform, reinforcing the focus on agent tooling [2]. Taken together, these positions suggest an emerging view that agent development needs both pre-built components and deep customization options, a substantive update to prior agent rebuild discussions, now with specific architectural choices. For a founder prototyping agents, the customization hooks could cut weeks off initial builds, while self-review subagents address reliability at scale.
“karpathy starred run-llama/llama_index: LlamaIndex is the leading document agent and OCR platform” — Andrej Karpathy [2]
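The self-reviewing subagent pattern attributed to ListenLabs can be sketched as a draft/review loop. This is a hypothetical illustration of the general pattern only; ListenLabs' actual architecture is not public, and the `draft_answer`/`review_answer` functions below are stand-ins for what would be LLM calls.

```python
# Hypothetical sketch of a self-reviewing subagent loop (not ListenLabs'
# code). A worker subagent drafts; a reviewer subagent critiques; the
# worker revises until the reviewer approves or a round limit is hit.

def draft_answer(task, feedback=None):
    """Worker subagent: produce an answer, revised per reviewer feedback."""
    answer = f"Summary of {task}"
    if feedback:
        answer += " (revised: " + "; ".join(feedback) + ")"
    return answer

def review_answer(answer):
    """Reviewer subagent: return a list of issues; empty means approved."""
    issues = []
    if "revised" not in answer:  # stand-in critique rule for the demo
        issues.append("add supporting detail")
    return issues

def run_with_self_review(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        answer = draft_answer(task, feedback)
        feedback = review_answer(answer)
        if not feedback:   # reviewer approved: stop iterating
            return answer
    return answer          # round limit reached: return best effort

print(run_with_self_review("user interview batch"))
```

The round limit matters in practice: without it, a reviewer that never approves would loop forever and burn tokens.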
Sources (2)
- X post on DeepAgents — Harrison Chase: “DeepAgents is positioned as an all-in-one solution for agent development, providing batteries-included convenience alongside extensive hooks for customization. It enables developers to adjust components precisely to their needs without starting from ...”
- GitHub star — Andrej Karpathy: “karpathy starred run-llama/llama_index: LlamaIndex is the leading document agent and OCR platform”
AI Tools for Visibility and Human Authenticity
Robert Scoble shares an AI tool for balanced visibility of whales and small accounts on X; Kevin Roose highlights intentional typos boosting email open rates by 40%.
Robert Scoble says he built an AI tool to analyze the AI community on X, surfacing both influential "whales" and smaller accounts [1]. He argues this shows it is feasible to preserve visibility for large accounts while improving the platform for newcomers, and urges Elon Musk and Nikita Bier to adopt similar approaches amid user concerns. Counterclaims note that Scoble's self-reported claim lacks independent verification or technical detail; the linked resource may be an existing platform feature, a third-party service, or simple list curation rather than a novel AI tool he built himself. Kevin Roose reports that email marketers saw a 40% increase in open rates after deliberately adding typos to subject lines, because recipients perceived the messages as human-written rather than automated [2]. The tactic leverages perceived authenticity to counter AI-generated polish, and the anecdote aligns with tools like anti-Grammarly services that intentionally roughen up emails. The counterargument: the claim rests on anecdotal evidence from one email marketer, with no details on methodology, controls, sample size, or statistical significance, and the 40% lift could stem from other simultaneous changes in email strategy, audience targeting, or content rather than the typos themselves. The broader pattern is a response to AI's growing presence in content and platforms, with tools aiming to restore human elements. For smart non-specialists, that could mean practical tweaks to marketing or community management that help them stand out.
Sources (2)
- X post on AI Tool for Visibility — Robert Scoble: “Scoble developed an AI tool to analyze the AI community on X, revealing both influential "whales" and smaller accounts. This demonstrates feasibility of maintaining visibility for large accounts while improving the platform for newcomers. He urges El...”
- X post on Typos in Emails — Kevin Roose: “Email marketers observed a 40% increase in open rates after deliberately adding typos to subject lines, as recipients perceived them as human-written rather than automated. This tactic leverages perceived authenticity to counter AI-generated perfecti...”
Developer Code Pushes and Open Source Stars
Yulun Wang and Garry Tan push code to their projects while Ben Thompson and Peter Zoller star useful libraries like whitenoise and Factory.
Yulun Wang made several code pushes to yulunwang/Citadels [1], and Garry Tan pushed updates to garrytan/gbrain [2]. Ben Thompson starred evansd/whitenoise, radically simplified static file serving for Python web apps [3]. Peter Zoller starred hmlongco/Factory, a modern approach to container-based dependency injection for Swift and SwiftUI, and square/Listable, declarative list views for iOS apps [4]. The activity reflects builders actively maintaining their projects and adopting tools that simplify development, suggesting sustained open source momentum that could underpin future AI applications and infrastructure. For founders, it highlights the value of monitoring such libraries for integration opportunities that accelerate prototyping without reinventing core pieces.
“benthompson starred evansd/whitenoise: Radically simplified static file serving for Python web apps” — Ben Thompson [3]
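The container-based dependency injection pattern that Factory brings to Swift can be sketched language-agnostically. The Python analogue below is purely illustrative and does not mirror Factory's actual Swift API; the `Container` class and registration names are invented. The key idea it demonstrates is the test-time override: swapping a dependency without touching production wiring.

```python
# Generic container-based dependency injection (hypothetical sketch,
# loosely analogous to the pattern hmlongco/Factory provides for Swift).

class Container:
    def __init__(self):
        self._factories = {}
        self._overrides = {}

    def register(self, name, factory):
        """Register a factory callable for a named dependency."""
        self._factories[name] = factory

    def override(self, name, factory):
        """Swap in a test double without touching production wiring."""
        self._overrides[name] = factory

    def resolve(self, name):
        """Build the dependency, preferring any active override."""
        factory = self._overrides.get(name, self._factories[name])
        return factory(self)

container = Container()
container.register("api_client", lambda c: {"base_url": "https://api.example.com"})
container.register("service", lambda c: {"client": c.resolve("api_client")})

print(container.resolve("service")["client"]["base_url"])  # production wiring

# In tests, override the client with a stub; "service" picks it up
# automatically because it resolves its dependency through the container.
container.override("api_client", lambda c: {"base_url": "stub://local"})
print(container.resolve("service")["client"]["base_url"])
```

Because every dependency is resolved through the container at build time, one `override` call redirects everything downstream of it, which is the property that makes this style attractive for SwiftUI previews and unit tests.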
Sources (4)
- GitHub push — Yulun Wang: “yulunwang pushed to yulunwang/Citadels: code update”
- GitHub push — Garry Tan: “garrytan pushed to garrytan/gbrain: code update”
- GitHub star — Ben Thompson: “benthompson starred evansd/whitenoise: Radically simplified static file serving for Python web apps”
- GitHub star — Peter Zoller: “peterzoller starred hmlongco/Factory: A modern approach to Container-Based Dependency Injection for Swift and SwiftUI.”
The open question: Will the self-healing training and customizable agents deliver promised gains, or will real-world tests reveal hidden costs?
- Google DeepMind — X post on Decoupled DiLoCo
- Jim Fan — GitHub star
- Harrison Chase — X post on DeepAgents
- Andrej Karpathy — GitHub star
- Robert Scoble — X post on AI Tool for Visibility
- Kevin Roose — X post on Typos in Emails
- Yulun Wang — GitHub push
- Garry Tan — GitHub push
- Ben Thompson — GitHub star
- Peter Zoller — GitHub star
Transcript
REZA: Decoupled DiLoCo claims to keep training running through chip failures.
MARA: But only with simulated breakdowns?
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: The data shows Google DeepMind posting three times on Decoupled DiLoCo for resilient training across regions.
MARA: The counters are clear that those failures were artificial and simulated.
REZA: Jim Fan starred Mamba as an efficient alternative architecture that might sidestep some issues.
MARA: So what does that mean for a lab running frontier models?
REZA: It could mean less downtime if it works, but the evidence is still exploratory on a 12B model.
MARA: Which means founders should watch for real deployments before betting on it.
REZA: The crux is whether asynchronous operation introduces new problems like model divergence.
REZA: Harrison Chase detailed DeepAgents with customization hooks and ListenLabs with self-reviewing subagents.
MARA: But the counters call the all-you-need claim typical marketing since agents need external LLMs anyway.
REZA: Karpathy starred LlamaIndex for document agents and OCR, showing the agent space is heating up.
MARA: This is a deeper take than the prior agent rebuild coverage, with specific code repos now.
REZA: The synthesis is that builders want modular agents they can tweak precisely.
MARA: If that's true, then off-the-shelf agent platforms might lose ground to customizable ones.
REZA: Karpathy's involvement signals mainstream interest in these tools.
MARA: For anyone building agents, the hooks could reduce starting from scratch.
REZA: The self-review subagents in ListenLabs add another layer for reliability at scale.
REZA: Scoble built an AI tool for balanced whale and small account visibility on X.
MARA: But counters say it might just be a list or existing feature, not a new creation.
REZA: Roose says typos in emails get 40% more opens by looking human.
MARA: Anecdotal though, no controls mentioned. Could be confirmation bias.
REZA: The tension is whether these tools are truly novel or just tweaks.
MARA: Right, and if they work, it changes how we interact with AI outputs daily.
REZA: For community managers, the balanced visibility could fix real platform gripes.
MARA: And for email marketers, the typos tactic might spread if it holds up.
REZA: Yulun Wang and Garry Tan pushed code updates to Citadels and gbrain.
MARA: Ben Thompson starred whitenoise for simple Python static files.
REZA: Peter Zoller added Factory for Swift dependency injection and Listable for iOS lists.
MARA: This shows steady hands-on work across personal projects and supporting libs.
REZA: The pattern is builders iterating on tools that could back AI apps.
MARA: So if this activity continues, it speeds up prototyping for everyone.
REZA: No major contradictions here, just consistent development signals.
MARA: Which, honestly, is the quiet foundation for the bigger claims in the other threads.
REZA: This code activity is still developing — we'll check back in the PM.
MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.