absorb.md

April 23 AM: Rosalind compresses drug cycles & Grok photorealism faces pushback & Parcae cuts params & persistent subagents

OpenAI's new model targets ten to fifteen year drug timelines.

In This Briefing
  1. Rosalind Compresses Drug Discovery Timelines (0:13)
     OpenAI launches a biology-optimized reasoning model aimed at slashing the decade-plus timeline for new drugs.
  2. Grok Photorealism Claims Face Pushback (1:43)
     Elon Musk says Grok's Imagine tool produces images nearly indistinguishable from reality, but counters call it standard marketing without evidence.
  3. Parcae Enables Stable Looped Transformers (3:07)
     Together AI's new architecture matches 1.3B parameter quality at 770M parameters by stabilizing recurrence, continuing last week's looped model discussion.
  4. Subagents Create Persistent Always-On Threads (4:35)
     Developers are using subagents and steering in Codex to keep AI threads alive indefinitely, triggered by prompts or automation.
8 sources · 8 thinkers

Rosalind Compresses Drug Discovery Timelines

OpenAI launches a biology-optimized reasoning model aimed at slashing the decade-plus timeline for new drugs.

Signal · Three entries from OpenAI, plus Nobel congratulations from Ilya Sutskever and Demis Hassabis on AI's scientific impact. Why now: the launch arrives as the ai-research trend tops importance scores and directly targets translational medicine.
Key Positions
OpenAI: GPT-Rosalind excels at protein, chemical and genomic reasoning to accelerate ...[1]
Ilya Sutskever: Congratulates Hinton, Hassabis and Jumper on Nobels, underscoring AI's growin...[2]

The positions converge on AI now being specialized enough to meaningfully compress discovery workflows in biology and chemistry. OpenAI positions GPT-Rosalind as a frontier reasoning model, available in ChatGPT, Codex and the API for partners like Amgen and Moderna, optimized for scientific tool use and exploration of vast possibility spaces. [1] Sutskever's and Hassabis's notes on the Nobels for neural networks and AlphaFold-style work add weight to the idea that this is not isolated hype but part of a broader pattern in which AI earns recognition for advancing hard science. [2] Yet the counterargument is that the 10-15 year average includes many failed programs and already varies by therapeutic area, with some approvals in 3-5 years via accelerated pathways, so the compression claim may overstate the baseline. For a biotech founder this matters because even modest timeline cuts could reshape capital requirements and competition with big pharma. The emerging view is cautious optimism: the tools are here now as research previews, but real impact will be measured in published pipelines, not announcements. This thread connects to the later efficiency discussion, since specialized models often trade parameters for domain performance.

It aims to shorten the 10-15 year drug development timeline by enabling faster hypothesis generation and exploration of possibilities.
OpenAI [1]
Connects to: the persistent subagents thread, as both point to AI moving from general tools to domain-embedded collaborators.
Sources (2)
  1. X post 2026-04-20 — OpenAI
    It aims to shorten the 10-15 year drug development timeline by enabling faster hypothesis generation and exploration of possibilities.
  2. X post on Nobel — Ilya Sutskever
    Ilya Sutskever publicly congratulated Demis Hassabis and John Jumper for receiving the 2024 Nobel Prize in Chemistry.

Grok Photorealism Claims Face Pushback

Elon Musk says Grok's Imagine tool produces images nearly indistinguishable from reality, but counters call it standard marketing without evidence.

Signal · An Elon Musk post, plus Riley Goodside on a generational shift in what counts as AI-generated; elevated by the ai-research burst and explicit contradiction data.
Key Positions
Elon Musk: Grok Imagine elevates realism to a new level where AI images are nearly impos...[1]
Riley Goodside: Future kids will view all CG Pixar films as non-AI despite computer use, refl...[2]

This thread captures the tension between perceptual leaps in generative models and the lack of rigorous measurement. Musk states that Grok's Imagine feature produces unprecedented realism, making images nearly impossible to distinguish from photographs and elevating AI image synthesis to a new level of fidelity. [1] Riley Goodside adds that this shift will soon make pre-AI computer graphics, like Pixar films, seem distinctly non-AI to the next generation. [2] Yet the counterarguments hit hard: the assertion relies on subjective promotional language without objective benchmarks, side-by-side comparisons against Flux, Midjourney or DALL-E, or quantitative metrics like FID scores and user studies, making it indistinguishable from standard marketing hype. The evidence suggests we are in another wave of impressive demos without the infrastructure to prove 'indistinguishable' at scale. A smart non-specialist should care because if photorealism crosses the threshold, it changes content creation, advertising, evidence in courts and what 'real' means online. The split is genuine: optimists see the demos as the signal; skeptics want benchmarks before declaring victory. This connects to the Rosalind thread, as both illustrate specialization (images vs biology) outpacing verification.

Grok's Imagine feature produces AI-generated images with unprecedented realism, making them nearly impossible to distinguish from real photographs.
Elon Musk [1]
Connects to: the Rosalind thread, as both show domain specialization racing ahead of rigorous verification standards.
Sources (2)
  1. X post 2026-04-20 — Elon Musk
    Grok's Imagine feature produces AI-generated images with unprecedented realism, making them nearly impossible to distinguish from real photographs.
  2. X post 2026-04-20 — Riley Goodside
    Riley Goodside predicts a future where children perceive Pixar movies, created with computers but pre-AI techniques, as distinctly non-AI.
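The benchmarks the counters ask for are not exotic. FID, for instance, is just the Fréchet distance between Gaussians fitted to two sets of image features. A minimal sketch, with random arrays standing in for the Inception-v3 embeddings real FID uses; all names below are illustrative, not any lab's actual evaluation code:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    # FID formula: ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 (Ca Cb)^{1/2})
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)

    def psd_sqrt(m):
        # symmetric PSD square root via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

    # Tr((Ca Cb)^{1/2}) computed on the symmetric form Cb^{1/2} Ca Cb^{1/2}
    sb = psd_sqrt(cb)
    tr_covmean = np.sqrt(np.clip(np.linalg.eigvalsh(sb @ ca @ sb), 0.0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(ca) + np.trace(cb) - 2.0 * tr_covmean)

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 8))        # "real photo" features
fake_close = rng.normal(0.0, 1.0, size=(500, 8))  # matches the real distribution
fake_far = rng.normal(3.0, 1.0, size=(500, 8))    # visibly off-distribution

# closer distributions score lower, as expected
assert frechet_distance(real, fake_close) < frechet_distance(real, fake_far)
```

A claim like "nearly indistinguishable" becomes testable the moment numbers like these, computed on a shared feature extractor, are published alongside the demos.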

Parcae Enables Stable Looped Transformers

Together AI's new architecture matches 1.3B parameter quality at 770M parameters by stabilizing recurrence, continuing last week's looped model discussion.

Signal · One detailed post from Together AI, plus John Carmack on the compression implications of LLM training; arrives amid the ai-infrastructure trend, with explicit counters on the learning-rate and stabilization claims.
Key Positions
Together AI: Parcae models recurrence as discrete LTI system, constrains spectral radius b...[1]
John Carmack: LLM training enables near-lossless compression of massive corpora like the In...[2]

The synthesis here is efficiency under fixed FLOP budgets: trade width for depth via stable recurrence. Together AI explains that Parcae constrains the spectral radius with a learned negative diagonal matrix, allowing a learning rate of 1e-3 instead of the 4e-4 unconstrained loops require, and yielding a 14.5% better Core score at 370M scale and 6.3% lower validation perplexity. [1] Carmack notes that the training process itself acts as powerful compression for internet-scale data when bit-exact regurgitation is not the goal. [2] Counters note that divergence thresholds are setup-specific and that other techniques, such as spectral normalization, could achieve similar results, so the claimed advantage may not be universal. For infrastructure teams this matters because inference memory drops when you loop deeper instead of widening, potentially lowering serving costs dramatically. The pattern adds up to renewed interest in recurrent architectures now that stability looks solvable, echoing the earlier looped-model discussion but with concrete scaling laws. It is still early, but the direction suggests parameter efficiency may improve faster than pure scaling predicts.

Parcae introduces stable looped architectures by modeling recurrence as a discrete LTI system and constraining the spectral radius below 1 with a learned negative diagonal matrix.
Together AI [1]
Connects to: the persistent subagents thread, because efficient base models make always-on threads more practical at scale.
Sources (2)
  1. X post 2026-04-20 — Together AI
    Parcae introduces stable looped architectures by modeling recurrence as a discrete LTI system and constraining the spectral radius below 1 with a learned negative diagonal matrix.
  2. X post 2026-04-20 — John Carmack
    LLM training process can achieve near-lossless compression of vast datasets like the Internet Archive.
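The stabilization idea is easy to sketch. Assuming a parameterization in the spirit of the post (the actual Parcae code is not reproduced in these excerpts, so every name and shape below is an illustrative guess), a learned diagonal pushed through a negative map forces each eigenvalue of the discrete LTI recurrence into (0, 1), so looping deeply cannot diverge:

```python
import numpy as np

# Sketch only: the mechanism, not Parcae's implementation. A learned
# negative diagonal, exponentiated, pins the spectral radius below 1.

def stable_diagonal(raw):
    softplus = np.logaddexp(0.0, raw)   # softplus(raw) > 0 for any input
    return np.exp(-softplus)            # entries strictly in (0, 1)

rng = np.random.default_rng(0)
dim = 64
diag = stable_diagonal(rng.normal(size=dim))       # eigenvalues of the LTI map
W_in = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # input projection

# Discrete LTI recurrence h_{t+1} = D h_t + W_in x: loop deep, stay bounded.
h = np.zeros(dim)
x = rng.normal(size=dim)
for _ in range(200):
    h = diag * h + W_in @ x

assert diag.max() < 1.0        # spectral radius constraint holds
assert np.isfinite(h).all()    # no divergence even after 200 loop iterations
```

Because the map is contractive, the looped state settles toward a fixed point instead of exploding, which is the intuition behind tolerating the higher learning rate the post cites.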

Subagents Create Persistent Always-On Threads

Developers are using subagents and steering in Codex to keep AI threads alive indefinitely, triggered by prompts or automation.

Signal · Posts from Alexander Embiricos on subagent patterns, plus Aaron Levie on rapid model progress forcing quarterly agent-architecture rebuilds; part of a convergence of the ai-product and infrastructure trends.
Key Positions
Alexander Embiricos: Subagents with steering in Codex maintain long-lived perpetually active threa...[1]
Aaron Levie: Rapid AI model advances obsolete prior agent mitigations every few months, re...[2]

The aggregate view is that agent interaction is moving from stateless chats to persistent, stateful systems that behave more like always-available colleagues. Embiricos describes the workflow as magical: initial prompts or automation pings keep threads alive, and parallel subagents pick up new tasks without context loss. [1] Levie adds that because models improve so quickly, teams must rebuild agent architectures and tooling quarterly, making last year's RAG or LLMOps patterns already outdated. [2] There is no direct counterclaim in the data, which Mara notes is itself notable given how much skepticism agent hype usually attracts. The so-what for a founder or engineering leader: your developers may soon spend less time copying context and more time steering high-level goals, but only if your stack can handle the memory and orchestration costs. This thread looks like the user-behavior shift that follows the model and infrastructure improvements in the earlier sections. The evidence is early-adopter reports rather than large-scale studies, so adoption curves remain uncertain.

using subagents with steering in Codex to maintain long-lived, perpetually active threads triggered by prompts or automations.
Alexander Embiricos [1]
Sources (2)
  1. X post 2026-04-20 — Alexander Embiricos
    using subagents with steering in Codex to maintain long-lived, perpetually active threads triggered by prompts or automations.
  2. X post 2026-04-20 — Aaron Levie
    AI model progress demands quarterly rebuilds of agent systems, obsoleting mitigations for prior limitations like context windows.
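The persistent-thread pattern can be caricatured with plain threads and queues. This is a toy sketch, not the Codex mechanism: the class and method names are invented, and a real subagent would call a model rather than echo its task.

```python
import queue
import threading

class PersistentThread:
    """Toy long-lived agent thread: shared context survives across tasks,
    and new tasks are dispatched to parallel worker 'subagents'."""

    def __init__(self, num_subagents=2):
        self.context = []             # accumulated steering / conversation state
        self.tasks = queue.Queue()
        self.results = queue.Queue()
        for _ in range(num_subagents):
            threading.Thread(target=self._subagent, daemon=True).start()

    def _subagent(self):
        while True:
            task = self.tasks.get()
            # stand-in for a model call: echo the task plus visible context size
            self.results.put(f"{task} [ctx={len(self.context)}]")
            self.tasks.task_done()

    def steer(self, note):
        self.context.append(note)     # steering updates shared state in place

    def dispatch(self, task):
        self.tasks.put(task)          # "using a subagent, in parallel do X"

agent = PersistentThread()
agent.steer("project: looped transformers")
agent.dispatch("summarize Parcae post")
agent.dispatch("draft follow-up questions")
agent.tasks.join()                    # both subagents finish; thread stays alive
```

The point of the sketch is the shape of the shift: state lives in the thread, not in any single prompt, so dispatching a new task never restarts the context. The real memory and orchestration costs the section flags are exactly what this toy omits.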
The Open Question

When specialized models and agent patterns deliver impressive demos but independent verification and benchmarks lag, how should teams decide what to trust and deploy in production?

REZA: OpenAI's new model targets ten to fifteen year drug timelines.
MARA: But counters say that average is already shrinking.
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: Across the entries the clearest pattern is OpenAI launching GPT-Rosalind specifically tuned for biology, genomics and chemistry reasoning.
MARA: So if that's true then biotech founders could see discovery cycles drop from over a decade to something far shorter.
REZA: Hold on. The claim is it shortens the ten to fifteen year average from target to approval.
MARA: But the counter says that average includes many failed projects and recent analyses show median successful drugs already take seven to ten years.
REZA: The crux is whether the model meaningfully accelerates the parts that matter or just generates more hypotheses that still fail later.
MARA: Ilya congratulating Hassabis and Hinton on Nobels reinforces the view that AI is already earning science prizes.
REZA: Carmack separately notes LLM training compresses internet-scale corpora near losslessly.
MARA: Which honestly makes the Rosalind approach feel like training on the right distribution rather than raw scale.
REZA: Available today in research preview to selected customers. Early but the direction is clear.
MARA: No real counter on the specialization itself. That's notable.
REZA: Elon posted that Grok Imagine reaches a new level where images are nearly impossible to distinguish from real photos.
MARA: The counter says it relies on subjective promotional language without FID scores, side-by-sides or user studies.
REZA: Hold on. The crux is whether this is genuine perceptual leap or the usual marketing cycle.
MARA: Okay but if that's true then media, advertising and evidence standards all shift overnight.
REZA: Riley Goodside separately predicts kids will soon see all Pixar CG as non-AI.
MARA: I just connected that. The generational redefinition of what counts as AI is the bigger story.
REZA: Exactly. Demos look compelling but we lack independent verification the counters demand.
MARA: So provenance problems explode. Creators now compete with indistinguishable fakes.
REZA: The evidence is all curated. Real-world lighting and edge cases still break most models.
MARA: No convergence yet. Absence of benchmarks itself is the signal given how fast claims escalate.
REZA: This one is still developing. We'll check back in the PM on adoption metrics.
REZA: Together AI released Parcae, modeling recurrence as discrete LTI system with spectral radius constraint.
MARA: So in plain English that means stable looped models that match one point three billion quality at seven hundred and seventy million parameters.
REZA: It trains at higher learning rate than unconstrained versions and shows fourteen point five percent better Core score at three seventy million scale.
MARA: Okay but the counter says divergence thresholds are setup specific and spectral normalization might achieve the same.
REZA: Carmack adds that the training process itself compresses petabyte corpora near losslessly when exact bits aren't needed.
MARA: If that's true then inference costs drop because you loop deeper instead of widening the model.
REZA: This continues the looped models discussion from yesterday but with new stabilization math.
MARA: Founders running inference at scale should care about the memory win.
REZA: Scaling laws now tie recurrence depth and data via power laws. Not hype, measured on Core and perplexity.
MARA: The counters didn't land hard here. The technique seems reproducible.
REZA: Embiricos describes using subagents plus steering inside Codex to keep threads perpetually active.
MARA: So if that's true then developers get always-on research assistants instead of resetting context every session.
REZA: The pattern is dispatching new tasks in parallel with the phrase "using a subagent, in parallel do X."
MARA: Levie says rapid model advances force quarterly overhauls of agent architectures and tooling.
REZA: Hold on. The crux is whether persistence scales or just creates harder memory management problems.
MARA: For product teams this changes the entire workflow from episodic to continuous.
REZA: LlamaIndex ParseBench also raises the bar for faithful document parsing that agents can actually act on.
MARA: No strong counterclaim in the data. The excitement seems genuine among early users.
REZA: This is still developing. We'll check back in the PM on how widely teams adopt persistent threads.
MARA: Which honestly is kind of terrifying for anyone whose product assumes one-shot interactions.
MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.
Alexander Embiricos (@embirico) · Ilya Sutskever (@ilyasut) · Together AI (@togethercompute) · Elon Musk (@elonmusk) · Aaron Levie (@levie) · Demis Hassabis (@demishassabis) · OpenAI (@OpenAI) · Riley Goodside (@goodside)