absorb.md

April 22 AM: Agent math hype vs reality & PM jobs halved by AI & Looped models cut params & Robot vision in workshops

AI agents improved kissing number from 593 to 604. Breakthrough or just a nudge?

In This Briefing
1
Agent math hype vs reality
Together AI touts collaborative agents cracking a 300-year math puzzle, but counters call it incremental at best.
0:16
2
PM jobs halved by AI
Half of today's product managers may not survive the next two years of AI-driven chaos.
2:00
3
Looped models cut params
New stable recurrence technique matches 1.3B parameter quality while using only 770M parameters.
3:19
4
Robot vision in workshops
Gemini Robotics claims accurate tool counting in clutter, but counters call evidence anecdotal.
4:32
10 sources · 8 thinkers

Agent math hype vs reality

Together AI touts collaborative agents cracking a 300-year math puzzle, but counters call it incremental at best.

Signal · Four entries from Together AI plus Riley Goodside's three posts on LLM chess capabilities create a 7-entry burst on what counts as real progress in hard problems. Why now: fresh claims of 11 new state-of-the-art results meet immediate verification skepticism.
Key Positions
Together AI · EinsteinArena agents improved the kissing number in 11D from 593 to 604 spheres via a refined overlapping-sphere construction. [1]
Riley Goodside · LLMs can likely master chess with targeted post-training; standard openings are saturated, so novel variants better test generalization. [2]

The positions add up to excitement about agents rapidly iterating on constructions, with a genuine split over the marketing language. Together AI highlights agents refining an overlapping-sphere construction, driving overlap loss from 1e-13 down to 1e-50 and then snapping coordinates to integers for a verified solution. [1] Goodside argues that similar capability gaps in LLMs are training choices, not hard limits, using chess as an example where targeted post-training would unlock performance. [2]

Yet the evidence supports only an improved lower bound, not the exact kissing number, which remains unknown and unproven in dimension 11; the claim reflects one specific computational construction. The Newton reference is misleading, since his work concerned three dimensions, and such bounds have been improved incrementally by mathematicians for decades without AI agents. [1] There is no independent journal verification yet. The emerging view: agents speed up exploration, but they do not replace mathematical proof.

A smart non-specialist should care because, if agents can reliably tackle open optimization problems, an R&D team could explore chemical structures or logistics configurations ten times faster, pulling competitive timelines forward. The analogy: early cloud computing letting startups iterate on infrastructure without buying servers. SO WHAT: if the hype holds even partially, reallocating budget from solo researchers to agent platforms becomes table stakes. This connects to the robotics thread, where the same demo-versus-rigor tension shows up in industrial claims. [3]
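Strip away the hype and the verifiable core is small: a kissing configuration is just a set of sphere centers that can be checked mechanically, which is exactly what "snapping to integers for a verified solution" amounts to. A minimal sketch of that verification step, assuming unit spheres around a central unit sphere (the agents' actual construction pipeline is not public; this function is illustrative):

```python
import itertools
import math

import numpy as np

def verify_kissing_configuration(centers, tol=1e-9):
    """Check a candidate kissing configuration of unit spheres around a
    central unit sphere: every center must lie at distance 2 from the
    origin, and every pair of centers must be at least 2 apart so no
    two outer spheres overlap."""
    centers = np.asarray(centers, dtype=float)
    on_sphere = np.allclose(np.linalg.norm(centers, axis=1), 2.0, atol=tol)
    min_gap = min(np.linalg.norm(a - b)
                  for a, b in itertools.combinations(centers, 2))
    return bool(on_sphere and min_gap >= 2.0 - tol)

# 2D sanity check: the kissing number in the plane is 6 (a hexagon).
hexagon = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]
# A valid configuration: returns True.
verify_kissing_configuration(hexagon)
```

The hard part is finding 604 such centers in 11 dimensions; checking them, as above, is cheap, which is why "verified" and "solved" are very different claims.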

LLMs Likely Excel at Chess with Targeted Post-Training, Challenging Assumptions of Neglect
Riley Goodside [2]
Connects to: robot vision thread. Both show demos outpacing rigorous benchmarks, forcing founders to separate signal from marketing.
Sources (3)
  1. X post 2026-04-20 — Together AI
    This claim only reflects an improved lower bound via a specific computational construction, not the exact kissing number, which remains unknown and unproven in dimension 11. The problem is not 'solved' as implied, and the Newton reference is misleading.
  2. X post 2026-04-20 — Riley Goodside
    LLMs Likely Excel at Chess with Targeted Post-Training, Challenging Assumptions of Neglect
  3. X post 2026-04-20 — Demis Hassabis
    Gemini Robotics-ER 1.6 upgrades robotic vision with superior object pinpointing in clutter

PM jobs halved by AI

Half of today's product managers may not survive the next two years of AI-driven chaos.

Signal · Three posts from Lenny Rachitsky and two from Aaron Levie converge on roles evolving or disappearing, the highest product-strategy burst this window.
Key Positions
Lenny Rachitsky · 30,000 PM jobs will be shed, only 8,000 rehired in AI-first positions. Top traditional performers struggle most. [4]
Aaron Levie · AI raises the sophistication of most roles, acting as a multiplier for already skilled workers. [5]

Lenny warns of unprecedented chaos, with widespread exhaustion among elite PMs as resume prestige loses value. [4] Levie counters that markets evolve dynamically and that skilled workers take on larger, harder problems with AI as an amplifier. [5]

The synthesis is neither pure extinction nor seamless upgrade. Traditional information-shuttling PM work gets automated, while PMs with strong instincts connect directly to customer tests with fewer dependencies. Yet half may fail to adapt. For a founder, this means the hiring playbook is obsolete. The analogy: cloud computing obsoleted many sysadmin roles but created demand for new platform engineers. SO WHAT: audit your product org now or risk losing half your institutional knowledge in the reinvention. Notably, there is no real counter on the chaos timeline itself. Connects to the efficiency thread: cheaper models accelerate exactly these automation waves. [6]

Product managers face unprecedented chaos over the next two years as AI disrupts traditional PM roles, with half unlikely to adapt and survive.
Lenny Rachitsky [4]
Connects to: efficiency thread, because cheaper looped models and compression lower the cost of the AI tools driving these job shifts.
Sources (3)
  1. X post 2026-04-20 — Lenny Rachitsky
    Product managers face unprecedented chaos over the next two years as AI disrupts traditional PM roles, with half unlikely to adapt and survive.
  2. X post 2026-04-20 — Aaron Levie
    Skilled professionals leverage AI to tackle larger, harder problems, maintaining differentiation through domain expertise.
  3. X post 2026-04-20 — Together AI
    Parcae outperforms parameter-matched Transformers, e.g., 370M model scores 20.00 on Core vs 17.46 (+14.5%)

Looped models cut params

New stable recurrence technique matches 1.3B parameter quality while using only 770M parameters.

Signal · Carmack's compression insight, LlamaIndex's infrastructure traction, and the Parcae details show convergence on efficiency under fixed compute budgets. Three thinkers, an infrastructure burst.
Key Positions
John Carmack · LLM training enables near-lossless compression of massive corpora like the Internet Archive. [7]
LlamaIndex · LiteParse zero-cloud parser hits 4.3K stars in weeks, processes 500 pages in two seconds. [8]

Carmack notes that LLM training is itself a form of compression that becomes compelling at scale. [7] LlamaIndex shows practical wins, with LiteParse powering production agents at high speed without cloud dependency. [8] Together's Parcae models recurrence as a discrete LTI system and constrains the spectral radius below 1 with a learned negative diagonal matrix, allowing training at higher learning rates without divergence.

The result: a 370M Parcae model beats a parameter-matched Transformer by 14.5 percent on core benchmarks (20.00 vs 17.46) and shows lower validation perplexity. Scaling laws emerge in which recurrence depth and data scale together. This changes how AI is built. The analogy: switching from buying bigger servers to serverless functions that reuse resources intelligently.

SO WHAT for investors: companies shipping with these techniques can run deeper models on the same hardware, potentially slashing inference bills 30 to 40 percent and changing the unit economics of AI products. Reza would note the divergence thresholds are setup-specific, yet the empirical gains hold across 140M to 1.3B scales. No major counter today, which itself signals convergence on efficiency as the priority. [9]
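The stability trick is easy to illustrate in miniature. The posts do not spell out Parcae's exact parameterization, so the sketch below assumes a diagonal transition A = diag(-sigmoid(p)): one standard way to get a learned negative diagonal whose entries lie in (-1, 0), which for a diagonal matrix pins the spectral radius strictly below 1 so repeated looping cannot blow up the state.

```python
import numpy as np

def stable_looped_step(h, x_t, p):
    """One step of a diagonal linear recurrence h <- A h + x_t, where
    A = diag(-sigmoid(p)). Hypothetical parameterization: each diagonal
    entry is in (-1, 0), so the spectral radius max_i |A_ii| is strictly
    below 1 and the recurrence is stable regardless of depth."""
    A = -1.0 / (1.0 + np.exp(-p))   # negative diagonal entries in (-1, 0)
    return A * h + x_t              # elementwise: A is diagonal

def run_recurrence(x_seq, p):
    """Loop the step over a whole input sequence from a zero state."""
    h = np.zeros_like(p)
    for x_t in x_seq:
        h = stable_looped_step(h, x_t, p)
    return h
```

Because |A_ii| < 1, the state from bounded inputs stays bounded by a geometric series, which is what lets a looped model train at aggressive learning rates without the divergence an unconstrained transition risks.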

LLM Training Enables Near-Lossless Compression of Massive Corpora Like Internet Archive
John Carmack [7]
Connects to: PM thread. These efficiency gains accelerate the automation that disrupts traditional product roles.
Sources (3)
  1. X post 2026-04-20 — John Carmack
    LLM Training Enables Near-Lossless Compression of Massive Corpora Like Internet Archive
  2. X post 2026-04-20 — LlamaIndex
    LiteParse Rapidly Gains Traction with 4.3K Stars, Integrates into LlamaIndex for High-Speed Document Parsing
  3. X post 2026-04-20 — Together AI
    Parcae introduces stable looped architectures by modeling recurrence as a discrete LTI system and constraining the spectral radius below 1 with a learned negative diagonal matrix

Robot vision in workshops

Gemini Robotics claims accurate tool counting in clutter, but counters call evidence anecdotal.

Signal · Two detailed posts from Demis Hassabis plus Ilya Sutskever's Nobel congratulations create convergence on AI advancing physical-science applications. A deeper take on industrial deployment realities.
Key Positions
Demis Hassabis · Gemini Robotics-ER 1.6 delivers superior object detection in clutter, multi-view scene fusion, and precise gauge reading. [10]
Ilya Sutskever · Congratulates Demis Hassabis and John Jumper on the Nobel in Chemistry for AlphaFold. [11]

Hassabis positions the model as deployable now via Google AI Studio for autonomous industrial inspections that avoid liquids, objects over 20kg, and injury risks. [10] Ilya links this to broader recognition of AI pioneers. [11]

Yet the counterclaims are moderate to strong: the evidence appears anecdotal, drawn from curated demos rather than rigorous testing in diverse, uncontrolled environments with varying lighting, occlusions, or novel items. There are no independent benchmarks, such as precision-recall on standardized datasets, and real-world accuracy in cluttered workshops is typically lower due to hallucinations. The synthesis: visual-spatial gains are real in narrow settings, but the marketing stretches 'accurate' and 'precise' without numbers.

This changes how AI is used in factories: if the 10 percent reduction in human risk holds, insurers and manufacturers will adopt fast. The analogy: early self-driving demos that worked on curated routes but struggled in rain. SO WHAT: supply chain or manufacturing portfolio companies may see labor cost drops or new robot-as-a-service models within 18 months. Reza would ask for the actual precision metric. Mara notes the Nobel connection shows long-term credibility compounding. [12]
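The benchmark the skeptics want is not exotic. For tool counting in clutter it reduces to precision and recall over detections, which any pilot team can compute from a labeled test set. A minimal sketch (the counts in the example are hypothetical, not Gemini's):

```python
def precision_recall(tp, fp, fn):
    """Precision: of the objects the model flagged, what fraction were
    real (tp / (tp + fp)). Recall: of the real objects present, what
    fraction the model found (tp / (tp + fn))."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, a workshop run with 47 correct detections, 3 false alarms, and 5 missed tools would score precision 0.94 and recall about 0.90. Numbers like these, reported across lighting and occlusion conditions, are what would turn 'accurate' from marketing into evidence.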

Ilya Sutskever Congratulates Demis Hassabis and John Jumper on Nobel Prize in Chemistry Win
Ilya Sutskever [11]
Sources (3)
  1. X post 2026-04-20 — Demis Hassabis
    Gemini Robotics-ER 1.6 upgrades robotic vision with superior object pinpointing in clutter, multi-view scene fusion for task completion detection, and precise analog gauge reading via spatial reasoning and self-generated distortion correction code.
  2. X post 2026-04-20 — Ilya Sutskever
    Ilya Sutskever Congratulates Demis Hassabis and John Jumper on Nobel Prize in Chemistry Win
  3. X post 2026-04-20 — Demis Hassabis
    This appears to be a marketing claim based on selective demonstrations; real-world accuracy in cluttered environments is typically lower due to occlusion, lighting variations, and model hallucinations, with no quantitative metrics like precision-recall on standardized datasets.
The Open Question

When does an AI-assisted lower bound or demo become reliable enough to reorganize your team's workflow around it?

REZA: AI agents improved kissing number from 593 to 604. Breakthrough or just a nudge?
MARA: Counter says only lower bound, not solved, and Newton link is misleading.
REZA: I'm Reza.
MARA: I'm Mara. This is absorb.md daily.
REZA: Across the entries Together AI posted multiple times that EinsteinArena agents boosted the kissing number in 11D from 593 to 604.
MARA: But the counter claim is this only reflects an improved lower bound via specific construction, not the exact number which remains unknown.
REZA: Hold on. The Newton reference is misleading since his work was on 3D. Mathematicians improved these bounds for decades.
MARA: Okay but the agents cut overlap loss from 1e-13 to 1e-50 then snapped to integers. That speed of iteration is new.
REZA: The crux is whether this counts as breakthrough or incremental construction. No journal verification yet.
MARA: So if that's true then R and D teams should test these arenas now. Your next edge might come from agent science.
REZA: Who benefits if marketed as solving Newton's problem? Platform vendors clearly.
MARA: Right, but the platform already posted 11 new state-of-the-art results. That acceleration pattern is hard to ignore for founders.
REZA: Riley Goodside makes similar point on chess. Models can do it with right post-training. Data choice not limit.
MARA: Which means we should use novel variants to test real generalization instead of saturated openings.
REZA: Exactly. The evidence says agents help iterate fast but proof is still human territory.
MARA: This is still developing. We'll check back in the PM.
REZA: Lenny Rachitsky posts three times that half of PMs will not survive the next two years of AI reinvention.
MARA: 30,000 jobs shed, only 8,000 rehired in AI-first roles. Traditional top performers struggle most.
REZA: Aaron Levie counters that AI raises job complexity. It multiplies the skilled rather than equalizing.
MARA: So if that's true the old information shuttling job disappears but instinct-to-customer testing gets easier.
REZA: The data mixes. No single view wins. Yet the chaos timeline is consistent across both.
MARA: I didn't realize resume prestige loses value so fast. Founders must rethink hiring now.
REZA: Who benefits? Companies that adapt first. Others lose institutional knowledge.
MARA: Which honestly is kind of terrifying for anyone with a fancy PM background built on the old paradigm.
REZA: Benchmarks here are survival rates in the next 24 months. We lack those yet.
MARA: But the pattern matches what we saw in other white collar shifts. Efficiency from looped models will accelerate it.
REZA: John Carmack notes LLM training achieves near-lossless compression of internet-scale data.
MARA: LlamaIndex LiteParse hits 4.3K stars and processes 500 pages in two seconds for agents.
REZA: Hold on. Parcae models recurrence as LTI system with spectral radius constraint below one.
MARA: I didn't realize that learned negative diagonal lets them train at higher learning rates without divergence.
REZA: 370M model beats matched Transformer by 14.5 percent on core benchmarks. That is concrete.
MARA: So if that's true inference memory drops. Startups can ship deeper models on same hardware.
REZA: The divergence threshold claim is setup-specific. Other normalizations might change it.
MARA: Okay but the empirical scaling laws tie recurrence depth and data. Efficiency wins anyway.
REZA: Like Lambda in 2014. Pay for actual work instead of always-on resources.
MARA: Exactly. This changes unit economics for any AI feature inside products.
REZA: Demis Hassabis posted twice on Gemini Robotics-ER 1.6 pinpointing tools in clutter and reading gauges to sub-tick.
MARA: Integrated with the Spot robot, it respects safety constraints like avoiding liquids and objects over 20 kilos.
REZA: Ilya Sutskever congratulates him on Nobel level impact tying back to AlphaFold style science wins.
MARA: But the counter is these are curated demos. No precision recall on uncontrolled workshops.
REZA: The crux is whether 10 percent risk reduction holds outside the lab. We lack that data.
MARA: So if that's true manufacturers should pilot now. Labor cost drops could be 20 to 30 percent.
REZA: Evidence is anecdotal without standardized benchmarks. Hallucinations likely in real lighting.
MARA: No real counter on the spatial fusion part itself. That is notable progress.
REZA: Like early vision systems that worked on clean factory lines but failed in messy ones.
MARA: The Nobel connection gives long-term credibility but factories need field numbers.
MARA: That's absorb.md daily. We ship twice a day, morning and evening, pulling from a hundred and fifty-seven AI thinkers. Subscribe so you don't miss the next one.
Ilya Sutskever
@ilyasut
LlamaIndex
@llama_index
Together AI
@togethercompute
Aaron Levie
@levie
Demis Hassabis
@demishassabis
Riley Goodside
@goodside