
About Gary Marcus

CEO and Author at Marcus on AI

Gary Marcus is a scientist, author, entrepreneur, and NYU professor emeritus renowned for his critiques of deep learning and large language models (LLMs), arguing that, despite the scaling hype, they suffer from fundamental flaws such as hallucination, brittleness, and a lack of genuine reasoning. As CEO of Marcus on AI, he advocates hybrid neurosymbolic approaches, combining neural networks with symbolic AI, as the path to reliable AGI. He warns of an AI investment bubble akin to WeWork and predicts diminishing returns from pure scaling.

Biography and Background

Gary Marcus is a leading AI scientist, psychologist, and entrepreneur with a PhD from MIT. He is an NYU professor emeritus, author of books like The Algebraic Mind, and founder of Geometric Intelligence, acquired by Uber.[2][11][12] As CEO and author at Marcus on AI (his Substack), he focuses on AI/ML critiques, the creator economy, and entrepreneurship.[5][13]

Critiques of LLMs and Scaling Hype

Marcus consistently argues that LLMs are overhyped "stochastic parrots": advanced autocomplete systems prone to hallucination, lacking true reasoning (System 2 thinking), and unreliable on causation and common sense.[1][2][6][8][9][18] He popularized the "trillion-pound baby" fallacy to mock scaling laws: just as endlessly doubling a baby's weight doesn't yield intelligence, more data and compute won't fix the core flaws.[2] In his view, GPT-5's underwhelming performance vindicates these critiques and signals the end of "Scale Is All You Need."[1][10][21][28][29]

The AI Bubble and Economic Realities

Marcus argues the AI bubble is bursting, citing unsustainable economics: OpenAI's massive losses, commoditization of models, price wars, and WeWork-like risks.[1][2][19] He contends that VC incentives are misaligned, favoring scaling over research, and predicts investor pullback rather than any near-term takeover of jobs by AI.[1][2][23]

Path to Reliable AGI: Hybrid and Neurosymbolic AI

Rejecting LLM-only paths, Marcus champions hybrid systems integrating neural nets with symbolic AI for 'world models,' causality, and reliability—echoing his early work on cognitive architectures.[8][13][18][27][33] Recent advances validate neurosymbolic AI as the biggest post-LLM breakthrough.[13][27][33]

Media and Public Engagement

Active on X (@GaryMarcus) and Substack, Marcus testified to the US Senate on AI oversight.[5][10] He critiques lazy journalism ('CEO said a thing!'), doomerism, and figures like Sam Altman.[20][24][30][31]

Earlier Academic Work

Marcus's papers explore language acquisition, lexical semantics, and critiques of statistical AI as insufficient for general intelligence.[14][15][16][17][18]

LLM Limitations (Hallucinations, No Reasoning)

LLMs excel at pattern matching but fail on logic, causation, common sense, and reliability; they're System 1 (intuitive) without System 2 (deliberative).

  • Core LLM limitations and scaling failure [1]

  • "Strengthened autocomplete" with hallucinations and "work scrap" effect [2]

  • ChatGPT in shambles, warnings since 2001 [6]

  • Knockout blow for LLMs [9]

  • Statistical approximation ≠ general intelligence [18]

Scaling Hype and 'Trillion-Pound Baby' Fallacy

Marcus holds that blind faith in scaling laws is naive, and that GPT-5's disappointing debut demonstrates diminishing returns.

  • GPT-5 underwhelmed, scaling breaks [1]

  • Trillion-pound baby fallacy [2]

  • AGI vs. broad shallow intelligence [8]

  • "Scale Is All You Need" is dead [29]

  • Game over for AGI via LLMs [21]

AI Bubble and Economic Unsustainability

AI investment mirrors WeWork: high costs, low moats, and no near-term profitability.

  • Economics don't add up, OpenAI wrong bet [1]

  • OpenAI as AI's WeWork [2]

  • New financing questionable [19]

  • Bubble all over [28]

Hybrid/Neurosymbolic Path to AGI

Combine neural and symbolic AI for reliable intelligence with world models.

  • Practical hybrid path to AGI [1]

  • World models needed [2]

  • Good news for neurosymbolic AI [13][27]

  • Biggest advance since LLMs [33]

Critiques of AI Hype, Doomers, and Figures

Calls out overpromising CEOs (Altman), doomers, and media hype.

  • Sam Altman unconstrained by truth [31]

  • AGI-is-nigh doomers own-goal [30]

  • Lazy journalism [20]

  • Overblown announcements [32]

Sources

Every source that fed the multi-agent compile above is listed below. Inline citation markers in the wiki text (like [1], [2]) are not yet individually linked to specific entries; this is the full set of sources the compile considered.

  1. Agents Of Tech: Is AI a Bubble? Gary Marcus on GPT-5 Hype and the Future of AI (podcast episode, 2026-04-14)
  2. 跨国串门儿计划: #402. The "Big Short" of the AI Era: A Conversation with Gary Marcus, Unpacking the Logic Traps and Investment Bubble Behind Large Models (podcast episode, 2026-04-14)
  3. On the Couch: On the Couch with Gary Norden: Inside a 35-Year Trading Career (podcast episode, 2026-04-14)
  4. Strength in Numbers with Marcus Crigler: Episode 48: The Feast or Famine Cycle Isn't Bad Luck - Here's Exactly What's Causing It with Gary Harper (podcast episode, 2026-04-14)
  5. What if Generative AI turned out to be a Dud? - Marcus on AI (article, 2026-04-14)
  6. ChatGPT in Shambles - by Gary Marcus (article, 2026-04-14)
  7. Archive - Marcus on AI (article, 2026-04-14)
  8. AGI versus "broad, shallow intelligence" - by Gary Marcus (article, 2026-04-14)
  9. A knockout blow for LLMs? - Marcus on AI - Substack (article, 2026-04-14)
  10. Gary Marcus (@GaryMarcus) / Posts / X (article, 2026-04-14)
  11. Dr. Gary Marcus (article, 2026-04-14)
  12. Gary Marcus | Substack (article, 2026-04-14)
  13. Marcus on AI | Substack (article, 2026-04-14)
  14. Shifting Senses in Lexical Semantic Development (paper, 2026-04-14)
  15. CUNY Academic Works (paper, 2026-04-14)
  16. Broeder & Murre, eds.: Models of language acquisition: Inductive and deductive approaches (paper, 2026-04-14)
  17. Journal of Experimental Psychology: Human Perception and Performance: Temporal Dynamics and the Identification of Musical Key (paper, 2026-04-14)
  18. Statistical approximation is not general intelligence (paper, 2026-04-14)
  19. Does OpenAI's new financing make sense? - Marcus on AI | Substack (news article, 2026-04-14)
  20. "CEO said a thing!" - Marcus on AI | Substack (news article, 2026-04-14)
  21. Game over. AGI is not imminent, and LLMs are not the royal road to getting there. - Marcus on AI | Substack (news article, 2026-04-14)
  22. Marilyn (Molly) Marcus, 1942-2026 - Marcus on AI | Substack (news article, 2026-04-14)
  23. Six (or seven) predictions for AI 2026 from a Generative AI realist - Marcus on AI | Substack (news article, 2026-04-14)
  24. Promises are cheap - Marcus on AI | Substack (news article, 2026-04-14)
  25. The two wildest stories today in tech - Marcus on AI | Substack (news article, 2026-04-14)
  26. About that Matt Shumer post that has nearly 50 million views - Marcus on AI | Substack (news article, 2026-04-14)
  27. Even more good news for the future of neurosymbolic AI - Marcus on AI | Substack (news article, 2026-04-14)
  28. The AI bubble is all over now, baby blue - Marcus on AI | Substack (news article, 2026-04-14)
  29. "Scale Is All You Need" is dead - Marcus on AI | Substack (news article, 2026-04-14)
  30. How AGI-is-nigh doomers own-goaled humanity - Marcus on AI | Substack (news article, 2026-04-14)
  31. Sam Altman, unconstrained by the truth - Marcus on AI | Substack (news article, 2026-04-14)
  32. Three reasons to think that the Claude Mythos announcement from Anthropic was overblown - Marcus on AI | Substack (news article, 2026-04-14)
  33. The biggest advance in AI since the LLM - Marcus on AI | Substack (news article, 2026-04-14)