
AI Policy

Sources: David Sacks (8), Marc Andreessen (5), Anthropic (4), OpenAI (2), Yann LeCun (2), Alexander Embiricos (1), Casey Newton (1), Upstream with Erik Torenberg (1), Matt Wolfe (1), Ethan Mollick (1), Scott Aaronson (1)

AI Governance Requires Adaptation Buffers, Not Nonproliferation: Helen Toner on Policy Realism

Helen Toner, former OpenAI board member and CSET director, argues that AI policy is misframed around nonproliferation — a strategy that collapses as capability costs fall rapidly — and should instead prioritize building societal "adaptation buffers": resilience infrastructure like outbreak detection…

The Sanders-AOC Data Center Moratorium Would Paradoxically Entrench Big Tech's AI Dominance

Bernie Sanders and AOC introduced the AI Data Center Moratorium Act, which would halt all new U.S. data center construction until federal AI safeguards are enacted. While the bill's core concerns — rising residential electricity costs, water consumption, and environmental harm from data centers — are…

White House Releases National AI Framework to Unify State Regulations and Protect Citizens

The Trump administration has released a national AI framework, dubbed "One Rulebook," aiming to standardize AI regulation across the United States. This initiative is designed to counteract fragmented state-level regulations that could impede AI innovation and U.S. leadership in the field. The framework…

Trump Administration Releases Unified National AI Framework to Counter State Patchwork and Boost US Leadership

The White House has released a national AI policy framework, directed by President Trump's December Executive Order, to replace fragmented state regulations that risk stifling innovation and US AI dominance. The framework addresses key issues including child safety from online harms, prevention of AI…

Effective Altruism Funds Astroturf Campaign to Mask AI Regulation Push as Bipartisan

Effective Altruism (EA), dominated by Bay Area progressive donors, faces resistance from conservatives over its AI regulation and content governance agenda, which is perceived as censorship. To advance this agenda, founders of the EA-backed Center for AI Safety created "Humans First" without disclosing their ties, recruiting conservative…

David Sacks Frames U.S.-China AI Race as a Stack-Level Competition Requiring Energy Dominance

White House AI & Crypto Czar David Sacks characterizes the U.S.-China AI competition as a multi-layer stack problem where American leads grow larger at deeper infrastructure levels — from months ahead on frontier models to years ahead on chip manufacturing. The Trump administration's core strategy…

Sacks Frames H-20 Export Resumption as Strategic Counter to Huawei's Global AI Hardware Push

David Sacks, the White House AI and Crypto policy adviser, argues that allowing Nvidia to sell deprecated H-20 chips to China is not a concession but a strategic move to deny Huawei a captive Chinese market it could use to scale its Cloud Matrix 384 architecture globally. The core logic is zero-sum…

Andreessen on DeepSeek, AI Censorship, and the Fight Over Who Controls the Intelligence Layer

Marc Andreessen argues that DeepSeek R1 is a genuine inflection point — not because it threatens U.S. AI dominance, but because it delivered open-source reasoning AI to the world for free, undermining a nascent U.S. government effort to lock down AI under centralized political control…

DeepSeek R1's Real Cost and Strategic Implications: Debunking the $6M Myth While Validating China's AI Progress

DeepSeek's R1 reasoning model is legitimately competitive with OpenAI's o1 (released ~4 months prior), but the widely cited $6M training cost is a misleading apples-to-oranges comparison — the figure covers only the final training run, while DeepSeek's parent hedge fund likely controls a compute cluster…