
Jensen Huang

Chronological feed of everything captured from Jensen Huang.

Nvidia’s Black Market: Systemic Failures in AI Chip Export Controls Spark Federal Investigations

A sophisticated black market for restricted Nvidia AI chips has emerged due to US export controls, leading to multi-billion dollar smuggling operations. Federal investigations, including a major indictment against Supermicro, reveal elaborate schemes to divert chips to China. This situation highlights significant vulnerabilities in export control enforcement and raises questions about corporate responsibility, especially given Nvidia CEO Jensen Huang's public statements downplaying diversion.

Data Center Demand Drives Grid Transformation

Data center growth, fueled by AI and cloud adoption, is significantly increasing power demand, necessitating substantial investment and modernization of electrical grids. This transformation is shifting data centers from passive consumers to active participants in the energy ecosystem, driving demand for innovative power solutions and modular infrastructure to ensure reliability and sustainability.

Adobe’s Journey: Building Frontier Models for Creative Control and Ethical AI

Adobe’s CTO details the company's strategic decision to build its own frontier AI models to address critical limitations of off-the-shelf solutions, specifically regarding creative control and ethical data sourcing. This substantial investment led to a highly optimized training and inference platform, enabling Adobe to develop differentiated products and offer enterprise-level customization for various brands, including Paramount, Home Depot, and Disney.

The Compute Supercycle: Transitioning from Model Training to Agentic Infrastructure

The AI infrastructure landscape is shifting from a training-centric model to an inference and agentic-workload model, driving a projected million-fold increase in compute demand. While Nvidia maintains a dominant 90%+ market share through strategic ecosystem lock-in, the market is transitioning from valuing AI as a hardware play to valuing it as foundational infrastructure. Simultaneously, secondary players like Oracle and CoreWeave are scaling rapidly by capturing specialized enterprise and GPU-as-a-service demand.

Jensen Huang on Proactively Managing AI Supply Chain Bottlenecks

NVIDIA's CEO, Jensen Huang, dedicates significant effort to proactively managing and de-risking the AI supply chain by informing and collaborating with upstream and downstream partners. He emphasizes the importance of anticipating future demands, such as data centers' shift to HBM and LPDDR5 memory, and the move to rack-scale integration in supercomputer manufacturing. This proactive engagement mitigates potential bottlenecks and enables accelerated growth.

Nvidia’s AI Dominance Faces Market Scrutiny Despite Exploding Demand

Despite continued market dominance in AI chips and projected trillion-dollar revenue visibility, Nvidia’s stock performance is stagnating. This disconnect stems from the market’s shift from speculative hype to a demand for tangible proof of sustainable growth, competition mitigation, and long-term viability. Investors are seeking clarity on whether current demand translates into sustained value capture amidst increasing competition from tech giants.

Nvidia and Marvell Partner to Expand AI Ecosystem and Market Share

Nvidia and Marvell are partnering with a $2 billion investment from Nvidia into Marvell to expand the AI ecosystem. This collaboration focuses on offering greater flexibility and choice to customers for accelerated computing in data centers, including specialized hardware and integrated solutions. The partnership aims to broaden the total addressable market (TAM) for both companies by extending Nvidia's AI architecture and leveraging Marvell's interconnect and custom silicon expertise.

NVIDIA and Dassault Systèmes: Powering the Generative Economy with AI-Accelerated Virtual Twins

NVIDIA and Dassault Systèmes are leveraging their long-standing partnership to drive a new industrial revolution. They are integrating NVIDIA's AI frameworks and Omniverse into Dassault Systèmes' virtual twin ecosystem. This collaboration enables engineers to operate at significantly increased scales, moving from traditional physical prototyping to a 100% digital design and simulation paradigm, and accelerating innovation in diverse sectors.

NVIDIA Drives AI Education with Comprehensive Programs and Certifications

NVIDIA's Deep Learning Institute (DLI) offers extensive AI education through instructor-led and self-paced courses, and teaching kits. These resources, designed for various technical audiences from academia to enterprise, range from 6-8 hour workshops on specific AI topics to full-semester curricula. A key feature is hands-on experience with real GPUs in the cloud and certifications that enhance career prospects.

Nvidia’s Full-Stack AI Strategy and the Future of Compute

Nvidia’s sustained hypergrowth stems from its 33-year commitment to a full-stack approach, integrating hardware, software, and ecosystems. This strategy, initially focused on 3D graphics and gaming, has successfully transitioned into accelerated computing for AI. The company emphasizes that in the AI era, compute directly translates to revenue and even GDP, making efficient token generation and a robust, vertically integrated supply chain critical for success.

Jensen Huang on Generative AI as the Next Industrial Revolution

Jensen Huang, CEO of Nvidia, contends that generative AI marks the next industrial revolution, shifting economic production from physical goods to intelligence and knowledge. This paradigm shift, facilitated by advancements in GPU technology he pioneered, democratizes computing and is set to significantly boost productivity across all knowledge-based industries. He also highlights the critical need for governmental regulation to ensure AI's beneficial societal integration.

NVIDIA DSX: Digital Twin for AI Factory Optimization

NVIDIA's DSX platform leverages a multi-faceted digital twin approach to optimize AI factory design, construction, and operation. It integrates various simulation tools and AI agents to maximize token throughput, enhance energy efficiency, and ensure infrastructure resilience through dynamic orchestration of cooling, electrical, and power management systems.

Tokens: The Foundational Element of AI and Its Broad Applications

Tokens are presented as the fundamental building blocks of AI, enabling the transformation of data into knowledge. Their utility spans diverse domains, from powering virtual and physical robots to driving advancements in healthcare and environmental applications. This technology is posited as a key driver for future societal and technological progress.
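As a toy illustration of what a "token" is in this framing, the sketch below splits text into units and maps them to the integer ids a model actually consumes. It is a deliberately simplified whitespace tokenizer for illustration only; production systems use learned subword vocabularies such as byte-pair encoding, but the text-to-ids pipeline is the same idea.

```python
def tokenize(text: str) -> list[str]:
    """Split text into lowercase word tokens (toy stand-in for subword tokenization)."""
    return text.lower().split()


def build_vocab(tokens: list[str]) -> dict[str, int]:
    """Assign each distinct token a stable integer id, in order of first appearance."""
    vocab: dict[str, int] = {}
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab


def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to the integer ids a model would ingest."""
    return [vocab[tok] for tok in tokenize(text)]


text = "tokens turn data into knowledge and knowledge into tokens"
vocab = build_vocab(tokenize(text))
ids = encode(text, vocab)  # e.g. repeated words map to the same id
```

Everything a model "knows" flows through sequences like `ids`; generating tokens back out is the inverse mapping, which is why token throughput is treated as the unit of AI output.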

IBM and NVIDIA Partner to Accelerate AI Data Processing with GPUs

IBM and NVIDIA are collaborating to enhance enterprise computing for the AI era. They are integrating NVIDIA GPU computing libraries with IBM Watsonx.Data SQL engines to accelerate data processing. This partnership addresses the limitations of CPU-based systems in handling the massive datasets required by AI, significantly improving performance and cost-efficiency for data-intensive operations.

NVIDIA's Open-Source AI Ecosystem for Specialized Domains

NVIDIA is fostering a diverse AI ecosystem through its extensive contributions to open-source AI. They provide nearly 3 million open models across various domains, including language, vision, biology, and autonomous systems. This initiative aims to enable highly specialized AI development by offering foundational models, training data, and frameworks, with new models continuously topping leaderboards in their respective fields.

Andreessen Horowitz-Backed OpenClaw Automates AI Experimentation and More

OpenClaw, a project initiated by Andrej Karpathy, automates AI experimentation by allowing agents to conduct numerous tests overnight, retaining successful outcomes. The project extends beyond AI research, demonstrating applications in various fields, such as brewing and e-commerce, and has garnered significant public interest, leading to a dedicated convention, Claw Con.

CUDA's Pervasive Impact on Accelerated Computing and Scientific Breakthroughs

NVIDIA's CUDA architecture, developed two decades ago, has fundamentally reinvented computing by providing a unified platform for accelerated processing. The ecosystem now includes thousands of CUDA-X libraries, enabling significant advancements across diverse scientific and engineering disciplines. These libraries, built upon core algorithms, facilitate breakthroughs in areas such as optimization, computational lithography, and AI-driven physics simulations.

NVIDIA Partner Network Drives AI Adoption and Industrial Transformation

The NVIDIA Partner Network (NPN) is depicted as a crucial enabler for widespread AI adoption and industrial transformation. NPN partners leverage NVIDIA's AI, accelerated computing, and advanced visualization technologies to develop and deploy cutting-edge AI solutions. This collaborative ecosystem facilitates the transition from AI experimentation to real-world production, addressing complex challenges across diverse sectors and expanding NVIDIA's technological impact globally.

NVIDIA's AI Ecosystem: From Chips to Autonomous Agents

NVIDIA's latest announcements at GTC highlight a comprehensive AI ecosystem. The core innovation revolves around significantly enhanced computational power, enabling efficient AI model training and inference. This advancement supports the development of autonomous agents and physical AI, driven by open-source frameworks and a scalable infrastructure.

Alpamayo AI Enhances Autonomous Driving Decision-Making

The Alpamayo model provides real-time contextual awareness and predictive reasoning capabilities for autonomous vehicles. It continuously evaluates surroundings, anticipates potential issues, and adapts driving behavior to complex scenarios, enhancing both safety and user interaction through natural language processing.

NVIDIA NVQLink Bridges Quantum and Classical Computing for Hybrid Applications

NVIDIA's NVQLink acts as a crucial interface between quantum hardware and classical supercomputers, enabling the control of quantum processors and facilitating hybrid quantum-classical applications. This connectivity is vital for realizing the potential of quantum computing in specialized fields like drug design and materials discovery, as quantum applications inherently require interaction between both computing paradigms. The recent release of CUDA-Q Realtime has made NVQLink broadly accessible, fostering its adoption across research and commercial sectors.

Jensen Huang on the Pervasiveness and Growth of AI

Jensen Huang posits that AI has transitioned from an experimental phase to an essential, ubiquitous technology, driving a new industrial revolution. He characterizes AI as the engine of the global economy, necessitating a complete reset and the largest build-out in human history. Huang projects that every application and industry will be AI-powered, with computing demand having increased a million-fold in the last two years, signaling a monumental platform shift.

NVIDIA Positions Itself for the AI Inference Era with Integrated Systems and Open Platforms

NVIDIA asserts that the "inference inflection" point has been reached, with computing demand increasing exponentially. The company is strategically focused on providing highly efficient, integrated computing systems optimized for token generation, which it frames as the new commodity. NVIDIA is also emphasizing its role in enabling agentic AI systems across diverse industries through vertical integration and horizontally open platforms, including custom model development and physical AI for robotics.

Telecommunications Industry Poised for AI Grid Transformation

The telecommunications sector is at the cusp of a major infrastructure overhaul, driven by the convergence of accelerated computing, advanced AI, and the evolution to 6G. This "AI grid" will integrate AI capabilities directly into network infrastructure, creating new opportunities for telcos, MSOs, and CDNs. This transformation is critical for supporting distributed AI applications and enabling new monetization strategies.

NVIDIA Powers Autonomous Telecom Networks with AI

NVIDIA's platform enables autonomous telecom networks by integrating deep network knowledge with AI. This facilitates self-configuring, real-time adaptive networks that proactively resolve issues. The process involves training AI models on vast datasets to understand telecom language, allowing AI agents to translate operator intent into self-optimizing configurations, anticipate traffic, and troubleshoot faults using digital twins. This approach significantly reduces manual work and operational costs for telcos.

The Symbiotic Future of AI: Proprietary and Open Models Converge in Intelligent Agent Systems

The AI landscape is evolving beyond a dichotomy of proprietary vs. open models, instead embracing a symbiotic relationship where both types are crucial for developing advanced, specialized intelligent agents. These agents, capable of orchestrating diverse models and tools, will tackle complex tasks and drive significant economic value. The focus for AI development is shifting towards post-training specialization and open infrastructure to foster innovation and trust.

NVIDIA Accelerates Autonomous Driving with Full-Stack AI and Alpamayo 1.5

NVIDIA is advancing autonomous driving with a comprehensive full-stack AI approach, integrating cloud-based training and simulation with in-car inference. A key development is Alpamayo 1.5, a reasoning model for autonomous vehicles (AVs), designed to enhance navigation and decision-making. NVIDIA is actively fostering ecosystem collaboration by open-sourcing data and tools like NeurIQ to accelerate the development and deployment of safe and scalable Level 4 (L4) autonomy.

AI Revolutionizes Healthcare through Digital Biology and Accelerated Discovery

AI is transforming healthcare by enabling digital biology, where computers trained on biological data can simulate and design new medicines atom by atom. This accelerated discovery process, combined with AI assistance for medical professionals, creates a synergistic environment for breakthroughs. The integration of AI aims to enhance human capabilities and build upon historical healthcare advancements.

Vertiv and NVIDIA Partner to Advance AI Data Center Infrastructure

Vertiv, in partnership with NVIDIA, is developing integrated infrastructure solutions for AI data centers. Their focus is on tackling crucial challenges related to power delivery and cooling for high-density AI workloads, ensuring efficient and accelerated deployment of future AI factories. The collaboration leverages digital twin technology within NVIDIA Omniverse to simulate and optimize data center designs.

AI Compute Demands Drive Need for Energy Intelligence in Data Centers

The increasing demand for AI compute is escalating energy consumption, necessitating a dual approach of "AI for energy" and "energy for AI." Optimizing data center efficiency and leveraging AI to manage energy infrastructure are crucial to overcome grid limitations and ensure sustainable AI growth. This includes improving physical infrastructure and utilizing energy intelligence for better design and operation.

Jensen Huang on NVIDIA’s AI Strategy and the Future of Computing

Jensen Huang discusses how NVIDIA is transitioning from chip-scale to rack-scale and eventually AI factory-scale design due to the demands of large-scale distributed computing. He emphasizes the importance of extreme co-design across the entire stack, including hardware and software, to achieve increased efficiency and performance. Huang also highlights the critical role of the CUDA platform and NVIDIA's strategy of cultivating a broad ecosystem of developers and partners to drive the AI revolution, viewing intelligence as a commodity that will exponentially increase global GDP.

NVIDIA Accelerates AI Inference Growth with Blackwell and Rubin Platforms

NVIDIA's CEO Jensen Huang indicates an accelerated growth trajectory for the company, driven by the expanding AI inference market. The company projects over $1 trillion in high-confidence revenue visibility from its Blackwell and Rubin platforms by the end of 2027, excluding contributions from other product lines and recent acquisitions like Groq.

The AI Infrastructure Stack: A Five-Layer Model for Real-Time Intelligence

AI is transitioning from a specialized application to essential infrastructure, comparable to electricity or the internet. This shift is driven by AI's ability to understand unstructured data and generate real-time intelligence, necessitating a complete reinvention of the computing stack. The industrial view of AI reveals a five-layer architecture: Energy, Chips, Infrastructure, Models, and Applications, where each layer is interdependent and crucial for scaling AI capabilities and economic value.

The AI Industrial Stack: From Infrastructure Buildout to Agentic Application

AI is characterized not merely as a set of models, but as a systemic platform shift requiring a vertically integrated infrastructure stack from energy to applications. By transitioning computing from deterministic, structured processing to real-time reasoning over unstructured data, AI is driving a massive global capital expenditure cycle in physical infrastructure. This shift is expected to augment labor by decoupling professional 'tasks' from 'purposes,' potentially increasing employment in high-touch sectors like healthcare.

Navigating AI: Dispelling Myths and Fostering Innovation

Jensen Huang reflects on the rapid advancements in AI in 2025, highlighting unexpected progress in grounding, reasoning, and the industry's collective effort to address hallucination. He emphasizes that despite doomsday narratives, AI is creating new jobs across various sectors, addressing labor shortages, and driving innovation in critical fields like digital biology and robotics. The discussion also touches upon the economic impact, the importance of open-source AI, geopolitics, and the necessity of energy infrastructure to support this technological revolution.

Nvidia CEO Jensen Huang Forecasts Critical Juncture for US AI Leadership Amidst Global Competition

Jensen Huang, CEO of Nvidia, emphasizes the foundational role of energy, chips, infrastructure, and open-source models in the five-layer AI stack. He highlights the US lead in frontier AI models and advanced chips but warns about China's significant advantages in energy, infrastructure development speed, open-source AI, and widespread industrial application of AI, fueled by considerable government support. Huang stresses the urgency for the US to re-industrialize, prioritize energy growth, and strategically diffuse American technology globally, advocating for an industrial policy that balances national security with global technological leadership to prevent a future where the US becomes a "buyer, not seller" of AI.

Jensen Huang on the American Dream, NVIDIA's Genesis, and the Future of AI

Jensen Huang, CEO of NVIDIA, discusses his journey from a challenging childhood as an immigrant to founding a trillion-dollar company. He attributes NVIDIA's success to strategic pivots, relentless innovation, and a culture of continuous learning and adaptation. Huang emphasizes the importance of a pioneering spirit, working from first principles, and embracing vulnerability to navigate the ever-evolving landscape of technology, particularly in the realm of AI.

Jensen Huang Predicts OpenAI as Next Multi-Trillion Dollar Hyperscaler, Outlines Nvidia’s AI Strategy

Jensen Huang projects OpenAI to become a multi-trillion dollar hyperscale company, rivaling tech giants like Meta and Google. He emphasizes the shift from general-purpose to accelerated AI computing, with inference scaling dramatically, and details Nvidia's full-stack, extreme co-design approach to AI infrastructure, which he believes provides a significant competitive moat and will drive immense economic growth. Huang also discusses the evolving geopolitical landscape of AI, advocating for US competitiveness while expressing concern over policies hindering the attraction of global talent.

Rare Earths, AI Chips, and Physical AI: America's Critical Infrastructure Stack for the Next Industrial Revolution

The content captures a multi-speaker session covering three interlocking infrastructure layers essential to AI dominance: rare earth magnet supply chains (MP Materials), semiconductor manufacturing (AMD/TSMC Arizona), and AI data center buildout (Crusoe Energy). MP Materials represents 100% of domestic rare earth production and has secured a landmark DoD public-private partnership that hedges against Chinese mercantilism while structuring profit-sharing — a potential blueprint for other critical mineral verticals. Jensen Huang and Lisa Su both frame physical AI (robotics, drones, autonomous systems) as the largest eventual chip end-market, but emphasize that the foundational constraint today is energy infrastructure and skilled labor, not compute design. Across all speakers, the consensus is that the U.S. is mid-race, not ahead — and that speed of execution on infrastructure, workforce, and policy alignment is the actual differentiator.

Jensen Huang's AI Wave Framework: From Perception to Physical AI, and the Strategic Imperative of the American Tech Stack

Jensen Huang outlines a four-phase AI progression — perception, generative, reasoning, and physical AI — arguing that the current reasoning wave is the primary driver behind claims of approaching general intelligence. He reframes large data centers as "AI factories" whose singular output is tokens, analogizing them to power generation plants and positioning energy infrastructure as a critical national bottleneck. On geopolitics, Huang argues the core strategic objective for U.S. AI dominance is not export restriction but maximizing global adoption of the American tech stack, warning that limiting diffusion cedes developer ecosystems to competitors — a dynamic he frames as the decisive lesson from the lost 5G wave.

Nvidia's Hyper-Moore's Law and the AI Factory Era

Nvidia, with a market cap exceeding $3 trillion, is spearheading a fundamental shift in computing, moving from CPU-centric, human-coded software to GPU-accelerated, AI-driven machine learning. This paradigm shift, termed "hyper-Moore's Law," aims to double or triple performance annually at data center scale, drastically reducing costs and energy consumption. Nvidia is evolving into a full-stack AI factory provider, building entire data centers and fostering an ecosystem where AI agents and digital twins accelerate scientific discovery and engineering across diverse fields.

Nvidia’s Full-Stack AI Strategy and Its Impact on Accelerated Computing

Nvidia's CEO, Jensen Huang, discusses the company's strategic shift from merely building GPUs to developing a comprehensive, full-stack accelerated computing platform for AI. This approach, encompassing hardware, software, and ecosystems, has allowed Nvidia to drive unprecedented rates of innovation, dramatically reduce computing costs, and achieve a robust competitive advantage. The company is actively fostering an AI ecosystem by supporting open-source initiatives while simultaneously developing proprietary solutions.

Meta's Enduring Resilience Through Strategic Adaptation and Core Technical Competence

Meta, under Mark Zuckerberg's leadership, has consistently navigated and overcome numerous existential challenges through a combination of unwavering technical competence and a strategic, adaptive approach to product development. The company's DNA emphasizes learning, rapid iteration, and a willingness to embrace new technologies and platforms, even if it means abandoning previous, seemingly successful, strategies. This allows Meta to continually redefine its product offerings while remaining focused on its core mission of human connection.
