youtube / dwarkesh / 1d ago
Current AI development, heavily reliant on pre-training and supervised learning with extensive human input, struggles with genuine on-the-job learning and generalization. The economic impact remains limited because models lack the continuous, self-directed learning capabilities inherent in human intelligence, capabilities that are crucial for navigating real-world complexity and diverse job requirements. Achieving true AGI requires a breakthrough in continual learning, allowing AI to acquire and adapt skills efficiently across varied and evolving contexts.
agi-timelines · llm-capabilities · continual-learning · ai-diffusion · ai-economics · model-competition · reinforcement-learning
“The current approach of scaling reinforcement learning on top of LLMs with extensive pre-baking of skills is fundamentally flawed if AGI is imminent.”
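The pre-baking-vs-continual-learning contrast can be made concrete with a toy experiment (my own framing, not from the episode): a predictor fitted once on "pre-training" data and then frozen, versus one that keeps updating on the job, both evaluated on a task whose distribution drifts.

```python
# Toy sketch (assumed, not from the talk): frozen pre-trained predictor vs.
# a continual learner that updates after every example, on a drifting stream.
def frozen_vs_continual(stream, lr=0.2):
    pre = stream[:10]                 # "pre-training" slice of the stream
    frozen = sum(pre) / len(pre)      # fit once, then never updated
    w = frozen                        # continual learner starts identically
    frozen_err = cont_err = 0.0
    for x in stream[10:]:
        frozen_err += (x - frozen) ** 2   # frozen model's squared error
        cont_err += (x - w) ** 2          # continual learner's squared error
        w += lr * (x - w)                 # on-the-job update after each example
    return frozen_err, cont_err

# The distribution drifts from around 0.0 to around 5.0 after pre-training.
stream = [0.0] * 10 + [5.0] * 50
f_err, c_err = frozen_vs_continual(stream)
print(f_err > c_err)  # → True: the continual learner adapts, the frozen one cannot
```

The gap grows with the amount of drift, which is the digest's point in miniature: a model whose skills are fixed at training time pays a permanent error tax on any task that changes under it.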
youtube / dwarkesh / 1d ago
The discussion examines how the brain's learning and steering subsystems operate, hypothesizing that evolution hardwired specific, complex loss functions into the steering subsystem to guide learning in the cortex. It contrasts this with current LLMs, which rely primarily on simple next-token prediction, and introduces omnidirectional inference as a more generalized learning capability present in the brain. It also touches on the potential for AI to adopt similar architectural and algorithmic principles for more sample-efficient learning.
neuroscience-research · ai-capabilities · brain-computer-interfaces · machine-learning-algorithms · ai-safety · formal-verification · scientific-funding
“The brain's learning efficacy stems from evolutionarily developed, varied loss functions within the 'Steering Subsystem.'”
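For context on the "simple next-token prediction" being contrasted with the brain's varied loss functions, here is a minimal sketch (not from the episode) of that single training signal: the mean cross-entropy of predicting each next token, written out with NumPy.

```python
# Minimal sketch of the standard LLM training objective: next-token
# cross-entropy. Everything here is illustrative, not any specific model's code.
import numpy as np

def next_token_loss(logits, targets):
    """Mean cross-entropy of predicting token t+1 from context up to t.

    logits:  (T, V) unnormalized scores, one row per sequence position
    targets: (T,)   index of the actual next token at each position
    """
    # log-softmax with max subtraction for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # negative log-probability the model assigned to each true next token
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy example: vocabulary of 4 tokens, 3 positions, confident correct model.
logits = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 2.0, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 2.0]])
targets = np.array([0, 1, 3])
print(next_token_loss(logits, targets))  # small loss, roughly 0.37
```

The point of the contrast is that this one scalar objective drives essentially all of pre-training, whereas the episode's hypothesis is that the cortex is trained by many evolutionarily tuned loss signals at once.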
youtube / dwarkesh / 1d ago / needs_transcription
youtube / dwarkesh / 1d ago / needs_transcription
youtube / dwarkesh / 1d ago / needs_transcription
youtube / dwarkesh / 1d ago / needs_transcription
youtube / dwarkesh / 1d ago / needs_transcription
youtube / dwarkesh / 1d ago / needs_transcription