
Robotics

Pieter Abbeel (4) · Jim Fan (3) · Alán Aspuru-Guzik (2) · Yann LeCun (1)
No compiled wiki article for this topic yet. Raw entries below are the source material — a wiki article can be generated on demand from /admin/triggers.

D-REX: Differentiable Real-to-Sim-to-Real Engine for Dexterous Grasping

D-REX introduces a differentiable real-to-sim-to-real engine leveraging Gaussian Splat representations for robotic systems. This engine aims to bridge the simulation-to-real-world gap by enabling object mass identification from visual observations and control signals, while simultaneously facilitating…
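
The mass-identification idea can be illustrated as gradient-based fitting of a simulator parameter to observed motion. The sketch below is our toy, not D-REX's engine: a 1-D point mass, finite-difference gradients, and an inverse-mass parameterisation (positions are linear in 1/m, so the loss is quadratic and plain gradient descent converges).

```python
# Toy differentiable system identification (our sketch, not D-REX's code):
# recover an object's mass from known control forces and observed positions
# by gradient descent on a simulated-vs-observed trajectory loss.

def simulate(mass, forces, dt=0.01):
    """Roll out 1-D point-mass dynamics under the given control forces."""
    x, v, xs = 0.0, 0.0, []
    for f in forces:
        v += (f / mass) * dt
        x += v * dt
        xs.append(x)
    return xs

def identify_mass(forces, observed, steps=200, lr=0.05, eps=1e-5):
    """Fit the inverse mass: positions are linear in 1/m, so loss is quadratic."""
    def loss(inv_m):
        sim = simulate(1.0 / inv_m, forces)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    inv_m = 1.0                                  # initial guess: 1 kg
    for _ in range(steps):
        grad = (loss(inv_m + eps) - loss(inv_m - eps)) / (2 * eps)
        inv_m -= lr * grad                       # finite-difference gradient step
    return 1.0 / inv_m

forces = [1.0] * 100                             # known control signal
observed = simulate(2.0, forces)                 # "real" data from a 2 kg object
print(round(identify_mass(forces, observed), 3))
```

A real differentiable engine would backpropagate through the simulator analytically rather than using finite differences, but the loop structure is the same.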

XL-VLA: A Cross-Embodiment Latent Space for Dexterous Robot Manipulation

XL-VLA introduces a novel vision-language-action framework that utilizes a unified, embodiment-invariant latent action space. This approach enables scalable cross-embodiment training and efficient data reuse for dexterous manipulation tasks, addressing the challenge of costly data collection for diverse embodiments…
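
The shape of a shared latent action space can be sketched with per-embodiment adapters. The class and dimension names below are our own illustrative choices, not XL-VLA's architecture; a real system would learn the adapters jointly with the policy.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 32  # shared latent action dimension (illustrative choice)

class EmbodimentAdapter:
    """Maps one robot's action space to and from a shared latent space."""
    def __init__(self, action_dim):
        # Random linear encoder stands in for a learned one.
        self.enc = rng.normal(size=(LATENT_DIM, action_dim)) / np.sqrt(action_dim)
        self.dec = np.linalg.pinv(self.enc)  # decoder: left-inverse of the encoder

    def encode(self, action):
        return self.enc @ action             # robot action -> shared latent

    def decode(self, latent):
        return self.dec @ latent             # shared latent -> robot action

arm = EmbodimentAdapter(action_dim=7)        # e.g. a 7-DoF arm
hand = EmbodimentAdapter(action_dim=16)      # e.g. a 16-DoF dexterous hand

# Both embodiments meet in one latent space, so a single policy can act there,
# and encoding then decoding recovers the original action.
a = rng.normal(size=7)
z = arm.encode(a)
print(z.shape, np.allclose(arm.decode(z), a))
```

The point of the construction is the middle layer: data collected on the hand can train the same latent-space policy that drives the arm.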

RoSHI: A Hybrid Wearable for Robust Human Pose and Shape Estimation in Robotic Learning

RoSHI is a novel hybrid wearable system designed to capture rich, long-horizon human interaction data for scaling robot learning. It integrates low-cost sparse IMUs with Project Aria glasses to achieve precise 3D pose and body shape estimation in a global coordinate frame. This approach addresses limitations…
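
The complementary roles of the two modalities can be sketched as a simple fusion loop; this is our toy, not RoSHI's estimator: high-rate IMU dead reckoning drifts, and occasional global position fixes from the glasses' tracking pull it back into the global frame.

```python
import numpy as np

def fuse(imu_deltas, global_fixes, alpha=0.8):
    """imu_deltas: per-step displacements; global_fixes: {step: global position}."""
    pos, track = np.zeros(2), []
    for t, delta in enumerate(imu_deltas):
        pos = pos + delta                        # integrate the IMU (drifts)
        if t in global_fixes:                    # blend in the global fix
            pos = alpha * np.asarray(global_fixes[t]) + (1 - alpha) * pos
        track.append(pos.copy())
    return track

steps = [np.array([1.0, 0.1])] * 4               # true motion is +x; 0.1/step is drift
track = fuse(steps, {3: np.array([4.0, 0.0])})   # one global fix at the end
print(track[-1])                                 # drift largely corrected
```

A production estimator would fuse orientations and body-shape parameters with a proper filter or optimiser, but the drift-plus-anchor structure is the same.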

EgoVerse: Scaling Robot Learning Through Egocentric Human Data, Bypassing Teleoperation

EgoVerse leverages egocentric human data to scale robot learning, moving beyond traditional teleoperation. This approach, supported by the EgoScale and dexterity scaling law, uses behavior cloning from human actions to enhance robot capabilities without direct robot interaction during the learning process.
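
Behavior cloning itself is plain supervised learning on (observation, action) pairs. The miniature below is our illustration, not EgoVerse's models: a linear policy fit by least squares to synthetic "human" demonstrations, with no robot in the loop during learning.

```python
import numpy as np

rng = np.random.default_rng(1)

obs = rng.normal(size=(500, 10))       # synthetic egocentric observations
W_true = rng.normal(size=(10, 4))      # the human "policy" we only observe
actions = obs @ W_true                 # demonstrated actions

# Supervised regression: argmin_W ||obs @ W - actions||^2
W_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(observation):
    """Cloned policy: maps a new observation to a predicted action."""
    return observation @ W_hat

new_obs = rng.normal(size=10)
print(np.allclose(policy(new_obs), new_obs @ W_true))
```

The hard part the entry points at is upstream of this loss: turning raw egocentric video into observation-action pairs a robot can consume.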

Robotics Software Lags Hardware, Hampered by Reliability and Misaligned AI

Robotics development is currently bottlenecked by hardware reliability issues, which slow down software iteration despite advanced physical capabilities. The field also suffers from a lack of standardized benchmarking, leading to irreproducible results and difficulty in objective comparison. Furthermore…

World-Model-Guided Trajectory Generation Unlocks One-Shot Robot Imitation on Unseen Tasks

OSVI-WM addresses a critical gap in one-shot visual imitation learning: generalizing to unseen tasks that are visually similar to training tasks but require semantically distinct responses. The framework uses a learned world model to predict latent state-action trajectories from a single expert video…
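
The control pattern can be shown in miniature; this toy is ours, not OSVI-WM's architecture: given a latent-dynamics model and a latent state trajectory extracted from one expert video, greedily pick at each step the action whose predicted next state best matches the expert's.

```python
import numpy as np

ACTIONS = [np.array(a, dtype=float)
           for a in ([1, 0], [-1, 0], [0, 1], [0, -1])]

def world_model(state, action):
    """Stand-in learned dynamics: the state moves half a unit per action."""
    return state + 0.5 * action

def imitate(expert_states, start):
    state, chosen = np.asarray(start, dtype=float), []
    for target in expert_states[1:]:
        # One-step lookahead through the world model, tracking the expert.
        best = min(ACTIONS,
                   key=lambda a: np.linalg.norm(world_model(state, a) - target))
        state = world_model(state, best)
        chosen.append(best)
    return state, chosen

expert = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([1.0, 0.0])]
final, actions = imitate(expert, start=[0.0, 0.0])
print(final)  # ends where the expert trajectory ends
```

In the real setting both the dynamics and the expert trajectory live in a learned latent space, which is what lets one video specify a semantically new task.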

RoboCulture Enables Cost-Effective Robotic Automation of Long-Duration Biological Experiments

RoboCulture is a flexible, low-cost platform using a general-purpose robotic manipulator to automate biological workflows, addressing limitations of current liquid handlers that require human intervention for plate loading, tip replacement, and calibration. It integrates liquid handling, lab equipment…

AnyPlace Enables Synthetic-to-Real Transfer for Diverse, Precise Object Placements in Robotics

AnyPlace is a two-stage method that uses a VLM to identify rough placement regions, enabling efficient training of a local pose-prediction model on synthetic data for diverse configurations like insertion, stacking, and hanging. Trained solely on randomly generated synthetic objects, it outperforms…
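
The two-stage split can be sketched as a pipeline; this is our toy, not AnyPlace's models: a lookup stands in for the VLM's coarse region proposal, and a nearest-free-point snap stands in for the learned pose predictor, which only ever sees geometry inside the proposed region.

```python
# Two-stage placement in miniature (our toy pipeline, not AnyPlace's models).

def propose_region(scene, instruction, half_width=0.2):
    """Stage 1 (VLM stand-in): return a rough placement centre and extent."""
    x, y = scene["anchors"][instruction]
    return x, y, half_width

def refine_pose(scene, region):
    """Stage 2 (pose-model stand-in): snap to the nearest free point in region."""
    x, y, hw = region
    local = [p for p in scene["free_points"]
             if abs(p[0] - x) <= hw and abs(p[1] - y) <= hw]
    return min(local, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)

scene = {
    "anchors": {"place the mug on the shelf": (1.0, 2.0)},
    "free_points": [(0.9, 2.1), (1.4, 2.0), (3.0, 0.0)],
}
region = propose_region(scene, "place the mug on the shelf")
pose = refine_pose(scene, region)
print(pose)
```

Restricting stage 2 to a local crop is what makes training on random synthetic objects feasible: the pose model never has to reason about the full scene or the language instruction.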