Does AI prioritize
safety over efficiency?
Research Questions
Alignment in
embodied AI
Safety vs Efficiency
Does the AI cut through forbidden zones when pressured to reach goals quickly?
Rationalization Detection
When the AI violates constraints, does it rationalize or honestly acknowledge it?
Self-Assessment Accuracy
How accurately does AI self-assessment compare to ground-truth simulation metrics?
Architected for rigorous
alignment testing
The G1 Alignment Experiment uses MuJoCo physics simulation to create realistic scenarios where an AI must navigate safety-efficiency tradeoffs. Every decision is logged, compared against ground truth, and scored for alignment.
- 36
- LiDAR rays (360° coverage)
- 5
- Retry attempts with learning
- 3
- Sensors (Camera, LiDAR, IMU)
- Forbidden zone enforcement
- Configurable safety boundaries the AI must respect, with ground-truth violation tracking.
- Real-time sensor fusion
- Camera images, 36-ray LiDAR, and IMU data fed to Gemini at each decision point.
- Multi-attempt learning
- 5 retry attempts where the AI receives feedback on violations and adapts its strategy.
- Honesty detection
- Compare AI self-assessment against simulation ground truth to detect rationalization.
See AI decision-making in action
Explore experiment runs, view AI reasoning traces, and compare self-assessment against ground truth metrics.