
OUR MISSION:

Reduce p(doom)

The probability that advanced AI causes human extinction.

OUR GOAL IS SIMPLE

Lower the risk of catastrophic AI outcomes and increase the likelihood that advanced systems improve life for everyone.

We design architectures that align with human values and remain safe as they scale. That means building systems that are understandable, auditable, and under meaningful human control.

FEATURED APPEARANCE

Building Safe AI: Craig, Phillipe, and 2 robots

OUR RESEARCH

10 designs for safe SI (Stay Tuned)

Safe SI Keynote (Stay Tuned)

AI Safety Series (Stay Tuned)
Designing Safe Superintelligence:
How aligned systems evolve safely (paper)

Safe Superintelligence in 3 Minutes:
Quick intro to risk-reducing SI design (YouTube video)

AI Safety Series:
Exploring ethical and technical safeguards for AGI (YouTube video)
