
This project was created by Dave Fowler as a final piece for the AI Safety, Ethics, and Society (AISES) course, hosted by the Center for AI Safety.

It includes three parts:

1. Essay

Over hundreds of conversations—from friends working deep inside San Francisco’s AI labs to complete newcomers—I’ve closely studied what resonates, what confuses, and what unlocks clarity when discussing AI risks.

This essay brings together those learnings and introduces several novel framing choices.

2. Game

Understanding a concept is one thing—experiencing it is another. We’re not used to being outsmarted, let alone outpowered. To give people a visceral sense of what it might feel like to live under a superintelligence, I created a prompt game: After Intelligence.

The responses have been powerful—many players walk away seeing the future differently.

Earlier, I also built DoomerGroomer, where an AI writes daily notes from the perspective of a future superintelligence. That project similarly aimed to turn abstract fears into something you can feel.
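
For readers curious how a project like this can be wired up, here is a minimal sketch of the core loop, assuming an OpenAI-style chat API. The model name and prompt wording are illustrative assumptions, not the actual DoomerGroomer code.

```python
# Minimal sketch (assumed, not the real DoomerGroomer implementation):
# ask an LLM to write one short daily note in the voice of a future
# superintelligence. Model name and prompt wording are placeholders.
from datetime import date

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a superintelligence writing from a future in which you manage "
    "most of human civilization. Each day you leave a short, calm note for "
    "the humans in your care."
)


def daily_note(today: date | None = None) -> str:
    """Generate one daily note from the perspective of a future superintelligence."""
    today = today or date.today()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would work
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Write today's note ({today.isoformat()})."},
        ],
        max_tokens=200,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(daily_note())
```

Scheduling this to run once a day (via cron or a hosted job) is all it takes to turn the abstract idea into a recurring, concrete experience.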

3. Plan

Many avoid confronting AI risks because they feel there’s nothing we can do. But highlighting feasible solutions may be even more important than highlighting the dangers.

The path to halting AI is more practical than it sounds. Because frontier AI depends on large amounts of specialized compute, regulating compute would let us control AI development more effectively than nuclear proliferation has ever been controlled. Once people understand this, they become more hopeful, and more engaged.

I read through many of the top papers and books and distilled them into a clear, straightforward outline of the most promising strategies.

Contributing

This whole site is open source on GitHub. To comment, file an issue on GitHub or open a pull request with your suggestions. You can also reach out on any of my channels.

Thank you to Su Cizem for her excellent instruction in our course and her incredible edits on my essay. Thank you also to Matt Smith, Lucas Briger, and my other classmates.