This project was created by Dave Fowler as a final piece for the AI Safety, Ethics, and Society (AISES) course, hosted by the Center for AI Safety.
It includes three parts:
1. Essay
Over hundreds of conversations—from friends working deep inside San Francisco’s AI labs to complete newcomers—I’ve closely studied what resonates, what confuses, and what unlocks clarity when discussing AI risks.
This essay brings together those learnings and introduces several novel framing choices:
Decider Status: Humans hold top status on the planet not through strength or moral superiority, but through intelligence. I call this Decider status. If we build something more intelligent, we may lose that seat—and history shows we haven’t treated those beneath us kindly. Why would an intelligence far less empathetic to us than we are to animals treat us differently? Calling it a Decider AI (DAI), rather than using muddled terms like AGI or ASI, creates a clearer and more urgent picture of the threat.
Clear focus on the real solution—Halt AI: Most public discussions bend toward compromise, working to nudge outcomes slightly rather than acknowledging the obvious truth: halting development is the only safe path. We fear being labeled extreme, but reason doesn’t always sit within society’s current definition of “reasonable.”
Unapologetic rejection of weak counterarguments: Many advocates for halting AI engage in overly polite, ineffective debates with profit-driven industry leaders. But respectful warnings are being bulldozed by power and profit. It’s time to speak plainly and passionately about what must be done.
A clear, feasible, actionable plan: Many people agree AI is risky but feel it's inevitable. That hopelessness leads to disengagement. This essay ends with a realistic plan, because showing a way forward is often the difference between fear and action. Alongside the essay, I include an outlined action plan to Halt AI, summarized from the best existing writings.
Judgment DAI: Though I misspelled the domain (working on fixing that), I found it interesting to tie a very culturally present concept to what's happening with AI. I didn't lean on it heavily in the essay, but that connection might draw more serious regard for the threat, or resonate with religiously inclined audiences. Rather than run from the "doomer" name-calling, I think we should accept the label more confidently. Doom calling is exactly what we're doing; unfortunately, the threat is real this time.
2. Game
Understanding a concept is one thing—experiencing it is another. We're not used to being outsmarted, let alone overpowered. To give people a visceral sense of what it might feel like to live under a superintelligence, I created a prompt game: After Intelligence.
The responses have been powerful—many players walk away seeing the future differently.
Earlier, I also built DoomerGroomer, where an AI writes daily notes from the perspective of a future superintelligence. That project similarly aimed to turn abstract fears into something you can feel.
3. Plan
Many avoid confronting AI risks because they feel there’s nothing we can do. But highlighting feasible solutions may be even more important than highlighting the dangers.
The path to halting AI is more practical than it sounds. By regulating compute, we can control AI development more effectively than we have controlled nuclear proliferation. Once people understand this, they become more hopeful and more engaged.
I read through many of the top papers and books and attempted to distill the best strategies into a clear, straightforward outline.
Contributing
This whole site is open source on GitHub. To comment, file an issue on GitHub or open a pull request with your suggestions. You can also reach out on any of my channels.
Thank you to Su Cizem for her excellent instruction in our course and her incredible edits on my essay. Thank you also to Matt Smith, Lucas Briger, and my other classmates.