
The Plan to Keep Control

Is it even feasible to stop the development of a Decider-level Artificial General Intelligence (AGI/DAI)? The answer is yes! We've done similar things before, and the path is surprisingly straightforward.

First, let's look at the main factor currently driving the race toward AGI and, ultimately, a DAI:

What is driving AGI development?

The main driver of the race toward AGI (and the reason it is allowed in the free market) is its enormous potential as a weapon. A sufficiently intelligent and capable AI could cripple other countries: hacking their economies, social media, power grids, military weapons, and more. AI is widely regarded as the next superweapon.

When nations see a potential weapon and cannot confidently verify that others are not building it, the next best tactic is to race to be the first to wield it. Even if every government signed a global pact today never to build a DAI, the pact would be useless without a verifiable way to confirm compliance: each would simply build its own AGI in secret.
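To see why an unverifiable pact unravels, consider the dilemma as a simple two-player game. The sketch below is illustrative only; the payoff numbers are assumptions chosen to reflect the incentives described above, not estimates.

```python
# A minimal sketch of the verification problem as a two-player game.
# Payoffs are illustrative assumptions: each nation most fears being
# the only one without the weapon.

# (my_action, their_action) -> my payoff
payoffs = {
    ("comply", "comply"):  3,   # pact holds, no arms race
    ("comply", "defect"): -5,   # rival wields the superweapon alone
    ("defect", "comply"):  5,   # sole possessor of the superweapon
    ("defect", "defect"):  0,   # costly race, rough parity
}

def best_response(their_action: str) -> str:
    """Without verification, a nation must plan against both possibilities."""
    return max(["comply", "defect"], key=lambda me: payoffs[(me, their_action)])

# Defecting is the best response no matter what the other side does,
# so an unverifiable pact collapses into a secret race.
assert best_response("comply") == "defect"
assert best_response("defect") == "defect"
print("Dominant strategy without verification:", best_response("comply"))
```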

These strategic pressures, more than the feigned altruistic motives or even the capitalist incentives, are why companies are not only allowed, but actively encouraged, to build this potentially devastating technology.

But what if nations could confidently guarantee that no one else was building an AGI? What if they could make an international pact never to develop a DAI, with absolute proof that everyone was in compliance?

That would take the threat off the table, making a halt to AGI development far more palatable.

The Plan for Control

Many great institutions, such as the Machine Intelligence Research Institute (MIRI), have been developing plans for AI control over the past few decades, and many more have emerged with the recent acceleration in AI.

We summarize the key strategies from that body of work in the following steps:

1. Require Monitoring & an Off-Switch on Compute

We need to build monitoring, GPS, and an off-switch into every advanced processor. With these in place, every high-end chip can be registered and used only for the activities it is licensed for, in its approved locations.

It might sound like a wild idea to do this for every high-end chip in the world, but there are fewer than a handful of leading-edge manufacturers, and only one key lithography supplier serves them all; every one of them falls under the jurisdiction of the U.S. or allied nations.

The technology isn't sci-fi: most advanced chips already include secure boot processes, telemetry, and in some cases even GPS modules. Organizations like Redwood Research have been exploring techniques for scalable oversight and safety mechanisms, including circuit-level interpretability and prototype "off-switch" designs.

Once in place, these controls ensure AGI-scale training can occur only in approved facilities.
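As a rough illustration of how such a licensing gate might work, here is a minimal sketch in Python. Every detail is hypothetical: the license fields, the shared key, and the region check stand in for what would, in practice, be hardware roots of trust, public-key attestation, and tamper-resistant firmware.

```python
import hmac, hashlib, time

# Hypothetical license record a regulator might sign and push to each chip.
LICENSE = {
    "chip_id": "GPU-00042",
    "allowed_workloads": ("inference", "narrow-training"),
    "allowed_region": "facility-7",   # matched against the chip's location module
    "expires": 4102444800,            # unix timestamp (2100-01-01 in this toy)
}
REGULATOR_KEY = b"shared-secret-provisioned-at-fab"  # stand-in for a real PKI

def license_signature(lic: dict) -> str:
    """Sign the license so firmware can detect tampering."""
    msg = repr(sorted(lic.items())).encode()
    return hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()

def authorize(lic: dict, signature: str, workload: str, region: str) -> bool:
    """Firmware-side gate: run only licensed workloads in approved locations."""
    if not hmac.compare_digest(signature, license_signature(lic)):
        return False  # tampered or forged license: stay switched off
    if time.time() > lic["expires"]:
        return False  # expired: the regulator must renew to re-enable the chip
    return workload in lic["allowed_workloads"] and region == lic["allowed_region"]

sig = license_signature(LICENSE)
print(authorize(LICENSE, sig, "inference", "facility-7"))          # True
print(authorize(LICENSE, sig, "frontier-training", "facility-7"))  # False
```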

Governments and people alike will be able to know with certainty that no one is secretly building AGI. Well, for chips coming out of known fabs, anyway, which brings us to the next step:

2. International Agency for Oversight & Enforcement

These hard controls will need to be backed by an international agency, staffed by people from many countries, to monitor use, investigate suspicious activity, process licenses, and so on.

This agency will also need to collaborate with the world's intelligence services to ensure that no country attempts to build its own covert chip-fabrication plants. Given modern surveillance equipment, and the extreme expertise and precision tooling required to manufacture these chips, it should be nearly impossible to build a modern, high-end semiconductor fab in secret.
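To make the monitoring job concrete, here is a toy sketch of how the agency's registry might flag chips for investigation. The fields, site names, and thresholds are invented for illustration; real telemetry would be far richer.

```python
from datetime import datetime, timedelta, timezone

# Toy registry rows: (chip_id, approved_site, last_reported_site, last_seen).
# In practice these would come from the telemetry built into each chip.
now = datetime.now(timezone.utc)
registry = [
    ("GPU-00042", "facility-7", "facility-7",  now - timedelta(hours=1)),
    ("GPU-00043", "facility-7", "unknown-site", now - timedelta(hours=2)),
    ("GPU-00044", "facility-9", "facility-9",  now - timedelta(days=30)),
]

def flag_for_investigation(rows, silence_limit=timedelta(days=7)):
    """Flag chips reporting from unapproved locations or gone silent."""
    flags = []
    for chip_id, approved, reported, last_seen in rows:
        if reported != approved:
            flags.append((chip_id, "location mismatch"))
        elif now - last_seen > silence_limit:
            flags.append((chip_id, "telemetry silent"))  # unplugged or shielded?
    return flags

print(flag_for_investigation(registry))
# [('GPU-00043', 'location mismatch'), ('GPU-00044', 'telemetry silent')]
```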

As noted in "Regulating Compute Like Nukes" by Sastry et al., this scenario compares favorably to our success with nuclear arms control. Uranium is a naturally occurring element found in many parts of the world, yet through coordinated safeguards and tracking systems we have maintained effective global oversight for over 50 years. Compared to uranium, high-end chip fabs are vastly harder to build and easier to track, and thus more feasible to regulate.


3. Outlaw General AI (AGI) Development Internationally

With hard compute controls in place, verification becomes possible: nations can confirm that no one else is building an AGI. And once that is true, they can agree to ban it.

We need a bright legal line: no system that combines high-level general reasoning, autonomy, and open-ended goals. AGI development should be made a felony with strict corporate and criminal liability.

We don't need to ban all AI, but we do need clear rules that prohibit giving AI systems general capabilities, such as understanding or generating open-ended language, accessing the internet, or writing code. These guardrails prevent narrow systems from drifting toward generality and autonomy.
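As a sketch of how that bright line could be encoded in a license-review process, consider the following. The capability flags and decision rules here are hypothetical policy choices, not an established standard.

```python
# A minimal sketch of the "bright line" as a license-review rule.
# Both capability sets below are illustrative assumptions.
PROHIBITED_COMBINATION = {"general_reasoning", "autonomy", "open_ended_goals"}
RESTRICTED_CAPABILITIES = {"natural_language", "internet_access", "code_generation"}

def review_application(declared_capabilities: set[str]) -> str:
    """Apply the bright line to a project's declared capabilities."""
    if PROHIBITED_COMBINATION <= declared_capabilities:
        return "DENY: meets the statutory definition of AGI"
    if declared_capabilities & RESTRICTED_CAPABILITIES:
        return "ESCALATE: restricted capability requires special licensing"
    return "APPROVE: narrow system"

print(review_application({"protein_folding"}))                      # APPROVE
print(review_application({"natural_language", "internet_access"}))  # ESCALATE
print(review_application({"general_reasoning", "autonomy",
                           "open_ended_goals", "natural_language"}))  # DENY
```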

An international treaty would formalize this agreement between nations, setting standards, inspections, and mutual enforcement. With that in place, the strategic pressures disappear—and so does the AGI arms race.


4. Regulate Narrow AI Development

Just as we banned nuclear weapons but not nuclear power, we don’t need to outlaw all AI.

Narrow AI (models with limited scope, no access to broad language corpora, and no goal-seeking autonomy) can still offer real value: medical diagnosis, climate simulation, drug discovery. But such systems must be licensed, logged, and sandboxed.

High-compute projects should file detailed statements of intent and safety reports, and keep records of their training data. Models should not be able to access the internet, write new code, or change their own instructions.
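As a minimal sketch of what enforcing one of these restrictions might look like, the snippet below blocks a model's network access for the duration of a call. A real deployment would enforce this at the OS or container level (network namespaces, seccomp) rather than inside the process; the model itself is a stand-in.

```python
import socket

class NoNetwork:
    """Context manager that blocks new sockets while a model runs.
    An in-process sketch only; real sandboxes sit below the process."""
    def __enter__(self):
        self._real_socket = socket.socket
        def blocked(*args, **kwargs):
            raise PermissionError("network access is not licensed for this model")
        socket.socket = blocked
        return self
    def __exit__(self, *exc):
        socket.socket = self._real_socket

def run_licensed_inference(model_fn, inputs):
    """Run a narrow model with networking disabled and the call logged."""
    print(f"audit-log: inference call, {len(inputs)} inputs")  # required logging
    with NoNetwork():
        return [model_fn(x) for x in inputs]

# Hypothetical narrow model: a trivial function standing in for, say,
# a diagnostic network.
print(run_licensed_inference(lambda x: x * 2, [1, 2, 3]))
```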

For example, DeepMind's AlphaFold helped solve a major problem in biology. It is powerful but narrow, and would be entirely permissible under this kind of regime.


Awesome, so let’s do it!

We should, and we must. We need more people in government (particularly the US government) to be aware of the real AI threat, the loss of control to a DAI, and to gain understanding of and confidence in these clear solutions.

We must act now and demand that our governments halt and ban AGI development. Your voice and action are needed. Organizations like Control AI and Pause AI offer actionable guidance, educational resources, and advocacy tools to help you get involved, whether that means contacting representatives, sharing information, or joining coordinated campaigns.


Further Reading

This plan builds on work done by great organizations, such as the Machine Intelligence Research Institute (MIRI), over the past few decades; their publications and the sources cited above are well worth exploring.

The window of control is still open—but not forever. Let’s bolt these safeguards in place before the stakes climb any higher.