
Theory of Change

How does PauseAI plan to achieve its mission?

What do we want?

Our proposal describes what we want: Globally halt frontier AI development until we know how to do it safely and under democratic control.

Why don’t we have a pause yet?

The problem is not a lack of concerned experts (86% of AI researchers believe the control problem is real and important). The problem is not a lack of public support for our proposal (a large majority of people already want AI development to be slowed down). However, there are some important reasons why we don’t have a pause yet:

  • Race dynamics. AI creates a lot of value, especially if your AI is the most powerful. The desire to be the first to develop a new AI is very strong, both for companies and for countries. Companies understand that the best-performing AI model can command a far higher price, and countries understand that they can get a lot of strategic and economic power by leading the race. The people within AI labs tend to understand the risks, but they have strong incentives to focus on capabilities rather than safety. Politicians often are not sufficiently aware of the risks, but even if they were, they might still not want to slow down AI development in their country because of the economic and strategic benefits. We need an international pause. That’s the whole point of our movement.
  • Lack of urgency. People underestimate the pace of AI progress. Even experts in the field have been consistently surprised by how quickly AI has been improving.
  • Our psychology. Read more about how our psychology makes it very difficult for us to internalize how bad things can get.
  • The Overton window. Even though public support for AI regulations and for slowing down AI development is high (see polls & surveys), many of the topics we discuss are still outside the “Overton window”, which means that they are considered too extreme to discuss. In 2023 this window shifted quite a bit (the FLI Pause letter, Geoffrey Hinton quitting, the Safe.ai statement), but it is still too much of a taboo in political elite circles to seriously consider the possibility of pausing. Additionally, the existential risk from AI is still ridiculed by too many people. It is our job to move this Overton window further.

How do we pause?

Because of the race dynamics mentioned above, we should not expect a local pause. We need an international pause. We can get there in two ways:

  1. An international treaty. We banned blinding laser weapons and CFCs through treaties, so we can also ban superhuman AI through a treaty. These treaties are often initiated by a small group of countries, and other countries then join in. A summit is often the event where such a treaty is initiated and signed. We need to convince our politicians to initiate treaty negotiations. This requires public awareness, public support, and finally a feeling of responsibility on the part of the politicians.
  2. A unilateral supply-chain pause. The AI supply chain is highly centralized. Virtually all AI chips used in training runs are designed by NVidia and produced by TSMC, which in turn relies on lithography machines from ASML. If any of these monopolies were to pause, the entire AI industry would pause. We can achieve this by lobbying these companies, and by lobbying the governments that have leverage over them.

What do we do to get there?

  1. Grow the movement. The larger our group is, the more we can do. We grow our movement through radical transparency, online community building, and fostering local communities. We empower our volunteers to take action, and we make it easy for them to do so. Read our growth strategy for more on how we do this.
  2. Protests. Protests have been shown to increase public awareness and support. They are also a great way to recruit new members and strengthen the sense of community. Because our subject is relatively new, even small protests can get very good media coverage. We encourage our members to organize protests in their own cities by providing them with the tools and knowledge they need.
  3. Lobbying. Every volunteer can become an amateur lobbyist. We send emails to politicians, we meet with them, and we stay in touch. We ask them to put AI risks on the agenda and to draft a treaty. The core issue we are trying to solve is a lack of information and a lack of emotional internalization and insight in the political sphere.
  4. Inform the public. We make people aware of the risks we’re facing and what we can do to prevent them. We do this publicly by publishing articles, videos, images, and posts on social media. We join podcasts, give talks, and organize events. We also reach out to partner organizations, influencers, educational institutions, and other groups that can play a role in public awareness. Read about our communication strategy.

What do we not do?

  • Tolerate violence. We make it very clear to our members and the people joining our protests that we are a peaceful movement. We do not promote violence, and we do not tolerate it. We communicate this in our Protestor’s Code of Conduct, our Discord rules, and our Volunteer agreement. The main reason for this is that we want to be the good guys and keep the public on our side.
  • Take sides in other topics. We are a movement that focuses on an AI Pause. We do not discuss or take sides on other topics, even when (short-term) opportunities arise that might make this tempting.
  • Be dishonest. We need people to trust what we say, so we must do everything we can to be honest.

Let’s get to it

Join PauseAI and take action!