
The feasibility of a Pause

AGI is not inevitable. One of the most common counterarguments to a pause is not that it would be bad, but that there's no way we'll achieve it. And that is simply not true.

We understand where these beliefs come from, and we know that achieving a pause will not be easy. But predicting the future is not easy either, and this article argues against overconfidence in our own powerlessness. Defeatist beliefs are dangerous because they work as a self-fulfilling prophecy.

Giving up is the easy way out. After all, if there is nothing we can do, there is nothing we should do. But we shouldn't give up without even trying. This is actually our best chance to have an impact on the world and on the future of our civilization.

Power over companies and governments

Every instance of incompetence or malice by our governments, companies, and systems can lure us into defeatist thinking: coordination is too hard, the interests of the people are not well represented, or they are represented but are themselves foolish. Thinking this way, we fail to recognize the victories throughout history.

As problem solvers, we dwell on conflict; it eats us alive. Stories without conflict don't hold our attention, which is why they only continue past the happy ending if another conflict appears. The same happens in the news and in real life: good events don't attract our curiosity as much as bad ones do, and that can leave us with inaccurate models of the world.

If what you fear is mainly companies or organizations, there are two main ways we can control them:

  • Laws, regulations, and treaties.
  • Public opinion that forces them to regulate themselves.

Of course, the first method is the better one, but reputation, which affects customers, investors, employee morale, and recruitment, is one reason why we organize protests in front of some AI labs. It's also important to remember that regulations can benefit companies in the long term: through regulatory capture, by not losing consumers if the dangers materialize, and by disadvantaging competitors. So we must be careful not only to get a pause, but to ensure it is not lifted until it's actually safe to keep developing frontier AI.

If you fear governments not taking your safety seriously, that's a more complicated issue. But politicians generally care, to a certain degree, about not losing political support. And, more importantly, they can be concerned about the risks without the enormous bias, and the legal obligations to maximize profits, that some individuals at companies have.

If you think we could get regulation from a single government but not a multilateral treaty, realize this: once a government recognizes that uncontrollable technologies originating in other nations are a danger to its own, those technologies become a national security problem, and the government becomes invested in other countries stopping their development too. Besides, we don't need many countries to agree to a pause in the first place. The urgent thing is to get a ban in the US alone; China and the rest of the world seem to be far behind, and it's fine if their accession to a treaty happens afterwards.

Similar historical cases

For empirical evidence that a treaty like this is possible, we should look at past global agreements. Whether formal or informal, they have been quite common throughout history, mainly to resolve disputes and advance human rights. Many of them faced strong short-term economic incentives, like the ones AI provides, and succeeded anyway. That includes the abolition of slavery, which was argued to be impossible precisely because of short-term economic interests.

But what about more modern examples of global agreements against new technologies? Some of the most important ones are:

  • The Montreal Protocol, which banned CFC production in all 197 countries and, as a result, caused global emissions of ozone-depleting substances to decline by more than 99% since 1986. This agreement is the reason the hole in the ozone layer is now healing, and why we don't hear about it anymore.
  • The Biological Weapons Convention, which bans biological and toxin weapons and was signed by 185 states.
  • The Chemical Weapons Convention, which bans chemical weapons and was signed by 193 states.
  • The Environmental Modification Convention, which bans weather warfare and was signed by 78 states.
  • The Outer Space Treaty, which bans the stationing of weapons of mass destruction in outer space, prohibits military activities on celestial bodies, details legally binding rules governing the peaceful exploration and use of space, and was signed by 114 countries.
  • Nuclear weapons, although once expected to proliferate uncontrollably, have been developed by only a handful of countries. International agreements such as the Non-Proliferation Treaty have been key in preventing the spread of nuclear weapons and furthering the goal of nuclear disarmament. Having dissuaded many countries from pursuing nuclear weapons programs, reduced nuclear arsenals since the 1990s, and avoided nuclear war for many decades is a meaningful achievement.
  • The International Atomic Energy Agency (IAEA) is an intergovernmental organization that seeks to promote the peaceful use of nuclear energy and to inhibit its use for any military purpose, including nuclear weapons. It implements nuclear safety standards and has 178 member states. Regardless of whether you think nuclear power is overregulated, it is often cited as a good example of the kind of body we could create to evaluate the safety of large AI models.
  • Although preferable, a formal agreement may not even be necessary. In 2005, the United Nations called on member states to ban human cloning, which more than 60 countries did, either fully or partially. Those multiple unilateral regulations have been enough that, almost 20 years later, there is not a single verified case of a cloned human.

If you think AI is actually similar to the cases in which we failed to reach any good international treaty: everything that ever happened had a first time ;). Particular circumstances made each of those cases a first, and that is a reason to address the particularities of AI.

Impact of protests

It's quite common for people to question the effectiveness of protests and social movements in general. Of course, there are many cases where demonstrations don't yield any results, but there are also situations where the protesters' demands are met, and where the protests likely influenced those outcomes. And there are reasons to believe that AI activism could achieve similar results.

In any case, if for some reason you don't believe in the impact of protests, you can read about the other things that PauseAI does, and you can try to contact governments directly.

The particular case of AI

If you think AI is different enough from these cases (or even if you don't), it's useful to analyze its particular situation. The things that make AI different don't necessarily make it harder to regulate. For example, we are not trying to regulate existing products and services that people already enjoy and use regularly, and we are not up against a large number of companies that can lobby, or workers who would lose their jobs if we succeed. Pretty much the opposite. Also, the public is not partisan or politically divided, but united in support of regulation. Still, a lot of people haven't made up their minds about it yet. We must be careful not to put them off, listen to their perspectives, and see in which particular ways a pause could help with the things they care about.

When it comes to AI risks, the public and the experts seem worried and interested in regulation. Politicians, judging by the policies they are passing and working on, the summits they are organizing, and the statements they are giving, seem pretty worried too. Even a recent report commissioned by the US government recommends, among multiple proposals, different types of pauses on AI development to avert risks to national security and humanity as a whole.

All of this is happening while PauseAI is still quite young and most people haven't heard about most of the risks. If we raise awareness of and build consensus around existential risks, for example, we have the potential to become far more mainstream, given that virtually nobody wants to die or wants the world to end. That outcome is not in the interest of even the most selfish of companies, governments, and people.

Even if it takes time, the problems AI brings in the coming years will, as they manifest, amplify awareness of them and eventually trigger more and more regulation. If we don't get a pause as soon as we'd like, massive unemployment and all kinds of incidents could put most people on the same page, either gradually or suddenly, and move people who would never have seriously considered a pause to actually do so. That's why it's important not to judge our potential to succeed by short-term results, but to always be prepared for new adherents and allies, and to be ready to guide politicians in implementing our proposals if a warning shot happens.

Enforceability

The easiest way to regulate frontier models enforceably is to govern computing power. Luckily for us, at some steps of the supply chain, the hardware needed to train the biggest models is produced by just one to three companies. And we can track GPUs the same way we track the materials used in the development of nuclear weapons.
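To make compute governance more concrete, here is a minimal sketch, in Python, of how a compute-threshold rule could work: a regulator estimates the total compute of a training run from its hardware footprint and checks it against a licensing threshold. The specific numbers are illustrative assumptions, not figures from this article (the 10^26-operation threshold echoes the reporting requirement in the 2023 US executive order on AI).

```python
# Minimal sketch of a compute-threshold check for training runs.
# All figures below are illustrative assumptions, not official values.

def training_flops(num_gpus: int, peak_flops_per_gpu: float,
                   utilization: float, days: float) -> float:
    """Estimate a training run's total compute from its hardware footprint."""
    seconds = days * 24 * 60 * 60
    return num_gpus * peak_flops_per_gpu * utilization * seconds

# Illustrative licensing threshold (same order of magnitude as the reporting
# threshold in the 2023 US executive order on AI).
THRESHOLD_FLOPS = 1e26

# Hypothetical frontier-scale run: 25,000 accelerators at ~2e15 FLOP/s peak,
# 50% average utilization, training for 120 days.
run = training_flops(num_gpus=25_000, peak_flops_per_gpu=2e15,
                     utilization=0.5, days=120)

print(f"Estimated training compute: {run:.2e} FLOPs")
if run >= THRESHOLD_FLOPS:
    print("Above threshold: run would require a license (or be paused).")
else:
    print("Below threshold: run would be unaffected.")
```

The point of the sketch is that the relevant quantities (chip counts, chip performance, training time) are physical and auditable at the supply-chain level, which is what makes a compute-based rule enforceable in a way that rules about model behavior are not.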

Impact of PauseAI protests

Even at our relatively small size, we have managed to get important press coverage.

Collateral benefits

Advocating for a pause has positive impacts beyond achieving it. Informing the public, tech workers, and politicians about the risks helps other interventions that aim to make safe AIs and to make AIs safe. It leads people to give more importance to the technical, political, and communications work that goes into AI safety and AI ethics, which ultimately means more funding and jobs going into them, and better solutions coming out of them.

It would not only bring new people and resources to new interventions; it would also make moderate technical and policy proposals look more "reasonable" and increase their chances of being approved.

Additionally, it could somewhat prepare people for the dangers, teach them how to use AI more ethically, and even convince them not to invest in or work on frontier, unsafe projects.

Decision theory says: try it anyway

Even if you believe a pause is quite improbable and you don't care about the other benefits, then unless you don't believe in the biggest risks, or have better strategies in mind, we recommend joining us instead of burying your head in the sand and waiting to die or be saved.