The Psychology of AI Doom

Why do people who are well informed about AI worry about unrealistic dangers? There are incentives for doomsaying.

Man pulls lever to electrify a robot maid holding cleaning tools, cartoon-style.

Dear friends,

Welcome to our special Halloween issue of The Batch, in which we probe fears, anomalies, and shadows of AI.

In this letter, I’d like to explore why some people who are knowledgeable in AI take extreme positions on AI “safety” that warn of human extinction and describe scenarios, such as AI deciding to “take over,” based less on science than science fiction. As I wrote in last year’s Halloween edition, exaggerated fears of AI cause real harm. I’d like to share my observations on the psychology behind some of the fear mongering.

First, there are direct incentives for some AI scientists and developers to create fear of AI: 

  • Companies that are training large models have pushed governments to place heavy regulatory burdens on competitors, including developers of open source/open weights models.
  • A few enterprising entrepreneurs have used the supposed dangers of their technology to gin up investor interest. After all, if your technology is so powerful that it can destroy the world, it has to be worth a lot!
  • Fear mongering attracts a lot of attention and is an inexpensive way to get people talking about you or your company. This makes individuals and companies more visible and apparently more relevant to conversations around AI.
  • It also allows one to play savior: “Unlike the dangerous AI products of my competitors, mine will be safe!” Or, “Unlike all other legislators who callously ignore the risk that AI could cause human extinction, I will pass laws to protect you!”
  • Persuading lawmakers to place compliance burdens on AI developers could boost one’s efforts to build a business that helps AI companies comply with new regulations! See, for example, this concerning conflict of interest from a prominent backer of California’s proposed AI safety law, SB-1047.

I’ve seen people start off making mild statements about the dangers of AI and get a little positive feedback in the form of attention, praise, or other rewards, which encouraged them to double down and become more alarmist over time. Further, once someone has taken a few steps in this direction, commitment and consistency bias, the psychological tendency to stay consistent with one’s earlier statements, leads some people to keep going.

To be clear, AI has problems and potentially harmful applications that we should address. But excessive hype about science-fiction dangers is also harmful.

Although I’m highlighting various motivations for AI fear mongering, ultimately the motivations that underlie any specific person’s actions are hard to guess. This is why, when I argue for or against particular government policies, I typically stick to the issues at hand and make points about the impact of particular decisions (such as whether a policy will stifle open source) rather than speculating about the motivations of the people who take particular sides. This is also why I rarely make issues personal. I would rather stick to the issues than the personalities.

When I understand someone’s motivations, I find that I can better empathize with them (and better predict what they’ll do), even if I don’t agree with their views. I also encourage expressing one’s own motives transparently. For example, I’m strongly pro the AI community, and strongly pro open source! Still, arguments based on substantive issues ultimately carry the most weight. By arguing for or against specific policies, investments, and other actions based on their merits rather than hypothetical motivations, I believe we can act more consistently in a rational way to serve the goals we believe in.

Happy Halloween!

Andrew
