Dear friends,
An ill-advised proposal for a 6-month pause in cutting-edge AI research got far more attention than I think it deserved. To me, this is a wake-up call that AI doomsayers have done a much better job than AI optimists at framing the narrative of progress in AI.
Most of the AI community is building systems that help and empower people, and we see every day how these systems are improving lives. OpenAI’s ChatGPT is delivering value to hundreds of millions of users, and reportedly it’s the fastest-growing consumer application to date. This is wildly exciting, and I foresee many more products yet to be built that will help and empower people in other ways.
Yet, while most of us have been building useful systems, AI doomsayers — who forecast unlikely scenarios such as humanity losing control of runaway AI (or AGI, or even superintelligent systems) — have captured the popular imagination and stoked widespread fear.
Last week, Yann LeCun and I had an online conversation about why the proposed 6-month pause, which would temporarily suspend work on models more powerful than GPT-4, is a bad idea. You can watch the video here and read a synopsis in this article. Briefly:
- The proposal’s premises with respect to AI’s potential for harm are sensationalistic and unrealistic.
- A pause in development is unworkable — that is, unless governments intervene, which would have an even worse impact on competition and innovation.
- If it were implemented, it would (i) slow down valuable innovations and (ii) do little good, because it seems unlikely that a 6-month pause in our decades-long journey toward AGI would have much useful impact.
To be clear, AI has real problems, including bias, unfairness, job displacement, and concentration of power. Our community should work, and is working, to address them. However, stoking fears about speculative risks does more harm than good:
- It distracts us from the real and present risks that we should be working on.
- It is another form of hype about AI, which misleads people to overestimate AI’s capabilities.
- It risks slowing down further progress in AI that would be very beneficial.
I’m disappointed that we have let AI doomsayers get this far. Their narrative hampers innovation, discourages individuals, and interferes with society’s ability to make good decisions.
Let’s help people understand that AI is empowering people even as we work to mitigate the real risks. It’s time for us all to stand up for a realistic view of this incredibly important technology.
Keep learning!
Andrew
P.S. Shoutout to the University of Washington’s Emily Bender for her line-by-line analysis of how the proposal contributes to AI hype, and to Princeton professor Arvind Narayanan, who explained how fears of AI-driven dangers such as misinformation often have been overblown.