AI is a red-hot topic for lobbyists who aim to influence government policies in the United States.
What’s new: The number of organizations lobbying to shape U.S. laws and regulations that affect AI jumped more than 20 percent in the first half of 2024, TechCrunch reported. Data collected by OpenSecrets, a nonprofit that tracks money in politics, shows increased lobbying by startups including OpenAI and Anthropic.
How it works: OpenSecrets searched lobbying disclosure forms for the terms “AI” and “artificial intelligence.” Organizations must file such forms quarterly if they discuss specific laws and regulations with decision makers or their staffs. (A minimal sketch of this kind of keyword filter follows the list below.)
- More than 550 organizations lobbied the federal government about AI policy in the first half of 2024, up from 460 in 2023. These included tech giants and startups; venture capital firms; think tanks; companies and trade groups in various industries including insurance, health care, and education; and universities.
- OpenAI spent $800,000 on lobbying in the first half of the year, compared to $260,000 in all of the previous year. Its team of contract lobbyists grew from three in 2023, the year it hired its first in-house lobbyist, to 15, including former U.S. Senator Norm Coleman. In addition, the company’s global affairs department expanded to 35 people and is expected to grow to 50 by the end of the year. OpenAI publicly supports legislation under consideration in the U.S. Senate that would appoint a program manager for the National AI Research Resource and authorize an AI Safety Institute to set national standards and create public datasets.
- Anthropic expanded its team of external lobbyists from three to five this year and hired an in-house lobbyist. It expects to spend $500,000 on lobbying as the election season heats up.
- Cohere budgeted $120,000 for lobbying this year after spending $70,000 last year.
- Amazon, Alphabet, Meta, and Microsoft each spent more than $10 million on lobbying in 2023, Time reported.
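For illustration, here’s a minimal Python sketch of the kind of keyword filter described above. It’s hypothetical: OpenSecrets hasn’t published its methodology as code, and the record format and function names here are invented for the example.

```python
import re

# Hypothetical keyword filter: flag a lobbying disclosure as AI-related if
# its issue text mentions the acronym "AI" or "artificial intelligence".
AI_ACRONYM = re.compile(r"\bAI\b")  # case-sensitive, so "said" and "aid" don't match
AI_PHRASE = re.compile(r"artificial intelligence", re.IGNORECASE)

def mentions_ai(issue_text: str) -> bool:
    """Return True if a disclosure's issue description mentions AI."""
    return bool(AI_ACRONYM.search(issue_text) or AI_PHRASE.search(issue_text))

# Toy records standing in for quarterly disclosure filings.
filings = [
    {"registrant": "Example Tech Co.", "issues": "Artificial intelligence safety standards"},
    {"registrant": "Example Trade Assn.", "issues": "Crop insurance reform"},
]

ai_filings = [f for f in filings if mentions_ai(f["issues"])]
print(len(ai_filings))  # prints 1
```

A case-sensitive match for the acronym avoids false positives from words that merely contain the letters “ai,” while the full phrase can safely match in any case.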
Yes, but: Lobbying disclosure forms show who is spending money to influence policy, but they provide only a limited view. They reveal that an organization sought to influence AI policy but not the direction in which it pushed. They shed no light on other efforts to shape laws and regulations, such as advertising or campaign contributions. Nor do they reveal how much of an organization’s lobbying concerned AI relative to other topics. For instance, last year the American Medical Association spent $21.2 million on lobbying on issues that included AI but, given the wide range of policy questions in medicine, AI likely accounted for a small fraction of the total.
Behind the news: The ramp-up in AI lobbying comes as the U.S. Congress considers a growing number of bills that would regulate the technology. Since 2023, lawmakers have proposed more than 115 bills that seek to restrict AI systems, require developers to disclose or evaluate them, or protect consumers against potential harms like bias, infringement of privacy or other rights, and the spread of inaccurate information, according to the nonprofit, nonpartisan Brennan Center for Justice. Nearly 400 state bills are also under consideration, according to BSA, a software industry trade group, including California’s SB-1047, which would regulate AI models trained using computation above a set threshold (more than 10^26 floating-point or integer operations). Moreover, the U.S. will hold national elections in November, and lobbying of all kinds typically intensifies as organizations seek to influence candidates for office.
Why it matters: Given the large amount of AI development that takes place in the U.S., laws that govern AI in this country have an outsized influence over AI development worldwide. So it’s helpful to know which companies and institutions seek to influence those laws and in what directions. That the army of AI lobbyists includes companies large and small as well as far-flung institutions, with varying degrees of direct involvement in building or using AI, reflects both the technology’s power and the importance of this moment in charting its path forward.
We’re thinking: We favor thoughtful regulation of AI applications that reinforces their tremendous potential to do good and limits potential harms that may result from flaws like bias or privacy violations. However, it’s critical to regulate applications, which put technology to specific uses, not the underlying technology, whose valuable uses are wide-ranging and subject to human creativity. It’s also critical to encourage, and not stifle, open models that multiply the potential good that AI can do. We hope the AI community can come together on these issues.