Regulate AI Applications, Not Technology: Why the DEFIANCE Act and FTC ban on fake product reviews take the right approach to regulating AI.


Dear friends,

I’m encouraged by the U.S. government’s progress in moving to stem harmful AI applications. Two examples are the new Federal Trade Commission (FTC) ban on fake product reviews and the DEFIANCE Act, which imposes penalties for creating and disseminating non-consensual deepfake porn. Both rules take a sensible approach to regulating AI insofar as they target harmful applications rather than general-purpose AI technology.

As I described previously, the best way to ensure AI safety is to regulate it at the application level rather than the technology level. This is important because the technology is general-purpose and its builders (such as a developer who releases an open-weights foundation model) cannot control how someone else might use it. If, however, someone applies AI in a nefarious way, we should stop that application. 

Even before generative AI, fake reviews were a problem on many websites, and many tech companies dedicate considerable resources to combating them. A telltale sign of old-school fake reviews is the use of similar wording in different reviews. AI’s ability to automatically paraphrase or rewrite is making fake reviews harder to detect.

Importantly, the FTC is not going after the makers of foundation models for fake reviews. The provider of an open-weights AI model, after all, can’t control what someone else uses it for. Even if one were to try to train a model with guardrails against writing reviews, I don’t know how it could distinguish between a real user of a product asking for help writing a legitimate review and a spammer who wants a fake one. The FTC appropriately aims to ban the application of fake reviews, along with other deceptive practices such as buying positive reviews.

The DEFIANCE Act, which passed unanimously in the Senate (and still requires passage in the House of Representatives before the President can sign it into law), imposes civil penalties for creating and distributing non-consensual deepfake porn. This disgusting application is harming many people, including underage girls. While many image generation models do have guardrails against generating porn, these guardrails often can be circumvented via jailbreak prompts or fine-tuning (for models with open weights).

Again, DEFIANCE regulates an application, not the underlying technology. It aims to punish people who engage in the application of creating and distributing non-consensual intimate images, regardless of how they are generated — whether the perpetrator uses a diffusion model, a generative adversarial network, or Microsoft Paint to create an image pixel by pixel.

I hope DEFIANCE passes in the House and gets signed into law. Both rules guard against harmful AI applications without stifling AI technology itself (unlike California’s poorly designed SB-1047), and they offer a good model for how the U.S. and other nations can protect citizens against other potentially harmful applications. 

Keep learning!

Andrew 
