U.S. Cracks Down on AI Apps That Overpromise, Underdeliver

U.S. Federal Trade Commission launches Operation AI Comply to tackle deceptive business practices


The United States government launched Operation AI Comply, targeting businesses whose use of AI allegedly misled customers.

What’s new: The Federal Trade Commission (FTC) took action against five businesses for allegedly using or selling AI technology in deceptive ways. Two companies settled with the agency, while three face ongoing lawsuits.

How it works: The FTC filed complaints against the companies under existing laws and rules that prohibit unfair or deceptive commercial practices. The agency alleges:

  • DoNotPay claimed its AI service was a “robot lawyer” that could substitute for human legal expertise. The FTC said the company misled consumers about its system’s ability to handle legal matters and deliver successful outcomes. DoNotPay settled, agreeing to pay $193,000 in consumer redress and to notify customers about the limitations of its services.
  • Rytr, a writing tool, generated fake reviews of companies. According to the FTC, Rytr offered to create and post fake reviews on major platforms like Google and Trustpilot, which helped bring in $3.8 million in revenue from June 2022 to May 2023. Rytr agreed to settle and is barred from offering services that generate consumer reviews or testimonials. The settlement amount was not disclosed.
  • Ascend Ecommerce claimed that its “cutting-edge” AI-powered tools would help consumers quickly earn thousands of dollars monthly through online storefronts. The company allegedly charged thousands of dollars for its services, but the promised returns failed to materialize; the FTC says the scheme defrauded customers of at least $25 million. The government temporarily halted the company’s operations and froze its assets.
  • Ecommerce Empire Builders promised to help consumers build an “AI-powered Ecommerce Empire” through training programs that cost nearly $2,000 each or ready-made online storefronts that cost tens of thousands of dollars. A federal court temporarily halted the scheme.
  • FBA Machine said its AI-powered tools could automate the building and management of online stores on platforms like Amazon and Walmart. The company promoted its software with guarantees that customers’ monthly earnings would exceed $100,000. Consumers paid nearly $16 million but didn’t earn the promised profits. A federal court temporarily halted FBA’s operations.

Behind the news: The FTC has a broad mandate to protect consumers from both deceptive and anticompetitive business practices. In June, it agreed to investigate Microsoft’s investment in OpenAI as well as Google’s and Amazon’s investments in Anthropic, while the U.S. Department of Justice would examine Nvidia’s dominant share of the market for chips designed to process AI workloads. The FTC previously brought cases against Rite Aid for misuse of AI-enabled facial recognition, Everalbum for deceptive use of facial recognition, and CRI Genetics for misleading consumers about its use of AI in DNA testing.

Why it matters: The FTC’s enforcement actions send a message to businesses that hope to cash in on the latest AI models: making exaggerated claims about AI will bring legal consequences. The complaints point to a recurring set of issues: falsely claiming to use AI to provide a particular service, exaggerating AI’s ability to replace human expertise, generating fake reviews of businesses, promising unrealistic financial returns, and failing to disclose crucial information about AI-based services.

We’re thinking: These particular actions crack down not on AI per se but on companies that allegedly deceived consumers. By taking scams off the market while leaving legitimate businesses to operate freely, they may actually increase customer trust in AI.
