The Federal Trade Commission (FTC) has increased its focus on the threats that artificial intelligence (AI) poses to consumers. With consumers encountering AI systems across multiple sectors, there is a growing risk of AI-related harms, including fraud, privacy violations, discrimination, and impersonation schemes such as the use of deepfakes to con consumers into engaging with bad actors. In view of these harms, the FTC has stressed that AI technologies must comply with consumer protection laws before, during, and after deployment. The agency has warned that even though AI technologies are still emerging, companies remain bound by existing consumer protection regulations, and it will continue to monitor whether companies meet their legal obligations.
The FTC’s Focus
One of the FTC’s core areas of focus is ensuring that AI systems are tested and monitored for accuracy and fairness both before and after implementation. For example, the agency took action against Rite Aid for using facial recognition technology that disproportionately targeted women and shoppers of color. In its 2023 complaint against the retail pharmacy company, the FTC alleged that Rite Aid failed to properly evaluate the AI technology for accuracy, which caused harm when the technology falsely accused consumers of shoplifting. The complaint also alleged that Rite Aid failed to take reasonable steps to assess and monitor the technology’s accuracy after deploying it. To settle the case, Rite Aid agreed to a five-year ban on using facial recognition technology. The FTC’s action underscores the need for companies relying on AI models to mitigate the risk of harmful outputs and to ensure that such tools do not harm consumers.
The FTC has also been proactive in addressing AI-driven fraud, impersonation, and exploitation. It has tackled the growing problem of AI-generated deepfakes used for impersonation and fraud by finalizing a rule on government and business impersonation in 2024. Under the new rule, the FTC can file claims in federal court demanding that fraudsters return money earned by impersonating the government or a business. The rule targets and deters fraudulent activity such as spoofing government and business websites and wrongfully using government seals in communications with consumers. Furthering its efforts against exploitation, the agency filed a complaint against the revenge porn website MyEx.com and obtained a court order shutting down the website. The order required the website’s operators to compensate victims for posting their images without consent and for charging takedown fees to remove the content. These measures are part of broader efforts to safeguard consumers from the misuse of generative AI tools, which can be used to create non-consensual imagery or perpetuate scams.
Privacy and data security are another area of focus for the FTC. AI systems, which train on large amounts of data, may access sensitive consumer data such as personally identifiable information. The FTC has brought complaints against companies for the improper collection, retention, and handling of such data. In one notable case, the FTC alleged that Amazon’s Alexa voice-activated service retained voice recordings, including those of children, without users’ full knowledge or consent. As part of the settlement, Amazon agreed to pay a $25 million penalty and to overhaul its data collection and retention practices. Amazon also agreed not to misrepresent its retention, access, and deletion of voice information.
Key Takeaways
Overall, the FTC stresses that companies should take proactive steps to identify and address the risks AI poses before, during, and after deployment. Such steps include preventing fraud, ensuring accuracy, and safeguarding consumer privacy and data security. As AI continues to evolve, companies must be diligent in considering the potential consequences of the technologies they adopt and how those technologies affect consumers and can create harm.