New U.S. Bill Targets AI Deepfakes and Protects Content Creators: COPIED Act Gains Bipartisan Support

Last week, U.S. lawmakers introduced a groundbreaking bill aimed at combating the misuse of AI deepfakes and safeguarding original content from unauthorized use in AI training. Known as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the legislation has garnered strong bipartisan support.

Addressing AI Deepfakes and Content Misuse

The COPIED Act is designed to tackle the growing problem of AI-generated deepfakes, following high-profile controversies such as the viral spread of AI-generated deepfake images of Taylor Swift earlier this year. The bill also addresses broader concerns about AI technologies exploiting content created by journalists, artists, and musicians without acknowledgment or compensation.

The January controversy over those deepfake images underscored the urgent need for regulation. The “Take It Down Act,” introduced last month, similarly targets the removal of non-consensual intimate imagery generated by AI, reflecting heightened awareness of the dangers such technologies pose.

How the COPIED Act Will Operate

The COPIED Act proposes the creation of a “content provenance information” system, akin to a digital logbook, to authenticate and track content of all kinds, including news articles, artistic works, images, and videos. The system would help detect AI-generated content and prevent tampering with provenance data.
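To illustrate what such a digital logbook might look like in practice, the minimal sketch below binds a piece of content to its origin metadata and flags any tampering. It is only an illustration assuming an HMAC-based scheme with a hypothetical publisher key; the bill does not prescribe any particular technology, and real-world provenance standards such as C2PA use public-key signatures and richer manifests.

```python
# Minimal sketch of a content provenance record (illustrative only;
# the COPIED Act does not specify a scheme). Assumes a hypothetical
# secret key held by the content creator or publisher.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical signing key

def create_provenance_record(content: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind origin metadata to a content hash and sign the result."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Detect tampering with either the content or its provenance data."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

article = b"Original reporting by a journalist."
rec = create_provenance_record(article, creator="Example Newsroom", ai_generated=False)
assert verify_provenance_record(article, rec)        # intact content passes
assert not verify_provenance_record(b"edited", rec)  # altered content is flagged
```

Because the signature covers both the content hash and the origin metadata, altering either one, as the bill would prohibit, causes verification to fail.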

Additionally, the bill would make it illegal to alter or remove this information. It empowers state officials to enforce its provisions and allows legal action against AI companies that strip watermarks or use content without proper consent or compensation.

International Approaches to AI Regulation

While the U.S. is taking significant steps with the COPIED Act, other countries have also moved to regulate AI. The European Union’s Artificial Intelligence Act classifies AI systems into four risk tiers: unacceptable risk, high risk, limited risk, and minimal risk. Systems deemed an unacceptable risk, such as the social-scoring systems used in China, are prohibited outright.

In India, no dedicated AI law is yet in place, but the Ministry of Electronics and Information Technology issued a directive in March requiring government approval before deploying AI systems deemed “under-tested” or “unreliable.” The directive was later reversed, reflecting a cautious effort to balance regulation with innovation.
