
"Inquiring about European Union AI Act's control over deepfakes"

Questions for The Sumsuber: Does the EU AI Act apply to deepfake regulation in KYC/AML practices?

Deepfake Regulation Inquiry: Does the EU AI Act provide rules for deepfakes?

The European Union (EU) has taken a significant step towards combating the proliferation of deepfake content with the approval of the EU AI Act. Adopted on March 13, 2024, the Act enters into force 20 days after its publication in the Official Journal of the EU.

One of the key provisions of the EU AI Act is the mandate for synthetic media, including deepfake content, to be clearly labeled or watermarked. This measure aims to ensure provenance and transparency, helping to distinguish synthetic content from human-created content.

The Act emphasizes four main aspects of watermarking:

  1. Clear labeling or watermarking: This could include visible watermarks or captions indicating the synthetic nature of the content, as well as invisible digital signatures embedded in metadata for tamper-resistant traceability.
  2. Provenance-by-design principle: Watermarking or similar mechanisms should be integrated during content creation or editing to maintain transparency of AI involvement.
  3. Robust tamper-resistance: The watermarking technology should be robust enough to prevent removal or alteration of the watermarks.
  4. Adoption of global and cross-platform standards: Common standards are encouraged so that watermarks can be detected and recognized consistently by platforms and end users.

Transparency about watermarking methods is also emphasized, allowing stakeholders to understand how watermarks operate and build trust.
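To make these principles concrete, one way to implement an invisible, tamper-resistant marker is to bind an AI-origin declaration to a hash of the content and sign the resulting record, so any alteration of the content or the label is detectable. The sketch below is purely illustrative and is not anything the Act prescribes; the key, field names, and `example-model-v1` identifier are all assumptions for the example.

```python
import hashlib
import hmac
import json

# Assumption: the content provider holds a private signing key.
SECRET_KEY = b"provider-signing-key"

def make_label(content: bytes, generator: str) -> dict:
    """Build a provenance record binding an AI-origin label to the content hash."""
    record = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the record (without the signature field) for tamper evidence.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the signature is valid and the label matches this exact content."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed.get("content_sha256") == hashlib.sha256(content).hexdigest())

media = b"...synthetic image bytes..."
label = make_label(media, generator="example-model-v1")
assert verify_label(media, label)          # intact content and label verify
assert not verify_label(b"edited", label)  # altered content fails the check
```

A real deployment would use public-key signatures rather than a shared secret, so that anyone can verify a label without being able to forge one, which is closer to the cross-platform standards the Act encourages.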

The EU AI Act's approach to deepfakes is part of a broader multi-layered strategy that also includes provenance metadata standards, cryptographic signatures, model fingerprinting, and AI detection tools. However, details of technical specifications and enforcement mechanisms are still developing, with standardization expected to accelerate under the Act's framework.

Concerns remain about the effectiveness of watermarks, particularly around technical implementation, accuracy, and robustness. To address these concerns, Sumsub has launched a bi-weekly Q&A series. The first session, scheduled for this week, will focus on the EU AI Act and deepfake regulations, featuring experts answering frequently asked questions.

Natalia Fritzen, AI Policy and Compliance Specialist, will lead the first Q&A session, which will take place on Instagram and LinkedIn. New installments will be published every other Thursday.

While the EU AI Act acknowledges the potentially disruptive effect of synthetic content on modern societies, it does not prohibit deepfakes; instead, it sets transparency requirements for providers and deployers of such technologies. The Act does not specify the circumstances under which the disclosure requirement for deployers may be loosened, generating uncertainty that may ultimately be resolved by future case-law.

The EU AI Act represents a landmark regulation in the fight against deepfakes, but its effectiveness remains to be seen as the technical specifications and enforcement mechanisms are still evolving. Stay tuned for updates from the Q&A series and the continued development of the EU AI Act.

Key takeaways:

  1. The EU AI Act mandates that synthetic media content, such as deepfakes, be clearly labeled or watermarked to maintain transparency and preserve the distinction between synthetic and human-created content.
  2. The Act encourages adherence to global and cross-platform standards for watermarking technology, which is essential for consistent detection and recognition across platforms and end users.
