An overview of C2PA

Note: even though C2PA can be extended to other types of content, this article focuses on watermarking image files specifically, from real-world photos to AI-generated content. This is also what most discourse around the watermarking mechanism currently focuses on.

How C2PA works

In a time when editing or forging digital content, especially images, has become widely accessible thanks to AI-based image generators and editors, most people have trouble filtering out disinformation on social media platforms and messengers. Aiming to fix this issue, the C2PA mechanism attaches cryptographically signed metadata to the image, containing the identity of the user making changes and a timestamp.

This is an improvement in the sense that the history of an image can now be verified, clearly showing whether it is still the published original or was altered maliciously. For legitimate changes, it can help establish ownership of the newly created work. Each camera, user, or company has its own keypair for signing, providing fine-grained insight into the lifecycle of every image.
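To make this concrete, here is a minimal sketch of the signing flow in Python, assuming an Ed25519 keypair stands in for a signer's credential. The manifest layout and helper names are illustrative and deliberately simplified, not the real C2PA manifest format:

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_manifest(image_bytes: bytes, author: str, key: Ed25519PrivateKey) -> dict:
    """Build and sign a provenance claim over the image's pixel hash."""
    claim = {
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_manifest(image_bytes: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """Check that the claim matches the image and the signature is valid."""
    claim = manifest["claim"]
    if claim["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # pixels were altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest = sign_manifest(image, "Example News Photo Desk", key)
print(verify_manifest(image, manifest, key.public_key()))            # True
print(verify_manifest(image + b"edit", manifest, key.public_key()))  # False
```

Anyone holding the publisher's public key can now detect changes to either the pixels or the claim, which is the guarantee the use cases below rely on.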

Use cases

While watermarking may not be useful for everyone, it can help mitigate the effects of disinformation, especially on social networks and in media publications.

For journalists, it can help establish trust that an image was indeed published by a reputable media outlet and ensure accurate crediting of the people involved in its creation (photographer, editor, etc.). This may also be useful for media licensing: the original creators or copyright holders are embedded into the image's history along with a license, enabling quick checks that the image was obtained legally.

Social media platforms can further use it to help users identify official content, showing icons to indicate that an image was validated to be issued by the claimed author or platform, and to credit them correctly. Forging images and passing them off as legitimate news could be prevented if the news source uses C2PA watermarking, because the forged version could not imitate the correct cryptographic signature.

AI-generated images remain problematic

There is some talk about marking AI-generated images with C2PA watermarks, allowing viewers to easily differentiate between real and artificial images. While the idea is interesting, it inverts the way C2PA works:

Watermarking is used to mark an image as "official" by a company or individual, ensuring it will fail to validate if it is tampered with. This guarantee only holds as long as the publisher wants to uphold it: they could easily strip the metadata from the image.

AI-generated images, on the other hand, may be viewed unfavorably by some people, potentially limiting their reach or engagement on social networks and online platforms. Users may directly benefit from stripping the C2PA watermark that exposes an image as AI-generated, potentially even re-signing it as a legitimate work with their own key.

C2PA is therefore not currently able to reliably flag AI-generated content from untrusted or malicious users.

Watermarks can be removed

Even for authenticity validation, C2PA is by no means perfect. A malicious user could simply download an image, remove the metadata, then sign it with their own key and claim ownership of it. They may even change the timestamp before re-signing the image, so timestamps cannot be used to establish who signed it first.
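Reusing the hypothetical sign_manifest() and verify_manifest() helpers from the sketch above, this attack takes only a few lines, since nothing in the manifest itself prevents back-dating:

```python
# The attacker re-signs the same pixels with their own key and an
# arbitrary past timestamp; the forged manifest verifies perfectly.
attacker_key = Ed25519PrivateKey.generate()
forged = sign_manifest(image, "Attacker claiming ownership", attacker_key)
forged["claim"]["timestamp"] = "2020-01-01T00:00:00+00:00"  # back-dated
payload = json.dumps(forged["claim"], sort_keys=True).encode()
forged["signature"] = attacker_key.sign(payload).hex()
print(verify_manifest(image, forged, attacker_key.public_key()))  # True
```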

Stripping metadata in general is also very easy, either with software tools for tech-savvy users or by simply taking a screenshot or photo of the image for less technical people. Since C2PA is only stored in the image metadata, copying the visual information (i.e. the pixel data) inherently removes the entire watermark.
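As an illustration, a few lines of Python with Pillow are enough to produce a pixel-level copy that carries no metadata at all (the file names are placeholders):

```python
from PIL import Image

with Image.open("signed_photo.jpg") as original:
    # Rebuild the image from pixel data alone; EXIF/XMP metadata (and
    # any C2PA manifest stored there) is simply not part of this copy.
    stripped = Image.new(original.mode, original.size)
    stripped.putdata(list(original.getdata()))
    stripped.save("stripped_copy.jpg")
```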

Problems with signing keys

The keys used for signing can also be misleading; nothing stops a malicious user from generating a new key and using the name of a reputable journalist. There is no official way to validate which of the two keys belongs to the real journalist, and the vast majority of people are unlikely to manually look up the real public key and verify it. Disinformation doesn't need to be impossible to spot, it only needs to look convincing for the moment an unsuspecting viewer spends looking at it.
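For illustration, the Python `cryptography` library will happily issue a self-signed certificate for any claimed identity; the name below is a placeholder, and no authority ever checks it:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Jane Doe, Example Times")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)   # any string works; the identity is self-asserted
    .issuer_name(name)    # self-signed: no authority vouches for the name
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print(cert.subject)  # claims "Jane Doe, Example Times" with zero proof
```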

Keys containing identity information can also be a privacy concern, spreading the names of individuals across the internet, potentially without reasonable means to remove the content again. Depending on the region, this can be a significant legal issue (e.g. in the EU, where personal names are protected under the GDPR).

Finally, the C2PA mechanism does not fully support the key revocation typical of public key infrastructures, which is essential to its long-term viability. If a signing key is compromised, there is no reasonable way to revoke it and tell the public that content signed with it can no longer be trusted. The key owner could publish a message on their website or socials, but not everyone will see it, and in the meantime the attacker could forge any number of images with back-dated timestamps, effectively rendering all the images this publisher has ever released untrustworthy.
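For comparison, a verifier in a conventional PKI can consult a revocation list before trusting a signature. The sketch below is purely hypothetical, since C2PA deployments have no widely established equivalent, and it shows why back-dating makes partial revocation impossible:

```python
# Hypothetical revocation check a C2PA verifier would need. Because an
# attacker holding a compromised key can forge any timestamp, there is
# no safe "trust signatures made before the compromise" carve-out:
# revocation has to invalidate everything ever signed with the key.
REVOKED_FINGERPRINTS = {"3f9a...", "b771..."}  # placeholder key fingerprints

def signature_still_trusted(key_fingerprint: str) -> bool:
    return key_fingerprint not in REVOKED_FINGERPRINTS
```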

Conclusion

The C2PA watermarking mechanism does have its uses, especially around officially published media and journalism, and can certainly help combat disinformation in online discourse and on social media platforms. That said, it is not a magical tool to guarantee the source and history of every image, but rather a way for publishers to claim ownership of their work in a manner viewers can verify. It is a good fit for media publishers, journalists and people working in the creative industry, but it cannot prevent AI-generated images from being passed off as legitimate, nor prevent impersonation. It is a tool that helps with these issues, but cannot handle them alone.
