Adobe Proposes a Way to Protect Artists from AI Rip-Offs (TechCrunch)

Adobe is taking a proactive stance against AI-driven deepfakes, misinformation, and content theft by launching its Content Authenticity web app in beta in early 2025. This new tool will allow creators to apply secure content credentials to their work, certifying it as their own. Going beyond simple metadata tagging, Adobe’s system uses digital fingerprinting, invisible watermarking, and cryptographically signed metadata to provide a robust method of proving ownership for images, videos, and audio files. These measures are designed to ensure that even if credentials are removed, the file remains traceable to its original creator.
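To illustrate the general idea behind cryptographically signed metadata combined with content fingerprinting, here is a minimal sketch in Python. This is a hypothetical toy example, not Adobe's actual implementation: real content credential systems (such as the C2PA standard Adobe helped found) use public-key signatures and far more robust perceptual fingerprints, whereas this sketch uses a shared-secret HMAC and a plain SHA-256 hash purely for demonstration. All names (`SECRET_KEY`, `sign_credentials`, `verify_credentials`) are invented for this sketch.

```python
import hashlib
import hmac
import json

# Placeholder shared secret; real credential systems use asymmetric keys.
SECRET_KEY = b"demo-signing-key"

def fingerprint(content: bytes) -> str:
    """A digital fingerprint derived from the file's bytes: even if
    metadata is stripped, the hash can be recomputed from the content."""
    return hashlib.sha256(content).hexdigest()

def sign_credentials(content: bytes, creator: str) -> dict:
    """Attach creator info plus a signature binding it to the content."""
    metadata = {"creator": creator, "fingerprint": fingerprint(content)}
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SECRET_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return metadata

def verify_credentials(content: bytes, metadata: dict) -> bool:
    """Check the signature is valid and the fingerprint matches the file."""
    claimed = {k: v for k, v in metadata.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, metadata["signature"])
            and claimed["fingerprint"] == fingerprint(content))

creds = sign_credentials(b"artwork bytes", "Jane Artist")
print(verify_credentials(b"artwork bytes", creds))   # valid credentials
print(verify_credentials(b"tampered bytes", creds))  # content was altered
```

The key property the sketch demonstrates is the one the article describes: because the fingerprint is recomputed from the content itself, tampering with either the file or its metadata breaks verification, so ownership claims remain checkable even when embedded credentials are stripped.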

Adobe’s efforts extend beyond its own platform, involving partnerships with industry giants like Microsoft, OpenAI, and major social media platforms such as TikTok, Instagram, and Facebook. The company has also launched a Chrome browser extension and an Inspect tool to help users identify and verify content credentials across the web. Additionally, Adobe has collaborated with Spawning to help artists control how their works are used in AI training datasets through a “Do Not Train” registry. As AI becomes increasingly integrated into creative processes, Adobe’s initiatives aim to balance innovation with the protection of artist rights, ensuring transparency and trust in digital content creation.
