Google Photos could soon introduce a powerful new feature that reveals whether an image or video was created or altered using AI. Hidden references in the app’s version 7.41 code point to a tool called “threepio,” designed to add a “How was this made” section to the media details view.
By swiping up on a photo or video, users could see a clear breakdown of its creation and editing history.
Potential labels include:
• Media created with AI
• Edited with AI tools
• Edited with multiple AI tools
• Edited with non-AI tools
• Captured with a camera without software adjustments
The system might also detect if multiple tools were involved or if different images were merged. If the file’s edit history is missing or tampered with, an error message would appear instead.
This capability appears to be powered by Content Credentials, a technology that attaches a persistent record of edits to a media file, one that stays with the file even after it is shared, unless it is intentionally removed. While Google hasn’t confirmed it, the feature could work alongside DeepMind’s SynthID, which embeds an invisible watermark in AI-generated images.
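For a rough sense of how Content Credentials travel with a file: the underlying C2PA standard stores a manifest inside a JUMBF container embedded in the image itself. The sketch below is a hypothetical heuristic only, checking raw bytes for the JUMBF and "c2pa" labels; real verification requires a C2PA library that parses the manifest and validates its cryptographic signatures.

```python
# Hypothetical heuristic: look for C2PA/JUMBF markers in raw file bytes.
# This does NOT verify authenticity; it only suggests a manifest is present.

def looks_like_content_credentials(data: bytes) -> bool:
    """Return True if the bytes contain both JUMBF and C2PA labels."""
    return b"jumb" in data and b"c2pa" in data

# Demo with synthetic byte blobs (not real image files).
with_manifest = b"\xff\xd8\xff\xe2...jumb...c2pa...\xff\xd9"
plain_jpeg = b"\xff\xd8\xff\xe0JFIF\xff\xd9"

print(looks_like_content_credentials(with_manifest))  # True
print(looks_like_content_credentials(plain_jpeg))     # False
```

Because the record lives inside the file, a simple re-save or metadata strip can remove it, which is why a missing or broken edit history would trigger the error message described above.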
Google isn’t alone in this push for transparency. Adobe’s Content Authenticity Initiative and Meta’s labeling of AI-generated content on Facebook and Instagram reflect a growing industry trend toward clarity about the origins of digital media. Given Google Photos’ massive user base, such a feature could set a new standard for verifying authenticity, something especially valuable in journalism, education, and online marketplaces, where trust is key.