Artificial intelligence labels should be the new norm in 2025
I’m an AI reporter, and next year I want to be bored out of my mind. I don’t want to hear about price hikes, AI-powered scams, messy boardroom power struggles or people who abuse AI programs to create harmful, misleading or intentionally inflammatory photos and videos.
It’s a tall order and I know I probably won’t get my wish. There are simply too many companies developing AI and too few guidelines and regulations. But if I had to ask for one thing this holiday season, it’s this: 2025 should be the year we get meaningful AI content tags, especially for images and videos.
AI-generated images and videos have come a long way, especially in the past year. But the evolution of AI image generators is a double-edged sword. Model improvements mean images come out with fewer hallucinations and flukes. But those oddities, like people with 12 fingers and objects that vanish into the background, were among the few telltale signs people could use to guess whether an image was created by a human or by AI. As AI generators improve and those signs disappear, it will become a major problem for all of us.
Legal power struggles and ethical debates over AI imaging will undoubtedly continue in the coming year. But for now, AI image generators and editing services are legal and easy to use. That means AI content will continue to flood our online experiences, and identifying the origin of an image will become more difficult, and more important, than ever. There is no silver bullet, no one-size-fits-all solution. But I’m convinced that widespread adoption of AI content labels would go a long way toward solving it.
The complex history of AI art
If there’s one button you can push to send any artist into a blind frenzy, it’s bringing up AI image generators. The technology, powered by generative AI, can create entire images from a few simple words in a prompt. I’ve used and reviewed several of them for CNET, and it still surprises me how detailed and clear the images can be. (They’re not all winners, but they can be pretty good.)
As my former CNET colleague Stephen Shankland put it: “AI can let you lie with photos. But you don’t want a photo untouched by digital processing.” Striking a balance between retouching and editing the truth is something photojournalists, editors and creatives have grappled with for years. Generative AI and AI-powered editing only make it more complex.
Take Adobe, for example. This fall, Adobe introduced many new features, many of them powered by generative AI. Photoshop can now remove distracting wires and cables from images, and Premiere Pro users can extend existing video clips with gen AI. Generative Fill is one of the most popular Photoshop tools, on par with the crop tool, Adobe’s Deepa Subramaniam told me. Adobe has made it clear that generative editing is the new norm and the future. And because Adobe is the industry standard, that puts creators in a quandary: get on board with AI or get left behind.
Although Adobe promises never to train on the work of its users (one of the biggest sore points with generative AI), not every company makes that promise, or even discloses how its AI models are built. Creators who share their work online now have to contend with “art theft and plagiarism,” digital artist Rene Ramos told me earlier this year, noting how image-generating tools give anyone access to the styles artists have spent their lives perfecting.
What AI tags can do
AI tags are any kind of digital annotation that signals when an image may have been created or significantly altered by AI. Some companies automatically add a digital watermark to their generations (such as Meta AI’s Imagine), but many offer the ability to remove it by upgrading to a paid tier (such as OpenAI’s DALL-E 3). Or users can simply crop the image to cut the identifier out.
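To see why these invisible tags are so fragile, here’s a minimal, hypothetical sketch in Python (standard library only; the marker list is illustrative, not exhaustive) that scans a file’s raw bytes for common provenance signatures, such as the C2PA/JUMBF markers used by Content Credentials or the IPTC “trainedAlgorithmicMedia” declaration. Everything it looks for disappears the moment an image is cropped, re-encoded or screenshotted.

```python
# check_ai_markers.py -- a rough heuristic, not a verifier.
# It only finds metadata that survives in the file's raw bytes;
# cropping, re-encoding or screenshotting typically strips it all.
import sys

# Byte patterns associated with common provenance/AI metadata.
# (Illustrative list for this sketch, not an exhaustive standard.)
MARKERS = {
    b"c2pa": "C2PA manifest (Content Credentials)",
    b"jumb": "JUMBF box (the container C2PA manifests live in)",
    b"trainedAlgorithmicMedia": "IPTC DigitalSourceType: AI-generated",
    b"compositeWithTrainedAlgorithmicMedia": "IPTC: AI-edited composite",
}

def scan(path: str) -> None:
    data = open(path, "rb").read()
    hits = [desc for pattern, desc in MARKERS.items() if pattern in data]
    if hits:
        print(f"{path}: possible AI/provenance metadata found:")
        for desc in hits:
            print(f"  - {desc}")
    else:
        print(f"{path}: no markers found (which proves nothing).")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)
```

The asymmetry is the point: a positive hit is meaningful, but a clean scan tells you nothing, because stripping these markers is as easy as taking a screenshot.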
Much good work has been done over the past year to support this effort. This year, Adobe’s Content Authenticity Initiative launched a new app that lets anyone attach invisible digital signatures, known as Content Credentials, to their work. Creators can also use these credentials to disclose and track the use of AI in their work. Adobe also has a Google Chrome extension that helps identify these credentials in web content.
Google has adopted a new standard for content credentials in images and ads across Google Search as part of the Coalition for Content Provenance and Authenticity (C2PA), which Adobe co-founded. It also added a new section to image details on Google Search that highlights any AI edits, for “greater transparency.” And SynthID, Google’s AI content watermarking and identification technology, took a step forward: its text watermarking was released as open source for developers this year.
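For developers, the open-source piece is SynthID Text, which ships with the Hugging Face transformers library. Here’s a rough sketch of what applying the watermark during generation looks like; the model name and key values are placeholders, and details may vary across library versions.

```python
# Sketch: watermarking generated text with SynthID Text via
# Hugging Face transformers (available in recent versions).
# Model name and keys below are placeholder values.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

MODEL = "google/gemma-2-2b-it"  # any causal LM; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# The keys are private integers that define the watermark; a real
# deployment keeps them secret and reuses them for detection.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # length of the token n-grams the watermark biases
)

inputs = tokenizer(
    ["Write a short caption for a sunset photo."],
    return_tensors="pt",
)
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=50,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

The watermark subtly biases token choices during sampling, so the text reads normally but a detector holding the same keys can flag it later.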
Social media companies are also working on AI content labeling. People are twice as likely to encounter fake or misleading images on social media as on any other channel, according to a report from Poynter’s MediaWise initiative. Meta, the parent company of Instagram and Facebook, rolled out automatic “Made with AI” labels for social posts, and the tags quickly, wrongly marked photos taken by humans as AI-generated. Meta later clarified that the labels are applied when it “detects industry standard AI image indicators” and changed the label to read “AI info” to avoid suggesting an image was entirely generated by a computer program. Other social platforms, such as Pinterest and TikTok, have AI tags with varying degrees of success; in my experience, Pinterest is utterly flooded with AI, while TikTok’s AI tags are ubiquitous but easy to overlook.
Adam Mosseri, the head of Instagram, recently shared a series of posts on the subject, saying: “Our role as internet platforms is to label AI-generated content as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be AI-generated, so we also need to provide context about who’s sharing so you can judge for yourself how much you want to trust their content.”
If Mosseri has practical advice beyond “consider the source” (something most of us learn in high school English class), I’d love to hear it. More optimistically, his posts could hint at future product developments that give people more context, like X’s Community Notes. Features like these, alongside AI tags, will be even more important if Meta goes through with its experiment of adding AI-generated suggested posts to our feeds.
What we need in 2025
All of this is great, but we need more. We need consistent, glaringly obvious labels in every corner of the internet, not buried in a photo’s metadata but placed on the image itself (or directly above or below it). The more obvious, the better.
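As a toy illustration of what “placed on the image itself” could mean, here’s a short Pillow sketch; the file names and label wording are hypothetical, and a real platform would render the label in its interface rather than burning it into the file.

```python
# Sketch: stamp an unmissable AI label directly onto an image
# using Pillow. File names and label text are placeholders.
from PIL import Image, ImageDraw

def add_ai_label(src: str, dst: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(src).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw a solid banner across the bottom so the label can't be
    # missed (though a determined user could still crop it off).
    banner_height = max(24, img.height // 12)
    draw.rectangle(
        [(0, img.height - banner_height), (img.width, img.height)],
        fill=(0, 0, 0),
    )
    draw.text(
        (10, img.height - banner_height + banner_height // 4),
        text,
        fill=(255, 255, 255),
    )
    img.save(dst)

if __name__ == "__main__":
    add_ai_label("generated.png", "generated_labeled.png")
```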
There’s no easy solution here. This kind of online infrastructure will take a lot of work and collaboration among tech companies, social platforms and possibly government and civic groups. But that kind of investment in differentiating raw images from entirely AI-generated ones, and everything in between, is essential. Training people to spot AI content is great, but as AI improves, it will become increasingly difficult even for experts like me to judge images accurately. So why not make it super obvious and give people the information they need about an image’s origin, or at least help them make a better guess when they see something strange?
My concern is that this issue currently sits at the bottom of many AI companies’ to-do lists, especially as the tide of development seems to be turning toward AI video. But for the sake of my sanity and everyone else’s, 2025 should be the year we build a better system for identifying and labeling AI images.