Labelling Obligation for AI-Generated Texts and Images?

Written by Florian Schafroth

23 May 2023

When I wrote the review of the past year and the preview of this year for our blog, I was, I have to be honest, not aware of the momentum that the topic of artificial intelligence would gain in public discussion as well as in technological development.

Sure, I listed AI as a trend for 2023 and mentioned the first successful examples of interviews written by ChatGPT and the like; but at the end of 2022 I had no idea how quickly we would be grappling with AI and its possible impact, positive and negative, as early as the first quarter of 2023.

I think the fundamental debate about whether AI will turn out to be a nightmare or an evolutionary leap has been conducted exhaustively. What I feel has so far been neglected in the discussion, especially in the PR and media world (although relevant industry media, and we here on the blog, have covered it quite thoroughly), is the topic of transparency.

Because one thing is clear: one of the biggest challenges ahead will be distinguishing false information, created by generative AI in text, images and video, from accurate information.

That is why I read with great interest an interview in PR Magazine with Thomas Klindt, an expert in litigation PR at the law firm Noerr. In it, he says, among other things, that “we […] will need AI forensics and AI detectives who check the material at hand for AI editing”, that “we [will] label every little thing […]”, and that it is only a matter of time “[…] until we will also get a labelling obligation for AI-generated texts and images.”

Transparency for AI-generated content is a top priority

I think Thomas Klindt has put his finger on a sore spot for our industry, communications professionals and journalists alike: if we use AI, and we increasingly will, we must label AI-generated content accordingly.

How this can be implemented in practice (Who sets the rules? How will AI-generated content be labelled? By which body, and with which technology? Etc.) must of course first be discussed and worked out by experts.

Nevertheless, PR professionals and journalists alike should handle AI-generated content transparently from the very beginning and inform recipients, clients, employees or partners whenever ChatGPT and the like were used in the process. This is the only way to prevent the possible negative effects of AI, especially around fake news, on our profession.

The discussion about transparency and AI has already begun. Recently, the Bavarian Journalists’ Association BJV criticized Burda for publishing a special issue produced entirely with generative AI without labelling it as such. Let’s discuss!

Cover image: Jorge Franganillo on Unsplash