
Watermarking of AI-generated information

13 September 2023, 09:20

How can we verify information and determine whether it is true and correct?

AI is here to stay. It will not take our jobs away, but some occupational categories will change, and we can already see that today. More and more of the information being produced is AI-generated, and it is becoming increasingly important to ensure that AI remains an asset to society rather than a threat. We are fed information daily, and whether what we consume is correct and true matters for the media, schools, politics, individuals and, not least, research. Unfortunately, nothing says that a text is correct just because it was produced by a human. Disinformation and false texts, videos and images are threats that can affect society and undermine trust.

Is watermarking a solution?

One proposed solution is watermarking of text generated by language models such as ChatGPT. With tools that can detect whether a text was written by AI, one could verify the sender of a text, AI versus human, which could be an important aid for, for example, teachers who must grade school assignments. To make this possible, a language model can be trained on data where AI-generated and human-written text are compared, something that is already being tested but is not yet one hundred percent reliable. One problem is that watermarking only works for systems covered by a requirement to use it. Texts created in countries that do not require this, or by actors that do not care, will still be in circulation. It is likely to be very difficult, perhaps even impossible, to automatically classify information generated by a system you have not been able to test.
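To illustrate the general idea, statistical watermarking schemes described in the research literature (for example the "green list" approach) let the generating model favour a pseudo-randomly chosen subset of words at each step; a detector can then check whether a text contains suspiciously many such words. The sketch below is a toy illustration of that detection step only, with a word-level tokenizer and a 50/50 green/red split as simplifying assumptions; it is not the method used by ChatGPT or any other specific system.

import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary treated as "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudo-randomly assign `token` to the green list, seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(text: str) -> tuple[float, int]:
    # Fraction of tokens that fall on the green list, plus the number of token pairs checked.
    tokens = text.lower().split()  # crude word-level tokenizer, an assumption of this sketch
    if len(tokens) < 2:
        return 0.0, 0
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, tok) for prev, tok in pairs)
    return hits / len(pairs), len(pairs)

# A watermarking generator would bias its word choices toward green tokens; a detector
# then tests whether the observed green rate lies suspiciously far above GREEN_FRACTION.
rate, n = green_rate("an example text whose origin we would like to check")
if n > 0:
    z = (rate - GREEN_FRACTION) * n**0.5 / (GREEN_FRACTION * (1 - GREEN_FRACTION))**0.5
    print(f"green rate {rate:.2f} over {n} pairs, z-score {z:.2f}")

For an unwatermarked text the z-score should hover around zero; a high value only indicates a watermark from a system that uses this exact scheme and key, which is why such detection cannot cover models outside the reach of a watermarking requirement.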

In addition to watermarking, there are also proposals to require chatbots to identify themselves as bots (the Schibbolet rule), so that those interacting with them know they are not in contact with a human.

What is RISE doing on this issue?

— RISE conducts research on AI itself, but also has research groups working on AI and ethics, the societal impact of AI, media criticism and, last but not least, how individuals and organizations can best benefit from AI's potential, says Kristina Knaving, researcher in AI and ethics at RISE.

— The best protection we have against misinformation is a well-functioning democratic society, with education, security and access to good information. There is already a lot of fake news and unverified information on the internet. What will happen with AI is that the amount of fake texts, images and videos will increase rapidly, and it will then be even more important that organizations and individuals know how to find verified information, Kristina continues.
 

Other efforts to verify information

Other methods of ensuring that information is correct are used in journalism, among other fields. Newsrooms are adopting AI policies to ensure that everything published is ultimately reviewed and quality-assured by a human, including material produced using, or based on, AI. AI is often used for news summaries, subtitling videos and transcribing audio to text.

Increased awareness is important

As individuals, we also need to learn to be critical of sources. Here schools have an important role, something that RISE affirms in the AI Agenda (proposal 15). It is also good for organizations to develop their own approach to AI, both by defining which areas it will affect and what internal policy they want to have on these matters.

Contact RISE

Kristina Knaving

Focus Area Leader for The Connected Individual, Senior Researcher and Interaction Designer

+46 73 030 19 86


Sverker Janson