Peter Nordström
Researcher
4 April 2023
In the fall of 2022, OpenAI announced its AI-based chatbot ChatGPT, and with it began a new era in which big companies and chatbots compete for our attention and data. The rapid development of the technology creates many opportunities but also brings a number of new risks that you and your organization should be aware of. Below we list some key risk areas, along with a number of questions you should consider before using an AI-based chatbot or other language model in your work.
It is important to be aware that any information fed into an AI-based chatbot or language model can be stored, analyzed and incorporated into the underlying dataset. In the worst case, unpublished material may be disseminated further and research findings or trade secrets revealed. What information can you safely input, and what information should you keep to yourself?
New language models make it easier to generate credible text in any language, increasing the risk of phishing and fraud. You should therefore be extra vigilant about emails and text messages from unknown senders, and avoid clicking on links and opening attachments. How can you determine whether a sender is trustworthy when language and content are no longer much of an obstacle for potential fraudsters?
AI-based language models can be used to generate fake news, research reports and other misleading material. It is often difficult to distinguish machine-generated text from human-written text, which makes critical evaluation of sources all the more important. How do language models affect your ability to distinguish true from false, and in what contexts are you at risk of being exposed to false messages?
AI models have no opinions of their own, but they may have been trained on data that reflects prejudices and biased representations of, for example, gender, age, ethnicity, language and religion. These biases can then influence the content of the generated text. How is your work affected by potential biases, and when should you take this into account?
Language models have no factual knowledge of their own; they rely on statistics and patterns in the data they were trained on. Even if a generated text looks credible, it may not be correct. Do you have the ability to fact-check every claim, and what are the consequences (for you and your organization) if an important claim turns out to be wrong?
Parts of a generated text may be drawn from earlier publications, ideas or patents, yet claims are often presented without reference to the source, or, in the worst case, with incorrect or invented sources. How would your work be affected if the model copied, plagiarized or paraphrased someone else's work?
AI models will increasingly be used to write texts such as emails, news articles, CVs, scientific publications and applications. Often the result is good, but sometimes it is obvious that the text was not written by a human. How will your credibility, and that of your organization, be affected if the recipient suspects that all or part of a text was written by an AI model?