
Nordic version of GPT-3 being developed

ChatGPT has taken the world by storm. A similar model, trained for Nordic languages, is currently being developed. 

The tech world, and indeed the rest of the world, has been bowled over by OpenAI's new ChatGPT language model. Not only can it answer questions and conduct long dialogues with people, it can also produce its own texts on everything from baking to fusion.

– It's this ability to have long dialogues that really sets it apart from other language models, says Joakim Nivre, AI researcher at RISE.

Important that the technology is available for more language areas

He and his colleagues are researching basic algorithms for linguistic AI, as well as how such algorithms can best be applied to practical tasks and problems involving linguistic data. Together with AI Sweden, WASP and NVIDIA, they are also developing a Nordic version of OpenAI's other language model, GPT-3.

– Although ChatGPT can understand and write Swedish, it is more proficient in English. We believe it is important that this technology is also available for other language areas, such as the Nordic languages.

The largest model that RISE and its partners have developed so far contains 20 billion parameters, compared with GPT-3's 175 billion.

– But right now we are training a model with 40 billion parameters that will be ready in early 2023, and by the summer we expect to have a finished version as large as GPT3. And that's unique for a language as small as Swedish.

Requires huge amounts of data

The reason is money. Training large language models requires huge amounts of data and computing power, which is expensive. In addition, extremely powerful computers are needed to run the models, which puts them out of reach for most ordinary companies.

– That's why we're also researching how these language models can be made more resource-efficient, so that more people can use them.

But what should these models be used for? Some have suggested that they could replace search engines like Google or encyclopaedias like Wikipedia, but Joakim Nivre doesn't think so.

– You have to remember that these models don't search for information in real time; they only draw on the text they were trained on. ChatGPT is indeed trained on huge amounts of text and data, but only material produced up to 2021. It knows nothing about events more recent than that.


Will write drafts and complement search engines

Nor does ChatGPT know everything about what happened before 2021, and sometimes it gets things wrong.

– I conversed with it yesterday and asked what it knew about famous Swedish authors. First it brought up Astrid Lindgren, but then it claimed that Gustav Vasa was one of Sweden's most esteemed writers, so you really have to pay attention.

Instead, Joakim Nivre believes that in the future ChatGPT will be used to produce drafts of text that people can then check and edit, and that it will be used in combination with search engines to access information in real time.

– It's so expensive to train the models that I think these hybrid systems will be a trend in the future.
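For illustration only, here is a minimal sketch of what such a hybrid system could look like: a search step supplies up-to-date text, and the language model writes a draft from it for a person to check and edit. The function names, prompt format and example data below are hypothetical placeholders, not any specific product's API.

def search_web(query: str) -> list[str]:
    # Placeholder: a real system would query a search engine or document
    # index and return the most relevant snippets of current text.
    return ["Example snippet retrieved for: " + query]

def generate_text(prompt: str) -> str:
    # Placeholder: a real system would call a large language model here.
    return "Draft answer based on the prompt:\n" + prompt

def answer_with_retrieval(question: str) -> str:
    # 1. Retrieve current information the model was never trained on.
    snippets = search_web(question)
    # 2. Ask the model to write a draft grounded only in the retrieved text,
    #    which a person can then check and edit.
    prompt = ("Using only the sources below, answer the question.\n\n"
              "Sources:\n" + "\n".join(snippets) + "\n\nQuestion: " + question)
    return generate_text(prompt)

print(answer_with_retrieval("What happened in the news today?"))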


Contact person

Joakim Nivre

Researcher

+46 10 228 44 44
