Image: A man looking at a robot

How AI can deliver long-term value

Sweden being a competitive nation in AI is an excellent vision. But it's important to remember that it's about quality, not quantity.
"We don't need the most AI, but the best used AI. We need to understand both the technology and society," says Kristina Knaving of RISE.

AI is already a big part of your everyday life, from suggesting routes in your car in real time to sorting spam from your inbox. But AI can also be applied to more socially beneficial challenges, such as medical analysis and climate change mitigation.

Experts broadly agree that AI has the potential to bring great benefits to both individuals and society. However, opinions differ on how quickly this should happen.

The debate lacks good meeting points between technology and society

"The speed of technological development is not necessarily a problem. The technology itself is often neutral. However, adoption is currently rapid and there is a lot of hype around AI. Of course we need to harness the positive effects of AI, but we also need to be aware of the risks," says Kristina Knaving, senior researcher at RISE.

She believes that the current debate on AI lacks a good meeting point between technology and social understanding.

"Because it's not really the technology itself that we want, it's the outcomes that we want. We want a society with prosperous, happy people. This is where Sweden should be a leader: we don't need the most AI, but the best used AI. We need to understand both the technology and society, and talk more about what we want and need," she says.

One difficulty when a new technology like AI arrives is maintaining a balanced and realistic view of what it can mean. "It's easy to exaggerate both the opportunities and the threats," says Peter Ljungstrand, senior researcher and marketing manager at RISE.

"It will take some time before we really understand its pros and cons, and what consequences we can expect. AI is an extremely powerful and disruptive technology, which means it changes and overturns many of today's truths," he says.

One of the aspects he considers important to monitor is the impact of AI on democratic values and critical thinking.

"There is a concentration of power in that there are a few companies that control generative AI and the big language models. Transparency is difficult and we cannot be sure that the training data used for the models really reflect democratic principles," says Kristina Knaving.

It's not really the technology itself that we want, it's the outcomes.

People's perception of reality driven by algorithms

AI also plays an important role in today's media world, where traditional TV and newspapers are losing ground.

"Much of our world coverage and perception of reality comes through digital channels that can, in principle, be freely altered, manipulated and customised based on algorithms. Potentially, AI can be used to influence by adapting the flow in a subversive way. It can simply threaten independent thinking," says Peter Ljungstrand.

But as Kristina Knaving points out, technology itself is essentially neutral, and it is how we use it that is interesting.

"It is becoming increasingly clear how important lifelong learning is to keep up with developments and to interact with new technologies in an informed and critical way. This is particularly the case with generative AI such as ChatGPT, as it is a consumer technology that will be used by masses of people and will therefore have a major impact on our society."

The highly complex issues involved in implementing AI make it natural for companies and organisations to be unsure of how to approach the new technology.

"They need to be prepared for it to be a journey of development for both employees and the organisation. Understanding AI and how to use it is not a one-day process. It's a learning process full of responsible experimentation with the goal of creating value," says Peter Ljungstrand.

"Experimentation almost inevitably means that mistakes will be made," he says.

"You have to have a culture that allows for that, and a process that ensures that the mistakes are small."

Solutions need to be tested and evaluated

"Knowing how to navigate development is something that RISE helps many organisations with," says Kristina Knaving.

"It can be anything from understanding needs and designing technical solutions, to training employees and managers to create a deeper understanding. An important starting point is to have an iterative approach to your own development; that you can't expect a ready-made solution, but that everything needs to be tested and evaluated," she says.

Peter Ljungstrand also stresses the importance of having multiple perspectives:

"This is such a multifaceted issue that you need to be able to see things from many different angles in order to be able to monitor possible positive and negative consequences. This is very difficult for a single organisation, which is why RISE's interdisciplinary working methods can be valuable in this context," he says.

Three steps: How to start implementing AI

1. Identify needs and build a foundation

  • Identify the problems you want to solve and why AI is an appropriate solution.
  • Gather insight into user needs through workshops, observations and interviews.
  • Focus on understanding how AI can add value to your specific business (rather than using the technology just because it is available).

2. Experiment on a small scale

  • Use policy sandboxes and testbeds to explore technology under controlled conditions.
  • Test solutions at a smaller scale to ensure they work for end users and contribute to desired outcomes.

3. Build knowledge and interdisciplinary collaboration

  • Build a diverse team that includes technical experts as well as people with expertise in the humanities and social sciences.
  • Invest in ongoing education and training to ensure people understand the technology.
  • Work iteratively and be open to adjusting the strategy based on insights and lessons learned.
Contact person

Kristina Knaving

Focus Area Leader, The Connected Individual; Senior Researcher and Interaction Designer

+46 73 030 19 86

Contact person

Peter Ljungstrand

Marketing Manager