The EU’s forthcoming regulation on products and services that utilise artificial intelligence is detailed and far-reaching. Experts at RISE have reviewed the technical and legal aspects of the proposal and see new burdens on those whose AI systems are suddenly classified as high risk.
“The new regulation, called the AI Act, is the world’s first specific regulation pertaining to AI and may be passed in the spring of 2023 under Sweden’s EU presidency,” says Susanne Stenberg, a researcher at RISE and a legal expert in new technologies. “It would then begin to be applied two years later.”
So, why is this needed? Firstly, the EU wants to demonstrate what “Good AI” should be: AI guided by the values of Western society and by fundamental rights. Then there is the realisation that AI systems inherently pose a risk and can cause damage, and therefore need to be regulated.
“The regulation is under development and more compromise proposals may be included,” explains Stenberg. “Therefore, once it comes into force, the rules may not be exactly as proposed by the Commission, and we must be prepared for that.”
Once the regulation is approved, no further national legislation will be required since the new rules are applicable in all EU countries. AI systems introduced after the proposal becomes law are covered, as are existing systems that are updated or substantially modified.
What is an AI system? Many things, according to the regulation wording, which lists technologies such as machine learning, logic- and knowledge-based systems, and statistical methods.
“They have defined AI systems very broadly,” says Håkan Burden, a researcher in AI and IT systems at RISE.
An AI system receives input and then does some sort of analysis in relation to a goal set by a human. The system then produces output, which can affect its surroundings.
“One example is a control system in a mining machine. Sensor data shows that there is an obstacle to the right. That’s not good: a human-set goal is to avoid collisions, and the output of the system tells the wheel axle to turn left.”
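As a rough illustration of the input, goal and output steps in Burden’s example, the sketch below hard-codes the decision rule; the sensor reading and steering names are invented for illustration and are not taken from any real mining-machine control system.

```python
# Minimal sketch of the input -> analysis -> output loop described above.
# All names (SensorReading, avoid_collision, Direction) are illustrative
# assumptions, not part of an actual control system.

from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    LEFT = "left"
    RIGHT = "right"
    NONE = "none"

@dataclass
class SensorReading:
    obstacle_direction: Direction  # where the sensor data reports an obstacle

def avoid_collision(reading: SensorReading) -> Direction:
    """Analysis step: given the input and the human-set goal
    (avoid collisions), decide which way to steer."""
    if reading.obstacle_direction == Direction.RIGHT:
        return Direction.LEFT   # obstacle to the right -> steer left
    if reading.obstacle_direction == Direction.LEFT:
        return Direction.RIGHT  # obstacle to the left -> steer right
    return Direction.NONE       # no obstacle -> keep course

# Output step: the decision is passed on to the actuator (the wheel axle).
print(avoid_collision(SensorReading(Direction.RIGHT)))  # Direction.LEFT
```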
Another example is a system that ranks preschool applicants according to the length of their journey and whether they have a sibling already attending the preschool. This is a support system for carrying out a municipal function. An algorithm uses the location of the preschool and the distance from the applicant’s home address, as recorded in the population register, to compute a value, which is then used to build the queue according to established terms and conditions.
“It’s a very simple algorithm, but because the right to attend school is a fundamental right, it could be classified as a high-risk system,” says Burden.
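A hedged sketch of what such a ranking rule might look like follows; the sibling-first, then shortest-distance ordering is an assumption made for illustration, not the municipality’s actual terms and conditions.

```python
# Minimal sketch of a distance-and-sibling ranking rule, assuming a simple
# policy: siblings first, then shortest journey to the preschool. The
# Applicant fields and the ordering are illustrative assumptions, not the
# actual municipal algorithm.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    distance_km: float   # home address (population register) to preschool
    has_sibling: bool    # sibling already attending the preschool

def queue_order(applicants: list[Applicant]) -> list[Applicant]:
    """Build the queue: sibling priority first, then shortest distance."""
    return sorted(applicants, key=lambda a: (not a.has_sibling, a.distance_km))

applicants = [
    Applicant("A", distance_km=2.4, has_sibling=False),
    Applicant("B", distance_km=5.1, has_sibling=True),
    Applicant("C", distance_km=1.0, has_sibling=False),
]
for place, applicant in enumerate(queue_order(applicants), start=1):
    print(place, applicant.name)  # B, C, A
```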
The proposal for a new regulation assigns AI systems to different risk categories:
High-risk AI systems are proposed to be subject to specific requirements for oversight and registration in order to be used within the EU. These include technical documentation requirements, the implementation of a risk management system, and data management and quality requirements. When an AI system provider asserts that it has satisfied the requirements of the regulation, it can CE-mark the system. However, to deter CE marking on incorrect grounds, hefty fines of up to 6 percent of global turnover are proposed.
Third parties in the AI value chain – i.e. producers of software or pre-trained models and data, network service providers and so on – are called upon in the regulation text to cooperate “as appropriate” to facilitate the certification of AI systems.
“Reaching consensus on ‘as appropriate’ will not be a trivial matter,” says Stenberg.
Stenberg and her colleague Håkan Burden recognise that different operators have different needs, while authorities and product developers have a common interest in getting functioning processes for documentation and certification in place.
“Consider a municipal administration where you’re going to procure software systems for water, roads, electricity – these are high-risk systems. And when a purchasing department must set requirements for what is needed, how do you obtain information showing that the data used to train the system is relevant, representative, free of errors and complete?”
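As a rough, hedged sketch of the kind of automatable documentation a buyer might ask a supplier to provide alongside the legal assurances, the checks below report simple completeness figures for a training dataset; the column names and the pandas-based approach are assumptions made for illustration, not requirements taken from the AI Act.

```python
# Hedged sketch: basic, automatable checks a procuring authority might
# request documentation of. The example columns and the pandas-based
# checks are illustrative assumptions, not an actual procurement checklist.

import pandas as pd

def basic_data_checks(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Report simple completeness and duplication figures for a training dataset."""
    return {
        "has_required_columns": all(c in df.columns for c in required_columns),
        "rows": len(df),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }

df = pd.DataFrame({"distance_km": [2.4, 5.1, None], "has_sibling": [0, 1, 0]})
print(basic_data_checks(df, ["distance_km", "has_sibling"]))
```

Checks like these do not settle whether the data is relevant or representative – that remains a judgement the purchasing department has to document – but they give the procurement dialogue something concrete to start from.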
“If the AI Act is passed in 2023 and goes into effect two years later, you need to think about this yesterday.”