Trustworthy AI with Quality-Assured Measurements and Validated Models

Traceable measurements, uncertainty analysis, and model validation are crucial for greater adoption and trust in AI models within industry and business. Here, measurement technology can contribute to reliable results in areas such as predictive maintenance.

Traditionally, metrology focuses on maintaining traceability in measurements: through unbroken chains of calibrations, measurements are kept reliable and comparable.

"For AI models to be trustworthy, traceable and quality-assured measurements are needed as inputs. This makes model predictions more reliable," says Olle Penttinen, a researcher in volume and flow, specializing in IoT and AI solutions within metrology and predictive maintenance.

For measurements to be traceable and quality-assured, taking measurement uncertainty into account is essential.

"Measurement uncertainty is important in measurement technology because all measurements contain errors. We have valuable knowledge in uncertainty analysis, which can also be applied to AI and machine learning," says Olle Penttinen.

Predictive Maintenance

He has experience from AI projects in predictive maintenance, where the goal is to use machine learning to predict where maintenance needs exist, thereby preventing damage before it occurs.

"To achieve this, we need to understand the uncertainties both in the factors used to train the models and in the model's predictions. Just as with physical measurements, we should analyze how different factors affect the total uncertainty," says Olle Penttinen.

We need to be able to trust the results, and just as measuring instruments need calibration, AI models need validation

Trusting AI

For companies to take the leap and use AI to, for example, optimize their production or maintenance, trust is needed in the quality of the predictions. Ensuring this requires model validation. What uncertainty exists in the predictions produced by the model?

"We need to be able to trust the results, and just as measuring instruments need calibration, AI models need validation," says Olle Penttinen.

As an independent research institute, RISE is in a unique position to work on validating AI models. Validation involves examining the outcomes or predictions generated by a model and determining how accurate they are through measurements and experiments.

"Sometimes it's difficult to fully understand how models function internally, but ultimately, it’s the prediction output that matters. To confidently use the technology, we need to know how accurate the predictions are, that the models are robust, and that the results are reproducible," says Olle Penttinen.

Correlation and Causality

Considering the advancements in large language models over recent years, it’s not surprising that there’s sometimes wishful thinking about what AI can achieve.

"It’s easy to believe that AI can magically solve the impossible. However, in industrial and business applications, language models are often not used; instead, simpler models are applied. Here, the amount of training data is smaller, while the demand for prediction quality may be higher," says Olle Penttinen.

Higher quality in predictions with less training data? To achieve this, an understanding of which factors influence the outcome is essential. Otherwise, a variable with only an indirect influence may be overinterpreted. There is a difference between correlation and causality: just because two things are related does not mean that one causes the other.

"For instance, say we use an AI model to predict the probability of damage to individual pipeline segments in a water or district heating network. The training is based on a maintenance log where various pipes have been dug up over the years, with several factors recorded for the replaced pipes, such as pipeline length. Including pipeline length in the training could potentially improve the model’s precision—there is a correlation between pipeline length and the likelihood of damage. However, this provides a potentially weaker decision support for the network owner, as the causality is absent. The damage isn’t due to the pipeline length itself," says Olle Penttinen.

Connecting Input Data to Outcome

When using so-called "supervised learning," as in the example above, it’s necessary to link the factors the model is trained on to the outcome. In the case of the pipelines, the same parameter must be present in information about both the damaged and undamaged pipes.

"Acoustic methods, for example, can estimate wall thickness in a pipeline network. But if wall thickness data is missing for damaged pipes, it becomes challenging to link it to damage risk. For the model to predict outcomes accurately, there must be a clear connection between the input data and the result," says Olle Penttinen.

Challenging to Handle Data – But Measure Anyway

Managing data is a time-consuming part of AI model development and is highlighted as one of the central areas for further development within the Advanced Digitization program, which Vinnova runs in partnership with industry. Despite the challenges of handling data, Olle Penttinen is clear that more data is beneficial:

"Do not be afraid to measure and record parameters that can indirectly influence the quality of a product or process, such as ambient temperature or humidity. The day will come when these measurements are sought after as input for new models and research ideas," says Olle Penttinen.

Supervised Learning and Unsupervised Learning

Supervised learning involves training the model on examples with correct answers, which it uses to learn to predict answers for new examples. For instance, a model can learn to recognize dogs by training on images labeled as containing dogs or not.

In unsupervised learning, the model tries to find patterns or groupings in data without any predetermined answers. This might involve grouping customers based on their purchasing behaviors without predefined customer groups from the start.
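
A minimal sketch of both modes, using scikit-learn and made-up data: a classifier trained on labeled examples, and a clustering model that groups customers without any labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: features paired with known answers (labels).
X_labeled = np.array([[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]])
y_labels = np.array([0, 1, 0, 1])           # the "correct answers"
clf = LogisticRegression().fit(X_labeled, y_labels)
print(clf.predict([[0.85, 0.75]]))          # predicted answer for a new example

# Unsupervised learning: no answers, the model looks for groupings itself.
# Hypothetical customer data: orders per year and yearly spend.
purchases = np.array([[5, 200], [40, 1500], [6, 250], [45, 1700]])
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(purchases)
print(groups)                               # customer groups found by the model
```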

 

Contact person

Olle Penttinen
Researcher

+46 10 516 50 47
