
Prioritizing Trust in AI

Artificial intelligence and machine learning are already woven into the routines of daily life, from the voice assistants in our kitchens to the complex analytics that help businesses make decisions. They’ve reshaped how we access information, find answers, and even plan our days. Yet as we lean on these digital brains for more and more important choices, a big question hangs in the air: just how much trust should we place in these systems?

Rethinking Trust: Beyond Just “Getting It Right”

It’s tempting to believe that if an AI system consistently delivers accurate results, it must be reliable. But that’s not the full picture. Every AI model, no matter how advanced, faces uncertainty. Sometimes that’s because its training data was limited or inconsistent, or simply because the world is too complex to predict with perfect confidence. The answer you see is just one of many possible outcomes, and the model may be glossing over the others.

So how do we navigate this hidden layer of unpredictability? The answer lies in something called *uncertainty quantification* (UQ). UQ is a process that helps AI systems estimate not just the most likely answer, but also the range of other plausible outcomes and how confident the system is in its own predictions. Without it, users are left guessing about how much to believe what the AI says.
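
To make that concrete, here is a minimal Python sketch of the idea, using an invented ensemble of toy models (nothing here comes from the article): instead of returning one number, the system reports the mean prediction together with the spread across the plausible answers.

```python
import numpy as np

# A minimal sketch of the UQ idea: report not just the most likely
# answer but also how much the plausible answers vary. The ensemble
# of toy "models" below is invented purely for illustration.

def predict_with_uncertainty(models, x):
    """Return the mean prediction and the spread across an ensemble."""
    predictions = np.array([m(x) for m in models])
    return predictions.mean(), predictions.std()

# Three toy regressors that disagree slightly about the same input.
models = [
    lambda x: 2.0 * x + 0.1,
    lambda x: 2.1 * x - 0.2,
    lambda x: 1.9 * x + 0.3,
]

mean, spread = predict_with_uncertainty(models, 5.0)
print(f"prediction: {mean:.2f} ± {spread:.2f}")
```

The spread acts as a simple confidence signal: the more the ensemble disagrees, the less the single answer should be trusted on its own.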

Skipping Uncertainty Comes at a Cost

Take weather forecasting as an example. If tomorrow’s high is predicted to be 21°C, most of us take that at face value. But imagine if the forecast also told you there’s a real chance it could be 12°C, 15°C, or 16°C instead. That uncertainty changes how you’d plan your day.
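
A hedged sketch of what that richer forecast might look like in code, with an invented ensemble of temperature forecasts standing in for a real one:

```python
import numpy as np

# Hypothetical ensemble of forecasts for tomorrow's high (°C); the
# values are invented to echo the example above. A single headline
# number would hide the spread that this ensemble makes visible.
ensemble = np.array([21, 20, 15, 16, 21, 12, 19, 21, 16, 20])

headline = np.median(ensemble)
low, high = np.percentile(ensemble, [10, 90])
chance_cold = (ensemble <= 16).mean()

print(f"headline forecast: {headline:.0f}°C")
print(f"80% interval: {low:.0f} to {high:.0f}°C")
print(f"chance of 16°C or colder: {chance_cold:.0%}")
```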

In practice, though, uncertainty quantification is often skipped because it eats up a lot of computing power and makes systems more complex to design. But in high-stakes situations, like healthcare or autonomous vehicles, ignoring uncertainty can be dangerous. Doctors relying on AI for a diagnosis or treatment plan need to know how confident the system is, and where its blind spots might be. For self-driving cars, even a small margin of error could mean the difference between a near miss and a collision if the system doesn’t capture uncertainty in its calculations.

One of the oldest ways to estimate uncertainty is to run Monte Carlo simulations, which means running the same model repeatedly with slight changes in the input. This gives a sense of the probability distribution behind different outcomes. It’s reliable, but slow and resource-hungry — and since it builds in randomness, results can differ a bit from run to run even if you set up everything the same way.
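
A minimal sketch of that approach, assuming a stand-in model and Gaussian input noise (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng()  # deliberately unseeded: results vary per run

def model(x):
    """Stand-in for any deterministic predictive model (hypothetical)."""
    return 3.0 * x**2 + 1.0

def monte_carlo_uq(x, input_noise=0.05, n_runs=10_000):
    """Re-run the model with small random input perturbations and
    summarize the resulting distribution of outputs."""
    xs = x + rng.normal(scale=input_noise, size=n_runs)
    outputs = model(xs)
    lo, hi = np.percentile(outputs, [2.5, 97.5])
    return outputs.mean(), outputs.std(), (lo, hi)

mean, std, (lo, hi) = monte_carlo_uq(2.0)
print(f"mean {mean:.2f}, std {std:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Because the generator is unseeded, repeated runs give slightly different intervals, mirroring the run-to-run variation described above.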

Next-Generation Hardware: Raising the Bar

Now, new computing platforms are emerging that tackle these challenges head-on. Unlike traditional CPUs and AI accelerators, these new chips are built from the ground up to handle probability distributions as naturally as they handle basic arithmetic.

In finance, this means risk assessments like “Value at Risk” can finally use real-world market data directly, without creating synthetic samples. The results? Much faster, more accurate reads on risk. And it’s now possible to add uncertainty quantification into existing AI workflows — even retrofitting models that are already in production — with much less hassle.
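
To ground the “Value at Risk” mention, here is the textbook historical-VaR calculation, which works directly from observed returns with no synthetic samples. This is a generic illustration, not the specialized hardware approach the article describes, and the return series is invented.

```python
import numpy as np

# Historical Value at Risk (VaR) in its textbook form: take an
# empirical quantile of observed returns; no synthetic samples needed.
# The daily returns below are invented; a real series would slot in.
daily_returns = np.array([-0.021, 0.004, 0.013, -0.007, -0.032,
                          0.009, -0.011, 0.018, -0.004, 0.002])

confidence = 0.95
# VaR is the loss exceeded only (1 - confidence) of the time: the
# 5th percentile of the empirical return distribution, sign-flipped.
var_95 = -np.percentile(daily_returns, (1 - confidence) * 100)
print(f"1-day 95% VaR: {var_95:.1%} of portfolio value")
```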

A notable case: recent research showcased at NeurIPS 2024 found that these specialized platforms completed UQ tasks over 100 times faster than a conventional server running Monte Carlo simulations. That’s a leap not just in speed, but in practical usability.

Trustworthy AI: The Road Ahead

As AI systems take on greater roles in our lives, building real, justified trust isn’t optional — it’s a must. Uncertainty quantification should become a built-in part of every important AI deployment, right alongside transparency and explanations about how systems reach their decisions.

This isn’t just technical nitpicking — it’s something the public is asking for. According to a KPMG study, about three-quarters of people say they’d trust AI more if systems were transparent and provided confidence scores with their answers. As we all grapple with tough questions about the ethics, legalities, and wider impact of AI, making uncertainty quantification a standard is a crucial step toward earning public trust for the long run.

Read the original article here: https://www.unite.ai/prioritizing-trust-in-ai/

Published by Max Krawiec
