
ING: AI Monthly: AI's habit of making things up is a growing concern

Oscar M. Stefansen

Wednesday, 19 November 2025, 16:01

Summary:

The use of AI is growing, but research shows that some models produce incorrect statements in up to 40% of cases. With features such as deep reasoning and web access, AI often behaves overconfidently, which can lead to misinformation, known as "AI hallucinations". Modern models prioritise fluent language over accuracy to increase user satisfaction, resulting in confident but often incorrect answers. This is problematic in fast-moving fields such as politics and health, where precision is essential. While newer and larger models show improvements in factuality, performance varies considerably between them. Two recent cases illustrate the significant financial and reputational damage that can follow from AI errors. To avoid misunderstandings and misinformation, AI-generated statements should be assessed critically.

From ING:

With great power comes great responsibility, and AI is no exception. As usage of and trust in AI models grow, ensuring their accuracy is critical. Yet research shows that some models produce incorrect statements in more than one-third of responses.

Are AI models becoming overconfident?

Modern AI models include features like deep reasoning, long-term memory, and autonomous agents that can browse the web or perform tasks with minimal human input. To perform these tasks, the models require vast amounts of data, which increases reliance on uncontrolled and unverified data sources. This increased exposure can lead to behaviour that resembles overconfidence, a cognitive bias where confidence in one’s knowledge or abilities exceeds actual accuracy.

A recent study by the European Broadcasting Union (EBU) highlights the consequences of this: leading AI systems generate false claims at a rate of up to 40%. This high rate coincides with a change in model behaviour. Previously, AI systems would decline to answer questions about events outside their training data. Newer systems with web access are designed to respond more frequently, even when they are uncertain or have insufficient information. While this improves user engagement, it also leads to more fabricated output. We refer to these cases as "AI hallucinations". However, the models often deliver such responses with strong confidence, creating the impression that they are unquestionably correct.

Models are prioritising fluency over accuracy

There are several reasons why AI hallucinations are still common, even with newer models. One reason is that users may ask vague or complex questions, which the model cannot easily interpret. This often prompts the model to “fill in the blanks” using statistical patterns to deliver what appears to be a complete answer. While these responses are intended to be helpful, they can result in factually wrong statements.
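To make the "fill in the blanks" mechanism concrete, here is a deliberately simplified sketch, not a real language model: a toy completer that only knows how often each continuation followed a prompt and samples the statistically most common one, whether or not it is true. The prompt and counts are invented for illustration.

```python
import random

# Toy sketch of pattern-based completion (not a real LLM). The "model" only
# stores how often each continuation followed the prompt in its training
# data; it has no notion of whether a completion is factually correct.
continuation_counts = {
    "The company was founded in": {"1999": 7, "2004": 2, "1875": 1},
}

def complete(prompt: str) -> str:
    options = continuation_counts[prompt]
    words, weights = zip(*options.items())
    # Weighted sampling by frequency: plausibility decides, not truth.
    return f"{prompt} {random.choices(words, weights=weights)[0]}."

print(complete("The company was founded in"))  # fluent, confident, possibly wrong
```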

Additionally, the models are often fine-tuned using human feedback, meaning they prioritise answers that sound confident and helpful. This results in a bias towards confident-sounding but inaccurate statements rather than cautious or uncertain responses.
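As a rough illustration of how this bias can arise, consider the invented scoring rule below. It stands in for a reward signal learned from human feedback: if raters systematically prefer assured, complete-sounding answers, the learned signal ends up penalising hedging independently of factual accuracy. The heuristics are made up for illustration only.

```python
# Invented toy "reward" standing in for a model learned from human feedback:
# hedged answers score lower than confident ones, regardless of truth.
HEDGES = {"might", "may", "possibly", "unsure", "uncertain"}

def toy_reward(answer: str) -> float:
    words = [w.strip(".,;").lower() for w in answer.split()]
    hedge_penalty = sum(w in HEDGES for w in words)
    # Longer, hedge-free answers score higher, true or not.
    return len(words) - 3.0 * hedge_penalty

confident = "The report was published in 2021 and covers all member states."
cautious = "I am unsure; it may have been published around 2021."
print(toy_reward(confident) > toy_reward(cautious))  # True
```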

This issue is further exacerbated by the declining “no response rate”. While older models refused to answer almost 40% of queries, newer models are designed to answer virtually every request. In fast-moving fields such as politics and health, where accurate information is crucial, this shift from accuracy to fluency poses serious misinformation risks. These structural biases are technically embedded in modern AI systems and reflect the preference for confidence over accuracy.
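The "no response rate" itself is straightforward to compute. A minimal sketch, with the refusal markers and sample answers invented for illustration:

```python
# Toy sketch of the "no response rate" metric: the share of queries a
# model declines to answer. Markers and data are invented examples.
REFUSAL_MARKERS = ("i don't know", "i cannot answer", "insufficient information")

def no_response_rate(answers: list[str]) -> float:
    declined = sum(
        any(marker in a.lower() for marker in REFUSAL_MARKERS) for a in answers
    )
    return declined / len(answers)

older_model = ["I don't know.", "Paris.", "I cannot answer that.", "1875.", "I don't know."]
newer_model = ["Paris.", "1999.", "2004.", "1875.", "Berlin."]
print(no_response_rate(older_model))  # 0.6 -- frequent refusals
print(no_response_rate(newer_model))  # 0.0 -- answers everything, right or wrong
```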

What’s at stake

As we reported in a previous edition of AI Monthly, investments in AI are at an all-time high. However, trust and continued investment could be at risk if even the most sophisticated models are found to deliver false information. AI is increasingly used to gather information about current events, particularly among younger users. In fact, 15% of individuals under 25 report using AI chatbots as a primary source of news. Given the rising usage of AI both privately and in businesses, accuracy should be a priority.

Recent research shows that factuality improves with newer, larger models, suggesting that further scaling may help address the challenge. Yet performance still varies widely. Some models produce incorrect statements in more than one-third of responses, whereas grounded models, which connect to external sources of truth, can reduce this figure to around one in 10. This emphasises the importance of selecting the right model. Despite these improvements, businesses are still at risk of reputational damage or legal issues due to factually incorrect answers.
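A minimal sketch of what "grounding" means in practice, assuming a simple retrieval step: the answer must be supported by a retrieved source, and the system declines when nothing matches. Real systems use embedding-based search over large corpora; the source store, matching rule, and function name below are invented for illustration.

```python
# Invented toy example of a grounded answering step: the answer must quote a
# retrieved source, and the system declines when nothing supports the query.
SOURCES = {
    "ebu-2025": "Leading AI systems generate false claims at a rate of up to 40%.",
}

def grounded_answer(query: str) -> str:
    # Naive retrieval by word overlap; real systems use embedding search.
    hits = [
        text for text in SOURCES.values()
        if any(word.lower() in text.lower() for word in query.split())
    ]
    if not hits:
        # Declining beats fabricating: accuracy is favoured over fluency.
        return "No supporting source found; declining to answer."
    return f"According to the retrieved source: {hits[0]}"

print(grounded_answer("What rate of false claims did the study find?"))
print(grounded_answer("Who won the election yesterday?"))  # declines
```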

The level of factuality varies greatly between AI models

[Chart: FACTS factuality score, in % of total answers. Source: FACTS leaderboard, 2025]

Two recent cases have highlighted the impact inaccurate or fabricated output can have on businesses and individuals. In one case, an international advisory firm had to refund the government for a report containing AI errors. A judge was misquoted, and non-existent references were cited.

In another case, a senior judge criticised lawyers who had used AI tools to prepare written arguments referencing fake case law. He called on regulators and industry leaders to ensure lawyers know their ethical obligations. Both cases resulted not only in financial but also reputational damage.

Awareness is vital

In Germany, there’s a saying: “stiffly claimed is already half proven” – but confidence does not automatically translate into correctness. AI’s low level of accuracy shows that its chances of replacing entire professions in the near future are rather low, unless those who practise the profession rely entirely on the (false) information that AI feeds them. AI-generated statements should be treated with the same critical mindset as human claims.
