“AI also raises questions of fairness, accountability and transparency. Because it is trained on masses of historical data, which may already be skewed towards or against particular groups, AI can deepen structural biases in society. Some AI models also act as black boxes, failing to provide clear explanations of the results they produce. And even as AI-assisted decision-making is encouraged, it remains unclear who is responsible for those decisions – and whether AI can be held accountable at all. A McKinsey survey reports that the share of organisations using AI surged from 50% in 2022 to 78% by July 2024 (roughly a 1.5-fold increase). It is plausible that rising usage brings a parallel rise in AI-related incidents, and indeed the OECD’s AI Incidents and Hazards Monitor (AIM) reports incidents and hazards doubling over the same period. A majority relate to threats to accountability, transparency and human well-being. While the latter poses a direct risk to the UN’s Sustainable Development Goals, a lack of accountability and transparency among AI actors also risks eroding trust in society and the ability to make informed decisions.”
Morten W. Langer