Adrien Book

Who Ordered the Apocalypse with Extra Taxonomies?

AI Risk Taxonomy

As of today, over 3,000 incidents of AI-related harm have been documented, covering everything from biased decision-making to financial scams. AI isn’t just making errors… it’s doing so at scale, amplifying biases and embedding discrimination into the digital fabric of our lives. There is clearly a need for a cohesive, shared understanding of what these risks are and how they happen. If we don’t clearly understand the risks, we cannot regulate and mitigate them.

But how can we make sense of the many risks AI poses, when its use is so disseminated?

Enter the “AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence”. Written by a group of researchers from MIT, the Future of Life Institute, and several universities, this paper aims to tackle the confusion by organizing 777 (!) identified risks into a living, structured database. Led by Peter Slattery, the team provides a categorized list of risks that may one day create a common frame of reference for researchers, policymakers, and industry leaders.

Summary of the AI Risk Repository

The research project set out to solve a fundamental problem: there are too many conflicting ways of categorizing AI risk, and no one agrees on a standard. To fix this, the team created the Repository to include both a high-level “causal taxonomy” and a mid-level “domain taxonomy”. If that sounds like creating more categories because there are too many categories, trust your instinct.

How standards proliferate

The high-level Causal Taxonomy categorizes each risk by its causal factors: who or what caused it (AI or human), whether it was intentional or unintentional, and whether it happened pre-deployment or post-deployment. The Domain Taxonomy sorts the risks into seven major domains, like discrimination, privacy, misinformation, or socioeconomic harm, which are further divided into 23 subdomains (cross those with the causal taxonomy and you get hundreds of possible category combinations).
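
To make the two-layer scheme concrete, here is a minimal sketch of how a single incident might be tagged under both taxonomies. The field names and the example entry are my own illustration, not the repository's actual schema, and the domain/subdomain labels are approximate.

```python
from dataclasses import dataclass

# Illustrative only: field names and values are my own shorthand,
# not the repository's actual schema.
@dataclass
class RiskEntry:
    description: str
    entity: str      # causal taxonomy: "AI" or "Human" (or "Other")
    intent: str      # "Intentional", "Unintentional" or "Other"
    timing: str      # "Pre-deployment" or "Post-deployment"
    domain: str      # one of the seven domains (name approximate)
    subdomain: str   # one of the 23 subdomains (name approximate)

example = RiskEntry(
    description="Hiring model downgrades CVs from certain zip codes",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="Discrimination & toxicity",
    subdomain="Unfair discrimination and misrepresentation",
)
print(example)
```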

The methodology used to get there is pretty thorough: consultations with experts, systematic database searches, rounds of coding to ensure that no major categories were missing, etc. The result? A structured, accessible, and (most importantly) public database of the many ways AI can go horribly wrong. They even provided a website and online spreadsheets where users can filter, modify, and add to the risk database.
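
Because the database is published as a spreadsheet, it is easy to pull into your own analysis. A rough sketch, assuming you have exported the risk sheet to a local CSV and that it has columns along the lines of "Domain" and "Timing" (the file name and column names below are assumptions; check the actual export first):

```python
import pandas as pd

# Assumes a local CSV export of the repository's risk sheet.
risks = pd.read_csv("ai_risk_repository.csv")

# Example: keep only post-deployment risks in a given domain.
post_deployment = risks[risks["Timing"].str.contains("Post-deployment", na=False)]
discrimination = post_deployment[
    post_deployment["Domain"].str.contains("Discrimination", na=False)
]

print(len(discrimination), "post-deployment discrimination risks")
```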

Key takeaways from the repository’s creation

  • There are a lot of possible risks associated with AI. The Repository makes it clear that a (very small) majority of risks (51%) are caused by decisions or actions “made” by AI systems. This indicates a significant challenge in controlling AI behaviour post-deployment, even if humans believe they’ve done everything right.

  • We need to monitor deployed AIs better. With 65% of risks emerging after AI systems are deployed, it’s evident that existing monitoring processes are inadequate. This highlights the importance of continuous oversight to address issues that only become apparent when AI is in real-world use. Strong processes are the cornerstone of the corporate world. Let’s make processes great again.

  • We have got to think more about the impact of the technologies we deploy into the world. Many of the risks identified (37%) are (allegedly) unintentional, stemming from design flaws or limitations in training data. This serves as a reminder that even well-intentioned teams can cause harm if they do not properly think about what they’re doing and the associated impacts. Hire humanities majors, y’all.

  • Socioeconomic impacts of AI are likely to be more profound than we think (unless you’re already a doomer). The study/repository emphasizes the considerable socioeconomic effects of AI, such as job displacement and power centralization. These risks were noted in 73% of documents, highlighting the need for policies that mitigate economic and social disruption.

  • We need to get better at seeing the full picture, as we have blind spots today. Some risks, like “AI welfare and rights” and “pollution of the information ecosystem”, are significantly underrepresented in discussions. These areas need more research attention to ensure a comprehensive understanding of AI’s potential harms.

Proposed solutions to address AI risks

Though the repository is light on recommendations (to be fair, that’s not its job), I do have some ideas for actions that could help with the issues highlighted.

Force pre-deployment audits (regulation required!)

Since quite a lot of risks occur before AI is deployed, a thorough, mandatory audit before deployment could catch design flaws or biases early. Pre-deployment audits have long been put forward as a necessity to significantly reduce risks by ensuring that AI systems meet ethical standards before they are released into the wild. Regulators should demand that AI developers implement a standardized pre-deployment audit focusing on human oversight and testing for bias. That standardization, as always, is easier said than done. The EU’s AI Act is a good start, though.

Mandate transparency reports

The “lack of transparency” mentioned in the paper could be addressed by requiring transparency reports for every major AI model. Probably hard to implement across the board, but possible for large companies?

Transparency reports would increase a) accountability and b) trust in AI systems, by providing insights into data use and decision-making processes. These reports could include detailed information about data sources, model training processes, and risk considerations during development. Mandate them and enforce heavy penalties for non-compliance.
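
As a rough illustration of what a minimal report might capture, here is a sketch of a report skeleton. Every field name below is my own suggestion, not a format defined by the paper or any regulator.

```python
import json

# Hypothetical transparency-report skeleton; all fields and values are
# illustrative placeholders, not a mandated format.
transparency_report = {
    "model_name": "example-model-v1",  # placeholder name
    "data_sources": ["licensed corpus X", "filtered web crawl Y"],
    "training_process": "fine-tuned from a base model; compute and evaluation details disclosed",
    "known_limitations": ["weaker performance on low-resource languages"],
    "risk_considerations": ["bias evaluation results", "misuse scenarios reviewed"],
    "contact": "responsible-ai@example.com",  # fictional address
}

print(json.dumps(transparency_report, indent=2))
```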

Encourage risk reassessment cycles

Given that 65% of risks occur post-deployment, AI systems should undergo periodic risk reassessments. The dynamic nature of AI systems — otherwise you just have Spicy If-Then statements — means that periodic audits can help catch evolving risks, such as biases that emerge with new data inputs. Companies should be… forced? Nudged? Lightly encouraged? to submit updated risk evaluations, much like environmental impact assessments, especially if significant updates are made to an AI system. This would help maintain safety and performance standards over time.

Protect marginalized groups

“Discrimination and toxicity” makes up a substantial part of AI risk. It’s also one of the biggest worries about the technology for many communities. Regulatory frameworks must ensure that AI systems undergo bias testing specifically aimed at preventing harm to marginalized groups. Engaging these groups in the evaluation process brings in broader, socially aware perspectives and helps prevent the biases (in the code, the data, or both) that predominantly affect underrepresented communities.
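
What might such a bias test look like in practice? One common starting point (not prescribed by the paper) is comparing outcome rates across groups before release. A minimal sketch, assuming a binary model output and a column identifying group membership; the column names, toy data, and 10% threshold are assumptions for illustration only:

```python
import pandas as pd

# Toy pre-release check: outcome-rate gap between two groups.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 0, 0, 1],
})

rates = results.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
if gap > 0.10:  # illustrative threshold, not a regulatory standard
    print(f"Warning: approval-rate gap of {gap:.0%} across groups -- investigate before release.")
```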

Limitations of the AI risk repository

While the paper is incredibly comprehensive, it does have issues.

Firstly, not a single one of the documents reviewed by the team covered all 23 subdomains of AI risk, which means the Repository itself may not be exhaustive (lol). So there’s a risk that some emerging concerns may still be left out.

Moreover, some definitions remain somewhat vague or open to interpretation, especially under categories like “Other” for causal factors, which could reduce clarity and impede effective risk management.

It’s only a step forward

The AI Risk Repository is an ambitious step towards creating a more organized approach to understanding AI risks. However, it’s just a starting point.

Without enforcement and real commitment from both industry leaders and policymakers, this work could end up as just another dusty PDF in the “should read” pile. But let’s hope this doesn’t happen. With clearer rules, mandatory audits, and a focus on transparency, maybe, just maybe, we can make AI safer for everyone.

Good luck out there.
