[Cover image: The Choice Between Virtue and Vice, depicting man's choice and the eternal dilemma in historical research]

A Reality Check on McKinsey's AI Bias Matrix

McKinsey addresses algorithmic bias in AI neatly and systematically but overlooks inherent philosophical paradoxes and complexity dynamics. A multidisciplinary, agile model that incorporates ethical pluralism and continuous adaptation better suits today's AI landscape. Revisiting older frameworks is essential as societal norms shift and new regulations emerge.

McKinsey’s Framework on Algorithmic Bias

McKinsey & Company's original article proposes a beautifully and meticulously designed, four-step framework that aims to mitigate algorithmic bias within machine learning systems, particularly for business applications. The framework is thorough, explicitly delineating each step involved:

1. Identifying biases. Domain experts, ethicists, and data scientists team up to identify existing biases in the data set and in the algorithmic model. The biases could range from overt ones like gender or racial bias to subtle, systemic biases like those related to income inequality or access to resources.

2. Quantifying impact. The team then employs statistical methods to quantify the extent to which these biases could skew the algorithm’s decision-making capabilities. For example, if the algorithm is designed for credit scoring, what would be the impact of an income-related bias on creditworthiness assessments? (A minimal quantification sketch follows this list.)

3. Algorithm modification. Armed with this quantitative analysis, the team modifies the algorithm. Interventions range from re-weighting variables in the training data to rebuilding the model entirely, and the process is iterative, often requiring multiple rounds of modification and testing. (A re-weighting sketch follows the next paragraph.)

4. Ongoing monitoring. The framework concludes by emphasizing the necessity for continuous monitoring. As social norms and data sets evolve, new biases can emerge, necessitating an ongoing commitment to scrutiny and adjustment.
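
As a concrete illustration of step 2, the sketch below computes two common fairness metrics, the statistical parity difference and the disparate impact ratio, for a hypothetical credit-approval decision. The data, variable names, and the often-cited 0.8 threshold are illustrative assumptions, not part of McKinsey's article; choosing which metric to use is itself one of the contested judgments discussed later.

```python
# A minimal sketch of step 2 (quantifying impact), assuming a hypothetical
# credit model whose binary approval decisions and a binary protected
# attribute are available as arrays. Metric names follow common fairness
# literature, not McKinsey's article.
import numpy as np

def statistical_parity_difference(approved, protected):
    """Approval-rate difference: protected group minus reference group."""
    approved = np.asarray(approved, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return approved[protected].mean() - approved[~protected].mean()

def disparate_impact_ratio(approved, protected):
    """Approval-rate ratio; values below roughly 0.8 are commonly flagged."""
    approved = np.asarray(approved, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return approved[protected].mean() / approved[~protected].mean()

# Illustrative decisions from a hypothetical credit model: 1 = approved.
approved  = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)

print(statistical_parity_difference(approved, protected))  # -0.4
print(disparate_impact_ratio(approved, protected))         # 0.5
```

A ratio well below 1 would flag the income-related bias for the modification step.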

The framework thus aims to provide businesses with a structured and repeatable process for identifying and mitigating algorithmic biases, attempting to operationalize what could otherwise be an abstract ethical mandate.
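
To make step 3 (and the monitoring loop of step 4) less abstract, here is a minimal sketch of one widely used intervention: re-weighting training examples so that the protected attribute and the historical label look statistically independent (the "reweighing" scheme of Kamiran & Calders), then re-checking the approval gap after retraining. The synthetic data, the logistic-regression model, and the use of scikit-learn are assumptions made for illustration; McKinsey's article does not prescribe a specific technique.

```python
# A sketch of steps 3-4 under stated assumptions: synthetic credit data with a
# protected attribute that leaks into historical labels, re-weighting per
# Kamiran & Calders, and a re-check of the approval gap after retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 2000
protected = rng.integers(0, 2, n)                       # 1 = protected group
income = rng.normal(50 + 10 * (1 - protected), 10, n)   # group-correlated feature
labels = (income + rng.normal(0, 5, n) > 55).astype(int)  # biased historical outcomes
X = income.reshape(-1, 1)

def reweighing_weights(group, y):
    """w(g, l) = P(group=g) * P(label=l) / P(group=g, label=l)."""
    w = np.ones_like(y, dtype=float)
    for g in (0, 1):
        for label in (0, 1):
            mask = (group == g) & (y == label)
            if mask.any():
                w[mask] = (group == g).mean() * (y == label).mean() / mask.mean()
    return w

def approval_gap(model, X, group):
    """Difference in predicted approval rates (protected minus reference)."""
    preds = model.predict(X)
    return preds[group == 1].mean() - preds[group == 0].mean()

baseline = LogisticRegression(max_iter=1000).fit(X, labels)
weights = reweighing_weights(protected, labels)
reweighted = LogisticRegression(max_iter=1000).fit(X, labels, sample_weight=weights)

# Step 4 in miniature: keep comparing the gap as data and models evolve.
print("approval gap before re-weighting:", round(approval_gap(baseline, X, protected), 3))
print("approval gap after re-weighting: ", round(approval_gap(reweighted, X, protected), 3))
```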

The Philosophical Dimensions: Meet the Critical Paradox

While McKinsey's framework is a noble effort, it seems to overlook the paradoxes inherent in algorithmic bias, paradoxes deeply rooted in philosophical thought. For instance, Protagoras’ dictum, as relayed by Plato, that "Man is the measure of all things" highlights the subjectivity inherent in any attempt to define or measure bias.

The Ship of Theseus paradox provokes questions about identity and transformation. If an algorithm is continually tweaked to remove biases, can it maintain its original identity? The question becomes especially pressing when we consider that each modification may introduce new biases of its own.

Foucauldian Power Dynamics bring to light the power structures embedded in the act of identifying and correcting bias. The authority to define what constitutes bias effectively shapes societal norms, calling into question the objective neutrality of the entire exercise.

Nietzsche's Perspectivism complicates the issue further. According to Nietzsche, all truths are subject to individual perspectives. Therefore, the definition of what constitutes a bias is not universal but contingent on the individual or societal lens through which it is viewed.

The paradox deepens when we consider Sartre's Existentialism, which argues that existence precedes essence. If we apply this to algorithms, the essence of an algorithm (its intended unbiased state) is not predefined but emerges through its existence (its interaction with the world and its continuous modification). This poses an existential dilemma: Can an algorithm ever attain an 'unbiased' essence?

Complexity Theory and the Estuarine Model: Practical Implications for Organizational Decision-Making

Dave Snowden's work on complexity theory provides a robust conceptual framework for exploring this subject further. His Cynefin framework has been widely acknowledged, but his company’s more recent Estuarine model adds immense value to our understanding. The Estuarine model asserts that ordered (simple and complicated) and unordered (complex and chaotic) systems co-exist and interact, especially in organizational settings.

In practical terms, the Estuarine model offers a wiser approach to organizational decision-making. For example, in a business environment, a 'simple' system like payroll management might interact with a 'complex' system like employee satisfaction in unpredictable ways. Similarly, algorithmic decision-making in AI is far from a linear or isolated process; it exists in a complex adaptive system where multiple variables interact in unforeseeable patterns.

This complexity is particularly evident in AI systems that are deployed in dynamic environments. For instance, an AI system used for supply chain management in a global corporation would have to adapt to myriad variables - economic fluctuations, geopolitical events, environmental factors, and more. The Estuarine model advocates for a decision-making process that is agile, continually adapting to emerging patterns rather than adhering to a predetermined, linear plan (and yes, the latter seems to include McKinsey’s anti-bias machine learning framework).

Tentative Enhancements to McKinsey’s Framework

Given these philosophical quandaries and complexity-oriented perspectives, a few amendments could perhaps enrich McKinsey’s original framework:

1. Ethical pluralism

Broadening the panel to include ethicists from various philosophical/civilizational traditions could offer a more comprehensive understanding of bias. This diversity would allow for a nuanced negotiation of the complexities inherent in defining and measuring bias. (Read ‘Justification of Galston’s liberal pluralism’ by Golam Azam.)

2. Interdisciplinary education

Encouraging a curriculum for data scientists and engineers that includes the humanities and social sciences could offer them a more holistic toolset for navigating the ethical complexities of their work. (Read ‘The New Education: How to Revolutionize the University to Prepare Students for a World in Flux’ by Cathy N. Davidson.)

3. Ethical sandboxing

Before full-scale deployment, the algorithm could be run in a controlled environment where its decisions have no real-world impact but can be studied for potential biases. This is similar to a philosophical 'thought experiment' and can help identify unforeseen ethical dilemmas; a minimal shadow-mode sketch follows below. (Watch ‘Making an Ethical Machine’ by Alan Winfield.)
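
One way to read "ethical sandboxing" in engineering terms is shadow-mode evaluation: the candidate model scores every case and its decisions are logged for audit, while only the incumbent system's decisions take effect. The sketch below is a minimal illustration under that assumption; the class, the toy decision rules, and the disparate-impact check are hypothetical and would need to reflect a real deployment pipeline.

```python
# A minimal sketch of "ethical sandboxing" as shadow-mode evaluation. All
# names (ShadowSandbox, the decision rules, the income feature) are
# illustrative assumptions, not part of any cited framework.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ShadowSandbox:
    """Runs a candidate model in shadow: logged and audited, never acted on."""
    incumbent: Callable[[dict], bool]   # decision rule currently in production
    candidate: Callable[[dict], bool]   # model under ethical review
    log: list = field(default_factory=list)

    def decide(self, case: dict) -> bool:
        live = self.incumbent(case)
        shadow = self.candidate(case)   # recorded for study, never acted on
        self.log.append({"group": case["group"], "shadow": shadow})
        return live                     # only the incumbent has real-world effect

    def disparate_impact(self) -> float:
        """Approval-rate ratio of the candidate's shadow decisions (group A over B)."""
        def rate(group):
            decisions = [e["shadow"] for e in self.log if e["group"] == group]
            return sum(decisions) / max(1, len(decisions))
        return rate("A") / max(rate("B"), 1e-9)

# Hypothetical decision rules over a toy 'income' feature.
sandbox = ShadowSandbox(
    incumbent=lambda case: case["income"] > 55,
    candidate=lambda case: case["income"] > 50,
)
for case in [{"group": "A", "income": 48}, {"group": "A", "income": 52},
             {"group": "B", "income": 58}, {"group": "B", "income": 62}]:
    sandbox.decide(case)

print("candidate disparate impact in shadow:", sandbox.disparate_impact())  # 0.5
```

Only if the shadow log passes the agreed ethical review would the candidate be promoted to production.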

Why Is an Article on ML Written in 2017 Being Discussed in 2023?

The relevance of McKinsey's 2017 framework on algorithmic bias in today's 2023 landscape may initially seem counterintuitive given the rapid advancements in AI and ethics. However, I find compelling reasons for its current discussion. 

For starters, the article serves as a historical yardstick, allowing us to assess whether advancements in AI ethics have been substantive or merely superficial. The social landscape has also changed considerably since 2017, particularly concerning our collective understanding of gender diversity, racial equality, and economic inclusion. The framework's older assumptions may now be outdated, posing a risk that outmoded societal biases could be encoded into new algorithms, even as McKinsey's magazine remains a must-read for management inspiration.

As AI technology gains traction in emerging markets, these countries may default to established but potentially flawed frameworks, amplifying the reach and impact of any inherent biases; after all, linearity and simplicity are appealing. Lastly, the evolving legal landscape, represented by new regulations such as the European Union's Artificial Intelligence Act, makes the modern evaluation of older frameworks a legal imperative, not just an ethical one.

Thus, knowing and discussing ideas that are not aging well is a critical step toward the wise and responsible deployment of AI today.
