A Reality Check on McKinsey's AI Bias Matrix
McKinsey addresses algorithmic bias in AI in a neat, structured way but overlooks the philosophical paradoxes and complexity dynamics inherent in the problem. A multidisciplinary, agile model that incorporates ethical pluralism and continuous adaptation better suits today's AI landscape. Shifting societal norms and emerging regulations make revisiting older frameworks essential.
McKinsey’s Framework on Algorithmic Bias
McKinsey & Company's original article proposes a beautifully and meticulously designed, four-step framework that aims to mitigate algorithmic bias within machine learning systems, particularly for business applications. The framework is thorough, explicitly delineating each step involved:
1. Identifying biases. Domain experts, ethicists, and data scientists team up to identify existing biases in the data set and in the algorithmic model. The biases could range from overt ones like gender or racial bias to subtle, systemic biases like those related to income inequality or access to resources.
2. Quantifying impact. The team then employs statistical methods to quantify the extent to which these biases could skew the algorithm's decision-making. For example, if the algorithm is designed for credit scoring, what would be the impact of an income-related bias on creditworthiness assessments? (Minimal sketches of such a measurement, and of a modification tactic, appear after this list.)
3. Algorithm modification. Armed with this quantitative analysis, the team modifies the algorithm. This could range from re-weighting variables in the training data to rebuilding the algorithmic model outright. It is an iterative process that may require multiple rounds of modification and testing.
4. Ongoing monitoring. The framework concludes by emphasizing the necessity for continuous monitoring. As social norms and data sets evolve, new biases can emerge, necessitating an ongoing commitment to scrutiny and adjustment.
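To make step 2 concrete, here is a minimal sketch under illustrative assumptions: a hypothetical credit-scoring model whose decisions are compared across income groups using a demographic-parity gap. The data, group names, and choice of metric are my own illustrations, not anything the McKinsey article prescribes.

```python
# Minimal sketch of step 2 (quantifying impact) under illustrative
# assumptions: a hypothetical credit-scoring model whose decisions we
# compare across income groups via a demographic-parity gap.

from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Return per-group approval rates and the largest pairwise gap."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical model decisions for two income groups.
decisions = [
    {"group": "low_income", "approved": True},
    {"group": "low_income", "approved": False},
    {"group": "low_income", "approved": False},
    {"group": "high_income", "approved": True},
    {"group": "high_income", "approved": True},
    {"group": "high_income", "approved": False},
]

rates, gap = demographic_parity_gap(decisions)
print(rates)               # approval rate per group
print(f"gap = {gap:.2f}")  # 0.33 here; a large gap flags income-related skew
```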
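And to make step 3 equally concrete, the sketch below applies one well-known re-weighting scheme, Kamiran and Calders' reweighing; this particular scheme is my choice of illustration rather than something the article specifies. Each training example is weighted by the ratio of its (group, label) pair's expected to observed frequency, so under-represented combinations count more during training.

```python
# Sketch of step 3 (algorithm modification) via re-weighting the training
# data, using the Kamiran-Calders reweighing scheme as one illustrative
# option: weight = expected / observed frequency of each (group, label) pair.

from collections import Counter

def reweighing_weights(groups, labels):
    n = len(groups)
    group_freq = Counter(groups)
    label_freq = Counter(labels)
    pair_freq = Counter(zip(groups, labels))
    return [
        (group_freq[g] * label_freq[y]) / (n * pair_freq[(g, y)])
        for g, y in zip(groups, labels)
    ]

groups = ["low", "low", "low", "high", "high", "high"]
labels = [1, 0, 0, 1, 1, 0]   # 1 = approved in the historical data
weights = reweighing_weights(groups, labels)
print([round(w, 2) for w in weights])  # [1.5, 0.75, 0.75, 0.75, 0.75, 1.5]

# These can be passed as sample weights to any learner that accepts them
# (e.g. scikit-learn's fit(X, y, sample_weight=weights)); afterwards the
# step-2 gap should be re-measured to confirm the modification helped.
```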
The framework thus aims to provide businesses with a structured and repeatable process for identifying and mitigating algorithmic biases, attempting to operationalize what could otherwise be an abstract ethical mandate.
The Philosophical Dimensions: Meet the Critical Paradoxes
While McKinsey's framework is a noble effort, it seems to overlook the inherent paradoxes that algorithmic bias presents, paradoxes deeply rooted in philosophical thought. For instance, Plato's rendering of Protagoras' dictum that "Man is the measure of all things" highlights the subjectivity inherent in any attempt to define or measure bias.
The Ship of Theseus paradox provokes questions about identity and transformation. If an algorithm is continually tweaked to remove biases, can it maintain its original identity? This question becomes especially pressing when we consider the possibility that each modification introduces new biases.
Foucauldian Power Dynamics bring to light the power structures embedded in the act of identifying and correcting bias. The authority to define what constitutes bias effectively shapes societal norms, calling into question the objective neutrality of the entire exercise.
Nietzsche's Perspectivism complicates the issue further. According to Nietzsche, all truths are subject to individual perspectives. Therefore, the definition of what constitutes a bias is not universal but contingent on the individual or societal lens through which it is viewed.
The paradox deepens when we consider Sartre's Existentialism, which argues that existence precedes essence. If we apply this to algorithms, the essence of an algorithm (its intended unbiased state) is not predefined but emerges through its existence (its interaction with the world and its continuous modification). This poses an existential dilemma: Can an algorithm ever attain an 'unbiased' essence?
Complexity Theory and the Estuarine Model: Practical Implications for Organizational Decision-Making
Dave Snowden's work on complexity theory provides a robust conceptual framework for exploring this subject further. His Cynefin framework has been widely acknowledged, but his company's more recent Estuarine model adds immense value to our understanding. The Estuarine model asserts that both ordered (simple and complicated) and unordered (complex and chaotic) systems co-exist and interact, especially in organizational settings.
In practical terms, the Estuarine model offers a more adaptive approach to organizational decision-making. For example, in a business environment, a 'simple' system like payroll management might interact with a 'complex' system like employee satisfaction in unpredictable ways. Similarly, algorithmic decision-making in AI is far from a linear or isolated process; it exists in a complex adaptive system where multiple variables interact in unforeseeable patterns.
This complexity is particularly evident in AI systems that are deployed in dynamic environments. For instance, an AI system used for supply chain management in a global corporation would have to adapt to myriad variables: economic fluctuations, geopolitical events, environmental factors, and more. The Estuarine model advocates a decision-making process that is agile, continually adapting to emerging patterns rather than adhering to a predetermined, linear plan (and yes, the latter seems to include McKinsey's anti-bias machine learning framework).
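Reading McKinsey's step 4 through this lens suggests monitoring that adapts continuously rather than auditing against a fixed plan. Here is a small sketch of what that might look like: recomputing the demographic-parity gap from earlier over a rolling window of live decisions and flagging drift as it emerges. The window size and alert threshold are arbitrary, illustrative choices.

```python
# Sketch of agile, ongoing bias monitoring: recompute the fairness gap over
# a rolling window of recent decisions and flag drift as it emerges.
# Window size and threshold are illustrative assumptions.

from collections import deque

class BiasDriftMonitor:
    def __init__(self, window=500, gap_threshold=0.10):
        self.window = deque(maxlen=window)  # keeps only recent decisions
        self.gap_threshold = gap_threshold

    def record(self, group, approved):
        self.window.append((group, approved))
        gap = self.current_gap()
        if gap is not None and gap > self.gap_threshold:
            # A real system would page a reviewer rather than print.
            print(f"ALERT: demographic-parity gap {gap:.2f} over threshold")

    def current_gap(self):
        totals, positives = {}, {}
        for g, a in self.window:
            totals[g] = totals.get(g, 0) + 1
            positives[g] = positives.get(g, 0) + int(a)
        if len(totals) < 2:
            return None  # need at least two groups to compare
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

monitor = BiasDriftMonitor(window=200, gap_threshold=0.15)
monitor.record("low_income", False)
monitor.record("high_income", True)  # alerts here: two samples, gap 1.00
```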
Tentative Enhancements to McKinsey’s Framework
Given these philosophical quandaries and complexity-oriented perspectives, a few amendments could perhaps enrich McKinsey’s original framework:
1. Ethical pluralism
Broadening the panel to include ethicists from various philosophical/civilizational traditions could offer a more comprehensive understanding of bias. This diversity would allow for a nuanced negotiation of the complexities inherent in defining and measuring bias. (Read 'Justification of Galston's Liberal Pluralism' by Golam Azam.)
2. Interdisciplinary education
Encouraging a curriculum for data scientists and engineers that includes humanities and social sciences could offer them a more holistic toolset for navigating the ethical complexities of their work. (Read 'The New Education: How to Revolutionize the University to Prepare Students for a World in Flux' by Cathy N. Davidson.)
3. Ethical sandboxing
Before full-scale deployment, the algorithm runs in a controlled environment where its decisions have no real-world impact but can be studied for potential biases. This is akin to a philosophical 'thought experiment' and can help identify unforeseen ethical dilemmas; a shadow-mode sketch follows below. (Watch 'Making an Ethical Machine' by Alan Winfield.)
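As a rough illustration of what such a sandbox might look like in code, the sketch below wraps a candidate model in shadow mode: it scores real inputs, logs the would-be decisions for later bias review, and deliberately takes no action. All names and the logging format are hypothetical assumptions on my part.

```python
# Sketch of an "ethical sandbox" in shadow mode: the candidate model scores
# real inputs, but its decisions are only logged for later bias review and
# never acted upon. Names and log format are hypothetical assumptions.

import json
from datetime import datetime, timezone

class EthicalSandbox:
    def __init__(self, candidate_model, log_path="shadow_decisions.jsonl"):
        self.model = candidate_model
        self.log_path = log_path

    def observe(self, applicant):
        """Score one applicant in shadow mode: record, never act."""
        decision = self.model(applicant)
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "group": applicant.get("group"),
            "decision": decision,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return None  # deliberately no real-world effect

# Wrap a hypothetical scoring rule, replay historical traffic through it,
# then run step-2 style metrics over the log before going live.
sandbox = EthicalSandbox(lambda a: a["income"] > 50_000)
sandbox.observe({"group": "low_income", "income": 32_000})
```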
Why Is an Article on ML Written in 2017 Being Discussed in 2023?
The relevance of McKinsey's 2017 framework on algorithmic bias to the 2023 landscape may initially seem counterintuitive given the rapid advancements in AI and ethics. However, I find compelling reasons to discuss it now.
For starters, the article serves as a historical yardstick, allowing us to assess whether advancements in the field of AI ethics have been substantive or merely superficial. The social landscape has also changed considerably since 2017, particularly concerning our collective understanding of gender diversity, racial equality, and economic inclusion. The framework's older assumptions may now be outdated, posing a risk that outmoded societal biases could be encoded into new algorithms. (McKinsey's magazine nonetheless remains a must-read for management inspiration.)
As AI technology gains traction in emerging markets, these countries may default to established but potentially flawed frameworks, amplifying the reach and impact of any inherent biases; linearity and simplicity are appealing. Lastly, the evolving legal landscape, represented by new regulations like the European Union's Artificial Intelligence Act, makes the modern evaluation of older frameworks a legal imperative, not just an ethical one.
Thus, knowing and discussing ideas that are not aging well is a critical step toward the wise and responsible deployment of AI today.