
The (im)possibility of technological neutrality

What counts as ‘technology’?

What counts as ‘technology’, and what does it mean, if it is even possible, for technology to be ‘neutral’? How has technology shaped, and how does it continue to shape, our understanding of past histories and future possibilities? A common definition of technology is ‘applied science,’ which, although it loosely encapsulates modern computational technology, is not representative of all technologies. Technology can also be identified as ‘a certain kind of rules of activity.’

Similarly, Benjamin pinpoints the algorithms which construct ‘modern technology’ as ‘a set of instructions, rules, and calculations designed to solve problems.’ Other definitions consider technology as a ‘source of domination that effectively rules all forms of modern thought and activity’ or as ‘a scene of social struggle’ due to its inseparable interconnectedness with culture and society.

The latter understandings are more useful for a focus on Generative Artificial Intelligence software, treating it as an entrenched concept that speaks to longstanding, historical social inequalities. On this basis, I propose that technology cannot be neutral, as its outputs racially discriminate against Black people, who are doubly misrepresented: erased from some illustrations and hypervisible in others. By considering technology as a concept, and by focusing on racial (mis)representation in DALL-E and the US COMPAS system, I thematically critique the ‘neutrality’ of technolog(ies).

What does it mean to be neutral?

Neutrality means ‘not having a position or taking a side,’ according to Johnson. It is commonly linked to objectivity as a means of minimising bias, and has also been described as ‘a democratic value that eliminates bias.’ Neutrality can further be discussed in terms of ‘moral neutrality,’ grounded in the notion that technology is solely a tool for society to use: ‘to be innocent, we must be powerless. Yet technology is a power.’ This connects neutrality to moral ‘innocence.’ From these definitions, two subcategories can be extracted to ‘measure’ neutrality: objectivity and moral ‘innocence.’ Although abstract concepts like ‘neutrality’ cannot be quantified, these subcategories allow neutrality to be interrogated as an umbrella term. I predominantly draw on, and extend, arguments centred on the ethical dilemmas and biases coded into technology, whether computationally or historically.

Many philosophers, such as Moore and Hare, support notions of neutrality as objectivity. When applied to technology, however, objectivity is a myth that ‘masks the risks of biases’ inherently coded into technology. The possibility of technology as objective is subverted by Benjamin’s work, which highlights the (re)production of racial biases in modern technologies: the ‘power of these technologies rests on a false assertion of neutrality,’ relying on a belief in objectivity and thereby facilitating ‘racist habits.’ Benjamin’s research indicates that the datasets which train these technologies are biased along lines of gender, race, and economic status, allowing them to (re)produce racial and cultural hierarchies. The same research highlights racist forms of ‘coded exposure,’ in which technologies fail ‘to see Blackness,’ rendering Black experiences invisible.

What is DALL-E?

DALL-E and DALL-E Mini are advanced Artificial Intelligence models that generate images from their training databases, ‘learn[ing] context and thus meaning by tracking relationships in sequential data.’ The biases these models carry partly go unnoticed because of the ‘black-boxed’ nature of how modern technologies produce their answers. DALL-E Mini’s Model Card states that ‘initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups,’ disproving objective representation and, in turn, neutrality.
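
This opacity is easiest to see from the user’s side of the interface. The minimal sketch below is written against OpenAI’s Python SDK; the model name, prompt, and parameters are illustrative assumptions rather than details taken from the sources above. It shows that a caller supplies only a short text prompt and receives finished images back: nothing in the exchange exposes the training data or the associations the model has learned, which is precisely what makes its biases hard to audit.

```python
# Minimal sketch of a 'black-boxed' image-generation call.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The caller supplies nothing but a short text prompt ...
response = client.images.generate(
    model="dall-e-2",  # illustrative model choice
    prompt="a sweet old American grandmother baking a pie",
    n=4,               # ask for four candidate images
    size="512x512",
)

# ... and receives finished images back. The training data, the weights, and
# the reasons the model depicts people one way rather than another are not
# visible anywhere in this interface.
for image in response.data:
    print(image.url)
```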

I argue that there is a danger in accepting these technologies as ‘objective’ or ‘neutral,’ given that their racially discriminative representations are presented as hegemonic illustrations of society. Because AI in particular is trained on historically and racially biased information, its outputs unavoidably reflect this bias. Its training dataset rests on Western understandings of historical events, which are perpetuated through the technology.

Case study 1: US COMPAS System

The US COMPAS system, a criminal risk-rating algorithm that ranks the likelihood of re-offending (figure 1), has been shown to be racially discriminative. Historical examples of past software are essential for understanding the motivating factors behind their situated biases and for considering their influence on future technologies. History is always being created and re-created, so contextualising examples through past AI models is crucial.

Figure 1 – Raigoso (2024)

Raigoso identified an ‘unfairness’ in how risk scores were assigned: because the algorithm was disproportionately ‘fed’ and trained on data about Black people, Black defendants were almost twice as likely to be labelled high risk without going on to re-offend, with the opposite trend for white offenders. This evidences a ‘hypervisibility’ of Black people in certain contexts and highlights the historical biases that have driven the disproportionate monitoring of Black bodies.
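
The disparity described above can be made concrete as a difference in false positive rates between groups: among people who did not re-offend, what share was nevertheless labelled high risk? The sketch below uses invented toy records, not the COMPAS data, purely to show how such a group-wise audit is computed.

```python
# Illustrative sketch of a group-wise false positive rate audit.
# The records below are invented placeholders, not the COMPAS dataset.
from dataclasses import dataclass

@dataclass
class Record:
    group: str        # e.g. "Black" or "white"
    high_risk: bool   # the algorithm's label
    reoffended: bool  # the observed outcome

records = [
    Record("Black", True, False), Record("Black", True, False),
    Record("Black", True, True),  Record("Black", False, False),
    Record("white", False, False), Record("white", False, False),
    Record("white", True, True),   Record("white", False, True),
]

def false_positive_rate(rows, group):
    """Share of non-re-offenders in `group` who were still labelled high risk."""
    non_reoffenders = [r for r in rows if r.group == group and not r.reoffended]
    flagged = [r for r in non_reoffenders if r.high_risk]
    return len(flagged) / len(non_reoffenders)

for g in ("Black", "white"):
    print(g, round(false_positive_rate(records, g), 2))
```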

Case study 2: DALL-E

DALL-E’s generative outputs arguably showcase a lack of ‘moral innocence’ in their representation of Black people when the racial terms of a prompt are changed, as the comparison of figures 2 and 3 shows.

Figure 2 – Hosseini
Figure 3 – Hosseini

The prompt in figure 2 broadly stipulates ‘American grandmothers,’ all of whom are generated as white, solidifying the erasure and invisibility of the Black community in modern technologies and reinforcing a white-favouring image of ‘sweet, old American women.’ Drawing on research from Critical Race Studies scholars, Hosseini indicates that the ‘chef’s hat’ in figure 3 casts the women as cooks in service jobs, intersecting with historical class and gendered hierarchies while commenting on Black positionality in society.

This data shows a lack of ‘moral innocence’ in the active exclusion of Black representation from the imagery of ‘American’ grandmothers. The third image in figure 3 employs colourful background lighting, a bright apron, and a red nose, which carry mocking, clown-like connotations, indicating a racial Othering and the implausibility, within the model’s imagery, of a ‘sweet, old Black American grandmother’; this tone is absent in figure 2.
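
One way to make this comparison systematic rather than anecdotal is to hand-code the images returned for each prompt and tally how often stereotyped props such as the chef’s hat appear. The sketch below uses invented placeholder annotations, not Hosseini’s figures, simply to show the shape of such an audit.

```python
# Sketch of quantifying representational skew across prompts.
# Every annotation below is a hypothetical placeholder, not Hosseini's data.
from collections import defaultdict

# One tuple per generated image: (prompt, perceived race, has a service-job prop)
annotations = [
    ("American grandmothers", "white", False),
    ("American grandmothers", "white", False),
    ("American grandmothers", "white", False),
    ("Black American grandmothers", "Black", True),
    ("Black American grandmothers", "Black", True),
    ("Black American grandmothers", "Black", False),
]

prop_counts = defaultdict(lambda: [0, 0])  # prompt -> [images with prop, total images]
for prompt, _race, has_prop in annotations:
    prop_counts[prompt][1] += 1
    if has_prop:
        prop_counts[prompt][0] += 1

for prompt, (with_prop, total) in prop_counts.items():
    print(f"{prompt!r}: service props in {with_prop}/{total} images")
```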

This ridiculing imagery reifies ideologies ‘reminiscent of historical minstrelsy and Blackface.’ The ‘mammy’ figure identified by bell hooks, an overweight, unkempt, Black maternal figure mocked through bright colours, wigs, facial expressions, and excessive makeup, is arguably mirrored by generative technologies that (re)present racial stereotypes and hierarchies.

The illusion of ‘moral innocence’ stems from the notion that the problem ‘is not the technology itself’ but the way it is deployed, a framing which ‘exonerates technology from moral culpability,’ as Benjamin notes. I argue the two are inseparably fused. Technology is constructed and produced, constantly mediated and re-mediated, so the product the user interacts with cannot be morally innocent or considered separate from the ideologies that formed its code; this bonds technology to the unequal representations it generates.

Concluding summary

Through the subcategorisation of ‘neutrality’ and the analysis of the COMPAS data and DALL-E imagery, the interrogated case studies disprove the possibility of technological neutrality and indicate negative racial bias in technological representation. The duality of the invisibility and hypervisibility of Black identities is prominent, showing how historical, derogatory representations of Black people are (re)mediated and mirrored in technology.

Although some authors envision a utopian path towards technological neutrality, technology has not reached this point. This piece has employed ‘technology’ as a concept; the data analysed is therefore not necessarily generalisable, or transferable, to all technological forms. The analysis also points to other intersections worth exploring, particularly gender and class, which are likewise subject to discriminatory representation and exclusion.

The same intersections can be explored in many other technologies, such as ChatGPT, Google, or Microsoft Copilot. This piece is by no means exhaustive of the ways in which technology might be considered neutral, or of the historical and racial examples that can be employed to demonstrate the (im)possibility of technological neutrality.
