The (im)possibility of technological neutrality
What counts as ‘technology’?
What counts as ‘technology’, and what does it mean, if it is possible at all, for technology to be ‘neutral’? How has technology shaped, and how does it continue to shape, our understanding of past histories and future possibilities? A common definition of technology is ‘applied science,’ which, although it loosely encapsulates modern computational technology, is not representative of all technologies. Technology can also be identified as ‘a certain kind of rules of activity.’
Similarly, Benjamin pinpoints the algorithms that constitute ‘modern technology’ as ‘a set of instructions, rules, and calculations designed to solve problems.’ Other definitions consider technology a ‘source of domination that effectively rules all forms of modern thought and activity’ or ‘a scene of social struggle’ because of its inseparable interconnectedness with culture and society.
The latter understandings are more useful for an analysis of generative Artificial Intelligence software as an entrenched concept that comments on longstanding, historical social inequalities. On this basis, I propose that technology cannot be neutral because of its racially discriminatory outputs concerning Black people, who are doubly misrepresented: erased from some illustrations and hypervisible in others. By treating technology as a concept, and focusing on racial (mis)representation in DALL-E and the US COMPAS system, I thematically critique the ‘neutrality’ of technolog(ies).
What does it mean to be neutral?
Neutrality means ‘not having a position or taking a side,’ according to Johnson, and is commonly linked to objectivity as a means of minimising bias. It has also been described as ‘a democratic value that eliminates bias.’ Neutrality can further be discussed with regard to ‘moral neutrality,’ which is based on the notion that technology is solely a tool for society to use: ‘to be innocent, we must be powerless. Yet technology is a power.’ A further connection can thus be made from neutrality to moral ‘innocence.’ From these definitions, two subcategories can be extracted to ‘measure’ neutrality: objectivity and moral ‘innocence.’ Although abstract concepts like ‘neutrality’ cannot be quantified, the proposed subcategories make it possible to interrogate neutrality as an umbrella term. I predominantly take from, and extend, arguments centred on ethical dilemmas and biases within the coding of technology, whether computational or historical.
Many philosophers, such as Moore and Hare, support notions of neutrality as objectivity. Applied to technology, however, objectivity is a myth that ‘masks the risks of biases’ inherently coded into technology. The possibility of technology as objective is subverted by Benjamin’s work, which highlights the (re)production of racial biases in modern technologies, where the ‘power of these technologies rests on a false assertion of neutrality’ that relies on a belief in objectivity and facilitates ‘racist habits.’ Benjamin’s research indicates that the datasets on which technologies are trained are biased along gender, racial, and economic lines, allowing them to (re)produce racial and cultural hierarchies. This research also highlights racist forms of ‘coded exposure’ in which technologies fail ‘to see Blackness,’ rendering Black experiences invisible.
What is DALL-E?
DALL-E and DALL-E Mini are advanced Artificial Intelligence models that generate images from text prompts using an architecture that ‘learns context and thus meaning by tracking relationships in sequential data.’ DALL-E Mini’s Model Card states that ‘initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups,’ disproving objective representation and, in turn, neutrality. It must be noted that these biases partly go unnoticed because of the ‘black-boxed’ nature of how modern technologies produce their answers.
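To make the quoted description more concrete, the sketch below shows a minimal, toy version of the self-attention operation that underlies models of this kind. The token names, two-dimensional embeddings, and values are assumptions for illustration only; they are not drawn from DALL-E’s actual implementation.

```python
# A minimal, illustrative sketch of self-attention: each token 'attends' to
# every other token, which is how such models track relationships in
# sequential data. Toy numbers only; real models use learned weights.
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Return a context-weighted mixture of the values for each query token."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)          # pairwise relationship scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ values                           # contextualised representations

# Three hypothetical token embeddings (e.g. "American", "grandmother", "portrait")
tokens = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(scaled_dot_product_attention(tokens, tokens, tokens))
```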
I argue that there is a danger in accepting these technologies as ‘objective’ or ‘neutral,’ as indicated by their racially discriminatory representations being presented as hegemonic illustrations of society. Because AI in particular is trained on historically and racially biased information, its outputs unavoidably reflect those biases. Its training dataset is grounded in Western understandings of historical events, which the technology then perpetuates.
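A deliberately simplified sketch can illustrate this mechanism. The 90/10 split below is a hypothetical assumption, not a measurement of DALL-E’s training corpus; the point is only that a generative model with no corrective measures reproduces whatever skew its data contains.

```python
# Hypothetical illustration (not real DALL-E data) of how a skewed training
# set carries through to generated outputs.
import random

# Assumed, illustrative skew: 90% of images tagged "American grandmother"
# in the training corpus depict white women.
training_labels = ["white"] * 90 + ["Black"] * 10
random.seed(0)

# Absent corrective measures, sampling from the learned distribution
# simply reproduces the skew of the training data.
generated = [random.choice(training_labels) for _ in range(1000)]
print(f"white: {generated.count('white') / 10:.1f}%  "
      f"Black: {generated.count('Black') / 10:.1f}%")
```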
Case study 1: US COMPAS System
The US COMPAS system, a criminal risk-rating algorithm that ranks the likelihood of re-offending, as shown in figure 1, has been shown to be racially discriminatory. Historical examples of past software are essential for understanding the motivating factors behind their situated biases and for considering their influence on future technologies. History is always being created and re-created, so contextualising examples through past AI models is crucial.

Raigoso identified an ‘unfairness’ in how risk scores were assigned: the algorithms were disproportionately ‘fed’ and trained on data about Black people, and consequently Black defendants were almost twice as likely to be labelled high risk yet not re-offend, with the opposite trend for white offenders. This evidences a disproportionate ‘hypervisibility’ of Black people in certain contexts and highlights the historical biases that have shaped the disproportionate monitoring of Black bodies.
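The pattern Raigoso describes can be expressed as a difference in false positive rates: the share of people who did not re-offend but were nonetheless labelled high risk. The sketch below computes that rate per group over invented toy records; the figures are illustrative assumptions, not COMPAS data.

```python
# Hedged sketch of a group-wise false positive rate comparison.
# All records below are invented for illustration; they are not COMPAS data.
from dataclasses import dataclass

@dataclass
class Record:
    group: str          # racial group label in the dataset
    high_risk: bool     # algorithm's prediction
    reoffended: bool    # observed outcome

def false_positive_rate(records, group):
    """Share of non-re-offenders in `group` who were still labelled high risk."""
    non_reoffenders = [r for r in records if r.group == group and not r.reoffended]
    flagged = [r for r in non_reoffenders if r.high_risk]
    return len(flagged) / len(non_reoffenders)

# Toy records reproducing the reported pattern: roughly double the
# false positive rate for Black defendants compared with white defendants.
records = (
    [Record("Black", True, False)] * 45 + [Record("Black", False, False)] * 55 +
    [Record("white", True, False)] * 23 + [Record("white", False, False)] * 77
)
print(false_positive_rate(records, "Black"))  # 0.45
print(false_positive_rate(records, "white"))  # 0.23
```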
Case study 2: DALL-E
DALL-E’s generative capabilities arguably showcase a lack of ‘moral innocence’ in their representation of Black people when the racial descriptor in a prompt is changed, as a comparison of figures 2 and 3 shows.


The prompt in figure 2 broadly stipulates ‘American grandmothers,’ all of whom are generated as white, solidifying the erasure and invisibility of the Black community in modern technologies and reinforcing a white-favouring representation of ‘sweet, old American women.’ Drawing on research from Critical Race Studies scholars, Hosseini indicates that the ‘chef’s hat’ in figure 3 represents the women as cooks in service jobs, further intersecting with historical class and gendered hierarchies whilst commenting on Black positionality in society.
This data shows a lack of ‘moral innocence’ in the active exclusion of Black representation from the imagery of ‘American’ grandmothers. The third image in figure 3 employs colourful background lighting, a bright apron, and a red nose, which carry mocking, clown-like connotations, indicating a racial Othering and the implausibility of the notion of a ‘sweet, old Black American grandmother’; this tone is absent from figure 2.
This ridiculing imagery reifies ideologies ‘reminiscent of historical minstrelsy and Blackface.’ The ‘mammy’ figure identified by bell hooks, an overweight, unkempt, Black maternal figure mocked through bright colours, wigs, facial expressions, and excessive makeup, is arguably mirrored by generative technologies that (re)present racial stereotypes and hierarchies.
The illusion of ‘moral innocence’ stems from the notion that the problem ‘is not the technology itself’ but the way it is enforced, a framing which ‘exonerates technology from moral culpability,’ as Benjamin observes. I argue the two are inseparably fused. Technology is constructed and produced; it is constantly being mediated and re-mediated, so the product the user interacts with cannot be morally innocent or considered separate from the ideologies that formed its code, binding technology to the unequal representations it generates.
Concluding summary
Through the subcategorisation of ‘neutrality’ and the analysis of COMPAS data and DALL-E imagery, the interrogated case studies disprove the possibility of technological neutrality and indicate negative racial bias in technological representation. The duality of the invisibility and the hypervisibility of Black identities is prominent, showing how (re)mediated, derogatory representations of Blackness in technology mirror their historical antecedents.
Although some authors envision a utopian path towards technological neutrality, technology has not reached this point. This piece has employed ‘technology’ as a concept; therefore, the data analysed is not necessarily generalisable or transferable to other technological forms. The data also shows potential for the exploration of other intersections, particularly gender and class, which are likewise discriminatorily represented and/or excluded.
There are many other technologies in which the same intersectionalities can be explored, such as ChatGPT, Google, or Microsoft Copilot. This piece is by no means exhaustive of the possible considerations of technology as neutral, nor of the historical racial examples that can be employed to demonstrate the (im)possibility of technological neutrality.