AI Ethics

AI’s Role in Shaping Future Ethics: Who Teaches AI Right from Wrong?

Imagine a future where artificial intelligence (AI) determines who gets a job, who receives medical treatment first, or even who is likely to commit a crime. These are no longer hypothetical scenarios—AI systems are already making such decisions in hiring, healthcare, and law enforcement. But while AI can process vast amounts of data, it doesn’t have a moral compass of its own. It follows patterns, algorithms, and ethical guidelines that are shaped by humans.

This raises a crucial question: Who decides what AI considers “right” and “wrong”? Unlike humans, AI doesn’t develop ethics from lived experiences or personal values. Instead, it learns from data—data that reflects the biases, priorities, and ideologies of those who design and train it. Governments, corporations, and researchers all have different visions of ethical AI, and their decisions shape how AI interacts with society.

But can ethics truly be programmed? Do we want AI to follow a single global moral framework, or should it adapt to cultural differences? More importantly, if AI is already influencing critical aspects of our lives, do we have enough control over who teaches AI morality in the first place?

In this article, we’ll explore the growing role of AI in ethical decision-making, the hidden power struggle over its moral framework, and whether AI can ever develop a truly fair and universal sense of ethics.

AI’s Growing Role in Ethical Decision-Making

AI is no longer just a tool for automation—it is increasingly making decisions that have real-world ethical consequences. From hiring employees to determining who gets access to healthcare, AI systems are being used to evaluate human lives in ways that were once reserved for people. But unlike humans, AI doesn’t have intuition, empathy, or moral reasoning. It simply follows the patterns it learns from data, which makes its role in ethical decision-making both powerful and problematic.

Here are some key areas where AI is influencing ethical decisions:

1. AI in Hiring: Who Gets the Job?

Many companies now use AI-driven systems to scan resumes, rank candidates, and even conduct initial video interviews. While this makes hiring more efficient, it also raises ethical concerns:

• Bias in hiring: If an AI system is trained on past hiring data that favored men over women, it may unintentionally reject female candidates more often.

• Lack of transparency: Many AI hiring tools function as “black boxes,” meaning applicants never know why they were rejected.

For example, Amazon’s experimental hiring algorithm was trained on past hiring data that reflected a male-dominated workforce. As a result, the system downgraded resumes that included the word "women" (e.g., “women’s chess club”) and favored male candidates. Amazon scrapped the system after realizing it reinforced gender discrimination rather than eliminating bias. The deeper problem is that AI doesn’t create bias; it amplifies the biases already present in its training data. This case highlights why ethical AI requires fair, diverse, and audited training data before deployment.
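The mechanism behind this failure can be made concrete. The sketch below (entirely hypothetical data, not Amazon's actual system) trains a naive resume scorer on biased historical outcomes: because resumes containing a gendered token were rarely marked as hires in the past, the learned per-token hire rates drag down any new resume containing that token.

```python
# Illustrative sketch (hypothetical data): a naive resume scorer "trained" on
# biased historical hiring outcomes learns to penalize a gendered token.
from collections import defaultdict

# Toy historical data: (resume tokens, was_hired), reflecting a
# male-dominated workforce where resumes mentioning "women's" were not hired.
history = [
    (["python", "chess"], 1),
    (["python", "women's", "chess"], 0),
    (["java", "captain"], 1),
    (["java", "women's", "captain"], 0),
    (["python", "captain"], 1),
]

# "Training": per-token hire rate, a stand-in for what a real model would learn.
counts = defaultdict(lambda: [0, 0])  # token -> [hires, appearances]
for tokens, hired in history:
    for t in tokens:
        counts[t][0] += hired
        counts[t][1] += 1

def score(tokens):
    """Average historical hire rate of a resume's tokens; higher = ranked better."""
    rates = [counts[t][0] / counts[t][1] for t in tokens if t in counts]
    return sum(rates) / len(rates)

# Two otherwise identical resumes: the one with "women's" scores strictly lower,
# purely because of the token's history -- the bias is inherited, not created.
print(score(["python", "chess"]))
print(score(["python", "women's", "chess"]))
```

The model never "decides" to discriminate; the gap comes entirely from the skewed outcomes it was fit to, which is why auditing the training data matters more than auditing the model's intentions.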

2. AI in Healthcare: Who Gets Prioritized?

Hospitals are using AI to predict which patients are at higher risk of complications and need urgent care.

While AI can improve medical efficiency, it also poses ethical risks:

• Bias in healthcare data: If an AI system is trained on data from wealthier populations, it may not accurately diagnose conditions in underprivileged or minority communities.

• Life-or-death decisions: Should AI be allowed to decide who gets an organ transplant first or who receives life-saving treatment in a crisis?

A well-documented example is racial bias in medical risk prediction: a 2019 study published in Science found that an AI healthcare algorithm used in U.S. hospitals systematically discriminated against Black patients when predicting who needed urgent care.

The algorithm predicted patient risk from healthcare spending history, on the assumption that higher spending meant greater medical need. But because Black patients had historically received less medical attention and fewer resources, the AI incorrectly concluded they were healthier than they actually were and recommended less care for them, reinforcing systemic racial disparities. The lesson is that AI can unintentionally worsen healthcare inequality when it relies on flawed proxies for medical risk. Ethical AI in medicine must audit for bias across racial and socioeconomic groups before deployment.
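The proxy problem described above can be shown in a few lines. In this sketch (made-up numbers, not the study's data), two groups have identical true medical need, but one group's past spending is lower; ranking patients by the spending proxy then excludes that group from an extra-care program entirely.

```python
# Illustrative sketch (hypothetical numbers): using past healthcare spending as
# a proxy for medical need under-ranks a group that historically received less
# care, even when true need is identical.
patients = [
    # (group, true_need, past_spending) -- spending lags need for group "B"
    ("A", 8, 8000),
    ("A", 5, 5000),
    ("B", 8, 4000),  # same need as the first "A" patient, half the spending
    ("B", 5, 2500),
]

def proxy_risk(p):
    # The flawed proxy: predicted need is just past spending.
    return p[2]

# Enroll the top half into an extra-care program, ranked by the proxy.
ranked = sorted(patients, key=proxy_risk, reverse=True)
enrolled = ranked[:2]
print([p[0] for p in enrolled])   # only group "A" patients are enrolled

# Ranking by true need instead would enroll one patient from each group.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:2]
print(sorted(p[0] for p in by_need))
```

Nothing in the proxy mentions race or group membership; the disparity enters through the historical correlation between group and spending, which is exactly why audits must compare outcomes across groups rather than inspect the model's inputs.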

3. AI in Criminal Justice: Predicting Crime and Sentencing

AI is being used to predict crime rates, assess bail eligibility, and even recommend prison sentences. This raises serious ethical concerns:

• Racial and socioeconomic bias: Studies have shown that AI-based predictive policing tools disproportionately target minority communities.

• The illusion of objectivity: AI is often seen as neutral, but if trained on biased crime data, it can reinforce systemic discrimination.

For example, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was used in U.S. courts to predict which defendants were likely to reoffend. A 2016 investigation by ProPublica found that the system wrongly labeled Black defendants as "high risk" nearly twice as often as White defendants. Because it was trained on historical crime data that reflected systemic racial bias in policing and sentencing, the system disproportionately assigned higher risk scores to Black defendants, contributing to harsher outcomes, while White defendants were more likely to be misclassified as "low risk" even with similar criminal records.

This shows AI in law enforcement is not neutral—it reflects biases from past data. Using AI in criminal justice without transparency and accountability can deepen racial disparities instead of reducing them.
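The disparity ProPublica measured is a gap in false positive rates: among people who did not reoffend, what share was still labeled "high risk" in each group. The sketch below computes that metric on a tiny invented dataset (not ProPublica's actual counts) constructed so the gap mirrors the reported roughly two-to-one ratio.

```python
# Illustrative sketch (made-up records, not ProPublica's data): computing the
# per-group false positive rate, the metric behind the COMPAS disparity.
records = [
    # (group, predicted_high_risk, reoffended)
    ("Black", True,  False), ("Black", True,  False), ("Black", True,  True),
    ("Black", False, False), ("Black", False, True),
    ("White", True,  False), ("White", False, False), ("White", False, False),
    ("White", True,  True),  ("White", False, True),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the share labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate("Black"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate("White"))  # 1 of 3 non-reoffenders flagged
```

A tool can be "accurate" on average and still fail this test badly, which is why accountability requires publishing error rates broken down by group, not just an overall accuracy number.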

4. AI in Social Media and Content Moderation

AI is used to detect hate speech, misinformation, and inappropriate content on platforms like Facebook, Twitter, and YouTube. But ethical issues arise when AI decides what content is acceptable:

• Censorship vs. free speech: Who decides what AI should block? Different cultures and governments have different definitions of what is “harmful.”

• Algorithmic bias: AI moderation tools have been found to flag certain languages and cultural expressions as offensive, even when they are not.

For example, YouTube’s AI-driven recommendation system has been criticized for amplifying extremist and misleading content. A 2019 report by The New York Times found that the AI promoted conspiracy theories and misinformation, leading viewers down radicalization pathways. YouTube's algorithm prioritizes watch time and user engagement—it suggests content that keeps users watching, even if it’s misleading or extreme.

The AI recommended conspiracy theories, fake news, and extremist videos because they generated high engagement. This resulted in real-world harm, including spreading COVID-19 misinformation and fueling political extremism. AI-driven content moderation needs ethical oversight—it should balance engagement with social responsibility, ensuring that misinformation and harmful ideologies are not amplified.
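The core design flaw is in the objective itself: if the only signal a recommender maximizes is watch time, sensational content wins by construction. The sketch below (hypothetical catalog and numbers, not YouTube's actual ranking system) contrasts an engagement-only ranking with one possible mitigation that demotes content flagged as misleading.

```python
# Illustrative sketch (hypothetical catalog): a recommender optimizing only for
# expected watch time surfaces sensational content, because nothing in the
# objective penalizes it for being misleading.
videos = [
    # (title, avg_watch_minutes, flagged_misleading)
    ("Measured policy explainer",     4.0, False),
    ("SHOCKING conspiracy REVEALED", 11.0, True),
    ("Local news summary",            3.0, False),
    ("Miracle cure doctors hate",     9.0, True),
]

def engagement_only(v):
    return v[1]  # the only signal: how long people watch

def with_integrity_penalty(v, penalty=8.0):
    # A hypothetical mitigation: demote content flagged as misleading.
    return v[1] - (penalty if v[2] else 0.0)

top_engagement = max(videos, key=engagement_only)
top_balanced = max(videos, key=with_integrity_penalty)
print(top_engagement[0])  # the conspiracy video wins on engagement alone
print(top_balanced[0])    # the explainer wins once misleading content is demoted
```

The penalty term here stands in for any explicit responsibility signal; the point is that ethical behavior has to be written into the objective, because it will not emerge from engagement metrics on their own.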

Conclusion: Why AI Ethics Cannot Be Ignored

Ignoring AI ethics risks deepening societal inequalities, eroding democratic values, and reinforcing harmful biases. AI is shaping the future, but we must ensure that the future it creates is one that prioritizes fairness, accountability, and human dignity.

If we fail to act now, we risk a world where AI’s decisions go unchallenged, affecting billions without transparency or recourse. The question is not whether AI will continue to evolve—it will.

The real question is whether we, as a society, will take responsibility for shaping its evolution in a way that benefits all of humanity.

Can AI ever be truly fair? That will depend on who we trust to teach it right from wrong.
