

Ethics and Bias in Generative AI: Navigating the Moral Landscape

Today, we live in a digital era where we encounter seemingly authentic content on a daily basis: celebrity endorsements, masterpiece paintings or perfectly crafted symphonies. But what if the celebrity never endorsed the product, or the masterpiece was not painted by a human? With advancements in Generative Artificial Intelligence (Gen AI), such scenarios are becoming commonplace, with AI-generated content and deepfakes misleading audiences and challenging the boundaries of truth. What is more alarming is that AI systems in critical fields like hiring, healthcare and criminal justice have been found to be biased, unintentionally perpetuating societal inequalities.

These examples highlight the ethical challenges involved in working with Gen AI and prompt urgent attention to responsible AI development. Signalling the critical need for guidelines and regulations to address these concerns, UNESCO adopted the first global ethical standard for AI in 2021: the Recommendation on the Ethics of Artificial Intelligence.

Let us first look at what bias in AI models means.


Bias in AI models

AI bias refers to skewed or unfair outcomes generated by AI models. These models identify patterns and predict outcomes based on the data used to train them, so inherent biases in historical data carry over into the algorithms and perpetuate societal inequities. This creates unequal and unfair opportunities and fosters mistrust, especially among marginalised and underrepresented groups. Skewed or mislabelled training data adds to the bias, and algorithms can also become biased when programmers unintentionally build their own assumptions into the decision-making process.
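
To make this concrete, here is a minimal sketch of how a model trained on historically skewed labels simply reproduces that skew. The scikit-learn classifier, the two-feature layout and the sensitive "group" attribute are hypothetical illustrations, not a real hiring system:

```python
# Toy illustration: historical hiring labels that favour group 1 are
# learnt and reproduced by the model. All data here is synthetic.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group], where "group" is a hypothetical
# sensitive attribute. Group 1 was historically hired at lower experience.
X = [[2, 0], [4, 0], [6, 0], [2, 1], [4, 1], [6, 1]]
y = [0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Two identical candidates who differ only in the sensitive attribute
# receive different predictions: the model has learnt the historical bias.
print(model.predict([[4, 0], [4, 1]]))  # expected: [0 1]
```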

Here are some real-life instances.

  • Computer-aided diagnosis (CAD) systems have produced less accurate results for Black patients than for white patients.
  • Applicant tracking systems have propagated gender biases by filtering on specific words.
  • Programmatic ads and image generators have propagated gender and age biases.
  • Predictive policing tools have reinforced racial biases based on historical data patterns.

Ethical implications of AI-generated content


Gen AI tools can generate almost any form of content, raising grave concerns about potential misuse, as criminals can deceive their target audiences quite convincingly. Deepfake technology helps them create and propagate misinformation with serious consequences. There is also the issue of authorship and copyright. An AI tool trained to generate art can do so with great accuracy, but who does this AI-generated art belong to? The artists on whose work the tool was trained, the developers who wrote the algorithms or the company that created the tool? These are the ethical complications surrounding copyright in AI-generated content.

Another ethical AI concern is accountability. If AI systems make decisions, who is accountable for those decisions? For example, who is responsible for an accident involving an autonomous car with no driver? Such scenarios emphasise the need for transparency and explainability in AI models.

An ethical framework must govern all decision-making processes of AI models throughout the training and development phases. Organisations must create a comprehensive AI governance policy that includes transparent decision-making processes for ethical dilemmas, risk management and compliance. The policy must incorporate best practices and periodic audits, adopting a human-in-the-loop approach to ensure quality, foster trust and build a bias-free system. Such an approach makes the systems, and the entities building them, more accountable. An ethical AI framework covering aspects such as bias, fairness, transparency, explainability, accountability, data privacy and security helps developers understand their responsibility for the outcomes of their systems and work within clear boundaries.

Let us explore how to mitigate AI biases.


Mitigating bias in AI

For AI bias mitigation, it is crucial to address the root cause of bias: the training data. Historical biases, imbalances in data collection, societal stereotypes and inaccurate labelling lead to gender, racial and socioeconomic biases. Mitigation strategies must effectively detect and neutralise these biases. Training datasets must reflect real-world diversity for the model to be accurate. Audit the datasets to ensure fair representation across demographics, paying particular attention to underrepresented or overrepresented groups, as in the audit sketch below.
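
As a minimal sketch of such an audit, the snippet below assumes a pandas DataFrame with hypothetical "gender" and "hired" columns; both the column names and the data are illustrative:

```python
import pandas as pd

# Illustrative training data with a sensitive attribute and a label.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
    "hired":  [0, 1, 1, 0, 1, 0, 1, 0],
})

# Share of each demographic group in the dataset.
print(df["gender"].value_counts(normalize=True))

# Positive-outcome rate per group; large gaps flag skewed historical data.
print(df.groupby("gender")["hired"].mean())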

Leverage bias detection tools like AI Fairness 360, Fairlearn, etc., to detect, quantify and mitigate biases in training datasets. Domain experts can help refine this further to ensure fairness.
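
For instance, Fairlearn's MetricFrame can break a standard metric down by demographic group, and its demographic_parity_difference reports the gap in selection rates between groups. The toy arrays below stand in for a real evaluation set:

```python
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Toy ground truth, model predictions and sensitive attribute values.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group = ["F", "F", "M", "F", "M", "M", "M", "F"]

# Accuracy broken down by demographic group.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)

# Difference in selection rates between groups; 0 means perfect parity.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```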

Biases can also creep in during the training and development of AI algorithms. Here are some techniques for mitigating those:

  • Balance accurate representation of demographics by augmenting or resampling data with additional examples of underrepresented groups, using techniques such as SMOTE (Synthetic Minority Over-sampling Technique), Tomek links, KMeans-SMOTE, random sampling, etc. (see the resampling sketch after this list).
  • Reduce bias before training by decreasing the influence of sensitive attributes like race, gender, etc., in the dataset during pre-processing.
  • Employ fair representation learning algorithms during pre-processing to ensure balanced datasets.
  • During training, observe and optimise the learning algorithm for accuracy and fairness.
  • Utilise tools like Google's What-If Tool to visually analyse and inspect model behaviour and outcomes, and make the necessary adjustments.
  • Consider reinforcement learning, which trains AI models through trial and error using reward signals rather than labelled historical datasets, much like how humans learn, reducing the opportunity for biases in historical data and labelling to creep in.
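
As referenced in the first bullet above, here is a minimal resampling sketch using imbalanced-learn's SMOTE on a synthetic dataset; the class weights and parameters are illustrative assumptions:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic data where the minority class is only ~10% of samples.
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# SMOTE synthesises new minority-class examples by interpolating between
# existing minority samples and their nearest neighbours.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after:", Counter(y_res))
```

After resampling, the classes are represented in roughly equal numbers, which reduces, though does not by itself eliminate, representation bias during training.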

However, navigating the complex moral landscape of AI also requires an umbrella of AI governance and regulation, which is evolving rapidly. Some examples are:

  • The European Union’s AI Act
  • The U.S. National AI Initiative Act
  • The General Data Protection Regulation (GDPR)
  • Canada's Directive on Automated Decision-Making

Regulations vary by region, so coordinated global efforts are needed to make them effective. Establishing consistent guidelines and industry-wide standards involving all stakeholders, including civil society, helps achieve this.


How can Infosys BPM help?


Infosys BPM's Gen AI business operations platform is a suite of tailor-made, ready-to-use, BPM-focused solutions and responsible design frameworks that enables enterprises to accelerate value creation. Responsible by design is a key tenet of our platform and underscores our commitment to upholding ethical standards.

