
Ethical considerations with Generative AI adoption in HR

As Generative AI (GenAI) becomes increasingly integrated into our workplaces, HR faces the unenviable task of balancing technological advancement with ethical considerations. While there are plenty of reasons to welcome the positive changes heralded by GenAI, one must not ignore its potential to introduce or exacerbate biases, compromise privacy and obscure decision-making processes. HR sits at the centre of organisational AI adoption and is presented with a unique opportunity to champion ethical practices and, through thoughtful strategies, ensure the responsible use of these advanced technologies.

GenAI tools can transform talent acquisition, elevate employee experiences and add immense value to HR services. But since HR is the custodian of confidential and sensitive data, pitfalls can arise from misinformation and from identity, trust and authenticity issues. One of the key observations from the Gartner Data & Analytics Summit 2023 was, “By 2027, 80 per cent of enterprise marketers will establish a dedicated content authenticity function to combat misinformation and fake material.” Imagine deepfake videos of employees floating around, or discovering that a screening algorithm has developed a systemic bias: unless HR addresses the ethical considerations of GenAI with the gravity they require, such issues will crop up regularly.

Thankfully, HR can address most of these ethical issues through meticulous data governance policies that extend to training data, backed by regular audits and sustained human oversight. Much of this is about providing a transparent view into the GenAI black box to instil trust and confidence in its usage.

Let’s delve into the top ethical considerations of GenAI adoption in HR.


Top HR challenges of GenAI adoption

  1. Biases
     At the core of every GenAI tool lies its underlying model, trained on sample data prepared by humans, so there is always scope for error, and systemic biases in historical data creep in as well. Credible research has found models favouring specific genders or excluding candidates based on certain parameters. These models work on the cues thrown up by data patterns: the people building them may be unaware of biases in the training data, and the model itself cannot recognise that a pattern it has learnt is discriminatory. Flawed data and incorrect stereotypes thus perpetuate biases and inequalities through poorly trained algorithms. Whether in recruitment or talent management, these biases lead to unfair outcomes.


  2. Data privacy and security
     GenAI systems need enormous amounts of data to function effectively, and therein lies the challenge of managing the privacy concerns of candidates and employees. How data is stored and used raises trust and security issues, and one cannot rule out data leakage, hacking or even an insider threat from a malicious or disgruntled employee. Compounding the issue, most enterprises work with a global workforce, so the data they collect and store is subject to the local regulations of each region. The fallout of such incidents usually goes beyond the financial, which makes securing data and ensuring privacy paramount.


  3. Transparency
     As the line between human and machine interactions blurs, organisations must be transparent about GenAI tool usage. Candidates and employees alike must know whether they are interacting with a tool or a human. HR should facilitate awareness training so people understand the implications; this helps build trust by assuring them of the safety and security measures in place. Trust issues often arise because these tools become a black box with no transparency or explanations. Understanding how these systems arrive at their decisions is essential to building trust, and it is the biggest challenge for HR. Disclosures and obtaining consent before collecting information also help in this endeavour.

    So, how can HR tackle these challenges effectively?


Strategies for responsible AI adoption

  • Implement robust data governance
  • Since data drives GenAI algorithms, it is crucial to establish a robust data governance framework covering data collection, storage and AI usage. The policy must cover compliance with data security and privacy regulations. This approach ensures the data used to train GenAI models is diverse, bias-free, accurate, relevant and of high quality. The policy must also mandate regular audits to verify compliance with the governance process and to detect data biases and other vulnerabilities.

  • Establish an AI ethics committee
  • Form a cross-functional AI ethics committee with representatives from HR, legal and IT to formulate an organisational AI ethics policy. The policy should cover ethical AI principles such as fairness, transparency, accountability and privacy. The committee should also plan open communication of these policies across the organisation, arrange employee training, and periodically review and update the policy to keep it relevant.

  • Maintain human-in-the-loop processes
  • Build AI systems with a human-centric approach and adopt a human-in-the-loop model that ensures critical human oversight as a check and balance in decision-making. HR must always be able to interpret and intervene in crucial decisions at the right juncture to prevent unintended outcomes. Such an approach helps evolve a fair AI system and fosters trust and accountability. It ensures GenAI augments humans rather than replaces them.
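To make the audit idea above concrete, here is a minimal sketch of one check a regular audit might run: the "four-fifths rule" heuristic (used by US regulators as a rough indicator of disparate impact), which flags a screening process when any group's selection rate falls below 80 per cent of the highest group's rate. The data, group labels and function names here are illustrative assumptions, not part of any specific platform.

```python
from collections import Counter

def selection_rates(records):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 breach the common 'four-fifths' rule of thumb."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcomes: (group, selected)
audit_sample = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

ratio = disparate_impact_ratio(audit_sample)
print(f"Disparate impact ratio: {ratio:.2f}")  # flag for review if below 0.8
```

A real audit would run such checks across every protected attribute and over time, and feed flagged results to the ethics committee rather than auto-correcting silently.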
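The human-in-the-loop strategy above can be sketched as a simple routing gate: the model's recommendation is applied automatically only when confidence is high, while low-confidence cases and all adverse outcomes are escalated to a human reviewer. The threshold, field names and labels below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    recommendation: str   # e.g. "advance" or "reject"
    confidence: float     # model confidence in [0, 1]

REVIEW_THRESHOLD = 0.9    # below this, a human must decide

def route(decision: Decision) -> str:
    """Auto-apply only high-confidence positive recommendations;
    everything else, and every rejection, goes to a human reviewer."""
    if decision.recommendation == "reject" or decision.confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "auto_apply"

print(route(Decision("c-101", "advance", 0.95)))  # auto_apply
print(route(Decision("c-102", "advance", 0.62)))  # human_review
print(route(Decision("c-103", "reject", 0.99)))   # human_review
```

Routing every rejection to a human, regardless of confidence, reflects the point that HR must be able to intervene in crucial decisions before unintended outcomes occur.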


How can Infosys BPM help?

Infosys BPM's GenAI business operations platform helps customers transform their business operations through responsible design frameworks. It enables you to drive AI-first operations through reimagined business processes while ensuring ethical AI usage that adheres to legal, security and privacy guidelines.
