What it takes to create and implement ethical artificial intelligence

Artificial Intelligence (AI) is all-pervasive today, sometimes visibly and at other times running quietly in the background. Shopping, healthcare, manufacturing, traffic routing, law and order, entertainment – you name it, and AI algorithms are at work. Most enterprises have automated their workflows across functions. Recruitment is one vital corporate function revolutionised by AI. While organisations earlier struggled to screen hordes of resumes, AI made it look easy. AI-powered hiring platforms are the norm today, propelled by the changes brought about by Covid-19. AI brought cost and time savings and improved the quality of hires, fuelling services like recruitment process outsourcing. Another unique selling point in its favour was objectivity in hiring: humans can be biased, whereas machines, the assumption went, cannot.

In November 2021, UNESCO member states adopted the first-ever global agreement on the ethics of AI to protect and promote human rights and human dignity. This acknowledgement of the need for global-level policies and regulatory frameworks was meant to ensure human-centred AI. Why was this need felt? Governments and corporates across the world realised that there were occasions when AI behaved unethically, and it damaged their reputation and finances alike. Gender and racial discrimination were the main concerns. For example, heavier AI-backed surveillance in predominantly minority neighbourhoods can lead to more arrests there, even when crime rates elsewhere are the same. Such instances sparked the discussion about ethical AI and the need for policies and frameworks.

So, how can humankind create and implement ethical AI that does not discriminate or cause harm, even unintentionally? A crucial difference between human bias and machine bias is that human bias can be intentional, whereas bias in a machine is unintentional. Machine Learning (ML) algorithms learn and gain intelligence from the data and models fed to them. So the biases machine algorithms exhibit are systemic: they amplify the ones already existing in society. This scenario leads us to the first step in creating and implementing ethical AI. Let us look at it, and a few more steps, in detail.

Understand and eliminate human biases

AI learns from historical data and makes inferences based on the training sets and models it has access to. If inherent biases exist in our system, AI takes them as the natural way to take things forward; it is not innately aware of morals or ethics to decide otherwise. Therefore, the responsibility for removing biases and bringing in ethics lies with the people creating and implementing AI systems. Organisations should create awareness about this need and incentivise people to adopt an ethical AI approach. The people responsible for creating AI systems should be clear about how they will be used, and if they see an intentionally unethical use, they can refuse to be a part of it. Ironically, one way to ensure this happens is to involve humans in evaluating and overriding the AI algorithm's biases. It means enterprises would have to learn to balance historical data with human involvement, and that leads us to the next step that enterprises should take to ensure ethical AI.
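One way to picture that balance is a human-in-the-loop gate: automated decisions that are low-confidence, or that touch sensitive attributes, are routed to a human reviewer instead of being applied automatically. This is a minimal Python sketch under stated assumptions; the threshold, labels, and queue names are illustrative, not part of any specific platform.

```python
def route_decision(prediction, confidence, flagged=False, threshold=0.9):
    """Decide whether a model decision is applied automatically or
    sent to a human reviewer.

    A decision goes to human review when the model's confidence is
    below the threshold, or when the case was flagged (e.g. it touches
    a sensitive attribute). Values here are illustrative assumptions.
    """
    if flagged or confidence < threshold:
        return ("human_review", prediction)
    return ("auto_apply", prediction)


# Low-confidence rejection: a person takes a second look.
print(route_decision("reject", confidence=0.72))
# High-confidence, unflagged shortlist: applied automatically.
print(route_decision("shortlist", confidence=0.97))
# Flagged cases always go to a human, regardless of confidence.
print(route_decision("shortlist", confidence=0.99, flagged=True))
```

The design choice is deliberate: the model never has the final say on edge cases, so historical data drives the routine decisions while humans retain oversight of the risky ones.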

Formulate an ethical risk framework

When you bring in human involvement, you need robust mechanisms to retain objectivity, transparency, and traceability. Many enterprises jumped onto the AI bandwagon without clearly understanding their needs, and this approach can prove disastrous. The first step to implementing ethical AI is to identify the goals of the implementation and evaluate them ethically to ensure no harm comes of it. Strengthen this step with a robust framework for identifying and evaluating ethical risks. All stakeholders, even external ones, should be consulted while formulating this policy. Identify all potential risks and discuss ways to address them in the system. Scour the data sets used to create models to remove slang, code words, abbreviations, and proxy variables. For example, race or gender should not appear in hiring or loan-disbursal decision data sets because it should not matter, even legally. Incorporate such policies into data annotation platforms.
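In code, scrubbing a record of protected attributes and their known proxies can be as simple as a deny-list filter applied before data reaches a training pipeline. This is only a sketch: the field names below ("gender", "postcode", and so on) are illustrative assumptions, and a real pipeline would maintain the deny-list with legal and domain experts.

```python
# Attributes that must not influence hiring or lending decisions.
PROTECTED = {"gender", "race", "age", "marital_status"}
# Fields that can act as proxies and leak protected traits.
PROXIES = {"postcode", "first_name"}

def sanitise(record: dict) -> dict:
    """Return a copy of the record without protected or proxy fields."""
    banned = PROTECTED | PROXIES
    return {k: v for k, v in record.items() if k not in banned}


applicant = {
    "years_experience": 7,
    "skills": ["python", "sql"],
    "gender": "F",
    "postcode": "E1 6AN",
}
print(sanitise(applicant))  # only experience and skills remain
```

Note that dropping columns is necessary but not sufficient: correlated features can still encode the removed ones, which is why the proxy list needs periodic review.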

Establish an ethical risk governance structure for continued monitoring. Enterprises should employ business process analytics that function within this governance structure. Identify metrics and KPIs, and measure and track them periodically to ensure they stay the course. Employee training is essential so that people understand privacy violations, biases, and unexplained outputs from algorithms. Reinforce this approach with robust processes to catch violations and ensure corrective action. Such an approach brings transparency and accountability to the system. Regular audits, preferably by third parties, are among the best practices deployed to ensure compliance. Ensure that the framework you formulate suits your industry's needs.

Solve the black box issue

In computing, a black box is a system or program that takes an input and gives you an output without revealing the workings in between. In AI systems, it refers to the inner workings of the algorithms that arrive at the recommendations assisting decision-making. Only if we achieve transparency, explainability, or interpretability for this black box can we assure ethical AI: we cannot afford not to know why an AI algorithm recommended something undesirable or harmful. This is especially true when we use such systems in high-stakes scenarios like identifying and treating diseases in healthcare, or profiling risk in loan processing. Enterprises would also benefit from a more holistic assessment of use cases by discussing them with all stakeholders.
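Even without opening the box, one family of explainability techniques probes it from outside: perturb one input at a time and measure how much the output moves. The sketch below uses a toy linear scorer as a stand-in for an opaque model; all names and numbers are illustrative assumptions, and production systems would use established tooling (e.g. permutation importance or SHAP-style methods) instead.

```python
import random

def score(applicant):
    """Stand-in for an opaque model; we only call it, never inspect it."""
    return 0.5 * applicant["experience"] + 2.0 * applicant["test_score"]

def sensitivity(model, applicant, feature, trials=200, jitter=1.0):
    """Average absolute change in the model's output when one feature
    is randomly perturbed - a crude, model-agnostic importance probe."""
    random.seed(0)  # deterministic for repeatable audits
    base = model(applicant)
    total = 0.0
    for _ in range(trials):
        probe = dict(applicant)
        probe[feature] += random.uniform(-jitter, jitter)
        total += abs(model(probe) - base)
    return total / trials


applicant = {"experience": 6.0, "test_score": 8.0}
for feature in applicant:
    print(feature, round(sensitivity(score, applicant, feature), 3))
# test_score should move the output roughly 4x as much as experience,
# matching the hidden weights - the probe recovers what drives the model.
```

Probes like this do not fully open the black box, but they give auditors a way to check whether a deployed model leans on the features it is supposed to.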

AI is a complex technology, and we need to ensure we use it only for the betterment of humans. It is not easy to create and implement ethical AI, but it is not impossible either. What is encouraging is that it is a known problem to tackle, with governments and enterprises acknowledging it and working out solutions.

