
Tackling biases of AI with human fairness

Artificial Intelligence has been a subject of scientific discussion since the time of Alan Turing in the mid-twentieth century. Since then, it has often been a matter of suspicion and, at other times, a catalyst of revolutionary change. Be it recommendations on streaming platforms or restaurant suggestions from food delivery startups, Artificial Intelligence (AI) has already touched many corners of our lives. Like the invention of electricity, it is poised to become a milestone in revolutionizing the way we live, from transportation to entertainment.

But with increased reliance on AI, some equally difficult problems have surfaced, such as AI bias. The ill effects of this bias are magnified in industries such as mortgage lending, where a rejection can alter an applicant's life course. Take, for example, an African American couple in Charlotte, North Carolina. According to local news reports, their lifelong dream of owning a house was crushed after a mortgage-approval algorithm rejected their application, allegedly based on racial profiling. This is a classic case of algorithmic bias, where an algorithm makes systematic and repeatable errors that produce unfair outcomes.

In a business context, AI bias can be aggravated further, given how deeply AI is rooted in an organization's decision-making processes today. In some instances, AI has taken over decisions entirely, eliminating manual intervention altogether. In the mortgage space, AI is used in almost every aspect of the business: pre-approval, application evaluation, portfolio assessment, management of non-performing loans, expansion, debt collection, and the overall streamlining of processes.

A few studies have made compelling observations. A 2021 Journal of Financial Economics study found that borrowers from minority groups were charged interest rates at least 8% higher and were rejected for loans 14% more often than borrowers from privileged groups. When the biases of the traditional approach creep into Machine Learning (ML) models trained on an unfair dataset, they undermine the very foundation on which an AI algorithm predicts, analyses, and decides on the merits of an application.

To safeguard against hard-to-measure gaps in an AI algorithm, it is important to understand how the algorithm works and what makes it powerful enough to resolve complex problems. Manual intervention and conscious human effort are crucial here to weed out the arbitrary biases that may have crept into the algorithms.


Garbage in, garbage out

It is ironic that a tool as ostensibly altruistic as AI, often championed as a way to overcome human biases, now requires human intervention to tackle biases of its own. But AI is only as good as the underlying data used to train it. The more data the algorithm is fed, the more accurate and consistent it can become. Even more crucial is the quality of that underlying data, which is essential to accuracy. If there are undetected biases in the data, the AI algorithms will eventually learn them and apply them in their evaluations.

Take, for example, Amazon’s automated recruiting tool from 2014, which did not rate candidates fairly and showed bias against women. The company had used ten years of historical data to train its AI to look for the best-matched candidates. With past hires dominated by men and men forming over 60% of Amazon’s employee base, a preference for male candidates was inadvertently fed into the algorithm. The company eventually stopped using the algorithm for recruiting purposes.


Unbox the black box

With simpler algorithms, accountability can be established and the logic can be tweaked to meet compliance requirements. With complex algorithms, such tweaks are difficult, so the impact of their results needs to be understood.
It is vital to choose an algorithm appropriate to the problem at hand. Using an unexplainable ML technique to make sensitive decisions might yield higher accuracy but could also be non-compliant with the law of the land. One must keep an eye on the impact of false positives and false negatives, irrespective of how frequently they occur (a simple way to measure them follows the list below).

  • False positives – applicants not eligible for a mortgage getting their applications approved
  • False negatives – applicants eligible for a mortgage getting their applications rejected
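
In practice, these error rates should be tracked per applicant group, not just overall. A minimal Python sketch of that idea, using a made-up record layout of (group, actually eligible, model approved) rather than any real dataset:

    from collections import defaultdict

    def group_error_rates(records):
        """Compute false positive and false negative rates per group.

        Each record is a (group, actually_eligible, model_approved) tuple;
        the layout and the toy data below are illustrative assumptions.
        """
        counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        for group, eligible, approved in records:
            stats = counts[group]
            if eligible:
                stats["pos"] += 1
                if not approved:   # eligible applicant rejected: false negative
                    stats["fn"] += 1
            else:
                stats["neg"] += 1
                if approved:       # ineligible applicant approved: false positive
                    stats["fp"] += 1
        return {
            g: {
                "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else 0.0,
                "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else 0.0,
            }
            for g, s in counts.items()
        }

    # Toy records: (group, actually eligible, model approved)
    sample = [
        ("A", True, True), ("A", True, False), ("A", False, False),
        ("B", True, False), ("B", False, True), ("B", True, True),
    ]
    print(group_error_rates(sample))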

Quantify fairness - enhance equity

Fairness has many definitions, which can change with perspective. There are different schools of thought: some advocate different thresholds for vulnerable strata, while others maintain a single threshold for all. The right approach may change on a case-by-case basis, and multiple perspectives from diverse viewpoints can help identify the one that is both ethical and fair. To maintain fairness and uphold equity, a model's predictions should not depend extensively on sensitive variables such as gender, race, or sexual orientation.
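
One common way to put a number on a single definition of fairness is to compare approval rates across groups defined by a sensitive attribute, often called the demographic parity gap. A minimal sketch, with the metric choice, group labels, and toy decisions all assumed for illustration:

    def approval_rate(decisions):
        """Share of approved applications in a list of booleans."""
        return sum(decisions) / len(decisions) if decisions else 0.0

    def demographic_parity_gap(decisions_by_group):
        """Largest difference in approval rate between any two groups.

        decisions_by_group maps a group label to a list of approval
        decisions; both the metric and the data layout are assumptions.
        """
        rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
        gap = max(rates.values()) - min(rates.values())
        return rates, gap

    rates, gap = demographic_parity_gap({
        "group_x": [True, True, False, True],
        "group_y": [True, False, False, False],
    })
    print(rates, gap)  # a policy threshold on the gap could flag the model for review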

While creating and deploying AI solutions, processes can be added or enhanced to identify and mitigate biases. These should run throughout the development lifecycle, whether in the design, implementation, or monitoring phase. A thorough internal and external audit of the raw data intended for training, including with third-party tools, would help too.
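
As a rough illustration of what such an audit might surface, the sketch below tallies how each group is represented in a training file and how often each group carries a positive historical label; the file path and column names are hypothetical:

    import csv
    from collections import Counter, defaultdict

    def audit_label_balance(path, group_col, label_col):
        """Report group representation and historical approval share in a CSV.

        The path and column names are hypothetical; this only surfaces
        imbalances in the raw training data, it does not fix them.
        """
        group_counts = Counter()
        positive_counts = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                group = row[group_col]
                group_counts[group] += 1
                if row[label_col].strip().lower() in {"1", "true", "yes"}:
                    positive_counts[group] += 1
        total = sum(group_counts.values())
        for group, n in group_counts.items():
            print(f"{group}: {n / total:.1%} of rows, "
                  f"{positive_counts[group] / n:.1%} historically approved")

    # audit_label_balance("mortgage_training.csv", group_col="race", label_col="approved")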


New and timely laws

There is much to be accomplished when it comes to laws and policies around digital services and AI best practices. Existing laws need updating, as AI is evolving rapidly and is starting to be used to make sensitive, impactful decisions. AI algorithms should therefore be tested through a regulatory sandbox – a framework with regulatory safeguards. A time-bound, small-scale live implementation could help identify compliance issues before a scaled rollout.

Another widely proposed strategy is the Algorithmic Impact Assessment (AIA) – a set of risk and mitigation questions that builds a scorecard identifying the overall risk level – used to consider social impacts early in the development stage and to properly document the decisions and testing that would support future use of a solution.
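
Purely to illustrate the scorecard idea, the sketch below sums weights for "yes" answers to a handful of invented questions and maps the total to a risk band; the questions, weights, and bands are assumptions, not part of any official AIA framework:

    # Illustrative-only risk questions and weights (assumptions, not a standard)
    QUESTIONS = {
        "uses_sensitive_attributes": 3,
        "fully_automated_decision": 3,
        "affects_access_to_credit": 2,
        "model_is_unexplainable": 2,
        "no_external_audit": 1,
    }

    def aia_score(answers):
        """Sum weights of questions answered 'yes' and map the total to a band."""
        score = sum(w for q, w in QUESTIONS.items() if answers.get(q, False))
        band = "high" if score >= 8 else "medium" if score >= 4 else "low"
        return score, band

    print(aia_score({
        "uses_sensitive_attributes": True,
        "fully_automated_decision": True,
        "affects_access_to_credit": True,
    }))  # -> (8, 'high')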

To summarize, the following efforts must be undertaken to weed out biases in AI while retaining accuracy:

  • Keep the historical training data as clean and rich as possible
  • Look out for false negatives and minimize them
  • Reduce the algorithm's dependence on sensitive variables such as ethnicity and gender
  • Work with regulators to ensure compliance (using a regulatory sandbox)
  • Review the model's outcomes manually for any discrepancies

But not all unequal outcomes are unfair. Overcorrection of a bias should not lead to an unfair outcome for a different group. All stakeholders – developers, leaders, regulatory bodies, and users – need to shoulder this responsibility.
New ways must be identified to reduce disparities between groups without sacrificing the overall performance of the model, especially when there appears to be a trade-off. There is a balance to strike between equity and accuracy, and every AI solution must strive for the equitable world it envisaged in the first place.
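
One simple way to inspect that trade-off is to sweep the decision threshold and report overall accuracy alongside the approval-rate gap between groups at each setting. A sketch on toy data, with all scores, labels, and group memberships invented for illustration:

    def sweep_thresholds(scores, labels, groups, thresholds):
        """For each threshold, report overall accuracy and the approval-rate
        gap between groups. All inputs here are illustrative assumptions."""
        results = []
        for t in thresholds:
            approved = [s >= t for s in scores]
            accuracy = sum(a == y for a, y in zip(approved, labels)) / len(labels)
            rates = {}
            for g in set(groups):
                decisions = [a for a, grp in zip(approved, groups) if grp == g]
                rates[g] = sum(decisions) / len(decisions)
            results.append((t, accuracy, max(rates.values()) - min(rates.values())))
        return results

    # Toy data: model scores, true eligibility, and group membership
    scores = [0.9, 0.7, 0.6, 0.4, 0.8, 0.3]
    labels = [True, True, False, False, True, False]
    groups = ["x", "x", "x", "y", "y", "y"]
    for t, acc, gap in sweep_thresholds(scores, labels, groups, [0.5, 0.65, 0.75]):
        print(f"threshold={t:.2f} accuracy={acc:.2f} approval_gap={gap:.2f}")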

