
Addressing AI and implicit bias in healthcare

Bias is a basic aspect of human nature, often operating so subtly that we may not be aware it is shaping our thoughts, actions, and transactions, and it can exacerbate inequities when clinicians make decisions. The National Institutes of Health (NIH) defines implicit bias as “a form of bias that occurs automatically and unintentionally, that nevertheless affects judgements, decisions, and behaviours”. A sector as important as healthcare cannot function impartially if biases are at work. Before we turn our attention to how the AI (Artificial Intelligence) used in healthcare can be rid of bias, we must understand implicit bias in healthcare and its effect on algorithmic decision-making.

AI has wide applications in healthcare. Under the AI umbrella sits a collection of technologies aimed at reducing human effort and building the intelligence of healthcare systems. One of the key AI technologies for doing this is ML (machine learning). An essential step for any ML model is to ‘learn’ from specific datasets, a step critical in ensuring AI tools do not exacerbate bias in clinical decisions. If a machine must make decisions related to diagnosis, identify target groups, or predict the prevalence of a disease or its symptoms across demographics, it must first be trained on millions of data records about the aspect being studied. It follows that the training datasets must be bias-free to prevent algorithmic bias from influencing clinician decisions. Is that the case in real-world scenarios? Let us look at some healthcare scenarios to understand how bias creeps in.
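One practical starting point is to audit the composition of the training data before any modelling begins. The sketch below is a minimal illustration of such an audit, assuming a hypothetical tabular dataset with `sex` and `age_group` columns; the column names, records, and the 20% cut-off are purely illustrative and not taken from any specific system.

```python
import pandas as pd

# Hypothetical training records; in practice this would be the metadata
# accompanying images, lab results, or clinical notes.
records = pd.DataFrame({
    "sex": ["M", "M", "M", "M", "F", "M", "F", "M"],
    "age_group": ["adult", "adult", "adult", "senior",
                  "adult", "adult", "senior", "child"],
})

def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of training records contributed by each group in `column`."""
    return df[column].value_counts(normalize=True).sort_values()

for col in ["sex", "age_group"]:
    shares = representation_report(records, col)
    print(f"\nRepresentation by {col}:")
    print(shares.to_string())
    # Flag any group contributing less than 20% of the data (illustrative cutoff).
    for group, share in shares.items():
        if share < 0.20:
            print(f"  WARNING: '{group}' is under-represented ({share:.0%})")
```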

Computer-Aided Diagnosis (CAD) has been used extensively for image and X-ray analysis. If most of the X-rays in the training data come from men, then other groups, such as women and children, are under-represented. This could lead to inaccurate diagnoses by CAD systems for those groups.

ML has been extensively used in breast cancer detection. One of the major limitations has been the availability and use of inclusive datasets for training. There have been reports that some online breast cancer prediction tools calculate a lower risk for African American and Latina women than for Caucasian women. This can result in lower screening rates and delayed detection of breast cancer.
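One way such underestimation shows up is when a group’s average predicted risk sits well below that group’s observed rate of diagnosis. The sketch below compares the two per group; the patients, group labels, and risk scores are entirely hypothetical and stand in for whatever risk tool is being evaluated.

```python
import pandas as pd

# Hypothetical evaluation set: each row is a patient with the risk score the
# tool produced and whether breast cancer was actually diagnosed later.
eval_df = pd.DataFrame({
    "group": ["White", "White", "White", "Black", "Black", "Black",
              "Latina", "Latina", "Latina"],
    "predicted_risk": [0.30, 0.25, 0.40, 0.10, 0.12, 0.15, 0.11, 0.14, 0.09],
    "diagnosed":      [1,    0,    1,    1,    0,    1,    1,    0,    0],
})

# Compare mean predicted risk with the observed diagnosis rate per group.
summary = eval_df.groupby("group").agg(
    mean_predicted_risk=("predicted_risk", "mean"),
    observed_rate=("diagnosed", "mean"),
)
summary["underestimation"] = summary["observed_rate"] - summary["mean_predicted_risk"]
print(summary)
# A large positive 'underestimation' for one group suggests the tool is
# systematically scoring that group as lower risk than its outcomes warrant.
```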

Another example is the use of AI systems to detect skin cancers by identifying skin lesions. If the AI system has been trained only on data from people with light skin colour, how can it be accurate in detecting skin lesions in dark-skinned individuals? The system has not been trained to recognise the relevant contrast on darker skin, and there is a high probability of it missing lesions in dark-skinned individuals.
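A disparity like this becomes visible in a per-group sensitivity (recall) check: how often lesions that are actually present get flagged, broken down by skin tone. The sketch below assumes hypothetical labelled outputs from such a detector; the skin-tone categories and counts are illustrative only.

```python
# Hypothetical detector outputs: (skin_tone, lesion_present, detector_flagged)
results = [
    ("light", True, True), ("light", True, True), ("light", True, True),
    ("light", True, False), ("light", False, False),
    ("dark",  True, True), ("dark",  True, False), ("dark",  True, False),
    ("dark",  True, False), ("dark",  False, False),
]

def sensitivity(rows, tone):
    """Recall for one skin-tone group: detected lesions / actual lesions."""
    positives = [flagged for t, present, flagged in rows if t == tone and present]
    return sum(positives) / len(positives) if positives else float("nan")

for tone in ("light", "dark"):
    print(f"Sensitivity on {tone} skin: {sensitivity(results, tone):.0%}")
# A detector trained mostly on light skin will typically show a sharp drop in
# sensitivity for the dark-skin group, i.e. missed lesions.
```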

Jordon Crowley’s case is a classic example of implicit racial bias in healthcare. A biracial American boy who needed a kidney transplant, Jordon had one Black grandparent and three white grandparents. The doctors who examined him deemed him racially Black. The widely used equations for eGFR (estimated glomerular filtration rate) apply a race adjustment that reports higher kidney function for patients recorded as Black: Jordon’s eGFR worked out to around 17 if calculated as white but 21 if calculated as Black. Because his doctors classified him as Black, the higher figure kept him above the cut-off of roughly 20 at which patients typically qualify for the kidney-transplant waitlist, delaying his chance at a transplant.
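The mechanics are easy to see in code. The sketch below uses the IDMS-traceable MDRD study equation, one of the common race-adjusted eGFR formulas (now being replaced by race-free equations); the creatinine, age, and sex values are hypothetical, not Jordon’s actual labs, and are chosen only to show how the 1.212 multiplier for patients recorded as Black can push an estimate across the waitlist threshold.

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: int,
              is_female: bool, recorded_as_black: bool) -> float:
    """IDMS-traceable MDRD study equation (mL/min/1.73 m^2)."""
    egfr = 175.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if is_female:
        egfr *= 0.742
    if recorded_as_black:
        egfr *= 1.212  # race coefficient: reports higher kidney function
    return egfr

# Hypothetical patient values, chosen only for illustration.
creatinine, age, female = 4.5, 18, False
for black in (False, True):
    value = egfr_mdrd(creatinine, age, female, black)
    eligible = value <= 20  # typical eGFR cut-off for transplant waitlisting
    label = "Black" if black else "white"
    print(f"Recorded as {label}: eGFR = {value:.1f} -> waitlist eligible: {eligible}")
```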

In the field of AI-assisted disease diagnosis, AI models utilise audio data to detect conditions like Alzheimer's. However, it is crucial to recognise that if these models are trained on a limited range of accents, they may produce biased outcomes. In one instance, an AI algorithm developed in Canada was trained solely on Canadian English speakers, adversely affecting individuals in the country who speak English with other accents.


How do implicit biases seep into the systems used in healthcare?

Until now, the algorithms used extensively in healthcare have been created by humans, though that is bound to change rapidly with the emergence and spread of new AI systems and applications. The people creating these algorithms may unwittingly let their own biases into the system. For example, the training datasets created for use by CAD could carry biases related to gender, race, skin colour, age, or weight.

While the prevalence of bias has been widespread, there has been a continuous effort to mitigate it. Many tools, some of them open source, have been developed to minimise bias. Examples include the What-If Tool and TCAV (Testing with Concept Activation Vectors) from Google for detecting bias in ML models, AI Fairness 360 from IBM, and Skater from Oracle.
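These tools differ in scope, but at their core they compute fairness metrics over a model’s outputs. As a rough illustration, the snippet below computes two common metrics, statistical parity difference and disparate impact, by hand; the group labels and predictions are made up, and real toolkits such as AI Fairness 360 wrap this kind of calculation in a much richer API.

```python
# Hypothetical model outputs: 1 = model recommends extra care / screening.
privileged   = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. the majority group
unprivileged = [0, 1, 0, 0, 1, 0, 0, 1]   # e.g. an under-represented group

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

p_priv = positive_rate(privileged)
p_unpriv = positive_rate(unprivileged)

# Statistical parity difference: 0 means both groups receive the positive
# outcome at the same rate; negative values disadvantage the unprivileged group.
spd = p_unpriv - p_priv

# Disparate impact: ratio of positive rates; values far below 1.0
# (a common rule of thumb is < 0.8) indicate potential bias.
di = p_unpriv / p_priv

print(f"Positive rate (privileged):   {p_priv:.2f}")
print(f"Positive rate (unprivileged): {p_unpriv:.2f}")
print(f"Statistical parity difference: {spd:+.2f}")
print(f"Disparate impact ratio:        {di:.2f}")
```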

Understanding why and how biases creep into AI, increasing transparency while building AI models, rigorously testing the models that are built, using synthetic datasets instead of real data where applicable, and instituting standard frameworks before deployment could go a long way towards building bias-free AI.


