
Bias, consent, and intent - the ethical dilemmas around AI

Did you know that if you are a Mac user, you might end up paying more for hotel rooms?

Or that the activity data from your Fitbit could serve as compelling legal evidence?

This is the dark side of the Artificial Intelligence era.

As AI integrates into the very fabric of our lives, it is giving rise to unique ethical dilemmas that warrant deep thought. An analogy used by ethics and security expert Patrick Lin comes to mind. Suppose you are speeding down a crowded expressway in your self-driving car when a massive crate falls off the truck ahead of you. If you don’t swerve, you may be killed. If you do swerve, you will hit a biker on either side. If you were driving yourself, your action would be driven by instinct and muscle memory. But how do you code that response into a self-driving car? How can a machine be trained to react instinctively, the way we do? Simple decision-making principles certainly do not apply here. What if the machine decides to swerve into one of the bikers to avoid the head-on collision? Decisions made by a machine can often be at loggerheads with the moral standards of a society. Take, for example, Microsoft’s AI chatbot Tay, which soon after its launch in 2016 started spewing racist remarks, sending users and the company into a panicked frenzy.

According to a 2018 Deloitte survey of 1,400 AI professionals in the US, 32% of respondents ranked ethical issues as one of the top three risks of AI. However, most organizations, countries, and governments are not yet cognizant of the approaches they can adopt to deal with AI ethics. We are so enamored with AI and its potential to learn that we sometimes neglect to guide the algorithm to learn right.


Bias is real

Artificial intelligence is just that: artificial. The decisions made by such an algorithm are heavily influenced by the way it is coded and the kind of data it is exposed to, whether through training or via user feedback. Bias from the creator or the users, deliberate or inadvertent, can quickly seep into its functioning, damaging outcomes and harming stakeholders across society. In 2015, for example, Amazon was forced to scrap its AI recruiting tool after it showed bias against women candidates. Similarly, Twitter’s image-cropping algorithm went viral last year after users discovered that it favoured white faces over darker-skinned ones. The root cause of such biases can usually be traced back to residual historical bias in the primary training data set.


Isolating and rectifying that bias and adjusting the algorithm is a challenge, one that is further compounded by the speed of computing. The time is ripe to examine and take responsibility for how we approach new-age technology. How are we designing technology that keeps the greater good in mind? How do we discourage products and services that use data to “hook” people, even when it isn’t valuable to the user?


It all starts with data

They say that if you are using something for free, you aren’t the customer; you, or rather your data, are the product. That is certainly true today. All of us leave behind data trails we are not even aware of. This data has immense value to companies, which pay millions of dollars to understand who you are, your preferences, your spending capacity, and what you are willing to pay for. They use this data to create personalized offers and recommendations that drive revenue and engagement. Personal data, in short, is a gold mine.

Jeff Hammerbacher, Co-founder and Chief Scientist of Cloudera, once pointed out that brands have so many avenues at their disposal to track and monitor everything we do that we can no longer tell who is tracking us, what information they are getting, and how they are using it. Protecting personal data is becoming a battlefield of its own, and the ownership of and access to data is an ongoing conflict.

According to Harvard Professor Dustin Tingley, the key questions of data ethics are: ‘Is this the right thing to do?’ and ‘Can we do better?’ A good framework rests on five principles:

Ownership: People own their data.
Transparency: The data subject has the right to know how their data will be collected, stored, and used.
Privacy: Data privacy must be ensured.
Intent: What is the intent behind using the data? How will it benefit you? Does it benefit the data subject, or is it only for your gain? Does it harm the data subject in any way?
Outcomes: Are the outcomes causing any inadvertent harm to the data subject?
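As a purely illustrative sketch (not part of Tingley’s framework), a team could turn these five principles into a simple pre-use checklist for any data set. The field names, questions, and the review function below are assumptions made for the example.

```python
# A hypothetical pre-use data ethics checklist built around the five principles
# above. The wording of the questions and the review logic are illustrative only.
DATA_ETHICS_CHECKLIST = {
    "ownership": "Do the data subjects own their data, and have they granted its use?",
    "transparency": "Do subjects know how their data will be collected, stored, and used?",
    "privacy": "Is data privacy ensured (minimization, access control, retention limits)?",
    "intent": "Does the intended use benefit the data subject, or only the collector?",
    "outcomes": "Could the outcomes cause any inadvertent harm to the data subject?",
}

def review_dataset(answers: dict) -> list:
    """Return the principles that are not yet signed off (answered False or missing)."""
    return [principle for principle in DATA_ETHICS_CHECKLIST if not answers.get(principle, False)]

# Example: a review where intent and outcomes have not yet been cleared.
open_items = review_dataset({"ownership": True, "transparency": True, "privacy": True})
print("Unresolved principles:", open_items)
```

The point of such a sketch is only that each principle becomes an explicit, recorded sign-off rather than an implicit assumption.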


An important aspect of upholding these principles is consent. Do you have express, informed, and recent consent from the data subject to use their information? Obtaining this consent is easier in industries like banking and healthcare, where there is an established communication mechanism with consumers, but it becomes fuzzy in cases like social media data. While regulations require companies to obtain consent from consumers, complex legal agreements render the whole exercise pointless. Don’t we all just scroll to the end and click “Agree,” signing away the rights to our data? Does that make it right for companies to then say we signed up for it? Can we make it simpler for consumers to understand what they are sharing? Can we set up data exchanges that give customers the power to monetize their data and cut out the middlemen?

Organizations everywhere, from Google and BMW to UNESCO (specifically to address gender bias in AI) and the Government of Canada, are defining their own ethical frameworks for AI. Perhaps not surprisingly, most of them include obvious, commonly acknowledged, broad-based parameters such as environmental sustainability, human intelligence, equality, inclusivity, and non-discrimination. But do these high-level guiding principles give developers and data scientists the guardrails they need to write ethical, unbiased code in their models? Unfortunately, the answer is no.


Defined metrics

The answer probably lies in a sustained, ongoing effort to define frameworks for AI ethics, one that is focused on deployment and has metrics at its core. This effort needs to be measured and monitored by a cross-functional team of experts drawn not just from technology but also from risk, legal, and data science, together with an independent watchdog.

However, there will never be one single metric that can serve as an indicator of AI fairness. What we need is a combination of many such metrics. For example, to ensure minimal bias in lending processes and algorithms, one must refer to the extensive case law and judgments under US credit, housing, and employment law. Some organizations monitor metrics like the adverse impact ratio, marginal effect, or standardized mean difference to quantify discrimination in highly regulated fair-lending environments.
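To make this concrete, here is a minimal sketch of how two of these metrics, the adverse impact ratio and the standardized mean difference, could be computed for a lending decision data set. The sample data, group labels, and the 0.80 review threshold (the “four-fifths rule” often cited in US fair-lending and employment contexts) are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative computation of two fairness metrics mentioned above.
# All data, group labels, and thresholds here are assumptions for the example.
import numpy as np

def adverse_impact_ratio(approved, group, protected, reference):
    """Ratio of approval rates: protected group vs. reference group."""
    approved = np.asarray(approved, dtype=float)
    group = np.asarray(group)
    rate_protected = approved[group == protected].mean()
    rate_reference = approved[group == reference].mean()
    return rate_protected / rate_reference

def standardized_mean_difference(score, group, protected, reference):
    """Difference in mean scores, scaled by the pooled standard deviation."""
    score = np.asarray(score, dtype=float)
    group = np.asarray(group)
    s_protected = score[group == protected]
    s_reference = score[group == reference]
    pooled_sd = np.sqrt((s_protected.var(ddof=1) + s_reference.var(ddof=1)) / 2)
    return (s_reference.mean() - s_protected.mean()) / pooled_sd

# Hypothetical lending decisions: 1 = loan approved, 0 = declined.
approved = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
scores   = [700, 640, 710, 690, 630, 720, 680, 700, 610, 600, 690, 620]

air = adverse_impact_ratio(approved, group, protected="B", reference="A")
smd = standardized_mean_difference(scores, group, protected="B", reference="A")
print(f"Adverse impact ratio: {air:.2f} (values below 0.80 often trigger review)")
print(f"Standardized mean difference: {smd:.2f}")
```

In this toy example the adverse impact ratio falls below 0.80, which is the kind of signal a monitoring team would flag for investigation; no single number, however, proves or disproves that a lending model is fair.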

Having said that, can we assume that metrics will be foolproof and ensure a minimal-bias AI system? Not really. There will always be facets of algorithmic decision-making that are difficult to quantify. However, this shouldn’t deter organizations from undertaking this resource-intensive, seemingly daunting exercise, because the alternative is not an option. Waiting to see the ill effects of AI and only then taking corrective action will have a catastrophic impact on a company’s client base, its market reputation, and society at large.

Tell us how you are building and deploying ethical AI. We would love to know more about your journey. Write to me at sourav.chowdhury02@infosys.com.

