
Is AI ready to make unsupervised decisions?

Artificial intelligence (AI) has progressed to the point where it can compete with the best human minds in many areas, often with remarkable speed, quality and accuracy. However, it remains to be seen whether AI can make decisions when emotions come into play. For example, can AI make decisions that take empathy into account?

AI models are designed to help with decision-making when humans cannot handle all the data, variables and parameters involved in managing a situation. However, when intangible human emotions are involved, AI still flounders. AI is driven by algorithms that respond to data and models, not morality and ethics. While its decisions may be technically correct, they can sometimes spell trouble for an individual or a business.

Consider a few situations


If banks relied entirely on algorithms to decide whether a customer is eligible for a loan or a credit-limit increase, AI models would qualify only those customers who present almost zero risk. However, a customer's value may be greater than what the AI model can assess. An AI model would not pick a customer who carries a calculated risk but promises higher returns over one with minimal risk. Only a human involved in the process can make that judgement.
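One way to keep a human in that judgement is to have the algorithm screen only the clear-cut cases and route borderline, calculated-risk applicants to a reviewer. The sketch below illustrates the idea; the function name, fields and thresholds are all hypothetical, not any bank's actual policy.

```python
# Hypothetical sketch: a rule-based loan screen that routes borderline
# (calculated-risk) applicants to a human reviewer instead of
# auto-rejecting them. All names and thresholds are illustrative.

def screen_application(credit_score: int, projected_return: float) -> str:
    """Return 'approve', 'reject', or 'human_review'."""
    if credit_score >= 750:      # near-zero risk: the algorithm can approve
        return "approve"
    if credit_score < 550:       # clearly unacceptable risk
        return "reject"
    # Calculated risk with high potential value: defer to human judgement
    if projected_return > 0.15:
        return "human_review"
    return "reject"

print(screen_application(780, 0.05))  # approve
print(screen_application(650, 0.20))  # human_review
```

The point of the middle band is exactly the scenario described above: a customer the model would otherwise reject, but whom a human might reasonably approve.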


Technology can now create text that closely resembles human writing. Transformer-based language models can independently produce blogs, articles, short stories, news reports, songs and much more. While such content can be very useful in marketing, chatbots, translation and sales responses, among other tasks, it is doubtful whether AI tools can independently decide what people want to read, or whether the content they produce will be unbiased and of the quality a skilled human would deliver.


AI now makes recommendations about almost everything, and AI-driven social media influencers are becoming prominent too. If AI models start making political recommendations, the impact on public policy could be significant.

AI gone wrong

AI suggestions and solutions can sometimes be very wrong. Here are a few unnerving examples that also call into question whether AI has advanced as much as it is believed to have.

Self-driving car accident:

During a real-world test in Tempe, Arizona, a self-driving car did not stop when a pedestrian pushing a bicycle crossed a four-lane road. The AI model did not recognise the jaywalking pedestrian, who was not near a marked crosswalk. The pedestrian died, which brought home the rather shocking point that the AI model had not been designed as well as expected. The human backup driver, who was watching a streaming video, did not see the pedestrian either. A human driver would probably have stopped or swerved to avoid the pedestrian.

Biased recruiting:

An AI tool trained to search for top talent did pick strong candidates, but mostly men, since the data it was trained on was largely about male candidates. The model gave low scores to female candidates even when their qualifications and abilities matched those of the men. The tool was eventually abandoned.
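Skew like this can often be caught before training by comparing selection rates across groups in the labelled data. The sketch below computes a simple disparate-impact ratio; the data, group labels and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not the actual tool's pipeline.

```python
# Hypothetical sketch: checking labelled training data for group skew
# before it is fed to a model. Records are (group, selected) pairs.

from collections import defaultdict

def selection_rates(records):
    """Map each group to its fraction of positively labelled records."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
ratio = disparate_impact_ratio(data)
print(round(ratio, 2))  # 0.5 -- below 0.8, so the data needs rebalancing
```

A ratio well below 1.0 signals that a model trained on this data is likely to reproduce the imbalance, which is precisely what happened in the recruiting example.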

Learning disaster:

An AI-driven chatbot trained to operate without any human intervention drew unwanted attention when it began learning offensive language and making derogatory remarks on the chat platform. It was supposed to learn from its interactions with humans, but it picked up wrong facts and abusive language. This chatbot, too, was quickly withdrawn. So much for unsupervised decisions.

Troublesome advice:

An experimental healthcare chatbot designed to reduce doctors' workloads stirred up trouble when it advised a patient to commit suicide. It was another AI tool that could not be trusted to work unsupervised: it had been trained on data that was not cleaned properly, leading to dangerously unhelpful medical advice.

What should business and technology leaders do?

AI-driven decisions can have both positive and negative impacts on businesses and on society, and frequent accidents will only make people wary of AI. It is clear that AI-driven decisions require some degree of human involvement. Beyond testing all AI algorithms thoroughly under different conditions, technology and business leaders must ensure that AI systems are fitted with the checks and balances needed to keep their decisions moral and ethical.

  • Promote ethics in AI decisions: Business leaders must ensure that the people creating AI systems are educated about ethics, fairness and morality so that the functions they build into the systems reflect the right standards.

  • Ensure training data is clean, representative and unbiased: To prevent AI-driven decisions from being biased, the data fed into the systems must be analysed, cleaned and made free of bias. Data sources must be authenticated by data scientists before use, and AI systems must be supervised during the learning phase rather than left to learn entirely on their own.

  • Ensure humans are in the loop: To prevent wrong decisions from being delivered, AI systems must allow humans to override decisions at any time. There are ample examples of situations where humans have had to intervene to prevent erroneous AI-driven decisions.

  • Teach machines human values: Since AI reflects the data and programming fed into it, efforts must be made to improve AI systems so that they mimic human values as closely as possible. Leaders must accept that data-driven insights cannot be the only factor in the decision-making process; the systems must be humanised to some degree.
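The human-in-the-loop principle above is often implemented as a confidence gate: the model acts autonomously only when it is sure, and everything else is escalated to a person. The sketch below is a minimal illustration; the function names, model interface and the 0.9 threshold are assumptions for the example, not a prescribed design.

```python
# Hypothetical sketch: a human-in-the-loop wrapper that lets the model
# act on its own only above a confidence threshold; every other case
# is escalated to a human reviewer who can override the model.

def decide(model_label: str, model_confidence: float,
           threshold: float = 0.9):
    """Return (decision, decided_by) for a single model output."""
    if model_confidence >= threshold:
        return model_label, "ai"       # confident: AI decides
    return "escalate", "human"         # uncertain: a human decides

print(decide("approve", 0.97))  # ('approve', 'ai')
print(decide("approve", 0.60))  # ('escalate', 'human')
```

Tuning the threshold is itself a leadership decision: a higher value sends more cases to humans, trading throughput for safety.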

The big picture

The biggest advances in AI are largely seen in the virtual world, where it can manipulate media content. In the real world, however, AI still has a long way to go. It may make the right fact-based decisions, but when subjective reasoning is required, humans must be involved. Since AI is here to stay, it is up to business and technology leaders to ensure that AI systems are built on clean, unbiased data so that AI-driven decisions are above reproach. Where AI systems are allowed to make unsupervised decisions, such as in repetitive tasks, leaders must ensure those areas are clearly defined.
