Business Transformation

How AI can be detrimental to our social fabric

The fear that smart machines might replace human workers has been around for years, and it has already become a reality to some extent in industries that depend on repetitive work. Beyond the economic loss, the loss of work could lead to many unwanted situations: data could be abused, humans could lose control over it, the socio-economic balance could be upset, and new kinds of crime could emerge. Despite these doomsday scenarios, experts agree that AI can help solve many global challenges. Global leaders must come together and set up checks and balances so that AI cannot cause malicious upheavals.

Way back in 2017, at a conference in Lisbon, physicist Stephen Hawking warned the world about the dangers of artificial intelligence (AI). While he talked about the potential of AI to solve world problems, he admitted that the future was uncertain. He said, “We cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.” He stressed the need to be aware of the risks and to employ best practices and effective management so that society could be prepared for all consequences. Several business leaders and thinkers have expressed similar unease. So how detrimental can AI get? Nobody is quite sure. However, a few fears stand out, especially about what AI could do to our social fabric.

  1. Job automation leading to high unemployment rates:

     Most automatable work, largely repetitive tasks, can be done by AI. The disruption has already started: dependence on AI-powered machines is a reality, and lower-wage jobs are at the highest risk. Medicine, law, and accounting are also likely to be affected. While AI is expected to increase efficiency and save time, the loss of jobs will have a huge impact on the social fabric.

  2. AI bias and rise in socio-economic inequality:

     The rise in unemployment will naturally create socio-economic inequality, but AI can introduce other kinds of bias too. Many senior technologists have expressed the fear that since humans develop AI, human biases will creep into it. We live in mostly homogeneous societies, and it is difficult to think beyond certain boundaries. Creating algorithms that can tackle worldwide issues will require an understanding of social dynamics beyond one’s own geographical limits.

  3. Abuse of data and loss of control:

     Abuse of data is a real possibility, since both data and AI tools are in the hands of a select few. Unless values and ethics are built into digital systems, decisions made by AI algorithms will favour only selected parties. Privacy and power are being handed over to digital tools over which very few people have any control.

  4. Privacy, security and deepfakes:

     Malicious AI could threaten digital, physical, and political security. For example, people could train machines to hack victims in various ways, weaponize consumer drones, automate smear campaigns for political ends, or implement privacy-eliminating surveillance programmes.

     Audio and video deepfakes are already gaining ground. Video deepfakes are being used successfully in advertising, making it difficult to differentiate between the real and the virtual. Social media personalities could easily be engineered to influence sections of society. Similarly, audio clips of influential people can be manipulated and misused.

  5. Financial instability:

     The next trigger for a financial crisis could be algorithmic trading, in which computers execute trades based on pre-programmed instructions. Unencumbered by emotions or human instincts, computers could create financial instability by making high-value, high-volume and high-frequency trades.

  6. Impact on cognitive and social skills:

     While AI is expected to augment human capacities, there is fear that increasing dependence on machine-driven networks will reduce our capacity for independent thinking, our social skills, and our ability to make decisions without an automated system. Traditional socio-political frameworks are likely to be disturbed. There could also be growth in autonomous military applications, and the use of weaponised information and propaganda that could destabilise humanity.
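The algorithmic-trading risk above can be made concrete with a toy sketch: a rule-based trader that reacts mechanically to price moves. The function name and thresholds here are hypothetical, not any real trading system; the point is that many bots running the same simple rule can reinforce one another, selling into a falling market and amplifying the very instability the text warns about.

```python
def trade_decision(prices, move_threshold=0.05):
    """Toy pre-programmed trading rule (illustrative only).

    Sells if the latest price fell more than move_threshold relative to
    the prior price, buys on an equally large rise, otherwise holds.
    A crowd of machines applying the same rule can cascade: each sale
    pushes the price lower, triggering yet more automated sales.
    """
    if len(prices) < 2:
        return "hold"
    change = (prices[-1] - prices[-2]) / prices[-2]
    if change <= -move_threshold:
        return "sell"
    if change >= move_threshold:
        return "buy"
    return "hold"


print(trade_decision([100.0, 94.0]))   # a 6% drop triggers "sell"
print(trade_decision([100.0, 101.0]))  # a 1% move is within tolerance: "hold"
```

No human judgment intervenes anywhere in this loop, which is precisely why high-frequency versions of such rules, running at scale, are feared as a trigger for instability.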

Reducing the risks of AI

Well-thought-out regulations can prevent harmful uses of AI in the future. While research cannot be stopped, regulations can control how harmful AI is implemented. Human relevance must not be compromised in the pursuit of programmed intelligence.

