Managing the risks of biased visual artificial intelligence systems
While Artificial Intelligence (AI) is being hailed as a data-driven approach to decision-making in business operations, the hard fact is that bias is inherent in AI systems. AI training datasets are peppered with human bias. Such bias may not be deliberate, but it can be hard to avoid and can creep in through data collection and measurement techniques. For instance, if the training data for an AI-driven recruitment system were to include far more men than women, the application would be unlikely to produce gender-neutral candidate ratings. Beyond collecting a broad spectrum of bias-free data, the algorithms themselves need to be accurate and tailored to the task at hand. When algorithms behave unexpectedly or are “programmed” to perform a certain way, their results can be skewed. Bias can also enter when results from AI applications are reported, especially where a simple “yes” or “no” does not suffice to make decisions.
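The recruitment example above can be made concrete with a simple representation check. The sketch below is illustrative only — the record structure, field name, and sample values are hypothetical, and a real audit would look at many attributes beyond a single field:

```python
from collections import Counter

# Hypothetical training records for a recruitment model;
# the "gender" field and its values are illustrative only.
training_records = [
    {"candidate_id": 1, "gender": "male"},
    {"candidate_id": 2, "gender": "male"},
    {"candidate_id": 3, "gender": "male"},
    {"candidate_id": 4, "gender": "female"},
]

def representation_ratios(records, field):
    """Return each group's share of the dataset for a given field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

ratios = representation_ratios(training_records, "gender")
# A heavily skewed split (here 75% vs 25%) is a warning sign that the
# model may learn gender-correlated patterns rather than merit.
# The 0.2 threshold is an arbitrary illustrative cut-off.
skewed = max(ratios.values()) - min(ratios.values()) > 0.2
```

A check like this catches only raw representation gaps; proxy variables (postcodes, schools, hobbies) can still encode the same bias even when headline counts look balanced.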
One of the major applications of AI is in Computer Vision (CV) modelling. AI can be used to interpret visual data such as pictures and videos to create automated systems that mimic, or come close to mimicking, human vision. The applications of CV are diverse across industries – Augmented Reality (AR) applications, healthcare, and security surveillance, to name a few. The global computer vision market is projected to grow into a $41.11 billion industry by 2030. When these systems are deployed at scale, any implicit bias can result in discriminatory practices and severe consequences for society. A lack of awareness and regulation of bias in AI systems can reduce the potential applications of AI in business due to trust issues and inaccurate results.
The risks of bias in visual AI systems
Deep learning algorithms that use improper training datasets may result in biased gender associations. Ultimately, all AI applications and algorithms are built by humans, who determine the rules, variables and datasets used for making decisions. AI training datasets are generated from massive amounts of data, and biased gender associations can arise when stereotypical perceptions make their way into training datasets or algorithms. These can go on to create gender inequality, lower the quality of services, reinforce harmful prejudices, and limit opportunities.
Inaccurate computer vision models can also result in discrimination based on race or ethnicity. Disadvantaged communities may continue to be marginalised due to the lack of historical data for these populations. When such data is applied to AI-driven recruitment systems, these models may reject candidates and rob them of job opportunities, leading to a vicious cycle. Poor facial recognition systems, whether due to a lack of data or inaccurate algorithms, have even resulted in racial profiling, which is already a serious societal concern. There are privacy and legal concerns about public and private enterprises using such data, and about the extent to which it can be used.
Mitigating the risks of bias in AI
The complexity increases in multi-modal AI, when language and computer vision modelling come together. Using pre-trained AI computer vision models in public settings such as hospitals or law enforcement, or in private enterprises for recruitment or education requires that such systems be transparent and well-regulated.
Mitigating the risks of AI in applications requires a multi-pronged approach that combines policy, social, and technical strategies. Governments and enterprises need to prescribe standards and regulations for using AI in systems and applications across public and private enterprises. Data collection methods and model improvement processes need to be standardised. For instance, the AI Risk Management Framework (AI RMF) proposed by the National Institute of Standards and Technology (NIST) of the US Department of Commerce is a step towards the advancement of trustworthy AI.
Institutions and enterprises that deploy AI need to be accountable. This can be done through third-party audits and transparent incident reporting systems. Stakeholders need to participate in developing ethical AI policies and CV modelling. Metrics need to be established to measure the extent of bias in AI-driven systems so that corrective actions can be taken and the models improved. By adopting a “human-in-the-loop” strategy, enterprises can involve people to understand the effectiveness of AI, as well as improve the human decision-making process.
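One widely used screening metric of the kind called for above is the disparate impact ratio, drawn from the “four-fifths rule” in US employment guidelines: the selection rate for a protected group divided by that of a reference group, with values below 0.8 treated as a signal of adverse impact. A minimal sketch with purely illustrative data:

```python
def selection_rate(groups, outcomes, group):
    """Fraction of candidates in `group` with a positive outcome (1)."""
    pairs = [(g, y) for g, y in zip(groups, outcomes) if g == group]
    return sum(y for _, y in pairs) / len(pairs) if pairs else 0.0

def disparate_impact_ratio(groups, outcomes, protected, reference):
    """Selection-rate ratio between two groups; the 'four-fifths rule'
    treats values below 0.8 as a signal of adverse impact."""
    return (selection_rate(groups, outcomes, protected)
            / selection_rate(groups, outcomes, reference))

# Illustrative recruitment outcomes (1 = shortlisted), not real data:
# group A is shortlisted 8 times out of 10, group B only 4 out of 10.
groups = ["A"] * 10 + ["B"] * 10
outcomes = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6

ratio = disparate_impact_ratio(groups, outcomes, protected="B", reference="A")
# ratio = 0.4 / 0.8 = 0.5, below the 0.8 threshold, so flag for review
flagged = ratio < 0.8
```

A single ratio is only a tripwire, not a verdict: a flagged value should trigger the human-in-the-loop review described above, since fairness metrics can conflict with one another and none captures context on its own.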
Businesses and governments need to collaborate to drive ethical AI policies and stay up to date with the latest advancements in AI. They need to establish technical and operational policies to mitigate the risks of bias in visual AI at an organisation level. Government regulations and standards can help with third-party audits, and in creating responsible AI applications that adhere to the prescribed rules. Investing in data management, education and social enterprises would help to build a diverse data pool.
AI has the potential to transform our lives completely. Adopting a holistic approach to mitigate the risks of AI systems would lead to huge benefits for business, society and the economy.