AI ethics: 4 things CIOs need to know

AI adoption is taking off, but ethical concerns persist. Consider these tips to help reduce or eliminate bias in your data sets

Problems with artificial intelligence (AI) and ethics have been well publicized over the past few years. As AI becomes more pervasive, CIOs must be cognizant of ethical issues and look for ways to eliminate or reduce bias.

The source of the problem is the data sets algorithms consume to inform decision-making. Too often, these data sets produce outcomes biased along lines of gender and race, in areas ranging from mortgage applications to healthcare and criminal sentencing. Therefore, more focus must be put on ensuring that the data sets utilized are fair, accurate, and free of bias.

So, what can CIOs do to ensure that the data they use meets these criteria? To build trust, it is vital to adopt a process-driven approach that ensures bias is not baked into your AI system.

[ Also read Artificial intelligence: 3 ways the pandemic accelerated its adoption. ]

Here are four recommendations to help ensure an ethical outcome when using AI.

1. Adopt a model

An excellent model to follow while deploying AI is the European Union’s ethical AI guidelines, which look at ways to remove bias. The draft recommendations state that “AI systems should be accountable, explainable, and unbiased.”

Another example is the NIST Artificial Intelligence Risk Management Framework (RMF), which is currently being finalized. It outlines the need for testing, evaluation, verification, and validation of every AI system.

Both frameworks aim to address risks in designing, developing, using, and evaluating AI products and services to ensure they are explainable and free from bias. In addition, once ratified, the European guidelines will carry penalties for non-compliance (much as GDPR does), which is essential for accountability.

2. Deep dive into training data

To achieve explainable AI, you must explore precisely how the data that trains the algorithm is sourced and used. This will not solve the bias problem on its own, but the visibility will help you understand the root cause of any issues so you can take steps to fix them.
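One lightweight way to get that visibility is to profile the training data by protected attribute before any model is built. Here is a minimal sketch in Python with pandas; the data frame and its "gender" and "approved" columns are illustrative assumptions, not any real system's schema:

```python
import pandas as pd

# Hypothetical training data -- the column names and values are
# illustrative assumptions, not taken from a real system.
df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "F", "M"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1],
})

# How well is each group represented in the training set?
print(df["gender"].value_counts(normalize=True))

# Does the favorable-outcome rate differ sharply between groups?
print(df.groupby("gender")["approved"].mean())
```

Even a check this simple can surface skewed representation or outcome gaps early, which is exactly the root-cause visibility described above.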

In addition to real data sets, synthetic data can be critical in addressing ethical concerns. If the actual data is biased and unfair toward specific groups of people, synthetic data can be generated to remove those biases. If the volume of data is insufficient, synthetic data can augment it to create a more balanced data set. And if the volume is there but the data is not diverse enough, synthetic data can ensure equal representation.
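As a sketch of that rebalancing step, the function below oversamples under-represented groups until every group matches the largest one. Sampling with replacement is a crude stand-in for a true synthetic data generator, and the function name and columns are hypothetical, but it shows where synthetic records would slot in:

```python
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample under-represented groups until all groups match the largest.

    Sampling with replacement stands in for a real synthetic data
    generator, but it illustrates the rebalancing step.
    """
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

# Hypothetical, deliberately skewed example: three "M" rows, one "F" row.
df = pd.DataFrame({"gender": ["F", "M", "M", "M"], "approved": [0, 1, 1, 0]})
balanced = balance_by_group(df, "gender")
print(balanced["gender"].value_counts())  # both groups now equally sized
```

In practice, a dedicated synthetic data tool would generate new, realistic records rather than duplicating existing ones, but the pipeline step is the same.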

[ Related read Defining an open source AI for the greater good ]

3. Adopt a tech tool

As bias and ethical concerns persist, new market entrants hope to help solve the problem by developing technology tools that evaluate whether an AI system can be trusted. Integrating one of these solutions provides a systematic way of ensuring that bias doesn't creep in, much as penetration testing is used to evaluate the ongoing security of systems.
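A sketch of what such a check might look like as an automated gate, analogous to a failing pen test: the function names here are hypothetical, and the 0.8 threshold follows the common "four-fifths" rule of thumb rather than any particular tool's default.

```python
def disparate_impact(group_rate: float, reference_rate: float) -> float:
    """Ratio of favorable-outcome rates between a group and the reference."""
    return group_rate / reference_rate

def assert_unbiased(rates_by_group: dict[str, float],
                    reference_group: str,
                    threshold: float = 0.8) -> None:
    """Fail the pipeline if any group falls below the threshold ratio."""
    reference_rate = rates_by_group[reference_group]
    for group, rate in rates_by_group.items():
        ratio = disparate_impact(rate, reference_rate)
        if ratio < threshold:
            raise AssertionError(
                f"Bias check failed: {group} ratio {ratio:.2f} < {threshold}"
            )

# Hypothetical per-group approval rates from a model under evaluation:
assert_unbiased({"M": 0.60, "F": 0.55}, reference_group="M")
print("Bias check passed")
```

Wired into a CI/CD pipeline, a failing check blocks a release the same way a failed pen test would.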

4. Train technical teams

Retraining is also required to address the digital skills gap within technical teams and to educate them on ethical AI, equipping individuals with the skills they need to put those principles into practice.

Trust in AI

Interest in AI will continue to grow: Gartner predicts that over the next five years, organizations will continue to adopt "cutting-edge techniques for smarter, reliable, responsible and environmentally sustainable artificial intelligence applications." To build trust in these systems, leaders must prioritize removing bias at the start of the product lifecycle. It's the responsibility of every CIO building or buying AI to ensure that their systems are trustworthy and free from bias.

[ Want to adopt best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Jonathon Wright is Chief Technology Evangelist, Software Automation at Keysight Technologies. He has over 25 years of experience in emerging technologies, innovation, and automation. In addition to his role, he holds a series of advisory positions (MIT, Harvard & EU) and is the author of several award-winning books.