Artificial Intelligence Can Now Explain Its Own Decision-Making

IBM's new open-source AI Fairness 360 toolkit claims both to check for and to mitigate bias in AI models, allowing an AI algorithm to explain its own decision-making. This collection of metrics may allow researchers and enterprise AI architects to cast the revealing light of transparency into "black box" AI algorithms.

Tasmin Lockwood
Illustration: © IoT For All

People are scared of the unknown. So naturally, one reason artificial intelligence (AI) hasn’t yet been widely adopted may be that the rationale behind a machine’s decision-making is still unknown.

The Black Box of AI

How can decisions be trusted when people don’t know where they come from? This is referred to as the black box of AI, and it’s something that needs to be cracked open. As technology plays an increasingly important role in day-to-day life and reshapes roles within the workforce, the ethics behind algorithms have become a hotly debated topic.

Medical practitioners are expected to be among the first to benefit greatly from AI and deep learning, which can rapidly scan images and analyze medical data. But those decision-making algorithms will only be trusted once people understand how their conclusions are reached.

Key thinkers warn that algorithms may reinforce programmers’ prejudice and bias, but IBM has a different view.

IBM claims to have made strides in breaking open the black box of AI with a software service that brings transparency to AI.

Making AI Algorithms More Transparent 

IBM is attempting to provide insight into how AI makes decisions, automatically detecting bias and explaining itself as decisions are being made. Its technology also suggests additional data to include in the model, which may help neutralize future bias.

IBM previously deployed AI to aid decision-making with IBM Watson, which provided clinicians with evidence-based treatment plans that incorporated automated care management and patient engagement into tailored plans.

Experts were quick to mistrust the model because it didn’t explain how its decisions were made. Watson aided in medical diagnosis and reinforced doctors’ decisions, but the much-hyped technology was never going to replace the doctor. When Watson’s analysis lined up with the doctor’s, it was used as reinforcement. When Watson differed, it was assumed to be wrong.

But the company’s latest innovation, which is currently unnamed, appears to tackle Watson’s shortfalls. Perhaps naming it Sherlock would be fitting.

Open-Source and Ethical AI

It’s important to increase transparency not just in decision-making but also in record-keeping, so that a model’s accuracy, performance, and fairness can be easily traced and recalled for customer service, regulatory, or compliance reasons, e.g. GDPR compliance.

Alongside the announcement of this AI, IBM Research also released an open-source AI bias detection and mitigation toolkit, bringing forward tools and resources to encourage global collaboration around addressing bias in AI.

This includes a collection of libraries, algorithms, and tutorials that will give academics, researchers, and data scientists the tools and resources they need to integrate bias detection into their machine learning models.
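
To make that concrete, here is a minimal sketch of bias detection with the toolkit’s Python package, aif360. The tiny loan-approval dataset and its column names are invented for illustration and are not from IBM’s documentation.

```python
# A minimal sketch of bias detection with the AI Fairness 360 package (aif360).
# The synthetic data and column names below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy loan-approval data: 'sex' is the protected attribute (1 = privileged group),
# 'approved' is the favorable outcome label.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 55, 40, 70, 58, 42, 65, 50],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
# A value near 0 suggests no group-level disparity in the training labels.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact ratio:", metric.disparate_impact())
```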

While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit claims to check for and mitigate bias in AI models.
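
Continuing the sketch above, the mitigation side can be illustrated with one of the toolkit’s pre-processing algorithms, Reweighing, which adjusts instance weights so the groups are balanced with respect to the favorable label. This is one possible workflow under the same illustrative assumptions, not IBM’s prescribed pipeline.

```python
# Continuing the sketch above: mitigate the detected bias with Reweighing,
# a pre-processing algorithm from aif360 that rebalances instance weights.
from aif360.algorithms.preprocessing import Reweighing

rw = Reweighing(
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
dataset_reweighted = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    dataset_reweighted,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# After reweighing, the weighted statistical parity difference should be near 0;
# the new instance weights can then be passed to a downstream classifier.
print("After reweighing:", metric_after.statistical_parity_difference())
```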

A diagram of how the AI Fairness 360 toolkit works.
Image Credit: IBM

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

— David Kenny, IBM’s SVP of Cognitive Solutions.

What could this mean for medical practitioners? The new technology may open up an array of implementation problems, as policy has yet to catch up with the tech. Who is liable for a wrong diagnosis: the doctor or the AI? After a proven track record of correct diagnoses, how does a person go against the software? How is a gut feeling justified?

Author
Tasmin Lockwood
Tasmin is a writer and journalist from Newcastle upon Tyne, UK, with a degree in Journalism combined with Sociology and a master's degree in News Journalism. She loves anything techy, from fibre in the ground to Amazon Alexa's privacy policy.