Today, artificial intelligence (AI) and machine learning (ML) are used in everything from the thermostats in our homes to cars that warn us when we’re steering off course and security systems that let us see who is at the front door when we’re not home.
Businesses are applying AI/ML to the data they have gathered over the years to create better products and services for their customers. The technology helped Scotiabank, for example, deliver better service during the COVID-19 pandemic: it powered a process that identified customers who may have been hit harder financially, allowing the Bank to reach out to those clients proactively and offer support.
However, with the increasing use of AI/ML come concerns around equity and explainability, including the risk that biases in data can lead to unintentionally discriminatory outcomes.
“With any new field there are a lot of challenges,” says Mona Balesh Abadi, Senior Manager, Data Ethics & Use Office at Scotiabank.
“When we are dealing with data and analytics, especially in creating models with artificial intelligence and machine learning, small errors and omissions can end up having large consequences. No matter how diligent we are when developing models, we are human and omissions or errors still occur,” she says.
As an example, Balesh Abadi cited reports of technology companies creating facial recognition software that was found to be consistently less accurate for women of colour. “I think it’s safe to assume the companies that made the software were not actively trying to discriminate,” she said, adding that such errors tend to stem from failing to use sufficiently diverse data.
To ensure the ethical use of data is considered systematically from the point of collection to the point of use, Scotiabank set up the dedicated data ethics team Balesh Abadi belongs to about two years ago. The team is responsible for the Bank’s responsible and ethical use, management, protection, and sharing of data. The Bank also set guidelines that go beyond regulatory requirements and is implementing tools and processes to help practitioners proactively incorporate ethics into their work. So far, the team has created a set of tools and processes and is training those who build products and services to ensure they understand what data ethics is, why it’s important and how they can avoid errors, Balesh Abadi said.
One such tool is the Ethics Assistant, which Scotiabank launched late last month. Built using Deloitte’s Trustworthy AI Impact Assessment Tool, the Ethics Assistant is used early in the development process to help modellers think through ethical considerations before AI and machine learning projects are deployed. First, it poses a series of questions covering a spectrum of possible risks for the project, including accountability, fairness, transparency, explainability, third-party liability, security, reliability, and acceptable use. Based on the answers provided, model developers are given practical guidance to address potential areas of concern.
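Neither Scotiabank nor Deloitte has published the Ethics Assistant’s internals, but the general pattern described above (a questionnaire whose answers map to practical guidance across defined risk areas) can be sketched roughly as follows. The risk areas come from the paragraph above; the specific questions, guidance text, and the assess function are illustrative assumptions, not the actual tool.

```python
# Hypothetical sketch only: the Ethics Assistant's internals are not public, so this
# simply illustrates the pattern the article describes -- a questionnaire whose
# answers map to practical guidance for model developers.

RISK_AREAS = [
    "accountability", "fairness", "transparency", "explainability",
    "third-party liability", "security", "reliability", "acceptable use",
]

# Each (assumed) question names its risk area and the guidance surfaced if the
# developer's answer flags a concern.
QUESTIONS = [
    {
        "area": "fairness",
        "text": "Does the training data include attributes correlated with protected groups?",
        "guidance": "Review representation across groups and test outcomes for disparate impact.",
    },
    {
        "area": "explainability",
        "text": "Can the model's individual decisions be explained to an affected customer?",
        "guidance": "Add an interpretability layer or document a human-review escalation path.",
    },
]

def assess(answers: dict) -> list:
    """Return practical guidance for every question answered True (a flagged concern)."""
    return [
        f"[{q['area']}] {q['guidance']}"
        for q in QUESTIONS
        if answers.get(q["text"], False)
    ]

if __name__ == "__main__":
    responses = {QUESTIONS[0]["text"]: True, QUESTIONS[1]["text"]: False}
    for line in assess(responses):
        print(line)
```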
It tends not to be the techniques used to create products and services that are biased, Balesh Abadi emphasizes; more often it is historical bias in the data, which has been collected over a lengthy period. “National origin, disability status, and marital status are some of the things we look at to ensure there is fairness, explainability and transparency,” she said.
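As one concrete illustration of the kind of check a fairness review might run on attributes like those Balesh Abadi mentions, the sketch below computes a simple disparate-impact ratio over model outcomes. The column names, the toy data, and the 0.8 “four-fifths” threshold are assumptions made for the example; the article does not describe Scotiabank’s actual tests.

```python
# Illustrative only: a basic disparate-impact check on model outputs, not
# Scotiabank's actual methodology. Column names and the 0.8 threshold (the
# common "four-fifths" rule of thumb) are assumptions for this example.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favourable outcomes within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's selection rate to the highest group's."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy model outputs: 1 = approved, 0 = declined.
    data = pd.DataFrame({
        "marital_status": ["single", "single", "married", "married", "single", "married"],
        "approved":       [1,        0,        1,         1,         1,        1],
    })
    ratio = disparate_impact_ratio(data, "marital_status", "approved")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # four-fifths rule of thumb
        print("Potential adverse impact -- investigate data and model before deployment.")
```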
“Our model developers take the utmost care when introducing models, to ensure they are not biased,” Balesh Abadi said. “Ethics Assistant supports the existing approach and ensures that, as models proliferate, we are consistently asking ourselves the right questions through a formalized process to further reduce the risk of errors and omissions.”
“The tool provides data scientists with great insights into the potential bias and risks associated with their model development, giving them the opportunity to apply treatments to mitigate the risk before deploying the models in production,” said Tanaby Zibamanzar Mofrad, Director, Data Science and Analytics, Global Wealth Analytics at Scotiabank.
About three years ago, as issues of trust and ethics around AI/ML began to come to light, Deloitte, a global advisory firm, started working on what it calls the Trustworthy AI Impact Assessment Tool.
“There were a lot of trust issues around systems making decisions on behalf of consumers. It was important for us to take a comprehensive approach to the steps organizations can take to be more comfortable using these emerging technologies, and to have the right set of guardrails to support them in moving AI usage forward,” said Preeti Shivpuri, Trustworthy AI Leader in the Omnia AI Strategy Practice at Deloitte Canada.
“The responsibility for Deloitte is huge,” she said. “We are not just looking at it from an organization standpoint or even a client perspective; we’re also looking at it from our clients’ customers’ perspective. For Scotiabank, for example, the trust of its customers is paramount.”
“There is a lot of evolution to come. The tool we have now is just the tip of the iceberg. It’s bringing the right stakeholders — people with diverse opinions — to the table to ask the right questions and challenge the status quo to ensure the technologies are evolving to do the right thing for society, for human good,” Shivpuri said.