Saturday, April 13, 2019

Explainable AI—A Critical Prerequisite For AI Adoption And Success


As artificial intelligence (AI) flourishes across multiple industries, ethical questions about its use proliferate. 

But whether AI is being used to provide customer service, diagnose health conditions or approve loans, consumers will trust the technology to the extent that its decisions are transparent—and that they’re aware of what personal data went into making those decisions, says Intel’s head of AI for social good, Anna Bethke.

“In terms of the AI ethics discussion, it is fairly unanimous that having full information is something that is beneficial and good,” she says.

Studies show that transparency is critical to AI adoption. Three-quarters of respondents to a 2017 PwC CEO Pulse survey say that the potential for bias and a lack of transparency impede AI adoption in their organizations.

Consumers also want the ability to address AI decisions that appear biased or incorrect—especially when the consequences are severe, as is the case with job rejections, loss of medical benefits or denial of bail.

Consider COMPAS, an AI risk-assessment tool used across the U.S. to estimate how likely a criminal defendant is to commit another offense. The program has faced court challenges alleging that it violates defendants’ due process rights by not disclosing how it makes its recommendations.

Yet providing such disclosure is no easy task. One of the biggest challenges for explainable AI (XAI) is that neither machine learning nor deep learning models can easily explain how they reach consequential decisions, Bethke says.

Complexity And Counterfactuals
“As we make AI networks and models more and more complex, it does get harder to say why certain decisions are being made,” explains Bethke. Exactly how is an algorithm interpreting the data that’s being fed into it?

In simple classification models, it may be possible to explain how each variable contributes to a prediction. When predicting the price of a home in a certain area, for instance, those variables could be the number of bedrooms and bathrooms, the home’s square footage and whether or not there’s an attached garage. Decision-tree algorithms that represent clear “if-then” choices between several variables are also easier to explain—at least up to a certain number of variables. 
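
To make this concrete, here is a minimal sketch in Python of two such inherently interpretable models: a linear regression whose coefficients read as per-feature price contributions, and a shallow decision tree whose if-then splits can be printed as plain text. The feature names and prices below are illustrative assumptions, not real data.

```python
# A minimal sketch of interpretable models for the home-price example.
# All data below is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

feature_names = ["bedrooms", "bathrooms", "sqft", "has_garage"]
X = np.array([
    [3, 2, 1500, 1],
    [2, 1,  900, 0],
    [4, 3, 2200, 1],
    [3, 2, 1700, 0],
    [5, 4, 3000, 1],
])
y = np.array([310_000, 180_000, 450_000, 330_000, 600_000])  # hypothetical sale prices

# Linear model: each coefficient is the estimated price change per unit
# increase in that feature, which can be read off directly.
linear = LinearRegression().fit(X, y)
for name, coef in zip(feature_names, linear.coef_):
    print(f"{name}: {coef:+,.0f} per unit")

# Shallow decision tree: its learned rules read as plain if-then statements.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```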

In deep learning, on the other hand, and particularly in computer vision, engineers can use attention layers to highlight the areas of an image that were most influential in the classification. This lets someone determine whether an AI program, in classifying an image as a dog, for example, looked for a tail or for surrounding items like a ball or a water dish. Tools like LIME can help engineers assess their algorithms in cases like these, but those tools don’t account for everything.
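
As a rough illustration of that kind of tooling, the sketch below runs the LIME library’s image explainer against a stand-in classifier. The classifier function here simply scores images by brightness and the image is random noise, both placeholders for a real model and a real photo; the point is the workflow: LIME perturbs superpixels, watches how the prediction changes and returns a mask over the most influential regions.

```python
# A hedged sketch of explaining an image prediction with LIME.
# The "classifier" and image below are stand-ins, not a real model or photo.
import numpy as np
from lime import lime_image

def classifier_fn(images):
    # images: (n, H, W, 3) batch; return (n, n_classes) prediction scores.
    # Toy two-class scorer based on mean brightness, standing in for a real model.
    scores = images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - scores, scores], axis=1)

image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # placeholder image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, hide_color=0, num_samples=200
)

# Mask of the superpixels that most pushed the prediction toward the top label.
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
print("pixels in influential superpixels:", int(mask.sum()))
```
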
For more complex algorithms, so-called counterfactuals, which modify data points to describe conditions that could lead to different outcomes, can explain what factors would need to change to achieve a different result without revealing proprietary code, Bethke notes.

Some algorithms may simply be too complex to explain to individuals, even when an explanation is warranted or required by law. Counterfactuals offer an alternative, helping a loan applicant, to give just one example, understand the rationale behind a decision, according to an article in the Harvard Journal of Law & Technology. People can learn three things from counterfactuals, the authors write: why a specific decision was reached, what grounds exist to contest it and what could be changed to generate a more desirable result in the future.
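
To sketch how a counterfactual explanation might be generated, the snippet below pairs a hypothetical loan-scoring rule, standing in for a black-box model, with a simple search for the smallest single-feature change that flips a rejection into an approval. The feature names, weights and threshold are all illustrative assumptions, not anything taken from COMPAS or a real lender.

```python
# A hedged sketch of counterfactual explanations for a hypothetical loan decision.
# The scoring rule stands in for a black-box model; all numbers are invented.
def approved(applicant):
    weights = {"credit_score": 0.01, "income_k": 0.02, "debt_ratio": -3.0}
    score = sum(weights[k] * v for k, v in applicant.items())
    return score >= 6.0  # arbitrary approval threshold

applicant = {"credit_score": 600.0, "income_k": 45.0, "debt_ratio": 0.45}
print("approved?", approved(applicant))  # False for this applicant

# For each feature, nudge it in the favorable direction until the decision flips,
# reporting the smallest single-feature change that would yield approval.
steps = {"credit_score": 5.0, "income_k": 2.5, "debt_ratio": -0.05}
for feature, step in steps.items():
    candidate = dict(applicant)
    for _ in range(100):
        candidate[feature] += step
        if approved(candidate):
            print(f"counterfactual: {feature} {applicant[feature]:g} -> {candidate[feature]:g}")
            break
```

A real system would search over many features at once and weigh how feasible each change is, but the output has the same shape as the three uses the authors describe: the reason for the decision, the grounds to contest it and the path to a better outcome.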

The Transparency Imperative
Transparency involves more than just helping explain the choices behind an algorithmic prediction or decision. It has to inform the entire data pipeline, from collection to labeling to implementation.

Government rules such as the European Union’s General Data Protection Regulation, which aims to give consumers more control over how their personal data is collected and stored by companies, have sparked some advances in transparency. When you visit a website now, for example, you may receive a notice informing you that it’s using cookies and offering you more information on how your data will be put to use, Bethke says.

Transparency rules in California may also soon require telling people whether they are interacting with a bot or with a human being whenever the interaction is intended to “incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” Bethke says that for a company, best practices for transparency include informing customers about when the company is using AI; how it’s collecting, storing and/or selling personal data; and what general approach it’s taking when using AI algorithms to make decisions.

To gain users’ trust, companies may also want to join the growing trend of giving consumers options for how they receive ads or recommended products, including whether they want these to be personalized, she adds.

“Let users know and control a little bit more of the secret sauce,” Bethke says.



About the Author

Lisa Wirthman is a journalist who writes about tech, business, public policy and women’s issues.
