Why Ethical AI Is A Critical Differentiator
An interview with Richard Socher, Chief Scientist of Salesforce
Gaining the trust of customers and employees is hard. Companies that act honorably in the AI race will win the respect of both—and put themselves in position to outrun the competition.
The word “trust” comes up continually in my work when we talk about values. It anchors our culture. For us and many other companies, trust has become a competitive advantage. As businesses race to adopt artificial intelligence (AI), their ability to use it ethically—and in ways that generate trust from customers, partners, and the public—will become a competitive differentiator.

This means companies need to make ethics and values a focus of AI development. Some reasons for this are obvious: three-fourths of consumers today say they won’t buy from unethical companies, while 86% say they’re more loyal to ethical companies, according to the 2019 Edelman Trust Barometer. In Salesforce’s recent Ethical Leadership and Business survey, 93% of consumers said companies have a responsibility to positively impact society. Businesses are being held more accountable than ever for what they do and how they behave.
Other reasons are less obvious but just as important. AI is forcing conversations about corporate trust and ethical use because it holds up a mirror to human behavior; it amplifies preconceptions and biases that can adversely influence business decisions. In my day-to-day conversations with colleagues, responsible use of AI is a constant thread as we devise new algorithms to improve business processes. It’s our responsibility to scrutinize the sources of data and how we’re using it to train algorithms, and to understand the potential impact of AI technology on our stakeholders.
We need these conversations because an error in AI isn’t a single, contained mistake. If training data is biased, the errors compound as algorithms continue to “learn” from flawed data, and the potential for repeated harm grows—automated decisions and predictions that could affect a person’s chance of getting a loan, or that could fail to diagnose an illness. Companies must consider the interplay of AI, trust, and culture. These factors affect each other and are critical to developing an ethical framework for AI.
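To make that compounding concrete, here is a minimal sketch of the feedback loop, assuming a scikit-learn-style workflow; the loan scenario, features, and thresholds are entirely synthetic and invented for illustration:

```python
# Sketch of a bias feedback loop on synthetic loan data.
# All numbers and the scenario itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one legitimate feature (income) and a group label.
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)

# Historical approvals were biased: group 1 needed higher income to qualify.
threshold = np.where(group == 1, 60, 50)
approved = (income > threshold).astype(int)

X = np.column_stack([income, group])
for round_ in range(3):
    model = LogisticRegression().fit(X, approved)
    rates = [model.predict(X[group == g]).mean() for g in (0, 1)]
    print(f"round {round_}: approval rates {rates[0]:.2f} vs {rates[1]:.2f}")
    # Retraining on the model's own decisions locks the bias in.
    approved = model.predict(X)
```

Because each round trains on the previous model’s own decisions, the unequal approval rates never correct themselves; no new evidence ever enters the loop.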
Creating An Ethical AI Mindset
Aligning AI with corporate ethics and values allows companies to change the way employees think about using the technology. Part of that work is creating structures that let employees use AI responsibly and that build transparency into machine-learning models. It’s similar to the way companies talk at a high level about the culture they want to create and the code-of-conduct guidelines they expect employees to follow.
Conversations about AI ethics shouldn’t be limited to data scientists. They need to be reinforced everywhere in the organization and among partners and community members. AI ethics should also be rooted in corporate ethics. In our case, the core values that drive all our decisions are trust, customer success, innovation, and equality. If you don’t already operate with a set of core values for the business, you’re not ready to create an ethical AI mindset for your employees.
An important place to start in building ethical frameworks for AI is documenting data sources and assessing how people could be impacted by AI applications at all stages of development. At Salesforce, one group of engineers has updated their agile development process checklist to include ethical questions such as, “How will people be impacted by this tool?”
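Such a checklist can even be made machine-enforceable. The sketch below is hypothetical: the questions and the gating function are invented for illustration, not Salesforce’s actual checklist or process.

```python
# Hypothetical sketch: ethics questions encoded as a release gate.
# The questions and gating logic are invented, not an actual Salesforce process.
ETHICS_CHECKLIST = [
    "How will people be impacted by this tool?",
    "Are all training data sources documented?",
    "Was the training data reviewed for representativeness?",
    "Can the model's predictions be explained to the people they affect?",
]

def release_gate(signoffs: dict) -> bool:
    """Return True only if every checklist question has a recorded sign-off."""
    unresolved = [q for q in ETHICS_CHECKLIST if not signoffs.get(q)]
    for q in unresolved:
        print(f"UNRESOLVED: {q}")
    return not unresolved

# Example: a release is blocked while one question is still open.
assert not release_gate({q: True for q in ETHICS_CHECKLIST[:-1]})
```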
Some impacts of AI are bigger than others, of course: Machine learning models that determine outcomes—such as which job applicants are chosen for interviews—deserve greater scrutiny than those used to help Alexa answer questions about the weather.
For example, at a recent internal hackathon, data scientists and engineers set out to train AI models for emotion detection. They discovered, however, that the training data was based on stock images of human faces that didn’t represent an accurate cross-section of ethnic diversity. So the engineers went back to find a more multi-ethnic data set for training the models. It illustrates the value of questioning AI projects at every stage to uncover problematic uses long before products are released.
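A first-pass version of that audit can be automated. Here is a minimal sketch, assuming the image metadata lives in a pandas DataFrame; the column name, categories, and the 10% floor are assumptions for illustration:

```python
# Sketch of a representativeness check on image-dataset metadata.
# The "ethnicity" column, the categories, and the 10% floor are
# illustrative assumptions, not a real dataset schema.
import pandas as pd

def flag_underrepresented(meta: pd.DataFrame, col: str = "ethnicity",
                          floor: float = 0.10) -> pd.Series:
    """Return the groups whose share of the dataset falls below the floor."""
    shares = meta[col].value_counts(normalize=True)
    return shares[shares < floor]

meta = pd.DataFrame({"ethnicity": ["A"] * 80 + ["B"] * 15 + ["C"] * 5})
print(flag_underrepresented(meta))  # flags group C at 5% of the data
```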
It’s important to extend these frameworks to customers, not just employees. When Salesforce customers use Einstein Prediction Builder to build customized AI predictions, we encourage users to take an online course on ethical AI through our training platform. We know that even simple errors in using data can create algorithms that generate sexist or racist outcomes. The ethical AI training course raises awareness of best practices in choosing data and creating algorithms to avoid introducing biases.
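One best practice such a course might cover is checking model outcomes for disparate impact across groups. Below is a minimal sketch in the spirit of the “four-fifths rule” from US employment law; the data, column names, and 0.8 threshold are illustrative, and this is not how Einstein Prediction Builder itself works:

```python
# Sketch of a disparate-impact check on model outcomes.
# Data and column names are invented; the ~0.8 threshold echoes the
# four-fifths rule and is only a conventional warning level.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

df = pd.DataFrame({
    "gender": ["f"] * 100 + ["m"] * 100,
    "selected": [1] * 30 + [0] * 70 + [1] * 50 + [0] * 50,
})
print(f"impact ratio: {disparate_impact(df, 'gender', 'selected'):.2f}")
# A ratio below ~0.8 is a common signal that the outcome deserves review.
```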
Adding Visibility To AI Decision Making
Businesses often make transparency part of their corporate value systems, and it’s equally important in developing AI tools and applications. If businesses use AI to make predictions, they owe humans an explanation of how those decisions are made. A salesperson may wonder why Einstein prioritizes some leads and opportunities over others, especially if she’s been in the business for years and knows her market. It’s our responsibility to make AI-driven decisions visible and explainable to customers if we expect them to change how they work and rely on machine intelligence.
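To show what “explainable” can mean in practice, here is a minimal sketch that surfaces per-feature contributions from a linear lead-scoring model. This is not Einstein’s implementation; the feature names and data are invented:

```python
# Sketch: per-prediction explanations from a linear lead-scoring model.
# Not Einstein's implementation; feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["emails_opened", "days_since_contact", "deal_size_k"]
X = np.array([[5, 2, 10], [0, 30, 50], [8, 1, 20], [1, 45, 5]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = the lead converted

model = LogisticRegression().fit(X, y)

def explain(x):
    """Each feature's contribution to the log-odds, largest first."""
    contributions = model.coef_[0] * x
    return sorted(zip(features, contributions), key=lambda t: -abs(t[1]))

# Show the salesperson why this lead scored the way it did.
for name, c in explain(X[2]):
    print(f"{name}: {c:+.2f}")
```

Even a simple ranked list of contributions like this gives the skeptical veteran salesperson something to agree or argue with, which is the point.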
Gaining the trust of customers and other stakeholders is hard work. The key is to ensure that AI is grounded in an ethical framework tightly bound to core values, starting with trust. Companies will then have a better chance of enabling the business and its customers to reap the benefits of AI in the best and most responsible ways possible.