Saturday, April 13, 2019

Ethics in the C-Suite: Rise Of The Chief Ethics Officer




A 2018 Deloitte survey of 1,400 U.S. executives knowledgeable about artificial intelligence (AI) found that 32% ranked ethical issues among the top three risks of AI. That's a surprisingly high number given that just a few years ago such concerns were barely on the radar: questions around bias and equality had yet to be raised in earnest.

Today, that situation is rapidly changing, and progressive enterprises are starting to think seriously about the intersection of ethics and AI. Much of that work is beginning to find its way to a position that’s also on the rise—the chief ethics officer.

Unlike many CXO roles, the chief ethics officer—also known under numerous other titles, including chief trust officer and chief ethics and compliance officer—doesn’t have a consistent job description. Much of the time, particularly in finance, the role comes from a need to ensure compliance with federal regulations and other rules designed to prevent monetary misdeeds, such as money laundering and insider trading.

But a few forward-looking companies are turning to the position, regardless of specific title, to help steer corporate values more broadly and oversee everything from fair trade discussions to, more recently, ensuring AI algorithms are unbiased.

A New Management Discipline
That said, in much of the tech world, the concept of a chief ethics officer remains a hard sell, in large part because of pressures to stay competitive. “Being first in ethics rarely matters as much as being first in revenues,” says Timothy Casey, a Professor in Residence at California Western School of Law and a member of the Ethics Committee of the San Diego Bar Association.

The other big issue is simply a matter of history. Casey notes that while certain professions have ethics baked in from the start, computer programming decidedly does not. “In medicine and law, you have an organization that can revoke your license if you violate the rules, so the impetus to behave ethically is very high,” he says. “AI developers have nothing like that.”

The good news is that hasn’t stopped a handful of pioneers from wading into this complicated territory, working to create their own code of conduct and rules for ethical behavior, even in the absence of outside guidance.

Salesforce, for instance, is one of the most visible companies to hire an ethics chief. Paula Goldman, who joined the company in January 2019 and carries the title of chief ethical and humane use officer, has a broad mandate: “To develop a strategic framework for the ethical and humane use of technology.” That could cover everything from averting “fake news” to protecting the environment, but one likely focus of her work will be to ensure that Salesforce’s use of AI is not subverted to nefarious ends.

Google is another pioneer in this space, and while the company doesn’t currently have an ethics chief, it does have a board that is focused entirely on ethics and AI. The board’s principles are published online and codified in a set of beliefs that AI should benefit society, should avoid bias, and should incorporate privacy principles, among other things. Since the principles were published last June, Google has said it is creating a formal review structure that will assess projects and products under these rules before they go to market.

Getting Your Own House In Order
But for a company like Google, whose culture is steeped in technology, bringing ethical considerations to bear on emerging technologies like AI is one thing. What happens at a company that is still in the throes of digital transformation? Those companies are finding their ethics chiefs tasked with new kinds of challenges, and the transition isn't always easy.

Michael Levin, like many chief ethics officers, began his career in law before ultimately moving in-house. (“I got tired of cleaning up messes,” he says.) After a stint at an ethics-focused startup and BAE Systems, he became director of ethics at Boeing and then chief ethics officer at the Federal Home Loan Mortgage Corporation, aka Freddie Mac, where he’s worked since 2014.

For Levin, the life of a chief ethics officer still revolves around critical baseline functions such as providing guidance and awareness of ethics to the entire business. “That includes training to shape the corporate culture and responding to allegations of misconduct,” he says. As part of that, Levin’s group runs web- and phone-based helplines where employees can ask questions anonymously or report problems they run across.

While Levin says Freddie Mac’s use of machine learning, AI, and predictive analytics is evolving, he says the company is keenly focused on the ethics of data handling and protection. Freddie Mac buys, pools, and resells mortgages into mortgage-backed securities en masse, to the tune of millions of loans. “That data has to be protected the right way,” says Levin. “The more the markets change, the more competitive they get, and the more companies are going to be looking at ways to use data to get a competitive advantage. That’s why clear policies around data use and protection are critical and why the ethics office should be involved early.”

A Bull’s-eye On Ethics At Target
Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he's best known as the man who laid the ethical groundwork for Target as the company's first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. "This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community," he says. "In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?"

For Foehl, all of these issues are just part of various ethical frameworks that he's built over the years: complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the "virtue" of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.” 

In other words, a big data project may raise questions about customer privacy. It’s part of the chief ethics officer’s role to figure this out whenever a new initiative or technology arises. “When a company thinks about an emerging technology such as AI,” says Foehl, “the company should not only ask, ‘How can we use it?’ but also, ‘Should we use it?’”

It’s important to understand that a typical chief ethics officer is not personally scouring code and sifting through machine learning models to mitigate risk of “machine bias.” The role is more advisory and more strategic than tactical. At Target, Foehl’s job was to meet with the executive team to advise them on the appropriateness of decisions and to understand the ethical risks around them—and to train managers in how to do the same. 

You can see some of this at work in the Brookings Institution’s “Blueprint for the Future of AI,” which lays out five ethical dilemmas around AI and a six-step process for how to deal with them, ranging from the development of a code of ethics to building a remediation system when things go awry.

But, says Foehl, the biggest challenge is more forward-looking: "Identifying and understanding new ethical issues as they crop up. For example, is an android with human-level intelligence owed human rights? That's a bridge we haven't had to cross yet, but eventually we will."

The Job Of The Future
Many of these discussions around ethics and AI seem academic today, but that’s changing quickly. Research firm Cognizant included the “chief trust officer” in its 21 Jobs of the Future study, alongside gigs like quantum machine learning analyst and genomic portfolio director. And at a recent education conference, one consultant suggested that philosophy graduates would be in heavy demand by 2030, employed to “look into AI-related outputs through a human lens.”

In other words, as AI becomes more and more intelligent, we humans are going to have to work harder to keep up.
 
Doug Rose, author of the upcoming book “Data Dilemma: How Data Ethics Defines Your Business,” offers a tangible example. “Right now, if a crash is unavoidable, Google’s self-driving cars are designed to collide with the smaller of two objects,” he says. “That was an engineering solution to a deeply moral question.” 
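Rose's point is easiest to see in code. The sketch below is purely hypothetical — the function, field names, and numbers are illustrative and are not drawn from Google's or any real autonomous-driving codebase — but it shows how a "collide with the smaller object" policy reduces a moral question to a one-line size comparison:

```python
# Hypothetical sketch of a crash-mitigation rule reduced to a size comparison.
# All names and figures are illustrative assumptions, not a real system's API.

def choose_collision_target(obstacles):
    """If a crash is unavoidable, pick the obstacle with the smallest size.

    Note what the rule never asks: what the objects *are*, who is near them,
    or what harm each collision would cause. The moral question has been
    collapsed into a single engineering metric.
    """
    return min(obstacles, key=lambda o: o["size_m2"])

obstacles = [
    {"name": "parked truck", "size_m2": 9.0},
    {"name": "trash can", "size_m2": 0.5},
]
print(choose_collision_target(obstacles)["name"])  # trash can
```

The unease Rose describes lives in that `key` function: whoever writes it has quietly answered an ethical question, which is precisely the decision a chief ethics officer would want surfaced rather than buried in code.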

That won’t be—or shouldn’t be—the model going forward as AI adoption grows rapidly across industries. More and more, companies need an ethics chief, Rose adds, “to insert themselves into these tough decisions. As these technology questions become more complicated, these ethical decisions might not be just about what’s right and wrong. They actually could turn out to be about life and death.”



About the Author
Forbes Insights is the strategic research and thought leadership practice of Forbes Media. By leveraging proprietary databases of senior-level executives in the Forbes community, Forbes Insights conducts research on a wide range of topics to position brands as thought leaders and drive stakeholder engagement.


