An algorithm can’t choose where, when, or how it’s used, or whether it’s used for good or ill.
This puts the burden for the ethical use of artificial intelligence (AI)
squarely on human shoulders.
But are companies taking up the
mantle of responsibility? Keith Strier, Global Artificial Intelligence Leader
at Ernst & Young, isn’t convinced. “The business world has been much more
focused on the upside, not the downside, of these technologies,” Strier says.
“They’re focused on, ‘Let’s get this built. Show me the money.’”
Meanwhile, universities and think
tanks, such as the Partnership on AI, the Future of Life
Institute, OpenAI, and the AI Now Institute, are actively trying to establish
ethical guardrails for AI and urging both governments and business leaders to
adopt them.
But codifying ethical standards or
enforceable regulations is incredibly complex. And the challenges in using AI
ethically are not equal across industries. Notably, healthcare AI has emerged as a minefield of ethical quandaries, from potential misdiagnoses by flawed algorithms to gene editing, which is why we covered that issue in an earlier edition of Forbes AI.
Beyond healthcare, however, there are many industries where AI technology is raising equally pressing ethical questions. Here is a look at four of the biggest sectors where ethics and AI are colliding fastest.
Autonomous Transportation
One of the most complex aspects of
developing autonomous vehicles is the programmed intelligence they will use to
occasionally make life-and-death decisions on behalf of human passengers.
The research behind this concept isn’t new: the “Trolley Problem,” for instance, is a well-known thought experiment in which the driver of a runaway streetcar must choose between staying on the current track, killing five bystanders, or switching tracks and killing just one person.
Making a moral calculation between potential victims is hard enough for humans. For autonomous vehicles, it will eventually be a matter of code: an algorithm must be designed to make that choice automatically. Self-driving cars will need to wrestle with many variations of the trolley problem.
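To make that abstraction concrete, here is a minimal, purely illustrative sketch of what encoding such a policy could look like. The maneuver names, casualty estimates, and utilitarian scoring rule are all hypothetical; no real autonomous-driving system is known to work this way.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical maneuver and its predicted consequences."""
    maneuver: str                 # e.g., "stay_course" or "swerve_left"
    expected_casualties: int      # predicted number of people harmed
    collision_probability: float  # estimated likelihood the harm occurs

def choose_maneuver(outcomes: list) -> Outcome:
    """Pick the maneuver with the lowest expected harm.

    This reduces the trolley problem to a single utilitarian score --
    exactly the kind of moral judgment that, once coded, gets made
    automatically on behalf of passengers and bystanders.
    """
    return min(outcomes, key=lambda o: o.expected_casualties * o.collision_probability)

# A trolley-problem-like scenario: stay and risk five, swerve and risk one.
options = [
    Outcome("stay_course", expected_casualties=5, collision_probability=0.9),
    Outcome("swerve_left", expected_casualties=1, collision_probability=0.9),
]
print(choose_maneuver(options).maneuver)  # -> swerve_left
```

Even this toy example smuggles in contested choices: how casualties are counted, how probabilities are estimated, and whether minimizing expected harm is the right objective in the first place.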
According to Meredith Whittaker,
co-founder and co-director of the AI Now Institute at NYU, this is only the tip
of the ethical iceberg. Accountability and liability are open and pressing
issues that society must address as autonomous vehicles take over roadways.
“Who ultimately bears responsibility
if you’re looking at a black box that only the company, who has a vested
interest in perpetuating the use of these technologies, is able to audit?” she
asks. “These are urban planning questions. They’re regulatory questions. And
they’re questions around the role of the public in decision making that impacts
their lives and environments.”
Two 2018 fatalities during autonomous
vehicle road tests, in Arizona and California, only elevate the urgency of
these questions. In the case of the Arizona pedestrian, who was killed by a self-driving test vehicle, investigators found that the company’s technology had failed catastrophically in an easily preventable way.
Months after launching probes into the accidents, the National Highway Traffic Safety Administration signaled that it may change existing regulations for autonomous vehicles, not to tighten them but to weaken them.
The rationale behind that decision
was to remove roadblocks to technology innovation, so that U.S.-based
automakers don’t get left behind as competitors from other countries beat them
to market with self-driving cars.
“Germany, Japan, and China are very
much out front on this,” Strier explains, citing Germany’s recently
unveiled national AI policy,
which eased regulations on autonomous vehicle development. “From a global
competitive perspective, U.S. regulators are keen to enable companies to have
freedom in the sandbox to develop these technologies.”
Looser regulations sidestep the ethical challenges that carmakers will inevitably face, but for now both enterprise and government appear to have set those questions aside. With more than $80 billion invested in developing self-driving vehicles in recent years, too much money may be at stake to tap the brakes for ethical debate.
“The train has left the station,”
Strier says. “Federal governments around the world are trying not to stand in
the way.”
Financial Services & Insurance
Small business loans. Home mortgage
rates. Insurance premiums. For financial institutions and insurers, AI software
is increasingly automating these decisions, saving them money by speeding
application and claims processing and detecting fraud. Automation in the
financial services sector could save companies $512 billion by 2020,
according to a 2018 Capgemini study.
But the risk of bias is rampant. If an applicant is denied a loan due to a low credit score, or deemed a high risk and slapped with exorbitant insurance premiums, the algorithm making those assessments is more often than not opaque.
“There are a lot of dangers here,”
says Whittaker. “You’re looking at a scenario in which these companies, whose
ultimate duty is to their shareholders, are going to be increasingly making
assumptions about people’s private lives, about their habits, and what that may
mean about their risk profile.”
For racial, gender, and ethnic
minorities, biased AI can have a potentially life-changing impact. A 2018 study
conducted at UC Berkeley found that consumer-lending technology discriminates against
minority applicants.
“There are all sorts of ways in which
we’re seeing these automated systems effectively become the gatekeepers for
determining who gets resources and who doesn’t,” Whittaker warns.
Ironically, tech companies are trying
to fight AI bias by using other AI as watchdogs. For example, one tech company
created an algorithm to “correct” biased datasets
and produce unbiased results. And in 2016, a consortium of researchers designed
AI to detect gender and racial
discrimination in algorithms, a tool that the Consumer Financial Protection Bureau (CFPB) sought to adopt to test loan decisions for bias.
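The underlying audit idea can be illustrated with a simple statistical check. The sketch below is a generic disparate-impact test on loan decisions, not the consortium’s actual tool or the CFPB’s methodology; the data and the 80% threshold (borrowed from U.S. employment-law practice) are used purely for illustration.

```python
def approval_rate(decisions: list) -> float:
    """Fraction of applicants in a group whose loans were approved."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected: list, reference: list) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model outputs: True = loan approved, False = denied.
minority_decisions = [True, False, False, True, False, False, False, True]
majority_decisions = [True, True, False, True, True, False, True, True]

ratio = disparate_impact_ratio(minority_decisions, majority_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 in this toy example
if ratio < 0.8:  # the conventional "80% rule" red flag
    print("Potential bias: the protected group is approved far less often.")
```

A real audit would control for legitimate underwriting factors before attributing the gap to bias, which is precisely what makes opaque models so hard to police.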
But major U.S. financial services and insurance companies have pushed back on the CFPB’s anti-bias efforts, saying they put them at a competitive disadvantage to fintech startups. In 2018, lawmakers quashed a CFPB policy aimed at ending racial discrimination in auto lending.
Still, Strier argues, the issue of AI
fairness in financial services is on governments’ radar globally, and
regulations will emerge to fight bias over time. What role technology will play
in enforcing those regulations remains unclear, however.
“Everyone is worried about bias,”
says Strier. “It’s a pervasive problem—and a science problem. There are
emerging technological methods coming into view, but there’s no clear-cut
answer on how to avoid bias. You’ve got a disease for which there’s no obvious
cure yet.”
Journalism and “Fake News”
The concept of fake news—deliberate misinformation
disguised as journalism that is spread largely through social media—has become
an international scourge. In March 2018, news broke that a U.K. firm had gamed
the data-sharing protocols of a major social media platform in order to
influence voters during the U.S. presidential election. The uproar from the
ensuing scandal still echoes, and the phrase “fake news” has spread, undercutting public
trust in the media.
“This is really an ad-tech story,”
Whittaker says. “Massive platforms like Facebook and Twitter work by directing
people to content they might like that will keep them clicking on ads.
Journalism has become just one input that is buffeted by whatever these
algorithmic whims are that are ultimately calibrated to earn these companies a
profit, not to where it is most salutary for an informed public.”
Can regulation or legislation stem
the tide of this corrosive misinformation?
“We’re going to have years of
emerging regulations in different parts of the world trying out different ways
to deal with this,” she predicts. “It’s a tradeoff: We want the free flow
of information, but we’ve basically created a super highway for bad stuff.”
And with “deepfake” video
now emerging, more bad stuff is on the way. Thanks to tools like FakeApp, which
is based on open source code from Google, anyone can digitally manipulate video
to create a realistic-looking record of an event that never occurred.
“It’s like Photoshop on steroids,”
says Whittaker. “I think about these technologies in the context of
old-fashioned dirty political tricks we’ve seen. Not everyone has to believe a
fake video for it to have a profound impact on our democratic processes.”
In the U.S., legislators have expressed concerns about
deepfake video, but no legislation has emerged and, for now, Congress has let
Facebook skate on promises to better police itself.
Elsewhere around the globe, countries
from Germany and France to Russia and Indonesia have introduced new laws to crack down on the spread of
misinformation on social media, but these laws have raised ethical concerns
themselves—namely, that they may be misused to muzzle free speech. In Malaysia,
for example, a journalist found guilty of spreading fake news faces up to six years in prison.
“This is a pervasive challenge of our
time,” Strier says. “There’s a lot of discussion and methods being talked
about, but no one has solved it. Right now, the computational propagandists
have the upper hand.”
Military
In 2018, Google employees took a stand. Learning that their company was supplying AI technology to the U.S. Department of Defense for “Project Maven,” which could be used for deadly drone strikes, more than 3,000 workers signed a letter of protest to CEO Sundar Pichai. Bowing to the pressure, the company declined to renew the military contract.
The ethics of tech companies
partnering with the U.S. military are fraught. Do the personal moral codes of
employees trump the security interests of a country engaged in an “AI arms
race” with rogue nations, geopolitical foes, and terrorists?
For academics and think tanks like
the Future of Life Institute, it’s a no-brainer. They have launched a worldwide
campaign calling on countries to ban the development of
autonomous weapons.
But that hasn’t deterred the U.S.
Defense Department from boosting its AI spending, and a good chunk of
that $2 billion investment
is going toward undisclosed partnerships with tech companies.
For AI Now’s Whittaker, one of the
biggest ethical dilemmas is transparency. The public has given tech companies
mountains of personal data with the tacit belief that they will guard it, not
weaponize it.
“These companies are protected by
corporate secrecy,” she notes. “It is completely probable that there are
similar projects at other companies, potentially Google, that no one knows
anything about because they’re protected on one side by corporate secrecy and
on the other side by military secrecy protocols.”
And that opacity should be ethically
troubling to society at large, she argues.
“Who is deciding who constitutes a
target? Who gets to decide what an enemy looks like?” Whittaker asks. “Should
we have a say when our data that we entrusted to tech companies is used to
train AI systems that are used in weapons?”
Complicating the ethical equation is
the question of whether the U.S. should have the means to defend itself against
possible AI-based attacks from adversaries or cyber terrorists.
Americans are divided on this issue. In a 2018 survey by the Brookings Institution, 30% of respondents said the U.S. should develop AI technologies for warfare, 39% said it should not, and 31% were undecided.
Tech companies are increasingly
finding themselves forced to take a stand by their own employees. Like Google’s
disaffected workers, a global coalition of Microsoft employees has publicly rebuked CEO Satya Nadella for signing a $479 million contract with the U.S. Army to sell it HoloLens technology to train U.S. soldiers for battle, and even to use in combat.
Nadella has publicly defended the
contract as a “principled decision” to protect frontline troops. But he also
assured his rattled staff that Microsoft would “continue to have a dialogue”
about these issues.
As far as ethical issues surrounding
the use of AI are concerned, that conversation, in boardrooms and offices
around the world, is far from over.
About the Author
Forbes Insights is the strategic research
and thought leadership practice of Forbes Media. By leveraging proprietary
databases of senior-level executives in the Forbes community, Forbes Insights
conducts research on a wide range of topics to position brands as thought
leaders and drive stakeholder engagement.