Saturday, April 13, 2019

The Growing Marketplace For AI Ethics



As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out on, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

 Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks. 

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute. 

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”
Here’s a look at key models and their core principles.

Organization: Institute of Electrical and Electronics Engineers (IEEE)

Concept: A crowd-sourced guide to educate and empower designers and developers to prioritize ethics
The IEEE’s “Global Initiative on Ethics of Autonomous and Intelligent Systems” is one of the most ambitious sets of AI ethics guidelines to date. It includes contributions from hundreds of members on six continents. One of its core documents, the 250-page “Ethically Aligned Design,” lays out best practices for setting up an AI governance structure, with pragmatic treatment of data management, legal affairs, economics, affective computing, public policy, and other areas.
The project also includes working groups currently developing “sample” ethical guidelines that businesses and governments can adapt, update, and put to use.

Core principles:
●    Human well-being should be a key success metric for AI. One key IEEE priority is to establish human well-being as a metric of progress in the AI age. Measuring and honoring the potential for holistic economic prosperity should matter more than pursuing one-dimensional goals such as productivity increases or GDP growth.
●    Socialize AI ethics across the enterprise. Another IEEE priority is to ensure that everyone involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations. IEEE provides training courses to practice what it preaches.
●    Be a driver, not a passenger, on ethics. Access to insights on emerging standards, and the ability to shape those standards, helps prepare any organization for AI challenges.
Further reading: Explore the approved IEEE P7000 Standards Projects.

Organization: Future of Life Institute

Concept: High-level prescriptions for AI ethics, with a “do no harm” mandate to developers 
The AI ethical framework known as “The 23 Asilomar Principles” may sound esoteric, but it shouldn’t be unfamiliar to companies adopting AI. The Asilomar Principles were hammered out in 2017 by more than 100 scholars, scientists, philosophers, and industry leaders brought together by the Future of Life Institute at Asilomar on the California coast. (Big-name members of the Institute include Elon Musk, Skype co-founder Jaan Tallinn, and, before his death, Stephen Hawking.)

The Institute was founded in 2014 to fuel initiatives that “develop optimistic visions of the future” and “safeguard life” from threats posed by biotechnology, nuclear weapons, climate change, and AI.

The Asilomar Principles have been endorsed by leaders at Google DeepMind, Google Brain, Facebook, Apple, and OpenAI. And last August, the California Legislature became the first lawmaking body to officially endorse them as a policy-making guide for the future.

Core principles:
●    Develop AI for social good. “The goal of AI research must be to create not undirected intelligence but beneficial intelligence.”
●    Create a culture of trust. “A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.”
●    Cooperation trumps competition. “Teams developing AI systems should avoid competitive racing, actively cooperating instead to avoid corner-cutting on safety standards.”
Further reading: The Future of Life Institute’s 2018 Annual Report.

Organization: AI Now

Concept: A framework for public and corporate policy protections against human rights infringements and other threats posed by AI
Companies at all stages on the AI development spectrum want to know where public policy on intelligent systems is headed. The work being produced by New York–based research institute AI Now offers a road map. 

The institute delivers periodic topical reports and more general “state of play” annual reports that often point out where AI threatens rights and liberties and feature specific policy recommendations. The institute was founded by two well-connected women who know how the tech business and government work: Kate Crawford, who studies social change and media technologies, and Meredith Whittaker, founder of Google’s Open Research group.

AI Now’s work draws headlines and shapes public debate. In March 2019, the group’s report on predictive policing highlighted the skewed resource-allocation recommendations that AI programs made based on “dirty data” collected through biased police practices. The report called out AI technology vendors for doing unreliable work.

Core principle:
One key recommendation from AI Now’s 2018 report is a typical example of the way the group follows principle formulations with prescriptions for action:    
●    Consider AI’s full impact across the supply chain. “Fairness, accountability, and transparency in AI require a detailed account of the ‘full stack supply chain.’ We need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems, meaning energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.”
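
To make that recommendation concrete, here is a minimal, purely illustrative sketch of what tracking a “full stack supply chain” might look like in practice. It is not drawn from AI Now’s report; every name, field, and value below is a hypothetical example of the kinds of component parts, labor, and energy costs the institute says should be accounted for.

# Hypothetical sketch of a supply-chain record for one AI system release;
# the fields mirror the items AI Now recommends accounting for.
from dataclasses import dataclass, asdict
import json

@dataclass
class SupplyChainRecord:
    model_name: str                  # the system being released
    training_data_sources: list      # origins of the training data
    test_data_sources: list          # origins of the evaluation data
    third_party_apis: list           # external APIs and infrastructure it relies on
    labor_notes: str                 # who labeled or moderated data, under what terms
    estimated_energy_kwh: float      # rough energy cost of training

record = SupplyChainRecord(
    model_name="resource-allocation-model-v1",              # invented example
    training_data_sources=["2015-2018 incident reports (internal)"],
    test_data_sources=["2019 holdout set (internal)"],
    third_party_apis=["geocoding-service"],                  # invented dependency
    labor_notes="Labels produced by contracted clickworkers",
    estimated_energy_kwh=120.0,
)

print(json.dumps(asdict(record), indent=2))                  # audit-ready summary

In practice, a record like this would accompany each release of a system so that the origins of its data, its dependencies, and its human and environmental costs remain auditable over the product life cycle.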
Further reading: AI Now’s Report 2018.

Organization: Atomium-European Institute for Science, Media and Democracy (EISMD)

Concept: Bolstering and highlighting the role of humans in a future shaped in part by AI 
The European Union often precedes the U.S. in establishing government frameworks in which business and technology development operate. The Atomium-European Institute was founded a decade ago by the former president of France, Valéry Giscard d’Estaing, and journalist Michelangelo Baracchi Bonvicini to develop thinking about democracy in the mediated digital age. 

In 2017, the institute launched AI4People, the first global forum in Europe on the social impact of AI, with the goal of laying a foundation for sound principles, policies, and practices around AI. This year it published an ethical framework that stands out for the way it prioritizes defining, and eventually safeguarding, the predominant role of humans in an AI-driven future.

Core principles:
●    Keep humans in the driver’s seat. “What we can do: enhancing human agency without removing human responsibility.”
●    Strive for the greater good. “What we can achieve: increasing societal capabilities, without reducing human control.”
●    Preserve individuality. “How we can interact: cultivating societal cohesion, without eroding human self-determination.”
Further reading: AI4People’s Five Challenges.

Organization: Microsoft

Concept: A framework for democratizing AI development
In 2018, Microsoft president and chief legal officer Brad Smith and Harry Shum, executive vice president of Microsoft AI and Research, published “The Future Computed: Artificial Intelligence and Its Role in Society,” a book that emphasizes the need to build public trust in AI by resisting the impulse toward competitive control and embracing outside input from philosophers, ethicists, and social scientists.

Core principles:
●    Democratize AI development. Distribute responsibility for ethical AI by opening up the development process. “We’re working to democratize AI in a manner that’s similar to how we made the PC available to everyone.”
●    Invite end-users into the process. System developers must understand and address potential barriers in a product or environment that could exclude people. AI systems should be designed to understand the context, needs, and expectations of the people who use them.
●    Make decision making transparent. AI systems will make decisions that impact people’s lives. It is particularly important that people understand how those decisions were made.
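
Microsoft’s book treats transparency as an organizational commitment rather than a coding pattern, but a loose sketch, not taken from the book, can show the basic idea: an automated decision returned together with a plain-language account of the factors behind it. The function, inputs, and threshold below are all invented for illustration.

# Hypothetical illustration: pair each automated decision with an explanation
# that a person affected by the outcome could actually read.
def decide_loan(income: float, debt: float, threshold: float = 0.35) -> dict:
    ratio = debt / income
    approved = ratio <= threshold
    explanation = (
        f"Debt-to-income ratio is {ratio:.2f}; "
        f"applications at or below {threshold:.2f} are approved."
    )
    return {"approved": approved, "explanation": explanation}

print(decide_loan(income=50_000, debt=12_000))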
Further reading: Additional content related to “The Future Computed.”
Each of these efforts to meet AI’s ethical challenges has gained power and focus, in part, by refining its original work through collaboration. The good news is that all of these projects have embraced the idea that building these frameworks is a collective work in progress. Any company looking for relevant and practical rules of the road has the opportunity to get involved.


About the Author
Forbes Insights is the strategic research and thought leadership practice of Forbes Media. By leveraging proprietary databases of senior-level executives in the Forbes community, Forbes Insights conducts research on a wide range of topics to position brands as thought leaders and drive stakeholder engagement.

