Saturday, April 13, 2019

Wrestling With AI Governance Around The World





Do you trust artificial intelligence (AI) to select and purchase groceries for you? How about trusting it to determine the amount of your next pay raise? Or letting it decide whether you’re eligible for a loan?

As private and public sectors experiment with AI, they are also wrestling with new ethical and legal questions. Think tanks, research organizations, and other groups are crafting recommendations for policymakers about how to ensure responsible and ethical use of AI.

Governments, in turn, are swinging into action. The European Union’s landmark General Data Protection Regulation (GDPR), which went into effect in 2018, was just the start. Some countries have been developing principles for AI while others are drafting laws and regulations. 

The intersection of AI and public policy has reached an inflection point, says Jessica Cussins Newman, AI policy specialist at The Future of Life Institute. “There are specific AI applications that might be so potentially harmful to groups of people that industry self-regulation might not be enough,” says Cussins Newman. “Governments are waking up to the fact that they need to be thinking about this in a pretty comprehensive way.”

 “Regulators have to catch up with technology,” adds Brandon Purcell, principal analyst at Forrester Research. “Technology is moving at breakneck speed.”

The following is a breakdown, by region, of how countries are approaching AI governance and regulation. As with AI itself, it is very much a work in progress: while some countries have policy measures in play, most are still in the exploratory stage.

United States

While the U.S. has yet to pass legislation on AI governance at the federal level, federal agencies are issuing sector-specific guidance as AI permeates different business areas. State governments are also taking steps toward regulating AI. 

Exploratory efforts:
●       The U.S. National Highway Traffic Safety Administration and the Department of Transportation released guidance in 2016 and 2017 on driver-assistance technologies, including those that might use AI to operate vehicles in place of humans.
●       The White House Office of Science and Technology Policy under former President Barack Obama held a series of workshops on AI resulting in three papers released in 2016: “Preparing for the Future of Artificial Intelligence,” “The National Artificial Intelligence Research and Development Strategic Plan,” and “Artificial Intelligence, Automation, and the Economy.” All three papers touched on ethics questions raised by the development of AI.
Proposed policy and regulatory measures:
●       Federal lawmakers introduced a bill in 2017 that would establish a committee to advise the Department of Commerce on topics like ethics training for technologists developing AI. The bill didn’t receive a hearing in that session of Congress but could be reintroduced in the current session.
●       The U.S. Securities and Exchange Commission issued guidance in February 2017 on the use of robo-advisers, or algorithms that provide investment advice. The SEC’s guidance made it clear that robo-advisers need to meet the same standards of disclosure and compliance as human advisers.
●       California lawmakers passed the California Consumer Privacy Act in 2018, establishing rules for how businesses can collect and use people’s data. By handing people control over their personal information, it could ultimately let them control whether, and how, that data is used in commercial AI technologies.
●       The state of Vermont passed a law in 2018 requiring that anyone who buys and sells consumers’ personal information must register with the state and disclose how they handle that data. This provides more transparency for individuals about how their data may be used in technologies such as AI.
●       More than 20 other states have also passed laws pertaining to autonomous vehicles, including safety, accountability, and liability issues.

United Kingdom

The U.K. has several active governmental groups exploring AI governance, and has released a steady stream of reports on AI policy and legal issues.
Exploratory efforts:
●       The House of Commons Science and Technology Committee released a report in September 2016, “Robotics and Artificial Intelligence,” exploring ethical and legal issues related to AI. Contributors to the report noted that ethical and legal issues raised by AI “need to be identified and addressed now” to decrease potential risk while maximizing benefits of the technology.
●       An All-Party Parliamentary Group on Artificial Intelligence, an informal group of U.K. lawmakers, launched in January 2017 with the aim of gathering evidence on topics like AI ethics. It later released a report recommending the appointment of a minister of AI who would focus on the economic, social, and ethical implications of AI.
●       The House of Lords established a committee to make recommendations on AI. That committee released a report in April 2018 titled “AI in the UK: Ready, willing and able?” that included more than 70 policy ideas for government action on AI, such as establishing a Centre for Data Ethics and Innovation to provide guidance on data privacy and more.
●       That Centre for Data Ethics and Innovation launched in November 2018, with a focus on setting “the measures needed to build trust and enable innovation in data-driven technologies.”

European Union

The European Union (EU) took the first stab at AI regulation with GDPR in 2018. A number of European countries are exploring their own regulations, too, taking early steps toward putting policies on the books.
Exploratory efforts:
●       The European Parliament’s report on robotics and AI released in January 2017 noted that the EU “could play an essential role in establishing basic ethical principles” in the use of robots and AI. It led to a declaration that data rights and ethical standards would be among the EU’s top ongoing legislative priorities.
●       In April 2018, 25 European countries signed an agreement of cooperation on AI, resolving to collectively deal with “social, economic, ethical and legal questions” of AI.
●       In the same month, the European Commission released a call for a coordinated approach on AI, setting out the idea that the EU should “be the champion of an approach to AI that benefits people and society as a whole” by taking into account ethical principles.
●       The European Commission set up a group with more than 50 business, academic, and civil society experts on AI in June 2018 to advise the Commission on AI, including ethical and legal frameworks. The group released a draft of its ethics guidelines in December 2018. The commission is set to explore policies based on those recommendations this year.
Proposed policy and regulatory measures:
●       The GDPR, which went into effect in 2018, established sweeping privacy rules as well as requirements for how EU residents’ data can be used with AI. It specifically addresses how businesses should provide transparency around automated decision-making.

France

Exploratory efforts:
●       A commission led by the French Data Protection Authority released a report in May 2018 on ethics issues related to algorithms and AI. It included broad policy recommendations such as “increasing incentives for research on ethical AI” and “strengthening ethics within businesses.”
●       French President Emmanuel Macron, along with Canadian Prime Minister Justin Trudeau, committed in June 2018 to forming an international study group for AI that will look at issues including ethical considerations. They are now inviting other nations to join.

Germany

Exploratory efforts:
●       Facebook created the Institute for Ethics in Artificial Intelligence in partnership with the Technical University of Munich in Germany in January 2019. The group’s goals include developing AI guidance for lawmakers.
●       The German government adopted a strategy on AI in November 2018 with one of its three goals being to integrate “AI in society in ethical, legal, cultural and institutional terms in the context of a broad societal dialogue and active political measures.”
Proposed bills and regulatory measures:
●       Germany’s Federal Ministry of Transport and Digital Infrastructure released a report in June 2017 on automated and connected driving, outlining ethical rules and how accountability and liability would shift as automated systems take over driving previously done by humans. It also called for transparency in the development and deployment of autonomous technology.

Asia-Pacific

China may be one of the most-watched countries when it comes to AI, but other countries in the Asia-Pacific region are also taking steps to advance AI and discuss how and whether to regulate it.

China

Exploratory efforts:
●       China issued guidance in July 2017 on AI development that notes how the “disruptive technology” could impact social ethics, adding that great importance should be attached to “safe, reliable and controllable development” of AI.
Proposed bills and regulatory measures:
●       The Chinese government also established a national standard on personal data collection that took effect in May 2018 to regulate how individuals’ data can be collected, stored, and shared.

India

Exploratory efforts:
●       India’s federal government adopted a national AI strategy in June 2018 that recommends setting up a Centre for Studies on Technological Sustainability to address issues related to ethics, privacy, and more.

Japan

Exploratory efforts:
●       Japan has a network of academia, businesses, and non-government organizations discussing and recommending various ethics guidelines for AI.
●       Its federal government also released an AI technology strategy paper in March 2017 that noted it would set up additional opportunities for examining the ethical aspects of AI.

Singapore

Exploratory efforts:
●       Singapore’s federal government released a model framework for AI governance in January 2019, offering guidance to the private sector on how to address ethical issues that arise from using AI.
●       Singapore also established an advisory council in June 2018 to guide the government on developing ethics standards for AI.

South Korea

Exploratory efforts:
●    South Korea’s federal government released a report in 2016 that outlines steps for regulating AI, including a charter of ethics “to minimize any potential abuse or misuse of advanced technology by presenting a clear ethical guide.”

Australia

Exploratory efforts:
●       Australia’s federal government in May 2018 set aside $29.9 million for AI-related efforts, including development of an ethics framework.

New Zealand

Exploratory efforts:
●       New Zealand’s federal government said in May 2018 that it will move quickly on an ethical framework for AI and its effects, and released a report, “Artificial Intelligence: Shaping a Future New Zealand,” on the opportunities and challenges that come with using AI.
●       The AI Forum of New Zealand met with government officials in July 2018 about recommendations in its report that outlined some of the group’s AI ethics concerns around bias, transparency, and accountability in the use of algorithms.
Countries around the world are in different stages of AI governance, but momentum is clearly building. As the technology becomes more pervasive, so too will efforts to put enforceable regulations on the books around the world.




About the Author
Forbes Insights is the strategic research and thought leadership practice of Forbes Media. By leveraging proprietary databases of senior-level executives in the Forbes community, Forbes Insights conducts research on a wide range of topics to position brands as thought leaders and drive stakeholder engagement.

