In recent years, the topic of ethics in artificial intelligence (AI) has sparked growing concern in academia, at tech companies, and among policymakers here and abroad. That's not because society suddenly woke up to the need, but because trial and error has brought ethics to center stage. Facial recognition technology, for example, has been shown to be vulnerable to racial bias. Similarly, AI-powered tools designed to conduct objective screening of job applicants evaluated women candidates more negatively than men. One crucial question is often absent from these discussions: What is an effective model for integrating ethics into AI research and development?
Tech companies, professional societies, and academic institutions have set about establishing ethics boards and drafting AI principles, and policymakers have started to formulate AI rules and regulations—such as the GDPR in Europe and California’s Consumer Privacy Act. These steps closely mirror how ethical standards first emerged in another field shaken by scandal: biomedical research.
The ethical framework that evolved for biomedical research, namely the ethics oversight and compliance model, was developed in reaction to the horrors of biomedical research during World War II, abuses that continued well into the 1970s.
In response, bioethics principles were established, along with ethics review boards guided by those principles, to prevent unethical research. In the process, these boards were given broad power to regulate research, with no checks and balances to control them. Despite deep theoretical weaknesses in its framework and massive practical problems in its implementation, this became the default ethics governance model, perhaps for lack of competition.
The framework now emerging for AI ethics resembles this model closely. In fact, the latest set of AI principles—drafted by AI4People and forming the basis for the Draft Ethics Guidelines of the European Commission’s High-Level Expert Group on AI—evaluates 47 proposed principles and condenses them into just five.
Four of these are exactly the same as traditional bioethics principles: respect for autonomy, beneficence, non-maleficence, and justice, as defined in the Belmont Report of 1979. There is just one new principle added—explicability. But even that is not really a principle itself, but rather a means of realizing the other principles. In other words, the emerging default model for AI ethics is a direct transplant of bioethics principles and ethics boards to AI ethics. Unfortunately, it leaves much to be desired for effective and meaningful integration of ethics into the field of AI.
Ethics Policing vs. Ethics Integration
The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.
Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.
Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.
We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.
Ethics is part of the R&D process whether or not we explicitly recognize it. For that reason, I argue that a suitable model for effectively integrating ethics into the R&D process is one that solves, or at least mitigates, ethics problems as they arise, using a combination of ethics and design tools. The AI ethics framework that we develop and implement at the AI Ethics Lab requires businesses to focus on three main components: building awareness and understanding of ethics within the company, embedding ethical analysis into the design and development process, and developing company policies for recurring crucial ethical questions.
1. Understanding Ethics
Developers build and create AI systems. They are in a unique position to recognize and flag ethical problems as they work through a project. They can be effective “first responders” if they understand the ethical concerns around the technology that they work on, the connection and conflicts between these concerns, and how they feature in practical application.
It is also important to understand that ethics involves analytic, structured, and systematic reasoning. Training researchers and mathematically minded developers in these skills empowers them to approach ethical thinking through a tool they already share, logic, while they learn ethical concepts and value analysis.
For example, imagine you are developing an AI health coach. It collects all of a person's health-related information 24/7, using relevant IoT devices and platforms (e.g., wearables, social media, home assistants), and complements this data by asking the user explicit questions to reveal their preferences, values, weaknesses, and goals.
Analyzing all this information, the tool provides the user with guidance for a healthy life. Such an AI coach would be a very useful supplement to traditional healthcare and wellness methods, analyzing all the relevant data in real time and alerting the user immediately if any problem arises.
However, designing such a tool would involve various ethical issues. First of all, it deals with extremely intimate data, and this data would not only be of interest to the user but also to other parties such as insurance companies and employers, who do not necessarily have the user’s best interest in mind. Moreover, if this tool is designed to effectively guide behavior change, then it could also utilize manipulative methods.
In designing such an AI coach, it is crucial that developers recognize and flag ethical issues such as privacy concerns around such intimate data (both in how it is collected and how it is shared), autonomy concerns around manipulative methods, and even fairness concerns within the training data, where a gender or race imbalance could result in higher error rates for certain groups.
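To make that last concern concrete, here is a minimal sketch of the kind of per-group error-rate check a developer might run on a validation set before flagging a fairness problem. The column names and the toy data are hypothetical, and a gap between groups is a prompt to investigate the training data, not a diagnosis of its cause.

```python
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str, pred_col: str) -> pd.DataFrame:
    """Compare per-group error rates against the overall error rate.

    Assumes `group_col` holds a demographic attribute (e.g., gender),
    `label_col` the true outcome, and `pred_col` the model's prediction.
    All column names here are hypothetical.
    """
    df = df.copy()
    df["error"] = (df[label_col] != df[pred_col]).astype(float)
    per_group = df.groupby(group_col)["error"].agg(["mean", "count"])
    per_group = per_group.rename(columns={"mean": "error_rate", "count": "n"})
    per_group["gap_vs_overall"] = per_group["error_rate"] - df["error"].mean()
    return per_group

# Toy validation set illustrating the check.
validation = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m"],
    "label":  [1, 0, 1, 1, 0, 0],
    "pred":   [0, 0, 1, 1, 0, 0],
})
print(error_rates_by_group(validation, "gender", "label", "pred"))
```

In this toy data, the "f" group shows a higher error rate than the "m" group; in practice, such a gap would be the starting point for the scrutiny described above, not the end of the analysis.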
At the AI Ethics Lab, we train developers to recognize ethics problems and become skilled at ethical reasoning through workshops and seminars. Google reportedly takes a similar approach with an AI Ethics Speaker Series and ethics training for its employees.
2. Ethical Analysis
Detecting ethical problems is not enough—we need to solve them. In fact, the goal is to catch these problems early on as they arise and solve them in real time at each stage of the innovation process, from the research phase all the way to design, development, manufacturing, and even updates of the technology. Fixing problems before they become full-blown issues or even scandals is not only “right,” but would also save resources.
The aim of this model for ethics integration is more than just avoiding unethical outcomes. It is enhancing technology and making it more beneficial, ideally for everyone. Fulfilling this goal is only possible if ethics analysis becomes a part of the innovation process and product development, and ethicists become part of project teams as employees or consultants.
While developers can help identify ethical problems, they cannot be fully responsible for them, since they are not experts in ethics. Ethicists can share this responsibility by collaborating with developers to catch ethical problems, and by taking on the job of clarifying and solving them to the best of their abilities. By design, this is a collaborative model, and the collaborative structure has a side benefit: it enables developers and ethicists to learn from each other.
Let’s return to the example of an AI health coach. In the process of designing and developing this tool, several ethically loaded questions must be answered: How should the consent structure be designed so that users make informed decisions not only in sharing their data but also in using the AI coach? How should the different well-being concerns raised by individual preferences and by public health be balanced? How can nudges be used to help individuals overcome their motivational problems without violating their autonomy? To help solve such problems, we provide mentorship and consulting to researchers and developers and collaborate with them as they develop their projects and products.
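To make the consent question slightly more concrete, here is a minimal sketch, under assumptions of my own, of a granular consent record that separates what a user agrees to share from the purposes it may be used for; the data categories, purposes, and field names are hypothetical, not a prescription.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One user's consent state, recorded per (data category, purpose) pair.

    Separating collection from each downstream purpose (coaching, research,
    third-party sharing) lets users make informed, revocable choices instead
    of a single all-or-nothing agreement. Categories and purposes shown in
    the example below are hypothetical.
    """
    user_id: str
    granted: dict = field(default_factory=dict)  # (category, purpose) -> bool
    updated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def grant(self, category: str, purpose: str) -> None:
        self.granted[(category, purpose)] = True
        self.updated_at = datetime.now(timezone.utc)

    def revoke(self, category: str, purpose: str) -> None:
        self.granted[(category, purpose)] = False
        self.updated_at = datetime.now(timezone.utc)

    def allows(self, category: str, purpose: str) -> bool:
        # Default to False: no recorded consent means no use.
        return self.granted.get((category, purpose), False)

# Example: the user shares wearable data for coaching, but not with third parties.
consent = ConsentRecord(user_id="user-123")
consent.grant("wearable_vitals", "coaching")
assert consent.allows("wearable_vitals", "coaching")
assert not consent.allows("wearable_vitals", "third_party_sharing")
```

A structure like this does not answer the ethical question by itself, but it forces the design to represent consent at the level of detail the question demands.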
3. Company Ethics Policy
Last but not least are the recurring hard questions that businesses and institutions need to take a stand on. What is their policy for working with the military and law enforcement? Under what conditions should such collaborations be formed? If research or products can be used to do harm, should the organization still publish or release them?
To return to the AI health coach example: The company developing this product must decide crucial policy questions, such as, “Will this product be made available to aggressive businesses or oppressive governments that could use the intimate data against individuals?”
These questions take us beyond specific product- or technology-related ethical problems to broader ethical concerns. The organization’s ethics team—which should include not just ethicists but also stakeholders from other departments or disciplines working on ethical questions—needs to set policies to help guide both project teams and the leadership. Also, many ethical questions will have more than one “right” answer. While this plurality is acceptable and even welcomed, decisions must be made to avoid deep inconsistencies.
Most of the largest tech companies today seem to be on board with the idea of establishing firm company ethics policies. They have established ethics teams and released AI principles that, they claim, guide company policy. Yet the public is not well informed about how these policies were devised or how they figure in the companies' decision making.
The critical thing to recognize is that none of the issues outlined here are lofty ethical musings. They are actionable ethics decisions. And these decisions require going beyond stating moral preferences or feelings—we have to engage in ethical reasoning. Fortunately, a large body of thought exists to help us in ethical reasoning: Drawing from moral and political philosophy, applied ethics is a vast field of knowledge, skills, and tools distilled over two millennia. And while some questions in AI ethics are truly novel, most ethical problems that we encounter are not new within applied ethics. We need to tap this rich body of knowledge to help solve today’s problems.
Unless we are dealing with large-scale policy questions, most AI ethics issues are everyday problems that need to be solved under time pressure during product development. For that reason precisely, AI ethics necessitates more than merely establishing an ethics board to police R&D.
It requires that institutions and businesses adopt a framework to guide their research and development. It requires training, mentorship, and guidance for developers and researchers as they work through product innovation. Of course, larger businesses and institutions have more resources to implement this model fully. But even startups, incubators, and innovation centers can, and should, adopt an ethics framework and integrate it into their environment as they build the AI technology that will impact not just industry but society as well.