Can technology be responsibly designed in a vacuum?
It can’t be, and it shouldn’t be, according to ethicists. These thinkers are articulating a vision in which the practice of AI ethics moves beyond identifying, after the fact, individual incidents in which AI has gone awry. To them, ethical AI has to begin at the beginning: with inclusive design processes.
The argument goes that the more isolated technologists are from the problems they’re trying to solve with their AI tools, the higher the potential for unintended negative consequences. The solution is to make sure that design teams include a diversity of voices and experiences. Broader discussion of AI issues—in the media, for example—should incorporate those voices as well.
Participation on these teams and in these discussions shouldn’t require a computer science degree, either. What’s needed instead is a broad human and social vision, and the ability to foresee the consequences that AI solutions could create down the line.
“Experts from outside the technical domain can bring AI work into proximity with the context where the technology will be used and have an impact,” says Julia Rhodes Davis, director of partnerships at Partnership on AI, a multi-stakeholder research organization.
In other words, the work of AI engineers and researchers devising solutions for, say, subsistence farmers or the mentally ill may benefit from the insights of the people on whose behalf they’re working. What matters most to those people?
AI: Not A Rationally Neutral Overmind
Contrary to the perception of it as a rationally neutral overmind, AI reflects the biases and blind spots of its creators. And so far, most of those creators have been drawn from a narrow pool of tech talent.
But as Davis points out, there is a lot of evidence of the bad outcomes that can result when we lack “a diversity of voices or experiences in designing technology.”
For example, training a facial recognition algorithm with a data set that’s overwhelmingly white and male results in much higher error rates when it’s tasked with identifying women of color. Similarly, voice recognition tech struggles to interpret commands from “low-income, rural, less educated, and non-native speakers” because their voices are infrequently included in training audio, according to a recent Accenture article that appeared in the MIT Sloan Management Review.
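The mechanics behind such failures are straightforward to demonstrate. Below is a minimal, hypothetical Python sketch (synthetic data and scikit-learn, not drawn from any system mentioned in this article) showing how a training set dominated by one group produces a model that performs far worse on an underrepresented group, and how evaluating each group separately exposes a gap that a single overall accuracy figure would hide.

```python
# Minimal, illustrative sketch: skewed training data produces
# disparate per-group error rates. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy two-class data; `shift` stands in for group-specific traits."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift
    return X, y.astype(int)

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Disaggregated evaluation: score each group separately instead of
# pooling everything into one headline accuracy number.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name}: accuracy = {accuracy:.1%}")
```

On data constructed like this, the model typically scores well on the well-represented group and near chance on the other, the same pattern the facial and voice recognition findings describe.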
Luckily, organizations already exist that facilitate collaboration among technologists, problem holders, ethicists and social scientists.
“Entities like the Partnership on AI are able to bridge the gap between different groups that don’t necessarily work together, resulting in a much wider set of perspectives,” says Anna Bethke, head of AI for social good at Intel’s Artificial Intelligence Products Group.
This inclusionary approach can slow the development, prototyping and rollout of AI solutions. But from the perspective of the Partnership’s members, Intel among them, that’s a feature, not a bug.
“It’s a way to hit the brakes and move more purposefully,” Davis says. “We might mitigate harms that are only realized if those perspectives are not accounted for.”
The End Of Top-Down Thinking
It’s easier to bring inclusionary design to AI if you’re willing to look past the traditional top-down tech design process, in which builders and users have little overlap or direct contact with one another.
In the case of a conventional software product, designers may take user feedback and input into account for a future version, but user experience does not change the extant product itself. AI, by contrast, develops and grows through feedback loops, learning from the way it’s used. That means users and builders play much more similar roles in determining how the system will grow over time.
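That loop can be made concrete with a short, hypothetical Python sketch (using scikit-learn’s incremental SGDClassifier; the feedback signal here is invented purely for illustration). A conventional shipped product would stop after the initial fit; an AI system keeps folding user interactions back into itself as new training examples.

```python
# Hypothetical sketch of a builder/user feedback loop: the deployed
# model is updated from the interactions it mediates, so users help
# determine how it grows over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()  # a linear model that supports incremental updates
classes = np.array([0, 1])

# The builders' part: bootstrap the model on a small seed set pre-launch.
X_seed = rng.normal(size=(100, 3))
y_seed = (X_seed.sum(axis=1) > 0).astype(int)
model.partial_fit(X_seed, y_seed, classes=classes)

def user_feedback(x):
    """Stand-in for a real signal: a click, a correction, a rating."""
    return int(x.sum() > 0)

# The users' part: in production, each interaction becomes a new
# training example, blurring the builder/user distinction.
for _ in range(1000):
    x = rng.normal(size=(1, 3))
    model.partial_fit(x, [user_feedback(x[0])])
```

In a loop like this, responsibility for what the model becomes is genuinely shared, which is exactly why treating users as passive recipients misstates how these systems evolve.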
A binary distinction between builders and users here is therefore worse than useless. It concentrates the responsibility for the creation of negative AI consequences in the hands of a few—a potentially hazardous development. It also “reinforces a lack of agency on users’ part,” undermining what should be a mutually beneficial relationship between users and creators, Davis says.
But that agency is crucial.
Matching The Law To The Tech
Inclusionary design also creates a forum in which technology experts can be heard more directly by the influencers and leaders who will play an essential role in the growth and societal adoption of AI and machine learning applications. That magnifies those leaders’ power in the AI development process, but it also educates them, helping ensure that good AI is empowered by equally good law rather than derailed by misguided bureaucracy.
“Technologists can explain to their policy-making peers not just the impact they can see today, but the potential for the future,” Davis says. The result is mutual comprehension—and better, more humane products.
Too often, Davis says, the tech/government relationship is characterized by a situation in which “a rule, regulation or guideline written today will take months or longer to put in practice.” By the time it takes effect, the “technology can change to a point where the guideline no longer matters.” Getting policymakers involved at the start can help avoid that sort of mismatch.
Another issue that inclusive design can help resolve has to do with cultural differences. Customs vary from place to place, and AI applications need to account for how and where they differ.
In contemporary America, for example, the safety of children tends to be a cultural priority. Other cultures may instead prioritize the well-being of senior citizens. An AI tool will have to take that key difference into account. Involving representatives of the cultures in question on the development side can help make that happen.
Conversations around inclusive AI are sensitive and offer no guaranteed outcomes. But it’s imperative to build a wide platform for collaboration in this time of tremendous change, as AI comes into its own as a technology that will have a pervasive impact on how we live.
“If AI is going to benefit all people and society, all people and society must have a role in shaping AI, its design and deployment,” Davis says.
About the Author
Jason Compton is a writer and reporter with extensive experience in enterprise tech. He is the former executive editor of CRM Magazine.