Frank Pasquale helped put algorithmic accountability on the public agenda with his 2015 book, The Black Box Society: The Secret Algorithms That Control Money and Information. In it, he decried the lack of transparency around algorithms that banks and Silicon Valley companies use to allocate credit, sway consumer spending, and make social media posts go viral.
The progress of artificial intelligence and other technologies, the creakiness of the political process, and the economic and political fallout of the coronavirus pandemic have made the issue even more urgent today, the Brooklyn Law School professor says. AI can help companies sort through job candidates, and more firms are doing that in today’s harsh economic environment. But the technology can simply perpetuate longstanding biases, obscuring them with a veneer of science. The accelerated adoption of AI also threatens more jobs at a time when the global economy has contracted faster than during the global financial crisis, and tens of millions of people around the world have lost work.
What’s needed, says Pasquale, is a more humane AI. That’s the focus of his upcoming book, New Laws of Robotics: Defending Human Expertise in the Age of AI, which is due to be published in late October. The way to get there is to democratize the debate and decision-making process around the technology so that people’s rights are considered as well as corporate profits, and so that AI is adopted in ways that enhance human labor rather than replace it.
Pasquale discussed his AI vision with Kaijia Gu, a partner at Oliver Wyman and leader of the Oliver Wyman Forum’s City Readiness initiative, and Rory Heilakka, a principal with City Readiness.
There has been an increasing focus on ethics and AI since you published The Black Box Society five years ago. Has anything really changed in the interim?
I think there are a lot of hopeful signs coming out of Europe and the UK, and many jurisdictions in Asia, in terms of taking problems of algorithmic accountability and transparency more seriously. But in order to make this work, it can’t just be a conversation among computer scientists. There has to be a way of bringing together ethicists and people in business, law, and social science into a broader conversation about what an accountable algorithmic system looks like.
COVID-19 and the Black Lives Matter protests have changed the debate around healthcare and social justice. Will these developments force the political process to address AI accountability?
I think there will be more scrutiny of predictive policing, facial recognition, and the use of algorithms to allocate police resources. Five years ago, people said if only we had cameras on police, we’d know exactly what happened, and they’d be deterred from wrongdoing. But we see a lot of situations where police turn off the camera, or there are disputes over how the story is told if you release it or cut it in a certain way, and conflicts over who has access to the underlying data. Most chilling of all is the turning of this sort of technology back onto protesters. So you have to think twice: Am I going to go to the Black Lives Matter protest knowing that there are facial recognition databases of persons there, and entities that may watchlist people who happen to have been in a place where some random person does something violent?
Five years ago, I thought a moratorium on all facial recognition would be overreaching. But now when I see some of its misuses, I certainly understand why advocacy groups are calling for bans or moratoriums.
The pandemic has accelerated digitization by companies. Where are you seeing that taking place with AI?
One of the areas where I think it’s advancing fastest, and I have many legal and ethical concerns, is in hiring. There are massive companies where you might have a thousand applicants for 20 positions, especially in an era of mass unemployment. And there are many firms that say, give us a corpus of data about your current employees, and we’ll try to find applicants who are most like them. Firms may have been biased in the past in how they hired, and this may end up just being a way of laundering that bias through an AI system to scientifically rationalize it.
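To make that laundering mechanism concrete, here is a minimal sketch of a “find applicants most like our current employees” screener. Everything in it is synthetic and invented for illustration; the point is that such a model can skew toward the historically favored group without ever seeing a protected attribute, so long as some proxy feature correlates with it.

```python
# Minimal sketch of "find applicants most like our current employees."
# All data is synthetic; the features are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic people: past hiring skewed toward group A, and group
# membership correlates with a proxy feature (say, zip code).
def make_people(n, frac_a):
    group = rng.random(n) < frac_a                 # True = group A
    proxy = np.where(group, 1.0, 0.0) + rng.normal(0, 0.3, n)
    skill = rng.normal(0, 1, n)                    # actually job-relevant
    return group, np.column_stack([proxy, skill])

emp_group, emp_X = make_people(200, frac_a=0.9)    # biased past hires
app_group, app_X = make_people(1000, frac_a=0.5)   # balanced applicant pool

# Score applicants by similarity to the employee centroid.
# Note: the protected attribute itself is never used.
centroid = emp_X.mean(axis=0)
scores = -np.linalg.norm(app_X - centroid, axis=1)

top = np.argsort(scores)[-20:]                     # "hire" the top 20
print(f"Group A share of applicants: {app_group.mean():.0%}")
print(f"Group A share of top 20:     {app_group[top].mean():.0%}")
# The proxy feature carries the old skew into the new rankings:
# past bias is laundered through an apparently neutral score.
```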
A second concern is how dehumanizing it can be not to be important enough to merit having a person deal with you. Perhaps a better way would be to make AI part of the system, but not dominant or determinative. Objective criteria for hiring are, to my mind, much more legitimate, fair, and inclusive than black-box hiring based on things as random as tone of voice, eyeball movement, or facial expressions.
What about in medicine?
There is so much low-hanging fruit both in terms of data collection by patients that could help inform their healthcare, and by healthcare providers in terms of the integration of data. COVID has underscored the lack of a unified public health system in the US. Countries that have had world-class responses, like Taiwan and South Korea, have very tight unification of the electronic health record system and integrate that with travel systems – knowing who’s come into the country and where they’ve been. The more you talk about enhancing capacity here, though, the more we have to have a frank conversation about civil liberties, and develop solid protections for any data gathered.
Should the regulatory or legal framework apply to the technology or the application of the technology?
That’s a great distinction. I want to keep open the possibility of broad technology moratoria or bans because I think there are certain things that are so worrisome that we just need to call a time out to figure out as a society what the rules are. Facial analysis, for example. Some firms say they can analyze someone’s face and determine if they’re a terrorist, a pedophile, a criminal, or something like that. That’s deeply disturbing because it seems so unlikely to work, and because the consequences could be so severe.
But the main thing that’s going to happen is regulation of use of technologies. Very few persons want to ban drones entirely, but certain uses — to, say, stalk someone — are beyond the pale. I would be very happy to see cities putting in place ordinances that say you can’t fly your drone with a camera within 10 feet of a home’s window and keep it there for more than five seconds.
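A rule written that concretely can be checked mechanically against a flight log. Here is a minimal sketch, assuming an invented track format of timestamped positions in feet and a hypothetical window location; real enforcement would obviously need reliable position data.

```python
# Sketch of checking a hypothetical ordinance: flag any stretch where a
# camera drone stays within 10 feet of a window for more than 5 seconds.
# The track format and coordinates are invented for illustration.
from math import dist

WINDOW = (0.0, 0.0)          # window position, feet
MAX_FT, MAX_SEC = 10.0, 5.0

def violations(track):
    """track: list of (t_seconds, x_ft, y_ft) samples, time-ordered."""
    found, start = [], None
    for t, x, y in track:
        if dist((x, y), WINDOW) < MAX_FT:
            start = t if start is None else start
            if t - start > MAX_SEC:
                found.append((start, t))
                start = None   # flag this stretch, then restart the clock
        else:
            start = None
    return found

track = [(t, 3.0, 4.0) for t in range(8)] + [(8, 50.0, 50.0)]
print(violations(track))       # loiters 5 ft from the window for >5 s
```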
How do you deal with the dilemma of track and trace, where there’s a trade-off between the public health interest and people’s privacy?
I think the conversation in the US and Europe got off on the wrong foot. There was this huge initial debate about centralized versus decentralized infrastructures: advocates of decentralized protocols versus public health officials who said those protocols unnecessarily limited their ability to get access to the data they needed.
In jurisdictions that did best, they realized there was another trade-off you could pursue. We may need intensive and comprehensive surveillance, just for public health, that enables us to rapidly respond to initial outbreaks and clusters. And if we’re able to do that, then everyone else has significantly more freedom to conduct their lives. I want to get that trade-off on the table as well.
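For context, the decentralized design at issue works roughly like this: phones broadcast short-lived random tokens and record the tokens they hear; someone who tests positive publishes only their own tokens, and every match is computed on-device, so no central authority learns who met whom. The sketch below is a toy in that spirit, heavily simplified and not a faithful rendering of any specific protocol.

```python
# Toy sketch of decentralized exposure matching (in the spirit of
# protocols like DP-3T; heavily simplified for illustration).
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []    # tokens this phone has broadcast
        self.heard = set()     # tokens heard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(8)   # fresh random token per interval
        self.my_tokens.append(token)
        return token

    def exposed(self, published):
        # Matching happens on-device; no central log of contacts.
        return bool(self.heard & set(published))

alice, bob, carol = Phone(), Phone(), Phone()
bob.heard.add(alice.broadcast())       # Alice and Bob were nearby
carol.broadcast()                      # Carol met no one

# Alice tests positive and (with consent) publishes only her own tokens.
published = alice.my_tokens
print(bob.exposed(published))    # True
print(carol.exposed(published))  # False
```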
People have been worrying about the impact of technology on jobs since before the Luddites. Is AI different in terms of a potential negative impact?
I cannot speak as to whether this time is different, but we can make it different if we choose the right laws and policies. The key question I want to answer is how we structure society to democratize expertise and participation in the development of AI. Rather than asking, can AI replace doctors and nurses, I focus on policies that are designed to ensure that we get proper input from health professionals with domain expertise to ensure better outcomes, and better processes in healthcare organizations. And I do similar analyses in fields ranging from education to journalism to legal practice.
You talk about developing AI to make human labor more valuable. That sounds great in theory; how do you do it in practice?
Let’s think about a potential robot anesthetist. The big enthusiasm among some folks is about AI replacing rather than complementing physicians. We need to think about how tasks could be redefined with the help of technology. We may find that robotic anesthesia tools and other AI technology can increase the value of the labor of, say, nurse anesthetists. That might allow them to do more things while also giving the physician anesthesiologist more ability to closely watch the hardest cases. That’s the ultimate goal: to have better tools and AI that are going to complement professionals.
When I was growing up in Oklahoma and Arizona, I had no access to, say, French lessons or Chinese lessons online. Had I been growing up now, with the rise of online learning tools, it’s quite possible I could have. Technology can open so many doors. But we have to be sure that as it does so, it doesn’t trample on in-person expertise, and all the social and economic opportunities that it creates.