The European Union is once again leading the world on data rights. In its first major initiative on artificial intelligence, the European Commission on Feb. 19 released a 26-page white paper on developing and regulating the critical technology. The proposal could take years to become law, but it may well shape legislation in many countries around the world.
The Commission, the EU’s Brussels-based executive, hopes the new strategy framework will help the 27-nation bloc keep pace with the US and China while protecting individuals from potential abuses.
“We want citizens to trust the new technology,” said Commission President Ursula von der Leyen, who underscored the urgency of AI for Europe by demanding that a strategy be adopted within 100 days of her team taking office last December. “This is why we are promoting a responsible, human-centered approach to artificial intelligence.”
The Commission is taking a page from its data privacy playbook: its 2018 General Data Protection Regulation set out principles that other jurisdictions, like California, have followed at least in part. Whatever you think about the new AI proposal, it too is likely to influence what happens globally. Foreign companies operating in Europe will also have to abide by the EU’s approach whenever their AI systems have a European connection.
There are still many details to flesh out. The Commission is putting its proposal out for public consultation, and expects to propose specific legislation before the end of the year. Getting anything on the books will likely take much longer. It took six years for the EU to turn its initial proposal for the GDPR into an enforceable statute.
The Oliver Wyman Forum agrees that broad, best-practice principles for data governance and stewardship are needed. We think they are crucial for ensuring the Future of Data benefits society as a whole. Human oversight is important for capturing the economic benefits of AI technologies while preserving individuals’ safety and privacy.
Getting the data framework right is particularly critical for AI. You can’t judge an AI system without knowing the data fueling it. Biased data means biased AI.
The EU proposal takes some steps in this direction.
It begins with a risk-based approach. As Margrethe Vestager, the executive vice president leading the Commission’s work on AI, put it, using facial recognition to unlock your smartphone is very different from using the technology to monitor people in public spaces. In considering new regulations, the Commission will focus on sensitive applications in sectors like recruitment, health care, and law enforcement, and it calls for such AI systems to be transparent and supervised by people.
The Commission paper also addresses the importance of good data in developing trustworthy AI applications. Information used in AI systems should comply with existing EU rules like the GDPR, and the paper calls for developers of sensitive applications to document the data they use, as well as their training and testing methodologies.
The Commission also recognizes that artificial intelligence is a critical technology for future economic development. The paper sets a target of attracting annual investment of 20 billion euros, or nearly $22 billion, in AI development over the next decade. That is a significant sum, and it will be interesting to see whether it helps Europe catch up with the US and China.