A Proposal to Regulate “High-Risk” AI
On February 19, the European Commission published a detailed White Paper on Artificial Intelligence (AI) to fanfare in Brussels. But it has received insufficient attention in the United States, especially in U.S. corporate boardrooms. Buried in the paper’s dry bureaucratic language is a sweeping ambition: a new regulatory framework aimed at protecting European consumers and citizens from “high-risk” AI. The proposed regulations could impose stringent reporting, testing, certification, and transparency requirements on certain types of data and AI software used in thousands of products and services sold in the EU. If and when these proposals become law, the implications for companies wishing to do business in the EU will be serious.
What Happens in Brussels Doesn’t Stay in Brussels
The EU makes no secret of its ambition to be the world’s leader in regulating AI applications to reduce risks to consumers. Political distraction and legislative inaction in Washington have created a vacuum of global leadership in setting rules and standards on AI, and the EU is seizing the initiative to fill it. In doing so, the EU is following a game plan akin to its successful gambit in 2018 to effectively set and enforce global standards on data privacy via the General Data Protection Regulation (GDPR).
Even though GDPR rules had no force of law beyond the borders of EU member states, every U.S. company that wanted to do business in the EU – an $18 trillion market with over 500 million consumers – was forced to comply with the EU’s data privacy rules. GDPR served as proof of concept to EU leaders that EU-wide regulations backed by compliance and enforcement penalties effectively become global standards – a compelling example of “superlaw” at work. Once GDPR became law in the EU, it wasn’t long before California followed suit by passing the California Consumer Privacy Act to enhance privacy rights and consumer protection for California residents. And as with GDPR, once the EU approves a new AI regulatory framework, U.S. companies that want to do business in the EU will need to bring their AI software and hardware into full compliance quickly.
What Is “High-Risk” AI and How Would the EU Regulate It?
The White Paper calls for a new regulatory framework covering AI-enabled software and hardware deployed in “high-risk” sectors where misuse could significantly harm users, such as healthcare, transport, energy, and cybersecurity. The framework would also cover “high-risk” uses in any sector where an AI application could cause physical harm, material damage, or a violation of a user’s rights.
AI applications used in such sectors and use cases would be subject to reporting and transparency requirements. These would include keeping detailed records on the training data used to develop an algorithm, sharing information on the AI’s capabilities, retaining human oversight of automated AI decision-making, certifying an AI system’s robustness, and taking additional measures to prevent privacy abuses stemming from facial recognition systems. Companies would need to submit AI applications to EU member states’ certified testing centers for “prior conformity assessments.” The EU would also establish a voluntary labelling process for AI applications used in non-high-risk sectors to provide additional confidence to EU consumers.
For U.S. Companies, an EU Framework Is Better than No Framework at All
The Commission believes its proposed regulations for AI would bring three major benefits to consumers and producers alike: (1) an EU-wide framework would spare companies the product fragmentation and legal confusion they would face if every EU country had a different set of rules; (2) the requirements imposed on high-risk AI would build more trust in AI on the part of consumers, in contrast to the deep distrust many feel today toward “Big Tech”; and (3) a regulatory framework based on principles of human rights, personal privacy, and product safety – values shared on both sides of the Atlantic – is crucial from a geopolitical standpoint, since in its absence one likely alternative in the coming decade would be an AI regulatory framework based on Chinese government norms and standards. These are persuasive arguments, made more compelling by the opportunity U.S. companies now have to help shape the final form of future EU regulations.
How to Shape the Outcome
The ideas spelled out in the White Paper are not etched in stone. The Commission has opened a public consultation, seeking comments via its website until May 19, 2020. The more feedback the Commission receives, including from U.S. companies, the greater the likelihood that their views will be meaningfully considered. U.S. companies can also submit comments to the U.S. Commerce Department, which has the lead for the U.S. government in engaging the Commission on its proposals. Finally, companies can register their views with the U.S. Chamber of Commerce, which has an active lobbying presence with the EU in Brussels.
This moment also offers companies an opportunity to demonstrate to their shareholders, customers, and stakeholders a commitment to developing safer, more transparent, and more robust AI applications – turning a regulatory burden into a proactive, socially responsible best practice. One simple way to demonstrate this commitment would be to issue a company mission statement of AI principles, or to sign on to the AI Principles adopted in 2019 by the Organization for Economic Cooperation and Development (OECD).
The bottom line is that while the EU’s timetable for turning these proposals into formal laws and rules is not yet clear, an EU regulatory framework along these lines will almost certainly arrive within a few years. Every U.S. company should act now to adapt to – and eventually benefit from – the EU’s coming global AI framework.