On June 17, 2021, the FiscalNote Executive Institute partnered with TheBridge to host “The Truth and Consequences of AI: Getting Ahead of Risk, Regulation and Ethics,” an interactive virtual discussion featuring Avi Gesser, Partner at Debevoise & Plimpton; Renée Cummings, Founder & CEO, Urban AI, LLC, East Coast Regional Leader, Women in AI Ethics and Data Activist in Residence, University of Virginia; and Vlad Eidelman, Chief Scientist and Head of AI Research at FiscalNote. The closed-door conversation was moderated by Allie Brandenburger, Co-founder and CEO of TheBridge.


The following are key takeaways from the program:

Most regulators are approaching AI like they approached cybersecurity.

  • A lot of the same regulators who focused on privacy and cybersecurity are now focused on AI.
  • Most regulators are zeroing in on three key areas: privacy, accountability, and vendor management.
  • On the privacy side, regulators are worried about data being used for purposes outside of what was originally intended. They want to ensure that companies have the right to use the information they’re putting into their model and that all the proper consents and notices are in place.
  • On the accountability side, regulators want to see a senior manager or a committee in place to oversee AI regulatory compliance and risk mitigation. Disclosure and transparency are also on their radars: Regulators want to ensure that people know when a decision has been made by a machine, and what appeal rights they have if they don’t like that decision.
  • On the vendor management side, regulators are concerned about companies that are outsourcing their AI to third parties without any knowledge of that party’s internal AI policies or risk mitigation. They want companies to establish an AI vendor risk framework, which may include questionnaires, risk assessments and other contractual provisions that ensure some level of quality control.
  • For now, most regulatory efforts will mimic what happened with cybersecurity: piecemeal, state-level laws built from a patchwork of existing consumer protection, anti-discrimination, and privacy laws.

Until firm regulations are in place, it’s up to companies to do their due diligence.

  • With no federal AI regulations or firm guidelines in place, companies must be proactive about mitigating the business risks of AI, as well as protecting themselves against future regulations.
  • Companies need to take the lead and create internal structures and governance around AI. Possible strategies include: 1) Implementing a risk rating system that clearly identifies high-risk AI. 2) Reviewing inputs to make sure they line up with intentions. 3) Testing outputs to ensure they’re not behaving in a way that is unfair or erratic. 4) Gathering an internal team comprising different departments (e.g., legal, compliance, audit, HR, business) to thoroughly analyze AI from all angles and help reduce risk and identify potential bias. 5) Establishing a review process for third-party AI vendors to confirm they meet the same standards as internal AI.
  • Whatever proactive measures a company decides to take, documentation will be key. Regulators will need proof that companies have attempted to mitigate risk.

The business risks of AI are significant and could be detrimental to companies that aren’t proactive.

  • Many companies have focused their efforts on optimizing AI for accuracy and prediction without considering risk or ethical concerns. However, after watching the fallout in Big Tech, industry leaders agree that companies need to do more, and societal impact needs to be top of mind.
  • Because AI regulation is evolving, companies should design beyond the current law. This will decrease the potential losses their business could face down the road as new or updated regulations are put in place.
  • The biggest risk for companies is reputational. The public reaction to any AI failure will not only hurt business, it will likely inform the regulatory response as well.

Ethics needs to be considered across the entire AI pipeline—from development through deployment.

  • Companies that don’t prioritize ethics ahead of regulation are playing with fire. They may have the greatest AI product today, but end up defending it in court tomorrow.
  • When it comes to AI ethics, there is a lot of talk and “ethical theater,” but not a lot of follow-through.
  • Efforts are being made in academia to raise consciousness among data scientists so that ethics is considered at the design stage. The goal is to change thinking because thinking informs our technology.
  • We need to build the kind of consciousness required not only to do the right thing in real time, but to understand the impact of the algorithm and to anticipate the long-term impacts of AI. This will legitimize AI as an accurate and trustworthy technology.
  • Stakeholders need to understand the power of AI technology; while it is needed for progress, it also brings a dangerous form of privilege and prejudice. Data scientists, for example, have a lot of discretion over what data they use, where they source it, and how they write the code that processes it. Where there is discretion, there needs to be an ethical resilience that understands it’s not only about risk, it’s about doing what is right.

Looking ahead, AI regulations will likely take more of a “soft law” approach rather than stringent enforcement or rigorous guardrails.

  • The EU’s proposed AI regulation is a good example of what a federal law could look like in the future, although one isn’t likely anytime soon.
  • Unless there is a crisis, most AI regulation will probably happen at the state-level for the foreseeable future. Regulations will likely be industry-specific.
  • Regulators can’t keep up with the pace of innovation happening in the AI sector, making it challenging to develop any specific guidance. This will leave a decent amount of leeway for practitioners.
  • In general, regulators will want companies to look at high-risk use cases, ensure transparency in decision making, conduct proper testing to confirm there isn’t bias, and monitor results on an ongoing basis. In any instance where AI behaves unexpectedly, companies will have a mandatory obligation to report it to the government.