NEW YORK – On Tuesday, November 12, 2019, FiscalNote and Paul Hastings co-hosted a roundtable discussion called “Big Data and AI Collide with the Legal and Policy Ecosystem.” Participants discussed the role of legal, compliance, and government affairs professionals in building and protecting businesses and reputations, as well as in mitigating the risks posed by the Artificial Intelligence (AI) revolution. The event kicked off with brief introductory remarks from the co-facilitators: Dave Curran, Senior Vice President & Chief Business Officer at FiscalNote; Robert Silvers, former US Assistant Secretary for Cyber Policy; and Dr. Vlad Eidelman, VP of Research at FiscalNote. Curran also noted that the discussion would be off the record, meaning only those interested in receiving attribution for their comments would be named in this takeaway.
AI and Technology Governance
Curran then introduced Robert Silvers, a partner at Paul Hastings, and thanked him for hosting the event. Silvers commented on the need for organizations to develop strategies for managing reputational risk. He spoke about the benefits and importance of using AI, from lives saved through healthcare diagnoses to the positive impacts of self-driving cars. Silvers also emphasized that failures in AI governance can have catastrophic consequences.
“You have an enormous amount of consumer data used to train algorithms, generating consumer outcomes, but it [also] brings in cybersecurity consequences. Sometimes, you might have physical safety concerns as well.”
Building Practical AI Tools
Curran introduced Vlad Eidelman, who noted that most work on AI today is not focused on achieving superhuman-level general intelligence. Although AI does achieve superhuman-level abilities on very narrow tasks, the field is focused on building practical AI tools for:
- Pattern recognition (finding interesting correlations in data), and
- Mathematical modeling (of whatever outcomes you’re trying to achieve).
While machine learning has the computational power to do advanced analysis, Eidelman underscored the limitations of artificial intelligence, specifically in its decision-making process and in understanding human biases. Eidelman concluded his remarks by encouraging everyone to consider the risks of data biases and data privacy when utilizing rule-based and machine learning systems.
Under Chatham House Rule, the conversation moved to the group. The following themes were revisited throughout the discussion:
- Data Relevance
- Narrative and Accountability
- AI Governance
- Data Compliance
- People, Process, Technology
Data Relevance
If a regulator is investigating your company’s historical data, what processes do you have to employ to provide relevant information that cuts across departments in a timely manner? Where do you start?
A data expert in the financial industry explained that in order to have a sophisticated data reporting structure, you first need to decide what information is relevant to you. Then, you must develop a robust workflow to manage that information and provide a complete picture of what you do.
Everyone from established businesses to rising start-ups in Silicon Valley deals with data. The difference lies in how they manage information and use it for meaningful analysis. To cut through the noise of irrelevant information, we should ask ourselves three important questions:
- Where did this data come from?
- Why do you have it?
- What do you do with it?
Narrative and Accountability: Who is Responsible?
In response to the data-reporting dialogue, Curran asked the group who is responsible for owning the narrative in a company. The room unanimously responded that this narrative falls to CEOs. Curran then asked who is actually going to own the narrative if CEOs lack a technical background.
The group then shared the following viewpoints:
- It should not be one person’s responsibility to come up with a narrative. Sales and marketing teams are the voice of the customers. Therefore, they should be empowered as part of the process and the narrative.
- Product people often need to work across departments, so they should have a voice in helping to package the company’s vision.
- In the face of an acquisition, companies often struggle with system integrations and data management. When merging two record-keeping systems, it is important to be mindful of how you handle sensitive information between the companies.
- Beyond the explainability of the data and algorithms you have put in place, there needs to be an accountability mechanism that takes into consideration how you mitigate a problem and how you handle ethics concerns.
One participant posited that there should be a task force examining AI and data issues as a team effort. Marketing teams and CEOs shouldn’t be the only ones delivering a company’s message.
Responding to this suggestion, one participant explained that her company has a council overseeing AI and data issues. Through this council, the company created a data privacy office that rolls up to the general counsel and examines the types of international data the company holds and how that data is used.

A senior executive at a trade association spoke to AI standards. He encouraged participants to look at both a recent report on Algorithmic Fairness and the National Institute of Standards and Technology (NIST)’s plan for prioritizing federal agency engagement in the development of standards for AI. This plan focuses on developing standards for explainability, auditability, and transparency.
Geoff Odlum, a former State Department diplomat who is now President of Odlum Global Strategies, argued that regulators are also very interested in the algorithms being used to process the data. More specifically, they are interested in ensuring that those algorithms are safe, explainable, non-discriminatory, and protective of data privacy. Odlum recommended that companies looking to work with the US Government publicly subscribe to clear, stated principles on their use of AI, and he referred participants to the Organization for Economic Cooperation and Development (OECD)’s Statement on AI Principles and Ethics, which the US Government signed onto in May 2019, as a good starting point. Odlum also noted the important recommendations just issued by the US National Security Commission on AI (NSCAI) in its second interim report, regarding the need for the US government to partner more closely with industry and academia to ensure continued US leadership on AI, and he encouraged roundtable participants to engage the NSCAI directly on its recommendations.
Data Compliance
Enacted in 2018, the California Consumer Privacy Act (CCPA) is designed to protect consumers’ data privacy rights related to personal information collected by businesses. Certain provisions of the law mirror those of the European Union’s General Data Protection Regulation (GDPR). With public hearings scheduled throughout California in early December 2019, companies are actively evaluating strategies for anticipating and mitigating risks. An attendee from a multinational company commented on their recent practice of bringing data analytics in-house and developing guardrails around them in response to the recent CCPA developments. On GDPR compliance, an executive in the insurance industry stressed the need for transparency in data collection methods. As protection standards rise, it is critical to consider how risks impact people, whether positively or negatively.
People, Process, Technology
One refrain of the discussion was that, rather than fearing AI taking over the job market, we must focus on setting standards, developing people, and building processes. Chris Lu, FiscalNote Senior Advisor and former Deputy Secretary of Labor under President Obama, urged the group to remember that although AI is a powerful tool, we should not use it to make all of our decisions. Otherwise, we remove human intelligence from the decision-making process and risk making poor business decisions.
Silvers [Paul Hastings] added that it is important not to conclude that all the work awaiting businesses is on the technical development side. Workforce management technology and diversity in the workplace are of equal importance.
The discussion closed with Curran summarizing some of the key takeaways and thanking attendees for their thoughtful comments.
The next Big Data and AI roundtable discussion will take place in early 2020 with a special focus on AI in surveillance.
Check out these resources for more information about AI standards and principles:
- SIIA Issue Brief – Algorithmic Fairness
- National Institute of Standards and Technology (NIST) – AI Standards
- Organization for Economic Cooperation and Development (OECD) – Statement on AI Principles and Ethics
- National Security Commission on AI – NSCAI Report