On May 18, 2023, the FiscalNote Executive Institute and Debevoise & Plimpton hosted a closed-door, invite-only virtual strategy session, “The Wild West of Generative AI: Assessing Your Company’s Regulatory Risks and Opportunities.”
The session was moderated by Vlad Eidelman, FiscalNote’s Chief Technology Officer and Chief Scientist, and featured an expert panel.
Below are some key insights from their discussion.
Generative AI is different from “traditional” AI
- Generative AI is focused on creating content. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI goes a step further by creating new content in the form of images, text, audio, and more.
- It’s more versatile. The large language models that power generative AI can handle a far wider variety of tasks than traditional AI systems, which are typically built for a single, narrow purpose.
- It’s widely accessible. Unlike much traditional AI, generative AI can be accessed easily by anyone with a computer and internet access: Within two months of ChatGPT’s launch in late 2022, for example, 100 million people had already used it.
- Its commercial potential is readily apparent. The “tech stacks” being built on top of generative AI seem especially conducive to commercialization.
Generative AI also poses many potential risks for companies
- Copyright risk. Generative AI may use copyrighted material to create content without the consent of the owner of that copyright, thereby exposing producers or users of generative AI to potential copyright lawsuits.
- Data and IP risk. Using generative AI to create text also carries the risk of leaking confidential data and other intellectual property.
- Quality-control risk. Companies need to balance the speed and cost advantages of generative AI with the potential disadvantages of lower-quality output. For instance, generative AI can often “hallucinate” — i.e., create plausible, but incorrect, statements.
- Discrimination risk. Generative AI that is trained on data reflecting historical biases and other harmful prejudices may produce content that perpetuates racial and other forms of discrimination.
- Transparency risk. Clients and regulators will increasingly want to know whether generative AI is being used to create a company’s offerings. In addition, regulators, especially in the EU, may demand transparency across the generative AI “value chain” — from foundation-model developers to the actual AI users to the end beneficiaries, such as consumers.
- Off-platform risk. Employees may circumvent company restrictions on generative AI by doing company work on private computers.
- Black-box risk. As with other AI, companies may struggle to understand — and explain — how generative AI arrives at its creations.
- Skills risk. As more tasks are outsourced to generative AI, an organization’s human capabilities may wither in certain areas.
- Ethical risk. Companies will want to be sure that their ethical codes and ESG frameworks are reinforced — not undermined — by generative AI.
- Quantity risk. As the number of generative AI tools explodes, evaluating their security vulnerabilities may become more costly and time-consuming.
- Compliance risk. Smaller firms with fewer resources may face a greater burden from AI regulation. And pilot attempts to reduce that potential burden — by creating a marketplace for standardized compliance certifications — are probably still at least a year or two away from scalability.
- Verification risk. The short track record of many firms developing generative AI may make it even harder to assess the reliability of their products.
- No one-size-fits-all factor. The risks of using generative AI will vary from organization to organization — such risks will generally be greater for a defense contractor than for an ice cream shop, for example.
Dealing with employees’ use of generative AI
- Strive for a middle ground. Total bans on the use of generative AI are unrealistic and will be difficult to enforce. But a complete absence of rules will exacerbate many of the aforementioned risks.
- Permit, but regulate. Encourage experimentation and innovation within clear parameters. And create approval/rejection processes for isolated employee use cases outside those parameters.
Dealing with customers and regulators
- Adopt guardrails. Thoughtful processes and procedures are needed to reduce the risks associated with generative AI, as well as to respond to customers’ and regulators’ inevitable demands for appropriate guardrails.
- Create guardrails holistically. When developing AI governance, seek input from cross-functional teams and consider developing an organizational “code of AI ethics.”
- Monitoring and sanctions are essential. If guardrails are to be effective, employees who break the rules must face consequences.
- Sit at the table. Proactively engage legislators and regulators (many of whom lack extensive business and tech expertise) to educate them about the benefits and risks of generative AI. Remember: if you’re not at the table, you’re on the menu.
- Look in all directions. Different AI regulatory frameworks are already under discussion at the state level (such as in California), the national level (Italy temporarily banned ChatGPT), and the supranational level (the EU).
- Big regulatory trend #1. Governments are considering both how to apply existing laws to regulate AI — such as those that deal with data privacy and cybersecurity — and how to create new AI-specific regulations.
- Big regulatory trend #2. Governments are expanding their focus beyond the micro risks of AI (such as discrimination) to also consider the macro risks (such as financial instability).