
Artificial intelligence has captured significant attention for its capacity to transform both business and society, and policymakers have raised legitimate concerns about its risks. In response, the Biden Administration has taken proactive steps to leverage the transformative capabilities of AI while establishing regulatory frameworks to ensure its application aligns with societal benefits. Here, we present an overview of two of the administration’s initiatives and explore how companies can learn from these actions to structure their own AI programs effectively.

AI Bill of Rights

While recognizing the potential benefits of automated systems in various sectors, the Biden Administration emphasizes the need to protect civil rights and ensure that the use of these technologies aligns with democratic values. To guide the responsible development and use of artificial intelligence, the White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights that outlines five key principles:

  1. Safe and Effective Systems: Pre-deployment testing and ongoing monitoring should ensure that AI systems are safe and effective and are not designed or used to cause harm.
  2. Algorithmic Discrimination Protections: Measures should be taken to prevent algorithmic discrimination based on protected classifications, employing equity assessments and representative data, with transparency in reporting.
  3. Data Privacy: Data protections should be built into systems by default, users should have control over their data, and surveillance technologies should undergo oversight to safeguard privacy and civil liberties.
  4. Notice and Explanation: Users should receive clear and timely notice when automated systems are used, and explanations of outcomes should be technically valid and written in plain language.
  5. Human Alternatives, Consideration, and Fallback: Users should be able to opt out of automated systems where appropriate and have access to timely human consideration and remedy, especially in sensitive domains, with transparency about how that fallback process works.

About the Author

Ellie Nieves is Vice President & Assistant General Counsel, Strategic Public Policy Initiatives, at The Guardian Life Insurance Company.

Voluntary Commitments from Leading Companies

The Biden Administration has secured the voluntary commitment of seven leading AI companies, including Amazon, Google, and Microsoft, to prioritize safety, security, and transparency in developing AI technology. The companies will conduct internal and external security testing before product release, invest in cybersecurity, and foster third-party discovery of vulnerabilities. They will also develop mechanisms to inform users of AI-generated content, publicly report AI capabilities and limitations, and prioritize research on mitigating societal risks, including bias and discrimination. 

These commitments reflect the importance of safety, security, transparency, and fairness in the development and deployment of AI technologies, which are key elements addressed in the AI Bill of Rights.

What’s Next?

The administration is developing an executive order and bipartisan legislation to promote responsible AI innovation. These efforts extend internationally through consultations with allied countries and organizations. The goal is to establish a strong global framework for the development and use of AI while ensuring safety and non-discrimination for all.

Key Considerations for Businesses Implementing AI

Given these important developments, when implementing AI into business operations, key considerations include:

  1. Ethical and Legal Compliance: Ensure that AI applications comply with ethical standards and legal regulations to avoid bias, discrimination, and potential legal liabilities.
  2. Transparency and Explainability: To build trust with users and stakeholders, AI systems should be transparent and provide clear explanations for their decisions.
  3. Data Privacy and Security: Robust security measures should be taken to protect sensitive information and prevent unauthorized access.
  4. Testing and Validation: Before deploying AI systems, AI models should be tested and validated to ensure that they are safe and effective.
  5. Social Responsibility: Consider the broader social implications of AI usage, ensuring it contributes positively to society and minimizes negative consequences.

By embracing these considerations, businesses can uphold the principles outlined in the AI Bill of Rights and advance toward a safer, more trustworthy AI landscape in line with the voluntary commitments made by industry leaders.

Guardian® is a registered trademark of The Guardian Life Insurance Company of America. Copyright © 2023 The Guardian Life Insurance Company of America 2023-159297 Exp. 8/25
