On January 13, 2025, California Attorney General Rob Bonta released two legal advisories addressing how existing state laws apply to artificial intelligence (AI). “The fifth-largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI,” said Attorney General Bonta. The advisories provide guidance to businesses, healthcare entities, and other organizations that use AI on complying with California’s consumer protection, civil rights, privacy, and healthcare laws. As AI becomes more embedded in daily operations and decision-making, the advisories reinforce the importance of ethical and transparent practices.
Consumer Protection and Civil Rights Obligations
The first advisory, titled "Application of Existing California Laws to Artificial Intelligence," emphasizes that AI systems must align with state laws designed to protect consumers and prevent discrimination. Key points include:
- Bias and Discrimination: Businesses must ensure that their AI systems do not perpetuate or exacerbate biases, particularly those that could negatively impact protected groups. California’s anti-discrimination laws, including the Unruh Civil Rights Act, mandate that businesses provide equitable access and treatment regardless of race, gender, or other protected characteristics.
- Transparency: Companies must disclose when AI tools are used in decisions that affect consumers’ rights or access to services. This ensures consumers are informed and able to exercise their legal rights effectively.
- Privacy Compliance: The advisory clarifies that AI applications must adhere to privacy laws such as the California Consumer Privacy Act (CCPA), as amended and expanded by the California Privacy Rights Act (CPRA). These laws require organizations to limit the collection and use of personal data, honor consumer consent and rights requests, and provide mechanisms for individuals to opt out of certain automated decision-making.
The advisory warns that non-compliance may result in penalties under the Unfair Competition Law (UCL), which prohibits unfair or deceptive business practices. Businesses should proactively assess their AI systems to avoid these risks.
Healthcare Sector-Specific Guidance
The second advisory, "Application of Existing California Laws to Artificial Intelligence in Healthcare," addresses the use of AI in medical settings and highlights:
- Patient Transparency: Healthcare providers are required to notify patients when AI technologies are used in diagnostic or treatment decisions. This fosters trust and allows patients to make informed decisions about their care.
- Testing and Validation: Rigorous testing and validation of AI systems are essential to prevent errors and reduce the likelihood of harm. This includes ensuring that training data is free from biases that could compromise the accuracy or fairness of AI-driven medical tools (an illustrative validation sketch follows this section).
- Privacy Protections: AI systems used in healthcare must comply with the Confidentiality of Medical Information Act (CMIA) and the Health Insurance Portability and Accountability Act (HIPAA). These laws impose stringent requirements for safeguarding patient data and ensuring its secure use.
The advisory also emphasizes ongoing monitoring of AI systems to address emerging risks affecting patient outcomes or legal compliance.
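To make the testing-and-validation point concrete, the sketch below compares a hypothetical diagnostic model's sensitivity and AUC across patient subgroups on a validation set. The column names, decision threshold, and synthetic data are assumptions for illustration only; the advisory does not prescribe any particular metric, tool, or threshold.

```python
# Minimal illustrative sketch: compare a diagnostic model's performance across
# patient subgroups before deployment. All names and values are hypothetical;
# real validation would use held-out clinical data and clinically chosen metrics.
import numpy as np
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_validation(df: pd.DataFrame,
                        group_col: str = "demographic_group",
                        label_col: str = "diagnosis",
                        score_col: str = "model_score",
                        threshold: float = 0.5) -> pd.DataFrame:
    """Report sensitivity (recall) and AUC per subgroup so large gaps can be flagged."""
    rows = []
    for group, sub in df.groupby(group_col):
        preds = (sub[score_col] >= threshold).astype(int)
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": recall_score(sub[label_col], preds),
            "auc": roc_auc_score(sub[label_col], sub[score_col]),
        })
    return pd.DataFrame(rows)

# Synthetic data standing in for a validation set.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "demographic_group": rng.choice(["A", "B"], size=1000),
    "diagnosis": rng.integers(0, 2, size=1000),
    "model_score": rng.random(1000),
})
print(subgroup_validation(demo))
```

A report of this kind would typically be generated on each model update and retained as part of the documentation that demonstrates ongoing monitoring.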
Legislative Developments
In addition to reiterating the applicability of existing laws, the advisories highlight new legislative measures that took effect on January 1, 2025, aimed at regulating the use of AI across industries. These measures include:
- Disclosure Requirements: Businesses are now required to clearly disclose when AI systems are employed, particularly in applications that influence consumer decisions or personal rights.
- Unauthorized Use of Likeness: California law prohibits the use of AI to create replicas of individuals’ likenesses without their explicit consent. This protects against exploitation and unauthorized commercial use.
- Election Integrity: New laws restrict the use of AI in campaign and election-related materials to prevent misinformation and manipulation.
- Prohibitions on Harmful Practices: Regulations explicitly target exploitative or harmful uses of AI, such as systems designed to mislead consumers or unfairly target vulnerable populations.
Practical Steps for Businesses
The advisories provide a roadmap for organizations seeking to align their AI practices with California’s legal standards. Key recommendations from the Attorney General include:
- Conduct Comprehensive Audits: Regularly review AI systems, and the processes used to develop them, to confirm compliance with applicable anti-discrimination, privacy, and transparency requirements.
- Implement Bias Mitigation Strategies: Use diverse datasets and robust testing protocols to identify and address potential biases in AI algorithms (a minimal audit-style check is sketched after this list).
- Enhance Transparency Practices: Develop clear and accessible communication strategies to inform consumers and patients about the role of AI in decision-making processes.
- Train Employees: Provide training for staff on the ethical and legal implications of AI use, ensuring they understand their obligations under California law.
- Engage Legal Counsel: Work with legal experts to navigate the complexities of AI compliance and stay ahead of evolving regulatory requirements.
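One simple audit check that supports the bias-mitigation recommendation is the adverse-impact (“four-fifths”) ratio, which compares each group's selection rate to that of the highest-rate group. The sketch below is an assumption-laden illustration: the field names, the example log, and the 0.8 benchmark are illustrative conventions from employment-selection practice, not thresholds set by the advisories.

```python
# Minimal sketch of one audit check: adverse-impact ratios on the outcomes of
# an AI-assisted decision. Inputs and the 0.8 benchmark are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved, totals = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, benchmark=0.8):
    """Compare each group's selection rate to the highest-rate group and flag gaps."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flag": (r / best) < benchmark}
            for g, r in rates.items()}

# Hypothetical usage: a log of (group, approved) pairs from an automated decision.
log = ([("group_a", True)] * 80 + [("group_a", False)] * 20 +
       [("group_b", True)] * 55 + [("group_b", False)] * 45)
print(adverse_impact_ratios(log))
```

A flagged ratio does not itself establish unlawful discrimination, but it is the kind of signal an audit should surface for legal and technical review.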
Conclusion
The Attorney General’s advisories serve as an important reminder that compliance with state laws is an evolving challenge, and existing laws apply with equal force to new technologies. Companies that implement measures to align with these requirements will mitigate legal risks and support long-term stability in the evolving AI-driven economy.