AI Policy and Governance
Summary
California will lead in ethical, safe, and empowering AI. This roadmap strengthens the economy, safeguards communities, and protects elections through transparent rules, practical oversight, and clear accountability. It focuses on disclosure and provenance, not viewpoint control. It equips workers, voters, and small businesses to benefit from AI while reducing risks.
Outcome
By 2030, California implements a cohesive AI framework that aligns law, procurement, workforce programs, public reporting, and voter education. Trust, safety, and innovation improve while free expression and civil liberties are protected.
Guiding principle
Just-right AI governance. Set rules that are strong enough to protect people and fair enough to let innovation thrive.
Plan and policy
I. Public Education and AI Literacy
Goal: Empower Californians to understand, critique, and safely use AI.
Key proposals
• AI literacy curriculum in K–12, higher education, and adult learning.
• Citizen participation through public input in oversight boards and policymaking.
• Accessible communication that translates technical policies into plain language.
• Critical thinking campaigns that expand initiatives like Pause. Question. Choose.
Principle: knowledge is empowerment.
II. Economic Resilience and Workforce Preparedness
Goal: Prepare Californians for AI-driven economic shifts.
Key proposals
• AI workforce retraining for AI-resistant roles and emerging sectors.
• Automation impact assessments before deployment that evaluate employment effects.
• Equitable access to AI for small businesses and startups while preventing monopolistic practices.
• AI taxation and redistribution so large AI windfalls support workforce and public programs.
Principle: AI should serve the public good, not only profit.
III. AI Safety and Risk Management
Goal: Prevent systemic, technological, or catastrophic AI risks.
Key proposals
• High-risk AI oversight boards for systems that affect health, infrastructure, or public safety.
• Robustness and cybersecurity testing including stress tests and adversarial simulations.
• Alignment and fail-safe research on reliability, predictability, and controlled shutdown.
• Critical infrastructure standards for energy, healthcare, transport, water, and emergency response.
Principle: safety first. AI must be dependable before deployment.
IV. Technical Model Governance
Goal: Ensure responsible AI beyond elections and high-risk domains.
Key proposals
• Data privacy protections that restrict harmful or unconsented use of personal data for training.
• Model documentation and explainability such as model cards and audit trails.
• Third-party audits before deployment in sensitive sectors.
• Open standards and interoperability across industries.
V. Climate, Mobility, and Emergency AI Integration
Goal: Use AI to protect communities, infrastructure, and the environment.
Key proposals
• Disaster preparedness with AI-assisted evacuation planning and resource allocation.
• Climate resilience through predictive modeling for wildfires, floods, and extreme weather.
• Mobility and transport optimization for traffic, public transit, and low-carbon planning.
Principle: AI enhances safety, sustainability, and resilience.
VI. AI and Democratic Integrity
Goal: Safeguard elections, civic discourse, and voter autonomy against AI-enabled manipulation.
Key proposals
• Universal AI content disclosure: all election-related AI content carries machine-readable provenance metadata, with graduated penalties based on reach, intent, and harm.
• Platform accountability: detect and mitigate coordinated disinformation and deepfakes; obligations extend to emerging networks with high civic impact; safe-harbor for good-faith compliance.
• Year-round election protection: continuous monitoring for AI-driven disinformation and rapid public advisories during campaigns.
• Algorithmic transparency: disclose ranking and amplification mechanisms; independent audits during statewide election cycles.
• Expanded developer oversight: coordination with platforms to prevent civic harms by widely deployed models, not only frontier systems.
• Enforcement and oversight: joint supervision by the Secretary of State, Attorney General, and FPPC; civil enforcement, injunctive relief, whistleblower protection; annual public report on threats, interventions, and effectiveness.
• Voter empowerment: fund civic and AI literacy that emphasizes critical thinking; public campaigns such as Pause. Question. Choose.
Principles: transparency over censorship; proactive safeguards; voter empowerment over speech control.
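The universal disclosure proposal above hinges on provenance metadata that machines can check automatically. As a minimal sketch only: the record below uses hypothetical field names and an ad-hoc HMAC signature for illustration; a real deployment would follow an open standard such as C2PA and proper public-key signing.

```python
import hashlib
import hmac
import json

# Stand-in signing key for the sketch; real systems would use
# asymmetric keys managed by the content's issuer.
SIGNING_KEY = b"demo-key"

def make_provenance(content: bytes, generator: str) -> dict:
    """Attach a machine-readable provenance record to AI content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # tool that produced the content
        "ai_generated": True,     # the disclosure itself
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and is untampered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != unsigned["content_sha256"]:
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

ad = b"AI-generated campaign image bytes"
rec = make_provenance(ad, generator="example-model-v1")
assert verify_provenance(ad, rec)
assert not verify_provenance(b"tampered bytes", rec)
```

Because verification is a pure function of the content and the record, platforms and auditors can check disclosure at scale without inspecting or judging the message itself, which is what keeps the rule viewpoint-neutral.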
VII. AI Ethics and Responsible Governance
Goal: Guarantee fairness, accountability, and human rights in AI systems.
Key proposals
• Independent bias and fairness audits in hiring, healthcare, policing, and education.
• Human oversight for any AI decision that affects individuals, including the right to review and appeal.
• Accountability rules that establish clear liability for AI-caused harm for corporations and developers.
• Ethical standards aligned with global principles, integrated into state procurement.
VIII. Campaign and Voter Messaging Framework
Core identity: independent, pragmatic reformer focused on solutions, not politics.
Messaging highlights
• AI literacy and voter empowerment
• Ethical, bias-free AI
• Safe and accountable AI deployment
• Workforce preparation and economic resilience
• Public oversight and citizen engagement
Safeguards
• Protect civil liberties through:
- Viewpoint-neutral rules focused on disclosure and provenance
- Clear due process with notice and appeal for enforcement actions
- Explicit protections for satire, journalism, and political speech
- Limits that prevent broad surveillance or forced weakening of privacy and encryption
• Require risk checks and transparency:
- Tiered impact assessments before deployment in elections, hiring, healthcare, education, and critical infrastructure
- Independent third-party audits with conflict-of-interest rules
- Public reporting dashboards on enforcement and effectiveness (including error rates and appeals outcomes)
- Secure data handling with minimization, retention limits, and breach notification
• Establish pause and rollback triggers:
- Automatic suspension for verified impersonation or coordinated deception causing voter harm
- Mandatory remediation timelines when audits find material bias or safety failures
- Shutdown and manual-override requirements for critical systems that fail stress tests or are exploited
- Sunset reviews every 2–4 years with measurable benchmarks required for renewal
Common questions
• What about free speech?
We do not regulate opinions. The focus is disclosure, provenance, coordinated manipulation, and system risks.
• Are platforms forced to remove content?
No. Requirements center on transparency, audits, and stopping coordinated deception. Removal is a platform decision.
• Will small or new platforms be overburdened?
Obligations scale with reach and civic impact. Safe harbor applies to good-faith compliance.
• How do you measure success?
Share of content with provenance, time from detection to advisory, number of coordinated networks disrupted, audit completion and remediation rates, and literacy gains.
• Does this create new bureaucracy?
No. Use existing offices with clear roles and annual public reporting, adding targeted expert capacity only where necessary.
• How are privacy and data protected?
Limit unconsented personal data for model training, require documentation and audit trails, and enforce accountability for misuse.
• What about bias in AI systems?
Independent fairness audits in hiring, health, policing, and education, with required fixes and appeal paths for affected people.
• How does this help workers and small businesses?
Workforce retraining, impact assessments before automation, and support so smaller firms can adopt safe, affordable AI.
• What can voters do now?
Adopt simple habits: Pause. Question. Choose. Share responsibly and report clear deception.