AI Glossary

AI Governance

The frameworks, policies, and practices that organizations and governments use to ensure AI systems are developed and deployed responsibly.

Components

Risk assessment: Evaluating potential harms before deployment.
Accountability: Clear ownership of AI decisions.
Transparency: Documenting model capabilities and limitations.
Monitoring: Ongoing oversight of deployed systems.
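The four components above can be sketched as a pre-deployment checklist. This is a minimal, hypothetical illustration: the class name, fields, and gating logic are assumptions for this example, not part of any specific governance framework.

```python
from dataclasses import dataclass


@dataclass
class GovernanceChecklist:
    """Hypothetical pre-deployment gate covering the four components."""
    risk_assessment_done: bool = False  # risk assessment: harms evaluated
    owner: str = ""                     # accountability: named responsible party
    model_card_published: bool = False  # transparency: capabilities/limits documented
    monitoring_plan: bool = False       # monitoring: ongoing oversight in place

    def gaps(self) -> list[str]:
        """Return the governance components still missing."""
        missing = []
        if not self.risk_assessment_done:
            missing.append("risk assessment")
        if not self.owner:
            missing.append("accountability (no owner)")
        if not self.model_card_published:
            missing.append("transparency (no model card)")
        if not self.monitoring_plan:
            missing.append("monitoring plan")
        return missing

    def ready_to_deploy(self) -> bool:
        return not self.gaps()


# Example: two of four components satisfied, so deployment is blocked.
checklist = GovernanceChecklist(risk_assessment_done=True, owner="ml-platform team")
print(checklist.ready_to_deploy())  # False
print(checklist.gaps())
```

In practice such a gate would sit inside a release or CI process, but the point here is only that each component maps to a concrete, checkable artifact.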

Regulatory Landscape

Key instruments include the EU AI Act (a risk-based regulation), the US Executive Order on AI safety, China's AI regulations, and voluntary industry frameworks such as the NIST AI Risk Management Framework (AI RMF).


Last updated: March 5, 2026