AI Glossary

Responsible AI

An approach to AI development that prioritizes safety, fairness, transparency, privacy, and accountability.

Overview

Responsible AI is an approach to developing and deploying AI systems that aims to ensure they are safe, fair, transparent, privacy-preserving, and accountable. It spans the entire AI lifecycle, from data collection and model training through deployment and ongoing monitoring.

Frameworks

Major responsible AI frameworks include Google's AI Principles, Microsoft's Responsible AI Standard, the OECD AI Principles, and the NIST AI Risk Management Framework. Implementation involves bias testing, safety evaluation, model documentation (model cards), impact assessments, human oversight mechanisms, and ongoing monitoring for emerging harms.
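One of the practices listed above, bias testing, can be illustrated with a common fairness metric: the demographic parity difference, i.e. the gap in positive-prediction rates between demographic groups. The sketch below is illustrative only; the function name and data are hypothetical and not drawn from any of the frameworks named here.

```python
# Minimal sketch of a bias test using demographic parity difference.
# Assumes binary predictions (0/1) and exactly two group labels;
# all names and data below are illustrative.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    a, b = rates.values()
    return abs(a - b)

# Example: group "a" receives positive predictions at 0.75,
# group "b" at 0.25, so the parity gap is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # → 0.5
```

A gap near zero suggests the model's positive predictions are distributed similarly across groups; in practice, teams would combine such metrics with the other practices above (documentation, impact assessments, human oversight) rather than rely on any single number.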

Last updated: March 5, 2026