Model Auditing
Systematic evaluation of AI systems for bias, fairness, safety, and compliance with standards.
Overview
Model auditing is the systematic process of examining an AI system's behavior, outputs, and decision-making processes to assess fairness, bias, safety, accuracy, and regulatory compliance. Audits may be conducted internally by the developing organization or externally by independent third parties.
Key Details
Audit processes include testing for disparate impact across demographic groups, evaluating robustness to adversarial inputs, checking for data leakage or memorization, and verifying compliance with applicable regulations. Common tooling includes Aequitas, IBM's AI Fairness 360 (AIF360), and custom evaluation frameworks. As AI regulation expands (e.g., the EU AI Act and NYC Local Law 144, which mandates bias audits of automated employment decision tools), model auditing is becoming a required practice for high-risk AI applications.
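One common disparate-impact test compares positive-outcome (selection) rates across demographic groups and flags ratios below 0.8, the widely used "four-fifths rule" screen. The sketch below is a minimal, dependency-free illustration of that check; the group labels, data, and function names are hypothetical, and real audits would use a vetted toolkit such as AIF360 plus statistical significance testing.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Per-group positive-outcome (selection) rates.

    outcomes: iterable of (group, decision) pairs, decision 1 = selected, 0 = not.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, decision in outcomes:
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact_ratio(outcomes, privileged):
    """Lowest ratio of any group's selection rate to the privileged group's.

    A ratio below 0.8 fails the common four-fifths rule screen.
    """
    rates = selection_rates(outcomes)
    base = rates[privileged]
    return min(r / base for g, r in rates.items() if g != privileged)

# Hypothetical audit data: (demographic group, model's hiring decision).
# Group A: 60/100 selected; group B: 40/100 selected.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60

ratio = disparate_impact_ratio(decisions, privileged="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.60 = 0.67 -> fails the 0.8 screen
```

In practice this screen is only a first pass: auditors follow up with additional metrics (e.g., equalized odds, calibration by group) because a model can pass one fairness criterion while failing another.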