🔍 COMPAS Fairness Audit

This repository presents a comprehensive analysis of bias in the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm. Using Python tooling and standard fairness metrics, the project explores how race may influence risk assessment scores, and reflects on the ethical and policy implications of such systems in real-world justice contexts.
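
To give a flavor of what such an audit involves, here is a minimal sketch of one common fairness check: comparing false positive rates across racial groups. It assumes ProPublica's `compas-scores-two-years.csv` with its `race`, `decile_score`, and `two_year_recid` columns; it is illustrative only, not the exact code used in this repository.

```python
# Illustrative sketch: assumes ProPublica's compas-scores-two-years.csv
# is present, with `race`, `decile_score`, and `two_year_recid` columns.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

# Follow the common convention of treating decile scores of 5+ as "high risk".
df["high_risk"] = df["decile_score"] >= 5

# False positive rate per racial group: the share of people who did NOT
# reoffend within two years but were still flagged as high risk.
for race, group in df.groupby("race"):
    non_reoffenders = group[group["two_year_recid"] == 0]
    if len(non_reoffenders) > 0:
        fpr = non_reoffenders["high_risk"].mean()
        print(f"{race}: false positive rate = {fpr:.1%} (n={len(non_reoffenders)})")
```

Large gaps in these rates across groups are one signal of the kind of racial disparity this project investigates.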


📘 Contents

- ⚖️ Motivation
- 🎯 Goal
- 🛠️ Tools & Technologies
- 📄 Ethics Report
- 📢 License & Attribution
- 🙌 Author


⚖️ Motivation

Algorithms like COMPAS are widely used to assist decisions in the criminal justice system. However, flawed design or training data can introduce systemic biases, particularly racial disparities. This project examines whether such disparities appear in COMPAS risk scores and reflects on what they mean for real-world justice contexts.


🎯 Goal

Raise awareness about algorithmic justice, fairness tradeoffs, and the impact of opaque AI in criminal sentencing.


🛠️ Tools & Technologies


📄 Ethics Report

📥 Download the full ethics report (PDF)

The report summarizes the audit's findings and discusses their ethical and policy implications.


📢 License & Attribution

This audit is an educational project submitted for the PLP AI course, guided by research on data ethics and fairness. It is open to feedback, learning, and adaptation.


🙌 Author

Leonard Phokane – Driven by a commitment to equitable AI, community empowerment, and responsible innovation.


“Bias isn’t just a flaw in data—it’s a reflection of the systems we choose to build and sustain.”