This repository presents a comprehensive analysis of bias in the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm. Leveraging Python tools and fairness metrics, this project explores how race may influence risk assessment scores—and reflects on the ethical and policy implications of such systems in real-world justice contexts.
- `notebooks/compas_audit.ipynb`: Exploratory analysis using fairness metrics (e.g., False Positive Rate, Predictive Parity) with visualizations.
- `reports/PLP_AI_Ethics_Assignment.pdf`: Ethics reflection and policy recommendations for responsible algorithm deployment.
- `requirements.txt`: Python dependencies for reproducibility.
- `images/` (optional): Visual assets supporting data findings.

Algorithms like COMPAS are widely used to assist decisions in the criminal justice system. However, flawed design or training can introduce systemic biases, particularly racial disparities. This project seeks to:
- Audit COMPAS risk scores through data analysis, bias detection, and visual insights (`compas_audit.ipynb`; a minimal metric sketch follows this list).
- Raise awareness about algorithmic justice, fairness tradeoffs, and the impact of opaque AI in criminal sentencing.
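As a rough illustration of the kind of check the notebook performs, the sketch below computes the False Positive Rate and Predictive Parity (positive predictive value) per racial group. The CSV path, the column names (`race`, `decile_score`, `two_year_recid`), and the decile-score cutoff of 5 are assumptions for illustration and may differ from the actual notebook.

```python
import pandas as pd

# Load the ProPublica COMPAS two-year recidivism data
# (the CSV path and column names here are assumptions; adjust to the actual notebook).
df = pd.read_csv("compas-scores-two-years.csv")

# Treat a decile score of 5 or higher as a "high risk" prediction
# (a common cutoff, assumed here for illustration).
df["predicted_high_risk"] = (df["decile_score"] >= 5).astype(int)

def group_metrics(group: pd.DataFrame) -> pd.Series:
    """False Positive Rate and Predictive Parity (PPV) for one group."""
    fp = ((group["predicted_high_risk"] == 1) & (group["two_year_recid"] == 0)).sum()
    tn = ((group["predicted_high_risk"] == 0) & (group["two_year_recid"] == 0)).sum()
    tp = ((group["predicted_high_risk"] == 1) & (group["two_year_recid"] == 1)).sum()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return pd.Series({"false_positive_rate": fpr, "predictive_parity": ppv})

# Compare the two metrics across racial groups.
metrics = df.groupby("race").apply(group_metrics)
print(metrics)
```

A large gap in False Positive Rate between groups at equal Predictive Parity is exactly the kind of fairness tradeoff the audit is meant to surface.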
Built with:

- Python
- Pandas
- Matplotlib
- Jupyter Notebook
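For the visual side, a minimal Matplotlib sketch might chart the per-group False Positive Rate; it assumes a `metrics` DataFrame like the one produced in the earlier sketch and a hypothetical output path under `images/`.

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_fpr_by_group(metrics: pd.DataFrame, out_path: str = "images/fpr_by_race.png") -> None:
    """Bar chart of the per-group False Positive Rate.

    `metrics` is assumed to be indexed by race with a "false_positive_rate"
    column, as in the earlier sketch; the output path is illustrative.
    """
    ax = metrics["false_positive_rate"].plot(kind="bar", color="steelblue")
    ax.set_ylabel("False Positive Rate")
    ax.set_title("COMPAS false positive rate by racial group")
    plt.tight_layout()
    plt.savefig(out_path)  # images/ is the optional assets folder in the repo layout
    plt.show()
```

Calling `plot_fpr_by_group(metrics)` after the earlier sketch would write the chart into `images/` for use in the report.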
📥 Download the full ethics report (PDF)
The report includes the ethics reflection and policy recommendations for responsible algorithm deployment.
This audit is an educational project submitted to the PLP AI course and guided by research on data ethics and fairness. Open for feedback, learning, and adaptation.
Leonard Phokane – Driven by a commitment to equitable AI, community empowerment, and responsible innovation.
“Bias isn’t just a flaw in data—it’s a reflection of the systems we choose to build and sustain.”