Gopher: Interpretable Data-Based Explanations for Fairness Debugging


Overview

Gopher is a system that produces compact, interpretable, and causal explanations for bias or unexpected model behavior by identifying coherent subsets of the training data that are root causes of this behavior. It generates the top-𝑘 patterns that best explain model bias, using techniques from the ML community to approximate causal responsibility and pruning rules to manage the large search space of candidate patterns.
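To illustrate the core idea, here is a minimal, self-contained sketch (not Gopher's actual implementation, which approximates causal responsibility rather than retraining from scratch). It scores each candidate pattern, represented here as a hypothetical boolean mask over training rows, by how much dropping the matching rows and naively retraining reduces a bias metric (statistical parity difference), then returns the top-𝑘 patterns. All names, the synthetic data, and the toy logistic-regression trainer are assumptions for illustration only.

```python
import numpy as np

def parity_diff(y_pred, s):
    # Statistical parity difference: P(yhat = 1 | s = 1) - P(yhat = 1 | s = 0).
    return y_pred[s == 1].mean() - y_pred[s == 0].mean()

def train_logreg(X, y, lr=0.1, steps=500):
    # Plain batch gradient descent for logistic regression (toy trainer).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

def rank_patterns(X_tr, y_tr, X_te, s_te, patterns, k=2):
    """Score each candidate pattern (a boolean mask over training rows) by
    how much removing the matching rows and retraining shrinks the bias."""
    w, b = train_logreg(X_tr, y_tr)
    base = abs(parity_diff(predict(w, b, X_te), s_te))
    scores = []
    for name, mask in patterns:
        w2, b2 = train_logreg(X_tr[~mask], y_tr[~mask])
        new = abs(parity_diff(predict(w2, b2, X_te), s_te))
        scores.append((base - new, name))  # bias reduction attributed to the pattern
    return sorted(scores, reverse=True)[:k]

# Synthetic data: the true label is (x1 > 0), but every training row in the
# slice (s == 0 AND x1 > 0) is mislabeled 0, biasing the model against s == 0.
rng = np.random.default_rng(0)
n = 400
s_tr, x_tr = rng.integers(0, 2, n), rng.normal(0, 1, n)
y_tr = (x_tr > 0).astype(float)
culprit = (s_tr == 0) & (x_tr > 0)
y_tr[culprit] = 0.0
X_tr = np.column_stack([x_tr, s_tr])

s_te, x_te = rng.integers(0, 2, n), rng.normal(0, 1, n)
X_te = np.column_stack([x_te, s_te])

patterns = [("S=0 & X1>0", culprit),                    # the mislabeled slice
            ("every 4th row", np.arange(n) % 4 == 0)]   # an arbitrary control slice
top = rank_patterns(X_tr, y_tr, X_te, s_te, patterns)
```

Here the mislabeled slice should rank first, since removing it repairs most of the measured bias, while removing an arbitrary slice of the same kind of rows changes the bias little. Gopher's contribution is doing this efficiently: approximating the effect of removal instead of retraining per pattern, and pruning the pattern search space.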

Papers:

Interpretable Data-Based Explanations for Fairness Debugging.
Romila Pradhan, Jiongli Zhu, Boris Glavic and Babak Salimi. SIGMOD 2022.
Paper Link | Code

Generating Interpretable Data-Based Explanations for Fairness Debugging using Gopher.
Jiongli Zhu, Romila Pradhan, Boris Glavic and Babak Salimi. SIGMOD 2022 Demo.
Paper Link | Code

Research Talk Video

Demo Video

Contributors: Romila Pradhan, Jiongli Zhu, Zhishang Luo, Boris Glavic, and Babak Salimi

Please reach out to any of the contributors if you have any questions.