The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. We invite you to use and improve it. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 interactive experience provides a gentle introduction to the toolkit's concepts and capabilities, while the tutorials and other notebooks offer a deeper, data-scientist-oriented introduction. The complete API documentation is also available. Because the toolkit offers a comprehensive set of capabilities, it can be hard to figure out which metrics and algorithms are most appropriate for a given use case; to help, we have created guidance material that can be consulted.
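To give a flavor of the kind of dataset metrics the toolkit computes, here is a minimal NumPy sketch of two common group-fairness measures, statistical parity difference and disparate impact. The function names, toy data, and group encoding below are illustrative assumptions, not the AI Fairness 360 API itself.

```python
import numpy as np

def statistical_parity_difference(y, protected):
    """Difference in favorable-outcome rates between the unprivileged
    (protected == 1) and privileged (protected == 0) groups.
    0 indicates parity."""
    return y[protected == 1].mean() - y[protected == 0].mean()

def disparate_impact(y, protected):
    """Ratio of favorable-outcome rates between the two groups.
    1.0 indicates parity; the 'four-fifths rule' flags values below 0.8."""
    return y[protected == 1].mean() / y[protected == 0].mean()

# Toy data: y == 1 is the favorable outcome; protected == 1 marks the
# unprivileged group. Both arrays are hypothetical.
y = np.array([1, 0, 1, 1, 0, 1, 0, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(statistical_parity_difference(y, protected))  # 0.75 - 0.25 = 0.5
print(disparate_impact(y, protected))               # 0.75 / 0.25 = 3.0
```

In the toolkit these measures are exposed through metric classes that take a dataset plus definitions of the privileged and unprivileged groups, but the underlying arithmetic is as simple as shown here.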
Features
- Comprehensive set of metrics for datasets and models to test for biases
- Explanations for metrics
- Algorithms to mitigate bias in datasets and models
- Designed to translate algorithmic research from the lab into practice in domains as wide-ranging as finance
- Developed with extensibility in mind, so new metrics and bias mitigation algorithms can be added
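As a sketch of what one of the bias mitigation algorithms does, the following implements the core idea of reweighing (Kamiran and Calders): assign each instance the weight P(group) * P(label) / P(group, label), so that group membership and outcome become statistically independent under the weighted distribution. This is a self-contained illustration under assumed toy data, not the toolkit's implementation.

```python
import numpy as np

def reweighing_weights(y, protected):
    """Instance weights w = P(group) * P(label) / P(group, label).
    After weighting, the favorable-outcome rate is equal across groups.
    A sketch of the reweighing idea, not the AIF360 implementation."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in (0, 1):
        for lbl in (0, 1):
            mask = (protected == g) & (y == lbl)
            p_joint = mask.sum() / n
            p_expected = (protected == g).mean() * (y == lbl).mean()
            w[mask] = p_expected / p_joint if p_joint > 0 else 0.0
    return w

# Hypothetical toy data: y == 1 is the favorable outcome,
# protected == 1 marks the unprivileged group.
y = np.array([1, 0, 1, 1, 0, 1, 0, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])
w = reweighing_weights(y, protected)

# Weighted favorable rates are now equal across the two groups.
rate_unpriv = np.average(y[protected == 1], weights=w[protected == 1])
rate_priv = np.average(y[protected == 0], weights=w[protected == 0])
print(rate_unpriv, rate_priv)  # both 0.5
```

In practice, a preprocessing algorithm like this returns a transformed dataset whose instance weights are then passed to any standard learner, which is what makes it usable at the start of the AI application lifecycle.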