Best Practice: Bias Mitigation

Objective: Address and reduce implicit bias in AI models and platforms to promote fairness and equity. 

Practical Application: Select and design AI models with the aim of minimizing bias. This involves auditing algorithms for bias, training on diverse data sets, and incorporating fairness metrics into the development process. Continuously monitor for bias and implement corrective measures as needed.
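As one illustration of the fairness metrics mentioned above, the sketch below computes a demographic parity gap: the largest difference in positive-outcome rates between any two groups in a model's predictions. This is a minimal, illustrative audit only; the group labels and example predictions are hypothetical, and a real audit would use additional metrics and real evaluation data.

```python
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """Largest gap in positive-prediction rate between any two groups.

    groups: group label for each individual (e.g., a demographic category)
    predictions: binary model outputs (1 = favorable outcome)
    A value near 0 suggests the model treats groups similarly on this metric.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: group A receives the favorable outcome at 2/3,
# group B at 1/3, so the parity gap is about 0.33.
groups = ["A", "A", "A", "B", "B", "B"]
preds = [1, 1, 0, 1, 0, 0]
gap = demographic_parity_difference(groups, preds)
```

A large gap does not by itself prove unfair treatment, but it flags the model for the kind of review and corrective action described above.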

  • Members of the University community should invest in training programs and seek out education on how AI tools introduce bias and how we can mitigate it by thoughtfully and intentionally choosing tools that minimize bias and are transparent about their inherent biases. We should participate in the development of, and advocate for the creation of, AI tools that account for the diverse populations and abilities represented in our University community. We should encourage users to report bias they encounter in AI tools so that those tools can be reviewed and improved to reduce future bias.

Outcome: The reduction of bias in AI outputs, contributing to the development of fairer, more equitable AI systems and services.