The Financial Conduct Authority has
published a research note reviewing the literature on bias in supervised machine-learning models. The note explores how biases may arise, and how they may be mitigated, in models used to make predictions or assist in decision-making about individuals. Points of particular interest include: (i) past decision-making, historical practices of exclusion, and sampling issues are key potential sources of bias; (ii) biases can also arise from choices made during the AI modelling process itself, such as which variables are included, which statistical model is used, and how humans choose to use and interpret predictive models; and (iii) the note reviews technical methods for identifying and mitigating such biases, but stresses that these methods should be supplemented by careful consideration of context and human review processes.
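By way of illustration only (this example is not taken from the FCA note), the sketch below shows one common technical check of the kind the note surveys: comparing a binary model's positive-prediction rates and true-positive rates across two groups. The data, group labels, and function names are hypothetical.

```python
# Illustrative bias check on a binary classifier's outputs (hypothetical data).
# Compares positive-prediction rates (demographic parity) and true-positive
# rates (equal opportunity) between two groups, labelled "A" and "B".

def rate(values):
    """Fraction of 1s in a list of 0/1 values; 0.0 if the list is empty."""
    return sum(values) / len(values) if values else 0.0

def group_gaps(y_true, y_pred, group):
    """Return (selection-rate gap, true-positive-rate gap) between groups A and B."""
    sel, tpr = {}, {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        preds_on_positives = [
            p for p, t, grp in zip(y_pred, y_true, group) if grp == g and t == 1
        ]
        sel[g] = rate(preds)               # P(prediction = 1 | group = g)
        tpr[g] = rate(preds_on_positives)  # P(prediction = 1 | actual = 1, group = g)
    return sel["A"] - sel["B"], tpr["A"] - tpr["B"]

if __name__ == "__main__":
    # Hypothetical outcomes, model predictions, and group membership.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
    group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    sel_gap, tpr_gap = group_gaps(y_true, y_pred, group)
    print(f"Selection-rate gap (A - B): {sel_gap:+.2f}")
    print(f"True-positive-rate gap (A - B): {tpr_gap:+.2f}")
```

Gaps near zero suggest the model treats the groups similarly on these two measures, but, consistent with point (iii) above, such metrics are only a starting point and need to be interpreted in context and with human review.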