Technical Paper: Addressing Algorithmic Bias
Ethical AI
Artificial intelligence (AI) promises better, more intelligent decision-making.
Governments are using AI to make decisions in welfare, policing and many other areas. Meanwhile, the private sector has readily adopted AI into its business models. However, using AI carries the risk of algorithmic bias. We cannot achieve ethical AI without fully understanding and addressing this risk.
Algorithmic bias
Algorithmic bias is a kind of error associated with the use of AI, and it often results in unfairness. It can arise in many ways: sometimes the problem lies in the design of the AI product itself; other times it lies in the data set used to train the AI.
Algorithmic bias can cause real harm, as it may lead to a person being treated unfairly or even suffering unlawful discrimination on the basis of characteristics such as race, age, sex or disability.
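One simple way to see how such bias surfaces in practice is to compare outcome rates between groups affected by the same automated decision. The sketch below is a minimal, hypothetical illustration of this check; the `selection_rate` function, the decision data and the 0.8 review threshold (a common rule of thumb for flagging disparate impact) are assumptions made for this example, not material from the paper.

```python
# Hypothetical check for unequal outcomes across two groups.
# All names and numbers are invented for illustration.

def selection_rate(decisions):
    """Fraction of people who received a positive decision (1 = approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 30% approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# The disparate impact ratio compares selection rates between groups;
# a common rule of thumb flags ratios below 0.8 for further review.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A rate: {rate_a:.0%}, Group B rate: {rate_b:.0%}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential algorithmic bias: outcomes differ markedly by group.")
```

A check like this does not prove bias on its own, but a large gap in outcomes is often the first signal that the system's design or training data deserves closer scrutiny.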
Technology and human rights
This project simulated a typical decision-making process and explored how algorithmic bias can ‘creep into’ AI systems and, most importantly, how this problem can be addressed.
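As one illustration of a possible mitigation (a sketch of a general technique, not the project's specific method), a decision system can stop skewed model scores from translating directly into skewed outcomes by calibrating the decision cutoff per group. Everything below, including the `threshold_for_rate` helper, the scores and the 50% target rate, is hypothetical.

```python
# Hypothetical mitigation: choose a per-group score cutoff so that skewed
# scores do not directly produce skewed approval rates. Illustration only.

def threshold_for_rate(scores, target_rate):
    """Pick the score cutoff that approves roughly `target_rate` of a group."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

# Hypothetical model scores; group B's scores are depressed, as can happen
# when the training data under-represents that group.
scores_a = [0.91, 0.85, 0.80, 0.72, 0.66, 0.50, 0.42, 0.31]
scores_b = [0.70, 0.62, 0.55, 0.48, 0.40, 0.33, 0.27, 0.20]

target = 0.5  # aim to approve 50% of each group
cut_a = threshold_for_rate(scores_a, target)
cut_b = threshold_for_rate(scores_b, target)

approved_a = sum(s >= cut_a for s in scores_a)
approved_b = sum(s >= cut_b for s in scores_b)
print(f"Group A: cutoff {cut_a:.2f}, approved {approved_a}/{len(scores_a)}")
print(f"Group B: cutoff {cut_b:.2f}, approved {approved_b}/{len(scores_b)}")
```

Whether a group-aware adjustment like this is appropriate is itself a fairness and legal question, which is why technical measures need to be assessed against human rights principles rather than applied mechanically.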
The lessons learned in this project will be essential in promoting ethical AI that complies with human rights principles.
This technical paper offers guidance to help companies ensure that, when they use AI, their decisions are fair, accurate and compliant with human rights.