Bias
Bias refers to systematic errors or unfairness in AI systems that can lead to discriminatory outcomes.
Bias in AI occurs when algorithms systematically favor certain groups or outcomes over others. This can stem from biased training data, flawed algorithm design, or societal prejudices embedded in the development process. Bias can be statistical (mathematical deviation) or social (unfair discrimination).
Where does AI Bias come from?
Data Bias
When an algorithm is trained on data, its conclusions will reflect the biases and disparities present in that data. When we design an algorithm, we may also add thresholds that are exclusionary, reflecting our own biases. Because machine learning algorithms process such large volumes of data, even small biases can lead to widespread discriminatory problems. One example is misidentification, which can occur when a dataset lacks information about minoritised groups and the resulting model therefore cannot make reliable decisions about them.
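The point above can be made concrete with a minimal sketch. The records below are entirely hypothetical: a toy set of historical decisions in which one group was approved far less often. A model fit to these labels would simply reproduce the disparity, so here we measure it directly as a selection-rate gap.

```python
# Hypothetical toy data: (group, approved) pairs from past decisions.
# Group "B" was historically approved less often, so any model trained
# on these labels will inherit that disparity.
records = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group):
    """Fraction of a group's records with a positive outcome."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("A")   # 0.75
rate_b = selection_rate("B")   # 0.25
parity_gap = rate_a - rate_b   # demographic parity difference: 0.5
```

A gap of 0.5 between groups is a simple statistical signal of bias in the data itself, before any algorithm is even trained.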
Algorithmic Bias
This occurs when the design or parameters of an algorithm introduce a bias. An algorithm can be designed, or trained, in a way that causes it to process and prioritise certain features over others, producing discriminatory outcomes.
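As an illustration of a design choice introducing bias, consider a single fixed decision threshold. The scores below are assumed for the sake of the sketch: group B's scores are systematically shifted down (for instance because an input feature correlates with group membership), so the same cutoff excludes most of that group.

```python
# Hypothetical sketch: one global cutoff applied to scores that are
# systematically lower for group B. The threshold itself, a design
# parameter, then drives the disparate outcome.
threshold = 0.5

scores_a = [0.9, 0.7, 0.6, 0.4]     # assumed model scores, group A
scores_b = [0.55, 0.45, 0.35, 0.3]  # assumed scores, shifted lower

pass_rate_a = sum(s >= threshold for s in scores_a) / len(scores_a)  # 0.75
pass_rate_b = sum(s >= threshold for s in scores_b) / len(scores_b)  # 0.25
```

Auditing per-group outcomes at a chosen threshold, rather than only overall accuracy, is one way such design-level bias can be surfaced.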
Human Bias
Bias is inherent in all humans. It's the byproduct of having a limited perspective of the world and the tendency to generalize information to streamline learning. Human bias seeps into AI systems through subjective decisions made at various points of model design and training, including stages such as data labelling and model development.
Challenges
Bias is difficult to detect and measure, can perpetuate societal inequalities, and may be unintentional; removing it without degrading model performance is also challenging.