To understand machine bias, we must first define the word 'bias'. The Cambridge Dictionary defines 'bias' as the action of supporting or opposing a particular person or thing in an unfair way. We are all guilty of biases, both harmless and harmful.
A harmless bias might be defending your family and friends when they have made mistakes; a harmful one would be hiring based on race or gender. The problem with bias is its subtlety: you may not realize you hold a biased belief because it is so deeply ingrained. And since it is humans who design and build these algorithms, those subtle biases get passed on, often with harmful consequences.
Algorithms use data and formulas to predict the future from the past. Simply put, they predict how things will go based on how they used to go. This is how most learning systems work: they derive viewpoints and patterns from the data they are given. The trouble is that this data is often biased against certain races, genders, sexualities and more, simply because the people who generate it are. The way society functions is often racist and sexist, and that shows up in the data.
On the other side of this argument is necessary bias, because in some cases bias is exactly what makes a decision possible, especially in the medical field. An algorithm built to screen for cancer should have a bias towards tumors it has seen and studied in the past. Recognition itself can be seen as a kind of prejudice: it is through past experience that we learn to recognize and differentiate. In that sense, a machine needs that level of bias to distinguish between different objects or situations, depending on the context. The problem arises when these algorithms are applied to subjective social situations, which are never clear-cut. For example, an algorithm that sorts resumes should not learn what successful doctors or engineers looked and behaved like in the past and use that to decide who gets selected.
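The resume-sorting problem above can be made concrete with a small sketch. This is a deliberately naive, hypothetical "hiring model" (the data and logic are invented for illustration, not taken from any real system): it learns nothing but the historical hire rate per group, so two equally qualified candidates get different predictions purely because of the group label in the training data.

```python
# Minimal sketch (hypothetical data): a naive "hiring model" that learns
# from biased historical decisions and reproduces that bias.
from collections import defaultdict

# Invented past hiring records: (group, qualified, hired).
# Group "A" candidates were always hired; group "B" candidates never were.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": learn the historical hire rate for each group.
outcomes = defaultdict(list)
for group, _, hired in history:
    outcomes[group].append(hired)
model = {g: sum(v) / len(v) for g, v in outcomes.items()}

def predict(group):
    # Predict "hire" whenever the historical rate for that group exceeds 50%.
    return model[group] > 0.5

# Equally qualified candidates, different outcomes:
print(predict("A"))  # True
print(predict("B"))  # False
```

Note that the model never looks at the `qualified` field at all; it simply replays the pattern in its training data, which is the failure mode the paragraph describes.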
There are many examples of algorithms going wrong with terrible results. In 2016, a beauty pageant website used an algorithm to judge the winners of its contest from submitted photos. Most of the finalists were white, with a few Asians and only one person with visibly dark skin. The machine was functioning exactly as it had been trained to, and ended up reflecting human bias. The same racial bias can be seen in bail-determination and sentencing algorithms in the US: African-Americans are consistently rated as more likely to re-offend, regardless of their record, while white defendants are rated as less likely to re-offend even with violent criminal histories. And these predictions have repeatedly been shown to be wrong.
The scariest part of these biases is that most people assume machines are neutral simply because they are not human. They forget that machines are made by humans and are therefore capable of prejudice. When we make decisions based on these biased machines, we enable them to create the very future they predicted. To the machine, its prediction was right; this is reflected in the new data, and we get caught in a feedback loop. (Feedback occurs when the outputs of a system are routed back as inputs in a chain of cause and effect that forms a circuit or loop. The system can then be said to feed back into itself.) The input only has to be flawed once, and the algorithm will keep reinforcing it because, as far as it can tell, it has been proved right. Such self-fulfilling prophecies end up reinforcing harmful social patterns.
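The feedback loop described above can be simulated in a few lines. The numbers here are invented for illustration: the system's own predictions are routed back as "confirmed" inputs, so a single flawed extra flag per round pushes the estimated rate upward even though nothing about the underlying population has changed.

```python
# Minimal sketch of a biased feedback loop: predictions are fed back
# as new input data, so the estimated rate reinforces itself.
def run_feedback_loop(initial_flagged, initial_total, rounds):
    flagged, total = initial_flagged, initial_total
    rates = []
    for _ in range(rounds):
        rate = flagged / total
        rates.append(rate)
        # The system's output is routed back as input: everyone it
        # flags among the new cases is recorded as a confirmed positive,
        # plus one flawed extra flag per round.
        new_cases = 10
        predicted_positive = round(new_cases * rate) + 1
        flagged += predicted_positive
        total += new_cases
    return rates

# Start with a true rate of 2/10 = 0.2 and watch it drift upward.
rates = run_feedback_loop(initial_flagged=2, initial_total=10, rounds=5)
print(rates)  # the estimated rate climbs round after round
```

The point of the sketch is the shape of the curve, not the exact numbers: because each output becomes an input, the estimate can only ratchet in the direction of the original flaw.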
It has therefore become important for companies to take data ethics seriously. Simply put, scientists and programmers should work ethically and be aware of the harm that misused technology can cause. In terms of machine bias, it has become crucial to train machines in the right way. Experts in the field are optimistic, as there has been a real effort to change in the last few years. There are ways these biases can be weeded out, but the work is still at an early stage, as we have only recently begun noticing these issues. To begin with, these systems should be taken apart and examined to find where the bias originates. Going forward, engineers and designers should build more transparent machines, so that people can understand how they work, making it easier to spot and correct bias. Most important of all, we should remember that these machines were created by humans who carry biases of their own. Engineers often focus on the scientific and technological aspects of their work and overlook the historical, cultural and racial dimensions of the machines they create. They should be educated on these subjects so that they are aware of the impact their work can have on everyday people and their lives.
P.S.: I am no expert in AI/ML, but this is a sincere endeavour to point out the possibilities of errors creeping in since ultimately we humans are the so-called "creators" of these algorithms.
Rajh is a serial entrepreneur with ventures in knowledge process outsourcing, hospitality, retail, IT and e-commerce. He has over 25 years of corporate experience and expertise in key roles of leadership, strategy, planning & management. Rajh is especially skilled at developing new profit centers within scheduled timelines and costs while ensuring operational efficiencies through long-term strategic planning. His core expertise includes delivering customized and cost-effective solutions to meet the operational and financial goals of the organization and its stakeholders. Rajh holds an MBA in Marketing from the University of Mumbai.