Addressing Disability Bias In Artificial Intelligence

Naveen Joshi 13/09/2021

Unfortunately, bias in AI against individuals with physical disabilities is prevalent in today’s digital society.

Dealing with this problem requires AI researchers and developers to show basic common sense and, above all, empathy for disabled system users. The question is, how can we develop AI systems that work without bias for all users?

Bias in AI is, sadly, a reality of our times. There have been numerous instances in which recommendations and other outputs from AI-powered systems carried distinctly racist undertones. Unfortunately, this bias can also extend to people with physical disabilities: there are real-life instances of disabled people being on the receiving end of questionable decisions made by AI-based systems. Eliminating disability-related bias requires greater attention to detail from the organizations, developers and other people involved in creating and training AI models and the systems built on them.

Why AI Models Are Discriminatory Towards the Disabled



Disability is a broad concept with evolving dimensions, and the context keeps shifting. For example, visually impaired individuals may be completely blind or may have varying degrees of low vision. Because each type of disability involves such extreme variation, creating fair, representative datasets is a major challenge for AI scientists. Individuals with mental health conditions face an even steeper bias: the systems they interact with are trained on statistical norms and have little to no 'idea' of their conditions.
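To make the "statistical norms" problem concrete, here is a minimal, purely illustrative Python sketch. The scenario is an invented assumption, not from the article: a hypothetical system uses typing speed as a proxy for "genuine user" activity and flags statistical outliers. Because the pooled data is dominated by non-disabled users, a small subgroup of users with motor impairments, who type more slowly, ends up flagged almost universally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature: typing speed (words per minute).
# 950 samples from a majority group clustered around 60 wpm,
# and 50 samples from a disabled subgroup clustered around 25 wpm.
majority = rng.normal(60, 5, size=950)
minority = rng.normal(25, 5, size=50)

# A naive detector trained on the pooled population: anyone more than
# two standard deviations below the pooled mean is flagged as anomalous.
pooled = np.concatenate([majority, minority])
threshold = pooled.mean() - 2 * pooled.std()

flagged_majority = (majority < threshold).mean()
flagged_minority = (minority < threshold).mean()

print(f"majority flagged: {flagged_majority:.1%}")   # near 0%
print(f"minority flagged: {flagged_minority:.1%}")   # nearly 100%
```

The detector is "accurate" for 95% of users yet systematically penalizes the underrepresented group, because the pooled mean and standard deviation encode the majority's norm. Evaluating error rates per subgroup, rather than in aggregate, is what exposes this kind of bias.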

Another reason for disability bias in AI is that physically challenged persons may be reluctant to disclose disability-related details to organizations. At the same time, many countries have privacy laws that shield disabled individuals, preventing organizations from marginalizing or exploiting them. For both reasons, the data available for machine learning is scarce.

What Organizations Can Do to Tackle Bias in AI

One way to minimize bias in AI systems is to include disabled individuals in the research, development and implementation of AI applications. Their active involvement gives teams direct insight into the needs and characteristics such systems must accommodate, leading to higher-quality training data and models. Organizations must also go out of their way to employ disabled people and make them feel included and valued. Essentially, empathy and understanding are key to eliminating bias in AI against disabled people.

Unfortunately, given the sheer complexity and variety of disabilities, eliminating bias completely from AI systems is nearly impossible. Efforts must nevertheless be made to get as close to that goal as possible.

Disabled persons generally just want to live and coexist peacefully with able-bodied individuals without being constantly reminded of their conditions. By eliminating bias in AI against such individuals, we can allow them to use technology at ease and, more importantly, at peace.
