What Does the Future Hold for Color-Biased Self-Driving Cars?


Self-driving cars might fail to detect darker-skinned pedestrians and therefore pose a greater risk of hitting them than pedestrians with lighter skin.

Autonomous vehicles are one of the major breakthroughs of recent technological development, much like the ones we used to watch in sci-fi movies on our all-time favorite Cox TV. Now they are expected to significantly reduce commute costs and the need for an individual to own a vehicle. These systems are supposed to detect crucial objects like road signs and pedestrians, and decide when to slow down or stop altogether.

However, we can all safely agree that artificially intelligent systems are still evolving; some facial recognition systems have trouble recognizing certain skin tones, mostly the relatively dark ones. In other words, the list of concerns about self-driving cars just grew a little longer.

As with other AI-enabled systems, autonomous driving systems struggle with the same issue, and this shortcoming might put the lives of darker-skinned pedestrians seriously at risk.

Faulty Pedestrian Recognition Systems

Researchers at the Georgia Institute of Technology put the kind of object-recognition systems used in automated vehicles to the test. The research found that these systems failed to detect pedestrians with darker skin tones, as well as shorter pedestrians, as reliably as others.

The models tested in this research performed worse across the board when shown pedestrians with the three darkest skin tones on the scale. That raises the chances of these cars running into such pedestrians instead of stopping in time. In short, if you are the darker-skinned or shorter person walking with your lighter-skinned or taller friends, you are the one more likely to be run over by a self-driving vehicle.

The Experiment

The researchers divided a collection of pedestrian images into groups from lighter to darker skin tones using the Fitzpatrick scale, a scientifically developed method of classifying skin color. Instead of using volunteers (for obvious reasons), Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern fed these pedestrian images to eight different object-detection models of the kind used in self-driving systems.

The study found predictive inequity in detecting pedestrians belonging to different demographic categories. In other words, the algorithms detected light-skinned people reliably, while detection of dark-skinned people was about 5% less accurate than for those on the lighter end of the color spectrum.
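To make the methodology a little more concrete, here is a rough sketch of what such an audit could look like in code. The image records, the Fitzpatrick labels, and the `run_detector` stub are illustrative assumptions; the actual models and datasets used in the study are not publicly available.

```python
from collections import defaultdict

# Hypothetical records: each pedestrian image carries a Fitzpatrick
# skin-type annotation (1 = lightest ... 6 = darkest).
images = [
    {"path": "ped_001.jpg", "fitzpatrick": 2},
    {"path": "ped_002.jpg", "fitzpatrick": 5},
    # ... many more annotated pedestrian images
]

def run_detector(path):
    # Stub: replace with a real object-detection model that returns
    # True when it finds the pedestrian in the image at `path`.
    return True

hits = defaultdict(int)
totals = defaultdict(int)
for img in images:
    # Fitzpatrick types 1-3 form the lighter group, 4-6 the darker group.
    group = "lighter" if img["fitzpatrick"] <= 3 else "darker"
    totals[group] += 1
    if run_detector(img["path"]):
        hits[group] += 1

# The gap between the two detection rates is the kind of
# "predictive inequity" the study reports (around 5 percentage points).
for group in ("lighter", "darker"):
    rate = hits[group] / totals[group]
    print(f"{group}: {rate:.1%} of pedestrians detected")
```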

One major roadblock was that the companies never made their data available for public scrutiny, so the research was conducted on academically available models. These models may differ greatly from the ones actually deployed commercially. That does not make the study any less valuable; rather, it raises the alarm about a serious issue that needs to be addressed immediately.

Conclusion

The researchers attributed the bias to too few dark-skinned examples in the training data and too little weight placed on those examples while the AI systems learned from them. This algorithmic partiality points to a larger problem with how automated systems are developed.
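One common remedy for that imbalance is to weight the scarce examples more heavily during training, so the model is penalized more for missing them. The snippet below is only a hedged sketch of that idea using PyTorch, with made-up scores and group labels; it is not the study's own code or any vendor's pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical per-example group labels: 0 = lighter skin, 1 = darker skin.
group = torch.tensor([0, 0, 0, 1])            # darker-skinned examples are scarce
targets = torch.tensor([1.0, 1.0, 1.0, 1.0])  # 1 = pedestrian present
logits = torch.tensor([2.0, 1.5, 1.8, 0.2])   # raw detector scores (made up)

# Weight each example inversely to its group's frequency, so the few
# darker-skinned examples contribute as much to the loss as the many
# lighter-skinned ones.
counts = torch.bincount(group).float()
weights = (len(group) / (2 * counts))[group]

loss_fn = nn.BCEWithLogitsLoss(weight=weights)
loss = loss_fn(logits, targets)
print(loss.item())
```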

The Algorithms May Reflect the Biases of Their Creators

Speaking of algorithmic bias, the experimental results show how deeply normalized this bias is among us humans, and the repercussions follow when these "biased" creators design the algorithms.

There are some well-known examples of biased algorithms. In 2015, Google's picture-recognition system ended up labeling African Americans as "gorillas." Similarly, facial recognition systems from Amazon, IBM, Megvii, and Microsoft have struggled to identify the gender of African American people, mostly women, compared with lighter-skinned people.

Critique

Critics called out the study for not using the datasets and models of actual self-driving vehicle developers. In the study's defense, a New York University professor tweeted that while researchers would ideally test on the real datasets and models, those have never been made available to academia, which is exactly why papers like this one offer valuable insight into such real-world issues.

Proposed Solutions

Algorithms "learn" from what they are exposed to. If they are not trained on racially diverse examples, such as Black or Asian pedestrians, they are bound to have a hard time recognizing those people in real-life situations.

Hence, this bias can be alleviated by exposing the systems to more realistic demographics for the algorithms to learn from, as sketched below.
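As a rough illustration (not any manufacturer's actual pipeline), the sketch below oversamples the underrepresented group so that every skin-tone category is seen equally often during training; the `fitzpatrick_group` annotation is a hypothetical label.

```python
import random
from collections import defaultdict

def rebalance(dataset, key="fitzpatrick_group"):
    """Oversample underrepresented groups until all groups are the same size.

    `dataset` is a list of dicts, each carrying a demographic group label
    under `key` (a hypothetical annotation, e.g. "lighter" or "darker").
    """
    by_group = defaultdict(list)
    for example in dataset:
        by_group[example[key]].append(example)

    target = max(len(examples) for examples in by_group.values())
    balanced = []
    for examples in by_group.values():
        balanced.extend(examples)
        # Duplicate randomly chosen examples from the smaller groups.
        balanced.extend(random.choices(examples, k=target - len(examples)))
    random.shuffle(balanced)
    return balanced

# Example: three lighter-skinned examples for every darker-skinned one.
toy = [{"fitzpatrick_group": "lighter"}] * 3 + [{"fitzpatrick_group": "darker"}]
print(len(rebalance(toy)))  # 6 examples, 3 from each group
```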

Another solution is to mandate that companies account for racial diversity before rolling out any such technology.
