Outlier bias: AI classification of curb ramps, outliers, and context
Topics:
Keywords: geoAI, accessibility, transportation
Abstract Type: Paper Abstract
Authors:
Shiloh Deitz, Rutgers University
Abstract
Autonomous vehicles and delivery robots promise to increase the mobility, freedom, and inclusion of people with disabilities in urban environments. Yet these same technologies have failed to ‘see’ or comprehend wheelchair riders, people walking with service animals, and people walking with bicycles, all outliers to machine learning models. The related areas of big data and algorithms have been critiqued from all sides for their biases (harmful and systematic errors), but this literature largely overlooks the harms that arise from AI’s inability to handle nuance, context, individuality, and exception. I call this non-systematic error outlier bias. In this paper, I draw on queer and crip technoscience, qualitative methods, and geoAI both to attempt to fill a gap in data on the locations of curb ramps, which promote safe travel for people with disabilities, and to better understand nuanced error in the machine learning models undergirding autonomous vehicle development. Specifically, I use two machine and deep learning models to generate data on curb ramps across nine urban areas in the United States. The most effective model was 88% accurate across all municipalities, but its accuracy varied with context in ways both predictable and unpredictable. Focusing on the unpredictable error, outlier bias, I propose a kind of rich or thick description of data error that is human, slow, tedious, and subjective. Through this case, I explore how queer and crip technoscience, as well as qualitative methodologies, might be integrated into AI on the way to developing more equitable systems.
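To make the point about context-dependent accuracy concrete, the short Python sketch below illustrates how a single aggregate accuracy figure (such as the 88% reported above) can coexist with sharply different per-municipality rates. This is a hypothetical illustration, not the paper's actual pipeline; the function name, the municipality labels, and all record values are invented for the example.

# Minimal sketch of per-municipality accuracy reporting.
# All names and values are hypothetical, not the paper's data.
from collections import defaultdict

def accuracy_by_group(records):
    """Return overall accuracy and a per-group accuracy dict.

    records: iterable of (group, true_label, predicted_label) tuples.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, y_true, y_pred in records:
        counts[group][0] += int(y_true == y_pred)
        counts[group][1] += 1
    total_correct = sum(c for c, _ in counts.values())
    total = sum(t for _, t in counts.values())
    overall = total_correct / total
    per_group = {g: c / t for g, (c, t) in counts.items()}
    return overall, per_group

# Toy records: the aggregate score masks divergence in one context.
records = [
    ("city_A", 1, 1), ("city_A", 1, 1), ("city_A", 0, 0), ("city_A", 1, 1),
    ("city_B", 1, 1), ("city_B", 0, 1), ("city_B", 1, 0), ("city_B", 0, 0),
]
overall, per_group = accuracy_by_group(records)
print(f"overall: {overall:.0%}")   # high aggregate accuracy
print(per_group)                   # per-context rates can diverge sharply

Reporting accuracy per municipality, rather than in aggregate, is what exposes the "predictable and unpredictable" variation the abstract describes.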