Using deep learning and high accuracy regions for vehicle detection and traffic density estimation from traffic camera feeds
Topics: Geographic Information Science and Systems, Transportation Geography, Cyberinfrastructure
Keywords: traffic count, object detection, deep neural network, quadtree, image segmentation
Session Type: Virtual Paper
Day: Saturday
Session Start / End Time: 4/10/2021 04:40 PM (Pacific Time (US & Canada)) - 4/10/2021 05:55 PM (Pacific Time (US & Canada))
Room: Virtual 43
Authors:
Yue Lin, The Ohio State University
Ningchuan Xiao, The Ohio State University
Abstract
The growing number of real-time camera feeds in many urban areas has made it possible to estimate traffic density and thereby provide data support for a wide range of applications such as traffic management and control. However, reliable estimation of traffic density has been a challenge due to variable camera conditions (e.g., mounting heights and resolutions). In this work, we develop an efficient and accurate traffic density estimation method by introducing a quadtree segmentation approach that extracts the regions where vehicle detection is highly accurate, referred to as the High-Accuracy Identification Region (HAIR). A state-of-the-art object detection model, EfficientDet, is applied to identify the motor vehicles present in traffic images, and the vehicles detected within the HAIR are then used for traffic density estimation. The proposed method is validated using images from traffic cameras of different heights and resolutions located in Central Ohio. The results show that the HAIR extracted with our method supports high vehicle detection accuracy with only a small amount of labeled data. In addition, incorporating the HAIR into deep learning-based object detection can significantly improve the accuracy of density estimation, especially for cameras mounted at greater heights with lower resolutions.
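
The sketch below illustrates, in Python, one way the quadtree-based HAIR extraction and within-HAIR density count described above could be structured. It is a minimal illustration under stated assumptions, not the authors' implementation: the detections are assumed to come from an external detector (e.g., EfficientDet), and the accuracy metric, the accuracy threshold, and the minimum cell size are placeholder choices introduced here for exposition.

    # Illustrative sketch of quadtree-based HAIR extraction and
    # within-HAIR vehicle counting. All thresholds and the accuracy
    # metric are assumptions made for this example.
    from dataclasses import dataclass
    from typing import List, Tuple

    Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

    @dataclass
    class Cell:
        x: float
        y: float
        w: float
        h: float

    def _iou(a: Box, b: Box) -> float:
        # Intersection over union of two boxes.
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def cell_accuracy(cell: Cell, detections: List[Box], labels: List[Box]) -> float:
        # Fraction of labeled vehicles centered in the cell that are matched
        # by a detection also centered in the cell (a crude proxy metric).
        def inside(b: Box) -> bool:
            cx, cy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
            return cell.x <= cx < cell.x + cell.w and cell.y <= cy < cell.y + cell.h
        gt = [b for b in labels if inside(b)]
        if not gt:
            return 1.0  # no labeled vehicles in this cell
        det = [b for b in detections if inside(b)]
        matched = sum(1 for g in gt if any(_iou(g, d) > 0.5 for d in det))
        return matched / len(gt)

    def extract_hair(cell: Cell, detections: List[Box], labels: List[Box],
                     acc_thresh: float = 0.8, min_size: float = 32.0) -> List[Cell]:
        # Recursively split the image into quadrants; keep quadrants whose
        # detection accuracy exceeds acc_thresh as part of the HAIR.
        if cell_accuracy(cell, detections, labels) >= acc_thresh:
            return [cell]
        if cell.w / 2 < min_size or cell.h / 2 < min_size:
            return []  # too small and still inaccurate: exclude from the HAIR
        half_w, half_h = cell.w / 2, cell.h / 2
        children = [Cell(cell.x + dx, cell.y + dy, half_w, half_h)
                    for dx in (0, half_w) for dy in (0, half_h)]
        hair: List[Cell] = []
        for child in children:
            hair.extend(extract_hair(child, detections, labels, acc_thresh, min_size))
        return hair

    def density_in_hair(detections: List[Box], hair: List[Cell]) -> int:
        # Count detected vehicles whose center falls inside any HAIR cell.
        def in_any(b: Box) -> bool:
            cx, cy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
            return any(c.x <= cx < c.x + c.w and c.y <= cy < c.y + c.h for c in hair)
        return sum(1 for b in detections if in_any(b))

In this sketch the HAIR is extracted once per camera from a small labeled sample, and subsequent density estimates simply count detections falling inside the retained cells, which is consistent with the abstract's claim that only a small amount of labeled data is needed per camera.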