Crafting Object Detection in Very Low Light

Abstract

Over the last decade, object detection, as a leading application in computer vision, has been intensively studied, heavily engineered, and widely applied in everyday life. However, existing object detection algorithms can easily break down in very dim environments due to the significantly low signal-to-noise ratio (SNR). Prepending a low-light image enhancement step before detection, as a common practice, increases the computation cost substantially, yet still does not yield satisfactory results. In this paper, we systematically investigate object detection in very low light and identify several design principles that are essential to a low-light detection system. Based upon these criteria, we design a practical low-light detection system that utilizes a realistic low-light synthetic pipeline as well as an auxiliary low-light recovery module. The former can transform any labeled images from existing object detection datasets into their low-light counterparts to facilitate end-to-end training, while the latter can boost low-light detection performance without adding extra computation cost at inference. Furthermore, we capture a real-world low-light object detection dataset containing more than two thousand paired low/normal-light images with instance-level annotations to support this line of work. Extensive experiments collectively show the promising results of our detection system in very low light, paving the way for real-world object detection in the dark. Our dataset is publicly available at https://github.com/ying-fu/LODDataset.
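The abstract does not spell out how the low-light synthetic pipeline is implemented; the sketch below is only a rough, hypothetical illustration of the general idea (darkening a clean image and injecting signal-dependent shot noise plus signal-independent read noise), not the paper's actual model. The function name synthesize_low_light and the parameters exposure_scale, read_noise_std, and full_well are assumptions introduced here for illustration.

import numpy as np

def synthesize_low_light(img, exposure_scale=0.05, read_noise_std=0.01,
                         full_well=1000.0, rng=None):
    """Illustrative low-light simulation (not the paper's exact pipeline).

    img            : float array in [0, 1], ideally linear (RAW-like) intensities.
    exposure_scale : fraction of the original exposure (smaller = darker).
    read_noise_std : standard deviation of the Gaussian read noise.
    full_well      : assumed photon count at saturation, controls shot-noise level.
    """
    rng = np.random.default_rng() if rng is None else rng
    dark = np.clip(img, 0.0, 1.0) * exposure_scale                 # reduce exposure
    photons = rng.poisson(dark * full_well) / full_well            # shot (Poisson) noise
    noisy = photons + rng.normal(0.0, read_noise_std, img.shape)   # read (Gaussian) noise
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

Under this kind of scheme, each synthesized dark image keeps the bounding-box labels of its normal-light source, which is what lets a detector be trained end-to-end on low-light data without any re-annotation.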

Publication
In The British Machine Vision Conference (BMVC), 2021
