Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions

2022 Nov 16;22(22). Epub 2022 Nov 16.

Language: English. Country: Switzerland. Medium: electronic.

Document type: journal article

Persistent link: https://www.medvik.cz/link/pmid36433451

Grant support
20-27034J Czech Science Foundation
CZ.02.1.01/0.0/0.0/16 019/0000765 MSMT

The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data covering all the situations the robots may encounter during routine operation. The workforce required for data collection and annotation is therefore a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an autonomous car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it collects data samples, which are subsequently processed into training samples for the neural networks. Since the tracking method is applied offline, it can exploit detections made both before the currently processed scan and in subsequent observations of the same scene, so the resulting annotations are of higher quality than the raw detections. Alongside the acquisition of the labels, the weather simulator alters the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, run offline, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner. We show that such a framework produces an effective detector and that the weather simulator-in-the-loop improves the detector's robustness. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort.
This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, code and models online.
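As a rough illustration of the two hand-coded ingredients the abstract describes, the sketch below clusters a 2D LiDAR scan, keeps car-sized clusters as automatic labels, and corrupts the scan with a crude fog proxy before the data would be fed to a network. This is a minimal stand-in under stated assumptions, not the authors' implementation: the single-linkage clustering, the car footprint thresholds and the fog model (drop distant returns, jitter the survivors) are illustrative choices, whereas the actual pipeline uses an off-the-shelf weather simulator and an offline tracker.

```python
import numpy as np

def euclidean_clusters(points, eps=0.5, min_pts=5):
    """Group 2D points by single-linkage within eps (O(n^2); fine for a sketch).
    Returns per-point labels; clusters smaller than min_pts are marked -2 (noise)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster_id = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, members = [i], [i]
        labels[i] = cluster_id
        while stack:
            j = stack.pop()
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[j], axis=1) < eps))[0]
            labels[near] = cluster_id
            stack.extend(near.tolist())
            members.extend(near.tolist())
        if len(members) < min_pts:
            labels[np.array(members)] = -2  # too sparse to be an object
        else:
            cluster_id += 1
    return labels

def detect_cars(points, length=(3.0, 6.0), width=(1.4, 2.2)):
    """Auto-label step: keep clusters whose axis-aligned extent matches
    an assumed car footprint (thresholds are illustrative)."""
    labels = euclidean_clusters(points)
    boxes = []
    for k in range(labels.max() + 1):
        c = points[labels == k]
        short_side, long_side = np.sort(c.max(axis=0) - c.min(axis=0))
        if width[0] <= short_side <= width[1] and length[0] <= long_side <= length[1]:
            boxes.append((c.min(axis=0), c.max(axis=0)))
    return boxes

def simulate_fog(points, max_range=25.0, drop_scale=0.02, noise_sigma=0.05, rng=None):
    """Crude fog proxy: returns beyond max_range vanish, nearer returns are
    dropped with probability growing with range, survivors get extra noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    r = np.linalg.norm(points, axis=1)
    keep = (r < max_range) & (rng.random(len(points)) > drop_scale * r)
    return points[keep] + rng.normal(0.0, noise_sigma, (int(keep.sum()), points.shape[1]))
```

In the simulator-in-the-loop scheme the abstract outlines, labels would come from `detect_cars` on the clean scan (refined by forward-backward tracking), while the network input would be `simulate_fog(scan)`, so the detector learns to cope with weather it never physically experienced.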

