Distributed Camera Subsystem for Obstacle Detection
Language: English; Country: Switzerland; Medium: electronic
Document type: journal article
Grant support
CZ.02.1.01/0.0/0.0/17_049/0008425: Research Platform focused on Industry 4.0 and Robotics in Ostrava Agglomeration project
SP2022/67: Specific research project
PubMed: 35746381
PubMed Central: PMC9228584
DOI: 10.3390/s22124588
PII: s22124588
- Keywords
- collaboration, distributed processing, human–robot interaction, obstacles detection, sensors network, workspace monitoring
- MeSH
- Computer Communication Networks *
- Publication type
- Journal Article
This work focuses on improving a camera system for sensing a workspace in which dynamic obstacles need to be detected. The current state-of-the-art solution (MoveIt!) processes data from the cameras in a centralized manner, and the cameras have to be registered before the system starts. Our solution enables distributed data processing and a dynamic change in the number of sensors at runtime. The distributed camera data processing is implemented on a dedicated control unit, on which filtering is performed by comparing the real depth image with the expected one. As part of a performance benchmark, the speed of processing all sensor data into a global voxel map was measured for both the centralized system (MoveIt!) and the new distributed system. The distributed system is less sensitive to the number of cameras, provides a more stable framerate, and allows cameras to be added or removed on the go. The effects of voxel grid size and camera resolution were also compared in the benchmark, where the distributed system showed better results. Finally, the overhead of data transmission in the network was discussed; here, the distributed system is considerably more efficient. The decentralized system proves to be 38.7% faster with one camera and 71.5% faster with four cameras.
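The filtering step described above (comparing a measured depth image against the expected depth of the known static scene) can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the function names, the 3 cm tolerance, and the pinhole intrinsics are assumptions. Only pixels that return closer than the expected static scene are kept as obstacle candidates, and the surviving pixels are then deprojected into camera-frame 3D points.

```python
# Minimal sketch of the described filtering: keep only pixels whose measured
# depth is closer than the expected (pre-rendered static scene) depth.
# All names, the tolerance value, and the intrinsics are illustrative.
import numpy as np

def filter_dynamic_pixels(measured: np.ndarray, expected: np.ndarray,
                          tolerance_m: float = 0.03) -> np.ndarray:
    """Boolean mask of obstacle-candidate pixels (depths in metres)."""
    valid = measured > 0.0                        # drop no-return pixels
    closer = measured < (expected - tolerance_m)  # occludes the static scene
    return valid & closer

def deproject(mask: np.ndarray, depth: np.ndarray,
              fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Pinhole deprojection of masked pixels into N x 3 camera-frame points."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)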
Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70800 Ostrava, Czech Republic
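The merge of per-camera results into a global voxel map, and the runtime change in camera count, can likewise be sketched. Again, this is an assumption-laden illustration (the class and method names are invented, not taken from the paper): each control unit quantizes its world-frame obstacle points to integer voxel indices, and the global map is simply the union of the per-camera sets, so a camera joining or leaving only adds or removes one entry.

```python
import numpy as np

def to_voxel_keys(points_world: np.ndarray, voxel_size: float) -> set:
    """Quantize N x 3 world-frame points into integer voxel indices."""
    idx = np.floor(points_world / voxel_size).astype(np.int64)
    return {tuple(k) for k in idx}

class GlobalVoxelMap:
    """Union of per-camera occupied-voxel sets; cameras may come and go."""
    def __init__(self, voxel_size: float):
        self.voxel_size = voxel_size
        self.per_camera = {}  # camera_id -> set of voxel index tuples

    def update(self, camera_id: str, points_world: np.ndarray) -> None:
        # Replacing the whole set per frame lets a camera be added or
        # removed at runtime without re-registering the system.
        self.per_camera[camera_id] = to_voxel_keys(points_world,
                                                   self.voxel_size)

    def remove_camera(self, camera_id: str) -> None:
        self.per_camera.pop(camera_id, None)

    def occupied(self) -> set:
        result = set()
        for voxels in self.per_camera.values():
            result |= voxels
        return result
```

Transmitting only such sparse voxel indices, rather than full depth frames, is one plausible source of the lower network overhead the abstract reports for the distributed system.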