Machine Learning-Based Plant Detection Algorithms to Automate Counting Tasks Using 3D Canopy Scans
Language: English; Country: Switzerland; Medium: electronic
Document type: journal article
Grant support
- 2019B0009 (Life Sciences 4.0), Czech University of Life Sciences Prague
- CZ.02.2.69/0.0/0.0/16_027/0008366, Czech University of Life Sciences Prague
- Early Career Research Award, Department of Science and Technology, Government of India
PubMed: 34884027
PubMed Central: PMC8659963
DOI: 10.3390/s21238022
PII: s21238022
- Keywords
- 3D point clouds, computer vision, machine learning, phenotyping, plant detection
- MeSH
- Algorithms *
- Neural Networks, Computer
- Machine Learning *
- Publication type
- Journal Article
This study tested whether machine learning (ML) methods can effectively separate individual plants from complex 3D canopy laser scans as a prerequisite to analyzing particular plant features. For this, we scanned mung bean and chickpea crops with PlantEye® laser scanners. Firstly, we segmented the crop canopies from the background in 3D space using the Region Growing Segmentation algorithm. Then, Convolutional Neural Network (CNN)-based ML algorithms were fine-tuned for plant counting. Application of the CNN-based processing architecture was possible only after we reduced the dimensionality of the data to 2D. This allowed for the identification of individual plants and their counting with an accuracy of 93.18% and 92.87% for mung bean and chickpea plants, respectively. These steps were connected to the phenotyping pipeline, which can now replace manual counting operations that are inefficient, costly, and error-prone. The use of CNN in this study was made possible by dimensionality reduction, the addition of height information as color, and the consequent application of a 2D CNN-based approach. We found a wide gap in the use of ML on 3D information. This gap will have to be addressed, especially for more complex plant feature extractions, which we intend to implement through further research.
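The segmentation step described above (Region Growing Segmentation of the canopy from the background in 3D space) can be pictured with a minimal sketch. This is an illustrative reading, not the authors' code: the neighborhood size k, the smoothness angle, and the NumPy/scikit-learn implementation are all assumptions.

```python
# Minimal region-growing segmentation over an (N, 3) point cloud: grow smooth
# surface patches by comparing neighbor normals; canopy vs. ground regions can
# then be picked downstream (e.g., by mean height). Parameters are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_normals(points, k=20):
    """Per-point unit normals via PCA of each k-nearest-neighbor patch."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points, dtype=float)
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        _, vecs = np.linalg.eigh(patch.T @ patch)  # eigenvalues ascending
        normals[i] = vecs[:, 0]                    # smallest-eigenvalue axis
    return normals

def region_growing(points, k=20, angle_deg=8.0):
    """Flood-fill regions: a neighbor joins when its normal deviates from the
    current point's normal by less than angle_deg degrees."""
    normals = estimate_normals(points, k)
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    cos_thr = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in idx[p]:
                if labels[q] == -1 and abs(normals[p] @ normals[q]) >= cos_thr:
                    labels[q] = region
                    stack.append(q)
        region += 1
    return labels
```

Canonical implementations (e.g., in the Point Cloud Library) additionally order seeds by curvature; this stripped-down version shows only the smoothness constraint that lets flat ground and curved canopy fall into different regions.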
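The dimensionality-reduction trick, flattening the cloud to 2D while encoding height as color so a standard 2D CNN can be applied, can likewise be sketched. The cell size, the max-height aggregation, and the viridis colormap below are assumptions, not the paper's settings.

```python
# Rasterize a segmented canopy cloud into a top-down RGB image whose color
# encodes per-cell maximum height; such images can feed a 2D CNN detector.
import numpy as np
from matplotlib import cm

def cloud_to_height_image(points, cell=2.0):
    """points: (N, 3) array in scanner units; cell: raster cell edge length.
    Returns an (H, W, 3) float RGB image with height mapped through a colormap."""
    xy = points[:, :2]
    z = points[:, 2] - points[:, 2].min()
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    rows, cols = ij[:, 1], ij[:, 0]
    img = np.zeros((rows.max() + 1, cols.max() + 1))
    np.maximum.at(img, (rows, cols), z)  # keep the tallest point per cell
    if img.max() > 0:
        img /= img.max()                 # normalize heights to [0, 1]
    return cm.viridis(img)[..., :3]      # drop the alpha channel
```

A 2D object detector fine-tuned on such images (the study reports CNN-based detection; architectures like Faster R-CNN, cited below, are a typical choice) then yields per-plot plant counts as the number of detected bounding boxes.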
Department of Computer Engineering, Faculty of Engineering, Cukurova University, Adana 01330, Turkey
Phenospex B.V., Jan Campertstraat 11, 6416 SG Heerlen, The Netherlands
References (show more in PubMed)
Tardieu F., Cabrera-Bosquet L., Pridmore T., Bennett M. Plant phenomics, from sensors to knowledge. Curr. Biol. 2017;27:R770–R783. doi: 10.1016/j.cub.2017.05.055. PubMed DOI
Li L., Zhang Q., Huang D. A Review of Imaging Techniques for Plant Phenotyping. Sensors. 2014;14:20078–20111. doi: 10.3390/s141120078. PubMed DOI PMC
Pommier C., Garnett T., Lawrence-Dill C.J., Pridmore T., Watt M., Pieruschka R., Ghamkhar K. Editorial: Phenotyping; From Plant, to Data, to Impact and Highlights of the International Plant Phenotyping Symposium-IPPS 2018. Front. Plant Sci. 2020;11:1907. doi: 10.3389/fpls.2020.618342. PubMed DOI PMC
Kholová J., Urban M.O., Cock J., Arcos J., Arnaud E., Aytekin D., Azevedo V., Barnes A.P., Ceccarelli S., Chavarriaga P., et al. In pursuit of a better world: Crop improvement and the CGIAR. J. Exp. Bot. 2021;72:5158–5179. doi: 10.1093/jxb/erab226. PubMed DOI PMC
Vadez V., Kholová J., Hummel G., Zhokhavets U., Gupta S.K., Hash C.T. LeasyScan: A novel concept combining 3D imaging and lysimetry for high-throughput phenotyping of traits controlling plant water budget. J. Exp. Bot. 2015;66:5581–5593. doi: 10.1093/jxb/erv251. PubMed DOI PMC
Furbank R.T., Tester M. Phenomics—Technologies to relieve the phenotyping bottleneck. Trends Plant Sci. 2011;16:635–644. doi: 10.1016/j.tplants.2011.09.005. PubMed DOI
Brown T.B., Cheng R., Sirault X.R., Rungrat T., Murray K.D., Trtilek M., Furbank R.T., Badger M., Pogson B.J., Borevitz J.O. TraitCapture: Genomic and environment modelling of plant phenomic data. Curr. Opin. Plant. Biol. 2014;18:73–79. doi: 10.1016/j.pbi.2014.02.002. PubMed DOI
Virlet N., Sabermanesh K., Sadeghi-Tehran P., Hawkesford M.J. Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring. Funct. Plant Biol. 2016;44:143–153. doi: 10.1071/FP16163. PubMed DOI
Fiorani F., Schurr U. Future Scenarios for Plant Phenotyping. Annu. Rev. Plant Biol. 2013;64:267–291. doi: 10.1146/annurev-arplant-050312-120137. PubMed DOI
Tardieu F., Hammer G. Designing crops for new challenges. Eur. J. Agron. 2012;42:1–2. doi: 10.1016/j.eja.2012.05.006. DOI
Tardieu F., Simonneau T., Muller B. The Physiological Basis of Drought Tolerance in Crop Plants: A Scenario-Dependent Probabilistic Approach. Annu. Rev. Plant Biol. 2018;69:733–759. doi: 10.1146/annurev-arplant-042817-040218. PubMed DOI
Kholová J., Murugesan T., Kaliamoorthy S., Malayee S., Baddam R., Hammer G.L., McLean G., Deshpande S., Hash C.T., Craufurd P.Q., et al. Modelling the effect of plant water use traits on yield and stay-green expression in sorghum. Funct. Plant Biol. 2014;41:1019–1034. doi: 10.1071/FP13355. PubMed DOI
Sivasakthi K., Thudi M., Tharanya M., Kale S.M., Kholová J., Halime M.H., Jaganathan D., Baddam R., Thirunalasundari T., Gaur P.M., et al. Plant vigour QTLs co-map with an earlier reported QTL hotspot for drought tolerance while water saving QTLs map in other regions of the chickpea genome. BMC Plant Biol. 2018;18:29. doi: 10.1186/s12870-018-1245-1. PubMed DOI PMC
Sivasakthi K., Marques E., Kalungwana N., Carrasquilla-Garcia N., Chang P.L., Bergmann E.M., Bueno E., Cordeiro M., Sani S.G.A., Udupa S.M., et al. Functional Dissection of the Chickpea (Cicer arietinum L.) Stay-Green Phenotype Associated with Molecular Variation at an Ortholog of Mendel’s I Gene for Cotyledon Color: Implications for Crop Production and Carotenoid Biofortification. Int. J. Mol. Sci. 2019;20:5562. doi: 10.3390/ijms20225562. PubMed DOI PMC
Tharanya M., Kholova J., Sivasakthi K., Seghal D., Hash C.T., Raj B., Srivastava R.K., Baddam R., Thirunalasundari T., Yadav R., et al. Quantitative trait loci (QTLs) for water use and crop production traits co-locate with major QTL for tolerance to water deficit in a fine-mapping population of pearl millet (Pennisetum glaucum L. R.Br.) Theor. Appl. Genet. 2018;131:1509–1529. doi: 10.1007/s00122-018-3094-6. PubMed DOI
Kar S., Garin V., Kholová J., Vadez V., Durbha S.S., Tanaka R., Iwata H., Urban M.O., Adinarayana J. SpaTemHTP: A Data Analysis Pipeline for Efficient Processing and Utilization of Temporal High-Throughput Phenotyping Data. Front. Plant Sci. 2020;11:552509. doi: 10.3389/fpls.2020.552509. PubMed DOI PMC
Kar S., Tanaka R., Korbu L.B., Kholová J., Iwata H., Durbha S.S., Adinarayana J., Vadez V. Automated discretization of ‘transpiration restriction to increasing VPD’ features from outdoors high-throughput phenotyping data. Plant Methods. 2020;16:140. doi: 10.1186/s13007-020-00680-8. PubMed DOI PMC
Fanourakis D., Briese C., Max J.F., Kleinen S., Putz A., Fiorani F., Ulbrich A., Schurr U. Rapid determination of leaf area and plant height by using light curtain arrays in four species with contrasting shoot architecture. Plant Methods. 2014;10:9. doi: 10.1186/1746-4811-10-9. PubMed DOI PMC
Pound M.P., Atkinson J., Townsend A.J., Wilson M., Griffiths M., Jackson A., Bulat A., Tzimiropoulos G., Wells D., Murchie E., et al. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. GigaScience. 2017;6:gix083. doi: 10.1093/gigascience/gix083. PubMed DOI PMC
Grinblat G.L., Uzal L.C., Larese M.G., Granitto P. Deep learning for plant identification using vein morphological patterns. Comput. Electron. Agric. 2016;127:418–424. doi: 10.1016/j.compag.2016.07.003. DOI
Sun Y., Liu Y., Wang G., Zhang H. Deep Learning for Plant Identification in Natural Environment. Comput. Intell. Neurosci. 2017;2017:7361042. doi: 10.1155/2017/7361042. PubMed DOI PMC
Guerrero J., Pajares G., Montalvo M., Romeo J., Guijarro M. Support Vector Machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012;39:11149–11155. doi: 10.1016/j.eswa.2012.03.040. DOI
Tellaeche A., Burgos-Artizzu X.P., Pajares G., Ribeiro A. A vision-based method for weeds identification through the Bayesian decision theory. Pattern Recognit. 2008;41:521–530. doi: 10.1016/j.patcog.2007.07.007. DOI
Sakamoto T., Gitelson A.A., Nguy-Robertson A.L., Arkebauer T.J., Wardlow B.D., Suyker A.E., Verma S.B., Shibayama M. An alternative method using digital cameras for continuous monitoring of crop status. Agric. For. Meteorol. 2012;154:113–126. doi: 10.1016/j.agrformet.2011.10.014. DOI
Vega F.A., Carvajal-Ramírez F., Pérez-Saiz M., Rosúa F.O. Multi-temporal imaging using an unmanned aerial vehicle for monitoring a sunflower crop. Biosyst. Eng. 2015;132:19–27. doi: 10.1016/j.biosystemseng.2015.01.008. DOI
Yeh Y.-H.F., Lai T.-C., Liu T.-Y., Liu C.-C., Chung W.-C., Lin T.-T. An automated growth measurement system for leafy vegetables. Biosyst. Eng. 2014;117:43–50. doi: 10.1016/j.biosystemseng.2013.08.011. DOI
Gong A., Yu J., He Y., Qiu Z. Citrus yield estimation based on images processed by an Android mobile phone. Biosyst. Eng. 2013;115:162–170. doi: 10.1016/j.biosystemseng.2013.03.009. DOI
Payne A., Walsh K., Subedi P., Jarvis D. Estimation of mango crop yield using image analysis—Segmentation method. Comput. Electron. Agric. 2013;91:57–64. doi: 10.1016/j.compag.2012.11.009. DOI
Polder G., van der Heijden G.W., van Doorn J., Baltissen T.A. Automatic detection of tulip breaking virus (TBV) in tulip fields using machine vision. Biosyst. Eng. 2014;117:35–42. doi: 10.1016/j.biosystemseng.2013.05.010. DOI
Pourreza A., Lee W.S., Etxeberria E., Banerjee A. An evaluation of a vision-based sensor performance in Huanglongbing disease identification. Biosyst. Eng. 2015;130:13–22. doi: 10.1016/j.biosystemseng.2014.11.013. DOI
Valiente-González J.M., Andreu-García G., Potter P., Rodas-Jordá A. Automatic corn (Zea mays) kernel inspection system using novelty detection based on principal component analysis. Biosyst. Eng. 2014;117:94–103. doi: 10.1016/j.biosystemseng.2013.09.003. DOI
Pavlíček J., Jarolímek J., Jarolímek J., Pavlíčková P., Dvořák S., Pavlík J., Hanzlík P. Automated Wildlife Recognition. Agris-Line Pap. Econ. Inform. 2018;10:51–60. doi: 10.7160/aol.2018.100105. DOI
Kamilaris A., Prenafeta-Boldú F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018;147:70–90. doi: 10.1016/j.compag.2018.02.016. DOI
Itakura K., Hosoi F. Automatic individual tree detection and canopy segmentation from three-dimensional point cloud images obtained from ground-based lidar. J. Agric. Meteorol. 2018;74:109–113. doi: 10.2480/agrmet.D-18-00012. DOI
Malambo L., Popescu S., Horne D., Pugh N., Rooney W. Automated detection and measurement of individual sorghum panicles using density-based clustering of terrestrial lidar data. ISPRS J. Photogramm. Remote Sens. 2019;149:1–13. doi: 10.1016/j.isprsjprs.2018.12.015. DOI
Weiss U., Biber P. Plant detection and mapping for agricultural robots using a 3D LIDAR sensor. Robot. Auton. Syst. 2011;59:265–273. doi: 10.1016/j.robot.2011.02.011. DOI
Ugarriza L.G., Saber E., Vantaram S.R., Amuso V., Shaw M., Bhaskar R. Automatic Image Segmentation by Dynamic Region Growth and Multiresolution Merging. IEEE Trans. Image Process. 2009;18:2275–2288. doi: 10.1109/TIP.2009.2025555. PubMed DOI
Zeineldin R.A., El-Fishawy N.A. A Survey of RANSAC enhancements for Plane Detection in 3D Point Clouds. Menoufia J. Electron. Eng. Res. 2017;26:519–537. doi: 10.21608/mjeer.2017.63627. DOI
Rusu R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. KI Künstliche Intell. 2010;24:345–348. doi: 10.1007/s13218-010-0059-6. DOI
Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017;39:1137–1149. doi: 10.1109/TPAMI.2016.2577031. PubMed DOI
Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., Erhan D., Vanhoucke V., Rabinovich A. Going deeper with convolutions; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Boston, MA, USA. 7–12 June 2015.
Maturana D., Scherer S. VoxNet: A 3D convolutional neural network for real-time object recognition; Proceedings of the IEEE International Conference on Intelligent Robots and Systems; Hamburg, Germany. 25 September–2 October 2015.
Kim C., Lee J., Han T., Kim Y.-M. A hybrid framework combining background subtraction and deep neural networks for rapid person detection. J. Big Data. 2018;5:22. doi: 10.1186/s40537-018-0131-x. DOI
Mohamed S.S., Tahir N.M., Adnan R. Background modelling and background subtraction performance for object detection; Proceedings of the 2010 6th International Colloquium on Signal Processing and Its Applications (CSPA 2010); Malacca, Malaysia. 21–23 May 2010.
Chen S., Zheng L., Zhang Y., Sun Z., Xu K. VERAM: View-Enhanced Recurrent Attention Model for 3D Shape Classification. IEEE Trans. Vis. Comput. Graph. 2019;25:3244–3257. doi: 10.1109/TVCG.2018.2866793. PubMed DOI
Qi C.R., Su H., Mo K., Guibas L.J. PointNet: Deep learning on point sets for 3D classification and segmentation; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA. 21–26 July 2017.
Yavartanoo M., Kim E.Y., Lee K.M. SPNet: Deep 3D object classification and retrieval using stereographic projection. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer; Berlin/Heidelberg, Germany: 2019.
Xie Y., Tian J., Zhu X.X. Linking Points with Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote Sens. Mag. 2020;8:38–59. doi: 10.1109/MGRS.2019.2937630. DOI
Ioffe S., Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift; Proceedings of the 32nd International Conference on Machine Learning (ICML); Lille, France. 7–9 July 2015.
Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z. Rethinking the inception architecture for computer vision; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 27–30 June 2016; pp. 2818–2826.
He K., Zhang X.Y., Ren S.Q., Sun J. Deep residual learning for image recognition; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 27–30 June 2016.