Legume Research, Volume 47, Issue 10 (October 2024): 1715-1722

Leveraging Image Analysis for High-throughput Phenotyping of Legume Plants

Bong-Hyun Kim1,*
1Department of Computer Engineering, Seowon University, 377-3, Musimseo-ro, Seowon-gu, Cheongju-si, Chungcheongbuk-do, Republic of Korea.
  • Submitted: 27-03-2024

  • Accepted: 22-06-2024

  • First Online: 13-07-2024

  • DOI: 10.18805/LRF-806

Cite article: Kim Bong-Hyun (2024). Leveraging Image Analysis for High-throughput Phenotyping of Legume Plants. Legume Research. 47(10): 1715-1722. doi: 10.18805/LRF-806.

Background: The advances achieved in artificial intelligence (AI) over recent decades have not yet been matched by agricultural phenotyping approaches that are both rapid and precise. Efficient crop phenotyping technologies are needed to strengthen crop improvement programs and meet the projected future demand for food.

Methods: This work demonstrates a method for non-destructive phenotyping of the physiological state of plants using cutting-edge image processing methods in conjunction with chlorophyll fluorescence imaging. Key fluorescence metrics, such as Fv/Fm and NPQ, were extracted from images taken at different phases of development. In addition, this research explores the transformative role of automated image analysis in high-throughput phenotyping of legume traits. A comprehensive examination of recent studies reveals the diverse applications of machine learning and deep learning algorithms in capturing morphological traits, assessing physiological parameters and detecting stress and diseases in various legume species. The comparative analysis underscores the superiority of automated systems over traditional methods, emphasizing scalability and efficiency. Challenges, including algorithm sensitivity and environmental variability, are identified, urging further refinement. Recommendations advocate for standardized metrics, interdisciplinary collaborations and user-friendly platforms to enhance accessibility. As the field evolves, the integration of automated image analysis holds promise for revolutionizing legume phenotyping, accelerating crop improvement and contributing to global food security in sustainable agriculture.

Result: The findings demonstrate that the proposed method is effective in illuminating how plants respond to their environment, thereby supporting advances in plant phenotyping and agricultural research.

The advent of automated image analysis has ushered in a new era in plant research, particularly in the high-throughput phenotyping of legume traits. Legumes, vital contributors to global agriculture and food security, demand efficient and accurate phenotyping methodologies for crop improvement programs. Traditional phenotyping methods, reliant on manual measurements, pose challenges in terms of scalability and precision. In response, automated image analysis has emerged as a transformative solution, leveraging advancements in machine learning and computer vision. Digital imaging technology is widely used in the evaluation of legume seed traits and has proven useful in high-throughput phenotyping (Margapuri et al., 2021; Shariff et al., 2010). ImageJ, CellProfiler, WinSEEDLE, SmartGrain and P-TRAP are open-source software applications that use digital imaging to rapidly calculate the 2D characteristics of seeds. However, digital imaging struggles to capture 3D characteristics, including volume, surface area and thickness. Vision technologies and 3D reconstruction are utilized across diverse engineering disciplines. Chen et al. (2020) enhanced the 3D perception of the central stock of orchard bananas using adaptive multi-vision technology. Firatligil-Durmuş et al. (2010) identified round or elongated fruits on plants in their natural surroundings and directed harvesting robots to collect them automatically using a 3D fruit detection algorithm based on color, depth and shape. Research in agriculture is actively exploring the use of three-dimensional (3D) technologies for measurement purposes (Yang et al., 2020). The technique can be used to quantify leaf area, leaf angle, stems and shoots, fruit and seeds (Miao et al., 2021). Given the large sample sizes involved in high-throughput phenotyping analysis, it is crucial to explore faster and more automated methods for processing data in batches (Xu et al., 2018). Soybeans, peas, black beans, red beans and mung beans are common legume seeds that hold significant dietary value globally. High-throughput legume seed phenotyping is important because it enables a more convenient assessment of both the yield and quality of legume seeds. Legume seeds exhibit a diverse range of forms and sizes; the typical shapes are roughly spherical or ellipsoidal (Cervantes and Martín Gómez, 2019). Soybeans and peas exemplify legume seeds with spherical and ellipsoidal shapes, while notable examples of ellipsoidal legume seeds include black beans, red beans and mung beans (Xu et al., 2018). Spherical or elliptical seeds possess symmetry, which can be exploited to expedite batch 3D modeling (Yang et al., 2021).
       
This article examined the growing body of research focused on employing automated image processing for legume phenotyping. The critical need for accurate phenotypic characterization in legumes is emphasized given their multifaceted roles, encompassing improvements in nutritional content and heightened resilience amid shifting environmental conditions. Placing our study within the wider context of plant phenotyping, the introduction draws attention to the unique challenges faced by researchers investigating legumes and underscores the necessity for innovative methodologies to address these challenges.
       
Plant phenotyping provides the basis for comprehending how plants react to environmental pressures and genetic differences, which in turn helps in enhancing crops and ensuring agricultural sustainability. Chlorophyll fluorescence imaging is a valuable technique for evaluating the physiological condition of plants without causing damage. It provides valuable information on the efficiency of photosynthesis and the ability of plants to withstand stress. However, manually examining fluorescence images is a laborious process that is prone to human error. Consequently, sophisticated image processing methods are increasingly used to automate the analysis procedure and extract quantitative data from fluorescence images. This paper presents a complete method for analysing chlorophyll fluorescence images and measuring important fluorescence metrics, such as Fv/Fm and NPQ, in order to improve our knowledge of how plants respond to environmental stimuli.
       
Legumes, encompassing a diverse group of plants such as soybeans, chickpeas and lentils, play a pivotal role in global agriculture due to their nutritional value and soil-enriching properties through symbiotic nitrogen fixation. Traditional phenotyping methods for legume traits have long relied on manual measurements, but limitations in scalability and precision have driven a quest for innovative approaches. Historically, legume phenotyping faced challenges in capturing the intricacies of morphological traits critical for crop improvement. Conventional methods, often labor-intensive and time-consuming, struggled to keep pace with the demands for high-throughput analysis (Yang et al., 2020). Recognizing these limitations, recent literature showcases a surge in studies employing automated image analysis techniques. Machine learning algorithms, including support vector machines (SVM) and random forests, demonstrate efficacy in accurately classifying and quantifying morphological traits such as leaf size, shape and canopy architecture. The literature highlights their ability to process large datasets rapidly, enabling efficient phenotypic characterization (Elbasi et al., 2023; Cho, 2024; Kim and AlZubi, 2024; Min et al., 2024; Porwal et al., 2024; Wasik and Pattinson, 2024; Maltare, 2023).
       
Deep learning approaches, particularly convolutional neural networks (CNNs), have gained prominence in image-based phenotyping. Their capacity to learn hierarchical features and recognize complex patterns makes them invaluable for legume trait assessment. Studies showcase successful applications of CNNs in identifying nuanced traits, from nodulation patterns to subtle changes in leaf color indicative of stress conditions (Yu et al., 2023). The adaptability of deep learning to diverse legume species emphasizes its potential as a universal tool for comprehensive phenotyping. In parallel, traditional computer vision methods, though eclipsed by machine learning and deep learning, continue to find relevance in certain applications. Studies highlight their utility in addressing specific challenges, such as the precise quantification of root traits through advanced image segmentation techniques (Wang and Su, 2022). The literature review elucidates how these methods contribute to a holistic understanding of legume phenotypes, complementing the strengths of machine learning and deep learning approaches.
               
As the literature unfolds, it becomes evident that automated image analysis extends beyond morphological trait assessment. Physiological parameters, including chlorophyll content, stomatal conductance and water use efficiency, emerge as focal points in legume phenotyping. The integration of these parameters into automated systems facilitates an understanding of plant health, stress responses and overall crop performance (Bertolino et al., 2019). These insights are crucial for breeding programs aiming to develop legume varieties that are resilient to changing environmental conditions. Moreover, the literature underlines the significance of automated image analysis in stress and disease detection within legume crops (Warman et al., 2021). Early identification of stressors, facilitated by sophisticated algorithms and image-based diagnostics, equips researchers and farmers with timely information for targeted interventions (Holzinger et al., 2023). The potential for automated systems to contribute to sustainable agriculture by minimizing the reliance on chemical interventions aligns with global efforts towards environmentally conscious crop management (Balaska et al., 2023).
Data collection and preprocessing
 
Plant samples at various developmental stages were imaged using fluorescence imaging technology to produce chlorophyll fluorescence images (Fig 1). The images underwent preprocessing to improve clarity and contrast, followed by thresholding to produce binary masks that distinguish plant features. Small gaps in the binary images were filled to ensure precise object recognition. A region of interest was defined to focus the analysis on specific plant regions. Fluorescence parameters such as Fv/Fm and NPQ were then computed from accepted formulae, and pseudocolored images were generated to visualize the spatial distribution of these parameters. The full analysis pipeline was implemented using the PlantCV Python package.

Fig 1: Phenotyping data at different phases.
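The authors implemented this pipeline with the PlantCV package, but the paper does not reproduce the code. The following is a minimal sketch of the same steps (thresholding, gap filling, ROI masking, Fv/Fm and NPQ computation, pseudocoloring) using only NumPy, SciPy and Matplotlib; the file names, threshold value and ROI coordinates are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

# Load the three fluorescence frames (illustrative file names; in practice
# these come from the fluorescence imager).
fo  = np.load("Fo.npy").astype(float)      # dark-adapted minimal fluorescence
fm  = np.load("Fm.npy").astype(float)      # dark-adapted maximal fluorescence
fmp = np.load("FmpLss.npy").astype(float)  # light-adapted maximal fluorescence (Fm')

# 1. Threshold Fm to separate plant pixels from background (value is an assumption).
mask = fm > 200

# 2. Fill small gaps in the binary mask so plant objects are contiguous.
mask = ndimage.binary_fill_holes(mask)

# 3. Restrict the analysis to a rectangular region of interest.
roi = np.zeros_like(mask)
roi[100:400, 150:450] = True   # illustrative ROI coordinates
mask &= roi

# 4. Compute the fluorescence parameters (equations 1 and 2 in the text).
with np.errstate(divide="ignore", invalid="ignore"):
    fvfm = np.where(mask & (fm > 0), (fm - fo) / fm, np.nan)
    npq  = np.where(mask & (fmp > 0), (fm - fmp) / fmp, np.nan)

# 5. Pseudocolor the parameter maps to show their spatial distribution.
for name, img in [("Fv/Fm", fvfm), ("NPQ", npq)]:
    plt.figure()
    plt.imshow(img, cmap="viridis")
    plt.colorbar(label=name)
    plt.title(f"Spatial distribution of {name}")
plt.show()
```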


       
The phenotyping data in the given experiment comprises images taken at different development phases, with each image corresponding to a distinct feature of chlorophyll fluorescence.
Fo: This image depicts the baseline fluorescence (Fo), the lowest level of fluorescence produced by chlorophyll molecules in the dark-adapted state. Fo quantifies the level of chlorophyll fluorescence when photosystem II is not absorbing any light energy.
Fm: This image depicts the maximum fluorescence (Fm), the greatest amount of light released by chlorophyll molecules when all photosystem II reaction centres are closed. Fm quantifies the maximum achievable quantum yield of photosystem II.
FsLss: This image depicts the steady-state fluorescence (FsLss) produced by chlorophyll molecules under constant illumination. FsLss provides valuable information on steady-state photosynthetic efficiency and plant functioning.
FmpLss (Fm'): This image represents the maximum fluorescence in the light-adapted state (FmpLss), corresponding to the maximum fluorescence released by chlorophyll molecules under steady illumination. FmpLss reflects the highest possible fluorescence output of photosystem II when exposed to light.
               
Eliminating the background and converting the images to grayscale are essential preprocessing steps in image analysis, particularly in plant science and phenotyping research. The different stages of phenotyping data preparation are shown in Fig 2. These steps serve several essential functions for precise and efficient analysis. First, removing the background improves the sharpness and clarity of the objects of interest, such as plants, by eliminating distracting elements. This enhances the distinction between the focal objects and their surroundings, making it easier to accurately identify and outline plant structures.

Fig 2: Different stages of data preparation of phenotyping.
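As a hedged illustration of these two preprocessing steps, the snippet below converts an RGB frame to grayscale and removes the background with Otsu thresholding followed by a morphological clean-up; the file name and kernel size are assumptions, and PlantCV provides equivalent built-in functions.

```python
import cv2
import numpy as np

# Load an RGB plant image (illustrative file name).
bgr = cv2.imread("plant_rgb.png")

# Convert to grayscale: a single intensity channel simplifies thresholding.
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

# Otsu's method picks a global threshold separating plant from background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing fills small gaps inside the plant mask.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Apply the mask: background pixels become zero, the plant is retained.
foreground = cv2.bitwise_and(bgr, bgr, mask=mask)
cv2.imwrite("plant_foreground.png", foreground)
```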

Phenotyping data includes the collection of quantitative information on observable qualities or characteristics of organisms, especially plants, in order to understand their genetic composition, reactions to environmental stimuli and overall performance. In chlorophyll fluorescence imaging, phenotyping data typically consist of parameters derived from fluorescence images. These parameters provide valuable information about plant health, stress reactions and photosynthetic efficiency.
       
In plant phenotyping, plant traits and responses are examined closely. Photosynthetic efficiency is evaluated by comparing the variable fluorescence of a plant (Fv = Fm - Fo) with its maximum fluorescence (Fm) (equation 1).

Fv/Fm = (Fm - Fo)/Fm                ...(1)
       

Non-photochemical quenching (NPQ) quantifies the process by which plants safely release excess light energy as heat in order to protect themselves from damage. NPQ analysis reveals the mechanisms plants use to cope with excessive light exposure (equation 2). Fig 3 shows the photosynthetic efficiency and non-photochemical quenching of plants.
NPQ = (Fm - Fm')/Fm'                ...(2)

Fig 3: Photosynthesis efficiency and non-photochemical quenching of plants.



A greater ratio of Fv/Fm denotes efficient utilisation of absorbed light energy for photosynthesis, indicating the presence of healthy and actively photosynthesizing plants. Furthermore, NPQ provides valuable information on the plants’ adaptive techniques for handling excessive light energy, where positive values indicate the presence of efficient photoprotection systems. Gaining insight into these processes improves our understanding of the ability of plants to withstand and thrive in different conditions, which in turn helps us develop methods for sustainable agriculture and ecosystem management.
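As a worked example with illustrative numbers (not measurements from this study): a dark-adapted leaf with Fo = 0.15 and Fm = 0.75 relative units gives Fv/Fm = (0.75 - 0.15)/0.75 = 0.80, a value typical of healthy, unstressed plants. If the light-adapted maximum falls to Fm' = 0.50, then NPQ = (0.75 - 0.50)/0.50 = 0.50, indicating that photoprotective heat dissipation is active.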
       
A notable portion of the selected studies utilized machine learning algorithms for legume trait phenotyping. These algorithms demonstrated efficacy in classifying and quantifying various traits, contributing to the automation of the phenotyping process. Commonly applied algorithms included CNNs and SVMs, with notable success in accurately predicting phenotypic traits. Deep learning methods emerged as a prominent category within the selected studies, showcasing the potential of CNNs and other deep learning architectures to analyze complex legume traits (Bouguettaya et al., 2022). Studies employing deep learning reported significant advancements in the accurate identification and classification of plant phenotypes. Fig 4 shows the steps involved in the plant phenotyping process. While machine learning and deep learning gained prominence, some studies continued to leverage traditional computer vision methods. These approaches demonstrated utility in certain aspects of legume trait phenotyping, particularly growth rate, seed quality and drought tolerance (Lee et al., 2018).
       

Fig 4: Steps involved in plant segmentation and plant growth examination.
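As a minimal, hedged sketch of the machine-learning route described above (not the pipeline of any cited study), the snippet below trains an SVM classifier on simple image-derived shape features; here the features and binary trait labels are randomly generated stand-ins for measurements such as seed area, perimeter and eccentricity.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in feature matrix: per-seed area, perimeter, eccentricity and mean
# intensity, as would be extracted from segmented seed images.
X = rng.normal(size=(200, 4))
# Hypothetical binary trait labels (e.g. spherical vs. ellipsoidal seeds).
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize the features, then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```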



Automated image analysis has emerged as a transformative tool in the field of legume phenotyping, offering a paradigm shift from traditional, labor-intensive methods to high-throughput, accurate and efficient approaches. As previously reported, a diverse array of legume species has been subjected to automated image analysis, demonstrating the versatility of this technology (Berry et al., 2018). One key application of automated image analysis lies in the realm of morphological trait assessment. Across various legume species, automated systems proved adept at capturing and quantifying morphological traits, ranging from leaf shape and size to stem architecture. The precision and efficiency of these analyses surpassed manual measurements, allowing for a more comprehensive understanding of plant morphology. This capability is particularly crucial in breeding programs, where rapid and accurate trait assessment accelerates the development of improved legume varieties (Wäldchen and Mäder, 2018). In addition to morphological traits, automated image analysis has shown significant promise in evaluating physiological parameters critical to understanding plant health and performance. The studies underscored the efficacy of image-based approaches in monitoring parameters such as chlorophyll content, stomatal conductance and water use efficiency. These physiological insights give plant breeders a deeper understanding of legume responses to environmental conditions, enabling targeted interventions and optimization of cultivation practices (Haworth et al., 2023). The uses and restrictions of imaging methods in plant phenotyping are listed in Table 1.
       

Table 1: Applications and limitations of imaging techniques for plant phenotyping in various conditions (Gano et al., 2024; Sarić et al., 2022).



The comparative analysis of automated systems for legume phenotyping provides valuable insights into the efficacy and performance of various methodologies, shedding light on their strengths, limitations and potential applications (Table 1). In this study, an exploration of the comparative landscape reveals noteworthy trends and considerations. First and foremost, the studies incorporated in this analysis exhibited a diverse array of performance metrics used to evaluate automated systems (Shoaib et al., 2023). However, the variability in reported metrics across studies necessitates careful interpretation of results and underscores the need for standardized reporting practices within the field. Several studies demonstrated remarkable accuracy in automated legume phenotyping, particularly in the assessment of morphological traits. Machine learning algorithms, such as SVM and random forests, exhibited high precision in classifying traits such as leaf shape and size. Deep learning approaches, especially CNNs, showed notable success in image recognition tasks, contributing to the accurate identification of intricate features within legume crops (Wang and Su, 2022).
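One low-cost step toward the standardized reporting called for here would be to publish the same small metric set for every model. The snippet below computes accuracy, precision, recall and F1 with scikit-learn; the ground-truth labels and predictions are hypothetical values used only for illustration.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical ground-truth trait labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")

print(f"accuracy={accuracy_score(y_true, y_pred):.2f} "
      f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```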
       
Despite these achievements, challenges were evident in achieving consistently high performance across all studies. Variability in environmental conditions, imaging techniques and dataset sizes influenced the performance of automated systems. The sensitivity of certain algorithms to factors such as lighting conditions and camera specifications highlighted the need for robustness testing and algorithm refinement. Benchmarking against traditional phenotyping methods provided a critical perspective on the advancements offered by automated systems. In direct comparisons, automated image analysis consistently outperformed manual measurements in terms of efficiency and scalability (Xu et al., 2021). The ability to rapidly process large datasets and extract detailed phenotypic information proved to be a significant advantage, especially in the context of high-throughput phenotyping programs. It is crucial to acknowledge the contextual nature of the comparative analysis (Choudhury et al., 2019).
               
As a recommendation, to advance the field of automated image analysis for legume phenotyping, future research should prioritize the standardization of performance metrics and foster interdisciplinary collaborations between plant scientists and computer vision experts. Methodological refinements, particularly in addressing algorithm sensitivity to environmental conditions, will enhance robustness. Additionally, efforts to develop user-friendly, open-source platforms and datasets will democratize access, fostering widespread adoption. This will accelerate the integration of automated systems into mainstream agricultural practices, ultimately supporting sustainable crop improvement strategies and global food security.
In conclusion, this study sheds light on the revolutionary path of automated image analysis in the field of legume phenotyping. The synthesis of a wide range of literature highlights the crucial roles of machine learning, deep learning and conventional computer vision techniques in surpassing traditional constraints. The remarkable success of these methods in precisely categorizing and measuring morphological characteristics, such as leaf size and canopy structure, represents a significant advance in scalability and effectiveness. This has important ramifications for crop improvement initiatives, enabling swift characterization of various legume species. Beyond morphology, the incorporation of automated technologies into legume phenotyping also extends to physiological indicators and the identification of stress and disease. The literature provides significant information on chlorophyll content, stomatal conductance and water use efficiency, which enhances our understanding of legume health and performance. Moreover, the ability of advanced algorithms to detect stress and diseases in legume crops at an early stage demonstrates the potential for sustainable and precise agricultural practices.

This work investigated the relationship between photosynthetic efficiency, measured by Fv/Fm, and non-photochemical quenching (NPQ), both of which serve as markers of plant physiological responses to environmental conditions.
Funding: No funding was received for this work.
Author contributions: The author contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
Ethical approval: Not applicable.
Originality: The author declares that all work is original and this manuscript has not been published in any other journal.
Conflict of interest: The author declares no conflict of interest.

  1. Balaska, V., Adamidou, Z., Vryzas, Z. and Gasteratos, A. (2023). Sustainable crop protection via robotics and artificial intelligence solutions. Machines. 11(8): 1-15. 

  2. Berry, J.C., Fahlgren, N., Pokorny, A.A., Bart, R.S. and Veley, K.M. (2018). An automated, high-throughput method for standardizing image color profiles to improve image-based plant phenotyping. PeerJ. 6: e5727.

  3. Bertolino, L.T., Caine, R.S. and Gray, J.E. (2019). Impact of stomatal density and morphology on water-use efficiency in a changing world. Frontiers in Plant Science. 10.

  4. Bouguettaya, A., Zarzour, H., Kechida, A. and Taberkit, A.M. (2022). Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Computing and Applications. 34(12): 9511-9536.

  5. Cervantes, E. and Martín Gómez, J.J. (2019). Seed shape description and quantification by comparison with geometric models. Horticulturae. 5(3): 60. https://doi.org/10.3390/horticulturae5030060.

  6. Chen, M., Tang, Y., Zou, X., Huang, K., Huang, Z., Zhou, H., Wang, C. and Lian, G. (2020). Three-dimensional perception of orchard banana central stock enhanced by adaptive multi-vision technology. Computers and Electronics in Agriculture. 174(5): 105508. 

  7. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. https://doi.org/10.18805/LRF-787.

  8. Das Choudhury, S., Samal, A. and Awada, T. (2019). Leveraging image analysis for high-throughput plant phenotyping. Frontiers in Plant Science. 10(4): 1-8.

  9. Elbasi, E., Zaki, C., Topcu, A.E., Abdelbaki, W., Zreikat, A.I., Cina, E., Shdefat, A. and Saker, L. (2023). Crop prediction model using machine learning algorithms. Applied Sciences (Switzerland). 13(16): 9288. https://doi.org/10.3390/app13169288.

  10. Firatligil-Durmus, E., Šárka, E., Bubník, Z., Schejbal, M. and Kadlec, P. (2010). Size properties of legume seeds of different varieties using image analysis. Journal of Food Engineering. 99(4): 445-451. 

  11. Gano, B., Bhadra, S., Vilbig, J.M., Ahmed, N., Sagan, V. and Shakoor, N. (2024). Drone based imaging sensors, techniques and applications in plant phenotyping for crop breeding: A comprehensive review. Plant Phenome Journal. 7(1). https://doi.org/10.1002/ppj2.20100.

  12. Haworth, M., Marino, G., Atzori, G., Fabbri, A., Daccache, A., Killi, D., Carli, A., Montesano, V., Conte, A. and Balestrini, R. (2023). Plant physiological analysis to overcome limitations to plant phenotyping. Plants. 12(23): 4015.

  13. Holzinger, A., Keiblinger, K., Holub, P., Zatloukal, K. and Müller, H. (2023). AI for life: Trends in artificial intelligence for biotechnology. New Biotechnology. 74(2): 16-24. 

  14. Kim, S.Y. and AlZubi, A.A. (2024). Blockchain and artificial intelligence for ensuring the authenticity of organic legume products in supply chains. Legume Research. https://doi.org/10.18805/LRF-786.

  15. Lee, U., Chang, S., Putra, G.A., Kim, H. and Kim, D.H. (2018). An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis. PLoS ONE. 1-17.

  16. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujarat, India. International Journal of Environmental Sciences. 9(1): 1-17.

  17. Margapuri, V., Courtney, C. and Neilsen, M. (2021). Image processing for high-throughput phenotyping of seeds. EPiC Series in Computing. 75: 69-79.

  18. Miao, T., Zhu, C., Xu, T., Yang, T., Li, N., Zhou, Y. and Deng, H. (2021). Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Computers and Electronics in Agriculture. 187. 

  19. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. https://doi.org/10.18805/IJAR.BF-1742.

  20. Porwal, S., Majid, M., Desai, S.C., Vaishnav, J. and Alam, S. (2024). Recent advances, challenges in applying artificial intelligence and deep learning in the manufacturing industry. Pacific Business Review (International). 16(7): 143-152.

  21. Sarić, R., Nguyen, V.D., Burge, T., Berkowitz, O., Trtílek, M., Whelan, J., Lewsey, M.G. and Custovic, E. (2022). Applications of hyperspectral imaging in plant phenotyping. Trends in Plant Science. 27(3): 301-315. https://doi.org/10.1016/j.tplants.2021.12.003.

  22. Shariff, A., Kangas, J., Coelho, L.P., Quinn, S. and Murphy, R.F. (2010). Automated image analysis for high-content screening and analysis. Journal of Biomolecular Screening. 15(7): 726-734. 

  23. Shoaib, M., Shah, B., El-Sappagh, S., Ali, A., Ullah, A., Alenezi, F., Gechev, T., Hussain, T. and Ali, F. (2023). An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science. 14(3): 1-22.

  24. Wäldchen, J. and Mäder, P. (2018). Plant species identification using computer vision techniques: A systematic literature review. Archives of Computational Methods in Engineering. 25(2).

  25. Wang, Y.H. and Su, W.H. (2022). Convolutional neural networks in computer vision for grain crop phenotyping: A review. Agronomy. 12(11): 2659. https://doi.org/10.3390/agronomy12112659.

  26. Warman, C., Sullivan, C.M., Preece, J., Buchanan, M.E., Vejlupkova, Z., Jaiswal, P. and Fowler, J.E. (2021). A cost-effective maize ear phenotyping platform enables rapid categorization and quantification of kernels. Plant Journal. 106(2): 566-579. 

  27. Wasik, S. and Pattinson, R. (2024). Artificial intelligence applications in fish classification and taxonomy: Advancing our understanding of aquatic biodiversity. FishTaxa. 31: 11-21.

  28. Xu, T., Yu, J., Yu, Y. and Wang, Y. (2018). A modelling and verification approach for soybean seed particles using the discrete element method. Advanced Powder Technology. 29(12): 3274-3290. 

  29. Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X. et al. (2021). Artificial intelligence: A powerful paradigm for scientific research. Innovation. 2(4): 100179. 

  30. Yang, S., Zheng, L., Gao, W., Wang, B., Hao, X. and Mi, J. (2020). An efficient processing approach for colored point cloud-based high-throughput seedling phenotype. Remote Sensing. 12(10): 1540.

  31. Yang, S., Zheng, L., He, P., Wu, T., Sun, S. and Wang, M. (2021). High-throughput soybean seeds phenotyping with convolutional neural networks and transfer learning. Plant Methods. 17(1): 1-17. 

  32. Yang, W., Feng, H., Zhang, X., Zhang, J., Doonan, J.H., Batchelor, W.D., Xiong, L. and Yan, J. (2020). Crop phenomics and high-throughput phenotyping: Past decades, current challenges and future perspectives. Molecular Plant. 13(2): 187-214. 

  33. Yu, F., Zhang, Q., Xiao, J., Ma, Y., Wang, M., Luan, R., Liu, X., Ping, Y., Nie, Y., Tao, Z. and Zhang, H. (2023). Progress in the application of cnn-based image classification and recognition in whole crop growth cycles. Remote Sensing. 15(12): 2988. https://doi.org/10.3390/rs15122988.
