
Legume Research

  • Chief Editor: J. S. Sandhu

  • Print ISSN 0250-5371

  • Online ISSN 0976-0571

  • NAAS Rating 6.80

  • SJR 0.32, CiteScore 0.906

  • Impact Factor 0.8 (2024)

Frequency:
Monthly (January, February, March, April, May, June, July, August, September, October, November and December)
Indexing Services:
BIOSIS Preview, ISI Citation Index, Biological Abstracts, Elsevier (Scopus and Embase), AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus

Detection and Classification of Soybean Wilting Across Progressive Stages using Convolutional Neural Network Method

Sobia Wassan1, Ok Hue Cho2,*, Salman A. AlQahtani3
1School of Equipment Engineering, Jiangsu Urban and Rural Construction Vocational College, Changzhou 213000, China.
2Department of Animation, Sangmyung University, 37, Hongjimun 2-gil, Jongno-gu, Seoul, Republic of Korea.
3New Emerging Technologies and 5G Network and Beyond Research Chair, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.
  • Submitted 17-04-2024

  • Accepted 02-07-2025

  • First Online 23-07-2025

  • DOI 10.18805/LRF-815

Background: Food security, one of the most prominent global issues, requires strategies that maximize plant productivity with due consideration to sustainability. Precision agricultural practices are promising in this regard and the integration of AI is rapidly changing the agricultural landscape. Quick identification of wilting allows farmers to take early remedial action. Early detection of plant wilting in a real-time scenario can avert huge food crop losses, but the task is demanding: images collected from open fields over large areas must be analyzed via various image processing techniques. Using a CNN model to identify wilting from early images enables a quick response to plant care and saves farmers time and effort. In this paper, a CNN trained on image data achieves high predictability in detecting wilting at early stages in soybean plants.

Methods: A well-defined dataset of 6704 pictures of soybean plants from agricultural fields is considered and allocated appropriately to train, test and validate the CNN model. During image preprocessing, resizing and rescaling of images is done to ensure consistency of image dimensions. Noise is eliminated from the images via a low-pass filtering method that preserves low-frequency information and the images are converted to grayscale. Using standard Python libraries, data augmentation is performed and 9158 images across all classes are arranged for the model. Performance evaluation metrics indicating accuracy percentage, precision percentage and recall percentage are estimated.

Result: The proposed CNN algorithm is first calibrated using a set of images from the dataset and then tested on an entirely different set of images not used earlier. The overall accuracy is 91%. The model promises unambiguous identification of wilting in soybean leaves by appropriately classifying images into five classes using a substantially ample dataset. Early identification in real time can be of utmost benefit to the agricultural community in terms of eradicating the causes and retaining the yield of the crop.

Legumes are essential for global food security. They provide protein, fiber and essential nutrients for humans and animals. Common legumes like chickpeas, lentils, beans and soybeans are low in fat and rich in micronutrients. These qualities make them a sustainable alternative to animal-based proteins (Belay et al., 2022). Global food security faces serious threats from pests, climate change and diseases. These factors reduce food production by 20-40%, causing annual losses of $220 billion from pest-related damage and $70 billion due to invasive insects (FAO, 2022). Smallholder farmers are particularly affected by climate change, declining pollinators and plant wilting.
       
Soybean is one of the most important legume crops. However, it is highly vulnerable to wilting during its growth. Bacterial, fungal and viral pathogens, as well as nematodes, can significantly reduce yields and threaten food security (Arias et al., 2013). These issues often go undetected in the early stages. Symptoms like leaf wilting are critical indicators of plant health but are frequently overlooked until they become severe (Zhou et al., 2020). Management strategies include using resistant seed varieties (Altieri, 2018) and adopting rapid diagnostic methods. Traditional diagnostic techniques are reliable but slow, labor-intensive and less effective for early detection. This has led to growing interest in advanced technologies like deep learning (DL) for detecting plant diseases.
       
Over the past decade, DL models have been used for continuous crop monitoring, enabling early disease detection and effective management (Salas and Argialas, 2022; Cho, 2024; Hai and Duong, 2024; Attri et al., 2023; Semara et al., 2024; Maltare et al., 2023; Bagga et al., 2024; AlZubi, 2023). CNNs, in particular, excel at handling large datasets, modeling complex systems and performing pattern recognition with high accuracy (Abade et al., 2021). Numerous studies have explored CNN-based approaches for disease detection. Saleem et al. (2020) highlighted the superiority of DL models in agricultural tasks using modified CNN architectures. Bierman et al. (2019) achieved high accuracy in detecting powdery mildew using GoogLeNet. Similarly, Maeda-Gutiérrez et al. (2020) demonstrated that DL models outperformed machine learning (ML) algorithms in detecting diseases in tomato and pepper crops. Priya et al. (2023) proposed a deep CNN model for classifying fruit and vegetable leaf diseases with 96% and 89% accuracy, respectively, but noted limitations due to architectural constraints. Bevers et al. (2022) used DenseNet to classify soybean diseases with 96.8% accuracy, though the model required large training datasets and substantial computational resources. Ashraf et al. (2023) tested a CNN model for wheat disease detection but faced limitations due to the small dataset used for validation.
       
While previous studies demonstrate the potential of DL models in plant wilting detection, they also highlight significant challenges. Most models either require large datasets and computational resources or face limitations in accuracy and generalizability due to small datasets or unoptimized architectures. Furthermore, existing methods for soybean wilting detection often focus on general identification rather than specific stages, such as varying levels of wilting, which are critical for timely intervention.
       
To address these gaps, this study proposes an architectural modification of the VGG16-based CNN algorithm for the classification of soybean wilting. The model specifically focuses on classifying soybean leaf wilting into five levels (0 to 4), enabling a more granular and actionable approach to wilting management. This research aims to contribute a scalable and accurate solution for soybean wilting detection, emphasizing early identification and intervention to reduce yield losses and enhance food security.
Image dataset acquisition
 
The experimental field was located in Hebei, China, where soybean plants were imaged under natural lighting conditions. Images of soybean plants were collected with the help of local farmers and agricultural experts. The dataset includes images of both healthy and wilted soybean plants, all taken from a single field with the same crop variety and at a uniform growth stage. Wilting in the plants was observed under natural arid conditions caused by weather stress. No artificial drought or stress was induced during the experiment.
       
Acquisition of images for the training of convolutional neural networks to identify soybean wilting requires a systematic and well-organized dataset. The dataset used for this work includes a large number of images, each tagged with a distinct, independently defined label. A total of 6704 photos were taken in soybean fields (Fig 1). These images cover five distinct stages of wilting, each labeled with a number ranging from 0 to 4. In the dataset, class 0 signifies healthy leaves without any wilting. Class 1 captures leaflets folding inward without a loss of firmness in the petioles or leaflets. Moving on to class 2, a slight loss of firmness is observed in the petioles of leaflets in the upper canopy, whereas class 3 indicates a moderate loss of firmness in the upper canopy. Finally, class 4 denotes a severe loss of firmness throughout the entire canopy.

Fig 1: Categories of wilting in soybeans according to their health status.


       
To categorize the images of healthy and wilted soybean plants, the data is divided into three parts: training, validation and testing. Using an 80:10:10 ratio, 80% of the images were allocated for training, 10% for validation during learning and the remaining 10% for testing the model's ability to generalize.
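The 80:10:10 allocation can be sketched as a simple shuffled split. This is a minimal illustration, not the authors' code; the function name and fixed seed are assumptions for reproducibility.

```python
import random

def split_dataset(items, seed=42):
    """Shuffle items (e.g. image paths) and split them 80:10:10
    into training, validation and test subsets."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# With the 6704 images used in this work (indices as placeholders):
train, val, test = split_dataset(range(6704))
```

Because 6704 is not divisible by 10, the split rounds down for the first two subsets and the leftover images fall into the test set.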
 
Image preprocessing (annotation and labeling)
 
Uniformity in image size is an important factor for reliable input data and optimal computational efficiency during model training. Image preprocessing involves two primary operations: resizing and rescaling. Resizing standardizes all images to a consistent dimension, because varying dimensions can significantly impact the performance of the models. Large image datasets in deep learning also bring challenges: higher power consumption, longer training times, the risk of fitting the model too closely to the training data and the possibility of running out of memory on the computer's graphics card (GPU). To overcome these challenges, the bicubic interpolation method, implemented with Python libraries such as OpenCV, is used to resize the images. It makes the images smaller without losing significant information, making them more manageable for the computer. In this work, the images were scaled to a standard size of 224 × 224 pixels.
       
Image filtering techniques, such as the median filter, were employed to eliminate various image noises. This process addresses focus issues and repairs undesirable portions captured during image acquisition. In addition, a low-pass filtering method attenuates high-frequency components while preserving low-frequency information. The images were then transformed into grayscale, the soybean plants were categorized based on their health conditions and each image was labeled with a class from 0 to 4.
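The denoising steps described above can be illustrated as follows. The paper implements them with OpenCV; this NumPy sketch merely shows the underlying operations (luminosity grayscale conversion and a 3 × 3 median filter) and is not the authors' implementation.

```python
import numpy as np

def preprocess(img_rgb):
    """Grayscale conversion followed by a 3x3 median filter,
    mirroring the denoising steps described in the text."""
    # Luminosity grayscale conversion (standard ITU-R weights).
    gray = (0.299 * img_rgb[..., 0]
            + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    # 3x3 median filter; edges handled by reflection padding.
    padded = np.pad(gray, 1, mode="reflect")
    h, w = gray.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)], axis=-1)
    return np.median(windows, axis=-1)
```

A single bright noise pixel surrounded by dark pixels is replaced by the neighborhood median, which is how the median filter removes salt-and-pepper noise.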
 
Data augmentation
 
Python libraries, particularly Keras, were employed for data augmentation during training of the CNN model. Each image fed into the network during training is generated from an original image; additional data is created from existing training samples to enlarge the training dataset.
       
In the process of augmentation, each class (from 0 to 4) exhibits a distinctive set of image statistics. For Class 0, the original dataset contains 448 images and an additional 159 images have been introduced through augmentation, resulting in a cumulative total of 607 images. In Class 1, the initial set consists of 894 images, supplemented by 358 augmented images, bringing the overall count to 1252 images. Similarly, Class 2 starts with 1340 original images and experiences an augmentation of 357 images, culminating in a total of 1697 images. In the case of Class 3, the original dataset comprises 1788 images, augmented by 291, yielding a total of 2079 images. Lastly, for Class 4, the initial set of 2234 images is expanded with 289 augmented images, resulting in a final count of 2523 images. Consequently, the number of images after augmentation across all classes is equal to 9158 images (Fig 2).
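The paper performs augmentation with Keras; as a minimal stand-in, the kind of geometric transforms typically used can be sketched with NumPy. The transforms chosen here (flips and a rotation) are illustrative assumptions, not the paper's exact augmentation recipe.

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of one image:
    horizontal flip, vertical flip and a 90-degree rotation."""
    return [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree counter-clockwise rotation
    ]
```

Applying such label-preserving transforms to each class is what lifts the per-class counts from the original totals to the augmented ones reported above.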

Fig 2: Total number of images after augmentation.


 
Libraries and software tools
 
To determine the most suitable tool for implementing the CNN algorithm for categorizing soybean wilting images, an analysis of available software tools and their associated libraries is conducted. For the proposed CNN model, the Python programming language was chosen and its TensorFlow, Keras, NumPy, Matplotlib and OpenCV libraries are utilized. These tools and libraries provide a reliable environment for the development and experimentation of the CNN algorithm, ensuring efficient image processing and analysis within the context of wilting image datasets.
 
Performance evaluation metrics
 
The performance of the detection and classification model is measured by the numbers of test samples correctly and incorrectly classified by the proposed model. Each classifier uses statistical measurements such as actual and predicted labels and the error rate. The performance evaluation metrics are defined as:

Accuracy
 
It is the ratio of correct predictions (true positives and true negatives) to the total number of predictions.

 
Precision
 
It measures the accuracy of positive predictions by dividing the number of correct positive outputs by the total number of predicted positive labels.

 
Recall
 
It expresses the ratio of true positive events to the total number of actual positive instances in the dataset.


F1-score
 
It signifies the balance between precision and recall, with 1 denoting perfect performance and 0 indicating total failure.
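In terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), the four definitions above correspond to the standard confusion-matrix formulas:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN}
\qquad
\mathrm{Precision} = \frac{TP}{TP + FP}
\qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}
\qquad
F_1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```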


System design
System architecture
 
The flow chart adopted for the implementation of the sequential CNN model for soybean wilting detection is presented in Fig 3.

Fig 3: Sketch for detection process of soybean wilting.


 
Training model
 
A multilayer CNN model has been developed to detect and isolate soybean wilt phases. It uses convolutional layers to extract features. Max-pooling layers are used for downsampling. Dense layers handle the final classification. The model aims to efficiently identify patterns and characteristics in soybean images to differentiate between healthy and wilted plants.
 
Input layer: The input layer of the CNN model processes RGB (Red, Green and Blue) images of 224 × 224 pixels for classification into the five wilting classes. This layer performs the necessary computations and transfers data to the first convolutional layer.
 
Convolutional layer: The main function of this layer is to extract important features from the input images. It slides kernels across the image dimensions, applying a convolution operation to extract relevant information.
       
An input image (A) and a kernel (K) are used to represent a 2D convolution operation as follows:

S(i, j) = Σm Σn A(i + m, j + n) K(m, n)

Where m and n stand for the kernel (K) coordinates and i and j for the image (A) coordinates.
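The operation above can be sketched directly in NumPy. This is an illustrative valid-mode implementation following the index convention in the equation, not the library routine the model actually uses.

```python
import numpy as np

def conv2d(A, K):
    """Compute S(i, j) = sum_m sum_n A(i+m, j+n) * K(m, n)
    for every position where the kernel fits inside the image."""
    kh, kw = K.shape
    out_h = A.shape[0] - kh + 1
    out_w = A.shape[1] - kw + 1
    S = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise product of the kernel with the image patch.
            S[i, j] = np.sum(A[i:i + kh, j:j + kw] * K)
    return S
```

In a trained network the kernel entries are learned weights; here any small matrix serves to demonstrate the sliding-window sum.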
 
Max-pooling layer: This layer creates a matrix from the maximum values selected within windows of the filtered image, reducing the input image's size. It also helps mitigate overfitting.
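The pooling step can be sketched as follows; a 2 × 2 window is assumed here for illustration (the model's own pooling configuration is described later in the architecture).

```python
import numpy as np

def max_pool(x, size=2):
    """Downsample a feature map by taking the maximum of each
    non-overlapping size x size window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size  # drop ragged edges, if any
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))
```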
 
Fully connected layer (FCL): The FCL, also referred to as a dense layer, flattens the output of the model's convolutional layers into a one-dimensional array. The final classifier contains five neurons.
 
Output layer: The output layer comprises five neurons with a softmax activation. The softmax activation function is chosen because it converts the outputs into class probabilities, which suits this multi-class classification task.
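For illustration, the softmax over the five output neurons can be computed as follows; the logit values are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: converts output-neuron
    activations into class probabilities that sum to 1."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Five logits, one per wilting class (0-4); values are illustrative.
probs = softmax([2.0, 1.0, 0.5, 0.2, -1.0])
```

The predicted wilting class is the index of the largest probability.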
 
Feature extraction by proposed model
 
A convolutional neural network (CNN) is created as a hybrid integration of two core modules: convolutional and pooling layers that extract complex patterns, followed by fully connected layers that perform classification (Fig 4). Together, these elements assign a probability to an input image, which determines its precise classification. One important parameter, the color feature, becomes significant when identifying soybean wilting. Color variance is a defining factor for crop wilting identification in convolutional approaches and this parameter is carefully evaluated because of its noticeable impact. Deep learning (DL) achieves remarkable accuracy by automatically learning discriminative features, in contrast to typical machine learning algorithms. The feature map is computed by:

Fig 4: Feature extraction process of sequential CNN architecture.



 
yi = b + Σk (wik × xk)

Where,
yi = ith output feature map.
b = Bias term.
wik = Filter weight applied to the kth input channel.
xk = kth channel of the input image.
 
CNN for classification with hyperparameters
 
A sequential CNN model is designed to classify healthy and wilted soybean leaves and performs well with image datasets. The batch size, which determines the number of training examples processed in each training iteration, is set to 64; a larger batch size can offer remarkable computational efficiency but may also increase memory requirements. The images were resized to 224 × 224 pixels with three color channels (RGB). The sequential model is constructed layer by layer for effective image classification. The initial layer is the VGG16 architecture (Functional), generating a 3D tensor output of dimensions (7, 7, 512) and contributing 14,714,688 parameters. After this, a Conv2D layer performs 2D spatial convolution, resulting in an output shape of (3, 3, 512) and 6,554,112 parameters. Subsequently, a MaxPooling2D layer executes max pooling, producing an output shape of (1, 1, 512). Next, a GlobalAveragePooling2D layer transforms this output into a 1D tensor with 512 dimensions. The model then consists of seven dense layers, each with its output shape and associated parameter count: dense_1 (512, 262,656), dense_2 (256, 131,328), dense_3 (128, 32,896), dense_4 (64, 8256), dense_5 (32, 2080), dense_6 (16, 528) and dense_7 (5, 85). These layers serve as the classifier, applying linear transformations and activation functions.
       
Three dropout layers are placed throughout, with no trainable parameters; they introduce randomness during training to reduce overfitting. Two Batch Normalization layers normalize and scale input data, contributing 1,024 and 256 parameters. The model comprises a total of 21,707,909 parameters, including 14,072,005 trainable and 7,635,904 non-trainable parameters.
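The dense-layer parameter counts quoted above follow from params = (n_in + 1) × n_out, i.e. one weight per input-output pair plus one bias per output neuron. A quick check, using the layer widths listed (with the 512-dimensional GlobalAveragePooling2D output feeding dense_1):

```python
def dense_params(n_in, n_out):
    """Parameters of a fully connected layer: weights plus biases."""
    return (n_in + 1) * n_out

# Widths of the seven dense layers described above, preceded by the
# 512-dimensional pooled input.
widths = [512, 512, 256, 128, 64, 32, 16, 5]
counts = [dense_params(a, b) for a, b in zip(widths, widths[1:])]
# counts reproduces 262,656; 131,328; 32,896; 8,256; 2,080; 528; 85
```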
The experiment is executed in Python 3.11.1, chosen as the programming language due to the extensive community involvement in image classification using TensorFlow with Keras. Multiple iterations are conducted to enhance the accuracy of the image classification. The testing process for the proposed CNN algorithm is conducted in two steps. The initial step is the training segment, in which the weights of the model are calibrated using a set of images. In the subsequent step, a testing segment is performed, in which the performance of the model is evaluated using previously unseen data.
       
Fig 5 and 6 display the accuracy and loss of training and validation as a function of epochs. After 100 epochs, the model demonstrates remarkable performance metrics. The training accuracy reached 95.21%, attesting to the model's ability to accurately classify instances within the training dataset. The training loss of 0.7239 signifies the model's efficacy in minimizing errors during the training process. Furthermore, the validation metrics shed light on the model's generalization to new, unseen data. The validation loss is recorded as 1.4143 and the validation accuracy of 81.09% emphasizes the effectiveness of the model in making precise predictions on instances not encountered during the training phase. In summary, these metrics show good results in both training and validation phases.

Fig 5: Training and validation accuracy as a function of epoch.



Fig 6: Training and validation loss as a function of epoch.


       
In Table 1, a classification report is presented. The precision, recall, F1-score and support metrics provide specific information about the classification model's performance for each class. For Class 0, the precision is 0.89, indicating that 89% of the instances predicted to be Class 0 are true, while the recall is 0.78, indicating that the model correctly recognizes 78% of the actual Class 0 instances. The F1-score, the harmonic mean of precision and recall, is 0.83. Class 1 has 0.86 precision, 0.91 recall and an F1-score of 0.89.

Table 1: Classification report.


       
Class 2 exhibits precision (0.99), recall (1.00) and F1-score (0.99), indicating the efficacy of the model. Class 3 shows an F1-score of 0.96, with a precision of 0.92 and a recall of 1.00. For Class 4, the F1-score, recall and precision are 0.95, 1.00 and 0.90, respectively. With an overall accuracy of 0.91, the model predicts the class labels correctly in 91% of the cases. With identical weights assigned to each class, the macro averages for precision, recall and F1-score across all classes are 0.91, 0.94 and 0.92. The weighted-average precision, recall and F1-score, which account for class sizes, are 0.91, 0.91 and 0.90, respectively. Together, these metrics yield a thorough evaluation of the model's performance, giving specific details about its accuracy, precision and recall across the classes.
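The macro averages quoted above are the unweighted means of the per-class scores from Table 1, which can be verified directly:

```python
# Per-class scores for classes 0-4, as reported in Table 1.
precision = [0.89, 0.86, 0.99, 0.92, 0.90]
recall    = [0.78, 0.91, 1.00, 1.00, 1.00]
f1        = [0.83, 0.89, 0.99, 0.96, 0.95]

def macro(scores):
    """Macro average: each class contributes equally,
    regardless of its support."""
    return round(sum(scores) / len(scores), 2)

macro_p, macro_r, macro_f1 = macro(precision), macro(recall), macro(f1)
# macro_p, macro_r, macro_f1 reproduce 0.91, 0.94 and 0.92
```

The weighted averages additionally require the per-class supports, which Table 1 lists but are not reproduced here.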
       
A nine-layer deep CNN model by Geetharamani et al. (2019) indicated a high accuracy of 96% but had large computational requirements; this limitation is not observed in the model developed in this paper. Priya et al. (2023) also proposed a deep CNN model with an accuracy of 96% for fruit leaves and 89% for vegetable leaves using a dataset of 16870 images; the dataset was large but the accuracies were limited, indicating a need to modify the CNN architecture. Bevers et al. (2022) described a high-performing automated classifier of soybean disease, a DenseNet 201-based CNN achieving an accuracy of 96.8%, but the work required large inputs of training data and computational resources. Another CNN model, for wheat crop disease detection by Ashraf et al. (2023), reports 93% accuracy, but its dataset was limited to just 450 images, which compromises the testing and validation of the model. Thus, the model proposed here proves to be computationally viable and accurate within the prescribed limits.
In this paper, a deep learning-based model for identifying soybean wilt is evaluated using a Convolutional Neural Network (CNN) technique. With the assistance of regional farmers and agricultural experts, data was gathered for this study, ensuring that it had undergone extensive testing and professional approval. Through architectural modification of VGG16, the CNN algorithm was created for the classification of soybean wilting. The CNN showed a strong 95.21% training accuracy and a low training loss of 0.7239 after 100 epochs. The effectiveness of the final model was demonstrated by its 91% overall accuracy. The experiment showed that the proposed CNN model provides accurate identification of soybean wilting with a relatively extensive dataset.

The study has a limitation in terms of the dataset chosen: it uses 6704 images across five categories. A larger, more exhaustive dataset could further raise the accuracy of the model. Although the primary focus of this study was wilting in soybean leaves, other leaf afflictions such as "Caterpillar" and "Diabrotica speciosa" should also be taken into account and further study can include diseases of the stem and roots. For a more comprehensive understanding of the susceptibility of soybean plants, future research needs to examine a broader spectrum of diseases that impact distinct sections of the plant. This study employs a fully connected layer classifier, a specialized technique for classifying disorders; to find the best method going forward, researchers can experiment with techniques such as decision trees, random forests and support vector machines. Contrasting different approaches can reveal the advantages and disadvantages of each and help optimize models for soybean wilting classification.
Ongoing Research Funding program-Research Chairs (ORF-RC-2025-5300), King Saud University, Riyadh, Saudi Arabia.
 
Authors' contributions
 
The authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
 
Data availability statement
 
The data is available on Mendeley database site.
 
Declarations
 
Author(s) declare that all works are original and this manuscript has not been published in any other journal.
The authors declare no conflicts of interest.

  1. Abade, A., Ferreira, P.A. and Vidal, F. de B. (2021). Plant diseases recognition on images using convolutional neural networks: A systematic review. Computers and Electronics in Agriculture. 185: 106125.

  2. Altieri, M. (2018). Agroecology: The Science of Sustainable Agriculture, 2nd ed. CRC Press.

  3. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. doi: 10.18805/IJAR.BF-1684

  4. Arias, M.M.D., Leandro, L.F. and Munkvold, G.P. (2013). Aggressiveness of fusarium species and impact of root infection on growth and yield of soybeans. Phytopathology. 103: 822-832.

  5. Ashraf, M., Abrar, M., Qadeer, N., Alshdadi, A.A., Sabbah, T. and Khan, M.A., (2023). A convolutional neural network model for wheat crop disease prediction. Computers, Materials and Continua. 75(2): 3867-3882.

  6. Attri, I., Awasthi, L.K., Sharma, T.P. and Rathee, P. (2023). A review of deep learning techniques used in agriculture. Ecological Informatics. 77: 102217.

  7. Bagga, T., Ansari, A.H., Akhter, S., Mittal, A. and Mittal, A. (2024). Understanding Indian consumers' propensity to purchase electric vehicles: An analysis of determining factors in environmentally sustainable transportation. International Journal of Environmental Sciences. 10(1): 1-13.

  8. Belay, A.J., Salau, A., Ashagrie, M., Haile, M. (2022). Development of a chickpea disease detection and classification model using deep learning. Informatics in Medicine Unlocked. 31: 100970.  http://dx.doi.org/10.1016/j.imu.2022.100970.

  9. Bevers, N., Sikora, E.J. and Hardy, N.B. (2022). Soybean disease identification using original field images and transfer learning with convolutional neural networks. Computers and Electronics in Agriculture. 203: 107449.

  10. Bierman, A., LaPlumm, T., Cadle-Davidson, L., Gadoury, D., Martinez, D., Sapkota, S. and Rea, M. (2019). A high-throughput phenotyping system using machine vision to quantify severity of grapevine powdery mildew. Plant Phenomics. 2019(1): 1-13. https://doi.org/10.34133/2019/9209727.

  11. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. https://doi.org/10.18805/LRF-787.

  12. Food and Agriculture Organization of the United Nations. (2022). The State of Food and Agriculture: Climate Change and the Role of Sustainable Agriculture. Retrieved from https://www.fao.org/publications/sofa/2022/en/

  13. Geetharamani, G., Pandian, A. (2019). Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Computers and Electrical Engineering. 76: 323-338.

  14. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  15. Maeda-Gutiérrez, V., Galván-Tejada, C.E., Zanella-Calzada, L.A., Celaya-Padilla, J.M., Galván-Tejada, J.I., Gamboa-Rosales, H., Luna-García, H., Magallanes-Quintanar, R., Méndez, C.A.G. and Olvera-Olvera, C.A. (2020). Comparison of convolutional neural network architectures for classification of tomato plant diseases. Applied Sciences. 10(4): 1245. https://doi.org/10.3390/app10041245.

  16. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujrat, India. International Journal of Environmental Sciences. 9(1): 1-17.

  17. Priya, K.P., Vaishnavi, T., Pavithra, T., Sivaranjani, R., Reethika, A., Kalyan, G.R. (2023). Optimized plant disease prediction using CNN and fertilizer recommendation engine. 4th International Conference on Electronics and Sustainable Communication Systems (ICESC). 1660-1665.

  18. Salas, E. and Argialas, D. (2022). Automatic identification of marine geomorphologic features using convolutional neural networks in seafloor digital elevation models: Segmentation of DEM for marine geomorphologic feature mapping with deep learning algorithms. SETN '22: Proceedings of the 12th Hellenic Conference on Artificial Intelligence. 25: 1-8. https://doi.org/10.1145/3549737.3549766.

  19. Saleem, M.H., Potgieter, J. and Arif, K.M. (2020). Plant disease classification: A comparative evaluation of convolutional neural networks and deep learning optimizers. Plants. 9(10): 1319. https://doi.org/10.3390/plants9101319.

  20. Semara, I.M.T., Sunarta, I.N., Antara, M., Arida, I.N.S.  and Wirawan, P.E. (2024). Tourism Sites and Environmental Reservation. International Journal of Environmental Sciences. 10(1): 44-55.

  21. Zhou, J., Zhou, J., Ye, H., Ali, M.L., Nguyen, H.T. and Chen, P. (2020). Classification of soybean leaf wilting due to drought stress using UAV-based imagery. Computers and Electronics in Agriculture. 175: 105576.
