Legume Research

  • Chief Editor: J. S. Sandhu

  • Print ISSN 0250-5371

  • Online ISSN 0976-0571

  • NAAS Rating 6.80

  • SJR 0.391

  • Impact Factor 0.8 (2024)

Frequency:
Monthly (January, February, March, April, May, June, July, August, September, October, November and December)
Indexing Services:
BIOSIS Preview, ISI Citation Index, Biological Abstracts, Elsevier (Scopus and Embase), AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus
Legume Research, Volume 47, Issue 10 (October 2024): 1723-1729

Detection and Classification of Wilting in Soybean Crop using Cutting-edge Deep Learning Techniques

Myung Hwan Na1, In Seop Na2,*
1Department of Mathematics and Statistics, Chonnam National University, Republic of Korea.
2Division of Culture Contents, Chonnam National University, Republic of Korea.
  • Submitted 25-01-2024

  • Accepted 10-04-2024

  • First Online 02-05-2024

  • doi 10.18805/LRF-797

Cite article:- Na Hwan Myung, Na Seop In (2024). Detection and Classification of Wilting in Soybean Crop using Cutting-edge Deep Learning Techniques. Legume Research. 47(10): 1723-1729. doi: 10.18805/LRF-797.

Background: This paper employs deep learning in the classification of soybean wilting, a plant health indicator affected by external pressures, using a Convolutional Neural Network (CNN) with a pre-trained model. It highlights the promise of deep learning in agriculture by examining the relevance of wilting, evolution in the agricultural sector and applications in crop wellness monitoring.

Methods: A CNN is used in the study to classify soybean withering, with special attention to the VGG16 pre-trained model. Deep learning’s ability to interpret complex data patterns is harnessed for intelligent and accurate wilting detection. A smart detection system tailored for soybean wilting is developed, incorporating recent advancements and addressing associated challenges.

Result: The CNN model, notably VGG16, achieves 76% overall accuracy in distinguishing healthy and wilted soybean leaves, signifying a transformative shift in soybean crop health management. The approach offers a precise, efficient and sustainable solution supported by state-of-the-art CNN technology, advancing soybean cultivation practices.

The numerous challenges involved in crop management in today’s dynamic agricultural landscape demand the integration of advanced technologies. Soybean, one of the world’s main crops, is subject to a variety of stresses that can strongly affect crop productivity and health. Among these, wilting is one of the most important signs of plant stress and physiological imbalance. The ability to accurately and promptly identify wilting in soybean leaves holds the key to implementing targeted interventions, optimizing resource usage and ultimately ensuring sustainable agricultural practices. Because of water shortages, drought stress can have a significant impact on soybean yield (Deshmukh et al., 2014). Drought stress lowers the quantity of seed per soybean pod in the early stages of seed filling and reduces seed weight later in the process (Desclaux et al., 2000). There is a pressing need to develop soybean cultivars with drought tolerance to sustain crop productivity under drought conditions (Comas et al., 2013).

The core of this work is the application of state-of-the-art deep learning techniques, a branch of artificial intelligence distinguished by its capacity to learn autonomously and recognize complex patterns in large datasets. In contrast to standard methods, deep learning models can capture extensive relationships, which allows them to extract subtle features that are frequently undetectable to human observers. With its smart and sophisticated approach to wilting detection, this technology has the potential to transform the soybean production industry. Several methods have been proposed by researchers to reliably identify and categorize plant infections. Some rely on conventional image processing methods that combine manually engineered feature extraction and segmentation (Daniya et al., 2022). Dubey et al. (2012) suggested a K-means clustering technique to segment the infected leaf portion, with a multi-class support vector machine (SVM) used for the final classification.

Convolutional neural networks (CNNs) have drawn considerable interest lately because of their capacity to extract intricate low-level characteristics from images and perform identification and classification tasks. As CNNs produce better results, they are recommended to replace conventional techniques in automated plant disease detection (Karthik et al., 2019). Barbedo (2018) suggested a CNN-based prediction model for paddy plant image processing and categorization. Additionally, Vardhini et al. (2020) employed a CNN to identify diseases in rice fields. Convolutional neural networks with four to six layers are typically used by researchers to classify various plant species. Mohanty et al. (2016) classified, recognized and segmented many plant diseases using a CNN and a transfer learning technique. The datasets employed in these studies are not very diverse, even though many other studies have been conducted with CNNs and improved results have been reported (Panigrahi et al., 2020).

The development of a smart, deep learning-based system holds the potential to serve as a template for addressing similar challenges across diverse crops and agricultural contexts. The knowledge gained from this exploration not only contributes to soybean crop management but also paves the way for broader applications in the domain of smart agriculture. The convergence of agriculture and cutting-edge deep learning technologies in the pursuit of advancing soybean wilting classification signifies a pivotal moment in the evolution of modern farming practices. Through the utilization of artificial intelligence, this study aims to enhance crop health, empower farmers and make a positive impact on the sustainable and effective future of worldwide agriculture.
 
Related work
 
Improved crop management and reducing the negative effects of stressors on agricultural productivity are the main goals of the current wave of research at the intersection of agriculture and artificial intelligence, especially deep learning. Over the past 20 years, soybean breeders have worked to select plants that exhibit the slow-wilting characteristic in drought-stressed environments by grading canopy wilting through visual observations (Carter et al., 2016; Kim and AlZubi, 2024). But in breeding programs that require many crossings and the screening of thousands of progenies, it is not possible to finish scoring canopy wilting on thousands of breeding progeny rows in a single day. Moreover, visual evaluations may be biased or prone to human error, which can lower a breeding program’s ability to select the best genotypes (Bagherzadi, 2017). The aim of this study is to provide a comprehensive overview of the literature on smart detection with deep learning technologies for the progress of soybean wilting categorization. However, because of the high computational cost of training, deep CNN layers are challenging to implement. Several scholars have developed methods based on transfer learning to address these problems (Tan et al., 2018; Andrew et al., 2019; Andrew-Onesimu and Karthikeyan, 2020; Mhathesh et al., 2021; Maria et al., 2022; Kusuma et al., 2022). Popular models for transfer learning include Inception, DenseNet, ResNet and VGG-16 (Too et al., 2019; Min et al., 2024). These models are trained on multi-class data from the ImageNet dataset. Since image features such as edges and contours are shared across datasets, these models can be fine-tuned on any kind of dataset. Therefore, the transfer learning strategy has been determined to be the most appropriate and reliable approach for image classification (Hussain et al., 2018). A CNN was proposed by Jadhav et al. (2021) to identify plant diseases.
With this method, diseases in soybean plants were identified using pre-trained CNN models. Better results were obtained when pre-trained transfer learning techniques, including AlexNet and GoogleNet, were used in the tests; nonetheless, the model lagged behind in classifying diverse data. In their study, Abbas et al. (2021) suggested using conditional generative adversarial networks to create a library of synthetic images of tomato plant leaves. Real-time data capture and collection, previously costly, time-consuming and arduous, may now be accomplished with the help of generative networks. Anh et al. (2021) found that a pre-trained MobileNet CNN model could effectively perform multi-leaf classification on a benchmark dataset, achieving a dependable accuracy of 96.58%. Furthermore, Kabir et al. (2021) assert that their study is the first to use a multi-label CNN to classify 28 different classes of plant diseases. The multi-label CNN was proposed for classifying multiple plant diseases using transfer learning approaches, including DenseNet, Inception, Xception, ResNet, VGG and MobileNet.

With the world population growing and environmental changes occurring quickly, crops are becoming increasingly important to guarantee the long-term health and welfare of people and animals by providing sufficient and wholesome food supplies. Soybean varieties are especially vulnerable to the negative impacts of weather and water stress, which cause significant physiological changes in the plants. This study aims to identify and evaluate water stress levels in soybean crops, which is an important task. The study offers a contemporary and practical substitute for conventional manual surveillance techniques by combining a mechanized strategy with Unmanned Aerial Vehicles (UAVs) to increase efficiency and decrease time requirements. This approach promises to provide timely and accurate insights into the water stress levels of soybean crops, supporting more accurate and proactive agricultural management practices.
 
Dataset description

For deep learning models to handle data effectively, large datasets are necessary. The data for this study were collected with the help of agricultural experts. There are 1275 photos of soybeans taken in the fields. The pictures show five distinct stages of wilting in soybean plants and each image is annotated with a number between 0 and 4. In this dataset, class 0 denotes healthy leaves that are not wilting; class 1 denotes leaflets folding inward at the secondary pulvinus without any turgor loss in the petioles or leaflets; class 2 denotes slight turgor loss in the petioles or leaflets in the upper canopy; class 3 denotes moderate turgor loss in the upper canopy; and class 4 denotes severe turgor loss throughout the canopy. Fig 1 and Table 1 provide a classification of the dataset. In this study, the entire dataset was divided into training, testing and validation sets using an 80:10:10 ratio.
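The 80:10:10 split described above can be sketched as follows; the filenames and label pattern are hypothetical, and only the ratio logic and dataset size (1275 images, classes 0-4) come from the text:

```python
import random

def split_dataset(items, train=0.8, val=0.1, test=0.1, seed=42):
    """Shuffle and split a list of (image_path, label) pairs 80:10:10."""
    assert abs(train + val + test - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 1275 annotated soybean images, labels 0 (healthy) to 4 (severe wilting);
# the cyclic labels here are placeholders for illustration only.
dataset = [(f"img_{i:04d}.jpg", i % 5) for i in range(1275)]
train_set, val_set, test_set = split_dataset(dataset)
print(len(train_set), len(val_set), len(test_set))  # 1020 127 128
```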

Fig 1: Images from the dataset consisting of Soybean plants with healthy and 4 wilting classes.



Table 1: Images classification detail.


 
State-of-the-art algorithm
 
In this section, the architecture of the custom deep learning model is discussed to detect the wilting of soybean plants.
 
Image pre-processing and labeling
 
Image pre-processing played a crucial role in preparing the raw images for training a custom deep-learning classifier. These pre-processing steps are required because different sources of photos can have images with differing dimensions. Standardizing the size of all photos becomes essential to ensure reliability in the input data and to maximize the computational efficiency during the training phase. Resizing and rescaling are the two main operations in image pre-processing. To ensure that all of the photographs are suitable for the model’s input requirements, resizing is necessary to get them all to the same dimension. This stage is crucial because different dimensions may adversely affect the performance of neural network models, which normally require input images to be a certain size.

In this study, the photos were first resized to 224 × 224 pixels, followed by conversion to grayscale. Although VGG16 normally accepts RGB images with three channels, the images were converted to grayscale to simplify the input and increase computational efficiency: grayscale images have only one channel, so they require less memory and less processing power. This method can also help the model concentrate on the important patterns and features in the images and fulfill task-specific requirements. The next step was to classify the soybean plant pictures as healthy or wilting, after which each image was labelled with a class from 0 to 4.
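A minimal, dependency-light sketch of this pre-processing pipeline follows. The channel-replication step at the end is our assumption (VGG16 expects three channels while the text describes grayscale conversion), and the nearest-neighbour resize stands in for whatever interpolation the authors used:

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an (H, W, 3) RGB array to (H, W) using luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour resize of a 2-D array to the target size."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

def preprocess(rgb):
    """Resize to 224x224, convert to grayscale, rescale to [0, 1] and
    replicate the single channel so the array fits VGG16's 3-channel
    input (the replication is our assumption, not stated in the paper)."""
    gray = resize_nearest(to_grayscale(np.asarray(rgb, dtype=np.float64)))
    gray /= 255.0
    return np.stack([gray] * 3, axis=-1)

img = np.random.randint(0, 256, size=(480, 640, 3))
x = preprocess(img)
print(x.shape)  # (224, 224, 3)
```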
 
Custom deep-learning model implementation
 
A custom deep-learning model is implemented for the classification of soybean crop images based on wilting. The model architecture is a combination of the VGG16 pre-trained model, followed by additional convolutional, pooling and fully connected layers. The details of the model architecture are as follows:
• The VGG16 base model is initialized with pre-trained ImageNet weights and used as the base model for feature extraction. The model’s specified input shape is (224, 224, 3), indicating that it expects input images with dimensions of 224 pixels in height, 224 pixels in width and 3 RGB color channels.
• The model consists of a MaxPooling2D layer (3×3) after the Conv2D layer (512 filters, kernel size 5). Spatial dimensions are reduced by a GlobalAveragePooling2D layer. Dense layers (512, 256, 128, 64, 32) with dropout regularization and ‘elu’ activation are incorporated. Activations are normalized by batch normalization following a few Dense layers. The last layer is Dense (5 units) with softmax activation for multi-class classification (Fig 2). The model is compiled with the sparse categorical accuracy metric, sparse categorical cross-entropy loss and the Adam optimizer.
• Data augmentation is performed on a subset of the training data to increase the diversity of the dataset. Augmentation includes horizontal flipping, adjusting saturation, brightness and rotating images.
• During each iteration of training, the model processes and updates its weights based on a batch of 64 samples from the training dataset and the number of epochs is 75. To add non-linearity and help the model recognize intricate patterns in the data, the “elu” activation function is employed in neural network layers (Fig 3).
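The layer sequence above can be assembled in Keras as sketched below. Hyperparameters the text does not state (the dropout rate, pooling padding, whether the VGG16 base is frozen, augmentation ranges) are our assumptions, and `weights=None` stands in for the paper’s ImageNet initialization only to keep the sketch self-contained offline:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_model(num_classes=5, input_shape=(224, 224, 3)):
    # VGG16 base for feature extraction; the study uses weights="imagenet",
    # omitted here to avoid a download. Freezing the base is an assumption.
    base = VGG16(include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False

    # Conv2D (512 filters, kernel size 5), 3x3 max pooling, then global
    # average pooling to collapse the spatial dimensions.
    x = layers.Conv2D(512, kernel_size=5, padding="same", activation="elu")(base.output)
    x = layers.MaxPooling2D(pool_size=3, padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Dense stack 512 -> 32 with 'elu', dropout and batch normalization;
    # the 0.3 dropout rate is an assumption (not stated in the paper).
    for units in (512, 256, 128, 64, 32):
        x = layers.Dense(units, activation="elu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.3)(x)

    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["sparse_categorical_accuracy"])
    return model

def augment(image):
    # Augmentation applied to a subset of the training data: horizontal
    # flip, saturation, brightness and rotation (ranges are assumptions).
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_saturation(image, 0.8, 1.2)
    image = tf.image.random_brightness(image, 0.1)
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)
    return tf.image.rot90(image, k)

model = build_model()
print(model.output_shape)  # (None, 5)
# Training as described: model.fit(train_ds, epochs=75, batch_size=64, ...)
```

A usage note: with 75 epochs and batch size 64 on roughly 1020 training images, each epoch is only 16 update steps, which is consistent with the paper’s reliance on the pre-trained base for most of the feature extraction.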

Fig 2: Class distribution before and after augmentation.



Fig 3: Custom deep-learning model architecture.


 
Performance evaluation parameters
 
Accuracy
 
The accuracy measure, which is derived as the ratio of the total number of accurate predictions (including true positives and true negatives) to the total number of predictions, is one of the most widely used metrics for evaluating model performance. Accuracy not only shows how well the model performed overall, but it also shows whether or not the model was trained correctly. The following formula is used to calculate the accuracy:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
 Where:
TP= True positive.
TN= True negative.
FP= False positive.
FN= False negative.
 
Precision
 
This parameter shows the accuracy of the model’s positive predictions. It is calculated by dividing the number of correctly predicted positive outcomes by the total number of predicted positive labels:

Precision = TP / (TP + FP)
Recall or sensitivity
 
The completeness of the classifier is measured by this metric. It is defined as the ratio of the number of positive observations that were accurately predicted to the total number of positive instances in the dataset:

Recall = TP / (TP + FN)
F1-score
 
The F1-score ranges from 0 to 1, with 0 denoting total failure and 1 representing flawless performance. It is the harmonic mean of precision and recall and is expressed as:

F1-score = 2 × (Precision × Recall) / (Precision + Recall)
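The four metrics above reduce to simple confusion-matrix arithmetic; a minimal sketch with hypothetical counts for a single class:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts, for illustration only.
acc, prec, rec, f1 = confusion_metrics(tp=65, tn=900, fp=3, fn=30)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.967 0.956 0.684 0.798
```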
 
Artificial intelligence has benefited from the application of both new and traditional deep learning (DL) and machine learning (ML) approaches, which have produced new ways to interpret data collected in real-world scenarios (Lee et al., 2018). These methods detect and identify biotic and abiotic stressors in plants, then forecast yield loss based on those metrics (Singh et al., 2018). Image collection, storage, preprocessing, annotation, classification and trait extraction are common workflows for deep learning approaches. In this section, the study outcomes are expressed in terms of training loss and accuracy for each fold, along with the presentation of the confusion matrix and classification reports. The classification reports include precision, recall and F1-score metrics. Fig 4 (a, b) compares the training and validation loss and accuracy of the model.

Fig 4: Training and validation accuracy and loss of the model.



The training phase achieves an accuracy rate of 95.75% after 75 epochs, while the accuracy observed during the validation phase is lower, at 75.62%. This disparity implies that although the model performs well on the training data, it has difficulty generalizing to unseen validation data. The observed trend suggests that accuracy would likely improve if the number of epochs were increased, though it is important to recognize that the length of the training phase grows in direct proportion to the number of epochs. The training loss after 75 epochs is 71.24%.
 
Classification report
 
The classification report offers a thorough evaluation of the model’s performance over five classes (0 to 4). Class 0 has a particularly high precision of 0.95, meaning that 95% of the time the model predicts Class 0 correctly. The recall, at 0.68, is significantly lower, indicating that some Class 0 occurrences may be missed by the model. The F1-Score, which is computed to be 0.77, strikes a balance between recall and precision. The precision, recall and F1-Score pattern is examined for every class, offering valuable information about the model’s capacity to accurately categorize cases within each category (Table 2).

Table 2: Classification accuracies, precision, F1 score and support.



The macro-average combines the unweighted means of precision, recall and F1-score across all classes, treating every class equally regardless of its size; the macro-average precision is 0.77. The weighted average, which weights each class’s contribution by its support, yields a precision of 0.81, indicating marginally better overall performance once class sizes are taken into account. Taken together, these metrics provide a thorough assessment of the model’s accuracy, precision, recall and F1-score, offering insight for interpreting and improving the classification results. The model produces few false positives or false negatives and has well-balanced precision and recall, so the majority of images are correctly classified. However, the model performed slightly better in certain classes than others: compared to classes 3 and 4, class 1 exhibits slightly lower precision. This may be caused by several factors, including the number of photos in each class or the inherent difficulty of distinguishing between some groups. Support refers to the number of actual instances of each class in the dataset; it provides a numerical representation of the class distribution. Class 0 in the classification report has a support of 95, indicating that the dataset contains 95 instances of this class. Likewise, the supports for classes 1, 2, 3 and 4 are 47, 36, 36 and 38, respectively. The overall accuracy of the model is 76%. Finally, the CNN model is a powerful tool for image categorization of plant diseases.
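The distinction between the macro and weighted averages can be sketched directly. The supports below are taken from Table 2; the per-class precision values other than class 0’s 0.95 are placeholders for illustration:

```python
def macro_and_weighted(per_class, supports):
    """Macro (unweighted) and support-weighted averages of a per-class metric."""
    macro = sum(per_class) / len(per_class)
    total = sum(supports)
    weighted = sum(m * s for m, s in zip(per_class, supports)) / total
    return macro, weighted

# Supports for classes 0-4 from Table 2; precisions are placeholders
# except class 0's 0.95, which is reported in the text.
supports = [95, 47, 36, 36, 38]
precisions = [0.95, 0.70, 0.75, 0.78, 0.80]
macro, weighted = macro_and_weighted(precisions, supports)
print(round(macro, 3), round(weighted, 3))  # 0.796 0.828
```

Note how the weighted average exceeds the macro average here because the largest class (support 95) also has the highest precision, mirroring the 0.77 vs. 0.81 pattern reported above.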

Similarly, the classification of drought-induced soybean leaf wilting using UAV-based imagery was reported by Zhou et al. (2020); overall classification accuracy was 76% for an SVM model that used all seven image features as predictors. Sarkar et al. (2023) used UAV imagery and machine learning to evaluate soybean lodging; the overall accuracies of RF, ANN, KNN and XGBoost were 0.80, 0.79, 0.77 and 0.73, respectively. These results confirm that diseases and stress in plants can be identified using deep learning methods.
In summary, the developed deep learning model uses a hybrid architecture with pre-trained VGG16 layers to efficiently classify images of soybean crops based on wilting. The model’s performance on unseen validation data indicates that better generalization is required, even though it achieved a high accuracy rate during training. The selected architecture supports efficient feature extraction by combining layers such as Conv2D, MaxPooling2D and GlobalAveragePooling2D. Data augmentation enhances the model’s ability to handle variations in soybean crop images. A thorough evaluation that takes accuracy, recall and F1-score into account highlights the model’s strengths and potential areas for improvement, offering useful guidance for future work in the multi-class classification setting. Future research can involve a wider range of diseases that may impact various soybean parts. Hybrid machine learning approaches such as support vector machines, decision trees and random forests can also be used in the future to compare and improve the model’s overall performance.
This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) through the Open Field Smart Agriculture Technology Short-term Advancement Program.
This work was funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA)(32204003).
Both authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
The database generated and /or analyzed during the current study are not publicly available due to privacy but are available from the corresponding author on reasonable request.
Author(s) declare that all works are original and this manuscript has not been published in any other journal.
The authors declare that they have no conflicts of interest to report regarding the present study.

  1. Abbas, A., Jain, S., Gour, M., Vankudothu, S. (2021).  Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 187: 106279. https://doi.org/10.1016/j.compag.2021.106279. 

  2. Andrew, J., Fiona, R. and Caleb, A.H. (2019). Comparative study of various deep convolutional neural networks in the early prediction of cancer. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS) (884-890). IEEE.

  3. Andrew-Onesimu, J., Karthikeyan, J. (2020). An efficient privacy-preserving deep learning scheme for medical image analysis. Journal of Information Technology Management. 12(Special Issue): 50-67. https://doi.org/10.22059/jitm.2020.79191.

  4. Anh, P.T. and Duc, H.T.M. (2021). A benchmark of deep learning models for multi-leaf diseases for edge devices. In 2021 International Conference on Advanced Technologies for Communications (ATC) IEEE.  pp 318-323. https://ieeexplore.ieee.org/ document/9598196. 

  5. Bagherzadi, L. (2017).  Assessing Physiological Mechanisms to Elucidate the Slow-wilting Trait in Soybean Genotypes. Doctoral Dissertation.

  6. Barbedo, J.G.A. (2018). Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng. 172: 84-91. https://doi.org/10.1016/j.biosystemseng.2018.05.013.

  7. Carter Jr, T.E., Todd, S.M. and Gillen, A.M. (2016). Registration of ‘USDA N8002’soybean cultivar with high yield and abiotic stress resistance traits. Journal of Plant Registrations. 10(3): 238-245. https://doi.org/10.3198/jpr2015.09.0057crc.

  8. Comas, L.H., Becker, S.R., Cruz, V.M.V., Byrne, P.F. and Dierig, D.A. (2013). Root traits contributing to plant productivity under drought. Frontiers in Plant Science. 4: 442. doi: 10.3389/fpls.2013.00442.

  9. Daniya, T., Vidyadhari, C. and Aluri, S. (2022). Rice plant disease detection using sensing recognition strategy based on artificial intelligence. Journal of Mobile Multimedia. 18(03): 705-722. https://doi.org/10.13052/jmm1550-4646.18311.

  10. Desclaux, D., Huynh, T.T., Roumet, P. (2000). Identification of soybean plant characteristics that indicate the timing of drought stress. Crop Science. 40(3): 716-722.

  11. Deshmukh, R., Sonah, H., Patil, G., Chen, W., Prince, S.J., Mutava, R.N., Vuong, T.D., Valliyodan, B. and Nguyen, H.T. (2014). Integrating omic approaches for abiotic stress tolerance in soybean. Frontiers in Plant Science. 5: 244. https://doi.org/10.3389/fpls.2014.00244.

  12. Dubey, S.R., Jalal, A.S. (2012). Adapted Approach for Fruit Disease Identification using Images. International Journal of Computer Vision and Image Processing (IJCVIP). 2(3): 44-58. http://doi.org/10.4018/ijcvip.2012070104. 

  13. Hussain, M., Bird, J.J., Faria, D.R. (2018). A study on CNN transfer learning for image classification. In Proceedings of the UK Workshop on Computational Intelligence, Nottingham, UK. 840: 191-202.

  14. Jadhav, S.B., Udupi, V.R., Patil, S.B. (2021). Identification of plant diseases using convolutional neural networks. Int. J. Inf. Technol. 13: 2461-2470. https://doi.org/10.1007/s41870-020-00437-5.

  15. Kabir, M.M., Ohi, A.Q., Mridha, M.F. (2021). A multi-plant disease diagnosis method using a convolutional neural network. Computer Vision and Machine Learning in Agriculture. 99-111. https://doi.org/10.48550/arXiv.2011.05151.

  16. Karthik, R., Hariharan, M., Anand,S., Mathikshara, P., Johnson, A., Menaka, R. (2020). Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. 86: 105933. https://doi.org/10.1016/j.asoc.2019.105933.

  17. Kim, T.H. and AlZubi, A.A. (2024). AI enhanced precision irrigation in legume farming: optimizing water use efficiency. Legume Research. https://doi.org/10.18805/LRF-791.

  18. Kusuma, K.B.M., Arora, M.,  AlZubi, A.A., Verma, A. and Andrze, S. (2022). Application of Blockchain and Internet of Things (IoT) in the Food and Beverage Industry. Pacific Business Review (International). 15(10): 50-59.

  19. Lee, U., Chang, S., Putra, G.A., Kim, H. and Kim, D.H. (2018). An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis. PloS one. 13(4): e0196615. https://doi.org/10.1371/journal.pone.0196615.

  20. Maria, S.K., Taki, S.S., Mia, J., Biswas, A.A., Majumder, A., Hasan, F. (2022). Cauliflower disease recognition using machine learning and transfer learning. Smart Systems: Innovations in Computing; Springer: Singapore. pp. 359-375.

  21. Mhathesh, T.S.R. andrew, J., Martin Sagayam, K. and Henesey, L. (2021). A 3D Convolutional Neural Network for Bacterial Image Classification. In Intelligence in Big Data Technologies -Beyond the Hype: Proceedings of ICBDCC 2019  Springer Singapore. pp. 419-431.

  22. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. https://doi.org/10.18805/ IJAR.BF-1742.

  23. Mohanty, S.P., Hughes, D.P., Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science. 7. 1419. https://doi.org/10.3389/fpls.2016.01419.

  24. Panigrahi, K.P., Sahoo, A.K. and Das, H. (2020). A CNN approach for corn leaves disease detection to support digital agricultural system. In 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI) (48184) IEEE. pp. 678-683.

  25. Sarkar, S., Zhou, J., Scaboo, A., Zhou, J., Aloysius, N. and Lim, T.T. (2023). Assessment of Soybean Lodging Using UAV Imagery and Machine Learning. Plants. 12(16): 2893. https://doi.org/10.3390/plants12162893.

  26. Singh, A.K., Ganapathy Subramanian, B., Sarkar, S. and Singh, A. (2018). Deep learning for plant stress phenotyping: Trends and future perspectives. Trends in Plant Science. 23(10): 883-898. https://doi.org/10.1016/j.tplants.2018.07.004.

  27. Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C. and Liu, C. (2018). A survey on deep transfer learning. In Artificial Neural Networks and Machine Learning-ICANN. 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III 27 Springer International Publishing. pp. 270-279.

  28. Too, E.C., Yujian, L., Njuki, S. and Yingchun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture. 161: 272-279. https://doi.org/10.1016/j.compag.2018.03.032.

  29. Vardhini, P.H., Asritha, S., Devi, Y.S. (2020). Efficient disease detection of paddy crop using CNN. In 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE) IEEE. pp. 116-119. doi: 10.1109/ICSTCEE49637.2020.9276775.

  30. Zhou, J., Ye, H., Ali, L., Nguyen, H.T., Chen, P. (2020). Classification of soybean leaf wilting due to drought stress using UAV-based imagery. Computers and Electronics in Agriculture. 175: 105576. https://doi.org/10.1016/j.compag.2020.105576.
