
Deep Learning-driven Precision Agriculture: Application of ResNet-20 for the Early Detection and Classification of Soybean Insect Pests under Real-world Constraints

Ok Hue Cho1,*
1Department of Animation, Sangmyung University, 37, Hongjimun 2-gil, Jongno-gu, Seoul, Republic of Korea.
  • Submitted: 06-12-2024

  • Accepted: 10-03-2025

  • First Online: 31-03-2025

  • doi: 10.18805/LRF-844

Background: Soybean, a vital global oilseed crop, faces significant challenges from pest infestations and related damage that compromise productivity. Early and accurate detection of pest damage is critical for sustainable agriculture. This study evaluates the effectiveness of the ResNet-20 architecture in identifying and classifying soybean leaf conditions into three categories (Healthy, Caterpillar-damaged and Diabrotica speciosa-affected) and assesses its potential for practical agricultural deployment.

Methods: The study utilized a dataset of 6,410 soybean leaf images sourced from the Mendeley database, categorized into three classes. The ResNet-20 model architecture incorporated convolutional layers, residual connections and global average pooling for efficient feature extraction and classification. Model performance was evaluated using metrics such as precision, recall, F1-score, accuracy and the Matthews Correlation Coefficient (MCC).

Result: ResNet-20 demonstrated robust performance with an overall classification accuracy of 86.58%. High precision and recall values were observed for the pest-affected categories, although a reduced recall for the “Healthy” class was noted. The MCC of 0.7728 highlighted strong predictive reliability. Misclassifications were attributed to visual similarities between pest-related symptoms and to class imbalance in the dataset.

The increasing focus on environmentally friendly farming practices highlights the need for innovative crop management solutions. These approaches aim to reduce environmental harm while improving yield and quality (Dordas, 2008). Advanced technologies, such as machine learning (ML), are transforming agriculture. ML models enable early and accurate identification of crop health issues, allowing farmers to apply targeted interventions. This reduces the excessive use of harmful agrochemicals (Kapustina et al., 2024). By supporting sustainable practices, ML enhances both productivity and product quality. Experts and the global community emphasize the importance of integrating technology to address complex environmental challenges (Al-Dosari and Abdellatif, 2024; Bagga et al., 2024; Moses et al., 2022).
       
Soybean (Glycine max), the only domesticated species in the Glycine genus, is a vital oilseed crop. The United States, Brazil and Argentina are its largest producers. However, insect pests such as caterpillars and Diabrotica speciosa significantly threaten productivity. These pests damage soybean leaves and reduce yields in nearly all growing regions, including major exporting countries (Hartman and Hill, 2010). Soybean domestication has created complex interactions with its pests. Effective pest management requires precise and scalable solutions. Deep learning (DL), particularly convolutional neural networks (CNNs), offers promising tools for detecting pest damage early. These tools enable eco-friendly pest control, reduce chemical use and support sustainable farming. They also enhance crop resilience, improve yields and maintain quality.
       
This study explores the use of ResNet-20 to detect and classify soybean leaf conditions. The model focuses on three categories: healthy leaves, caterpillar damage and Diabrotica speciosa damage. It evaluates ResNet-20’s efficiency and accuracy in real-world agricultural settings. The findings aim to advance sustainable crop management and strengthen global food security.
       
Traditional pest monitoring techniques such as visual inspection and trap monitoring have limitations, particularly in their ability to accurately estimate pest numbers and distribution across large or inaccessible areas (Li et al., 2023; Suzuki et al., 2024). As an extension of these traditional methods, recent advancements have incorporated DL-based technologies for improved pest detection. An unmanned ground vehicle (UGV) equipped with a portable camera can automatically capture images of pests on soybean plants (Park et al., 2023). Several studies have utilized DL technologies to identify pests in crops like wheat, rice and corn, including Cryptolestes pusillus, Sitophilus oryzae and Rhizopertha dominica (W. Li et al., 2019; Shen et al., 2018; Thenmozhi and Reddy, 2019). In oilseed rape, Faster RCNN, RFCN and SSD models have been used for real-time pest detection, including identifying insect pests like Athalia rosae and Creatonotus transiens (He et al., 2019).
       
On the other hand, CNN models have proven to be a transformative technology for detecting plant diseases (AlZubi, 2023). CNNs are designed to process structured grid-like data, such as images, and are highly effective in tasks like classification, object detection and segmentation. They automatically learn spatial hierarchies of features through layers such as convolutional, pooling and fully connected layers (Thippanna et al., 2023). A key advantage of CNNs is their ability to reduce computational complexity by using shared weights across spatial dimensions. This design eliminates the need for manual feature extraction, enhancing their effectiveness for visual data processing (Khan et al., 2020). Popular CNN architectures include AlexNet, VGGNet and more advanced models like ResNet (Yamashita et al., 2018; Taye, 2023). Among these, ResNet has shown exceptional promise in various applications (Alzubaidi et al., 2021).
       
ResNet, or Residual Network, addresses the vanishing gradient problem that often hampers the performance of deep neural networks. By introducing residual blocks, ResNet allows direct shortcuts for information flow, enabling gradients to propagate effectively during backpropagation. This innovation supports deeper network designs without performance degradation. ResNet’s success has been demonstrated in image recognition tasks, including its groundbreaking results in the ImageNet competition (Yamashita et al., 2018; Shafiq and Gu, 2022; Taye, 2023).
       
Recent studies highlight ResNet’s impact across domains. For instance, it has been applied in medical imaging to detect diseases like pneumonia from chest X-rays, improving diagnostic accuracy significantly compared to traditional methods (Kundu et al., 2021). Similarly, ResNet has been integrated into object detection frameworks for tasks like autonomous driving and video surveillance, where its ability to handle complex datasets has proven invaluable (Gupta et al., 2021; Alabyad et al., 2024).
       
ResNet has been used in plant disease detection due to its ability to address challenges like vanishing gradients in deep neural networks (Cho, 2024). Several studies have highlighted its effectiveness in diagnosing plant diseases. One study employed ResNet-18 to classify tomato leaf diseases using a dataset with variations in lighting and orientation. The model utilized data augmentation and preprocessing techniques like blurring to enhance robustness. With its efficient balance of depth and computational demands, ResNet-18 achieved high accuracy in classifying healthy and diseased leaves, demonstrating its suitability for real-world applications where resource constraints exist (Padshetty and Ambika, 2023). Another study explored ResNet-50’s application to classify multiple plant diseases, leveraging transfer learning to improve performance on small datasets. The architecture’s residual blocks enabled it to capture complex patterns in diverse crops and disease categories effectively. This approach significantly reduced training time and improved classification precision, showcasing ResNet-50’s scalability and adaptability (Desanamukula et al., 2024).
       
Additionally, researchers have introduced a modified ResNet variant, Leaky ReLU-ResNet, which integrates advanced activation functions to enhance feature extraction from plant images. Tested on the PlantVillage dataset, this method achieved superior classification metrics (e.g., accuracy of 94.56%), underscoring its potential for precision agriculture. ResNet-20, a lightweight version of ResNet, is particularly suitable for agricultural applications where computational resources may be constrained. Its ability to classify diseases with high accuracy and robustness has made it a preferred choice in studies on plant disease detection. Recent advancements in soybean disease detection have utilized enhanced versions of ResNet. For instance, a study integrated data augmentation techniques, such as rotation and noise addition, with ResNet-20 to improve model performance. This approach demonstrated the potential of ResNet-based models to classify diseases accurately, achieving over 90% accuracy in controlled experiments. Similarly, hybrid models like Leaky ReLU-ResNet have incorporated advanced activation functions to further refine classification accuracy for plant diseases, including those affecting soybeans, demonstrating F1 scores exceeding 92% in real-world datasets. Such applications not only enhance early detection and treatment but also support scalable monitoring across large agricultural fields. These advancements provide a foundation for integrating artificial intelligence into precision agriculture, helping to optimize resource use and reduce losses due to disease outbreaks.
       
In this research, the ResNet-20 architecture is used to identify and classify two specific pest-related conditions affecting soybean leaves. The goal is to improve the detection process by utilizing the advanced capabilities of DL for more precise analysis of leaf images. This approach aims to enable early detection and better management of pest infestations, ultimately supporting more effective soybean cultivation practices.
Dataset used
 
Selecting an appropriate dataset is critical for all stages of the object recognition research process, from training the model to evaluating its performance. In this study, the training dataset was sourced from the Mendeley database (Mignoni, 2021). This dataset contains images of soybean leaves categorized into three distinct groups (Fig 1):
1. Healthy: undamaged soybean leaves.
2. Caterpillar: soybean leaves damaged by caterpillars.
3. Diabrotica speciosa: soybean leaves affected by the pest Diabrotica speciosa.
In total, the dataset comprises 6,410 images distributed as follows:
· Caterpillar: 3,309 images.
· Diabrotica speciosa: 2,205 images.
· Healthy: 896 images.

Fig 1: Three classes of soybean leaf conditions.


       
These images provide a robust foundation for training and evaluating algorithms for the classification and identification of different types of soybean leaf conditions. The dataset is well-suited for research aimed at developing recognition systems for agricultural pest and plant health monitoring.
 
Model architecture
 
The model used in this study for soybeans is based on the ResNet architecture and is specifically adapted to classify soybean leaf images into three categories: Caterpillar, Diabrotica speciosa and Healthy. This section provides an overview of the architecture, followed by theoretical explanations of its components.
 
Input layer
 
The model accepts input images with dimensions of 224 x 224 x 3 (height, width and three color channels). This input size was chosen to standardize the data for computational efficiency and compatibility with the convolutional layers. The input is processed in batches of size 32.
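For illustration, the input pipeline described above could be set up as follows. This is a minimal sketch assuming TensorFlow/Keras; the directory layout, validation split and pixel rescaling are illustrative assumptions rather than details reported in this study.

```python
import tensorflow as tf

# Hypothetical layout: data/soybean/<class_name>/*.jpg, one sub-folder
# per class (Caterpillar, Diabrotica speciosa, Healthy).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/soybean",
    validation_split=0.2,      # assumed split; not specified in the study
    subset="training",
    seed=42,
    image_size=(224, 224),     # input dimensions used in this study
    batch_size=32,             # batch size used in this study
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/soybean",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(224, 224),
    batch_size=32,
)

# Scale pixel values to [0, 1]; the exact normalization is an assumption.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))
```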
 
Convolutional layers and feature extraction
 
The architecture begins with a series of convolutional layers that extract spatial features from the input images. The initial convolutional layer uses 16 filters of size 3 x 3, producing feature maps that represent low-level image characteristics such as edges and textures. Subsequent layers increase the number of filters, gradually capturing more complex patterns.

Each convolutional layer is followed by:
· Batch Normalization: This normalizes the activations, reducing internal covariate shift and accelerating convergence during training.
· ReLU Activation: A non-linear function that introduces non-linearity to the model, enabling it to learn complex mappings.
 
Residual connections
 
The hallmark of the ResNet architecture is the use of residual connections. Instead of learning a direct mapping, the model learns the residual, or difference:
 
H(x) = F(x) + x
 
where:
F(x) = the transformation learned by the convolutional layers.
x = the input to the residual block.
       
These skip connections address the vanishing gradient problem in deep networks, allowing the model to retain information from earlier layers.
       
In this architecture, residual connections are implemented at regular intervals, ensuring efficient feature propagation and reuse across layers.
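In code, the residual computation H(x) = F(x) + x maps directly onto a small helper. The sketch below uses the Keras functional API and is illustrative rather than the exact implementation used here; the projection shortcut (a 1 x 1 strided convolution) is the standard device for matching shapes in the strided downsampling blocks described in the next subsection.

```python
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    """Basic ResNet block: two 3 x 3 convolutions plus a skip connection.

    Computes H(x) = F(x) + x, where F is the stacked transformation.
    """
    shortcut = x

    # F(x): conv -> batch norm -> ReLU -> conv -> batch norm.
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, strides=1, padding="same")(y)
    y = layers.BatchNormalization()(y)

    # When the block changes resolution or width, project the shortcut
    # with a 1 x 1 convolution so the two branches can be added.
    if stride != 1 or shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
        shortcut = layers.BatchNormalization()(shortcut)

    y = layers.Add()([y, shortcut])  # H(x) = F(x) + x
    return layers.Activation("relu")(y)
```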
 
Downsampling and feature aggregation
 
To capture features at different scales, the model includes downsampling layers.
 
Strided convolutions
 
Strided convolutions reduce the spatial dimensions of the feature maps (e.g., from 224 x 224 to 112 x 112). These layers increase the receptive field of neurons, enabling the model to understand contextual information in the images.
 
Global average pooling
 
After extracting features, a Global Average Pooling (GAP) layer reduces the spatial dimensions of the feature maps to a single vector per channel. GAP provides spatial invariance, ensuring the model focuses on the most critical features regardless of their exact location in the image.
 
Fully connected layers
 
The feature vector from the GAP layer is passed through two dense layers:
 
A fully connected layer with 512 units and ReLU activation, followed by a Dropout layer (for regularization).
       
The final output layer with 3 units (one for each class) and a softmax activation function, which outputs probabilities for the three categories.
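A hedged sketch of the full classifier, reusing the residual_block helper above, is given below. The three-stage layout of three blocks each (16, 32 and 64 filters, the classic ResNet-20 arrangement) and the 0.5 dropout rate are assumptions for illustration; only the input shape, stem width, GAP head, 512-unit dense layer and 3-way softmax are taken from the description above.

```python
from tensorflow.keras import layers, models

def build_resnet20(num_classes=3):
    inputs = layers.Input(shape=(224, 224, 3))

    # Stem: initial 3 x 3 convolution with 16 filters.
    x = layers.Conv2D(16, 3, padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    # Three stages of three residual blocks; the first block of
    # stages 2 and 3 downsamples with stride 2 (assumed layout).
    for stage, filters in enumerate([16, 32, 64]):
        for block in range(3):
            stride = 2 if (stage > 0 and block == 0) else 1
            x = residual_block(x, filters, stride=stride)

    # Global average pooling collapses each feature map to one value.
    x = layers.GlobalAveragePooling2D()(x)

    # Dense head: 512 ReLU units with dropout, then a 3-way softmax.
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)  # dropout rate assumed
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    return models.Model(inputs, outputs)
```

Calling build_resnet20().summary() prints the per-layer shapes and parameter counts from which totals such as those reported below are read; the counts for this sketch will differ somewhat because of the assumed block layout.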
 
Model illustration
 
Fig 2 summarizes the ResNet-based architecture. The summary of the model’s parameters highlights its lightweight and efficient architecture, consisting of 314,931 total parameters. Among these, 313,139 were trainable parameters, while 1,792 were non-trainable. This streamlined design achieved a balance between computational efficiency and high accuracy, making it ideal for detecting soybean leaf conditions.

Fig 2: Model architecture of ResNet-20.

The ResNet-20 model was evaluated for its performance in detecting soybean leaf conditions. The model demonstrated strong performance, achieving high classification metrics and robust generalization capabilities. The training and validation accuracy trends (Fig 3) show steady improvement over 25 epochs, with both curves converging around an accuracy of 87%. This suggests the model avoids overfitting and performs well across datasets. The final test accuracy of 87.05% (loss: 0.4035) further supports the model’s robustness. These results are consistent with prior research, such as Singh et al. (2020), who achieved 96.5% accuracy in tomato plant disease detection, among the highest reported in their study (Zhang et al., 2018). However, unlike their larger, more complex model, the lightweight design of ResNet-20 in this study offers computational efficiency, making it more suitable for real-world applications in resource-constrained environments.

Fig 3: Accuracy and loss for training and validation.
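For reference, a run matching the 25-epoch schedule plotted in Fig 3 could be expressed as follows; the optimizer and loss function shown here are assumptions, not reported settings.

```python
model = build_resnet20(num_classes=3)

model.compile(
    optimizer="adam",                        # assumed optimizer
    loss="sparse_categorical_crossentropy",  # integer class labels
    metrics=["accuracy"],
)

# Train for 25 epochs, tracking the validation curves shown in Fig 3.
history = model.fit(train_ds, validation_data=val_ds, epochs=25)

# A held-out test split would be evaluated here, analogous to the
# reported test accuracy of 87.05% (loss 0.4035).
test_loss, test_acc = model.evaluate(val_ds)
```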


       
The classification performance of the ResNet-20 CNN model in detecting soybean pests is shown in Table 1. The model achieved an accuracy of 86.58%. For the Caterpillar class, the precision was 0.8632, recall was 0.9154 and F1-score was 0.8886, indicating strong performance in detecting caterpillar damage. The Diabrotica speciosa class had a precision of 0.8894, recall of 0.8409 and an F1-score of 0.8645, showing reliable detection but slightly lower recall compared to the Caterpillar class. The Healthy class had a precision of 0.8171, recall of 0.7444 and an F1-score of 0.7791, which was comparatively lower, possibly due to the imbalance in the dataset. The macro average precision was 0.8566, recall 0.8336 and F1-score 0.8440. The weighted average F1-score was 0.8649, indicating that the model performed best in detecting Caterpillar and Diabrotica speciosa damage, with somewhat lower performance in classifying healthy leaves.

Table 1: Classification metrics used to evaluate ResNet-20’s performance.


       
The Matthews Correlation Coefficient (MCC) value of 0.7728 indicates a strong level of correlation between the predicted classifications and the actual outcomes across all classes in the dataset. This value suggests that the model exhibits a consistent performance, effectively identifying true positives and true negatives while minimizing false positives and false negatives. Such reliability in classification metrics reflects the model’s robustness and makes it a valuable tool for accurately predicting outcomes in the given context.
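These metrics can be reproduced from the model’s predictions with scikit-learn. The sketch below continues the earlier code, collecting labels and softmax outputs batch by batch so that ordering is preserved; it is illustrative rather than the exact evaluation script.

```python
import numpy as np
from sklearn.metrics import classification_report, matthews_corrcoef

class_names = ["Caterpillar", "Diabrotica speciosa", "Healthy"]

# Collect ground-truth labels and softmax outputs in one pass so the
# two arrays stay aligned even if the dataset is shuffled.
y_true, y_prob = [], []
for images, labels in val_ds:
    y_true.append(labels.numpy())
    y_prob.append(model.predict(images, verbose=0))
y_true = np.concatenate(y_true)
y_prob = np.concatenate(y_prob)
y_pred = np.argmax(y_prob, axis=1)

# Per-class precision, recall and F1, plus the macro and weighted
# averages reported in Table 1.
print(classification_report(y_true, y_pred, target_names=class_names, digits=4))

# Matthews Correlation Coefficient across all three classes.
print("MCC:", matthews_corrcoef(y_true, y_pred))
```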
       
The confusion matrix, depicted in Fig 4, provides a detailed overview of the distribution between true and predicted labels for the classification model.

Fig 4: Confusion matrix.


       
The confusion matrix analysis of the ResNet-20 CNN model shown in Fig 4, which is designed to detect soybean pests, shows a generally strong performance in classifying the different categories. The model demonstrates high accuracy in predicting soybean plants infested with caterpillars (303 correct predictions) and those infested with Diabrotica speciosa (185 correct predictions). The correct identification of healthy plants (67 correct predictions) further supports the model’s effectiveness. However, some misclassifications are evident, such as the mislabeling of healthy plants as being infested with pests (18 instances for caterpillars and 5 for Diabrotica speciosa).
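The counts in Fig 4 correspond to a standard confusion matrix, which can be computed and plotted from the arrays gathered above; a brief sketch:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# Rows are true labels, columns are predicted labels; the diagonal
# holds the correct predictions (e.g., 303 Caterpillar, 185
# Diabrotica speciosa and 67 Healthy in Fig 4).
cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm, display_labels=class_names).plot()
plt.show()
```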
       
This misclassification is likely due to the visual similarities between the two groups, which presents a common challenge in pest detection tasks. Research has shown that overlapping symptoms from different pests can complicate identification, as the severity of pest damage does not always correlate directly with visible symptoms (Barbedo, 2019). The ROC curves for the ResNet-20 CNN model in detecting soybean pests demonstrate excellent performance across all three classes (Fig 5). The model shows strong accuracy in identifying caterpillar infestations, with an AUC of 0.94, indicating good detection with a low false positive rate. The AUC for Diabrotica speciosa is even higher at 0.95, suggesting that the model is particularly adept at identifying this pest. The highest AUC of 0.96 is achieved for healthy plants, indicating minimal false positives and excellent accuracy in distinguishing healthy crops. The comparison of the curves reveals that the model performs slightly better in detecting healthy plants and Diabrotica speciosa compared to caterpillar infestations. The ROC analysis also highlights the potential for optimizing classification thresholds to balance sensitivity and specificity more effectively. To further enhance the model’s performance, fine-tuning of hyperparameters and data augmentation could be explored to improve generalization and robustness.

Fig 5: ROC curves.
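One-vs-rest ROC curves like those in Fig 5 can be derived from the softmax probabilities gathered earlier; a minimal scikit-learn sketch:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

# Binarize the integer labels for a one-vs-rest ROC analysis.
y_bin = label_binarize(y_true, classes=[0, 1, 2])

for i, name in enumerate(class_names):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_prob[:, i])
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```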


       
When comparing the results with traditional methods discussed by Kim et al. (2024), it is evident that the performance of the ResNet-20 CNN model in this study, as shown in Table 1, demonstrates a solid classification accuracy of 86.58%. The model’s precision, recall and F1-score across the different classes indicate its reliability, with the highest scores observed for the Caterpillar and Diabrotica speciosa classes. In contrast, traditional pest monitoring methods, as shown by Kim et al. (2024), lag significantly behind in terms of accuracy (86.58% vs. 78.4%).

One limitation observed in the current model is the reduced recall for the Healthy class (0.7444). This suggests that the model sometimes misclassifies healthy leaves as pest-infected leaves. Addressing this issue may require the inclusion of additional features or preprocessing techniques to better differentiate between subtle visual variations in healthy and pest-infected leaves. Future work could explore hybrid models or ensemble methods, as suggested by Gupta et al. (2021), to enhance classification performance further.
       
The relatively lower performance in the healthy class could also be linked to the class imbalance in the dataset, as the number of samples in this category was smaller compared to the pest-infected leaves classes. Addressing this imbalance through techniques such as oversampling, undersampling, or weighted loss functions could improve the model’s performance further. Moreover, integrating additional image preprocessing techniques and augmenting the dataset with diverse environmental conditions could help the model generalize better.
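Of these remedies, a weighted loss is the simplest to retrofit: weights inversely proportional to class frequency make errors on the under-represented Healthy class cost more during training. The sketch below follows the 3,309/2,205/896 class counts; applying it was not part of the reported experiments.

```python
import numpy as np

# Balanced weights: total / (n_classes * class_count), so the
# 896-image Healthy class receives the largest multiplier.
counts = np.array([3309, 2205, 896])  # Caterpillar, Diabrotica speciosa, Healthy
weights = counts.sum() / (len(counts) * counts)
class_weight = dict(enumerate(weights))  # approx. {0: 0.65, 1: 0.97, 2: 2.38}

# Keras scales each sample's loss by its class weight during training.
model.fit(train_ds, validation_data=val_ds, epochs=25, class_weight=class_weight)
```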
       
Despite these challenges, the results suggest that ResNet-20 can be an effective tool for automating pest detection in soybean cultivation. This automation could help farmers manage pest-infested leaves more effectively, reducing crop losses and enhancing productivity. Future work could focus on deploying the model in field conditions, integrating it with real-time imaging systems and expanding its capabilities to detect a wider range of plant diseases.
The study highlighted ResNet-20’s effectiveness in accurately classifying soybean leaf conditions with an accuracy of 86.58%, making it suitable for resource-constrained agricultural settings. Its lightweight design and robust performance underscore its potential for real-world pest detection, aiding in timely interventions and sustainable farming practices. Limitations such as lower recall for the “Healthy” class and misclassifications due to symptom similarities can be addressed through data augmentation and advanced modelling. Future work will focus on field deployment, integration with real-time systems and extending the model’s application to other crops, enhancing agricultural productivity and sustainability.
This research was funded by a 2022 Research Grant from Sangmyung University (2023-A000-0014).
 
Authors’ contributions
 
All authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all the aspects of this work.
 
Availability of data and materials
 
Not Applicable.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
The author declares that this work is original and that the manuscript has not been published in any other journal.
The author declares no conflict of interest.

  1. Alabyad, N., Hany, Z., Mostafa, A., Eldaby, R., Tagen, I.A. and Mehanna, A. (2024, March). From vision to precision: The dynamic transformation of object detection in autonomous systems. In Proceedings of the 2024 6th International Conference on Computing and Informatics (ICCI) (pp. 332-344). IEEE.

  2. Al-Dosari, M.N.A. and Abdellatif, M.S. (2024). The environmental awareness level among Saudi women and its relationship to sustainable thinking. Acta Innovations. 52: 28-42. https://doi.org/10.62441/ActaInnovations.52.4.

  3. Alzubaidi, L., Zhang, J., Humaidi, A.J., Al-Dujaili, A., Duan, Y., Al-Shamma, O. and Farhan, L. (2021). Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data. 8: 1-74.

  4. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. https://doi.org/10.18805/IJAR.BF-1684.

  5. Bagga, T., Ansari, A.H., Akhter, S., Mittal, A. and Mittal, A. (2024). Understanding Indian consumers’ propensity to purchase electric vehicles: An analysis of determining factors in environmentally sustainable transportation. International Journal of Environmental Sciences. 10(1): 1-13. https://www.theaspd.com/resources/1.%20Electric%20Vehicles%20and%20Enviorment.pdf.

  6. Barbedo, J.G.A. (2019). Plant disease identification from individual lesions and spots using deep learning. Biosystems Engineering. 180: 96-107. https://doi.org/10.1094/PDIS-03-15-0340-FE.

  7. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. doi: 10.18805/LRF-787.

  8. Desanamukula, V.S., Dharma Teja, T. and Rajitha, P. (2024). An in-depth exploration of ResNet-50 and transfer learning in plant disease diagnosis. Proceedings of the 2024 International Conference on Inventive Computation Technologies (ICICT). 614-621. https://doi.org/10.1109/ICICT60155.2024.10544802.

  9. Dordas, C. (2008). Role of nutrients in controlling plant diseases in sustainable agriculture: A review. Agronomy for Sustainable Development. 28: 33-46. https://doi.org/10.1051/agro:2007051.

  10. Gupta, A., Anpalagan, A., Guan, L. and Khwaja, A.S. (2021). Deep learning for object detection and scene perception in self-driving cars: Survey, challenges and open issues. Array. 10: 100057.

  11. Hartman, G.L. and Hill, C.B. (2010). Diseases of soybean and their management. In The Soybean: Botany, Production and Uses. Wallingford, UK: CABI. (pp. 276-299).

  12. He, Y., Zeng, H., Fan, Y., Ji, S. and Wu, J. (2019). Application of deep learning in integrated pest management: A real-time system for detection and diagnosis of oilseed rape pests. Mobile Information Systems. 2019: 1-14. https://doi.org/10.1155/2019/4570808.

  13. Kapustina, O., Burmakina, P., Gubina, N., Serov, N. and Vinogradov, V. (2024). User-friendly and industry-integrated AI for medicinal chemists and pharmaceuticals. Artificial Intelligence Chemistry. 100072. https://doi.org/10.1016/j.aichem.2024.100072.

  14. Khan, A., Sohail, A., Zahoora, U. and Qureshi, A.S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review. 53: 5455-5516.

  15. Kim, B., Alamri, A.M. and AlQahtani, S.A. (2024). Leveraging machine learning for early detection of soybean crop pests. Legume Research - An International Journal. 47(6): 1023-1031. doi: 10.18805/LRF-794.

  16. Kundu, R., Das, R., Geem, Z.W., Han, G.T. and Sarkar, R. (2021). Pneumonia detection in chest X-ray images using an ensemble of deep learning models. PLOS ONE. 16(9): e0256630.

  17. Li, M., Cheng, S., Cui, J., Li, C., Li, Z., Zhou, C. and Lv, C. (2023). High-performance plant pest and disease detection based on model ensemble with inception module and cluster algorithm. Plants. 12(1): 200. https://doi.org/10.3390/plants12010200.

  18. Li, W., Chen, P., Wang, B. and Xie, C. (2019). Automatic localization and count of agricultural crop pests based on an improved deep learning pipeline. Scientific Reports. 9(1). https://doi.org/10.1038/s41598-019-43171-0.

  19. Mignoni, M.E. (2021). Images of soybean leaves [Data set]. Mendeley Data. (Version 1). https://doi.org/10.17632/bycbh73438.1.

  20. Moses, M.B., Nithya, S.E. and Parameswari, M. (2022). Internet of things and geographical information system based monitoring and mapping of real time water quality system. International Journal of Environmental Sciences. 8(1): 27-36. https://www.theaspd.com/resources/3.%20Water%20Quality%20Monitoring%20Paper.pdf.

  21. MS, A. and HK, Y. (2024). Deep learning for early detection of tomato leaf diseases: A ResNet-18 approach for sustainable agriculture. International Journal of Advanced Computer Science and Applications. 15(1).

  22. Padshetty, S. and Ambika. (2023). Leaky ReLU-ResNet for plant leaf disease detection: A deep learning approach. Engineering Proceedings. 59(1): 39. https://doi.org/10.3390/engproc2023059039.

  23. Park, Y., Choi, S.H., Kwon, Y., Kwon, S., Kang, Y.J. and Jun, T. (2023). Detection of soybean insect pest and a forecasting platform using deep learning with unmanned ground vehicles. Agronomy. 13(2): 477. https://doi.org/10.3390/agronomy13020477.

  24. Shafiq, M. and Gu, Z. (2022). Deep residual learning for image recognition: A survey. Applied Sciences. 12(18): 8972.

  25. Shen, Y., Zhou, H., Li, J., Jian, F. and Jayas, D.S. (2018). Detection of stored-grain insects using deep learning. Computers and Electronics in Agriculture. 145: 319-325. https://doi.org/10.1016/j.compag.2017.11.039.

  26. Suzuki, L.E.A.S., Casalinho, H.D. and Milani, I.C.B. (2024). Strategies and public policies for soil and water conservation and food production in Brazil. Soil Systems. 8(2): 45. https://doi.org/10.3390/soilsystems8020045.

  27. Taye, M.M. (2023). Theoretical understanding of convolutional neural networks: Concepts, architectures, applications, future directions. Computation. 11(3): 52. https://doi.org/10.3390/computation11030052.

  28. Thenmozhi, K. and Reddy, U.S. (2019). Crop pest classification based on deep convolutional neural network and transfer learning. Computers and Electronics in Agriculture. 164: 104906. https://doi.org/10.1016/j.compag.2019.104906.

  29. Thippanna, D.G., Priya, M.D. and Srinivas, T.A.S. (2023). An effective analysis of image processing with deep learning algorithms. International Journal of Computer Applications. 975: 8887.

  30. Yamashita, R., Nishio, M. and Do, R.K.G. (2018). Convolutional neural networks: An overview and application in radiology. Insights into Imaging. 9: 611-629. https://doi.org/10.1007/s13244-018-0639-9.
