Legume Research

  • Chief Editor: J. S. Sandhu

  • Print ISSN 0250-5371

  • Online ISSN 0976-0571

  • NAAS Rating 6.80

  • SJR 0.391

  • Impact Factor 0.8 (2024)

Frequency :
Monthly (January, February, March, April, May, June, July, August, September, October, November and December)
Indexing Services :
BIOSIS Preview, ISI Citation Index, Biological Abstracts, Elsevier (Scopus and Embase), AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus

Early Leaf Disease Detection of Soybean Plants using Convolution Neural Network Algorithm 

Abeer Alnuaim1, Alaa Altheneyan1, Ahmad Ali AlZubi2,*
  • https://orcid.org/0000-0002-2537-2439, https://orcid.org/0000-0002-4148-1512, https://orcid.org/0000-0001-8477-8319
1Department of Computer Science and Engineering, College of Applied Studies and Community Service, King Saud University, Riyadh 11495, Saudi Arabia.
2Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia.
  • Submitted 27-03-2024

  • Accepted 23-01-2025

  • First Online 29-01-2025

  • doi 10.18805/LRF-808

Background: Agricultural specialists usually examine leaves closely to check for plant diseases, a process that is time-consuming and error-prone. To overcome these problems, machine learning methods, particularly sequential convolutional neural networks (CNNs), are now widely used because of their strong capacity to extract features and patterns from image datasets.

Methods: This study introduces a complete methodology for detecting diseases in soybean plants by employing Convolutional Neural Networks (CNNs). A dataset sourced from the Mendeley database is used, containing images of soybean legume crops affected by caterpillars, Diabrotica Speciosa and undamaged (healthy) foliage. The research explores the efficacy of the CNN model in accurately classifying these diseases. The CNN algorithm is trained to handle the complex nature of soybean leaf imaging by utilizing convolutional layers for extracting features, pooling layers for reducing dimensionality and a softmax layer for classification.

Result: The training and validation results exhibit remarkable accuracy (97.64%) after 150 epochs. The evaluation metrics, such as precision, recall and F1-score, demonstrate the model's performance across the different soybean leaf diseases, indicating its ability to accurately identify instances within each category. The confusion matrix provides a complete picture of prediction accuracy across the different diseases. The overall accuracy of the proposed model is 94.04%. This study can serve as a reference for advancing agricultural disease detection, which in turn can help reduce crop losses and ensure food security.

Among essential crops, soybean (Glycine max (L.) Merrill) is the most significant seed legume globally. It contributes about 25% of the world's edible oil production, serves as the primary source of protein concentrate for animal feed and is necessary for the production of poultry and fish feeds. Soybeans are also a significant source of raw materials for the food, pharmaceutical and other industries. India ranks fifth in soybean output volume and fourth in cultivation area worldwide, based on estimates from the FAO and AMIS. During the 2020-21 growing season, soybeans in India were cultivated on 12.06 million hectares, producing 13.58 million tonnes at a productivity rate of 1,126 kg/ha. In contrast, the global average soybean productivity is approximately 2,900 kg/ha, with the highest yields (>3,000 kg/ha) recorded in the United States and Brazil (FAO, 2020). However, soybean plants are exposed to a range of leaf diseases caused by pathogens including fungi, bacteria and viruses. If these diseases are not promptly recognised and controlled, they may have a significant negative effect on crop output and quality. Timely identification of leaf diseases is essential for implementing solutions that can reduce economic losses and promote sustainable farming practices. Traditional disease identification techniques often depend on agricultural experts visually examining the plants, a process that is time-consuming and susceptible to human error. Consequently, there is an increasing need for advanced methods such as machine learning and computer vision to automate and improve disease detection.
       
Computational methods such as Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs) and Random Forests (RF) are widely recognized for their exceptional ability to address computer vision tasks (Saleem et al., 2022). CNNs have gained widespread recognition for their proficiency in interpreting image data, particularly in applications such as image classification, segmentation and detection. The remarkable effectiveness of the CNN architecture in these tasks can be attributed to its capability to handle raw data without any prior knowledge (Senan et al., 2020). Several researchers have contributed to improving the performance of CNNs. According to the literature, enhancing the performance of a CNN model requires adjusting architectural parameters and weights (Gonzalez, 2007). Model performance can generally be improved by increasing the size of the training dataset, applying transformations to the training data and fine-tuning various parameters (Zhang et al., 2020; Semara et al., 2024; Maltare et al., 2023; Bagga et al., 2024).
       
Various machine learning techniques have found applications in agricultural research (Rumpf et al., 2010). Traditionally, image classification tasks relied on handcrafted features such as SIFT (van Dokkum et al., 2000), HoG (Zhang et al., 2012) and SURF, followed by the application of learning algorithms within these feature spaces (Bay et al., 2008). However, the effectiveness of such approaches depended heavily on predefined features (Ma et al., 2021). Learned representations, which are more effective and efficient, have become more popular in recent years. With representation learning, algorithms can automatically analyse large image collections and identify characteristics that help classify images with minimal error (Taye, 2023). Sequential neural network techniques have become an effective tool for classifying images and recognizing objects. CNNs are a kind of deep neural network (DNN) designed especially for processing images, drawing inspiration from the human visual system. A number of CNN architectures have been proposed for object recognition; baseline models for these tasks include AlexNet (Mall et al., 2023) and LeNet (Alzubaidi et al., 2021).
       
This study applies the CNN model to identify soybean disease from leaf images captured in uncontrolled environments. The model is structured according to a sequential architecture and the image dataset is sourced from the Mendeley database. The dataset images are preprocessed before being sent for training. The overall performance of the proposed model is evaluated in terms of classification metrics such as accuracy, precision, recall, F1-score and the confusion matrix.
 
Related work
 
Farming was once primarily about feeding more people, but it is now an important part of economies worldwide. However, plant diseases are a major problem, causing substantial losses in crops and revenue for agriculture and forestry. One example is soybean rust, a fungal disease that has caused substantial financial damage to soybean crops; eliminating just 20% of the infection has the potential to generate farmers a profit of almost $11 million (Singh et al., 2020). Hence, prompt intervention is vital for the early detection and identification of plant diseases. Multiple techniques are available for identifying plant diseases. Certain diseases may not present any observable symptoms, or only become apparent when it is already too late to act. In such situations, intricate examinations are required, frequently utilizing high-powered microscopes. Furthermore, some indications may only be discernible in segments of the electromagnetic spectrum that are imperceptible to the human visual system (Liu and Wang, 2021). This section explores the current trends in utilizing deep learning and CNN architectures in the field of agriculture. Before deep learning became popular, image processing and machine learning methods were used to classify plant diseases (Tugrul et al., 2022). Typically, these systems followed a series of steps: digital images were taken using a digital camera, then image processing techniques such as enhancement, segmentation, color space conversion and filtering were applied to prepare the images for further analysis. The images were then subjected to feature extraction, resulting in the identification of important characteristics, which were employed as input for the classification process. The overall classification accuracy was highly dependent on the techniques used for image processing and feature extraction (da Silva and Mendonca, 2004).
Nevertheless, recent research has shown that networks trained using generic data can achieve state-of-the-art performance.
       
Artificial intelligence-based algorithms have consistently shown exceptional performance in almost all important classification tasks (Salvi et al., 2021). The same architecture allows both feature extraction and classification to be performed. Convolutional Neural Networks (CNNs), a distinct category of artificial neural networks, have been extensively utilized in several domains of pattern recognition, including computer vision and speech recognition. Patil and Rane (2021) employed three architectural concepts to guarantee a degree of invariance to shifts, scales and distortions: local receptive fields, shared weights and spatial or temporal sub-sampling. Several CNN architectures, such as LeNet, AlexNet and GoogLeNet, have been proposed for object recognition (Shamsaldin et al., 2019; Wasik and Pattinson, 2024; Cho, 2024; Porwal et al., 2024; AlZubi, 2023; Hai and Duong, 2024).
       
The LeNet architecture, described by Boulent et al. (2019), was the inaugural convolutional neural network specifically engineered for identifying handwritten numerals. The model consists of a sequence of convolutional and sub-sampling layers, followed by the densely connected layers of an MLP (Boulent et al., 2019).
               
Researchers have proposed the use of Convolutional Neural Networks (CNNs) for plant disease classification and leaf identification. By examining leaf images, Atabay (2017) created a CNN structure for plant recognition; the model's classification accuracy was 97.24% on the Flavia leaf dataset and 99.11% on the Swedish leaf dataset. Using 1.8 million images from the ILSVRC 2012 dataset, Wallelign et al. (2018) trained a CNN using a deep learning technique and obtained an average precision of 0.486 for the task of plant identification. Plant diseases were categorised by Mohanty et al. (2016) using well-known deep CNN models such as AlexNet and GoogLeNet. With a publicly accessible dataset of 54,306 images, they attained an accuracy of 99.35%. However, when tested on images taken in a different environment, the model's accuracy dropped to 31.4%. Still, these results highlight how well deep CNNs classify plant diseases (Mohanty et al., 2016).
The realm of machine learning, particularly deep learning, employs complex neural networks to interpret large datasets. Various data types, spanning text, audio, video, images, time series, sensor data and even Internet of Things streams, find application in deep learning frameworks. Notably, convolutional neural networks (CNNs) excel at processing image data, making them pivotal in tasks like distinguishing between diseased and healthy soybean leaves with remarkable precision. This process typically involves four key stages: first, data collection establishes the base; then the model is developed; next, the data is used for training; and finally, the model is tested.
 
Datasets
 
For image recognition throughout the process, from training to performance evaluation, a precise dataset is essential to empower the algorithms. In this study, the training dataset was sourced from the Mendeley database (Mignoni et al., 2022), encompassing three distinct categories of soybean leaf images: Caterpillar, Diabrotica Speciosa and Healthy. These images depict soybean leaves affected by caterpillars, Diabrotica Speciosa and undamaged (healthy) foliage (Fig 1).

Fig 1: Categorizing Images of Soybean Plants as Healthy, Infected with Caterpillars and Diabrotica Speciosa.


       
Altogether, the dataset comprises 6,410 images, distributed across the three categories as follows: Caterpillar (3,309 images), Diabrotica Speciosa (2,205 images) and Healthy (896 images).

Pre-processing and tagging
 
Before feeding the raw images into the CNN classification algorithm for analysis, they were preprocessed to enhance their quality and adjust their attributes. Images sourced from various origins may exhibit diverse sizes, necessitating resizing and rescaling to ensure uniformity. This standardization is crucial for maintaining consistency and expediting the training process, as larger images entail higher processing overheads. To ensure uniformity, the images were resized to dimensions of 256×256×3 and then converted to grayscale. Subsequently, the resized soybean leaf images were categorized by type, with each image labeled according to its corresponding health status. This categorization process identified three classes, healthy, caterpillar and Diabrotica Speciosa, within both the training and test datasets.
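The paper does not publish its preprocessing code. A minimal NumPy sketch of the resizing, grayscale conversion and labeling steps described above might look like this; the nearest-neighbour resize, the rescaling to [0, 1] and the label encoding are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Illustrative label encoding for the three classes (assumed, not from the paper).
LABELS = {"healthy": 0, "caterpillar": 1, "diabrotica_speciosa": 2}

def resize_nearest(img, size=(256, 256)):
    """Nearest-neighbour resize of an H x W x C image array."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

def to_grayscale(img):
    """Luminance-weighted grayscale conversion of an RGB image."""
    return img @ np.array([0.299, 0.587, 0.114])

def preprocess(img, label_name):
    resized = resize_nearest(img)   # 256 x 256 x 3
    gray = to_grayscale(resized)    # 256 x 256
    scaled = gray / 255.0           # rescale pixel values to [0, 1]
    return scaled, LABELS[label_name]

# Example: a random 300 x 400 RGB "leaf" image
img = np.random.randint(0, 256, (300, 400, 3)).astype(float)
x, y = preprocess(img, "caterpillar")
print(x.shape, y)   # (256, 256) 1
```

Any real pipeline would read images from disk (e.g. with an image library) rather than generate random arrays; the shapes and value ranges are what matter here.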
 
Training Dataset
 
In developing a CNN model for detecting diseases in soybean plants, several sequential steps are involved. Initially, the process begins with importing the necessary libraries and compiling a dataset containing images of soybean leaf diseases, as depicted in Fig 2.

Fig 2: Flow chart of CNN model for the customized dataset.


       
Subsequently, the dataset is divided into three subsets (training, validation and testing) using an 8:1:1 ratio. Following this division, the dataset undergoes pre-processing and augmentation to enhance its quality and increase its diversity. During the training phase, the model is fed tensor images from the training set, allowing it to learn to associate inputs with their corresponding labels. Finally, the model's accuracy is evaluated by comparing its learned outputs against the tensor labels, enabling an assessment of its performance.
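The 8:1:1 split described above can be sketched with a generic shuffle-and-slice helper (an assumed implementation, shown here with the dataset size reported in this study):

```python
import random

def split_811(items, seed=42):
    """Shuffle and divide a dataset into train/validation/test at an 8:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Example with the 6,410 images reported for this dataset
train, val, test = split_811(range(6410))
print(len(train), len(val), len(test))   # 5128 641 641
```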
 
Architecture of Convolutional Neural Network (CNN) Model
 
Three main building blocks are used in Convolutional Neural Networks (CNNs): convolutional layers, pooling layers and activation functions (usually Rectified Linear Units; ReLUs). The configuration of layers, their arrangement and the incorporation of additional processing units differ across architectures, thereby influencing their specificity and performance.
       
The convolution layer is a fundamental component of the CNN architecture, positioned after the input layer and comprising a set of convolution kernels (neurons) (Boulent et al., 2019). Each kernel corresponds to a small region, termed a receptive field, within the input image. The performance and training speed of CNNs are directly influenced by hyperparameters such as kernel size, number of kernels, stride length and pooling size. The hyperparameters of the CNN model are presented in Table 1.

Table 1: Hyper-parameters of the CNN model.



The input image is divided into smaller segments, or receptive fields, and convolved with a predetermined collection of weights by the layer. The convolution layer of the CNN operates in the following manner (Alajrami et al., 2019):

y[i, j] = f( Σ_m Σ_n w[m, n] · x[i + m, j + n] + b )

where x is the input receptive field, w the kernel weights, b the bias term and f the activation function.
Following the convolution operation, a max pooling layer is added to the CNN architecture. This layer performs down-sampling (Loussaief and Abdelkrim, 2018), aiming to reduce the dimensionality of the information gathered from the convolution layer while retaining essential features. Taking the convolution output x as its input, the pooling operation can be written as (Alajrami et al., 2019):

y[i, j] = max_{(m, n) ∈ R[i, j]} x[m, n]

where R[i, j] denotes the pooling window associated with output position (i, j).
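As an illustration of the two operations just described, here is a minimal NumPy sketch of a single 2-D convolution (valid padding, stride 1) followed by ReLU and 2×2 max pooling; the toy input and averaging kernel are illustrative, not the model's learned parameters:

```python
import numpy as np

def conv2d(x, w, b=0.0):
    """Valid 2-D convolution (stride 1): slide kernel w over input x."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    oh, ow = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
w = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel
feat = np.maximum(conv2d(x, w), 0)             # convolution + ReLU activation
pooled = max_pool(feat)                        # 2x2 max pooling
print(feat.shape, pooled.shape)                # (4, 4) (2, 2)
```

A framework implementation would vectorize these loops and apply many kernels at once; the sketch only makes the sliding-window arithmetic concrete.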
Fully connected layers constitute a crucial component of convolutional neural networks (CNNs), demonstrating significant effectiveness in image classification (Alyami et al., 2020). These layers take a finite number of neurons as input and classify them into relevant classes (Ali et al., 2019). Below are mathematical representations of the fully connected layer.

Consider the output of a convolutional or pooling layer with the dimensions

n_H × n_W × n_C

This output of the convolution or pooling operation serves as the input to the next stage. To feed it into a fully connected layer, the tensor is flattened into a 1D vector with the dimension

(n_H · n_W · n_C) × 1
The feature map obtained from the last max pooling layer is flattened into a one-dimensional array, which is then passed to the fully connected layers with ReLU and softmax activations. The mathematical expressions for these activations are (Alajrami et al., 2019):

ReLU(x) = max(0, x)

softmax(z_i) = exp(z_i) / Σ_j exp(z_j)

where z is the output vector of the final fully connected layer and softmax(z_i) gives the predicted probability of class i.
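The flattening and activation steps above can be sketched in NumPy. The feature-map and weight shapes here are illustrative placeholders (a 4×4×8 map and a 3-class output), not the dimensions of the trained model:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x) element-wise."""
    return np.maximum(0.0, x)

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((4, 4, 8))   # n_H x n_W x n_C from the last pooling layer
flat = feature_map.reshape(-1)                 # flatten to a (4*4*8,) = (128,) vector
W = rng.standard_normal((3, 128)) * 0.1        # dense weights: 3 classes x 128 inputs
b = np.zeros(3)
logits = W @ relu(flat) + b                    # fully connected layer with ReLU input
probs = softmax(logits)                        # class probabilities, sum to 1
print(flat.shape, probs.shape)                 # (128,) (3,)
```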
Following multiple iterations of forward and backward propagation, the model is trained and capable of making predictions. Fig 3 represents the complete architecture of the proposed CNN model.

Fig 3: Architecture of the CNN model with convolution and pooling layer specifications.



After training, the model’s performance is evaluated using test data to assess its ability to generalize to new data. Metrics such as overall accuracy, recall, precision and F1 score are computed from the classification report and confusion matrix to gauge the model’s effectiveness.
 
Evaluation parameters
 
A widely used tool for evaluating classification problems, encompassing both binary and multiclass scenarios, is the confusion matrix. This matrix is built from four quantities: True Positive (TP), False Positive (FP), False Negative (FN) and True Negative (TN) (Genemo, 2023). Using these quantities (TP, FP, TN and FN), we calculated the accuracy rate, precision, recall, F1-score and specificity to evaluate the model's performance. The formulas for these metrics are provided below:
Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)

Specificity = TN / (TN + FP)
Accuracy quantifies a model's overall performance and its ability to correctly predict labels. Precision assesses prediction accuracy by comparing the number of correctly predicted images in a soybean leaf disease category with the total number of images predicted for that category. Recall evaluates the proportion of correctly predicted images in a soybean disease category relative to all actual observations in that category. Finally, the F1-score combines recall and precision into a single metric for performance evaluation.
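These metrics follow directly from the confusion-matrix counts. A small helper (illustrative, with made-up counts rather than the study's actual values):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with hypothetical counts for one disease class
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.925 0.9 0.947 0.923
```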
In this work, we report the results obtained from the training and validation processes using the entire dataset. The analysis primarily examines the results obtained from training with the augmented dataset, since convolutional networks acquire features more effectively when exposed to larger datasets. Fig 4 (a) and 4 (b) compare accuracy and loss across the training and validation phases. The training phase reached an accuracy rate of 97.64% after 150 epochs, suggesting that additional epochs could further improve accuracy under this methodology, although training time grows correspondingly with the number of epochs.

Fig 4: Training and validation (a) accuracy and (b) loss after 150 epochs.



In addition, each class was subjected to an individual assessment to determine the efficacy and confidence rating of the trained model. Fig 5 exhibits a collection of images illustrating both actual and predicted diseases. The results show a high level of confidence and accuracy in the predictions.

Fig 5: The disease type and the level of confidence in recognizing the image within the dataset.


       
The confusion matrix represents the relationship between predicted classes and actual classes. The columns represent the predicted classes, while the rows represent the actual classes, providing a visual depiction of the classification results. Correct predictions (the true positives for each class) lie on the diagonal of the matrix, whereas misclassified instances occupy the off-diagonal cells. The confusion matrix in Fig 6 illustrates the classification of soybean leaf diseases into the categories of Caterpillar, Diabrotica Speciosa and Healthy.

Fig 6: Confusion matrix of soybean leaf disease.
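Building such a matrix from actual and predicted labels is straightforward. A minimal sketch for three classes (the eight-sample toy labels are invented for illustration):

```python
import numpy as np

def confusion_matrix(actual, predicted, n_classes=3):
    """Rows are actual classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        cm[a, p] += 1
    return cm

# Toy example: 8 samples, classes encoded 0/1/2
actual    = [0, 0, 1, 1, 1, 2, 2, 2]
predicted = [0, 1, 1, 1, 0, 2, 2, 2]
cm = confusion_matrix(actual, predicted)
print(cm)
# Diagonal entries are correct predictions; overall accuracy is trace / total
print(np.trace(cm) / cm.sum())   # 0.75
```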


       
The classification report covers the three classes, namely Caterpillar, Diabrotica Speciosa and Healthy, and is used to assess the machine learning model's performance. The model demonstrates good predictive power, with high precision scores ranging from 0.87 to 0.98, reflecting the model's accuracy in making positive predictions for each class. In addition, the model consistently achieves high recall values, ranging from 0.92 to 0.96, demonstrating its ability to accurately identify actual positive instances. The F1-scores, which balance precision and recall, are particularly noteworthy, varying from 0.91 to 0.95 across classes, as seen in Table 2.

Table 2: Classification accuracies, precision, F1-score and support.


       
The overall accuracy of the model is 94.04%, which indicates significant precision in identifying instances. Furthermore, the model's efficacy is reinforced by the macro and weighted average metrics, which exhibit balanced performance across classes while accounting for possible class imbalances.

We have compared several studies that examined deep learning models for categorising diseases in soybean leaves (Fig 7). Wu et al. (2023) introduced the Enhanced ConvNeXt model, which had an accuracy rate of 85.42%. Conversely, Wu et al. (2019) enhanced ResNet's performance and attained a higher accuracy rate of 94.29%. Dixit et al. (2023) evaluated the performance of several models in their investigation, including CNN (83.9%), ResNet-V2 (93.01%) and KNN (71.98%). Bag et al. (2023) developed a Deep Belief Network that demonstrated an extraordinary capacity for identifying diseases, achieving a detection rate of 99%. Karlekar and Seal (2020) achieved an accuracy rate of 98.14% by deploying a Convolutional Neural Network (CNN) called SoyNet, highlighting the network's ability to diagnose diseases accurately. In comparison, this work demonstrated good performance relative to the other models studied, achieving a 94% accuracy rate with a Sequential CNN approach. Hence, the Sequential CNN proves to be a strong tool for real-world use, allowing fast and precise identification of soybean leaf diseases, which is important for good farm management.

Fig 7: Comparison of presented work with existing literature.

This study emphasized the effectiveness of Convolutional Neural Networks (CNNs) in precisely identifying soybean plant diseases and provides significant knowledge for the management of agricultural diseases and the protection of crops. The CNN model achieved remarkable accuracy in classifying soybean leaf images into caterpillar-infested, Diabrotica Speciosa-affected and healthy foliage classes by carefully preprocessing and augmenting a varied dataset obtained from the Mendeley database. The CNN architecture successfully extracted features, reduced dimensionality and enabled accurate classification, attaining an accuracy rate of 94.04% after 150 epochs. The evaluation criteria, such as precision, recall and F1-score, confirmed that the model performs well in all categories, demonstrating its effectiveness in detecting diseases. These findings have important implications for those involved in agriculture, allowing them to take preventive measures to reduce crop losses and ensure food security. Future studies may investigate the scalability and generalizability of CNN models across many crop species and disease types, thereby promoting the development of comprehensive disease detection systems in agriculture.
 
Limitations and future works
 
Despite the encouraging outcomes, this study is subject to various constraints. The performance of the CNN model may be affected by the quality and variety of the dataset, suggesting that a broader and more diverse dataset is required to improve the model's ability to generalize. Furthermore, the study exclusively examined soybean leaf diseases, restricting its relevance to other crop species and types of diseases. In addition, the computational resources needed to train CNN models may be a limiting factor, especially for researchers with limited access to high-performance computing infrastructure.
       
To overcome these constraints and propel the area of agricultural disease detection, potential avenues for future study could involve: (1) Gathering a broader and more diversified dataset that encompasses different crop species and types of diseases, in order to strengthen the resilience and applicability of the models. (2) Investigating sophisticated convolutional neural network (CNN) structures and methodologies, such as transfer learning and ensemble approaches, to enhance the effectiveness and efficiency of the model. (3) Exploring the fusion of remote sensing data and IoT technology to enable immediate monitoring and early identification of crop diseases. (4) Performing field trials and validation studies to evaluate the practical feasibility and scalability of convolutional neural network (CNN)-based disease detection systems in agriculture. By focusing on these specific areas, future research efforts can make valuable contributions to the advancement of more efficient and scalable strategies for managing agricultural diseases and protecting crops.
The authors would like to thank the editors and reviewers for their review and recommendations and also to extend their thanks to King Saud University for funding this work through the Researchers Supporting Project (RSP2025R314), King Saud University, Riyadh, Saudi Arabia.
 
Funding statement
 
This work was supported by the Researchers Supporting Project (RSP2025R314), King Saud University, Riyadh, Saudi Arabia.
 
Author contributions
 
The authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
 
Data availability statement
 
Not applicable
 
Declarations
 
Author(s) declare that all works are original and this manuscript has not been published in any other journal.
The authors declare that they have no conflict of interest.
 

  1. Ali, L., Valappil, N.K., Kareem, D.N.A., John, M.J. and Al Jassmi, H. (2019, November). Pavement crack detection and localization using convolutional neural networks (CNNs). In 2019 International Conference on Digitization (ICD) (pp. 217-221). IEEE. https://doi.org/10.1109/ICD47981.2019.9105786.

  2. Alyami, H., Alharbi, A. and Uddin, I. (2020). Lifelong machine learning for regional-based image classification in open datasets. Symmetry. 12(12): 1-17. https://doi.org/10.3390/sym12122094.

  3. Alzubaidi, L., Zhang, J., Humaidi, A.J., Al-Dujaili, A., Duan, Y., Al-Shamma, O. and Farhan, L. (2021). Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data. 8: 1-74. https://doi.org/10.1186/s40537-021-00444-8.

  4. AlZubi, A.A. (2023). Artificial Intelligence and its application in the prediction and diagnosis of animal diseases: A Review. Indian Journal of Animal Research. 57(10): 1265-1271. 

  5. Alajrami, E., Ashqar, B.A.M., Abu-Nasser, B.S., Khalil, A.J., Musleh, M.M., Barhoom, A.M. and Abu-Naser, S.S. (2019). Handwritten signature verification using deep learning. International Journal of Academic Multidisciplinary Research (IJAMR). 2(1): 39-44. https://www.ijeais.org/ijamr.

  6. Atabay, H.A. (2017). A convolutional neural network with a new architecture applied on leaf classification. IIOAB Journal. 7: 326-331.

  7. Bag, V.V., Patil, M.B., Shelke, S., Birajdar, N., Sonkawade, A. and Rathod, R. (2023). Crop leaf disease detection in soybean crop using deep learning technique. In Springer eBooks (pp. 39-47). https://doi.org/10.1007/978-3-031-34644-6_5.

  8. Bagga, T., Ansari, A.H., Akhter, S., Mittal, A. and Mittal, A. (2024). Understanding Indian consumers' propensity to purchase electric vehicles: An analysis of determining factors in environmentally sustainable transportation. International Journal of Environmental Sciences. 10(1): 1-13. https://www.theaspd.com/resources/1.%20Electric%20Vehicles%20and%20Enviorment.pdf.

  9. Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. (2008). Speeded-up robust features (SURF). Computer Vision and Image Understanding. 110(3): 346-359. https://doi.org/10.1016/j.cviu.2007.09.014.

  10. Boulent, J., Foucher, S., Théau, J. and St-Charles, P.L. (2019). Convolutional neural networks for the automatic identification of plant diseases. Frontiers in Plant Science. 10: 941. https://doi.org/10.3389/fpls.2019.00941.

  11. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. https://doi.org/10.18805/LRF-787.

  12. da Silva, E.A.B. and Mendonca, G.V. (2004). Digital Image Processing. The Electrical Engineering Handbook. 891-910. https://doi.org/10.1016/B978-012170960-0/50064-5.

  13. Dixit, S., Kumar, A., Haripriya, A., Bohre, K. and Srinivasan, K. (2023). Classification and recognition of soybean leaf diseases in Madhya Pradesh and Chhattisgarh using deep learning methods. 2nd International Conference on Paradigm Shifts in Communications Embedded Systems, Machine Learning and Signal Processing (PCEMS), Nagpur, India. pp. 1-6. https://doi.org/10.1109/PCEMS58491.2023.10136030.

  14. Food and Agriculture Organization of the United Nations. (2020). FAO statistical database on global soybean production. FAO. https://www.fao.org.

  15. Genemo, M. (2023). Kidney stone detection and classification based on deep learning approach. International Journal of Advanced Natural Sciences and Engineering Researches. 7(4): 38-42. https://doi.org/10.59287/ijanser.545.

  16. Gonzalez, T.F. (2007). Handbook of Approximation Algorithms and Metaheuristics. 1-1432. https://doi.org/10.1201/9781420010749.

  17. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  18. Karlekar, A. and Seal, A. (2020). SoyNet: Soybean leaf diseases classification. Computers and Electronics in Agriculture. 172: 105342. https://doi.org/10.1016/j.compag.2020.105342.

  19. Liu, J. and Wang, X. (2021). Plant diseases and pests detection based on deep learning: A review. Plant Methods. 17(1): 1-18. https://doi.org/10.1186/s13007-021-00722-9.

  20. Loussaief, S. and Abdelkrim, A. (2018). Convolutional neural network hyper-parameters optimization based on genetic algorithms. International Journal of Advanced Computer Science and Applications. 9(10): 252-266. https://doi.org/10.14569/IJACSA.2018.091031.

  21. Ma, J., Jiang, X., Fan, A., Jiang, J. and Yan, J. (2021). Image matching from handcrafted to deep features: A survey. International Journal of Computer Vision. 129(1): 23-79. https://doi.org/10.1007/s11263-020-01359-2.

  22. Mall, P.K., Singh, P.K., Srivastav, S., Narayan, V., Paprzycki, M., Jaworska, T. and Ganzha, M. (2023). A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities. Healthcare Analytics. 4: 100216. https://doi.org/10.1016/j.health.2023.100216. 

  23. Mignoni, M.E., Honorato, A., Kunst, R., Righi, R. and Massuquetti, A. (2022). Soybean images dataset for caterpillar and Diabrotica speciosa pest detection and classification. Data in Brief. 40: 107756. https://doi.org/10.1016/j.dib.2021.107756.

  24. Mohanty, S.P., Hughes, D.P. and Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science. 7: 1-10. https://doi.org/10.3389/fpls.2016.01419.

  25. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujarat, India. International Journal of Environmental Sciences. 9(1): 1-17. https://www.theaspd.com/resources/v9-1-1-Nilesh%20N.%20Maltare.pdf.

  26. Patil, A. and Rane, M. (2021). Convolutional neural networks: An overview and its applications in pattern recognition. Smart Innovation, Systems and Technologies. 195: 21-30. https://doi.org/10.1007/978-981-15-7078-0_3.

  27. Porwal, S., Majid, M., Desai, S.C., Vaishnav, J. and Alam, S. (2024). Recent advances, challenges in applying artificial intelligence and deep learning in the manufacturing industry. Pacific Business Review (International). 16(7): 143-152.

  28. Rumpf, T., Mahlein, A.K., Steiner, U., Oerke, E.C., Dehne, H.W. and Plümer, L. (2010). Early detection and classification of plant diseases with support vector machines based on hyperspectral reflectance. Computers and Electronics in Agriculture. 74(1): 91-99. https://doi.org/10.1016/j.compag.2010.06.009.

  29. Saleem, M.A., Senan, N., Wahid, F., Aamir, M., Samad, A. and Khan, M. (2022). Comparative analysis of recent architecture of convolutional neural network. Mathematical Problems in Engineering. 2022: 7313612. https://doi.org/10.1155/2022/7313612.

  30. Salvi, M., Acharya, U.R., Molinari, F. and Meiburger, K.M. (2021). The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Computers in Biology and Medicine. 128: 104129. https://doi.org/10.1016/j.compbiomed.2020.104129.

  31. Senan, N., Aamir, M., Ibrahim, R., Taujuddin, N.S.A.M. and Muda, W.H.N.W. (2020). An efficient convolutional neural network for paddy leaf disease and pest classification. International Journal of Advanced Computer Science and Applications. 11(7): 116-122. https://doi.org/10.14569/IJACSA.2020.0110716.

  32. Semara, I.M.T., Sunarta, I.N., Antara, M., Arida, I.N.S. and Wirawan, P.E. (2024). Tourism sites and environmental reservation. International Journal of Environmental Sciences. 10(1): 44-55. https://www.theaspd.com/resources/4.%20Tourism%20Sites%20and%20Environmental%20Reservation%20objects.pdf.

  33. Shamsaldin, A., Fattah, P., Rashid, T. and Al-Salihi, N. (2019). A study of the convolutional neural networks applications. UKH Journal of Science and Engineering. 3(2): 31-40. https://doi.org/10.25079/ukhjse.v3n2y2019.pp31-40.

  34. Singh, V., Sharma, N. and Singh, S. (2020). A review of imaging techniques for plant disease detection. Artificial Intelligence in Agriculture. 4: 229-242. https://doi.org/10.1016/j.aiia.2020.10.002.

  35. Taye, M.M. (2023). Understanding of machine learning with deep learning: Architectures, workflow, applications and future directions. Computers. 12: 91. https://doi.org/10.3390/computers12050091.

  36. Tugrul, B., Elfatimi, E. and Eryigit, R. (2022). Convolutional neural networks in detection of plant leaf diseases: A review. Agriculture (Switzerland). 12(8): 1192. https://doi.org/10.3390/agriculture12081192.

  37. van Dokkum, P.G., Franx, M., Fabricant, D., Illingworth, G.D. and Kelson, D.D. (2000). Hubble Space Telescope photometry and Keck spectroscopy of the rich cluster MS 1054-03: Morphologies, Butcher-Oemler effect and the color-magnitude relation at z = 0.83. The Astrophysical Journal. 541(1): 95-111. https://doi.org/10.1086/309402.

  38. Wallelign, S., Polceanu, M. and Buche, C. (2018). Soybean plant disease identification using convolutional neural network. Proceedings of the 31st International Florida Artificial Intelligence Research Society Conference (FLAIRS 2018). pp. 146-151.

  39. Wasik, S. and Pattinson, R. (2024). Artificial intelligence applications in fish classification and taxonomy: Advancing our understanding of aquatic biodiversity. FishTaxa. 31: 11-21.

  40. Wu, Q., Ma, X., Liu, H., Bi, C., Yu, H., Liang, M., Zhang, J., Li, Q., Tang, Y. and Ye, G. (2023). A classification method for soybean leaf diseases based on an improved ConvNeXt model. Scientific Reports. 13(1). https://doi.org/10.1038/s41598-023-46492-3.

  41. Wu, Q., Zhang, K. and Meng, J. (2019). Identification of soybean leaf diseases via deep learning. Journal of Institution of Engineers Series A. 100(4): 659-666. https://doi.org/10.1007/s40030-019-00390-y.

  42. Zhang, D., Guo, Q., Wu, G. and Shen, D. (2012). Sparse patch-based label fusion for multi-atlas segmentation. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 7509 LNCS. 94-102. https://doi.org/10.1007/978-3-642-33530-3_8.

  43. Zhang, Y., Gao, J. and Zhou, H. (2020). Breeds classification with deep convolutional neural network. ACM International Conference Proceeding Series. 24: 145-151. https://doi.org/10.1145/3383972.3383975.
