Indian Journal of Agricultural Research

  • Chief Editor: V. Geethalakshmi

  • Print ISSN 0367-8245

  • Online ISSN 0976-058X

  • NAAS Rating 5.60

  • SJR 0.293

Frequency :
Bi-monthly (February, April, June, August, October and December)
Indexing Services :
BIOSIS Preview, ISI Citation Index, Biological Abstracts, Elsevier (Scopus and Embase), AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus

A Comparative Study and Optimization of Deep Learning Models for Grape Leaf Disease Identification

Rasika Gajendra Patil1,2,*, Ajit More 1
1Bharati Vidyapeeth Deemed to be University, Pune-411 030, Maharashtra, India.
2Bharati Vidyapeeth’s Institute of Management and Technology, Mumbai University, Navi-Mumbai-400 614, Maharashtra, India.

Background: In grape farming, the enduring difficulty of preventing bacterial and viral infections poses significant risks to economic viability. Modern advances in artificial intelligence (AI), machine learning (ML), deep learning (DL) and computer vision have produced efficient methods for identifying and classifying grape leaf infections.

Methods: The development and improvement of deep learning algorithms designed specifically for identifying and categorizing grape leaf infections is the focus of this study. Five important pre-trained deep learning models were used: DenseNet121, VGG19, VGG16, InceptionV3 and ResNet50V2.

Result: Comparing the training accuracy, validation accuracy, training loss and validation loss of these five deep learning models, the DenseNet121 model showed the best performance, achieving a recall and accuracy score of 99.86%. These findings demonstrate the strong potential of our method for practical use in grape leaf production, offering a cheaper and more feasible approach to preventing disease and minimizing monetary loss.

Diseases of the grape leaf pose a major problem for the grape-growing business, causing large financial losses for grape producers and farmers (Metagar and Walikar, 2023). Historically, pesticides were used intensively to control these diseases, which has been linked to environmental degradation and the emergence of pesticide-resistant strains, as well as health hazards to humans. Early disease identification is critical to preventing agricultural productivity losses (Lovell-Read et al., 2023).
 
In the age of smart agriculture, there is a rising need for novel technologies that can help with early disease diagnosis, precise classification and effective disease control (Ni et al., 2023). Emerging developments in AI and image processing are transforming farming, especially grape management (Munjal et al., 2023). The demand for adaptive disease detection and classification is increasing, and AI approaches, particularly deep learning, provide greater accuracy in recognizing grape diseases (Na et al., 2024; Jia et al., 2022). To address these issues, our study focuses on a data-driven deep learning technique for grape disease diagnosis and categorization. In contrast to past studies, our major focus is on developing DL algorithms that are specifically adapted for disease detection (Saha et al., 2022). We use transfer learning strategies to improve the efficiency of DenseNet121, VGG19, VGG16, InceptionV3 and ResNet50V2, and comparing the results of our technique reveals significant gains in consistency. Transfer learning is a very useful technique for small datasets: it requires less computational power and allows pretrained, open-source models to be leveraged. In this research, we focus on the improvement and augmentation of transfer learning algorithms created specifically to identify and categorize grape leaf diseases.
       
We adjusted model parameters (such as input_shape, weights, include_top and layer.trainable), fine-tuned neural network structures and used transfer learning techniques. These enhancements were applied methodically to overcome the inherent challenges of grape leaf disease diagnosis.
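As a concrete illustration of these adjustments, the sketch below shows how such a pretrained backbone could be configured in TensorFlow/Keras. The specific input size, optimizer and single-unit sigmoid head are illustrative assumptions, not the authors' exact settings.

```python
# Illustrative transfer-learning configuration (a sketch, not the authors'
# exact script): load a pretrained backbone, freeze its layers and attach
# a new classification head for the two grape leaf classes.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    input_shape=(224, 224, 3),   # assumed input size
    weights="imagenet",          # pretrained weights
    include_top=False)           # drop the 1000-class ImageNet head

for layer in base.layers:
    layer.trainable = False      # freeze the backbone for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid")  # healthy vs. diseased
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall(),
                       tf.keras.metrics.AUC()])
```

Swapping `DenseNet121` for `VGG16`, `VGG19`, `InceptionV3` or `ResNet50V2` follows the same pattern.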

Related work
 
There has been a significant increase in studies focused on plant disease categorization and grape leaf disease detection, with a growing emphasis on sophisticated DL and ML approaches. Begum and Hazarika (2022) and Akshai and Anitha (2021) proved the usefulness of DL for categorizing plant diseases, considerably advancing the field. Xie et al., (2020) made significant advances in grape growing by developing real-time detection of grape leaf diseases using an improved CNN. The analysis of predictive machine learning algorithms by Huang et al., (2020) extended the area of grape leaf disease identification, increasing the variety of potential methods in this field. Deshpande and Kore (2023) offered an elaborate evaluation, delivering a modern viewpoint on the methods and approaches used in grape disease diagnosis, making it a useful resource for professionals and scholars. Peng et al., (2021) showed the potential of fused deep features for improving classification precision. Badeka et al., (2023) explored precision viticulture, demonstrating the potential of deep learning in grape maturity evaluation, notably using YOLOv7. Muneer and Fati (2020) provided a novel technique for automatic plant categorization using deep learning. Afzal et al., (2023) highlighted the larger context of agricultural development through a consideration of its challenges. Khan et al., (2023) developed a unique automatic-segmentation- and hyperparameter-optimization-based system for leaf disease categorization, demonstrating cutting-edge approaches in the area.
This study aimed to utilise transfer learning techniques to improve grape leaf disease classification accuracy more efficiently. Fig 1 shows images of healthy and disease-affected grape leaves. Transfer learning models, which perform well in image recognition with limited datasets, were chosen for the study (Cho, 2024). These techniques are memory efficient and take less computational time. The architectures of the transfer learning models, namely VGG16, VGG19, DenseNet121, InceptionV3 and ResNet50V2, are described as follows:
 

Fig 1: Image data examples: healthy and diseased leaves.


 
VGG16 model
 
The VGG16 model is known for its architectural simplicity and efficacy. Fig 2 shows the architecture of the VGG16 model. Convolutional blocks are the main component of VGG16, each followed by max-pooling layers. These blocks stack small (3×3) filters over several levels to generate detailed representations of the input images.
 

Fig 2: VGG16 Architecture.


       
Mathematically, the fully connected layers operate as follows:
 
Z=f(WX+b)          ..........(1)
 
Where,
X= Feature vector from the preceding layers.
W= Weight matrix.
b= Bias vector.
Z= Resultant classification values.
       
The values are passed through the non-linear activation function f, allowing the network to make a classification decision.
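Equation (1) can be checked numerically. The following minimal NumPy sketch, with hypothetical toy values, applies a ReLU-activated fully connected layer:

```python
import numpy as np

def dense_layer(X, W, b, f):
    """Fully connected layer: Z = f(W X + b)."""
    return f(W @ X + b)

relu = lambda v: np.maximum(v, 0.0)   # non-linear activation f

X = np.array([1.0, -2.0, 0.5])        # feature vector from preceding layers
W = np.array([[0.2, 0.1, -0.4],
              [-0.3, 0.5, 0.8]])      # weight matrix (2 output units)
b = np.array([0.3, -0.2])             # bias vector
Z = dense_layer(X, W, b, relu)
print(Z)                              # classification values, one per class
```

Negative pre-activations are zeroed by the ReLU, which is what lets the network make a non-linear classification decision.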
 
VGG19 model
 
The framework has 19 layers, comprising 16 convolutional feature-extraction stages and three fully connected layers that classify images into different disease types (Assad et al., 2023). 
       
The mathematical illustration of the steps within the convolutional stages is as follows:
 
Y= W* X+b          ..........(2)
 
Y= Resultant feature map.
W= Filter weight.
X= Input image.
b= Bias term.
The fully connected classification stage can be illustrated as:
 
Z=f(WX+b)          ..........(3)
 
Where,
X= Feature vector from the preceding layers.
Z= Resultant category values.

To classify the results, the function f applies a non-linear activation. Because of its performance in image recognition tasks, the VGG19 algorithm is a preferred option for grape disease prediction, as shown in Fig 3 (Rudenko et al., 2023).
 

Fig 3: VGG19 Architecture with 19 layers, 16 feature retrieval convolutional phases and 3 fully interconnected layers.


       
DenseNet121 model
 
The DenseNet121 design, known for its densely connected layer sequence, is used in our approach. For grape disease prediction with DenseNet121, we have a dataset of 2,207 pictures, each labelled with the health status of the grape leaf.
       
Mathematically, the computations within the dense blocks can be represented as:
 
Xi+1= H [Conv3 (ReLU (BN (Conv1(Xi))))]          ..........(4)
 
Here,
Xi= Feature maps at the ith layer.
BN= Batch normalization.
Conv1= 1×1 bottleneck convolution operation.
Conv3= 3×3 convolution operation.
       
ReLU signifies the rectified linear unit activation function.
 
H denotes the concatenation operation.
       
The uploaded picture is categorized in the final fully connected layer; in the original ImageNet configuration this layer has a thousand output groups, which is adapted here to the grape leaf classes. For picture categorization applications, especially when there is a lack of training data, DenseNet121 is a reliable and effective framework, as shown in Fig 4.
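The effect of the concatenation operation H can be illustrated by tracking channel counts through a dense block. The figures below use DenseNet121's first block (6 layers, growth rate 32), which is the standard configuration of that architecture rather than anything specific to this study:

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Each layer outputs `growth_rate` new feature maps, and its input is
    the concatenation (H) of all previous outputs, so channels accumulate."""
    channels = [in_channels]
    for _ in range(num_layers):
        channels.append(channels[-1] + growth_rate)
    return channels

# DenseNet121's first dense block: 64 input channels, growth rate 32, 6 layers
print(dense_block_channels(64, 32, 6))
```

The steadily growing channel count is what makes every layer see the features of all earlier layers, and is why DenseNet reuses features so efficiently with limited training data.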
 

Fig 4: DenseNet121 Model.


 
InceptionV3 model
 
We use the InceptionV3 architecture, as shown in Fig 5, for grape disease identification. The network is built of modules, each including several convolutional layers with various kernel sizes. Rather than depending on a single large convolution, InceptionV3 factorizes it into smaller convolutions; for instance, a single 5×5 convolution is replaced by two 3×3 convolutions.
 

Fig 5: InceptionV3 architecture.


       
The softmax layer can be described mathematically as follows:
 
P(class = i) = e^(zi) / Σj e^(zj)          ..........(5)
 
Where,
P(class=i)= Likelihood that the supplied image belongs to class i.
zi= Raw score for class i; the denominator sums over all classes j.
       
The versatility and efficacy of the InceptionV3 architecture (Agarwal et al., 2019) make it a promising choice for accurate grape disease prediction.
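The softmax computation in Equation (5) can be verified with a short NumPy sketch; the two raw scores below are hypothetical:

```python
import numpy as np

def softmax(z):
    """P(class = i) = exp(z_i) / sum_j exp(z_j), with the usual
    max-subtraction trick for numerical stability."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([2.0, 0.5])   # hypothetical raw scores: healthy vs. diseased
p = softmax(scores)
print(p, p.sum())               # class probabilities, summing to 1
```

Whatever the raw scores, the outputs are positive and sum to one, which is what lets them be read as class likelihoods.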
 
ResNet50V2 model
 
Fig 6 depicts the ResNet50V2 design, which is divided into four phases, each containing several residual blocks. The first phase uses a single 7×7 convolutional kernel followed by a 3×3 max-pooling layer; the subsequent phases consist of residual blocks built from 3×3 convolution kernels.
 

Fig 6: ResNet50V2 CNN Architecture.


       
The ResNet50V2 residual block is described as follows:
       
X(i+1)= F(Xi , Wi)+ Xi          ..........(6)
 
Where,
Xi and X(i+1)= Input and output feature maps, respectively.
Wi= Weights of the convolutional layers within the block.
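Equation (6) amounts to adding the block's input back to its transformed output. The sketch below uses a toy linear map in place of the convolutional layers parameterized by Wi:

```python
import numpy as np

def residual_block(X, F):
    """Identity-shortcut residual computation: X_{i+1} = F(X_i) + X_i."""
    return F(X) + X

# toy residual function standing in for the block's convolutional layers
F = lambda X: 0.1 * X

X = np.array([1.0, 2.0, 3.0])
out = residual_block(X, F)
print(out)
```

Because the input is carried through unchanged, the layers only need to learn the residual F, which is what makes very deep networks like ResNet50V2 trainable.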
 
Data pre-processing
 
In the present analysis, a dataset of 2,207 images divided into two categories, "healthy" and "disease" (shown in Fig 7), was used. The workflow started by importing the necessary software tools and modules, including established packages such as TensorFlow, to simplify the procedure.
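A minimal sketch of this kind of pre-processing is shown below, assuming pixel rescaling to [0, 1] and an 80/20 train/validation split; the split ratio is an assumption, and random arrays stand in for the 2,207 leaf images:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the leaf images: random 64x64 RGB arrays with binary labels
images = rng.integers(0, 256, size=(20, 64, 64, 3)).astype("float32")
labels = rng.integers(0, 2, size=20)      # 0 = healthy, 1 = disease

images /= 255.0                           # rescale pixel values to [0, 1]

split = int(0.8 * len(images))            # assumed 80/20 train/validation split
x_train, x_val = images[:split], images[split:]
y_train, y_val = labels[:split], labels[split:]
print(x_train.shape, x_val.shape)
```

In practice the same rescaling and splitting can be delegated to TensorFlow's data utilities, but the arithmetic is as above.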
 

Fig 7: Grape Leaves: Healthy and Diseased.

Grape leaf analysis using VGG16 model
 
All the models in our analysis use the images illustrated in Fig 7 as reference input. Fig 7 shows two grape leaves: a healthy leaf and an infected leaf with prominent dark lesions. Table 1 provides the model's key metrics during the training and validation stages. The framework's ability to correctly classify cases is shown by its 99.91% training accuracy.
       
Fig 8 includes two graphs, one illustrating training accuracy and the other validation accuracy. The model's training accuracy shows its ability to correctly categorise data points within the training dataset, whereas validation accuracy measures its ability to predict on new, previously unseen data in the validation set.
 

Fig 8: Training and Validation Accuracy Progression of VGG16 model.


 
Grape leaf analysis using resNet50V2 model
 
Evaluation parameters of the resNet50V2 model while training and testing
 
In the training phase, which lasted 1,177 seconds over 138 iterations, the model obtained a loss of 0.0038, indicating minor prediction errors. It achieved a training accuracy of 99.86%, showing its ability to predict effectively on the training dataset. The model has a 99.86% recall and precision, indicating its ability to detect significant events while minimising false positives. The overall model performance was outstanding, with an AUC of 1.0000. The model accurately detected 2204 true positives and 2204 true negatives during training, with only 3 false negatives and 3 false positives. The model maintained its accuracy on the validation dataset, with a low loss of 8.4689e-04 and a validation accuracy of 99.95%. The model's validation recall of 99.95% and matching validation precision proved its accuracy on new, previously unseen data. The validation area under the curve (AUC) remained at 1.0000, confirming the model's efficiency. These outcomes highlight the model's strong predictive capacity during both the training and validation stages, as seen in Table 1.
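The reported precision, recall and accuracy can be recomputed directly from the confusion counts given above:

```python
# Recomputing the reported ResNet50V2 training metrics from its
# confusion counts: 2204 TP, 2204 TN, 3 FP, 3 FN.
TP, TN, FP, FN = 2204, 2204, 3, 3

precision = TP / (TP + FP)
recall    = TP / (TP + FN)
accuracy  = (TP + TN) / (TP + TN + FP + FN)

print(round(precision * 100, 2),
      round(recall * 100, 2),
      round(accuracy * 100, 2))
```

With symmetric error counts (3 FP, 3 FN), precision, recall and accuracy all round to the same 99.86% reported in Table 1.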
 

Table 1: Performance analysis of five deep learning models.



Performance of resNet50V2 model
 
The graph of training and validation accuracy for the ResNet50V2 model in the image classification task, shown in Fig 9, provides useful information about its performance. The training accuracy starts at around 75% and increases steadily, levelling off around 99.5% after approximately five epochs. The validation accuracy starts at approximately 85% and eventually rises to around 99% over the same period. The graph indicates the ResNet50V2 model's outstanding performance in image classification.
 

Fig 9: Training and validation accuracy and loss of the ResNet50V2 model.


 
Grape leaf analysis using denseNet121 model
 
Performance metrics of the denseNet121 model during training and validation
 
The model produced a loss value of 0.0032 and a precision of 99.86%. As detailed in Table 1, the model achieved a recall and accuracy score of 99.86%, demonstrating its ability to correctly identify significant data items while minimizing false positives. The model detected 2204 true positives while generating 3 false negatives and 3 false positives. In the validation phase, the model continued to perform excellently, maintaining an accuracy of 100% and an AUC of 1.0; it correctly identified 2207 true positives while producing no false negatives or false positives.
 
Training and validation loss analysis in denseNet121 model
 
Fig 10 shows the relationship between training loss and validation loss for the DenseNet121 model. The training loss measures the model's fit to the training data; when the validation loss rises noticeably above the training loss, it indicates overfitting, in which the algorithm becomes overly specialised to the training data.
 

Fig 10: DenseNet121 model training accuracy and loss, and validation accuracy and loss.


 
Grape leaf analysis using inceptionV3 model
 
Training and validation metrics for inceptionV3
 
These results highlight the algorithm's performance using the InceptionV3 design, as shown in Table 1. The training dataset showed a loss of 0.0082 and a precision of 99.95%. The model scored a flawless 1.0000 in recall, accuracy and AUC, correctly recognising 2206 true positives and 2206 true negatives. On the validation dataset, the model was consistent, with a loss of 0.0083, an accuracy of 99.95% and matching scores for recall, precision and AUC. These outcomes illustrate the InceptionV3 model's strong predictive power and its ability to generalise well to previously unseen data.
       
Fig 11 shows two graphs tracking the validation loss of the InceptionV3 model, one plotting its running average and the other its epoch-to-epoch changes. A key observation is that the validation loss decreases consistently over time, indicating effective learning. This makes it a rigorous test of the model's capacity to generalise its knowledge to new, previously unseen data.
 

Fig 11: Monitoring Model Learning: Validation Loss Dynamics in InceptionV3.


 
Grape leaf analysis using VGG19 model
 
Training and validation metrics for VGG19
 
As presented in Table 1, the algorithm performed well with consistent precision, maintaining an outstanding 99.59% accuracy on the training dataset. This was reflected in the validation data, which reached 99.73% accuracy. On the training dataset the model accurately detected 2198 true positives and 2198 true negatives while producing just 9 false negatives and 9 false positives.
 
Evaluation of training and testing performance for VGG19 model
 
Fig 12 shows two graphs representing the training and validation accuracy of the VGG19 model. The VGG19 model shows little sign of overfitting: training accuracy only barely exceeds validation accuracy, with the latter remaining noticeably high, at or above 96%.
 

Fig 12: VGG 19 Model training and validation accuracy and loss.


 
Comparative analysis
 
Deep learning model performance on grape leaf classification
 
Fig 13 compares major metrics, focusing on loss and accuracy, for five image classification models: DenseNet121, InceptionV3, ResNet50V2, VGG16 and VGG19. DenseNet121 and ResNet50V2 recorded the lowest training losses; in contrast, the VGG16 and VGG19 models had larger training losses and lower validation accuracy, indicating less accurate predictions on both training and validation datasets.
 

Fig 13: Visual comparison of loss and accuracy metrics on grape leaf disease classification for five image classification models.


 
Comparison of deep learning models for grape leaf disease classification
 
The accuracy of five distinct DL approaches, VGG19, VGG16, ResNet50V2, InceptionV3 and DenseNet121, was evaluated on the given dataset. As shown in Table 1, VGG19 had a recall of 0.9959. VGG16 performed strongly, with a training loss of 0.0083 and a training precision of 0.9991. ResNet50V2 also performed well, with a training loss of 0.0038 and an accuracy of 0.9986. InceptionV3 achieved a training loss of 0.0082 and a training precision of 0.9995. DenseNet121 performed well, with the lowest training loss of 0.0032 and a precision of 0.9986. All five models demonstrated significant capabilities; ResNet50V2, InceptionV3 and DenseNet121 in particular combined low losses with high precision.
This research used current technology, including deep learning, image processing and AI, to control grape leaf diseases efficiently. The effectiveness of grape leaf disease identification and segmentation was significantly improved by applying five essential pre-trained deep learning models, DenseNet121, VGG19, VGG16, InceptionV3 and ResNet50V2, using transfer learning methodologies. Comparing the training accuracy, validation accuracy, training loss and validation loss of these five deep learning models, the DenseNet121 model shows the best performance, achieving a recall and accuracy score of 99.86%. This work offers a framework for incorporating modern science within farming for the benefit of our community and marks a significant step toward beneficial grape farming techniques.
We confirm that this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere.

  1. Afzal, I., Haq, M.Z.U., Ahmed, S., Hirich, A., Bazile, D. (2023). Challenges and perspectives for integrating quinoa into the agri-food system. Plants. 12: 33-61.

  2. Agarwal, M., Gupta, S.K., Biswas, K.K. (2019). Grape disease identification using convolution neural network. In: 2019 23rd international computer science and engineering conference (ICSEC). pp. 224-229. 

  3. Akshai, K.P. and Anitha, J. (2021). Plant disease classification using deep learning. pp. 407-411. 

  4. Assad, A., Bhat, M., Bhat, Z.A., Ahanger, A.N., Kundroo, M., Dar, R.A., Dar, B.A. (2023). Apple diseases: Detection and classification using transfer learning. Quality Assurance and Safety of Crops and Foods. 15: 27-37.

  5. Badeka, E., Karapatzak, E., Karampatea, A., Bouloumpasi, E., Kalathas, I., Lytridis, C., Tziolas, E., Tsakalidou, V.N., Kaburlasos, V.G. (2023). A deep learning approach for precision viticulture, assessing grape maturity via YOLOv7. Sensors . 23: 8126.

  6. Begum, N. and Hazarika, M.K. (2022). Deep learning based Image processing solutions in food engineering: A Review. Agricultural Reviews. 43(3): 267-277. doi:10.18805/ag.R- 2182.

  7. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting Leaf Diseases in Agriculture. Legume Research. 47(4): 619-627. doi:10.18805/LRF- 787.

  8. Deshpande, P., Kore, S. (2023). Disease Detection for Grapes: A Review. In: Proceedings of International Conference on Computational Intelligence. [Tiwari, R., Pavone, M.F., Saraswat, M. (eds)], ICCI 2022. Algorithms for Intelligent Systems. Springer, Singapore.

  9. Huang, Z., Qin, A., Lu, J., Menon, A., Gao, J. (2020). Grape leaf disease detection and classification using machine learning. pp. 870-877.

  10. Jia, L., Hongsong, Z., Xiaohui, Y., Ying, J. and Jiaman, D. (2022). A parallel convolution and decision fusion-based flower classification method. Mathematics. 15: 27-67.

  11. Khan, I.R., Sangari, M.S., Shukla, P.K., Aleryani, A., Alqahtani, O., Alasiry, A., Alouane, M.T.H. (2023). An automatic- segmentation- and hyper-parameter-optimization-based artificial rabbits algorithm for leaf disease classification. Biomimetics. 8: 438.

  12. Lovell-Read, F.A., Parnell, S., Cunniffe, N.J., Thompson, R.N. (2023). Using 'sentinel' plants to improve early detection of invasive plant pathogens. PLoS Computational Biology. 19(2).

  13. Metagar, S.M. and Walikar, G.A. (2023). Machine learning models for plant disease prediction and detection: A Review. Agricultural Science Digest. doi:10.18805/ag.D-5893.

  14. Muneer, A. and Fati, S. (2020). Efficient and automated herbs classification approach based on shape and texture features using deep learning. IEEE Access. 8: 18.

  15. Munjal, D., Singh, L., Pandey, M. and Lakra, S. (2023). A systematic review on the detection and classification of plant diseases using machine learning. International Journal of Software Innovation. 11(1): 1-25.

  16. Na, M.H. and Na, I.S. (2024). Detection  and classification of wilting in soybean crop using cutting-edge deep learning techniques. Legume Research. doi:10.18805/LRF-797.

  17. Ni, J., Zhou, Z., Zhao, Y., Han, Z. and Zhao, L. (2023). Tomato leaf disease recognition based on improved convolutional neural network with attention mechanism. Plant Pathology. 72: 1335-1344.

  18. Peng, Y., Zhao, S., Liu, J. (2021). Fused-deep-features based grape leaf disease diagnosis. Agronomy. 11(11): 2234. https://doi.org/10.3390/agronomy11112234.

  19. Rudenko, M., Kazak, A., Oleinikov, N., Mayorova, A., Dorofeeva, A., Nekhaychuk, D., Shutova, O. (2023). Intelligent monitoring system to assess plant development state based on computer vision in viticulture. Computation. 11: 171.

  20. Saha, S.S., Sandha, S. and Srivastava, M. (2022). Machine learning for microcontroller-class hardware: A review. IEEE Sensors Journal. 22: 21362-21390. 

  21. Xie, X.M., Yuan, B., Liu, H., Jinrong, L., Shuqin, W.H. (2020). A deep-learning-based real-time detector for grape leaf diseases using improved convolutional neural networks. Frontiers in Plant Science. 11: 751. doi: 10.3389/fpls.2020.00751.
