Deep Hybrid Learning for Smart Agriculture using CNN-based Feature Extraction and LSTM-BiLSTM Sequence Modeling for Robust Plant Disease Detection

Vinay Sampatrao Mandlik1,*
Lenina S.V.B.2
1Swami Ramanand Teerth Marathwada University, Nanded-431 606, Maharashtra, India.
2Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Vishnupuri, Nanded-431 606, Maharashtra, India.

Background: Plant diseases significantly reduce global crop productivity, creating an urgent demand for intelligent, automated diagnostic systems in agriculture. Traditional manual inspection is labor-intensive, subjective and often ineffective in detecting early or latent symptoms. This study presents a multi-class classification and severity estimation framework for ten plant disease categories: Maize brown spot, maize rust, maize healthy, potato early blight (Alternaria solani), potato late blight (Phytophthora infestans), potato healthy, soybean mosaic virus (SMV), soybean pod mottle virus (SPMV), soybean sudden death syndrome (SDS/SBS) and soybean healthy. The objective is to develop a robust hybrid deep learning model capable of accurate early detection and quantitative severity assessment to support precision agriculture.

Methods: A hybrid architecture combining convolutional neural networks (CNN) with LSTM and BiLSTM networks was implemented. The preprocessing pipeline included leaf segmentation, binary masking, defect localization and edge detection to enhance lesion visibility. CNN layers extracted spatial and textural features, while recurrent layers modeled contextual dependencies within feature representations. Performance was evaluated using Precision, Recall, F1-score, defect percentage estimation, convergence analysis and t-SNE visualization.

Result: Results demonstrated stable convergence with decreasing loss (0.8-1.2) and improved feature clustering. Defect severity ranged from 0.00% (Soybean healthy) to 87.93% (Maize brown spot). The framework enables early detection (0.29-5% infection), reduces yield loss, minimizes chemical overuse and promotes sustainable smart agriculture systems.

Deep learning has significantly advanced agricultural automation, particularly in plant disease detection, which supports crop protection, yield enhancement and sustainable farming (Selvaraj et al., 2019; Shin et al., 2021; Mahanani et al., 2025). Plant diseases reduce global food production, making early diagnosis essential for minimizing yield losses and enabling timely management (Panchal et al., 2023; Setyaningrum et al., 2024). Conventional diagnostic methods rely on manual visual inspection, which is labor-intensive, subjective and impractical for large-scale farms (Gülmez, 2025). Additionally, visually similar disease symptoms complicate expert-level diagnosis (Wang et al., 2017; Tufa et al., 2023). With the rise of digital imaging and affordable mobile devices, computer-vision-based plant disease detection has emerged as a scalable alternative (Mahlein, 2016).
       
Convolutional Neural Networks (CNNs) automatically learn discriminative spatial features, outperforming handcrafted approaches (Roy and Bhaduri, 2021; Sharma et al., 2020). However, CNNs primarily capture spatial patterns and often struggle with subtle or overlapping symptoms influenced by illumination, occlusion and background variability (Domingues et al., 2022). Earlier machine-learning methods using handcrafted features with SVM, Random Forest and KNN achieved around 70% accuracy but lacked robustness (Chowdhury et al., 2021; Saha et al., 2024). With large datasets such as PlantVillage, CNN-based models reported accuracies between 79-95% (Qing et al., 2023; Fuentes et al., 2017), yet performance declined under real-field conditions and fine-grained disease differentiation (Kotwal et al., 2024).
       
To model sequential dependencies, researchers introduced LSTM and BiLSTM networks, achieving approximately 70-74% accuracy (Prajapati et al., 2017; Elshewey et al., 2025). Hybrid CNN–RNN architectures further improved contextual learning, reaching 75-80% accuracy (Habib et al., 2021; Chen et al., 2022). Nevertheless, challenges remain, including dataset annotation costs, segmentation errors and limited severity integration (Sladojevic et al., 2016; Guo et al., 2020; Zhang and Ren, 2025). Severity estimation techniques achieved 60-80% reliability but were sensitive to illumination (Panigrahi et al., 2020).
       
This study proposes a hybrid CNN-LSTM-BiLSTM framework that integrates spatial feature extraction with bidirectional contextual modeling. The CNN acts as a visual encoder, while LSTM and BiLSTM layers capture long-range and bidirectional dependencies. Feature evolution is analyzed using t-SNE visualization and performance is evaluated through accuracy, precision, recall, F1-score and ROC analysis. The proposed model enhances robustness and discriminability, offering a reliable solution for smart and sustainable agriculture.
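To make the feature-separability analysis concrete, the following minimal Python sketch (scikit-learn and matplotlib) shows how a t-SNE projection of learned feature vectors can be generated and colored by class. The array names, feature dimensionality and random placeholder data are illustrative assumptions; in practice the activations exported from the CNN or BiLSTM stage would be substituted.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder activations and labels; replace with exported CNN/BiLSTM features.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 256))   # assumed (N, 256) feature matrix
labels = rng.integers(0, 10, size=200)   # assumed integer ids for the 10 classes

# Project the 256-D features to 2-D to inspect inter-class clustering.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE projection of learned feature vectors")
plt.show()
```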
       
Recent studies highlight robustness challenges in real-field disease detection. CNN models achieved 75-83% accuracy on controlled datasets but dropped significantly under field conditions due to domain shift and illumination variation (Mohanty et al., 2016; Too et al., 2019; Ferentinos, 2018). Attention and transfer-learning methods improved accuracy to ~76-81% but increased complexity (Zhang et al., 2019; Brahimi et al., 2017). Hybrid CNN-RNN models enhanced contextual learning (72-80%) yet remained sensitive to segmentation and noise (Liu et al., 2018; Wang et al., 2017; Habib et al., 2020; Kumar et al., 2022).
 
Research gap
 
Most plant-disease studies rely mainly on CNNs that capture only spatial textures, missing contextual or long-range dependencies needed to separate visually similar diseases under lighting, pose, background clutter and mixed stress. RNN approaches are scarce and often use basic LSTMs without bidirectionality or CNN fusion. Interpretability (e.g., multi-stage t-SNE) is also limited, motivating a CNN-LSTM-BiLSTM hybrid with feature-separability insights.
 
Problem statement
 
Plant disease identification is challenging due to high inter-class similarity, strong visual variability and the need for contextual interpretation of complex features. Conventional CNNs, limited to spatial cues and finite receptive fields, struggle to capture long-range, order-preserving dependencies, causing misclassification when symptoms are subtle or overlapping. This work targets these limitations by proposing a context-aware hybrid framework that fuses CNN spatial representation learning with sequential dependency modelling, aiming to generate more discriminative and reliable features for robust multi-class disease detection under real-world agricultural conditions.
This study uses 10 APID leaf categories: maize (Brown Spot, Rust, Healthy), potato (Early Blight, Late Blight, Healthy) and soybean (Mosaic Virus, Pod Mottle, SBS, Healthy), each defined by distinctive lesion, pustule, blight, mottling or necrotic symptoms, while healthy leaves remain uniformly green. The dataset contains 1000 balanced real-world images captured under varied lighting and backgrounds. Images were resized to 128×128×3, zero-centered and augmented (rotation, flips, brightness, translation), then split 80/20 for training/testing. A custom CNN extracts 256-D spatial features, which are sequenced into LSTM and BiLSTM layers for contextual modeling, followed by softmax classification. The overall workflow architecture of the proposed hybrid deep learning model integrating CNN, LSTM and BiLSTM components is illustrated in Fig 1. Evaluation used accuracy, precision, recall, F1-score, confusion matrices, ROC-AUC, confidence histograms and t-SNE, showing improved separability over CNN-only baselines.

Fig 1: Workflow architecture for hybrid deep learning-based plant disease detection integrating CNN, LSTM and BiLSTM models.
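A minimal Keras sketch of a hybrid network of this kind is shown below. It assumes 128×128×3 inputs, a 256-D CNN feature stage and a softmax over 10 classes, as described above; the specific layer widths and the reshaping of the 256-D vector into a 16×16 sequence for the recurrent stage are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_model(num_classes: int = 10) -> tf.keras.Model:
    inputs = layers.Input(shape=(128, 128, 3))

    # CNN encoder: spatial and textural feature extraction.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)     # 256-D spatial feature vector

    # Sequence the feature vector into the recurrent stage (assumed 16x16 split).
    x = layers.Reshape((16, 16))(x)
    x = layers.LSTM(64, return_sequences=True)(x)   # forward contextual modeling
    x = layers.Bidirectional(layers.LSTM(64))(x)    # bidirectional dependencies

    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_hybrid_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```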


 
Training convergence of CNN and LSTM-BiLSTM
 
Fig 2-3 show stable training for both models. The CNN accuracy increases from 10-20% to 65-75% over 500-750 iterations, while its loss drops from 12.8 to below 2 within about 60 iterations, indicating strong texture learning. The LSTM-BiLSTM accuracy rises from 5-10% to 50-65% by about 700 iterations, peaking above 70%, with loss settling near 0.9 apart from a brief spike around iteration 420.

Fig 2: Training dynamics of the CNN Model improving accuracy with reduced loss.



Fig 3: LSTM-BiLSTM training convergence accuracy improvement and loss reduction across iterations.
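Continuing the hypothetical Keras sketch above, accuracy and loss curves like those in Fig 2-3 can be reproduced from the training history; the placeholder arrays below stand in for the real augmented training set.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for the augmented 128x128x3 training images.
x_train = np.random.rand(64, 128, 128, 3).astype("float32")
y_train = np.random.randint(0, 10, size=(64,))

# `model` is the hybrid network from the previous sketch.
history = model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)

plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["loss"], label="training loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```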


 
Confusion matrix of the hybrid CNN-LSTM-BiLSTM model showing class-wise prediction performance and misclassification patterns
 
The confusion matrix in Fig 4 assesses the hybrid CNN-LSTM-BiLSTM model across the 10 leaf classes. Maize Brown Spot achieves 17 correct predictions (56.7%), with misclassifications occurring mainly as Maize Rust. Maize Healthy is identified perfectly (20/20). Classes such as Potato Early Blight and Soybean Mosaic Virus show moderate accuracy due to visual overlap, confirming mixed spatial-temporal discrimination.

Fig 4: Comprehensive confusion matrix illustrating multi-class accuracy of the proposed hybrid deep learning framework.
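For reference, a class-wise confusion matrix of this form can be computed directly with scikit-learn; the label arrays below are placeholders for the held-out test labels and the model's predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.random.randint(0, 10, size=200)   # placeholder ground-truth labels
y_pred = np.random.randint(0, 10, size=200)   # placeholder model predictions

cm = confusion_matrix(y_true, y_pred, labels=range(10))
per_class_recall = cm.diagonal() / cm.sum(axis=1)   # e.g. 17/20 for Maize Brown Spot
print(cm)
print(per_class_recall)
```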


 
Class-wise precision, recall and F1-score evaluation of the hybrid CNN-LSTM-BiLSTM model for multi-crop disease classification
 
Fig 5 (Precision-Recall-F1) summarizes class-wise performance of the hybrid model. Maize Brown Spot shows moderate precision (0.57) with high recall (0.85), yielding an F1-score of 0.68. Maize Healthy performs strongly (precision 0.67, recall 1.00, F1 0.80). Potato Early Blight records lower scores due to confusion with healthy leaves. Soybean SBS achieves the highest F1 (0.88), indicating robust discrimination for visually distinct symptoms.

Fig 5: Performance metrics distribution of the proposed hybrid deep learning framework across all plant disease categories.
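The per-class precision, recall and F1 summary reported in Fig 5 corresponds to scikit-learn's classification report, sketched below with placeholder label arrays.

```python
import numpy as np
from sklearn.metrics import classification_report

class_names = ["Maize Brown Spot", "Maize Rust", "Maize Healthy",
               "Potato Early Blight", "Potato Late Blight", "Potato Healthy",
               "Soybean Mosaic Virus", "Soybean Pod Mottle", "Soybean SBS",
               "Soybean Healthy"]

y_true = np.random.randint(0, 10, size=200)   # placeholder ground-truth labels
y_pred = np.random.randint(0, 10, size=200)   # placeholder model predictions

print(classification_report(y_true, y_pred, labels=range(10),
                            target_names=class_names, zero_division=0))
```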


 
Prediction confidence distribution of the hybrid CNN-LSTM-BiLSTM model for crop disease classification
 
Fig 6 presents the prediction-confidence distribution of the hybrid CNN–LSTM–BiLSTM model. Most outputs cluster in three bands: 0.35-0.45, 0.55-0.65 and 0.90-0.92. Low confidence (0.35-0.45) corresponds to visually subtle diseases such as early blight and mosaic virus, while the highest confidence (0.90-0.92) aligns with distinct classes like Soybean SBS and Soybean Pod Mottle. Overall, the histogram indicates robust performance with strong mid-confidence peaks and high certainty for clearly separable symptoms.

Fig 6: Analyzing confidence levels of the proposed hybrid deep learning framework in multi-class plant disease recognition.
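The confidence histogram is obtained by taking the maximum softmax probability of each test prediction, as in the short sketch below; the Dirichlet-sampled probabilities are placeholders for the model's actual softmax outputs.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder softmax outputs; in practice use model.predict(test_images).
probs = np.random.dirichlet(np.ones(10), size=200)
confidence = probs.max(axis=1)   # per-image prediction confidence

plt.hist(confidence, bins=20, edgecolor="black")
plt.xlabel("maximum softmax probability")
plt.ylabel("number of test images")
plt.show()
```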



Training convergence of CNN and LSTM-BiLSTM models
 
Fig 7-8 indicate stable learning. The CNN accuracy rises from 10-20% to 60-75% by about 300 iterations, with loss dropping from 13.2 to below 1.0 after about 200 iterations, showing strong spatial feature extraction and occasional peaks above 80%. The LSTM-BiLSTM accuracy improves from 8-15% to above 60% (often exceeding 80%), while its loss falls to ~1.5 by iteration 200; a spike near iteration 650 reflects ambiguous batches, after which training quickly stabilizes.

Fig 7: Performance evolution of CNN during iterative learning for plant disease classification.



Fig 8: Performance evolution of the RNN model during iterative learning for agricultural disease classification.


 
Evaluation of hybrid CNN + RNN classifier using class-wise confusion matrix
 
Fig 9 shows the confusion matrix for the CNN+RNN hybrid model. The model correctly classifies 20 samples of Maize Brown Spot (87.0% accuracy), while other categories such as Potato Early Blight exhibit lower accuracy due to feature overlap with healthy leaves. Overall, the confusion matrix highlights the model's strong feature extraction and temporal reinforcement, enabling accurate assessment of disease patterns across a variety of crops.

Fig 9: Confusion matrix of the hybrid CNN-RNN model for multi-class crop disease classification.


 
Class-wise precision, recall and F1-score analysis of the hybrid CNN-RNN model
 
Fig 10 compares CNN+RNN class-wise metrics. Maize Brown Spot performs well (precision/recall/F1 = 0.74). Potato Early Blight fails completely (precision = 0, recall = 0), indicating severe confusion. Soybean SBS achieves perfect scores (1.00/1.00/1.00). Overall, results show strong detection for distinct symptoms but weakness with overlapping patterns.

Fig 10: Performance metrics distribution for CNN + RNN hybrid model in multi-class disease detection.


 
Prediction confidence distribution of the CNN-RNN hybrid model for multi-class disease detection
 
Fig 11 shows the hybrid CNN+RNN confidence distribution across the 10 disease classes. Most predictions cluster at 0.38-0.42 (~90 samples), reflecting moderate certainty where spatial lesions are captured but temporal cues remain ambiguous (e.g., Maize Rust, Potato Early Blight, Soybean Mosaic Virus). Low-confidence outputs at 0.30-0.32 (~40 samples) correspond to harder cases such as mild chlorosis, which would benefit from better features or augmentation. Higher confidence at 0.50-0.55 (~30-35 samples) aligns with clearer patterns (Potato Late Blight, Soybean Pod Mottle). A small high-confidence peak at 0.80-0.85 (~18 samples) corresponds to distinctive classes such as Soybean SBS and Maize Brown Spot.

Fig 11: Confidence score variability in CNN + RNN model outputs for plant leaf classification.


 
Comprehensive image processing workflow for maize brown spot, maize healthy, maize rust, potato early blight, potato healthy, potato late blight, soybean healthy, soybean mosaic virus, soybean pod mottle and soybean SBS: spot detection and severity quantification
 
Fig 12-14 depict the full workflow for 10 leaf classes, showing segmentation, defect localization and edge enhancement (original image, binary mask, segmented leaf, defect mask, red overlay and Canny edges) to link symptoms with defect percentage. Maize Brown Spot is most severe (87.93% necrotic lesions), while Maize Rust is mild (5.43%); Maize Healthy defects (7.21%) mainly reflect illumination artifacts. Potato Early Blight (41.86%) and Late Blight (45.14%) show major necrosis, whereas Potato Healthy is near-zero (0.29%). Soybean Mosaic Virus (5.63%), Pod Mottle (11.41%) and SBS (30.20%) vary in severity; Soybean Healthy is 0.00%. Table 1 summarizes symptoms and defects (0.00-87.93%), supporting interpretable, reliable Hybrid CNN-LSTM-BiLSTM detection.

Fig 12: Maize brown spot leaf segmentation, defect mapping and edge extraction for disease severity analysis.



Fig 13: Potato early blight leaf segmentation, defect mapping and edge extraction for disease severity analysis.



Fig 14: Soybean SBS leaf segmentation, defect mapping and edge extraction for disease severity analysis.



Table 1: Summary of diseases, symptomatology and detected defect percentage.
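A hedged OpenCV sketch of this severity pipeline is given below: the leaf is segmented by HSV thresholding, lesions are isolated as non-green pixels inside the leaf mask, Canny edges highlight lesion boundaries and the defect percentage is the ratio of lesion pixels to leaf pixels. The file name and HSV thresholds are illustrative assumptions, not the exact values used in the study.

```python
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                       # assumed input image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Leaf mask: broad hue range covering green and yellow-brown tissue (assumed).
leaf_mask = cv2.inRange(hsv, (20, 40, 40), (90, 255, 255))

# Defect mask: non-green (brown/necrotic) pixels restricted to the leaf area.
green_mask = cv2.inRange(hsv, (36, 40, 40), (86, 255, 255))
defect_mask = cv2.bitwise_and(leaf_mask, cv2.bitwise_not(green_mask))

# Edge map used for lesion-boundary enhancement (the Canny panel in Fig 12-14).
edges = cv2.Canny(defect_mask, 100, 200)

defect_pct = 100.0 * np.count_nonzero(defect_mask) / max(np.count_nonzero(leaf_mask), 1)
print(f"Estimated defect percentage: {defect_pct:.2f}%")
```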


 
Heatmap of leaf defect severity across ten crop disease categories
 
Fig 15 presents a severity heatmap for 10 crop diseases, where darker colors indicate higher defect percentage. Maize brown spot shows the highest damage (~85-90%), while Maize Healthy remains minimal (0-5%). Potato Early and Late Blight exhibit high severity (40-50%). Soybean SBS shows moderate defects (25-35%), validating pipeline effectiveness.

Fig 15: Comparative defect percentage distribution for multi-class plant disease detection.
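A simple matplotlib sketch reproducing a heatmap of this kind from the defect percentages reported in Table 1 is shown below; the colormap choice is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

classes = ["Maize Brown Spot", "Maize Rust", "Maize Healthy",
           "Potato Early Blight", "Potato Late Blight", "Potato Healthy",
           "Soybean Mosaic Virus", "Soybean Pod Mottle", "Soybean SBS",
           "Soybean Healthy"]
defect_pct = np.array([[87.93, 5.43, 7.21, 41.86, 45.14, 0.29, 5.63, 11.41, 30.20, 0.00]])

fig, ax = plt.subplots(figsize=(10, 2))
im = ax.imshow(defect_pct, cmap="YlOrRd", aspect="auto")
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(classes, rotation=45, ha="right")
ax.set_yticks([])
fig.colorbar(im, label="defect %")
plt.tight_layout()
plt.show()
```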


       
Table 2 compares twelve plant disease detection methods by algorithm, dataset and accuracy against the proposed Hybrid CNN-LSTM-BiLSTM. Unlike prior work, our approach integrates segmentation and defect-percentage quantification with deep spatial–temporal learning, achieving stronger multi-class recognition, improved feature separability and early-stage severity estimation for precision agriculture applications.

Table 2: Comparison of the present results with some previously reported strategies.

This study demonstrates that the proposed hybrid CNN-LSTM-Bidirectional Long Short-Term Memory (BiLSTM) deep learning model is an effective and transparent approach for multi-class plant disease detection across ten leaf categories from three major crops. The work is motivated by the need for early, automated and reliable detection of economically damaging diseases such as maize brown spot (87.93% defect area), potato late blight (45.14%), potato early blight (41.86%) and soybean SBS (30.20%) before they cause widespread crop loss and harm farmers' livelihoods. The framework detected disease severity ranging from 0.00% (soybean healthy) and 0.29% (potato healthy) up to 87.93% (maize brown spot), demonstrating sensitivity to both mild and severe stages of infection. Convergence analysis showed that CNN accuracy rose to approximately 70-80% while the hybrid recurrent component stabilized at around 60-75%, with loss settling in the 0.8-1.2 range, indicating stable learning dynamics. Confusion-matrix and precision-recall-F1 analyses confirmed strong separation both for severe, visually distinct classes such as soybean SBS and maize brown spot and for visually similar, early-stage infections such as maize rust (5.43%) and soybean mosaic virus (5.63%). t-SNE feature visualization further confirmed that the hybrid model achieves better inter-class feature separation than CNN-only baselines. The system is of particular practical relevance to precision farming in early-infection scenarios, where defect percentages of 0.29-7% still allow timely intervention before severe crop damage occurs. Overall, the proposed framework offers a field-adaptable and scalable solution for smart farming and sustainable crop health management.
The authors acknowledge the expertise provided by Dr. A. D. Jadhav and Mr. P. A. Puranik of Loknete Mohanrao Kadam College of Agriculture, Hingangaon (Kadegaon), Sangli, Maharashtra, India, in validating the image dataset of the developed system.
 
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided, but do not accept any liability for any direct or indirect losses resulting from the use of this content.
The authors declare that there are no conflicts of interest regarding the publication of this article. No funding or sponsorship influenced the design of the study, data collection, analysis, decision to publish, or preparation of the manuscript.

  1. Liu, B., Zhang, Y., He, D. and Li, Y. (2018). Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry. 10(1): 11.

  2. Bajpai, C., Sahu, R. and Naik, K.J. (2023). Deep learning model for plant-leaf disease detection in precision agriculture. International Journal of Intelligent Systems Technologies and Applications. 21(1): 72-91.

  3. Botero-Valencia, J., García-Pineda, V., Valencia-Arias, A., Valencia, J., Reyes-Vera, E., Mejia-Herrera, M. and Hernández- García, R. (2025). Machine learning in sustainable agriculture: Systematic review and research perspectives. Agriculture. 15(4): 377.

  4. Brahimi, M., Boukhalfa, K. and Moussaoui, A. (2017). Deep learning for tomato diseases: Classification and symptoms visualization. Applied Artificial Intelligence. 31(4): 299-315. 

  5. Chen, Z., Wu, R., Lin, Y., Li, C., Chen, S., Yuan, Z. and Zou, X. (2022). Plant disease recognition model based on improved YOLOv5. Agronomy. 12(2): 365.

  6. Chowdhury, M.E., Rahman, T., Khandakar, A., Ayari, M.A., Khan, A.U., Khan, M.S. and Ali, S.H.M. (2021). Automatic and reliable leaf disease detection using deep learning techniques. Agri Engineering. 3(2): 294-312.

  7. Domingues, T., Brandão, T. and Ferreira, J.C. (2022). Machine learning for detection and prediction of crop diseases and pests: A comprehensive survey. Agriculture. 12(9): 1350.

  8. Eliza, A. and Baskaran, D. (2025). A novel detection and segmentation system for Eichhornia crassipes growth rate using region vision transformer-based adaptive YOLO with UNet++. Aquacultural Engineering. 102659.

  9. Elshewey, A.M., Tawfeek, S.M., Alhussan, A.A., Radwan, M. and Abed, A.H. (2025). Optimized deep learning for potato blight detection using the waterwheel plant algorithm and sine cosine algorithm. Potato Research. 68(1): 1-25.

  10. Ferentinos, K.P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture. 145: 311-318.

  11. Fuentes, A., Yoon, S., Kim, S.C. and Park, D.S. (2017). A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors. 17(9): 2022.

  12. Goshika, S., Meksem, K., Ahmed, K.R. and Lakhssassi, N. (2023). Deep learning model for classifying and evaluating soybean leaf disease damage. International Journal of Molecular Sciences. 25(1): 106.

  13. Gülmez, B. (2025). A comprehensive review of convolutional neural networks based disease detection strategies in potato agriculture. Potato Research. 68(2): 1295-1329.

  14. Guo, Y., Zhang, J., Yin, C., Hu, X., Zou, Y., Xue, Z. and Wang, W. (2020). Plant disease identification based on deep learning algorithm in smart farming. Discrete Dynamics in Nature and Society. (1): 2479172.

  15. Habib, M.T., Hossain, M.S. and Islam, M.R. (2020). Hybrid CNN-LSTM model for plant disease detection. In: Proceedings of the International Conference on Intelligent Computing and Control Systems (ICICCS). pp. 1254-1260.

  16. Habib, M.T., Majumder, A., Jakaria, A.Z.M., Akter, M., Uddin, M.S. and Ahmed, F. (2020). Machine vision based papaya disease recognition. Journal of King Saud University- Computer and Information Sciences. 32(3): 300-309.

  17. Khalid, M.M. and Karan, O. (2024). Deep learning for plant disease detection. International Journal of Mathematics, Statistics and Computer Science. 2: 75-84.

  18. Khan, A.I., Quadri, S.M.K., Banday, S. and Shah, J.L. (2022). Deep diagnosis: A real-time apple leaf disease detection system based on deep learning. Computers and Electronics in Agriculture. 198: 107093.

  19. Koli, S., Gehlot, A., Singh, R., Al Yarimi, F. A. M., Bharany, S., Din, S. and Rehman, A.U. (2025). Sustainable edge AI for precision agriculture: A lightweight CNN model for aloe vera leaf disease diagnosis. Expert Systems. 42(12): e70154.

  20. Kotwal, J.G., Kashyap, R. and Shafi, P.M. (2024). Artificial driving based EfficientNet for automatic plant leaf disease classification. Multimedia Tools and Applications. 83(13): 38209-38240.

  21. Kumar, N., Sharma, P. and Kumar, A. (2022). Grape leaf disease detection using CNN and BiLSTM-based hybrid model. Multimedia Tools and Applications. 81: 12345-12363.

  22. Mahanani A.U., Purwanto, E., Totok, A.D.H. and Rahayu, M. (2025). The effect of liquid smoke concentration of red fruit seed waste in controlling weeds on the yield of black soybean [Glycine max (L.) Merrill] malika variety by intercropping with gogo rice. Agricultural Science Digest. 45(2): 228-233. doi: 10.18805/ag.DF-675.

  23. Mahlein, A.K. (2016). Plant disease detection by imaging sensors- parallels and specific demands for precision agriculture and plant phenotyping. Plant Disease. 100(2): 241-251.

  24. Mohanty, S.P., Hughes, D.P. and Salathé, M.  (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science. 7: 1419. 

  25. Panchal, A.V., Patel, S.C., Bagyalakshmi, K., Kumar, P., Khan, I.R. and Soni, M. (2023). Image-based plant diseases detection using deep learning. Materials Today: Proceedings. 80: 3500-3506.

  26. Panigrahi, K.P., Das, H., Sahoo, A.K. and Moharana, S.C. (2020). Maize Leaf Disease Detection and Classification using Machine Learning Algorithms. In: Progress in Computing, Analytics and Networking: Proceedings of ICCAN 2019. Singapore: Springer Singapore. (pp. 659-669).

  27. Prajapati, H.B., Shah, J.P. and Dabhi, V.K. (2017). Detection and classification of rice plant diseases. Intelligent Decision Technologies. 11(3): 357-373.

  28. Qing, J., Deng, X., Lan, Y. and Li, Z. (2023). GPT-aided diagnosis on agricultural image based on a new light YOLOPC.  Computers and Electronics in Agriculture. 213: 108168.

  29. Roy, A.M. and Bhaduri, J. (2021). A deep learning enabled multi-class plant disease detection model based on computer vision. AI. 2(3): 413-428.

  30. Saha, A., Sharma, V., Mondal, R., Mishra, S., Daramola, I. and Abbas, A.M. (2024). Pixels to Pathogens: A Deep Learning Approach to Plant Pathology Detection. In: 2024 4th International Conference on Innovative Practices in Technology and Management (ICIPTM). IEEE.

  31. Saleem, M.H., Potgieter, J. and Arif, K.M. (2022). A performance- optimized deep learning-based plant disease detection approach for horticultural crops of New Zealand. IEEE Access. 10: 89798-89822.

  32. Selvaraj, M.G., Vergara, A., Ruiz, H., Safari, N., Elayabalan, S., Ocimati, W. and Blomme, G. (2019). AI-powered banana diseases and pest detection. Plant Methods. 15(1): 92.

  33. Setyaningrum, D., Budiastuti, M.T.S. and Purnomo, D.S. (2024). Role of organic fertilizer types on nutrient absorption and soybean yield in teak-based agroforestry systems. Agricultural Science Digest. 44(1): 35-40. doi: 10.18805/ag.DF-574.

  34. Sharma, P., Berwal, Y.P.S. and Ghai, W. (2020). Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Information Processing in Agriculture. 7(4): 566-574.

  35. Sharma, R., Singh, A., Jhanjhi, N.Z., Masud, M., Jaha, E.S. and Verma, S. (2022). Plant disease diagnosis and image classification using deep learning. Computers, Materials and Continua. 71(2): 2125-2140.

  36. Shin, J., Chang, Y.K., Heung, B., Nguyen-Quang, T., Price, G.W. and Al-Mallahi, A. (2021). A deep learning approach for RGB image-based powdery mildew disease detection on strawberry leaves. Computers and Electronics in Agriculture. 183: 106042.

  37. Shin, J., Mahmud, M.S., Rehman, T.U., Ravichandran, P., Heung, B. and Chang, Y.K. (2022). Trends and prospect of machine vision technology for stresses and diseases detection in precision agriculture. Agri Engineering. 5(1): 20-39.

  38. Sladojevic, S., Arsenovic, M., Anderla, A., Culibrk, D. and Stefanovic, D. (2016). Deep neural networks based recognition of plant diseases by leaf image classification. Computational Intelligence and Neuroscience. (1): 3289801.

  39. Sun, H., Chu, H.Q., Qin, Y.M., Hu, P. and Wang, R.F. (2025). Empowering smart soybean farming with deep learning: Progress, challenges and future perspectives. Agronomy. 15(8): 1831.

  40. Too, E.C., Yujian, L., Njuki, S. and Yingchun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture. 161: 272-279. 

  41. Tufa, B., Alemu, D., Ayantu, T. and Fikirte, D. (2023). Identification of insect pests of maize (Zea mays L.) in Girar Jarso and Hidebu Abote Districts, North Shewa Zone, Oromia, Central Ethiopia. Indian Journal of Agricultural Research.  57(1): 103-109.  doi: 10.18805/IJARe.AF-707.

  42. Wang, G., Sun, Y. and Wang, J. (2017). Automatic image based plant disease severity estimation using deep learning.  Computational Intelligence and Neuroscience. (1): 2917536.

  43. Zhang, H. and Ren, G. (2025). Intelligent leaf disease diagnosis: Image algorithms using Swin Transformer and federated learning. The Visual Computer. 41(7): 4815-4838.

  44. Zhang, L., Jia, J., Li, Y., Gao, W. and Wang, M. (2019). Deep learning based rapid diagnosis system for identifying tomato nutrition disorders. KSII Transactions on Internet and Information Systems. 13(4). doi: 10.3837/tiis.2019.04.015.

  45. Zhang, S., Zhang, S., Zhang, C. and Wang, Y. (2019). Cucumber leaf disease identification with global pooling dilated convolutional neural network. Computers and Electronics in Agriculture. 162: 422-430.
