The Role of Deep Learning in Enhancing Crop Sustainability: A Study on AlexNet’s Application in Detecting Bean Leaf Disease

1Zhejiang Academy of Agricultural Sciences, 198 Shiqiao Road, Hangzhou City, Zhejiang Province, 310021, China.
2School of Electrical and Computer Engineering, Yeosu Campus, Chonnam National University, 50, Daehak-ro, Yeosu-si, Jeollanam-do, 59626, Republic of Korea.
  • Submitted: 06-12-2024

  • Accepted: 27-10-2025

  • First Online: 30-10-2025

  • doi: 10.18805/LRF-843

Background: Agriculture has always been the foundation of global food security but is challenged by crop diseases. Beans, a vital protein source, are particularly vulnerable to diseases caused by various pathogens, including bacteria and fungi. Accurate disease identification has become critical to meet the increasing demands of a growing population.

Methods: This study utilized the AlexNet convolutional neural network (CNN) to classify bean leaf images into three classes (two disease classes, i.e. Angular Leaf Spot and Rust, and one Healthy class). An open dataset containing leaf images of beans belonging to all three classes was used to train the model. The images were first preprocessed and resized to 224x224 pixels for optimal model performance. The AlexNet model was trained for 25 epochs using cross-entropy loss, ReLU activation, max-pooling and dropout regularization.

Result: A training accuracy of 95.4% was achieved, while the validation accuracy was 78.2%. Other performance metrics, such as precision, recall and F1-score, highlighted strengths in identifying healthy leaves, with an overall accuracy of 82.81%. However, some misclassifications occurred between disease classes due to visual similarities. The results demonstrate AlexNet’s potential for automated plant disease detection, providing a scalable solution for enhancing agricultural practices and food security. Further optimization and integration with field applications are recommended for improved accuracy and usability.

Agriculture has been responsible for food security, as it directly impacts food availability for over 800 million undernourished people (Pawlak and Kołodziejczak, 2020; Beebe, 2000). It supports economic stability, particularly in developing countries where populations rely on locally produced staples. The globalization of agriculture highlights the need for resilient crop varieties to combat plant diseases, ensuring sustainable food production amidst growing challenges (Strange and Scott, 2005; Fiona and Denish, 2024). Among common crops, beans are particularly vulnerable to various diseases, which can greatly affect their productivity. Accurate and timely identification of plant diseases is crucial for preventing large-scale crop losses (Sahu and Sahu, 2021; Mallick et al., 2023).
       
Beans are an essential part of diets worldwide, valued not only for their nutritional content but also for their role in food security and combating malnutrition. Rich in protein, fiber, vitamins and minerals, beans contribute significantly to the diets of populations in both developed and developing nations (Broughton et al., 2003). Particularly in regions where access to animal protein is limited, beans serve as an affordable and nutrient-dense alternative, helping address protein and micronutrient deficiencies that contribute to malnutrition (FAO, 2016). This makes beans a crucial crop for improving dietary quality and supporting sustainable agricultural systems, especially in areas with high food insecurity (Jeong and Na, 2024).
       
Despite their nutritional benefits, bean crops are susceptible to a number of diseases that can reduce yield, impacting both farmers and consumers. Among these, Angular Leaf Spot and Rust are two of the most prevalent diseases affecting beans, each posing significant threats to crop productivity (Wagara and Kimani, 2007). Accurate identification and timely intervention are essential to manage these diseases and ensure a stable supply of beans. However, traditional disease identification methods are often labour-intensive, requiring expert knowledge and specialized equipment that may not be readily available in rural or resource-limited areas (Simhadri et al., 2024; Shoaib et al., 2023).  This limitation underscores the need for accessible, efficient and automated methods of plant disease detection.
       
Recent developments in information and communication technology, particularly artificial intelligence (AI), have proven effective in addressing these challenges. Convolutional Neural Networks (CNNs), a subset of deep learning, have been widely adopted for image-based disease identification due to their high accuracy and ability to generalize across different datasets (Kamilaris and Prenafeta-Boldú, 2018; AlZubi et al., 2023; Al-Dosari et al., 2024; Lugito et al., 2022). AlexNet, a pioneering CNN architecture introduced by Krizhevsky et al., (2012), has proven particularly effective in large-scale image classification tasks. Its layered design, incorporating convolutional and fully connected layers with ReLU activations and dropout regularization, enhances model performance and reduces overfitting, making it well-suited for complex visual data such as plant disease images (Zhuoxin et al., 2022).
       
In this study, AlexNet was applied to classify bean leaf images into three classes: Angular Leaf Spot, Rust and Healthy. The aim was to assess the model’s performance in segregating healthy leaves from diseased leaves, using CNNs for crop health monitoring and disease management. 
 
Related work
 
AlexNet has been widely applied in image identification tasks due to its efficiency in handling complex image data. AlexNet became the standard for visual-related tasks after its success in the ImageNet challenge in 2012 (Singh et al., 2023; Akter et al., 2023). Since its development, AlexNet has set a foundation for deep learning in image recognition, particularly due to its performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where it achieved unprecedented accuracy (Krizhevsky et al., 2012; Sladojevic and Anderla, 2016; Santiago et al., 2023). AlexNet’s architecture consists of five convolutional layers, along with max-pooling layers and three fully connected layers, which enable it to capture both spatial and hierarchical features of images. This model’s simplicity and effectiveness have led to its extensive use in diverse fields, particularly in the identification of diseases through image analysis.
       
AlexNet has shown significant promise in disease identification across various medical and agricultural applications. Litjens et al., (2017) highlighted AlexNet’s application across medical imaging modalities, demonstrating its capability to detect patterns indicative of diseases like cancer, liver cirrhosis and diabetic retinopathy. In medical image analysis, AlexNet has been used for its ability to process large 2D CT slices for detecting interstitial patterns. The network employs large receptive fields in initial layers and smaller kernels in later layers, enhancing feature extraction. Additionally, AlexNet’s pre-trained models have demonstrated strong performance in various classification tasks, challenging human expert accuracy in some instances.
       
In the medical imaging domain, for example, AlexNet has been applied to identify and classify diseases in radiographs. As Alom et al., (2018) discuss, CNNs like AlexNet can efficiently process large datasets and distinguish complex patterns, making them suitable for diagnosing conditions such as pneumonia and skin cancer through radiological images. Similarly, AlexNet was employed to analyze MRI brain images for diagnosing Alzheimer’s disease at an early stage by extracting significant features indicative of Mild Cognitive Impairment (MCI). The model utilized transfer learning techniques and achieved a high accuracy of 98.35% in classifying the images, demonstrating its effectiveness in early detection (Kumar et al., 2022). Spanhol et al., (2016) utilized AlexNet to classify breast cancer histopathology images, achieving high diagnostic accuracy. The architecture was adapted to handle high-resolution histopathological images and demonstrated superior performance compared to simpler models like LeNet. Liu et al. (2017) highlighted that using AlexNet for diagnosing tuberculosis (TB) improved classification accuracy for various TB manifestations in chest X-ray images. It demonstrated outstanding performance in detecting TB, achieving an 85.68% classification accuracy, which surpassed previous methods. Additionally, the architecture’s ability to be fine-tuned with transfer learning from natural image datasets enhanced its effectiveness in medical image analysis.
       
In agriculture, AlexNet has been instrumental in diagnosing plant diseases, a critical task for food security. Plant diseases can often be identified by specific visual symptoms and CNNs like AlexNet are well-suited for capturing these subtleties. Ferentinos (2018), utilizing AlexNet, demonstrated that CNNs can effectively distinguish healthy from diseased plants, even when the visual differences are subtle. Similarly, Too et al., (2019) compared various deep learning models, including AlexNet, for plant disease recognition and found that AlexNet performed competitively, further reinforcing its relevance in agricultural image identification. Specifically, in bean plants, AlexNet has been applied to detect common diseases, such as rust and bacterial blight, that manifest visually on leaves, showing that the network could distinguish between healthy leaves and those affected by diseases with high accuracy (Yu et al., 2023). Their work highlights AlexNet’s ability to detect subtle patterns and textures in leaf imagery, which are indicative of various diseases. In another study, AlexNet was implemented with transfer learning, where pre-trained weights were fine-tuned and hyperparameters adjusted to improve performance, achieving a classification accuracy of 95.31%.
       
In this study, AlexNet was used to analyse leaf images of beans for the purpose of identifying diseased plants. The detailed methodology is given in the following section.
This study utilized the AlexNet deep learning model to classify bean leaf images into three categories: Angular Leaf Spot, Rust and Healthy. The dataset comprised 1,034 images, divided equally among the three classes (Fig 1). Images were resized to 224x224 pixels, a standard input size for AlexNet, which is designed to optimize the processing of large-scale images while reducing computational costs.
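The resizing step described above can be sketched with a minimal nearest-neighbour resampler. A real pipeline would normally call a library routine (e.g. in Pillow or OpenCV), so treat `resize_nearest` below as a hypothetical helper that only illustrates the index mapping behind scaling every image to 224x224:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list of pixel values.

    Each output pixel (i, j) copies the source pixel at the matching
    relative position: src_row = i * in_h // out_h, likewise for columns.
    """
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

# Upscaling a tiny 2x2 "image" to 4x4 replicates each pixel into a 2x2 block.
small = [[1, 2],
         [3, 4]]
resized = resize_nearest(small, 4, 4)
```

The same index arithmetic applies per colour channel when mapping arbitrary leaf photographs to the fixed 224x224 input AlexNet expects.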

Fig 1: Three classes used for training and evaluation: a) Angular leaf spot, b) Rust and c) Healthy.


       
AlexNet achieved remarkable accuracy in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). It demonstrated that deep CNNs could be applied effectively to large-scale image classification tasks, sparking widespread interest in deep learning models.
 
Mathematical basis of AlexNet
 
AlexNet operates through a series of mathematical operations designed to mimic human visual processing, enhancing the model’s capacity for feature extraction and classification. These operations include convolution, activation functions, pooling, fully connected layers and dropout regularization, each contributing uniquely to AlexNet’s effectiveness in image classification.
 
Convolutional layers for feature extraction
 
Convolutional layers in AlexNet slide filters (kernels) over the input image to generate feature maps, capturing local patterns like edges and textures. This process is represented mathematically as:
 
S(i, j) = Σ_m Σ_n I(i + m, j + n) · F(m, n)
 
Where,
I(m, n) = A pixel value in the input image.
F = The filter (kernel).
S(i, j) = The output feature at position (i, j).
       
This operation detects features specific to an image region, making it essential for hierarchical pattern recognition (Krizhevsky et al., 2012).
       
       
Convolution’s effectiveness in identifying features has been validated in numerous studies; Zhang et al., (2021), for example, showed that convolutional layers in CNNs were crucial for accurately distinguishing subtle variations in plant leaf textures for disease detection.
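The operation above can be made concrete with a short pure-Python sketch. It implements valid-mode 2-D cross-correlation (the form deep learning frameworks actually compute under the name "convolution"); the 2x2 edge filter is a hypothetical hand-made example, not a learned AlexNet kernel:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation:
    S(i, j) = sum over (m, n) of I(i + m, j + n) * F(m, n)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + m][j + n] * kernel[m][n]
                 for m in range(kh) for n in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge filter responds only where pixel values change left-to-right.
img = [[0, 0, 1, 1]] * 4   # dark left half, bright right half
edge = [[-1, 1]] * 2       # hypothetical 2x2 edge-detecting kernel
fmap = conv2d(img, edge)   # strongest response at the 0 -> 1 boundary
```

In a trained network the kernel values are learned rather than hand-set, but the sliding-window arithmetic is exactly this.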
 
Rectified linear unit (ReLU) activation for non-linearity
 
After convolution, AlexNet uses the ReLU activation function, defined as:
 
f(x) = max(0, x)
 
       
This non-linear function sets negative values to zero, enhancing computational efficiency and mitigating the vanishing gradient problem (Krizhevsky et al., 2012). ReLU helps the model to capture complex patterns by introducing non-linearity, which linear activations would miss (Glorot et al., 2011).
       
In practical applications, ReLU has demonstrated superior performance in deep learning tasks. For instance, in a medical imaging study, ReLU was found to significantly improve convergence speed and model accuracy when used in CNN architectures for tumor classification (Litjens et al., 2017).
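The definition above reduces to a one-line function; a minimal sketch:

```python
def relu(x):
    """ReLU: f(x) = max(0, x). Negative inputs are clamped to zero,
    while positive inputs pass through with gradient 1, which helps
    keep gradients from vanishing in deep stacks."""
    return max(0.0, x)

# Applied element-wise to a layer's outputs:
activations = [relu(v) for v in [-2.0, -0.5, 0.0, 1.5]]
```

In practice frameworks apply this element-wise over whole tensors, but the scalar rule is identical.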
 
Pooling layers for dimensionality reduction
 
Max-pooling layers follow certain convolutional layers to reduce the spatial dimensions while retaining vital features, lowering computational demands. The max-pooling operation takes the maximum value within a k × k window, mathematically represented as:
 
P(i, j) = max{(m, n) ∈ k × k} S(m, n)
 
Where,
P(i, j) = The pooled output at position (i, j).
S(m, n) = The feature-map value within the window.
       
This operation enhances spatial invariance, enabling the model to focus on dominant features, such as edges, that are crucial for object recognition (Ranzato et al., 2007).
       
Pooling’s impact on dimensionality reduction and computational efficiency has been demonstrated in various fields. In environmental monitoring, for instance, CNN models with max-pooling were used to classify satellite images with high efficiency and accuracy, allowing the models to identify key land features while ignoring irrelevant details (Chollet, 2017).
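As a concrete sketch of the pooling operation: the version below uses non-overlapping k x k windows for simplicity (AlexNet itself uses overlapping 3x3 windows with stride 2, but the max-within-a-window idea is identical):

```python
def max_pool(fmap, k):
    """Non-overlapping k x k max-pooling:
    P(i, j) = max of S(m, n) over the (i, j)-th k x k window."""
    return [[max(fmap[i + m][j + n] for m in range(k) for n in range(k))
             for j in range(0, len(fmap[0]) - k + 1, k)]
            for i in range(0, len(fmap) - k + 1, k)]

feature_map = [[1, 3, 2, 0],
               [4, 6, 1, 1],
               [0, 2, 9, 5],
               [3, 1, 4, 7]]
pooled = max_pool(feature_map, 2)   # 4x4 -> 2x2, keeping each window's peak
```

Halving each spatial dimension quarters the data volume while preserving the strongest (most salient) response in each neighbourhood.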
 
Fully connected layers for high-level representation
 
AlexNet’s final layers are fully connected (FC), where each neuron connects to every neuron in the preceding layer. This structure enables complex relationships between features to be learned. The FC layer computes:
 
y = W · x + b
 
Where,
y = The output vector.
W = The weight matrix.
x = The input vector.
b = The bias.
       
The last FC layer applies a softmax function to convert output values into probabilities:
 
softmax(z_i) = e^(z_i) / Σ_j e^(z_j)
 
Where z_i is the i-th raw output value (logit) and the sum runs over all classes.
       
Softmax ensures a probability distribution, making it ideal for multi-class classification (Goodfellow et al., 2016). The importance of fully connected layers in classification was illustrated in a recent study by Abadi et al., (2016), which highlighted the role of FC layers in refining final decisions in image classification models used in autonomous driving.
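A minimal softmax sketch; subtracting the maximum logit before exponentiating is a standard numerical-stability trick and does not change the result:

```python
import math

def softmax(logits):
    """Convert raw scores z_i into probabilities e^(z_i) / sum_j e^(z_j)."""
    m = max(logits)                            # stability shift
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical class scores for (Angular Leaf Spot, Rust, Healthy):
probs = softmax([2.0, 1.0, 0.1])
# the outputs sum to 1 and preserve the ordering of the logits
```

The predicted class is then simply the index of the largest probability.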
 
Dropout regularization for overfitting prevention
 
To combat overfitting, AlexNet employs dropout in its fully connected layers. Dropout randomly “drops” a proportion of neurons during training. This forces the network to learn redundant representations, preventing co-adaptation of neurons and improving generalization. Dropout is mathematically represented as:
 
y_i = x_i · d_i
 
Where,
d_i = A binary variable that is zero with probability p, determining whether neuron x_i is dropped.
       
This regularization technique has proven effective in reducing overfitting, as demonstrated by Srivastava et al., (2014), who showed that dropout significantly improved CNN performance on tasks with limited data by enhancing model robustness.
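A sketch of (inverted) dropout: the original formulation rescales activations at test time, whereas the variant below rescales the kept activations by 1/(1 - p) during training; the two are equivalent in expectation and the inverted form is what most modern implementations use:

```python
import random

def dropout(x, p, training=True):
    """y_i = x_i * d_i, where d_i is 0 with probability p.
    Kept activations are scaled by 1/(1 - p) so the expected
    activation is unchanged; at test time the layer is the identity."""
    if not training:
        return list(x)
    return [0.0 if random.random() < p else v / (1.0 - p) for v in x]

layer = [0.5, 1.0, 2.0, 3.0]
noisy = dropout(layer, p=0.5)                  # roughly half the units zeroed
clean = dropout(layer, p=0.5, training=False)  # identity at inference
```

Because a different random mask is drawn on every forward pass, no single neuron can be relied upon, which is the mechanism behind the regularizing effect.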
 
Architecture of the model
 
The model’s architecture begins with a convolutional layer that utilizes 96 filters, each with an 11x11 kernel and a stride of 4. This is followed by a ReLU activation function and a 3x3 max-pooling layer with a stride of 2. Subsequent layers consist of additional convolutional layers featuring 256 filters with 5x5 kernels, as well as 384 filters with 3x3 kernels, all employing ReLU activations. Max-pooling is applied at various stages to reduce dimensionality.
       
The model incorporates fully connected layers with 4,096 neurons and ReLU activations. To mitigate overfitting, dropout regularisation is applied at a rate of 0.5. The training process lasts for 25 epochs with a batch size of 32, optimizing for cross-entropy loss and accuracy metrics. The final layer employs a softmax function for multi-class classification. Validation metrics are monitored throughout training to ensure the model generalizes well to unseen data. Table 1 outlines the structure and parameters of each layer in the AlexNet model. The model is depicted in Fig 3.
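The layer sizes can be cross-checked with the standard conv/pool output-size formula. The trace below assumes the padding values of the common torchvision variant of AlexNet (the original paper used 227x227 inputs), so treat the exact paddings as an assumption rather than a restatement of Table 1:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv or pool layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

s = 224
s = conv_out(s, 11, stride=4, pad=2)   # conv1 (96 filters)  -> 55
s = conv_out(s, 3, stride=2)           # max-pool 1          -> 27
s = conv_out(s, 5, pad=2)              # conv2 (256 filters) -> 27
s = conv_out(s, 3, stride=2)           # max-pool 2          -> 13
s = conv_out(s, 3, pad=1)              # conv3 (384 filters) -> 13
s = conv_out(s, 3, pad=1)              # conv4 (384 filters) -> 13
s = conv_out(s, 3, pad=1)              # conv5 (256 filters) -> 13
s = conv_out(s, 3, stride=2)           # max-pool 3          -> 6
flattened = s * s * 256                # 9216 inputs feed the first 4096-unit FC layer
```

Tracing shapes this way is a quick sanity check that a chosen input resolution is compatible with the stack of strides and kernels.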

Table 1: An outline of the structure and parameters of each layer in the AlexNet model.

The AlexNet model demonstrated promising results in classifying bean leaf diseases across the three categories. The accuracy and loss curves across epochs are shown in Fig 2. After training for 25 epochs, the model achieved a training accuracy of 95.4% with a training loss of 0.1401. However, the validation accuracy peaked at 78.2%, accompanied by a validation loss of 0.8140. This discrepancy suggests some overfitting, as indicated by the higher accuracy on the training set compared to the validation data.
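The loss values reported above come from cross-entropy, which for a single sample is simply the negative log of the probability assigned to the true class. A minimal sketch with hypothetical probability vectors:

```python
import math

def cross_entropy(probs, true_idx):
    """Per-sample cross-entropy: -log p(true class). Lower is better."""
    return -math.log(probs[true_idx])

# A confident correct prediction is cheap; a wrong-leaning one is expensive.
confident = cross_entropy([0.90, 0.05, 0.05], true_idx=0)   # about 0.105
uncertain = cross_entropy([0.20, 0.50, 0.30], true_idx=0)   # about 1.609
```

The reported training loss of 0.1401 therefore corresponds to the model assigning, on average, a high probability to the correct class, while the larger validation loss of 0.8140 reflects less confident (or wrong) predictions on unseen data.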

Fig 2: Receiver operating curves a) Training b) Validation.


       
Furthermore, the model’s performance was assessed using several key metrics. The Receiver Operating Characteristic (ROC) curve (Fig 3) illustrated the true positive rate across different classification thresholds, showcasing strong overall performance but with noticeable variations in sensitivity among the classes. The training accuracy increased steadily across epochs, reaching approximately 90% by the end of training, indicating the model’s efficiency in learning features from the training data. The training and validation loss curves further highlight this trend: the training loss decreased smoothly over time, indicating improved model fit on the training set.

Fig 3: Alexnet model used in the present study.


       
The confusion matrix (Fig 4) provides a comprehensive analysis of the model’s classification performance. It shows that the model performed well in categorizing most samples within each class, although some misclassifications were noted among similar categories. Specifically, the model successfully identified 33 samples of Angular Leaf Spot, but it encountered 7 false positives and 3 false negatives. In the case of Rust, 34 samples were accurately classified, with 4 false positives and 5 false negatives recorded. Notably, the Healthy class exhibited the best performance, achieving 39 correct predictions and only 3 false positives, which underscores the model’s effectiveness in distinguishing healthy leaves from those that are diseased.

Fig 4: Confusion matrix.


       
To further quantify the model’s performance, the evaluation metrics are provided in Table 2. For Angular Leaf Spot, the model achieved a precision of 0.8919, recall of 0.7674 and F1-score of 0.8250. The Rust category achieved a precision of 0.7727, recall of 0.7907 and F1-score of 0.7816. For the Healthy class, the precision, recall and F1-score were 0.8298, 0.9286 and 0.8764, respectively. The overall accuracy across all classes was 82.81%, with weighted averages of precision (0.8315), recall (0.8281) and F1-score (0.8273), reflecting a balanced performance in classifying leaf images.
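Each row of Table 2 follows mechanically from per-class true-positive, false-positive and false-negative counts. The sketch below uses illustrative counts (33 TP, 4 FP, 10 FN) chosen so the output matches the Angular Leaf Spot row; they demonstrate the arithmetic and are not a restatement of the confusion matrix:

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from one class's counts."""
    precision = tp / (tp + fp)   # of everything predicted as this class, how much was right
    recall = tp / (tp + fn)      # of everything truly this class, how much was found
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

p, r, f1 = prf(tp=33, fp=4, fn=10)
# p = 33/37 = 0.8919, r = 33/43 = 0.7674, f1 = 0.8250
```

Weighted averages are then obtained by weighting each class's metric by its support (number of true samples).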

Table 2: Classification metrics.


       
Additionally, the Matthews Correlation Coefficient (MCC) was calculated at 0.7444, indicating a strong positive correlation between predicted and actual classifications. This overall performance highlights AlexNet’s potential for bean disease classification, although the results suggest further optimization could enhance sensitivity, particularly between visually similar disease classes.
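The MCC reported above is the multi-class generalization; the binary form below shows the underlying idea, sketched under the assumption of a two-class confusion table (it ranges from -1 to 1, with 1 for perfect prediction and 0 for chance-level output):

```python
import math

def mcc(tp, tn, fp, fn):
    """Binary Matthews Correlation Coefficient:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

perfect = mcc(tp=10, tn=10, fp=0, fn=0)   # flawless classifier
chance = mcc(tp=5, tn=5, fp=5, fn=5)      # no better than guessing
```

Unlike accuracy, MCC uses all four confusion-table cells, which makes it robust to class imbalance; a value of 0.7444 therefore indicates genuinely strong agreement between predictions and labels.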
       
AlexNet has emerged as a powerful tool for detecting diseases in bean crops, leveraging its convolutional neural network architecture to analyze leaf images effectively. This approach is crucial for timely intervention in agricultural practices, as diseases like angular leaf spot and bean rust can significantly impact crop yield. The research indicates that AlexNet can achieve high accuracy rates, making it a reliable option for farmers to identify infected plants and take necessary actions. Several similar studies have reported comparable results. In one study, AlexNet achieved 99.7% accuracy on training datasets and 96.8% on test datasets for bean disease detection (Suma et al., 2023). In addition, AlexNet has been successfully applied across multiple datasets, showcasing its adaptability in identifying various plant diseases beyond beans (Jajoo and Jain, 2022). Some studies have proposed modifications to AlexNet to improve its performance in detecting plant diseases, such as adjusting for different image sizes and angles (Yeh et al., 2021).
This study demonstrated the efficacy of AlexNet in classifying bean leaf diseases, achieving an overall accuracy of 82.81% across three categories: Angular Leaf Spot, Rust and Healthy. While the architecture itself adheres to AlexNet’s standard design, its application to bean disease identification addresses a significant gap in agricultural disease management. The study showcases how a pre-trained AlexNet model, combined with tailored datasets and preprocessing, can offer a cost-effective and computationally manageable solution for farmers and agricultural stakeholders.
       
The research also highlighted the practical implications of deploying automated disease detection tools in resource-limited regions, emphasising their potential to enhance food security and reduce crop losses. By focusing on a specific crop and diseases with high global importance, this work extends AlexNet’s application portfolio and paves the way for future innovations in sustainable agriculture. Further optimisation, larger datasets and real-world testing are recommended to refine performance and enhance the scalability of such systems.
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided, but do not accept any liability for any direct or indirect losses resulting from the use of this content.
 
Authors’ contributions
 
All authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all the aspects of this work.
 
Funding details
 
No funding.
 
Availability of data and materials
 
Not applicable.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
Authors declare that all works are original and this manuscript has not been published in any other journal.
Authors declare that they have no conflict of interest.

  1. Abadi, M. et al. (2016). TensorFlow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 265-283.

  2. Akter, S., Haque, A., Vasker, N., Hasan, M., Ovi, J.A. and Islam, M. (2023). Beans disease detection using convolutional neural network. Proceedings of the 2023 IEEE International Conference on Big Data and Applications (IBDAP). pp 1-6. https://doi.org/10.1109/ibdap58581.2023.10271983. 

  3. Alom, M.Z., Hasan, M., Yakopcic, C., Taha, T.M. and Asari, V.K. (2018). Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. https://doi.org/10.48550/arXiv.1802.06955.  

  4. Al-Dosari, M.N.A. and Abdellatif, M.S. (2024). The environmental awareness level among saudi women and its relationship to sustainable thinking. Acta Innovations. 52: 28-42. https://doi.org/10.62441/ActaInnovations.52.4. 

  5. AlZubi, A., Al-Zu’bi, M. (2023). Application of artificial intelligence in monitoring of animal health and welfare. Indian Journal of Animal Research. 57(11): 1550-1555. doi: 10.18805/IJAR.BF-1698.     

  6. Beebe, S., Gonzalez, A.V. and Rengifo, J. (2000). Research on trace minerals in common bean. Food and Nutrition Bulletin. 21(4): 387-391.

  7. Broughton, W.J., Hernandez, G., Blair, M. and Beebe, S. (2003). Beans (Phaseolus spp.)-model food legumes. Plant and Soil. 252(1): 55-128.

  8. Chollet, F. (2017). Deep Learning with Python. Manning Publications.

  9. Ferentinos, K.P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture. 145: 311-318. https://doi.org/10.1016/j.compag.2018.01.009.

  10. Fiona, D. and Denish, A. (2024). Leveraging deep learning algorithms for the timely detection of diseases in bean leaves (Doctoral dissertation, Brac University).

  11. Food and Agriculture Organization (FAO). (2016). Pulses: Nutritious seeds for a sustainable future. FAO. [accessed online 12 Nov 2024]. https://openknowledge.fao.org/server/api/core/bitstreams/3adde3c2-79c7-4f94-81f0-75115726159c/content.

  12. Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep Sparse Rectifier Neural Networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. pp 315-323.

  13. Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep Learning. MIT Press.

  14. Jajoo, P. and Jain, M. (2022). Plant disease detection over multiple datasets using AlexNet. Proceedings of the 2022 ACM International Conference on Computer Vision and Pattern Recognition (CVPR). pp 1-8. https://doi.org/10.1145/3590837.3590838.

  15. Jeong, H.Y. and Na, I.S. (2024). Efficient faba bean leaf disease identification through smart detection using deep convolutional neural networks. Legume Research: An International Journal. 47(8). doi: 10.18805/LRF-798.

  16. Kamilaris, A. and Prenafeta-Boldú, F.X. (2018). Deep learning in agriculture: A survey. Computers and Electronics in Agriculture. 147: 70-90. https://doi.org/10.1016/j.compag.2018.02.016.

  17. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012). ImageNet classification with deep convolutional neural networks. Communications of the ACM. 60(6): 84-90. https://doi.org/10.1145/3065386.

  18. Kumar, L.S., Hariharasitaraman, S., Narayanasamy, K., Thinakaran, K., Mahalakshmi, J. and Pandimurugan, V. (2022). AlexNet approach for early stage Alzheimer’s disease detection from MRI brain images. Materials Today: Proceedings. 51: 58-65. https://doi.org/10.1016/j.matpr.2021.04.415.  

  19. Litjens, G. et al. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis. 42: 60-88. https://doi.org/10.1016/j.media.2017.07.005. 

  20. Liu, C., Cao, Y., Alcantara, M., Liu, B., Brunette, M., Peinado, J. and Curioso, W. (2017, September). TX-CNN: Detecting tuberculosis in chest X-ray images using convolutional neural network. In 2017 IEEE International Conference on Image Processing (ICIP). IEEE. (pp. 2314-2318). https://doi.org/10.1109/ICIP.2017.8296695.

  21. Lugito, N.P.H., Djuwita, R., Adisasmita, A. and Simadibrata, M. (2022). Blood pressure lowering effect of Lactobacillus-containing probiotic. International Journal of Probiotics and Prebiotics. 17(1): 1-13. https://doi.org/10.37290/ijpp2641-7197.17:1–13.

  22. Mallick, M.T., Biswas, S., Das, A.K., Saha, H.N., Chakrabarti, A. and Deb, N. (2023). Deep learning based automated disease detection and pest classification in Indian mung bean. Multimedia Tools and Applications. 82(8): 12017- 12041.  https://doi.org/10.1007/s11042-022-13673-7. 

  23. Pawlak, K. and Kołodziejczak, M. (2020). The role of agriculture in ensuring food security in developing countries: Considerations in the context of the problem of sustainable food production. Sustainability. 12(13): 5488. https://doi.org/10.3390/su12135488.

  24. Ranzato, M.A., Huang, F.J., Boureau, Y.L. and LeCun, Y. (2007). Unsupervised learning of invariant feature hierarchies with applications to object recognition. 2007 IEEE Conference on Computer Vision and Pattern Recognition.  pp 1-8.

  25. Sahu, P., Bhadoria, R. and Sahu, R. (2021). Deep learning models for beans crop diseases classification and visualization techniques. Computers and Electronics in Agriculture. 142: 215-223.

  26. Santiago, J.A., Nguyen, H. and Zepeda, M. (2023). CNN applications in crop disease classification: Addressing overfitting and class imbalance. Agricultural Informatics. 15(1): 45-59.

  27. Shoaib, M., Shah, B., Ei-Sappagh, S., Ali, A., Ullah, A., Alenezi, F. and Ali, F. (2023). An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science. 14: 1158933. https://doi.org/10.3389/fpls.2023.1158933.

  28. Simhadri, C.G., Kondaveeti, H.K., Vatsavayi, V.K., Mitra, A. and Ananthachari, P. (2024). Deep learning for rice leaf disease detection: A systematic literature review on emerging trends, methodologies and techniques. Information  Processing in Agriculture. 12(2): 151-168.

  29. Priyanka, S., Sahu, R. and Raj, T. (2021). Deep learning models for beans crop diseases classification and visualization techniques. Computers and Electronics in Agriculture. 142: 215-223. https://doi.org/10.1016/j.inpa.2024.04.006. 

  30. Singh, V., Chug, A. and Singh, A.P. (2023). Classification of beans leaf diseases using fine tuned CNN model. Procedia Computer Science. 218: 348-356. https://doi.org/10.1016/j.procs.2023.01.017.

  31. Sladojevic, S., Arsenovic, M., and Anderla, A. (2016). Deep neural networks-based recognition of plant diseases by leaf image classification. Computers and Electronics in Agriculture. 124: 224-231.

  32. Spanhol, F.A., Oliveira, L.S., Petitjean, C. and Heutte, L. (2016). Breast cancer histopathological image classification using convolutional neural networks. In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE. (pp. 2560-2567). https://doi.org/10.1109/IJCNN.2016.7727519.

  33. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov,  R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. 15: 1929-1958.

  34. Strange, R.N. and Scott, P.R. (2005). Plant disease: A threat to global food security. Annu. Rev. Phytopathol. 43(1): 83-116. https://doi.org/10.1146/annurev.phyto.43.113004.133839.

  35. Suma, S.A., Haque, A., Vasker, N., Hasan, M., Ovi, J.A. and Islam, M. (2023). Beans Disease Detection Using Convolutional Neural Network. 4th International Conference on Big Data Analytics and Practices (IBDAP), Bangkok, Thailand, 1- 5. https://doi.org/10.1109/IBDAP58581.2023.10271983

  36. Too, E.C., Yujian, L., Njuki, S. and Yingchun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture. 161: 272-279. https://doi.org/10.1016/j.compag.2018.03.032.

  37. Wagara, I.N., and Kimani, P.M. (2007). Resistance of nutrient-rich bean varieties to major biotic constraints in Kenya. African Crop Science Journal. 15(2): 93-99.

  38. Yeh, J.F., Wang, S.Y. and Chen, Y.P. (2021). Crop disease detection by image processing using modified AlexNet. Proceedings of the 2021 IEEE Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS). pp 1-5. https://doi.org/10.1109/ECBIOS51820.2021.9510426. 

  39. Yu, F., Zhang, Q., Xiao, J., Ma, Y., Wang, M., Luan, R. and Zhang, H. (2023). Progress in the application of CNN-based image classification and recognition in whole crop growth cycles. Remote Sensing. 15(12): 2988. https://doi.org/10.3390/rs15122988.

  40. Zhang, T., Sun, L. and Wu, Y. (2021). Applications of AlexNet in plant disease classification: A review. Computational Agriculture. 33(1): 46-56. https://doi.org/10.1109/ACCESS.2021.3069646.

  41. Zhuoxin, L., Cong, L., Linfan, D., Yanzhou, F., Xianyin, X., Huiying, M., Jun, Q., Liangliang, Z. (2022). 4. Improved AlexNet with Inception-V4 for plant disease diagnosis. Computational Intelligence and Neuroscience. doi: 10.1155/2022/ 5862600.

The Role of Deep Learning in Enhancing Crop Sustainability: A Study on AlexNet’s Application in Detecting Bean Leaf Disease

1Zhejiang Academy of Agricultural Sciences, 198 Shiqiao Road, Hangzhou City, Zhejiang Province, 310021, China.
2School of Electrical and Computer Engineering, Yeosu Campus, Chonnam National University, 50, Daehak-ro, Yeosu-si, Jeollanam-do, 59626, Republic of Korea.
  • Submitted 06-12-2024

  • Accepted 27-10-2025

  • First Online 30-10-2025

  • doi 10.18805/LRF-843

Background: Agriculture has always been the foundation of global food security but is challenged by crop diseases. Beans, a vital protein source, are particularly vulnerable to diseases caused by various pathogens, including bacteria and fungi. Accurate disease identification has become critical to meet the increasing demands of a growing population.

Methods: This study utilized the AlexNet convolutional neural network (CNN) to classify bean leaf images into three classes: two disease classes (Angular Leaf Spot and Rust) and one Healthy class. An open dataset containing bean leaf images belonging to all three classes was used to train the model. The images were first preprocessed and resized to 224x224 pixels for optimal model performance. The AlexNet model was trained for 25 epochs using cross-entropy loss, ReLU activation, max-pooling and dropout regularization.

Result: A training accuracy of 95.4% was achieved, while the validation accuracy was 78.2%. Other performance metrics, such as precision, recall and F1-score, highlighted strengths in identifying healthy leaves, with an overall accuracy of 82.81%. However, some misclassifications occurred between disease classes due to visual similarities. The results demonstrate AlexNet’s potential for automated plant disease detection, providing a scalable solution for enhancing agricultural practices and food security. Further optimization and integration with field applications are recommended for improved accuracy and usability.

Agriculture underpins global food security, as it directly impacts food availability for over 800 million undernourished people (Pawlak and Kołodziejczak, 2020; Beebe et al., 2000). It supports economic stability, particularly in developing countries where populations rely on locally produced staples. The globalization of agriculture highlights the need for resilient crop varieties to combat plant diseases, ensuring sustainable food production amidst growing challenges (Strange and Scott, 2005; Fiona and Denish, 2024). Among common crops, beans are particularly vulnerable to various diseases, which can greatly affect their productivity. Accurate and timely identification of plant diseases is crucial for preventing large-scale crop losses (Sahu et al., 2021; Mallick et al., 2023).
       
Beans are an essential part of diets worldwide, valued not only for their nutritional content but also for their role in food security and combating malnutrition. Rich in protein, fiber, vitamins and minerals, beans contribute significantly to the diets of populations in both developed and developing nations (Broughton et al., 2003). Particularly in regions where access to animal protein is limited, beans serve as an affordable and nutrient-dense alternative, helping address protein and micronutrient deficiencies that contribute to malnutrition (FAO, 2016). This makes beans a crucial crop for improving dietary quality and supporting sustainable agricultural systems, especially in areas with high food insecurity (Jeong and Na, 2024).
       
Despite their nutritional benefits, bean crops are susceptible to a number of diseases that can reduce yield, impacting both farmers and consumers. Among these, Angular Leaf Spot and Rust are two of the most prevalent diseases affecting beans, each posing significant threats to crop productivity (Wagara and Kimani, 2007). Accurate identification and timely intervention are essential to manage these diseases and ensure a stable supply of beans. However, traditional disease identification methods are often labour-intensive, requiring expert knowledge and specialized equipment that may not be readily available in rural or resource-limited areas (Simhadri et al., 2024; Shoaib et al., 2023).  This limitation underscores the need for accessible, efficient and automated methods of plant disease detection.
       
Recent developments in information and communication technology, particularly artificial intelligence (AI), have proven effective in addressing these challenges. Convolutional Neural Networks (CNNs), a subset of deep learning, have been widely adopted for image-based disease identification due to their high accuracy and ability to generalize across different datasets (Kamilaris and Prenafeta-Boldú, 2018; AlZubi et al., 2023; Al-Dosari et al., 2024; Lugito et al., 2022). AlexNet, a pioneering CNN architecture introduced by Krizhevsky et al. (2012), has proven particularly effective in large-scale image classification tasks. Its layered design, incorporating convolutional and fully connected layers with ReLU activations and dropout regularization, enhances model performance and reduces overfitting, making it well-suited for complex visual data such as plant disease images (Zhuoxin et al., 2022).
       
In this study, AlexNet was applied to classify bean leaf images into three classes: Angular Leaf Spot, Rust and Healthy. The aim was to assess the model’s performance in segregating healthy leaves from diseased leaves, using CNNs for crop health monitoring and disease management. 
 
Related work
 
AlexNet has been widely applied in image identification tasks due to its efficiency in handling complex image data, and it became the standard for visual-recognition tasks after its success in the ImageNet challenge in 2012 (Singh et al., 2023; Akter et al., 2023). Since its development, AlexNet has set a foundation for deep learning in image recognition, particularly due to its performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where it achieved unprecedented accuracy (Krizhevsky et al., 2012; Sladojevic and Anderla, 2016; Santiago et al., 2023). AlexNet’s architecture consists of five convolutional layers, interleaved with max-pooling layers, followed by three fully connected layers, which enable it to capture both spatial and hierarchical features of images. This model’s simplicity and effectiveness have led to its extensive use in diverse fields, particularly in the identification of diseases through image analysis.
       
AlexNet has shown significant promise in disease identification across various medical and agricultural applications. Litjens et al., (2017) highlighted AlexNet’s application across medical imaging modalities, demonstrating its capability to detect patterns indicative of diseases like cancer, liver cirrhosis and diabetic retinopathy. AlexNet has been utilized primarily for its architecture in medical image analysis tasks, making use of its ability to process large 2D CT slices for detecting interstitial patterns. The network employs large receptive fields in initial layers and smaller kernels in later layers, enhancing feature extraction. Additionally, AlexNet’s pre-trained models have demonstrated strong performance in various classification tasks, challenging human expert accuracy in some instances.
       
In the medical imaging domain, for example, AlexNet has been applied to identify and classify diseases in radiographs. As Alom et al. (2018) discuss, CNNs like AlexNet can efficiently process large datasets and distinguish complex patterns, making them suitable for diagnosing conditions such as pneumonia and skin cancer through radiological images. Similarly, AlexNet was employed to analyze MRI brain images for diagnosing Alzheimer’s disease at an early stage by extracting significant features indicative of Mild Cognitive Impairment (MCI). The model utilized transfer learning techniques and achieved a high accuracy of 98.35% in classifying the images, demonstrating its effectiveness in early detection (Kumar et al., 2022). Spanhol et al. (2016) utilized AlexNet to classify breast cancer histopathology images, achieving high diagnostic accuracy. The architecture was adapted to handle high-resolution histopathological images and demonstrated superior performance compared to simpler models like LeNet. Liu et al. (2017) highlighted that using AlexNet for diagnosing tuberculosis (TB) improved classification accuracy for various TB manifestations in chest X-ray images. It demonstrated outstanding performance in detecting TB, achieving an 85.68% classification accuracy, which surpassed previous methods. Additionally, the architecture’s ability to be fine-tuned with transfer learning from natural image datasets enhanced its effectiveness in medical image analysis.
       
In agriculture, AlexNet has been instrumental in diagnosing plant diseases, a critical task for food security. Plant diseases can often be identified by specific visual symptoms and CNNs like AlexNet are well-suited for capturing these subtleties. Ferentinos (2018) utilized AlexNet and demonstrated that CNNs can effectively distinguish healthy from diseased plants, even when the visual differences are subtle. Similarly, Too et al. (2019) compared various deep learning models, including AlexNet, for plant disease recognition and found that AlexNet performed competitively, further reinforcing its relevance in agricultural image identification. Specifically, in bean plants, AlexNet has been applied to detect common diseases, such as rust and bacterial blight, that visually manifest on leaves, showing that the network could distinguish between healthy leaves and those affected by diseases with high accuracy (Yu et al., 2023). Their work highlights AlexNet’s ability to detect subtle patterns and textures in leaf imagery, which are indicative of various diseases. In another study, AlexNet was implemented with transfer learning, in which pre-trained weights were fine-tuned and hyperparameters adjusted to enhance the model’s performance, achieving a classification accuracy of 95.31%.
       
In this study, AlexNet was used to analyse bean leaf images for the purpose of identifying diseased plants. The detailed methodology is given in the following section.
This study utilized the AlexNet deep learning model to classify bean leaf images into three categories: Angular Leaf Spot, Rust and Healthy. The dataset comprised 1,034 images, divided equally among the three classes (Fig 1). Images were resized to 224x224 pixels, a standard input size for AlexNet, which is designed to optimize the processing of large-scale images while reducing computational costs.
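The resizing step can be sketched as follows. This is an illustrative example rather than the authors' exact pipeline: only the 224x224 target size comes from the study, while the [0, 1] pixel scaling is an assumed (and common) convention, and the random image stands in for a real leaf photo.

```python
from PIL import Image
import numpy as np

def preprocess(img: Image.Image, size=(224, 224)) -> np.ndarray:
    """Resize a leaf photo to AlexNet's input size and scale pixels to [0, 1]."""
    img = img.convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0  # shape (224, 224, 3)

# Stand-in for a real bean leaf photo: a random 500x333 RGB image.
dummy = Image.fromarray(np.random.randint(0, 256, (333, 500, 3), dtype=np.uint8))
x = preprocess(dummy)
print(x.shape)  # (224, 224, 3)
```

In practice the same function would be mapped over every image in the dataset before training.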

Fig 1: Three classes used for training and evaluation: a) Angular leaf spot, b) Rust and c) Healthy.


       
AlexNet achieved remarkable accuracy in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). It demonstrated that deep CNNs could be applied effectively to large-scale image classification tasks, sparking widespread interest in deep learning models.
 
Mathematical basis of AlexNet
 
AlexNet operates through a series of mathematical operations designed to mimic human visual processing, enhancing the model’s capacity for feature extraction and classification. These operations include convolution, activation functions, pooling, fully connected layers and dropout regularization, each contributing uniquely to AlexNet’s effectiveness in image classification.
 
Convolutional layers for feature extraction
 
Convolutional layers in AlexNet apply filters (kernels) over the input image to generate feature maps, capturing spatial patterns such as edges and textures. This process is represented mathematically as:
 
S(i, j) = Σ_m Σ_n I(m, n) · F(i − m, j − n)
 
Where,
I(m, n) = A pixel value in the input image.
F = The filter (kernel).
S(i, j) = The output feature at position (i, j).
       
This operation detects features specific to an image region, making it essential for hierarchical pattern recognition (Krizhevsky et al., 2012).
       
Convolution’s effectiveness in identifying features has been validated in numerous studies, such as the study by Zhang et al. (2021), where convolutional layers in CNNs were crucial for accurately distinguishing subtle variations in plant leaf textures for disease detection.
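The sliding-window operation can be illustrated in a few lines of NumPy. This is a didactic sketch: deep-learning frameworks actually implement cross-correlation (the filter is not flipped) and run far faster, and the tiny image and edge filter below are invented for illustration.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2-D cross-correlation, the operation CNN convolutional layers apply."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # S(i, j): sum of elementwise products over the window at (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge, and a filter that responds to it.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1, 1],
                 [-1, 1]], dtype=float)
print(conv2d(img, edge))  # the middle column fires where the edge lies
```

The output is strongest exactly where the dark-to-light transition occurs, which is how early convolutional layers localize edges and textures.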
 
Rectified linear unit (ReLU) activation for non-linearity
 
After convolution, AlexNet uses the ReLU activation function, defined as:
 
f(x) = max(0, x)
 
       
This non-linear function sets negative values to zero, enhancing computational efficiency and mitigating the vanishing gradient problem (Krizhevsky et al., 2012). ReLU helps the model to capture complex patterns by introducing non-linearity, which linear activations would miss (Glorot et al., 2011).
       
In practical applications, ReLU has demonstrated superior performance in deep learning tasks. For instance, in a medical imaging study, ReLU was found to significantly improve convergence speed and model accuracy when used in CNN architectures for tumor classification (Litjens et al., 2017).
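A one-line implementation makes the thresholding behaviour concrete (the input values are illustrative):

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """f(x) = max(0, x): negative activations are set to zero."""
    return np.maximum(0, x)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))  # zeros for the negatives, positives pass through unchanged
```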
 
Pooling layers for dimensionality reduction
 
Max-pooling layers follow certain convolutional layers to reduce the spatial dimensions while retaining vital features, lowering computational demands. The max-pooling operation takes the maximum value within a k × k window, mathematically represented as:
 
P(i, j) = max_{(m, n) ∈ k × k} S(m, n)
 
Where,
P(i, j) = The pooled output at position (i, j).
S(m, n) = The feature-map values within the k × k window.
       
This operation enhances spatial invariance, enabling the model to focus on dominant features, such as edges, that are crucial for object recognition (Ranzato et al., 2007).
       
Pooling’s impact on dimensionality reduction and computational efficiency has been demonstrated in various fields. In environmental monitoring, for instance, CNN models with max-pooling were used to classify satellite images with high efficiency and accuracy, allowing the models to identify key land features while ignoring irrelevant details (Chollet, 2017).
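The windowed maximum can be sketched as below. Note this shows the simpler non-overlapping case (stride = k); AlexNet itself uses overlapping 3x3 windows with stride 2. The sample feature map is invented for illustration.

```python
import numpy as np

def max_pool(s: np.ndarray, k: int = 2) -> np.ndarray:
    """Non-overlapping k x k max-pooling: P(i, j) = max over the (i, j) window."""
    h, w = s.shape
    out = np.zeros((h // k, w // k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = s[i * k:(i + 1) * k, j * k:(j + 1) * k].max()
    return out

S = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 9, 2],
              [6, 3, 2, 8]], dtype=float)
print(max_pool(S))  # the 4x4 map shrinks to 2x2, keeping each window's peak
```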
 
Fully connected layers for high-level representation
 
AlexNet’s final layers are fully connected (FC), where each neuron connects to every neuron in the preceding layer. This structure enables complex relationships between features to be learned. The FC layer computes:
 
y = W·x + b
 
Where,
y = The output vector.
W = The weight matrix.
x = The input vector.
b = The bias vector.
       
The last FC layer applies a softmax function to convert output values into probabilities:
 
softmax(z_i) = e^{z_i} / Σ_j e^{z_j}
       
Softmax ensures a probability distribution, making it ideal for multi-class classification (Goodfellow et al., 2016). The importance of fully connected layers in classification was illustrated in a recent study by Abadi et al., (2016), which highlighted the role of FC layers in refining final decisions in image classification models used in autonomous driving.
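The two operations combine as follows. The layer sizes and random weights here are stand-ins for illustration, not the trained model's parameters:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Map raw class scores to probabilities that sum to 1."""
    e = np.exp(z - z.max())  # subtracting the max avoids overflow
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(8)       # feature vector from the previous layer
W = rng.standard_normal((3, 8))  # weights for 3 output classes
b = np.zeros(3)                  # bias vector

p = softmax(W @ x + b)           # y = W.x + b, then softmax
print(round(float(p.sum()), 6))  # 1.0
```

The predicted class is simply the index of the largest probability, e.g. `int(p.argmax())`.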
 
Dropout regularization for overfitting prevention
 
To combat overfitting, AlexNet employs dropout in its fully connected layers. Dropout randomly “drops” some proportion of neurons during training. This forces the network to learn redundant representations and helps improve generalization. Dropout is mathematically represented as:
 
y_i = x_i · d_i
 
Where,
d_i = A binary variable with probability p of being zero, determining whether neuron x_i is dropped.
       
This regularization technique has proven effective in reducing overfitting, as demonstrated by Srivastava et al., (2014), who showed that dropout significantly improved CNN performance on tasks with limited data by enhancing model robustness.
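The masking step can be sketched as below. The rescaling by 1/(1 − p) is "inverted dropout", the form most modern implementations use so that no scaling is needed at test time; the original formulation instead scales weights at inference. The vector of ones is illustrative.

```python
import numpy as np

def dropout(x: np.ndarray, p: float = 0.5, rng=None) -> np.ndarray:
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    rng = rng or np.random.default_rng()
    d = (rng.random(x.shape) >= p).astype(x.dtype)  # binary mask d_i
    return x * d / (1.0 - p)  # rescaling keeps the expected activation unchanged

x = np.ones(10_000)
y = dropout(x, p=0.5, rng=np.random.default_rng(42))
print((y == 0).mean())  # roughly 0.5: about half the units were dropped
```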
 
Architecture of the model
 
The model’s architecture begins with a convolutional layer that utilizes 96 filters, each with an 11x11 kernel and a stride of 4. This is followed by a ReLU activation function and a 3x3 max-pooling layer with a stride of 2. Subsequent layers consist of additional convolutional layers featuring 256 filters with 5x5 kernels, as well as 384 filters with 3x3 kernels, all employing ReLU activations. Max-pooling is applied at various stages to reduce dimensionality.
       
The model incorporates fully connected layers with 4,096 neurons and ReLU activations. To mitigate overfitting, dropout regularisation is applied at a rate of 0.5. The training process lasts for 25 epochs with a batch size of 32, optimizing for cross-entropy loss and accuracy metrics. The final layer employs a softmax function for multi-class classification. Validation metrics are monitored throughout training to ensure the model generalizes well to unseen data. Table 1 outlines the structure and parameters of each layer in the AlexNet model, and the model is depicted in the accompanying figure.

Table 1: An outline of the structure and parameters of each layer in the AlexNet model.
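The spatial dimensions implied by these layers can be checked with the standard output-size formula o = ⌊(i − k + 2p)/s⌋ + 1. The padding values below are assumptions taken from the widely used torchvision implementation of AlexNet, since the text does not state them:

```python
def out_size(i: int, k: int, s: int, p: int = 0) -> int:
    """Output size of a conv/pool layer: floor((i - k + 2p)/s) + 1."""
    return (i - k + 2 * p) // s + 1

size = 224
size = out_size(size, k=11, s=4, p=2)  # conv1 (96 filters)    -> 55
size = out_size(size, k=3, s=2)        # max-pool 1            -> 27
size = out_size(size, k=5, s=1, p=2)   # conv2 (256 filters)   -> 27
size = out_size(size, k=3, s=2)        # max-pool 2            -> 13
size = out_size(size, k=3, s=1, p=1)   # conv3-5 (384/384/256) -> 13
size = out_size(size, k=3, s=2)        # max-pool 3            -> 6
print(size, 256 * size * size)  # 6 9216: the features entering the first FC layer
```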

The AlexNet model demonstrated promising results in classifying bean leaf diseases across the three categories. The accuracy and loss curves across epochs are shown in Fig 2. After training for 25 epochs, the model achieved a training accuracy of 95.4% with a training loss of 0.1401. However, the validation accuracy peaked at 78.2%, accompanied by a validation loss of 0.8140. This discrepancy suggests some overfitting, as indicated by the higher accuracy on the training set compared to the validation data.

Fig 2: Receiver operating curves a) Training b) Validation.


       
Furthermore, the model’s performance was assessed using several key metrics. The Receiver Operating Characteristic (ROC) curve (Fig 3) illustrated the true positive rate across different classification thresholds, showcasing strong overall performance but with noticeable variations in sensitivity among the classes. The training accuracy increased steadily across epochs, reaching approximately 90% by the end of training, indicating the model’s efficiency in learning features from the training data. The training and validation loss curves further highlight this trend. The training loss decreased smoothly over time, indicating an improved model fit on the training set.

Fig 3: Alexnet model used in the present study.


       
The confusion matrix (Fig 4) provides a comprehensive analysis of the model’s classification performance. It shows that the model performed well in categorizing most samples within each class, although some misclassifications were noted among similar categories. Specifically, the model successfully identified 33 samples of Angular Leaf Spot, but it encountered 7 false positives and 3 false negatives. In the case of Rust, 34 samples were accurately classified, with 4 false positives and 5 false negatives recorded. Notably, the Healthy class exhibited the best performance, achieving 39 correct predictions and only 3 false positives, which underscores the model’s effectiveness in distinguishing healthy leaves from those that are diseased.

Fig 4: Confusion matrix.


       
To further quantify the model’s performance, the evaluation metrics are provided in Table 2. For Angular Leaf Spot, the model achieved a precision of 0.8919, recall of 0.7674 and F1-score of 0.8250. The Rust category achieved a precision of 0.7727, recall of 0.7907 and F1-score of 0.7816. For the Healthy class, the precision, recall and F1-score were 0.8298, 0.9286 and 0.8764, respectively. The overall accuracy across all classes was 82.81%, with weighted averages of precision (0.8315), recall (0.8281) and F1-score (0.8273), reflecting a balanced performance in classifying leaf images.

Table 2: Classification metrics.


       
Additionally, the Matthews Correlation Coefficient (MCC) was calculated at 0.7444, indicating a strong positive correlation between predicted and actual classifications. This overall performance highlights AlexNet’s potential for bean disease classification, although the results suggest further optimization could enhance sensitivity, particularly between visually similar disease classes.
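The per-class figures reported above can be reproduced directly from a confusion matrix. In the matrix below, the diagonal and the row/column totals are chosen to match the reported recalls and precisions, but the exact split of the off-diagonal errors is an assumption, since the text does not fully determine it:

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Precision, recall, F1 per class and overall accuracy.
    Rows of cm are true classes, columns are predicted classes."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # correct / all predicted as that class
    recall = tp / cm.sum(axis=1)      # correct / all truly in that class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, accuracy

# Classes: Angular Leaf Spot, Rust, Healthy (off-diagonal split assumed).
cm = np.array([[33, 9, 1],
               [2, 34, 7],
               [2, 1, 39]])
prec, rec, f1, acc = per_class_metrics(cm)
print(round(float(acc), 4))  # 0.8281, the overall accuracy reported above
```

The same matrix yields the reported Angular Leaf Spot precision (33/37 ≈ 0.8919) and Healthy recall (39/42 ≈ 0.9286).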
       
AlexNet has emerged as a powerful tool for detecting diseases in bean crops, leveraging its convolutional neural network architecture to analyze leaf images effectively. This approach is crucial for timely intervention in agricultural practices, as diseases like angular leaf spot and bean rust can significantly impact crop yield. The research indicates that AlexNet can achieve high accuracy rates, making it a reliable option for farmers to identify infected plants and take necessary actions. Many similar studies have reported comparable results. In one study, AlexNet achieved 99.7% accuracy on training datasets and 96.8% on test datasets for bean disease detection (Suma et al., 2023). In addition, AlexNet has been successfully applied across multiple datasets, showcasing its adaptability in identifying various plant diseases beyond beans (Jajoo and Jain, 2022). Some studies have proposed modifications to AlexNet to improve its performance in detecting plant diseases, such as adjusting for different image sizes and angles (Yeh et al., 2021).
This study demonstrated the efficacy of AlexNet in classifying bean leaf diseases, achieving an overall accuracy of 82.81% across three categories: Angular Leaf Spot, Rust and Healthy. While the architecture itself adheres to AlexNet’s standard design, its application to bean disease identification addresses a significant gap in agricultural disease management. The study showcases how a pre-trained AlexNet model, combined with tailored datasets and preprocessing, can offer a cost-effective and computationally manageable solution for farmers and agricultural stakeholders.
       
The research also highlighted the practical implications of deploying automated disease detection tools in resource-limited regions, emphasising their potential to enhance food security and reduce crop losses. By focusing on a specific crop and diseases with high global importance, this work extends AlexNet’s application portfolio and paves the way for future innovations in sustainable agriculture. Further optimisation, larger datasets and real-world testing are recommended to refine performance and enhance the scalability of such systems.
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided, but do not accept any liability for any direct or indirect losses resulting from the use of this content.
 
Authors’ contributions
 
All authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all the aspects of this work.
 
Funding details
 
No funding.
 
Availability of data and materials
 
Not Applicable
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
Authors declare that all works are original and this manuscript has not been published in any other journal.
Authors declare that they have no conflict of interest.

  1. Abadi, M. et al. (2016). TensorFlow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 265-283.

  2. Akter, S., Haque, A., Vasker, N., Hasan, M., Ovi, J.A. and Islam, M. (2023). Beans disease detection using convolutional neural network. Proceedings of the 2023 IEEE International Conference on Big Data and Applications (IBDAP). pp 1-6. https://doi.org/10.1109/ibdap58581.2023.10271983. 

  3. Alom, M.Z., Hasan, M., Yakopcic, C., Taha, T.M. and Asari, V.K. (2018). Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. https://doi.org/10.48550/arXiv.1802.06955.  

  4. Al-Dosari, M.N.A. and Abdellatif, M.S. (2024). The environmental awareness level among saudi women and its relationship to sustainable thinking. Acta Innovations. 52: 28-42. https://doi.org/10.62441/ActaInnovations.52.4. 

  5. AlZubi, A., Al-Zu’bi, M. (2023). Application of artificial intelligence in monitoring of animal health and welfare. Indian Journal of Animal Research. 57(11): 1550-1555. doi: 10.18805/IJAR.BF-1698.     

  6. Beebe, S., Gonzalez, A.V. and Rengifo, J. (2000). Research on trace minerals in common bean. Food and Nutrition Bulletin. 21(4): 387-391.

  7. Broughton, W.J., Hernandez, G., Blair, M. and Beebe, S. (2003). Beans (Phaseolus spp.)-model food legumes. Plant and Soil. 252(1): 55-128.

  8. Chollet, F. (2017). Deep Learning with Python. Manning Publications.

  9. Ferentinos, K. P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture. 145: 311-318. https://doi.org/10.1016/ j.compag.2018.01.009.   

  10. Fiona, D. and Denish, A. (2024). Leveraging deep learning algorithms for the timely detection of diseases in bean leaves (Doctoral dissertation, Brac University).

  11. Food and Agriculture Organization (FAO). (2016). Pulses: Nutritious seeds for a sustainable future. FAO. [accessed online 12 nov 2024]. https://openknowledge.fao.org/server/api/ core/bitstreams/3adde3c2-79c7-4f94-81f0-75115726 159c/content. 

  12. Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep Sparse Rectifier Neural Networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. pp 315-323.

  13. Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep Learning. MIT Press.

  14. Jajoo, P. and Jain, M. (2022). Plant disease detection over multiple datasets using AlexNet. Proceedings of the 2022 ACM International Conference on Computer Vision and Pattern Recognition (CVPR). pp 1-8. https://doi.org/ 10.1145/3590837.3590838. 

  15. Jeong, H.Y. and Na, I.S. (2024). Efficient faba bean leaf disease identification through smart detection using deep convolutional neural networks. Legume Research: An International Journal. 47(8). doi: 10.18805/LRF-798.

  16. Kamilaris, A. and Prenafeta-Boldú, F.X. (2018). Deep learning in agriculture: A survey. Computers and Electronics in Agriculture. 147: 70-90. https://doi.org/10.1016/j.compag. 2018.02.016. 

  17. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012). ImageNet classification with deep convolutional neural networks. Communications of the ACM. 60(6): 84-90. https:// doi.org/10.1145/3065386. 

  18. Kumar, L.S., Hariharasitaraman, S., Narayanasamy, K., Thinakaran, K., Mahalakshmi, J. and Pandimurugan, V. (2022). AlexNet approach for early stage Alzheimer’s disease detection from MRI brain images. Materials Today: Proceedings. 51: 58-65. https://doi.org/10.1016/j.matpr.2021.04.415.  

  19. Litjens, G. et al. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis. 42: 60-88. https://doi.org/10.1016/j.media.2017.07.005. 

  20. Liu, C., Cao, Y., Alcantara, M., Liu, B., Brunette, M., Peinado, J. and Curioso, W. (2017, September). TX-CNN: Detecting tuberculosis in chest X-ray images using convolutional neural network. In 2017 IEEE International Conference on Image Processing (ICIP). IEEE. (pp. 2314-2318). https:/ /doi.org/10.1109/ICIP.2017.8296695.

  21. Lugito, N.P.H., Djuwita, R., Adisasmita, A. and Simadibrata, M. (2022). Blood pressure lowering effect of Lactobacillus- containing probiotic. International Journal of Probiotics and Prebiotics. 17(1): 1-13. https://doi.org/10.37290/ ijpp2641-7197.17:1–13.

  22. Mallick, M.T., Biswas, S., Das, A.K., Saha, H.N., Chakrabarti, A. and Deb, N. (2023). Deep learning based automated disease detection and pest classification in Indian mung bean. Multimedia Tools and Applications. 82(8): 12017- 12041.  https://doi.org/10.1007/s11042-022-13673-7. 

  23. Pawlak, K. and Kołodziejczak, M. (2020). The role of agriculture in ensuring food security in developing countries: Considerations in the context of the problem of sustainable food production. Sustainability. 12(13): 5488. https://doi.org/10.3390/su12135488.

  24. Ranzato, M.A., Huang, F.J., Boureau, Y.L. and LeCun, Y. (2007). Unsupervised learning of invariant feature hierarchies with applications to object recognition. 2007 IEEE Conference on Computer Vision and Pattern Recognition.  pp 1-8.

  25. Sahu, P., Bhadoria, R. and Sahu, R. (2021). Deep learning models for beans crop diseases classification and visualization techniques. Computers and Electronics in Agriculture. 142: 215-223.

  26. Santiago, J.A., Nguyen, H. and Zepeda, M. (2023). CNN applications in crop disease classification: Addressing overfitting and class imbalance. Agricultural Informatics. 15(1): 45-59.

  27. Shoaib, M., Shah, B., Ei-Sappagh, S., Ali, A., Ullah, A., Alenezi, F. and Ali, F. (2023). An advanced deep learning models- based plant disease detection: A review of recent research. Frontiers in Plant Science. 14: 1158933.  https://doi.org/10.3389/fpls.2023.1158933. 

  28. Simhadri, C.G., Kondaveeti, H.K., Vatsavayi, V.K., Mitra, A. and Ananthachari, P. (2024). Deep learning for rice leaf disease detection: A systematic literature review on emerging trends, methodologies and techniques. Information  Processing in Agriculture. 12(2): 151-168.

  29. Priyanka, S., Sahu, R. and Raj, T. (2021). Deep learning models for beans crop diseases classification and visualization techniques. Computers and Electronics in Agriculture. 142: 215-223. https://doi.org/10.1016/j.inpa.2024.04.006. 

  30. Singh, V., Chug, A. and Singh, A. P. (2023). Classification of beans leaf diseases using fine tuned cnn model. Procedia Computer Science. 218: 348-356. https://doi.org/10.1016/ j.procs.2023.01.017.  

  31. Sladojevic, S., Arsenovic, M., and Anderla, A. (2016). Deep neural networks-based recognition of plant diseases by leaf image classification. Computers and Electronics in Agriculture. 124: 224-231.

  32. Spanhol, F.A., Oliveira, L.S., Petitjean, C. and Heutte, L. (2016). Breast cancer histopathological image classification using convolutional neural networks. In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE. (pp. 2560-2567). 10.1109/IJCNN.2016.7727519. 

  33. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov,  R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. 15: 1929-1958.

  34. Strange, R.N. and Scott, P.R. (2005). Plant disease: a threat to global food security. Annu. Rev. Phytopathol. 43(1): 83- 116.https://doi.org/10.1146/annurev.phyto.43.113004.13 3839.

  35. Suma, S.A., Haque, A., Vasker, N., Hasan, M., Ovi, J.A. and Islam, M. (2023). Beans Disease Detection Using Convolutional Neural Network. 4th International Conference on Big Data Analytics and Practices (IBDAP), Bangkok, Thailand, 1- 5. https://doi.org/10.1109/IBDAP58581.2023.10271983

  36. Too, E. C., Yujian, L., Njuki, S. and Yingchun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture. 161: 272-279. https://doi.org/10.1016/ j.compag.2018.03.032.  

  37. Wagara, I.N., and Kimani, P.M. (2007). Resistance of nutrient-rich bean varieties to major biotic constraints in Kenya. African Crop Science Journal. 15(2): 93-99.

  38. Yeh, J.F., Wang, S.Y. and Chen, Y.P. (2021). Crop disease detection by image processing using modified AlexNet. Proceedings of the 2021 IEEE Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS). pp 1-5. https://doi.org/10.1109/ECBIOS51820.2021.9510426. 

  39. Yu, F., Zhang, Q., Xiao, J., Ma, Y., Wang, M., Luan, R. and Zhang, H. (2023). Progress in the application of CNN-based image classification and recognition in whole crop growth cycles. Remote Sensing. 15(12): 2988. https://doi.org/ 10.3390/rs15122988. 

  40. Zhang, T., Sun, L. and Wu, Y. (2021). Applications of AlexNet in plant disease classification: A review. Computational Agriculture. 33(1): 46-56. https://doi.org/10.1109/ ACCESS.2021.3069646.  

  41. Zhuoxin, L., Cong, L., Linfan, D., Yanzhou, F., Xianyin, X., Huiying, M., Jun, Q. and Liangliang, Z. (2022). Improved AlexNet with Inception-V4 for plant disease diagnosis. Computational Intelligence and Neuroscience. https://doi.org/10.1155/2022/5862600.
Published In
Legume Research
