Legume Research

  • Chief Editor: J. S. Sandhu

  • Print ISSN 0250-5371

  • Online ISSN 0976-0571

  • NAAS Rating 6.80

  • SJR 0.391

  • Impact Factor 0.8 (2024)

Frequency:
Monthly (January, February, March, April, May, June, July, August, September, October, November and December)
Indexing Services:
BIOSIS Preview, ISI Citation Index, Biological Abstracts, Elsevier (Scopus and Embase), AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus

Classification and Severity Level Assessment of Fusarium Wilt Disease in Chickpeas using Convolutional Neural Network

Ahmad Ali AlZubi1,*, Radha Raghuramapatruni2, Pushpa Kumari2
  • https://orcid.org/0000-0001-8477-8319
1Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia.
2GITAM School of Business, GITAM Deemed to be University, Visakhapatnam-530 045, Andhra Pradesh, India.
  • Submitted: 27-03-2024

  • Accepted: 22-06-2024

  • First Online: 22-07-2024

  • DOI: 10.18805/LRF-807

Background: Fusarium wilt is a common disease of chickpea that causes economic losses for farmers due to decreased crop yield. Early disease detection and the implementation of suitable precautions can help to increase the yield of chickpeas. This study offers an improved method for predicting Fusarium wilt disease severity using a convolutional neural network.

Methods: A Convolutional Neural Network (CNN) model is utilized in this work to identify leaf disease due to wilting. The dataset contains 4,339 images of chickpea leaves obtained from Kaggle. After preprocessing, the data are fed into the network model for training. The model shows acceptable classification and accuracy metrics.

Result: Deep learning methods are useful tools for tracking leaf diseases at their early stages and can help farmers apply control measures. The proposed work looks for changes in the shape and color of chickpea leaves in order to predict the severity of Fusarium wilt disease. Training and validation accuracies show a balanced trade-off, giving satisfactory outcomes. The model achieves an overall accuracy of 74.79%. The confusion matrix and classification metrics further characterize the model's performance.

The agriculture industry represents a major and novel context for researchers and experts in the field of computer vision. Producing a large variety of valuable and significant crops and plants is the main objective of agriculture. Plant diseases must be managed at an early stage in farming since they reduce the quality and quantity of production (Maddikunta et al., 2021). Diseases of different fruits and crops have lately drawn the attention of agricultural experts, and researchers have developed many strategies for identifying and categorizing fruit and crop diseases (Hang et al., 2019; Basha et al., 2020). The most common method for diagnosing diseases is human inspection; however, this approach has many drawbacks, including cost, time, availability and labor-intensiveness. The surfaces of leaves and fruits are where many bacterial and fungal infections first show symptoms. Among the most important cool-season food crops in the world, chickpeas (Cicer arietinum) are mostly cultivated in drylands (Tadesse, 2017). Their poor productivity can be attributed to many biotic and abiotic stressors. The two biotic stressors that have the most impact on chickpea output are soil-borne and foliar diseases. Mycoplasmas, nematodes, bacteria, viruses and fungi are among the pathogens that harm chickpeas and cause significant economic losses worldwide. Fungi, which affect chickpea roots, stems, leaves and pods, are the most significant of these (Tadesse, 2017).
       
Traditionally, farmers or other specialists would manually check plants to detect plant diseases. Diagnosing plant diseases by visual inspection of the symptoms on plant leaves involves a very high level of complexity (Ferentinos, 2018). Laboratory-based techniques, including fluorescence in-situ hybridization (FISH), immunofluorescence (IF) and polymerase chain reaction (PCR), require mass sample work and specialized laboratory equipment (Fang and Ramasamy, 2015). The complicated nature of plant diseases, along with the vast array of domesticated plants and their phytopathological issues, renders manual disease detection methods costly and time-consuming (Sharma et al., 2020). In contrast, the images analyzed here were captured with cameras that operate in the visible range of the electromagnetic spectrum, i.e., wavelengths from 400 to 700 nm. In this approach, the input data can be acquired without expensive machinery or skilled labor (Taheri-Garavand et al., 2021). Thus, data may be obtained quickly, cheaply and conveniently (i.e., in situ) by future users of the created protocol. Meanwhile, deep learning (DL) methods describe a family of machine learning (ML) algorithms that are effective for data analysis and image processing, since they can automatically extract features from input images in order to diagnose diseases in plants. Deep learning has several uses in agriculture (Patil and Pawar, 2017). Early disease identification is therefore crucial for farmers, as it aids in the prevention or control of disease transmission. Using a range of image processing, deep learning and machine learning approaches, several researchers have developed models for the classification and detection of diseases in plants. LeCun and Bengio (1995) applied Convolutional Neural Networks (CNNs) to image classification, an approach that later benefited from the advancement of computer systems with embedded Graphical Processing Units (GPUs). CNNs belong to the family of stacked feedforward neural networks (LeCun and Bengio, 1995).
       
In this work, an image detector for the classification of chickpea leaf diseases, based on a lightweight and precise CNN, is implemented on a low-cost, low-power core while maintaining high performance in terms of inference time and accuracy. The dataset is taken from the Kaggle database. The model gained good accuracy after training for 150 epochs. Model performance is evaluated with a confusion matrix and a classification report.
 
Review of literature
 
Many machine learning (ML) algorithms, such as deep learning (DL) and image processing methods, have been widely utilized in the detection of disease in plants. A sequential CNN model was used by Militante et al. (2019) to identify the healthy sugarcane leaf class and six types of sugarcane leaf disease. To train their model, the researchers utilized a dataset consisting of 13,842 sugarcane images. Since the disease also manifests in other sections of the plant, the authors' consideration of sugarcane and potato leaves alone was insufficient. Amara et al. (2017) developed a DL technique to automate the classification of two common banana diseases, banana speckle and banana Sigatoka. They made use of an actual dataset of banana disease cases gathered via the Plant Village initiative. They employed a preprocessing phase whereby every image in the collection was converted to grayscale and resized to 60×60 pixels. The authors employed CNN's LeNet architecture for feature extraction and the fully connected layer of the CNN thereafter handled the classification phase. Apart from healthy leaves, the model can distinguish between two forms of leaf disease. However, overfitting was a problem with the LeNet architecture. One issue that DNNs frequently face, because of their huge number of parameters, is a propensity to overfit the training set, which makes them incapable of generalization (Alom et al., 2018). Other issues associated with CNNs include the selection of an architecture tailored to a particular task and the interpretability of the training results (the black-box effect). LeCun et al. (2015) and Goodfellow et al. (2016) provide further information regarding CNNs. In their review, Golhani et al. (2018) outlined the advantages and disadvantages of using hyperspectral data for the diagnosis of plant leaf diseases. They also briefly introduced neural network (NN) techniques for spectral disease index (SDI) development. They found that tests of SDIs on a variety of hyperspectral sensors at the leaf scale are necessary as long as they are relevant for appropriate crop protection. With an emphasis on potato leaf disease, Bangari et al. (2022) provided a review of disease detection with CNNs. After looking over several studies, they concluded that convolutional neural networks are more effective in identifying the disease and significantly improved the highest degree of disease identification accuracy. Mukti and Biswas (2019) presented a transfer learning model with ResNet50 as a basis for plant disease diagnosis. Their dataset has 87,867 images, of which twenty percent was utilized for validation and the remaining eighty percent for training; their best accuracy was 99.80%. Arya and Singh (2019) suggested that CNN is a capable model that can accurately identify a wide range of plant diseases. To diagnose diseases in potato and mango leaves, they presented an approach based on pre-trained models to evaluate several CNN architectures (e.g., AlexNet, shallow CNN). The method was deemed more successful with AlexNet (up to 98.33%) than with a shallow CNN, which achieved an accuracy of just 90.85%. Deep learning is another term for the multi-layer CNN-based image classification technique (LeCun et al., 2015; Li et al., 2021; Li et al., 2021; Kim and AlZubi, 2024; Min et al., 2024; Porwal et al., 2024; Wasik and Pattinson, 2024; Maltare et al., 2023).
Because CNNs are effective at characterizing and learning, they are mostly utilized for feature extraction and the features that are extracted exhibit translation invariance. The earliest CNNs were LeNet-5 and the time-delay neural network, which were studied in the 1980s and 1990s (LeCun and Bengio, 1995).
Dataset information
 
The study uses a Kaggle dataset that was carefully selected to enable testing with images showing healthy and diseased chickpea plants. Following Hayit et al. (2023), the dataset categorizes the images into classes based on severity level: 1(HR) indicates Highly Resistant (0%-10% plant wilt), 3(R) indicates Resistant (11%-20% plant wilt), 5(MR) indicates Tolerant/Moderately Resistant (21%-30% plant wilt), 7(S) indicates Susceptible (31%-50% plant wilt) and 9(HS) indicates Highly Susceptible (over 51% plant wilt).
       
The variety of chickpea plant conditions in the dataset makes it an invaluable tool for investigating and creating efficient classification algorithms. This dataset, a substantial collection of 4,339 leaf images, carefully classifies each image by degree of Fusarium wilt disease severity in chickpea plants. Of these, 959 images show highly resistant conditions, while 1,177 images show resistant conditions. Furthermore, 1,133 images depict tolerant or moderately resistant conditions, 558 images depict susceptible conditions and 512 images depict highly susceptible conditions. Models intended to identify Fusarium wilt disease in chickpea plants can be trained and evaluated using this rich and varied dataset, which carefully captures a range of severity levels. Fig 1 represents the severity of the disease in chickpea plants; the class scheme is summarized in the sketch after Fig 1.
 

Fig 1: Severity of Fusarium wilt disease in chickpea plants.


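For concreteness, the five-class scheme and the image counts reported above can be laid out as a small lookup table. The snippet below is purely an illustrative sketch in Python; the dictionary and its field names are not part of the published pipeline.

```python
# Illustrative summary of the severity classes described above.
# Labels, wilt ranges and image counts are taken from the text;
# the dictionary itself is only a convenient representation.
SEVERITY_CLASSES = {
    "1(HR)": {"name": "Highly Resistant",              "wilt": "0-10%",  "images": 959},
    "3(R)":  {"name": "Resistant",                     "wilt": "11-20%", "images": 1177},
    "5(MR)": {"name": "Tolerant/Moderately Resistant", "wilt": "21-30%", "images": 1133},
    "7(S)":  {"name": "Susceptible",                   "wilt": "31-50%", "images": 558},
    "9(HS)": {"name": "Highly Susceptible",            "wilt": ">51%",   "images": 512},
}

assert sum(c["images"] for c in SEVERITY_CLASSES.values()) == 4339  # total dataset size
```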
 
Convolutional neural network (CNN) model process
 
In this work, a sequential CNN model is utilized for detecting the Fusarium wilt disease in chickpea leaves. Before the training process, image preprocessing (resizing, normalization and augmentation) of leaf images is needed to increase the accuracy of the CNN algorithms. 
 
Image preprocessing (Resizing, normalization and augmentation)
 
Due to the many sources and conditions under which images of infected chickpeas were obtained, noise is inherent in the dataset, which may affect the resolution of the images and the identification outputs of the model. This noise is reduced by preprocessing each image before sending it into the model's convolution and max pooling layers.
       
These procedures, which together aim to reduce the disruptive effect of noise, include image scaling, normalization and noise filtering. Improving the model's predictive power is the main goal of these preprocessing phases. All of the images in the collection were resized to 256×256 pixels and processed in batches of 32. Normalization is essential for effective machine learning (ML) training because pixel values span from 0 to 255; normalizing them to lie between 0 and 1 avoids learning slowdowns caused by large integer values in the input image. Noise filtering is a crucial step that deals with image corruption caused by decoding errors or by positive and negative signals carried over noisy channels; the selected filtering method successfully reduces image noise. Expanding the dataset through augmentation is essential because it introduces small visual distortions that prevent overfitting during training (Fig 2). This also improves testing performance, particularly when assessing the model's performance on rotated images.
 

Fig 2: Resized (256x256) and augmented image of chickpea plant.


       
In addition, the dataset is split into training and test sets in an 80:20 ratio. Twenty percent of the dataset is set aside to evaluate the model's performance and the remaining eighty percent is used to train the proposed algorithm. The validation dataset is used to continuously evaluate the model during training, which is crucial for optimizing performance by fine-tuning hyperparameters.
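A minimal sketch of this input pipeline in TensorFlow/Keras is given below. Only the 256×256 image size, the batch size of 32, the rescaling to [0, 1], the use of augmentation and the 80:20 split are taken from the text; the directory layout and the specific augmentation layers are assumptions made for illustration.

```python
import tensorflow as tf

IMG_SIZE, BATCH = 256, 32  # image size and batch size as described above

# Hypothetical directory layout: one sub-folder per severity class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chickpea_leaves/", validation_split=0.2, subset="training",
    seed=42, image_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chickpea_leaves/", validation_split=0.2, subset="validation",
    seed=42, image_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH)

# Rescale pixel values from [0, 255] to [0, 1]; apply light augmentation
# (the exact augmentations used in the study are not specified).
rescale = tf.keras.layers.Rescaling(1.0 / 255)
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

train_ds = train_ds.map(lambda x, y: (augment(rescale(x), training=True), y))
val_ds = val_ds.map(lambda x, y: (rescale(x), y))
```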
 
Feature learning and classification
 
Feature extraction was utilized to automatically extract important features from the collected images using the sequential CNN model. This allowed images to be classified easily into the Fusarium wilt-related classes (1(HR), 3(R), 5(MR), 7(S) and 9(HS)). Within the model's step-by-step structure, there are discrete feature extraction layers. The first convolution layer, conv2d, applies 64 filters of size 3×3 and produces an output shape of (32, 256, 256, 64). Max pooling with a 2×2 pool size then yields an output shape of (32, 128, 128, 64). The process of extracting features and reducing spatial dimensions involves the application of filters and max pooling operations over successive convolutional layers. Lastly, the feature maps are flattened and passed through fully connected layers with a softmax output for classification.
       
A representation of the CNN model's convolution and max pooling layers is shown in Fig 3. Table 1 lists the hyperparameters used for model execution. The input to the fully connected layers is the flattened output (32, 2048) from the earlier layers. The dense layer has 64 neurons and the dense_1 layer has 5 neurons corresponding to the five classes. The Rectified Linear Unit (ReLU) is used to introduce nonlinearity and is selected for deep Convolutional Neural Network (CNN) training because of its computational efficiency. The ReLU activation function keeps non-negative inputs unchanged and outputs zero for negative inputs, allowing the model to train effectively. The CNN model is trained on the categorized training images as part of the model training procedure and the extracted features are then used in the classification process. A layer stack consistent with these shapes is sketched after Table 1.
 

Fig 3: Feature learning and classification using sequential CNN model.


 

Table 1: Model summary of used hyperparameters.


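The layer stack can be sketched in Keras as follows. Only the first convolution (64 filters, 3×3, 'same' padding), the 2×2 max pooling, the flattened length of 2048, the 64-neuron dense layer and the 5-way softmax output are stated in the text; the number and width of the intermediate blocks below are assumptions chosen so that the flattened output matches the reported (32, 2048).

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(256, 256, 3)),
    # First block as reported: 64 filters of 3x3 give (batch, 256, 256, 64),
    # then 2x2 max pooling reduces this to (batch, 128, 128, 64).
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    # Intermediate blocks (assumed): repeated conv + 2x2 pooling until the
    # final feature map is 4 x 4 x 128, i.e. a flattened length of 2048.
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),   # -> 64 x 64
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),   # -> 32 x 32
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),   # -> 16 x 16
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),   # -> 8 x 8
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),   # -> 4 x 4
    layers.Flatten(),                         # (batch, 2048)
    layers.Dense(64, activation="relu"),      # dense layer with 64 neurons
    layers.Dense(5, activation="softmax"),    # 5 severity classes
])
model.summary()
```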
       
In the suggested model, a SoftMax classification layer is introduced after the CNN layers to identify the probability associated with the predicted label of chickpea disease. It generates output values between 0 and 1 that sum to one, representing the estimated probability that the input image belongs to each particular class. SoftMax has the following benefits: it is suitable for accepting the output from the final fully connected layer; it is rapid to train and predict with; and it is simple to define the output probability range. There is no standard approach for determining appropriate hyperparameters; therefore, finding the best values for the model takes a lot of trial and error. The hyperparameter values, which include the learning rate, loss function, number of epochs, batch size and optimization algorithm, are set before training.
       
Filters, sometimes referred to as kernels, are used in convolution operations to systematically extract information from overlapping regions of an input image or feature map. Mathematically, the filter elements are multiplied by the matching input image elements and the results are summed. The dynamics of a two-dimensional convolution operation, via the interaction of an input image (I) and a kernel (K), can be expressed as:

$$S(i, j) = (I * K)(i, j) = \sum_{x} \sum_{y} I(i + x,\, j + y)\, K(x, y)$$
 
In this case, the coordinates of the image (I) are represented by i and j, whereas the coordinates of the kernel (K) are indicated by x and y. To reduce the probability of overfitting and improve computational efficiency, a max pooling layer is a strategic addition to the model. A fully connected layer is added to the model thereafter, which is responsible for classifying images according to the patterns identified in earlier layers. A softmax function is used in this layer to help classify the incoming data reliably and understandably. The softmax function converts the numerical values x1, x2, x3, …, xn of the neurons in the preceding layer into probabilities Q1, Q2, Q3, …, Qn:
 
$$Q_k = \frac{e^{x_k}}{\sum_{j=1}^{n} e^{x_j}}, \qquad k = 1, \dots, n$$
xk = numerical value of the k-th neuron in the preceding layer.
j = index running over all n neurons of that layer.
Qk = denotes the probability of class k.
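As a concrete illustration, both operations can be written in a few lines of NumPy. This is a didactic sketch of the two equations above, not the library code used in the study.

```python
import numpy as np

def conv2d(image, kernel):
    """2-D convolution as used in CNNs: S(i, j) = sum_x sum_y I(i+x, j+y) K(x, y)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    """Q_k = exp(x_k) / sum_j exp(x_j); shifting by max(x) improves numerical stability."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])  # hypothetical scores for the 5 classes
print(softmax(logits), softmax(logits).sum())  # probabilities sum to 1
```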
       
The Adam optimization approach is used to train the deep learning model, reducing error rates and minimizing the loss function. It uses an adaptive learning rate, squared-gradient scaling and a moving average of the gradient for parameter optimization. A learning rate of 0.0001 is used to balance training efficacy and model performance, while the loss function evaluates how well the model achieves its objectives.
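Continuing the sketches above, this training configuration might look as follows in Keras. The Adam optimizer, the learning rate of 0.0001 and the 150 epochs are as reported; the sparse categorical cross-entropy loss is an assumption, since the paper does not name the loss function explicitly.

```python
from tensorflow import keras

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # reported learning rate
    loss="sparse_categorical_crossentropy",               # assumed loss for integer labels
    metrics=["accuracy"],
)
# model, train_ds and val_ds come from the earlier sketches.
history = model.fit(train_ds, validation_data=val_ds, epochs=150)
```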
 
Parameters of the evaluation matrix
 
Accuracy, precision, recall and F1-score are widely used parameters and were used to evaluate the performance of the proposed approach. Accuracy is the ratio of correctly identified data to the total number of inputs, while the error rate is the percentage of incorrectly detected values; as a dependable variable covering every class, accuracy allows a thorough evaluation. Precision is computed as the ratio of true positives to the sum of false positives and true positives. Recall represents the ability of the model to locate relevant instances within a dataset. The F1-score, ranging from 0 to 1, serves as an equilibrium point between precision and recall in performance evaluation. The mathematical expressions for accuracy, precision, recall and F1-score are:
 
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
 
Here TP (True Positive), FP (False Positive), FN (False Negative) and TN (True Negative) are the entries of the confusion matrix.
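These quantities can be computed directly from predicted and true labels. The brief sketch below uses scikit-learn, which is an assumption, as the paper does not name its evaluation tooling; the toy label arrays are hypothetical.

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Hypothetical integer labels for the five severity classes (0..4).
y_true = np.array([0, 1, 2, 3, 4, 0, 1, 2])
y_pred = np.array([0, 1, 2, 3, 4, 1, 1, 2])

labels = ["1(HR)", "3(R)", "5(MR)", "7(S)", "9(HS)"]
print(confusion_matrix(y_true, y_pred))                            # rows: actual, cols: predicted
print(classification_report(y_true, y_pred, target_names=labels))  # per-class precision/recall/F1
print("accuracy:", accuracy_score(y_true, y_pred))
```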
By controlling the diseases that lead to losses of crop yield, deep learning algorithms greatly enhance crop productivity and quality through the detection of leaf diseases in plants. To classify diseases of chickpea leaves, a fast and simple sequential CNN model was presented in this study. After completing 150 epochs, the model achieves a training loss of 0.5141 with an accuracy of 78.57% (Fig 4). On the validation set, the loss is 0.5856 and the accuracy is 76.44%. The relatively close performance on the training and validation sets suggests that the model is learning well from the training data and generalizing reasonably to unseen data, demonstrating a balanced trade-off between fitting the training data and avoiding overfitting.
 

Fig 4: Loss and Accuracy with respect to epoch for training and validation data.


       
Furthermore, a per-class evaluation was carried out to determine the efficacy and confidence level of the trained model. Fig 5 provides an example of a set of images where the disease was correctly detected by the model and the corresponding disease labels match. The confidence score predicted by the model is displayed in the output (Fig 5).
 

Fig 5: Actual and predicted diseased level for all classes of Fusarium wilt disease.


       
A concise summary of the model's classification performance for each class is given by the confusion matrix, which also provides information on the distribution of actual and predicted instances. For the 1(HR) class, the matrix shows that 97 cases were correctly identified as 1(HR), while 11 cases were incorrectly classified as 3(R); no cases were mistakenly categorized as 5(MR), 7(S) or 9(HS). Regarding the 3(R) class, the matrix indicates that 43 cases were correctly identified as 3(R), but 71 cases were misclassified as the neighboring class 1(HR).
       
In addition, one instance was incorrectly classified as 7(S), while 16 other instances were incorrectly classified as 5(MR). The model accurately predicted 111 cases for the 5(MR) class; four instances were misclassified as 3(R) and four as 7(S). In the 7(S) class, 38 cases were predicted accurately, while 13 cases were misclassified as 5(MR) and two as 3(R). Lastly, the 9(HS) class shows accurate predictions for every instance, with no misclassifications. In summary, the confusion matrix clarifies the model's strengths and weaknesses in categorizing examples into the various classes, offering insightful information for analysis and future improvement.
       
Table 2 provides a comprehensive analysis of the performance of the model using the key metrics precision, recall and F1-score across the class labels (1(HR), 3(R), 5(MR), 7(S) and 9(HS)). The model has a precision of 0.6736 for Label 1(HR), which means that roughly 67.36% of the cases predicted to be 1(HR) actually were. With a noteworthy recall of 0.8981, approximately 89.81% of real instances of 1(HR) were successfully detected.
 

Table 2: Classification results for five classes of fusarium wilt disease of chickpeas.


       
The matching F1-score for this class is 0.7698, which reflects a balance between recall and precision. For Label 3(R), the model demonstrates a precision of 0.6636 together with a recall of 0.5420, yielding an F1-score of 0.5966. The F1-score for Label 5(MR) is 0.7872, based on an acceptable precision of 0.7929 and recall of 0.7817. Label 7(S) has an F1-score of 0.7600, with a precision of 0.8444 and a recall of 0.6909. Lastly, Label 9(HS) has an outstanding F1-score of 0.9545 because of its exceptional precision (0.9545) and recall (0.9545). The model's overall accuracy is 74.79%. Further information about the model's overall performance may be obtained from the macro and weighted averages of precision, recall and F1-score. The macro averages for precision, recall and F1-score are 0.7858, 0.7735 and 0.7737, respectively. The weighted averages across all classes are 0.7515 for precision, 0.7479 for recall and 0.7435 for F1-score. Together, these metrics give a thorough summary of the model's classification accuracy within each category. Fusarium wilt control strategies could be enhanced further by integrating slow-wilting cultivars into an integrated management plan.
 
Limitations and future aspects
 
This work could be expanded in the future to enable a precise understanding of plant health by identifying and isolating multiple diseases present on a single leaf. The focus could be on enlarging the image dataset and improving methods for estimating disease severity. Integrating an Internet of Things (IoT)-based real-time monitoring system could also be investigated to enable ongoing monitoring of plant condition. To increase visibility and engagement, a mobile application and a specialized website could be created for sharing data and insights. With a focus on plant pathology and agricultural management, this comprehensive approach aims to utilize technological advances for deeper and more complex solutions.
The application of integrated disease management techniques is necessary for the best control of Fusarium wilt in chickpeas. A key component of this strategy is the timely and accurate detection of the pathogen and its different pathogenic races. In summary, the use of deep learning algorithms significantly enhances the control of Fusarium wilt in chickpeas by addressing the complex issues caused by biotic factors. The study presents an advanced multi-level deep learning model for fast and accurate detection of different diseases damaging chickpea leaves. The model achieves a balanced trade-off between training and validation accuracy, showing satisfactory results. Alongside the basic metrics, the study includes a thorough examination of the model's ability to classify diseases using a confusion matrix. Precision, recall and F1-score measures across the disease classes provide further information about the model's advantages and limitations.
Acknowledgement
 
The authors would like to thank the editors and reviewers for their review and recommendations and also extend their thanks to King Saud University for funding this work through the Researchers Supporting Project number (RSP2024R395), King Saud University, Riyadh, Saudi Arabia.
 
Funding statement
 
This work was supported by the Researchers Supporting Project number (RSP2024R395), King Saud University, Riyadh, Saudi Arabia.
 
Author contributions
 
The authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
 
Data availability statement
 
Not applicable.
 
Declarations
 
The authors declare that all work is original and this manuscript has not been published in any other journal.
The authors declare that they have no conflict of interest.

  1. Alom, M.Z., Taha, T.M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M.S., Van Essen, B.C., Awwal, A.A.S. and Asari, V.K. (2018). The history began from AlexNet: A comprehensive survey on deep learning approaches. arXiv. https://arxiv.org/abs/1803.01164.

  2. Amara, J., Bouaziz, B., Algergawy, A. (2017). A deep learning-based approach for banana leaf diseases classification. Lecture Notes in Informatics (LNI), Proceedings - Series of the Gesellschaft für Informatik (GI). 266: 79-88. 

  3. Arya, S., Singh, R.A. (2019). Comparative Study of CNN and AlexNet for Detection of Disease in Potato and Mango leaf. In Proceedings of the IEEE International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT), Ghaziabad, India. 1: 1-6. https://doi.org/10.1109/ICICT46931.2019.8977648

  4. Bangari, S., Rachana, P., Gupta, N., Sudi, P.S., Baniya, K.K. (2022). A Survey on Disease Detection of a potato Leaf Using CNN. In Proceedings of the 2nd IEEE International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India. pp. 144-149. https://doi.org/10.1109/ICAIS53314.2022.9742963

  5. Basha, S.M., Rajput, D.S., Janet, J., Somula, R.S., Ram, S. (2020). Principles and practices of making agriculture sustainable: Crop yield prediction using random forest. Scalable Comput. Pract. Exp. 21: 591-599. https://doi.org/10.12694/scpe.v21i4.1714

  6. Fang, Y., Ramasamy, R.P. (2015). Current and prospective methods for plant disease detection. Biosensors. 5: 537-561. https://doi.org/10.3390/bios5030537

  7. Ferentinos, K.P. (2018). Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 145: 311-318. https://doi.org/10.1016/j.compag.2018.01.009

  8. Golhani, K., Balasundram, S.K., Vadamalai, G., Pradhan, B. (2018). A Review of Neural Networks in Plant Disease Detection using Hyperspectral Data. Inf. Process. Agric. 5: 354- 371. https://doi.org/10.1016/j.inpa.2018.05.002

  9. Goodfellow, I. J., Bengio, Y. and Courville, A. (2016). Deep Learning. Cambridge, MA: MIT Press. Available online at: http://www.deeplearningbook.org

  10. Hang, J., Zhang, D., Chen, P., Zhang, J., Wang, B. (2019). Classification of plant leaf diseases based on improved convolutional neural network. Sensors. 19: 4161. https://doi.org/10.3390/s19194161

  11. Hayit, T., Endes, A., Hayit, F. (2023). KNN-based approach for the classification of fusarium wilt disease in chickpea based on color and texture features. Eur. J. Plant Pathol. https://doi.org/10.1007/s10658-023-02791-z

  12. Kim, S.Y. and AlZubi, A.A. (2024). Blockchain and artificial intelligence for ensuring the authenticity of organic legume products in supply chains. Legume Research. https://doi.org/10.18805/LRF-786

  13. LeCun, Y., Bengio, Y. (1995). Convolutional networks for images, speech and time series. Handb. Brain Theory Neural Netw. 3361. 

  14. LeCun, Y., Bengio, Y., Hinton, G. (2015). Deep learning. Nature. 521: 436-444. https://doi.org/10.1038/nature14539. 

  15. Li, W., Fan, L., Wang, Z., Ma, C., Cui, X. (2021). Tackling mode collapse in multi-generator GANs with orthogonal vectors, Pattern Recognit. 110: 107646. https://doi.org/10.1016/j.patcog.2020.107646

  16. Li, W., Xu, L., Liang, Z., Wang, S., Cao, J., Lam, T.C., Cui, X. (2021). JDGAN: Enhancing generator on extremely limited data via joint distribution. Neurocomputing. 431: 148-162. https://doi.org/10.1016/j.neucom.2020.12.001.

  17. Maddikunta, P.K.R., Hakak, S., Alazab, M., Bhattacharya, S., Gadekallu, T.R., Khan, W.Z., Pham, Q.V. (2021). Unmanned Aerial Vehicles in Smart Agriculture: Applications, Requirements and Challenges, IEEE Sens. J. 21: 17608-17619. https://doi.org/10.1109/JSEN.2021.3049471 

  18. Maltare, N. N., Sharma, D., Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the District of Banaskantha, Gujrat, India. International Journal of Environmental Sciences. 9(1): 1-17 

  19. Militante, S.V., Gerardo, B.D., Medina, R.P. (2019). Sugarcane disease recognition using deep learning. In: 2019 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE). 575-578. https://doi.org/10.1109/ECICE47484.2019.8942690

  20. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. https://doi.org/10.18805/IJAR.BF-1742.

  21. Mukti, I.Z., Biswas, D. (2019). Transfer Learning-based Plant Diseases Detection Using ResNet50, In Proceedings of the 4th IEEE International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh. 1-6. https://doi.org/10.1109/EICT48899.2019.9068805.

  22. Patil, M.A.N., Pawar, M.V. (2017). Detection and classification of plant leaf disease. Iarjset. 4(4): 72-75. https://doi.org/10.17148/IARJSET/NCIARCSE.2017.20.

  23. Porwal, S., Majid, M., Desai, S. C., Vaishnav, J. and Alam, S. (2024). Recent advances, challenges in applying artificial intelligence and deep learning in the manufacturing industry. Pacific Business Review (International). 16(7): 143-152.  

  24. Sharma, P., Berwal, Y.P.S., Ghai, W. (2020). Performance Analysis of Deep Learning CNN Models for Disease Detection in Plants using Image Segmentation, Inf. Process. Agric. 7: 566-574. https://doi.org/10.1016/j.inpa.2019.11.001

  25. Tadesse, M. (2017). Survey of chickpea (Cicer arietinum L.) Ascochyta blight (Ascochyta rabiei Pass.) disease status in production regions of Ethiopia. Plant. 5(1): 23. https://doi.org/10.11648/j.plant.20170501.15

  26. Taheri-Garavand, A., Nejad, A.R., Fanourakis, D., Fatahi, S., Majd, M.A. (2021). Employment of artificial neural networks for non-invasive estimation of leaf water status using color features: A case study in Spathiphyllum wallisii. Acta Physiol. Plant. 43: 1-11. https://doi.org/10.1007/s11738-021-03244-y

  27. Wasik, S. and Pattinson, R. (2024). Artificial intelligence applications in fish classification and taxonomy: Advancing our understanding of aquatic biodiversity. Fish Taxa. 31: 11-21.
