Application of DenseNet201-Convolutional Neural Network for Detection of White Spot Syndrome Virus (WSSV) in Shrimp to Enhance Aquaculture Disease Management

Kyung-won Cho1
Taeho Kim2
In Seop NA3,*
1Cybertech Co., 276 Greenro, Educastle apartment 102, Unit 1705, Naju-si, Jeollanam-do, Republic of Korea.
2Department of Marine Production Management and Smart Aquaculture Research Center, Chonnam National University, 50, Daehak-ro, Yeosu-si, Jeollanam-do, 59626, Republic of Korea.
3Division of Culture Contents, Chonnam National University, 50, Daehak-ro, Yeosu-si, Jeollanam-do, 59626, Republic of Korea.

Background: White Spot Syndrome Virus (WSSV) is a major pathogen in shrimp aquaculture, causing severe economic losses. Early detection of WSSV is essential for managing outbreaks. Traditional diagnostic methods are effective but often slow and resource-intensive. This study investigates a DenseNet201-based convolutional neural network model for efficient WSSV detection.

Methods: An online dataset of shrimp images, including healthy and WSSV-infected samples, was prepared. Images were preprocessed and fed into a DenseNet201-based convolutional neural network. The model was fine-tuned for WSSV detection and its performance was evaluated using classification metrics.

Result: The model demonstrated high performance, achieving a training accuracy of 99.8% and a validation accuracy of 97%. Precision, recall and F1 score for the WSS class were 97.06%, 94.29% and 95.65%, respectively, while for the healthy class, they were 93.10%, 96.43% and 94.74%. The overall accuracy reached 95.24%, with an MCC of 0.905. The ROC curve showed an AUC of 1 for both classes, indicating perfect classification performance. The DenseNet201-based CNN model successfully detects WSS in shrimp with high accuracy and generalizability. This approach provides a robust tool for early disease detection in aquaculture, though future work should focus on dataset expansion and real-world validation to enhance the model’s robustness under diverse conditions.

Seafood farming is a vital part of the aquaculture industry and plays an important role in the economy of India. Over the past five years, India’s marine product production and exports have shown steady growth. Production increased from 141.64 lakh tonnes in 2019-20 to an estimated 182.70 lakh tonnes in 2023-24. Similarly, exports are projected to rise from 13.29 lakh tonnes to 18.19 lakh tonnes during the same period (PIB, 2024). However, the growth of shrimp farming faces significant challenges due to diseases such as White Spot Syndrome Virus (WSSV). WSSV is a harmful infection that causes high mortality rates and economic losses in the industry.
       
Early and accurate detection of this virus is essential for effective disease management in aquaculture. Traditionally, WSSV detection relies on conventional laboratory methods (Wang et al., 2023; Zhang et al., 2022). While these methods are accurate, they have limitations, including high costs, the need for skilled personnel and time-consuming procedures. These challenges often delay disease detection, increasing the risk of outbreaks in shrimp farms. Advances in artificial intelligence (AI) and deep learning (DL) have opened new avenues for addressing these challenges. Convolutional Neural Networks (CNNs) are particularly effective at detecting complex patterns in images, providing a faster and more accessible alternative. DenseNet201, a CNN architecture with densely connected layers, excels in feature extraction and reuse, making it well suited for WSSV detection.
       
CNN models are renowned for their speed and high accuracy, making them well-suited for disease detection tasks (Abade et al., 2021). CNNs are trained to analyze raw image data and effectively classify shrimp into categories based on visible signs of disease (Zhang and Gui, 2023). The models are further capable of localizing and identifying lesions, abnormalities or pathogenic impacts, addressing critical challenges in shrimp health monitoring. Object detection algorithms built on CNN architectures, such as Faster R-CNN and YOLO, enhance the precision of disease detection by delineating regions of interest within shrimp images. These methods enable automated identification of diseased areas, facilitating continuous disease diagnosis and monitoring (Lai et al., 2022; Zhou et al., 2023; Zhang et al., 2022; Lugito et al., 2022; Wirawan and Mahendra, 2024). Such automation minimizes human involvement while improving diagnostic efficiency, which is particularly beneficial in large-scale aquaculture (Cho, 2024; Kim and AlZubi, 2024; Min et al., 2024).
       
Satoto et al., (2023) applied InceptionResNetV2 with data augmentation for classifying coastal shrimp species. They trained the model on a limited dataset of 350 images across seven shrimp classes. Despite this limitation, the model achieved an impressive average accuracy of 99.4%. However, the reported prediction accuracy ranged widely from 90.01% to 99.8%, underscoring the need for validation on larger datasets to ensure reliability and scalability. Prema and Visumathi (2022) introduced a hybrid CNN-SVM model designed to assess shrimp freshness and quality through real-time vision-based systems. The model is IoT-enabled, allowing for continuous monitoring of shrimp quality, which is often affected by improper storage, handling and processing. Their approach primarily targets the detection of post-harvest spoilage to prevent potential human health risks. By combining the deep learning capabilities of a CNN with the classification strength of Support Vector Machines, the model provides an effective framework for identifying and classifying shrimp quality based on freshness.
       
The main objective of the proposed work is to develop a DL-based solution for detecting WSSV in shrimp using the DenseNet201-CNN architecture. The study aims to enhance disease detection accuracy by effectively distinguishing between healthy and WSSV-infected shrimp through image classification. By using advanced deep learning techniques, the model seeks to address the limitations of conventional methods such as PCR and histopathology, which can be time-consuming, costly and dependent on specialized expertise. Additionally, the study evaluates the model’s performance using classification metrics, ROC-AUC and the Matthews Correlation Coefficient (MCC) to ensure reliable and interpretable results.
Algorithm environment
 
The algorithm was built and executed in a Jupyter Notebook under Anaconda, in a Python v3.11 environment, on a PC with 16 GB of RAM. Training used a TPU-v3:8 (8-core Tensor Processing Unit) environment to accelerate the machine learning tasks. The TensorFlow library and the Keras API were used to build and train the model.
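As an illustration, a minimal sketch of how such a TPU-backed TensorFlow environment can be initialised is given below; the detection-and-fallback logic is an assumption for readers reproducing the setup and is not taken from the authors’ code.

```python
# Hedged sketch: initialise a TPU strategy if one is available, otherwise fall back
# to the default CPU/GPU strategy. Not taken from the authors' code.
import tensorflow as tf

try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # locate the TPU runtime
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)                  # e.g. 8 replicas on a TPU-v3:8
except (ValueError, tf.errors.NotFoundError):
    strategy = tf.distribute.get_strategy()                         # default single-device strategy

print("Replicas in sync:", strategy.num_replicas_in_sync)
```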
 
Data collection
 
In this study, the dataset contains labeled shrimp images collected from an online dataset repository (Fig 1). The dataset is organized into subdirectories, where each subdirectory corresponds to a unique class (healthy and WSSV) and contains multiple images. The images were captured under varying conditions, including different lighting and backgrounds, and were resized to a fixed resolution for input into the model. A total of 238 images were used for training, 30 for validation and 30 for testing. The following steps were performed for data preprocessing.

Fig 1: Images from datasets.


 
Data preprocessing
 
Image loading and labeling
 
Images were loaded using the Python Imaging Library (PIL). For each image, a check was performed to ensure that it could be opened and properly converted to the RGB color format. The images were resized to a uniform size of 224×224 pixels, maintaining their aspect ratio. The labels were automatically assigned based on the subdirectory names (sorted alphabetically), where each subdirectory represents a unique class. These labels were mapped to integer values for classification purposes.
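A minimal loading sketch consistent with this description is shown below; the directory names, helper name and error handling are illustrative assumptions rather than the authors’ code.

```python
# Illustrative loader: one subdirectory per class (e.g. dataset/healthy, dataset/wssv).
# Labels follow the alphabetical order of the subdirectory names.
import os
import numpy as np
from PIL import Image

def load_dataset(root_dir, image_size=(224, 224)):
    class_names = sorted(os.listdir(root_dir))                  # alphabetical class order
    label_map = {name: idx for idx, name in enumerate(class_names)}
    images, labels = [], []
    for name in class_names:
        class_dir = os.path.join(root_dir, name)
        for fname in sorted(os.listdir(class_dir)):
            path = os.path.join(class_dir, fname)
            try:
                img = Image.open(path).convert("RGB")           # check the file opens as RGB
            except OSError:
                continue                                        # skip unreadable files
            img = img.resize(image_size)                        # uniform 224x224 input
            images.append(np.asarray(img, dtype=np.uint8))
            labels.append(label_map[name])
    return np.stack(images), np.array(labels), class_names

images, labels, class_names = load_dataset("dataset")           # hypothetical root directory
```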
 
Data augmentation
 
The variability of the dataset was enhanced using a data augmentation process. These techniques included random horizontal flipping, random rotation (up to 20%) and random zooming (up to 20%). The augmented dataset enabled the model to generalize better to unseen images.
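A sketch of this augmentation pipeline using Keras preprocessing layers is given below; the 20% factors follow the text, while the specific layer choices are assumptions.

```python
# Augmentation sketch: random horizontal flip, rotation and zoom (factors of 0.2).
import tensorflow as tf

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # random horizontal flipping
    tf.keras.layers.RandomRotation(0.2),       # rotation by up to 20% of a full turn
    tf.keras.layers.RandomZoom(0.2),           # zoom in/out by up to 20%
])
```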
 
Data normalization
 
Image pixel values were normalized to the range [0, 1] by dividing each pixel value by 255. This normalization step is essential for improving model convergence during training.
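In code, this step reduces to a single scaling operation on the image array from the loading sketch above (variable names are assumptions):

```python
# Scale 8-bit pixel values into [0, 1]; equivalent to tf.keras.layers.Rescaling(1./255).
images = images.astype("float32") / 255.0
```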
 
Data splitting
 
The dataset was divided into three subsets: training, validation and testing. A stratified split, which maintains the class distribution across subsets, was performed, with approximately 80% of the data used for training, 10% for validation and 10% for testing. The data were randomly shuffled before splitting.
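A stratified 80/10/10 split can be sketched as follows; scikit-learn and the fixed random seed are illustrative assumptions.

```python
# Stratified split: 80% train, 10% validation, 10% test, preserving class proportions.
from sklearn.model_selection import train_test_split

x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.2, stratify=labels, shuffle=True, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.5, stratify=y_rest, shuffle=True, random_state=42)
```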
 
DenseNet201-convolutional neural network model architecture
 
A DenseNet201 model, pre-trained on the ImageNet dataset, was employed as the base architecture for the image classification task. DenseNet201 is a deep convolutional neural network that is known for its efficiency and performance in various computer vision tasks. A block diagram of DenseNet201-CNN is given in Fig 2. The model was modified by adding a custom classification head, as outlined below.

Fig 2: Block diagram of DenseNet201-CNN.


 
Base model
 
The DenseNet201 model was used without its top (classification) layer, allowing the original classifier to be replaced with a custom head tailored to this task. The weights were initialized from ImageNet pre-training to leverage the feature extraction capabilities learned from a diverse set of images.

 
Custom classification head
 
The output of the DenseNet201 base model was passed through a Global Average Pooling (GAP) layer to reduce the spatial dimensions of the feature maps:

$$g_k = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F_{i,j,k}$$

where $F_{i,j,k}$ is the value of the $k$-th feature map at spatial position $(i, j)$. This layer was followed by a fully connected layer with 512 units and a ReLU activation function. To reduce overfitting, a dropout layer with a rate of 0.5 was applied after the fully connected layer. The final output layer consisted of two units (corresponding to the two classes) with a softmax activation function, which provides class probabilities:

$$\hat{y}_c = \frac{e^{z_c}}{\sum_{c'=1}^{2} e^{z_{c'}}}$$

where $z_c$ is the logit produced by the output layer for class $c$. Thus, the model can be expressed as:

$$\hat{y} = \mathrm{Softmax}\big(W_2\,\mathrm{Dropout}\big(\mathrm{ReLU}(W_1\,\mathrm{GAP}(f_{\mathrm{DenseNet201}}(x)) + b_1)\big) + b_2\big)$$
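The same architecture can be sketched in Keras as follows; whether the backbone was frozen or fully fine-tuned is not specified in the text, so the trainable setting below is an assumption.

```python
# Architecture sketch: DenseNet201 backbone (no top) + GAP + Dense(512, ReLU)
# + Dropout(0.5) + Dense(2, softmax), as described above.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = True   # assumption: the backbone is fine-tuned along with the head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)               # GAP over the H x W grid
x = tf.keras.layers.Dense(512, activation="relu")(x)          # fully connected layer
x = tf.keras.layers.Dropout(0.5)(x)                           # dropout for regularisation
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)   # two-class probabilities
model = tf.keras.Model(inputs, outputs)
```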

 
Loss function and optimizer
 
The model was compiled using categorical crossentropy as the loss function, which is standard for multi-class classification tasks. The optimizer chosen was Adam, with a learning rate of 0.0001, to improve convergence speed and stability during training. Accuracy was used as the primary evaluation metric during training and validation.

 
Training procedure
 
Model checkpointing
 
A model checkpoint callback was used so that the best-performing weights, rather than the final and potentially overfitted ones, were retained. It monitored the validation accuracy and saved the model weights to a file (best_densenet201.keras) whenever the validation accuracy improved.
 
Training process
 
The model was trained for 30 epochs with a batch size of 32. The training process was conducted using TensorFlow and Keras, which allowed for efficient parallel computation across multiple CPU cores.
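A combined sketch of the compile, checkpoint and fit steps described above is given below; it continues the earlier sketches, and the one-hot encoding of the labels is an assumption implied by the categorical cross-entropy loss.

```python
# Compile with Adam (lr = 1e-4) and categorical cross-entropy, checkpoint on
# validation accuracy, and train for 30 epochs with a batch size of 32.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"])

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_densenet201.keras", monitor="val_accuracy", save_best_only=True)

history = model.fit(
    x_train, tf.keras.utils.to_categorical(y_train, num_classes=2),
    validation_data=(x_val, tf.keras.utils.to_categorical(y_val, num_classes=2)),
    epochs=30, batch_size=32, callbacks=[checkpoint])
```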
 
Evaluation
 
The trained model was assessed using several metrics of classification performance. The primary evaluation metric was classification accuracy, computed as the percentage of correctly classified images:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

A confusion matrix was generated to visualize the model’s performance, with the true positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts filled into the matrix.

The ROC (AUC) curve was generated to evaluate the ability of the model to discriminate between the classes.

For binary classification tasks, the Matthews Correlation Coefficient (MCC) was calculated as an additional evaluation metric to provide a balanced measure of classification performance. The formula for MCC is:

$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
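These metrics can be computed from the held-out test set as sketched below; the variable names follow the earlier sketches and the class ordering (from alphabetical subdirectory sorting) is an assumption.

```python
# Evaluation sketch with scikit-learn: accuracy, confusion matrix, per-class report,
# ROC AUC and MCC, computed from the model's softmax outputs on the test set.
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report, confusion_matrix,
                             matthews_corrcoef, roc_auc_score)

probs = model.predict(x_test)              # shape (n_samples, 2) softmax probabilities
y_pred = np.argmax(probs, axis=1)          # predicted class indices

print("Accuracy:", accuracy_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=class_names))
print("ROC AUC:", roc_auc_score(y_test, probs[:, 1]))   # score of the class indexed 1
print("MCC:", matthews_corrcoef(y_test, y_pred))
```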

In this study, a deep learning model was developed for the detection of White Spot Syndrome (WSS) in shrimp using a DenseNet201-based convolutional neural network (CNN). The performance of the model was assessed by training and validation accuracy and their losses, across 30 epochs (Fig 3).

Fig 3: Training and validation results over epochs.


       
The training accuracy steadily improved, reaching about 99.8% by the final epoch. This shows the model effectively learned from the training data. However, the validation accuracy increased more gradually and showed some fluctuations. It stabilized around 97%, suggesting that the model could generalize well to unseen data, but faced some challenges. For loss metrics, the training loss decreased consistently, reaching a low value of around 0.0009 by the final epoch. This indicates effective error minimization on the training data. The validation loss followed a similar pattern, initially dropping sharply and then stabilizing at approximately 0.0016. This suggests that while the model learned the key features for WSS detection, its performance on the validation set reached a plateau.
       
Overall, the model demonstrated strong performance, with high accuracy and low loss on both the training and validation datasets. However, the slight gap between training and validation performance indicates that further fine-tuning could improve generalization and reduce overfitting.
       
The confusion matrix for detecting White Spot Syndrome (WSS) disease reveals the model’s performance in classifying WSS Disease and Healthy samples (Fig 4). Specifically, 27 samples of WSS Disease were correctly classified as WSS Disease, while 1 sample of WSS Disease was misclassified as Healthy. Additionally, 2 healthy samples were incorrectly classified as WSS Disease and 33 healthy samples were accurately classified as Healthy. These results suggest that the model performed well overall, with only a few misclassifications, particularly between WSS Disease and Healthy samples. The relatively low number of misclassifications indicates strong model performance and its ability to distinguish between the two classes.

Fig 4: Confusion matrix.


       
The classification metrics for detecting WSS disease in shrimp show strong performance (Table 1). The precision for WSS Disease is 97.06%, indicating that the model correctly identified most WSS Disease instances. The recall for WSS Disease is 94.29%, meaning it identified 94.29% of all actual WSS Disease cases. For healthy samples, the precision is 93.10% and recall is 96.43%. This shows the model performed well in identifying healthy samples as well. The F1 scores are 0.9565 for WSS Disease and 0.9474 for healthy class, indicating a good balance between precision and recall. The overall accuracy is 95.24%, reflecting strong performance across both classes. The macro and weighted averages show high precision, recall and F1 scores, further supporting the model’s effectiveness. The support values of 28 for WSS Disease and 35 for healthy samples suggest a balanced dataset. Overall, the model performs well in classifying both WSS Disease and Healthy shrimp, with minor variations between the two classes.

Table 1: Classification matrix.


       
The model achieved remarkable discrimination performance, with an ROC curve yielding an AUC of 1 for both classes (Fig 5). This indicates that the predicted scores separate WSS Disease and Healthy samples essentially perfectly, with the ROC curve passing through the top-left corner, i.e., reaching a true positive rate (TPR) of 1 at a false positive rate (FPR) of 0. The Matthews Correlation Coefficient (MCC) computed from the confusion matrix is approximately 0.905.

Fig 5: ROC(AUC) curve.



This value indicates a strong positive correlation between the predicted and actual values, meaning the model performs well in distinguishing between the two classes.
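As a quick consistency check, the accuracy and MCC can be recomputed from the counts reported in the confusion matrix (Fig 4); treating WSS Disease as the positive class here is an arbitrary choice and does not change the result.

```python
# Worked check using the counts reported in Fig 4: 27 and 33 correct predictions,
# 1 and 2 misclassifications, giving accuracy ~0.9524 and MCC ~0.90.
import math

tp, fn, fp, tn = 27, 1, 2, 33

accuracy = (tp + tn) / (tp + tn + fp + fn)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"accuracy = {accuracy:.4f}")   # 0.9524
print(f"MCC      = {mcc:.4f}")        # ~0.90, in line with the value reported above
```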
       
A comparison of the proposed model with existing models is presented below (Table 2). This comparison highlights the strengths and weaknesses of the proposed model in terms of classification accuracy and overall effectiveness.

Table 2: A comparison of different model performance in Shrimp disease detection.


       
Ramachandran et al., (2023) detected WSSV in shrimp using a Dense Inception CNN (DICNN), in which the Inception structure enhances multi-dimensional feature extraction. The model achieved an accuracy of 97.22%. Duong-Trung et al. (2020) addressed shrimp disease classification using deep convolutional neural networks (DCNNs) with transfer learning. The research focused on diseases affecting shrimp in Vietnam, particularly in the Mekong Delta, a major shrimp farming region. The authors used data collected from fieldwork in the Mekong Delta, comprising images of six shrimp diseases. The model achieved an overall accuracy of 90.02%, demonstrating its effectiveness in handling non-standard images. The study shows the importance of early disease detection and the need for collaborative efforts in shrimp disease prevention.
       
Tuyen et al., (2023) developed WSSV disease susceptibility maps for shrimp farming areas. Their ET model achieved good accuracy, with an AUC of 0.713, compared to Random Tree with 0.701 and J48 with 0.641. The susceptibility maps created with these models are expected to help better plan and control the spatial spread of WSSV in shrimp farming. Ashraf and Atia (2021) compared several DL models for detecting diseases in shrimp, aiming to identify the best model for early shrimp disease detection. The study evaluated five transfer learning models and found that MobileNetV1 achieved 95% accuracy in the first experiment and 92.5% in the second. Hu et al. (2020) introduced ShrimpNet, a CNN architecture designed for shrimp recognition in aquaculture. The dataset used for training and testing included six different categories of shrimp. ShrimpNet achieved an accuracy of 95.48%, making it a valuable tool for intelligent shrimp aquaculture.
               
The proposed DenseNet201-CNN model holds a strong position in WSSV detection with 95.24% accuracy, though it slightly lags behind the DICNN model, which reached 97.22%. Compared to other models, such as ShrimpNet (95.48%) for shrimp recognition or transfer learning with CNNs (90.02%) for disease classification, the proposed model demonstrates comparable accuracy in disease detection tasks, particularly in a setting targeted at WSSV detection.
The proposed DenseNet201-based CNN model for detecting White Spot Syndrome (WSS) in shrimp achieved remarkable results in terms of classification accuracy and performance metrics. The model demonstrated strong potential for use in shrimp disease detection with an overall accuracy of 95.24% and an MCC of 0.905. The model’s ability to classify both WSS-infected and healthy shrimp was further supported by an ROC AUC of 1. Although the model performed well, some challenges were observed in validation performance, suggesting that further fine-tuning could improve generalization and reduce overfitting. The model’s strong performance highlights the potential of CNN-based approaches in biological and medical image analysis, providing valuable tools for early disease detection in aquaculture.
       
The proposed model shows promising results; still, there are several limitations to consider. The dataset used in this study was relatively small, with only 238 training images, which may not fully capture the diversity of shrimp images encountered in real-world settings. The model’s performance could be further enhanced by increasing the dataset size and incorporating additional image variations such as different shrimp species, lighting conditions and backgrounds. Future work will include advanced data augmentation procedures and the implementation of alternative or hybrid model architectures, such as EfficientNet or ResNet, to further boost performance and generalization. Additionally, the computational cost of training the model in a TPU environment may be a limiting factor for practical deployment in resource-constrained settings. Exploring model compression techniques or transfer learning on smaller datasets could help address this limitation. Overall, the proposed model represents a promising approach to early disease detection in shrimp farming, and further improvements in both dataset size and model architecture could lead to even more accurate and efficient systems.
This research was funded by the Korea Institute of Marine Science and Technology Promotion (KIMST) funded by the Ministry of Oceans and Fisheries, Korea (RS-2022-KS221676, Development of Digital Flow-through Aquaculture System).
 
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided, but do not accept any liability for any direct or indirect losses resulting from the use of this content.
 
Authors’ contributions
 
All authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all the aspects of this work.
 
Data availability
 
The data analysed/generated in the present study will be made available from the corresponding author upon reasonable request.
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
Authors declare that all works are original and this manuscript has not been published in any other journal.
The authors declare that there are no conflicts of interest regarding the publication of this article. No funding or sponsorship influenced the design of the study, data collection, analysis, decision to publish, or preparation of the manuscript.

1. Abade, A., Ferreira, P.A. and Vidal, F.B. (2021). Plant diseases recognition on images using convolutional neural networks: A systematic review. Computers and Electronics in Agriculture. 185: 106125. https://doi.org/10.1016/j.compag.2021.106125.

2. Ashraf, A. and Atia, A. (2021). Comparative study between transfer learning models to detect shrimp diseases. In 2021 16th International Conference on Computer Engineering and Systems (ICCES) (pp. 1–6). IEEE. https://doi.org/10.1109/ICCES54031.2021.9686116.

3. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. doi: 10.18805/LRF-787.

4. Duong-Trung, N., Quach, L. and Nguyen, C. (2020). Towards classification of shrimp diseases using transferred convolutional neural networks. Advances in Science, Technology and Engineering Systems Journal. 5(4): 724-732. https://doi.org/10.25046/aj050486.

5. Hu, W., Wu, H., Zhang, Y., Zhang, S. and Lo, C. (2020). Shrimp recognition using ShrimpNet based on convolutional neural network. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-020-01727-3.

6. Kim, S.Y. and AlZubi, A.A. (2024). Blockchain and artificial intelligence for ensuring the authenticity of organic legume products in supply chains. Legume Research. 47(7): 1144-1150. doi: 10.18805/LRF-786.

7. Lai, P., Lin, H., Lin, J., Hsu, H., Chu, Y., Liou, C. and Kuo, Y. (2022). Automatic measuring of shrimp body length using CNN and an underwater imaging system. Biosystems Engineering. 221: 224-235. https://doi.org/10.1016/j.biosystemseng.2022.07.006.

8. Lugito, N.P.H., Djuwita, R., Adisasmita, A. and Simadibrata, M. (2022). Blood pressure lowering effect of Lactobacillus-containing probiotic. International Journal of Probiotics and Prebiotics. 17(1): 1-13. https://doi.org/10.37290/ijpp2641-7197.17:1-13.

9. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. 58(10): 1793-1798. doi: 10.18805/IJAR.BF-1742.

10. Prema, K. and Visumathi, J. (2022). Hybrid approach of CNN and SVM for shrimp freshness diagnosis in aquaculture monitoring system using IoT-based learning support system. Journal of Internet Technology. 23(4): 801-810. https://doi.org/10.53106/160792642022072304015.

11. Press Information Bureau (PIB). (2024). Department of Fisheries, Government of India. Source: DGCIS. https://pib.gov.in/PressReleasePage.aspx?PRID=2040868.

12. Ramachandran, L., Mohan, V., Senthilkumar, S. and Ganesh, J. (2023). Early detection and identification of white spot syndrome in shrimp using an improved deep convolutional neural network. Journal of Intelligent and Fuzzy Systems. 45(4): 6429-6440. https://doi.org/10.3233/jifs-232687.

13. Satoto, B.D., Khotimah, B.K., Yusuf, M., Hapsari, R.K. and Irmawati, B. (2023). Classification of coastal shrimp species using deep learning InceptionResNetV2 with data augmentation. Technium: Romanian Journal of Applied Sciences and Technology. 16: 250-258.

14. Tuyen, T.T., Al-Ansari, N., Nguyen, D.D., Le, H.M., Phan, T.N.Q., Prakash, I., Costache, R. and Pham, B.T. (2023). Prediction of white spot disease susceptibility in shrimps using decision tree-based machine learning models. Applied Water Science. 14(1). https://doi.org/10.1007/s13201-023-02049-3.

15. Wang, P., Yang, L., Guo, B., Shang, Z., Gao, S. and Zhang, X. (2023). Highly sensitive detection of white spot syndrome virus with an RPA-CRISPR combined one-pot method. Aquaculture. 567: 739296. https://doi.org/10.1016/j.aquaculture.2023.739296.

16. Wirawan, P.E. and Mahendra, I.W.E. (2024). Turtle conservation and education center (TCEC) as a digital promotion strategy to increasing the number of tourist visits and sustainability. Acta Innovations. 52: 43-50. https://doi.org/10.62441/actainnovations.v52i.356.

17. Zhang, H. and Gui, F. (2023). The application and research of new digital technology in marine aquaculture. Journal of Marine Science and Engineering. 11(401). https://doi.org/10.3390/jmse11020401.

18. Zhang, L., Zhou, X., Li, B., Zhang, H. and Duan, Q. (2022). Automatic shrimp counting method using local images and lightweight YOLOv4. Biosystems Engineering. 220: 39-54. https://doi.org/10.1016/j.biosystemseng.2022.05.011.

19. Zhang, T., Liu, X., Yang, X., Liu, F., Yang, H., Li, X., Feng, H., Wu, X., Jiang, G., Shen, H. and Dong, J. (2022). Rapid on-site detection method for white spot syndrome virus using recombinase polymerase amplification combined with lateral flow test strip technology. Frontiers in Cellular and Infection Microbiology. 12. https://doi.org/10.3389/fcimb.2022.889775.

20. Zhou, H., Kim, S.H., Kim, S.C., Kim, C.W., Kang, S.W. and Kim, H. (2023). Instance segmentation of shrimp based on contrastive learning. Applied Sciences. 13: 6979. https://doi.org/10.3390/app13126979.
 
