Application of Convolutional Neural Networks in the Detection and Classification of Fish Diseases using Image Analysis

1School of Computing and Artificial Intelligence, Hanshin University, 137 Hanshidae-gil, Osan-si, Gyeonggi-do, 18101, Korea.

Background: Early and accurate detection of fish diseases is essential for improving aquaculture productivity and reducing economic losses. Convolutional neural networks (CNNs) offer a promising solution for automated disease classification using image-based analysis, yet the impact of different data split strategies on model performance requires further investigation.

Methods: A CNN-based approach was developed to classify common fish diseases using a dataset comprising Aeromoniasis, Gill disease, Healthy Fish, Parasitic diseases, Red disease, Saprolegniasis and White spot disease. Two data split ratios, 80:10:10 and 70:15:15, were evaluated. Model performance was assessed using accuracy, macro-average F1-score, confusion matrices, validation loss and ROC curves.

Result: The 70:15:15 split achieved higher overall accuracy (86.46%) compared to the 80:10:10 split (82.81%) and demonstrated a superior macro-average F1-score (0.8474 vs. 0.8266). However, this configuration exhibited signs of overfitting, reflected by increased validation loss. In contrast, the 80:10:10 split showed more stable validation loss and reduced misclassification in certain disease categories. High precision was consistently observed across most classes, although recall varied due to class imbalance and overlapping visual features. The findings indicate that increasing the validation set size can enhance classification performance but may also introduce overfitting risks. Implementing appropriate regularization and data balancing techniques is therefore essential. The proposed CNN-based framework demonstrates strong potential for integration into automated aquaculture monitoring systems, enabling timely disease detection, minimizing losses and improving overall fish health management.

Coastal communities and developing countries rely heavily on fish rearing and aquaculture. The growing dependence on aquaculture is boosting national economies. It plays a key role in addressing hunger and increasing per capita income. As a sustainable approach, aquaculture contributes to food security and ensures a balanced diet. It also supports global economic growth and social stability (FAO, 2024). Over the past five decades, economic growth has led to a rise in fish consumption. Many people depend on aquatic food as a primary protein source. Fish provides essential nutrients to billions worldwide. Advancements in smart aquaculture technologies have increased fish production. As a result, per capita fish consumption rose from 9.0 kg in 1961 to 20.2 kg in 2015 and further to 22.8 kg by 2020 (Coppola et al., 2021). Various stakeholders, including industries, policymakers, NGOs and consumers, focus on improving aquatic environments for sustainable seafood production.
       
Global fish and seafood production currently stands at approximately 200 million tonnes annually (FAO, 2024). However, 20-25% of production is wasted due to factors such as disease outbreaks, poor transportation and industrial challenges (Siddiqui et al., 2017). Ensuring fish health is essential for sustaining this sector (Svein et al., 2021; Zhao et al., 2024). Disease spread is faster in fish farms than in the wild due to confined environments. Consequently, diseases pose a major challenge to fish farming. Traditional disease detection methods depend on fishermen and manual techniques. These are slow, prone to errors and often inaccurate (Li et al., 2022; Cho, 2024; AlZubi, 2023; Hai and Duong, 2024). Machine-based detection minimizes errors and enables data reproduction with high precision (Penttinen et al., 2018; Al-Dosari and Abdellatif, 2024; Untari and Satria, 2022; Wihardjo et al., 2024; Lugito et al., 2022). While human expertise is still valuable, AI and machine learning (ML) enhance characterization, classification and segmentation of fish diseases.
       
Deep learning has advanced automated image processing for specific problems (LeCun et al., 2015; Hasan et al., 2022). CNN models analyze raw image data, making them useful for object detection, classification and segmentation (Schmidhuber, 2015). Various algorithms like backpropagation, recurrent neural networks and deep reinforcement learning help train models for accurate image recognition (Jensen et al., 2015). CNNs enable efficient feature extraction and model training, making them effective for detecting fish diseases (Abade et al., 2021). Krizhevsky et al., (2017) introduced CNNs for image classification in the ImageNet Large Scale Visual Recognition Challenge, achieving significantly higher accuracy than classical algorithms. Lyubchenko et al., (2016) suggested object clustering within images, integrating K-means clustering with mathematical morphology for segmentation. However, their approach lacked a classifier, making it difficult to estimate accuracy. Additionally, their method was time-consuming and inefficient. Malik et al., (2017) worked on detecting Epizootic Ulcerative Syndrome (EUS) using edge detection and morphological operations for segmentation. They evaluated multiple classification models, achieving 86% accuracy with a neural network and 63.3% with the K-NN model. Ahmed et al., (2022) investigated salmon disease classification using a machine vision approach, where image processing techniques were employed for feature extraction, followed by SVM-based classification. Their model achieved 91% accuracy without augmentation and 94% with augmentation. However, the dataset was limited to 163 infected and 68 healthy fish images.
               
The present study proposes a CNN-based ML approach for classifying fish health states using an advanced classifier algorithm. By employing a systematic evaluation with multiple metrics and graphical representations, the model’s performance is rigorously assessed. The trained model is optimized to minimize loss and maximize accuracy on the validation set, demonstrating strong pattern recognition and generalization capabilities.
Dataset and preprocessing
 
This study utilized an open-source dataset to develop a fish disease detection model using a CNN-based method (Biswas, 2023). The dataset consists of images representing fish affected by seven different conditions, which are categorized into bacterial infections (Aeromoniasis, Bacterial Gill Disease, Bacterial Red Disease), fungal infection (Saprolegniasis), parasitic infection (Parasitic Disease) and viral infection (White Spot Disease), as well as a class representing healthy fish. Each condition contains 250 images, ensuring a well-balanced dataset for both training and evaluation.
       
The dataset was sourced from a publicly accessible repository on Kaggle and collected from various sources, including a university agricultural department, an agricultural farm in Odisha, India and several agricultural websites.  To facilitate model training, preprocessing steps were applied to the images. The original images varied in size, so they were resized to a consistent dimension of 256×256 pixels to ensure uniformity when fed into the neural network. This resizing step is essential for DL models, as it ensures that all input data is of the same size and shape, thus preventing shape mismatches during the training phase. The resizing transformation can be expressed mathematically as follows:
 
Resize(I) → I_new,  where I_new = Resize[I, (256, 256)]
 
Next, the pixel values of the images were normalized to the range [0, 1]. This normalization process involves dividing the pixel values by 255, as image pixel values usually range from 0 to 255 in RGB images. This transformation helps stabilize the gradient during training, making the model learn more efficiently. The pixel normalization can be described as:

I_norm = I / 255,  where I_norm ∈ [0, 1]
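For illustration, the two preprocessing steps can be sketched in NumPy (a minimal sketch: the nearest-neighbour resize below is a stand-in for whatever interpolation the actual image library, e.g. PIL or OpenCV, would apply):

```python
import numpy as np

def resize_nearest(img: np.ndarray, size=(256, 256)) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C uint8 image to `size`."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    return img[rows[:, None], cols]

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale pixel values from [0, 255] to [0, 1]."""
    return img.astype(np.float32) / 255.0

raw = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)  # arbitrary input size
x = normalize(resize_nearest(raw))
print(x.shape)   # every image now enters the network as (256, 256, 3)
```

Regardless of the original dimensions, every image leaves this step with an identical shape and a [0, 1] value range, which is what prevents shape mismatches and unstable gradients during training.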
To further improve the generalization capability of the model, data augmentation techniques were applied during training. Data augmentation helps prevent the model from overfitting by artificially increasing the variety of the training dataset. The augmentations applied include:
 
Random horizontal and vertical flip
 
This technique randomly flips an image along the horizontal and vertical axes. This introduces transformations of the images while preserving the main features of the objects (e.g., fish) in different orientations.
 
Random rotation
 
The images were randomly rotated by a maximum of ±20 degrees. This allows the model to become more invariant to rotational transformations in the input data, which is common in real-world applications where the object’s orientation can vary.
       
Mathematically, these augmentations can be represented as:
 
Augment (I) → Flip [Rotate (I)]
 
This equation indicates that the image (I) is first rotated [denoted by Rotate(I)] and then flipped [denoted by Flip(Rotate(I))]. The transformations are applied sequentially to create a new augmented version of the original image. Fig 1 shows the output of the image after resizing, flipping and segmentation.

Fig 1: Image output after resizing, flipping and segmentation.
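The flip-then-rotate composition can be sketched as follows (an illustrative NumPy version; the nearest-neighbour rotation with edge clamping is a simplification of what an image-processing library would do):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility of the sketch

def random_flip(img):
    """Randomly flip along the horizontal and/or vertical axis."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]          # vertical flip
    return img

def rotate_nearest(img, max_deg=20.0):
    """Rotate by a random angle in [-max_deg, +max_deg] degrees."""
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find the source pixel
    ys = cy + np.cos(theta) * (yy - cy) - np.sin(theta) * (xx - cx)
    xs = cx + np.sin(theta) * (yy - cy) + np.cos(theta) * (xx - cx)
    ys = np.clip(np.rint(ys), 0, h - 1).astype(int)
    xs = np.clip(np.rint(xs), 0, w - 1).astype(int)
    return img[ys, xs]

def augment(img):
    """Augment(I) -> Flip[Rotate(I)], as in the text."""
    return random_flip(rotate_nearest(img))

sample = np.random.default_rng(1).integers(0, 256, (256, 256, 3), dtype=np.uint8)
aug = augment(sample)
print(aug.shape)  # shape and dtype are preserved; only orientation changes
```

Because both transforms preserve the image shape, augmented samples can be fed to the network exactly like the originals.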



For evaluating the model’s performance, two different partitioning schemes were used for this experiment. The dataset was split into three subsets: training, validation and testing.
 
70:15:15 split
 
In this partitioning scheme, 70% of the dataset was allocated for training, 15% for validation and the remaining 15% for testing. This partitioning method ensured that the model was trained on a substantial amount of data, while also being validated and tested on independent subsets to assess its generalization.
 
80:10:10 split
 
In this alternative partitioning, 80% of the dataset was used for training, with 10% for validation and 10% for testing. This partitioning was used to explore how the model would perform with a larger training set and smaller validation and test sets.
       
The training set was used to fit the model. The validation set was used to monitor the model’s performance during training and the test set was reserved for evaluating the model’s final performance. The splitting process was implemented using shuffling to ensure that the data points were randomly distributed across the training, validation and test sets. This shuffling helps avoid any bias in the model evaluation, which could occur if the data was split in a non-random manner. The splitting ratio (r) can be expressed mathematically as follows.
 
Train split = r_train × n,  Val split = r_val × n,  Test split = r_test × n
 
Where,
n = The total number of images and the partition ratios (r_train : r_val : r_test) are set to either 70:15:15 or 80:10:10, depending on the chosen split configuration.
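The shuffled split can be sketched as follows (a hypothetical helper: the seed and the rounding of fractional counts are illustrative assumptions, not details reported in the study):

```python
import numpy as np

def split_indices(n, r_train, r_val, seed=42):
    """Shuffle the indices, then partition them by the chosen ratios;
    whatever remains after train and validation becomes the test set."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = round(r_train * n)
    n_val = round(r_val * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 7 classes x 250 images = 1,750 images in total
train_idx, val_idx, test_idx = split_indices(1750, 0.70, 0.15)
print(len(train_idx), len(val_idx), len(test_idx))  # 1225 262 263
```

Shuffling before slicing is what guarantees that each subset is a random, non-overlapping sample of the whole dataset, avoiding ordering bias in the evaluation.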
 
Model architecture
 
A sequential CNN model was designed for the fish classification task. CNNs are particularly well-suited for image classification tasks due to their ability to automatically learn spatial hierarchies of features from images. The architecture of the model includes multiple layers of convolution followed by pooling layers, which progressively learn more complex representations of the input image. The architecture of the model consists of the following layers.
 
Input layer
 
The model accepts inputs with a fixed shape of 256×256 pixels and 3 color channels (RGB images).
 
Convolutional layers
 
The model includes six convolutional layers with an increasing number of filters. The first few layers use 64 filters, followed by layers with 256 and 512 filters, aimed at capturing progressively more complex features of the images, such as textures and patterns associated with different fish diseases. All convolutional layers use the ReLU (Rectified Linear Unit) activation function, which introduces non-linearity and allows the model to capture complex patterns in the image data.
 
Max-pooling layers
 
After each convolutional layer, max-pooling operations were applied with a pool size of (2, 2). Max-pooling helps to reduce the spatial dimensions of the feature maps, which reduces computational complexity and prevents overfitting by downsampling the data.
 
Flatten layer
 
After the convolutional and pooling layers, the feature maps were flattened into a 1D vector to be used as input for the fully connected layers.
 
Fully connected layers
 
The flattened vector was passed through two fully connected layers. The first layer has 64 units and uses the ReLU activation function. The second fully connected layer corresponds to the output layer, which has 7 units (one for each class) and uses the softmax activation function to output class probabilities. The softmax function ensures that the output values sum to 1 and can be interpreted as probabilities for the respective classes. Fig 2 illustrates the process of feature extraction and classification using the sequential-CNN method.

Fig 2: Process of feature extraction and classification using sequential-CNN method.
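The shape arithmetic implied by this architecture can be traced by hand; the per-layer filter counts below are an illustrative reading of the description (with 'same' padding assumed, so convolution preserves the spatial size and each 2×2 max pool halves it):

```python
# Trace the tensor shape through the described architecture: six conv
# layers (64 -> 256 -> 512 filters), each followed by 2x2 max pooling,
# then flatten and two dense layers (64 units, then 7-way softmax).
filters = [64, 64, 256, 256, 512, 512]  # illustrative split of "64, then 256 and 512"

h = w = 256   # input spatial size
c = 3         # RGB channels
for f in filters:
    c = f                    # conv ('same' padding): channels become f, H x W unchanged
    h, w = h // 2, w // 2    # 2x2 max pooling: spatial size halved

flat = h * w * c             # flatten layer output length
print((h, w, c), flat)       # (4, 4, 512) 8192 -> dense 64 (ReLU) -> dense 7 (softmax)
```

Six halvings take 256 down to 4, so the flatten layer hands an 8,192-dimensional vector to the fully connected head, whose final 7-unit softmax yields one probability per class.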


 
Training procedure
 
The model was trained for 200 epochs, with each epoch consisting of multiple iterations over the training data. An epoch refers to one complete cycle through the entire training dataset. During each epoch, the model’s parameters were updated using the Adam optimizer, which adapts the learning rate during training to improve convergence. The Sparse Categorical Crossentropy loss function was used, as it is appropriate for multi-class classification problems where the labels are provided as integers:

L = -Σ (i = 1 to N) yi log(pi)

Where,
yi = The true label indicator for the ith class (1 for the correct class and 0 otherwise, i.e., the one-hot encoding of the integer label),
pi = The predicted probability for the ith class (i.e., the output of the model’s softmax layer),
N = The total number of classes (in the present case, 7 fish condition categories).
       
The model’s performance was evaluated by monitoring both training and validation accuracy and loss at the end of each epoch.
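A NumPy sketch of this objective (not the framework implementation) makes the computation concrete: with integer labels, the sum over the one-hot vector collapses to the negative log-probability of the true class.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_categorical_crossentropy(y_true, logits):
    """Mean over the batch of -log p[true class]; y_true holds class indices."""
    p = softmax(logits)
    return float(-np.log(p[np.arange(len(y_true)), y_true]).mean())

# Two samples, three classes: confident and correct predictions give a small loss.
logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
loss = sparse_categorical_crossentropy(np.array([0, 1]), logits)
print(loss)  # small, since both true classes get high probability
```

Minimizing this loss pushes the softmax probability of each image's true class toward 1, which is exactly the behaviour monitored through the accuracy and loss curves.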
 
Evaluation metrics
 
After training, the model’s performance was evaluated on the test set using several metrics. These metrics are:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)
 
The confusion matrix provides a detailed breakdown of the model’s performance for each class. A confusion matrix for a two-class classification problem is represented as:

                   Predicted positive     Predicted negative
Actual positive    True positive (TP)     False negative (FN)
Actual negative    False positive (FP)    True negative (TN)
The Receiver Operating Characteristic (ROC) curve is a graphical representation of model performance as the decision threshold varies; the Area Under the Curve (AUC) summarizes it as a single value, where 1.0 indicates perfect separation and 0.5 indicates chance-level performance.
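As a sketch of how these quantities follow from predictions (a NumPy illustration, not the evaluation code used in the study; the AUC uses the rank-statistic formulation, which is equivalent to integrating the ROC curve):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm):
    """Precision, recall and F1 per class, straight from the matrix."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp            # predicted as class i but wrong
    fn = cm.sum(axis=1) - tp            # class i missed
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

def auc_score(y_true, scores):
    """Binary ROC AUC via the Mann-Whitney rank statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Tiny worked example: one class-1 sample is missed.
cm = confusion_matrix(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), 2)
prec, rec, f1 = per_class_metrics(cm)
print(cm.tolist(), f1.mean())  # macro-average F1 over the two classes
```

Macro averages (as reported in Tables 1 and 2) weight every class equally, which is why a single low-recall class such as Parasitic diseases can pull the macro-average recall well below the overall accuracy.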
 
Efficiency considerations
 
To optimize the training process, several techniques were implemented to ensure efficient use of computational resources. Caching and prefetching techniques were applied to improve the data pipeline’s performance.
 
Caching
 
Caching stores the dataset in memory after it is loaded and preprocessed for the first time. This ensures that the data is available for faster subsequent epochs, reducing the time required for loading the data during training.
 
Prefetching
 
Prefetching is used to overlap the data loading and training processes. This helps improve throughput by ensuring that the model never has to wait for data to be loaded, reducing bottlenecks in the data pipeline.
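A toy version of the prefetching idea using only the standard library (real pipelines would rely on the framework's data API, e.g. tf.data's cache() and prefetch(); the loader and buffer size here are illustrative):

```python
import queue
import threading
import time

def prefetch(batches, buffer_size=2):
    """Load batches in a background thread so the consumer (the training
    loop) rarely has to wait on I/O; the queue is the prefetch buffer."""
    q = queue.Queue(maxsize=buffer_size)
    _END = object()   # sentinel marking the end of the stream

    def producer():
        for b in batches:
            q.put(b)          # blocks once the buffer is full
        q.put(_END)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _END:
            return
        yield item

def slow_loader(n):
    """Simulated loader: each batch costs a little disk/decode latency."""
    for i in range(n):
        time.sleep(0.001)
        yield i

result = list(prefetch(slow_loader(5)))
print(result)  # batches arrive in order while loading overlaps consumption
```

While the consumer processes batch i, the producer thread is already loading batch i+1, which is exactly the overlap that removes data-pipeline bottlenecks during training.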
The performance of the CNN model was evaluated based on two different data split ratios: 80:10:10 and 70:15:15. Fig 3 presents the training and validation accuracy as well as the corresponding loss curves over 200 epochs for both configurations. For the 80:10:10 split, the model achieved a training accuracy of 93.80% with a training loss of 0.1576, while the validation accuracy reached 84.38% and the validation loss was 0.3984.

Fig 3: Training and validation accuracy and loss over 200 epochs at two data split ratios (80:10:10 and 70:15:15).


       
This result indicates that the model learned well from the training data while maintaining reasonable generalization on the validation set. In comparison, the 70:15:15 split resulted in an improved training accuracy of 98.79% and a significantly lower training loss of 0.0466. The validation accuracy also improved to 92.68%; however, the validation loss increased to 0.6901. The higher validation accuracy suggests better generalization, while the increased validation loss may indicate some degree of overfitting. The larger validation set in this split may have contributed to a more robust model evaluation but also introduced greater variability.
       
Fig 4 illustrates the confusion matrices for both split ratios, detailing the model’s ability to correctly classify different fish diseases. The model trained on the 70:15:15 split demonstrates improved classification performance, particularly for "Healthy Fish" and "Gill Disease," showing higher true positive values. However, the 80:10:10 split achieves more stable predictions, particularly for "Aeromoniasis" and "White Spot Disease," with lower misclassification rates.

Fig 4: Confusion matrix showing true and false predictions for two data split ratios: 80:10:10 and 70:15:15.


       
A noticeable observation is that the 70:15:15 model encounters more misclassifications for "Parasitic Diseases" and "Saprolegniasis," likely due to higher variance in the training data distribution. The confusion matrices further confirm that although increasing the validation set size in the 70:15:15 split provides a better generalization estimate, it also introduces instability in certain disease classifications.
       
Table 1 and 2 present the classification metrics at data split ratios of 80:10:10 and 70:15:15, respectively. At an 80:10:10 split ratio, several classes (e.g., Aeromoniasis, Gill disease, Parasitic diseases, Saprolegniasis and White spot disease) exhibit perfect or near-perfect precision (i.e., 1.0).

Table 1: Classification metrics at data split ratio 80:10:10.



Table 2: Classification metrics at data split ratio 70:15:15.


       
However, recall for Parasitic diseases is relatively low (0.4286), suggesting the model struggles to correctly identify all positive cases for this class. Healthy Fish also shows a lower precision (0.7188) but a very high recall (0.9583), indicating the model is good at detecting most healthy fish but sometimes misclassifies other classes as healthy. Overall, the 82.81% accuracy indicates solid performance but also reveals potential issues with imbalanced predictions. The high macro-average precision indicates that, on average, the model avoids false positives; however, the lower macro-average recall highlights its tendency to miss some positive samples in specific classes.
       
When the validation set is larger (70:15:15), the overall accuracy improves to 86.46% and both macro and weighted averages also increase, reflecting better overall performance. Classes like Aeromoniasis, Gill disease and Healthy Fish achieve both high precision and high recall, demonstrating that the model distinguishes these conditions effectively. However, some classes, such as Red disease and Saprolegniasis, show perfect precision (1.0) but relatively low recall (0.6667 and 0.5556, respectively), indicating that while the model rarely mislabels other diseases as Red disease or Saprolegniasis, it does fail to capture a portion of true positive cases in those classes.
       
Fig 5 shows example outputs from the model’s classification of fish diseases, highlighting the actual disease label, the model’s predicted label and the confidence score for each sample. With the 80:10:10 split, the fish with White spot disease is correctly predicted with 99.94% confidence, the healthy fish is identified with 100% confidence and the fish with Red disease is classified with 99.55% confidence. With the 70:15:15 split, similarly high-confidence predictions for White spot disease, healthy fish and Red disease further illustrate the model’s capability to accurately recognize these conditions.

Fig 5: Confidence score for actual and predicted diseased classes.


       
Fig 6 compares the ROC curves and AUC values for each disease class under the two data splits (80:10:10 vs. 70:15:15). The left plot (80:10:10) shows that each disease class achieves a high AUC, ranging roughly from 0.92 to 0.99. This indicates that, for every disease, the model is able to maintain a high true positive rate (TPR) while keeping the false positive rate (FPR) relatively low. In particular, Red disease exhibits the highest AUC at about 0.993, suggesting the model is especially adept at distinguishing this class from others. Conversely, Saprolegniasis and White spot disease have slightly lower AUC values but still remain above 0.92, reflecting robust performance.

Fig 6: ROC(AUC) curves.


       
In the right plot (70:15:15), the AUCs remain high, with most classes showing slight improvements over the 80:10:10 split. For instance, Aeromoniasis improves from about 0.9655 to 0.9772 and Parasitic diseases also show a notable increase (0.9500 to 0.9900). As with the 80:10:10 split, Red disease remains one of the best-classified conditions (AUC ≈ 0.9924). Although Saprolegniasis and White spot disease maintain strong AUCs, they still rank slightly lower compared to the other classes. Overall, both ROC plots confirm the model’s strong discriminatory power across all disease classes. The consistently high AUC values, well above the 0.5 random-guess threshold, illustrate that the classifier performs significantly better than chance. The slight variations between the two splits echo earlier findings: while the 70:15:15 split can yield higher accuracy and AUC for certain classes, it may also show signs of overfitting in other metrics (such as validation loss).
       
Fig 3 highlights that both splits show an upward trend in accuracy and a downward trend in loss. However, the 70:15:15 split exhibits more fluctuation in validation loss, hinting at greater sensitivity to the validation data. Although the 70:15:15 split yields a higher validation accuracy (92.68% vs. 84.38%) and a better overall accuracy (86.46% vs. 82.81%), the increased validation loss (0.6901 vs. 0.3984) suggests overfitting. This finding is further supported by the confusion matrices (Fig 4), where the 70:15:15 model correctly classifies more instances in some diseases but shows instability in others. Many classes achieve near-perfect precision, indicating the model’s caution in labeling diseases. However, certain classes (e.g., Parasitic diseases, Saprolegniasis) have lower recall, meaning the model fails to identify some positive cases. This discrepancy may be due to class imbalance or overlapping visual features among diseases.
       
Despite the potential for overfitting, the higher validation accuracy under the 70:15:15 split suggests it provides better generalization for most disease categories. Overall, the results indicate that while the 70:15:15 split leads to higher accuracy and AUC values for most classes, the 80:10:10 split exhibits more stable validation loss; selecting the optimal approach may therefore depend on the specific application requirements (e.g., whether minimizing false negatives or false positives is more critical) and the available dataset size. To further improve the model, data augmentation or gathering additional samples for underrepresented classes (e.g., Parasitic diseases, Saprolegniasis) could address the low recall observed in those categories. Regularization methods such as dropout, batch normalization or weight decay may help mitigate overfitting, which is particularly valuable in the 70:15:15 scenario. Finally, adjusting decision thresholds per class could help optimize the balance between precision and recall, especially for classes with skewed performance metrics.
In conclusion, the CNN-based model effectively identifies multiple fish diseases with high accuracy, demonstrating its potential for aquaculture diagnostics. The study finds that a 70:15:15 data split improves generalization but increases validation loss, while an 80:10:10 split provides more stability with slightly lower accuracy. However, class imbalance, particularly in Parasitic diseases and Saprolegniasis, affects recall and the model shows a tendency to overfit larger validation sets. To enhance performance, future work will focus on data augmentation techniques like rotation, flipping and color jitter, along with exploring advanced architectures such as Vision Transformers and attention-based CNNs. Stronger regularization methods like dropout and batch normalization, as well as optimized decision thresholds, could improve adaptability. Ultimately, this study highlights the potential of AI-driven disease detection to revolutionize aquaculture health management by enabling early diagnosis, reducing economic losses and improving fish welfare.
 
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided but do not accept any liability for any direct or indirect losses resulting from the use of this content.
Funding details
 
This work was supported by Hanshin University Research grant.
 
Availability of data and materials
 
Not applicable.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
The author declares that this work is original and that this manuscript has not been published in any other journal.
The author declares no conflict of interest.

  1. Abade, A., Ferreira, P.A. and Vidal, F. de B. (2021). Plant diseases recognition on images using convolutional neural networks: A systematic review. Computers and Electronics in Agriculture. 185: 106125. https://doi.org/10.1016/j.compag.2021.106125.

  2. Ahmed, M.S., Aurpa, T.T. and Azad, M.A.K. (2022). Fish disease detection using image based machine learning technique in aquaculture. Journal of King Saud University-Computer and Information Sciences. 34(8): 5170-5182. https://doi.org/10.1016/j.jksuci.2021.05.003.

  3. Al-Dosari, M.N.A. and Abdellatif, M.S. (2024). The environmental awareness level among Saudi women and its relationship to sustainable thinking. Acta Innovations. 52: 28-42. https://doi.org/10.62441/ActaInnovations.52.4.

  4. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. doi: 10.18805/IJAR.BF-1684.

  5. Biswas, S. (2023). Freshwater fish disease aquaculture in South Asia [Data set]. Kaggle. Retrieved November 25, 2025, from https://www.kaggle.com/datasets/subirbiswas19/freshwater-fish-disease-aquaculture-in-south-asia.

  6. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. doi: 10.18805/LRF-787.

  7. Coppola, D., Lauritano, C., Esposito, F.P., Riccio, G., Rizzo, C. and De Pascale, D. (2021). Fish waste: From problem to valuable resource. Marine Drugs. 19(2): 116. https://doi.org/10.3390/md19020116.

  8. FAO. (2024). The state of world fisheries and aquaculture. Food and Agriculture Organization of the United Nations. http://www.fao.org/state-of-fisheries-aquaculture.

  9. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  10. Hasan, N., Ibrahim, S. and Azlan, A.A. (2022). Fish diseases detection using convolutional neural network (CNN). International Journal of Nonlinear Analysis and Applications. 13(1): 1977-1984. https://doi.org/10.22075/ijnaa.2022.5839.

  11. Jensen, L.B., Boltana, S., Obach, A., McGurk, C., Waagbø, R. and MacKenzie, S. (2015). Investigating the underlying mechanisms of temperature-related skin diseases in Atlantic salmon, Salmo salar L., as measured by quantitative histology, skin transcriptomics and composition. Journal of Fish Diseases. 38(11): 977-992. https://doi.org/10.1111/jfd.12314.

  12. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM. 60(6): 84-90. https://doi.org/10.1145/3065386.

  13. LeCun, Y., Bengio, Y. and Hinton, G. (2015). Deep learning. Nature. 521(7553): 436-444. https://doi.org/10.1038/nature14539.

  14. Li, D., Li, X., Wang, Q. and Hao, Y. (2022). Advanced techniques for the intelligent diagnosis of fish diseases: A review. Animals. 12(21): 2938. https://doi.org/10.3390/ani12212938.

  15. Lugito, N.P.H., Djuwita, R., Adisasmita, A. and Simadibrata, M. (2022). Blood pressure lowering effect of lactobacillus-containing probiotic. International Journal of Probiotics and Prebiotics. 17(1): 1-13. https://doi.org/10.37290/ijpp2641-7197.17:1-13.

  16. Lyubchenko, V., Matarneh, R., Kobylin, O. and Lyashenko, V. (2016). Digital image processing techniques for detection and diagnosis of fish diseases. International Journal of Advanced Research in Computer Science and Software Engineering. 6: 79-83.

  17. Malik, S., Kumar, T. and Sahoo, A. (2017). A novel approach to fish disease diagnostic system based on machine learning. Advances in Image and Video Processing. 5(1). https://doi.org/10.14738/aivp.51.2809.

  18. Penttinen, A., Parkkinen, I., Blom, S., Kopra, J., Andressoo, J., Pitkänen, K., Voutilainen, M.H., Saarma, M. and Airavaara, M. (2018). Implementation of deep neural networks to count dopamine neurons in substantia nigra. European Journal of Neuroscience. 48(6): 2354-2361. https://doi.org/10.1111/ejn.14129.

  19. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks. 61: 85-117. https://doi.org/10.1016/j.neunet.2014.09.003.

  20. Siddiqui, S.A., Salman, A., Malik, M.I., Shafait, F., Mian, A., Shortis, M.R. and Harvey, E.S. (2017). Automatic fish species classification in underwater videos: Exploiting pre-trained deep neural network models to compensate for limited labelled data. ICES Journal of Marine Science. 75(1): 374-389. https://doi.org/10.1093/icesjms/fsx109.

  21. Svein, L., Timmerhaus, G., Johansen, L. and Ytteborg, E. (2021). Deep neural network analysis - A paradigm shift for histological examination of health and welfare of farmed fish. Aquaculture. 532: 736024. https://doi.org/10.1016/j.aquaculture.2020.736024.

  22. Untari, D.T. and Satria, B. (2022). Repositioning culinary “Betawi Ora” as Bekasi eco-culinary tourism icon. International Journal of Environmental Sciences. 8(2): 15-24.

  23. Wihardjo, R.S.D., Muktiono, A. and Suwanda. (2024). Exploring critical thinking in environmental education: A PBL approach to reaction rate material at public senior high schools in Indonesia. International Journal of Environmental Sciences. 10(1): 14-25.

  24. Zhao, T., Jiang, X., Yu, L., Zhang, W., Zhan, H., Wang, Q., Zhang, X. and Xu, F. (2024). Comparison of nutritional status between migrant and nonmigrant school-age children in Kunming, Yunnan Province, China. Current Topics in Nutraceutical Research. 22(2): 617-623. https://doi.org/10.37290/ctnr2641-452X.22:617-623.


Coastal communities and developing countries rely heavily on fish rearing and aquaculture. Growing dependence on aquaculture boosts national economies, helps address hunger and raises per capita income. As a sustainable practice, aquaculture contributes to food security, supports a balanced diet and underpins global economic growth and social stability (FAO, 2024). Over the past five decades, economic growth has driven a steady rise in fish consumption: aquatic food is a primary protein source for many people and provides essential nutrients to billions worldwide. Advances in smart aquaculture technologies have increased fish production and, as a result, per capita fish consumption rose from 9.0 kg in 1961 to 20.2 kg in 2015 and to 22.8 kg by 2020 (Coppola et al., 2021). Various stakeholders, including industries, policymakers, NGOs and consumers, focus on improving aquatic environments for sustainable seafood production.
       
Global fish and seafood production currently stands at approximately 200 million tonnes annually (FAO, 2024). However, 20-25% of production is wasted due to factors such as disease outbreaks, poor transportation and industrial challenges (Siddiqui et al., 2017). Ensuring fish health is essential for sustaining this sector (Svein et al., 2021; Zhao et al., 2024). Disease spread is faster in fish farms than in the wild due to confined environments. Consequently, diseases pose a major challenge to fish farming. Traditional disease detection methods depend on fishermen and manual techniques. These are slow, prone to errors and often inaccurate (Li et al., 2022; Cho, 2024; AlZubi, 2023; Hai and Duong, 2024). Machine-based detection minimizes errors and enables data reproduction with high precision (Penttinen et al., 2018; Al-Dosari and Abdellatif, 2024; Untari and Satria, 2022; Wihardjo et al., 2024; Lugito et al., 2022). While human expertise is still valuable, AI and machine learning (ML) enhance characterization, classification and segmentation of fish diseases.
       
Deep learning has advanced automated image processing for specific problems (LeCun et al., 2015; Hasan et al., 2022). CNN models analyze raw image data, making them useful for object detection, classification and segmentation (Schmidhuber, 2015). Various algorithms, such as backpropagation, recurrent neural networks and deep reinforcement learning, help train models for accurate image recognition (Jensen et al., 2015). CNNs enable efficient feature extraction and model training, making them effective for detecting fish diseases (Abade et al., 2021). Krizhevsky et al. (2017) introduced CNNs for image classification in the ImageNet Large Scale Visual Recognition Challenge, achieving significantly higher accuracy than classical algorithms. Lyubchenko et al. (2016) suggested object clustering within images, integrating K-means clustering with mathematical morphology for segmentation; however, their approach lacked a classifier, making accuracy difficult to estimate, and the method was time-consuming and inefficient. Malik et al. (2017) worked on detecting Epizootic Ulcerative Syndrome (EUS) using edge detection and morphological operations for segmentation. They evaluated multiple classification models, achieving 86% accuracy with a neural network and 63.3% with the K-NN model. Ahmed et al. (2022) investigated salmon disease classification using a machine vision approach, where image processing techniques were employed for feature extraction, followed by SVM-based classification. Their model achieved 91% accuracy without augmentation and 94% with augmentation; however, their dataset was limited to 163 infected and 68 healthy fish images.
               
The present study proposes a CNN-based ML approach for classifying fish health states using an advanced classifier algorithm. By employing a systematic evaluation with multiple metrics and graphical representations, the model’s performance is rigorously assessed. The trained model is optimized to minimize loss and maximize accuracy on the validation set, demonstrating strong pattern recognition and generalization capabilities.
Dataset and preprocessing
 
This study utilized an open-source dataset to develop a fish disease detection model using the CNN method (Biswas, 2023). The dataset consists of images representing fish affected by seven different conditions: bacterial infections (Aeromoniasis, Bacterial Gill Disease, Bacterial Red Disease), fungal infection (Saprolegniasis), parasitic infection (Parasitic Disease), viral infection (White Spot Disease) and a class representing healthy fish. Each condition contains 250 images, ensuring a well-balanced dataset for both training and evaluation.
       
The dataset was sourced from a publicly accessible repository on Kaggle and collected from various sources, including a university agricultural department, an agricultural farm in Odisha, India and several agricultural websites.  To facilitate model training, preprocessing steps were applied to the images. The original images varied in size, so they were resized to a consistent dimension of 256×256 pixels to ensure uniformity when fed into the neural network. This resizing step is essential for DL models, as it ensures that all input data is of the same size and shape, thus preventing shape mismatches during the training phase. The resizing transformation can be expressed mathematically as follows:
 
Resize (I) → Inew  where Inew = Resize [I, (256, 256)]
 
Next, the pixel values of the images were normalized to the range [0, 1]. This normalization involves dividing the pixel values by 255, as pixel values usually range from 0 to 255 in RGB images. The transformation helps stabilize the gradient during training, allowing the model to learn more efficiently. The pixel normalization can be described as:

Normalize (I) → Inorm  where Inorm = I / 255
To further improve the generalization capability of the model, data augmentation techniques were applied during training. Data augmentation helps prevent the model from overfitting by artificially increasing the variety of the training dataset. The augmentations applied include.
 
Random horizontal and vertical flip
 
This technique randomly flips an image along the horizontal and vertical axes. This introduces transformations of the images while preserving the main features of the objects (e.g., fish) in different orientations.
 
Random rotation
 
The images were randomly rotated by a maximum of ±20 degrees. This allows the model to become more invariant to rotational transformations in the input data, which is common in real-world applications where the object’s orientation can vary.
       
Mathematically, these augmentations can be represented as:
 
Augment (I) → Flip [Rotate (I)]
 
This equation indicates that the image (I) is first rotated, denoted Rotate(I) and the result is then flipped, denoted Flip(Rotate(I)). The transformations are applied sequentially to create a new augmented version of the original image. Fig 1 shows the output of the image after resizing, flipping and segmentation.

Fig 1: Image output after resizing, flipping and segmentation.
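The normalization and flip augmentations above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the study's actual pipeline (a deep learning framework would normally supply these transforms); the ±20° rotation is omitted because it requires an interpolation routine from an image library.

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale 8-bit pixel values from [0, 255] down to [0, 1]."""
    return image.astype(np.float32) / 255.0

def random_flip(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip along the horizontal and/or vertical axis."""
    if rng.random() < 0.5:
        image = image[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]   # vertical flip
    return image

rng = np.random.default_rng(0)
# stand-in for a 256x256 RGB image after resizing
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
aug = random_flip(normalize(img), rng)
```

Flipping preserves the image shape and the pixel content, so the disease lesions remain intact while the model sees them in new orientations.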



For evaluating the model’s performance, two different partitioning schemes were used for this experiment. The dataset was split into three subsets: training, validation and testing.
 
70:15:15 split
 
In this partitioning scheme, 70% of the dataset was allocated for training, 15% for validation and the remaining 15% for testing. This partitioning method ensured that the model was trained on a substantial amount of data, while also being validated and tested on independent subsets to assess its generalization.
 
80:10:10 split
 
In this alternative partitioning, 80% of the dataset was used for training, with 10% for validation and 10% for testing. This partitioning was used to explore how the model would perform with a larger training set and smaller validation and test sets.
       
The training set was used to fit the model. The validation set was used to monitor the model’s performance during training and the test set was reserved for evaluating the model’s final performance. The splitting process was implemented using shuffling to ensure that the data points were randomly distributed across the training, validation and test sets. This shuffling helps avoid any bias in the model evaluation, which could occur if the data was split in a non-random manner. The splitting ratio (r) can be expressed mathematically as follows.
 
Train split = rtrain × n, Val split = rval × n, Test split = rtest × n
 
Where,
n = the total number of images and the partition ratios (rtrain, rval, rtest) are set to either 70:15:15 or 80:10:10, depending on the chosen split configuration.
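The shuffled three-way split can be sketched as follows; this is an illustrative sketch using a toy index list (in practice the framework's own dataset-splitting utilities would operate on image files).

```python
import random

def train_val_test_split(items, r_train=0.70, r_val=0.15, seed=42):
    """Shuffle, then partition items into train/validation/test subsets by ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)      # shuffling avoids ordering bias
    n = len(items)
    n_train = round(r_train * n)
    n_val = round(r_val * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]          # remainder forms the test set
    return train, val, test

# 7 classes x 250 images = 1750 samples, as in the dataset described above
train, val, test = train_val_test_split(range(1750))
```

Fixing the seed makes the split reproducible while still distributing samples randomly across the three subsets.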
 
Model architecture
 
A sequential CNN model was designed for the fish classification task. CNNs are particularly well-suited for image classification tasks due to their ability to automatically learn spatial hierarchies of features from images. The architecture of the model includes multiple layers of convolution followed by pooling layers, which progressively learn more complex representations of the input image. The architecture of the model consists of the following layers.
 
Input layer
 
The model accepts inputs with a fixed shape of 256×256 pixels and 3 color channels (RGB images).
 
Convolutional layers
 
The model includes six convolutional layers with increasing filter sizes. The first few layers use 64 filters, followed by layers with 256 and 512 filters, aimed at capturing progressively more complex features of the images, such as textures and patterns associated with different fish diseases. All convolutional layers use the ReLU (Rectified Linear Unit) activation function, which introduces non-linearity and allows the model to capture complex patterns in the image data.
 
Max-pooling layers
 
After each convolutional layer, max-pooling operations were applied with a pool size of (2, 2). Max-pooling helps to reduce the spatial dimensions of the feature maps, which reduces computational complexity and prevents overfitting by downsampling the data.
 
Flatten layer
 
After the convolutional and pooling layers, the feature maps were flattened into a 1D vector to be used as input for the fully connected layers.
 
Fully connected layers
 
The flattened vector was passed through two fully connected layers. The first layer has 64 units and uses the ReLU activation function. The second fully connected layer corresponds to the output layer, which has 7 units (one for each class) and uses the softmax activation function to output class probabilities. The softmax function ensures that the output values sum to 1 and can be interpreted as probabilities for the respective classes. Fig 2 illustrates the process of feature extraction and classification using the sequential-CNN method.

Fig 2: Process of feature extraction and classification using sequential-CNN method.
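The spatial bookkeeping implied by this stack can be checked with simple arithmetic. The sketch below assumes 3×3 kernels with no padding ('valid') before each 2×2 max-pool; kernel size and padding are assumptions, since the text does not state them.

```python
def feature_map_sizes(size=256, n_stages=6, kernel=3):
    """Track spatial size through conv ('valid' padding) + 2x2 max-pool stages."""
    sizes = [size]
    for _ in range(n_stages):
        size = size - (kernel - 1)   # a valid 3x3 convolution trims 2 pixels
        size = size // 2             # 2x2 max-pooling halves each dimension
        sizes.append(size)
    return sizes

sizes = feature_map_sizes()
# With 512 filters in the final conv layer, the flattened vector feeding the
# 64-unit dense layer would have sizes[-1] * sizes[-1] * 512 elements.
flat_len = sizes[-1] * sizes[-1] * 512
```

Under these assumptions the 256×256 input shrinks to a 2×2 map after six stages, so the flatten layer emits a 2048-element vector; with 'same' padding the numbers would differ.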


 
Training procedure
 
The model was trained for 200 epochs, with each epoch consisting of multiple iterations over the training data. An epoch refers to one complete cycle through the entire training dataset. During each epoch, the model’s parameters were updated using the Adam optimizer, which adapts the learning rate during training to improve convergence. The Sparse Categorical Crossentropy loss function was used, as it is appropriate for multi-class classification problems where the labels are provided as integers:

L = −Σ (i = 1 to N) yi log(pi)
Where,
yi= The true label for the ith class (often represented as a one-hot encoded vector),
pi= The predicted probability for the ith class (i.e., the output of the model’s softmax layer),
N= The total number of classes (in the present case, 7 different fish disease categories).
       
The model’s performance was evaluated by monitoring both training and validation accuracy and loss at the end of each epoch.
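For integer labels, this loss reduces to the negative log-probability the softmax assigns to the true class. The NumPy sketch below mirrors what the framework's built-in loss computes; it is illustrative, with toy probabilities.

```python
import numpy as np

def sparse_categorical_crossentropy(labels, probs, eps=1e-12):
    """Mean negative log-probability of the true class.

    labels: integer class indices, shape (batch,)
    probs:  softmax outputs, shape (batch, n_classes)
    """
    picked = probs[np.arange(len(labels)), labels]   # p_i of each true class
    return float(-np.mean(np.log(picked + eps)))     # eps guards against log(0)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = sparse_categorical_crossentropy(labels, probs)
```

A confident correct prediction (probability near 1 on the true class) contributes a loss near 0, while a confident wrong prediction is penalized heavily.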
 
Evaluation metrics
 
After training, the model’s performance was evaluated on the test set using several metrics, defined in terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). These metrics are:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)
The confusion matrix provides a detailed breakdown of the model’s performance for each class. For a two-class classification problem, it is represented as:

                 Predicted positive    Predicted negative
Actual positive         TP                    FN
Actual negative         FP                    TN
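Per-class precision, recall, F1 and accuracy can be read directly off confusion-matrix counts; a small sketch with toy numbers (not the study's results):

```python
def metrics_from_counts(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from binary confusion-matrix counts."""
    precision = tp / (tp + fp)                         # how often a positive call is right
    recall = tp / (tp + fn)                            # how many true positives are found
    f1 = 2 * precision * recall / (precision + recall) # harmonic mean of the two
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# toy counts for illustration only
p, r, f1, acc = metrics_from_counts(tp=40, fp=10, fn=20, tn=30)
```

This example shows how a class can combine high precision (0.80) with low recall (0.67), the same pattern reported below for Parasitic diseases and Saprolegniasis.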
The Receiver Operating Characteristic (ROC) curve is a graphical representation of a model’s performance as the decision threshold varies and the Area Under the Curve (AUC) summarizes this performance as a single value.
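The AUC can be computed without plotting: it equals the probability that a randomly chosen positive sample is scored higher than a randomly chosen negative one (with ties counted half). A NumPy sketch of this pairwise view, applied one-vs-rest per class in the multi-class setting:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via pairwise comparisons: P(score_pos > score_neg), ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80])
auc = roc_auc(labels, scores)   # 3 of the 4 positive/negative pairs are ranked correctly
```

An AUC of 0.5 corresponds to random guessing, which is why the values above 0.92 reported below indicate strong discriminatory power.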
 
Efficiency considerations
 
To optimize the training process, several techniques were implemented to ensure efficient use of computational resources. Caching and prefetching techniques were applied to improve the data pipeline’s performance.
 
Caching
 
Caching stores the dataset in memory after it is loaded and preprocessed for the first time. This ensures that the data is available for faster subsequent epochs, reducing the time required for loading the data during training.
 
Prefetching
 
Prefetching is used to overlap the data loading and training processes. This helps improve throughput by ensuring that the model never has to wait for data to be loaded, reducing bottlenecks in the data pipeline.
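Conceptually, prefetching decouples data loading from consumption with a small bounded buffer filled by a background thread. The study's pipeline would use the framework's built-in caching and prefetching facilities; the standard-library sketch below only illustrates the idea.

```python
import queue
import threading

def prefetch(iterable, buffer_size=2):
    """Yield items from `iterable`, loading them ahead of time in a background thread."""
    q = queue.Queue(maxsize=buffer_size)   # bounded buffer caps memory use
    done = object()                        # sentinel marking the end of the stream

    def producer():
        for item in iterable:
            q.put(item)                    # blocks when the buffer is full
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()                     # consumer rarely waits on raw I/O
        if item is done:
            return
        yield item

batches = list(prefetch(range(5)))         # stand-in for a stream of image batches
```

While the model trains on one batch, the producer thread is already loading the next, which is exactly the overlap described above.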

The performance of the CNN model was evaluated based on two different data split ratios: 80:10:10 and 70:15:15. Fig 3 presents the training and validation accuracy as well as the corresponding loss curves over 200 epochs for both configurations. For the 80:10:10 split, the model achieved a training accuracy of 93.80% with a training loss of 0.1576, while the validation accuracy reached 84.38% and the validation loss was 0.3984.

Fig 3: Training and validation accuracy and loss over 200 epochs at two data split ratios (80:10:10 and 70:15:15).


       
This result indicates that the model learned well from the training data while maintaining reasonable generalization on the validation set. In comparison, the 70:15:15 split resulted in an improved training accuracy of 98.79% and a significantly lower training loss of 0.0466. The validation accuracy also improved to 92.68%; however, the validation loss increased to 0.6901. The higher validation accuracy suggests better generalization, while the increased validation loss may indicate some degree of overfitting. The larger validation set in this split may have contributed to a more robust model evaluation but also introduced greater variability.
       
Fig 4 illustrates the confusion matrices for both split ratios, detailing the model’s ability to correctly classify different fish diseases. The model trained on the 70:15:15 split demonstrates improved classification performance, particularly for "Healthy Fish" and "Gill Disease," showing higher true positive values. However, the 80:10:10 split achieves more stable predictions, particularly for "Aeromoniasis" and "White Spot Disease," with lower misclassification rates.

Fig 4: Confusion matrix showing true and false predictions for two data split ratios: 80:10:10 and 70:15:15.


       
A noticeable observation is that the 70:15:15 model encounters more misclassifications for "Parasitic Diseases" and "Saprolegniasis," likely due to higher variance in the training data distribution. The confusion matrices further confirm that although increasing the validation set size in the 70:15:15 split provides a better generalization estimate, it also introduces instability in certain disease classifications.
       
Table 1 and 2 present the classification metrics at data split ratios of 80:10:10 and 70:15:15, respectively. At an 80:10:10 split ratio, several classes (e.g., Aeromoniasis, Gill disease, Parasitic diseases, Saprolegniasis and White spot disease) exhibit perfect or near-perfect precision (i.e., 1.0).

Table 1: Classification metrics at data split ratio 80:10:10.



Table 2: Classification metrics at data split ratio 70:15:15.


       
However, recall for Parasitic diseases is relatively low (0.4286), suggesting the model struggles to correctly identify all positive cases for this class. Healthy Fish also shows a lower precision (0.7188) but a very high recall (0.9583), indicating the model is good at detecting most healthy fish but sometimes misclassifies other classes as healthy. Overall, the 82.81% accuracy indicates solid performance but also reveals potential issues with imbalanced predictions. The high macro-average precision indicates that, on average, the model avoids false positives; however, the lower macro-average recall highlights its tendency to miss some positive samples in specific classes.
       
When the validation set is larger (70:15:15), the overall accuracy improves to 86.46% and both macro and weighted averages also increase, reflecting better overall performance. Classes like Aeromoniasis, Gill disease and Healthy Fish achieve both high precision and high recall, demonstrating that the model distinguishes these conditions effectively. However, some classes, such as Red disease and Saprolegniasis, show perfect precision (1.0) but relatively low recall (0.6667 and 0.5556, respectively), indicating that while the model rarely mislabels other diseases as Red disease or Saprolegniasis, it does fail to capture a portion of true positive cases in those classes.
       
Fig 5 shows example outputs from the model’s classification of fish diseases, highlighting the actual disease label, the model’s predicted label and the confidence score for each sample. At 80:10:10 data separation, the fish with White spot disease is correctly predicted with 99.94% confidence, the healthy fish is identified with 100% confidence and the fish with Red disease is classified at 99.55% confidence. At 70:15:15 data separation, similar high-confidence predictions for White spot disease, healthy fish and Red disease, further illustrate the model’s capability to accurately recognize these conditions.

Fig 5: Confidence score for actual and predicted diseased classes.


       
Fig 6 compares the ROC curves and AUC values for each disease class under the two data splits (80:10:10 vs. 70:15:15). The left plot (80:10:10) shows that each disease class achieves a high AUC, ranging roughly from 0.92 to 0.99. This indicates that, for every disease, the model is able to maintain a high true positive rate (TPR) while keeping the false positive rate (FPR) relatively low. In particular, Red disease exhibits the highest AUC at about 0.993, suggesting the model is especially adept at distinguishing this class from others. Conversely, Saprolegniasis and White spot disease have slightly lower AUC values but still remain above 0.92, reflecting robust performance.

Fig 6: ROC curves with AUC values.


       
In the right plot (70:15:15), the AUCs remain high, with most classes showing slight improvements over the 80:10:10 split. For instance, Aeromoniasis improves from about 0.9655 to 0.9772 and Parasitic diseases also show a notable increase (0.9500 to 0.9900). As with the 80:10:10 split, Red disease remains one of the best-classified conditions (AUC ≈ 0.9924). Although Saprolegniasis and White spot disease maintain strong AUCs, they still rank slightly lower compared to the other classes. Overall, both ROC plots confirm the model’s strong discriminatory power across all disease classes. The consistently high AUC values, well above the 0.5 random-guess threshold, illustrate that the classifier performs significantly better than chance. The slight variations between the two splits echo earlier findings: while the 70:15:15 split can yield higher accuracy and AUC for certain classes, it may also show signs of overfitting in other metrics (such as validation loss).
       
Fig 3 highlights that both splits show an upward trend in accuracy and a downward trend in loss. However, the 70:15:15 split exhibits more fluctuation in validation loss, hinting at greater sensitivity to the validation data. Although the 70:15:15 split yields a higher validation accuracy (92.68% vs. 84.38%) and a better overall accuracy (86.46% vs. 82.81%), the increased validation loss (0.6901 vs. 0.3984) suggests overfitting. This finding is further supported by the confusion matrices (Fig 4), where the 70:15:15 model correctly classifies more instances in some diseases but shows instability in others. Many classes achieve near-perfect precision, indicating the model’s caution in labeling diseases. However, certain classes (e.g., Parasitic diseases, Saprolegniasis) have lower recall, meaning the model fails to identify some positive cases. This discrepancy may be due to class imbalance or overlapping visual features among diseases.
       
Despite the potential for overfitting, the higher validation accuracy under the 70:15:15 split suggests it provides better generalization for most disease categories. To further improve the model, data augmentation and balancing techniques, or gathering additional samples for underrepresented classes (e.g., Parasitic diseases, Saprolegniasis), could help address low recall. Regularization methods such as dropout, batch normalization or weight decay can mitigate overfitting, which is particularly valuable in the 70:15:15 scenario. Adjusting decision thresholds per class could also help balance precision and recall for classes with skewed performance metrics. Overall, the results indicate that while the 70:15:15 split leads to higher accuracy and AUC values for most classes, the 80:10:10 split exhibits more stable validation loss. Selecting the optimal approach therefore depends on the specific application requirements (e.g., whether minimizing false negatives or false positives is more critical) and the available dataset size.
In conclusion, the CNN-based model effectively identifies multiple fish diseases with high accuracy, demonstrating its potential for aquaculture diagnostics. The study finds that a 70:15:15 data split improves generalization but increases validation loss, while an 80:10:10 split provides more stability with slightly lower accuracy. However, class imbalance, particularly in Parasitic diseases and Saprolegniasis, affects recall and the model shows a tendency to overfit larger validation sets. To enhance performance, future work will focus on data augmentation techniques like rotation, flipping and color jitter, along with exploring advanced architectures such as Vision Transformers and attention-based CNNs. Stronger regularization methods like dropout and batch normalization, as well as optimized decision thresholds, could improve adaptability. Ultimately, this study highlights the potential of AI-driven disease detection to revolutionize aquaculture health management by enabling early diagnosis, reducing economic losses and improving fish welfare.
 
Disclaimers
 
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided but do not accept any liability for any direct or indirect losses resulting from the use of this content.
Funding details
 
This work was supported by Hanshin University Research grant.
 
Availability of data and materials
 
Not applicable.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
Author declares that all works are original and this manuscript has not been published in any other journal.
Author declares that they have no conflict of interest.

  1. Abade, A., Ferreira, P.A. and Vidal, F. de B. (2021). Plant diseases recognition on images using convolutional neural networks: A systematic review. Computers and Electronics in Agriculture. 185: 106125. https://doi.org/10.1016/j.compag.2021.106125.

  2. Ahmed, M.S., Aurpa, T.T. and Azad, M.A.K. (2022). Fish disease detection using image based machine learning technique in aquaculture. Journal of King Saud University-Computer and Information Sciences. 34(8): 5170-5182. https://doi.org/10.1016/j.jksuci.2021.05.003.

  3. Al-Dosari, M.N.A. and Abdellatif, M.S. (2024). The environmental awareness level among Saudi women and its relationship to sustainable thinking. Acta Innovations. 52: 28-42. https://doi.org/10.62441/ActaInnovations.52.4.

  4. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. doi: 10.18805/IJAR.BF-1684.

  5. Biswas, S. (2023). Freshwater fish disease aquaculture in South Asia [Data set]. Kaggle. Retrieved November 25, 2025, from https://www.kaggle.com/datasets/subirbiswas19/freshwater-fish-disease-aquaculture-in-south-asia.

  6. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. doi: 10.18805/LRF-787.

  7. Coppola, D., Lauritano, C., Esposito, F.P., Riccio, G., Rizzo, C. and De Pascale, D. (2021). Fish waste: From problem to valuable resource. Marine Drugs. 19(2): 116. https://doi.org/10.3390/md19020116.

  8. FAO. (2024). The state of world fisheries and aquaculture. Food and Agriculture Organization of the United Nations. http://www.fao.org/state-of-fisheries-aquaculture.

  9. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  10. Hasan, N., Ibrahim, S. and Azlan, A.A. (2022). Fish diseases detection using convolutional neural network (CNN). International Journal of Nonlinear Analysis and Applications. 13(1): 1977-1984. https://doi.org/10.22075/ijnaa.2022.5839.

  11. Jensen, L.B., Boltana, S., Obach, A., McGurk, C., Waagbø, R. and MacKenzie, S. (2015). Investigating the underlying mechanisms of temperature-related skin diseases in Atlantic salmon, Salmo salar L., as measured by quantitative histology, skin transcriptomics and composition. Journal of Fish Diseases. 38(11): 977-992. https://doi.org/10.1111/jfd.12314.

  12. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM. 60(6): 84-90. https://doi.org/10.1145/3065386.

  13. LeCun, Y., Bengio, Y. and Hinton, G. (2015). Deep learning. Nature. 521(7553): 436-444. https://doi.org/10.1038/nature14539.

  14. Li, D., Li, X., Wang, Q. and Hao, Y. (2022). Advanced techniques for the intelligent diagnosis of fish diseases: A review. Animals. 12(21): 2938. https://doi.org/10.3390/ani12212938.

  15. Lugito, N.P.H., Djuwita, R., Adisasmita, A. and Simadibrata, M. (2022). Blood pressure lowering effect of lactobacillus-containing probiotic. International Journal of Probiotics and Prebiotics. 17(1): 1-13. https://doi.org/10.37290/ijpp2641-7197.17:1-13.

  16. Lyubchenko, V., Matarneh, R., Kobylin, O. and Lyashenko, V. (2016). Digital image processing techniques for detection and diagnosis of fish diseases. International Journal of Advanced Research in Computer Science and Software Engineering. 6: 79-83.

  17. Malik, S., Kumar, T. and Sahoo, A. (2017). A novel approach to fish disease diagnostic system based on machine learning. Advances in Image and Video Processing. 5(1). https://doi.org/10.14738/aivp.51.2809.

  18. Penttinen, A., Parkkinen, I., Blom, S., Kopra, J., Andressoo, J., Pitkänen, K., Voutilainen, M.H., Saarma, M. and Airavaara, M. (2018). Implementation of deep neural networks to count dopamine neurons in substantia nigra. European Journal of Neuroscience. 48(6): 2354-2361. https://doi.org/10.1111/ejn.14129.

  19. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks. 61: 85-117. https://doi.org/10.1016/j.neunet.2014.09.003.

  20. Siddiqui, S.A., Salman, A., Malik, M.I., Shafait, F., Mian, A., Shortis, M.R. and Harvey, E.S. (2017). Automatic fish species classification in underwater videos: Exploiting pre-trained deep neural network models to compensate for limited labelled data. ICES Journal of Marine Science. 75(1): 374-389. https://doi.org/10.1093/icesjms/fsx109.

  21. Svein, L., Timmerhaus, G., Johansen, L. and Ytteborg, E. (2021). Deep neural network analysis - a paradigm shift for histological examination of health and welfare of farmed fish. Aquaculture. 532: 736024. https://doi.org/10.1016/j.aquaculture.2020.736024.

  22. Untari, D.T. and Satria, B. (2022). Repositioning culinary “Betawi Ora” as Bekasi eco-culinary tourism icon. International Journal of Environmental Sciences. 8(2): 15-24.

  23. Wihardjo, R.S.D., Muktiono, A. and Suwanda. (2024). Exploring critical thinking in environmental education: A PBL approach to reaction rate material at public senior high schools in Indonesia. International Journal of Environmental Sciences. 10(1): 14-25.

  24. Zhao, T., Jiang, X., Yu, L., Zhang, W., Zhan, H., Wang, Q., Zhang, X. and Xu, F. (2024). Comparison of nutritional status between migrant and nonmigrant school-age children in Kunming, Yunnan Province, China. Current Topics in Nutraceutical Research. 22(2): 617-623. https://doi.org/10.37290/ctnr2641-452X.22:617-623.
Published in: Indian Journal of Animal Research.