Exploration and Implementation of Computational Convolutional Neural Network Model for Poultry Disease Detection

Ok-Hue Cho1,*
1Sangmyung University, Jongno-gu, Seoul, Republic of Korea.

Background: Despite the remarkable growth of the poultry industry over the last two decades, significant challenges remain. Production losses continue to occur due to disease outbreaks. The high stocking density and confinement of fowl lead to widespread infections and a sharp decline in farm productivity. Rapid detection of diseases to prevent such outbreaks requires constant monitoring with minimal dependence on human intervention. Computer vision and AI-enabled algorithms facilitate timely screening and surveillance of the flock to prevent disease spread.

Methods: In this work, a sequential CNN model is used to explore the classification of diseases in poultry. The dataset is sourced from the Zenodo database and contains 7,721 images. The preprocessing of the images is done via the Keras image dataset generator. The model focuses on the identification of three distinct classes of diseases from faecal images of chicken droppings. The use of a smaller convolutional filter minimises the network’s tendency to overfit during training.

Result: The study evaluated the efficacy of the classification model against multiple criteria, such as dataset features, hyperparameters and model architecture. The model achieves a remarkable overall accuracy of 94.32%, indicating strong categorisation capability across several categories. Although overall accuracy is high, performance varies across classes, suggesting room for improvement in some areas.

Research on agriculture and animals continues to prioritise ensuring food security and providing enough healthy nourishment for the world’s growing population. The demand for land and resources increases as the population grows toward an estimated 9.6 billion by mid-century. There is an apparent transition towards animal-based dietary proteins in light of these problems and the growing emphasis on sustainability (FAO, 2022). Forecasts indicate that global consumption of meat and chicken will rise significantly, with an approximate 70% increase by 2050 (NAP, 2022; Koike et al., 2023; Zhao et al., 2024).
       
According to the Economic Survey 2022-23, India’s livestock industry, including dairy, poultry meat, eggs and fisheries, grew at an annual rate of 7.9% between 2014-15 and 2020-21 (FAO, 2022). Additionally, the sector’s share of agricultural GVA increased from 24.3% in 2014-15 to 30.1% in 2020-21. Innovations in technology and improvements in commercial production processes have significantly changed the chicken business in particular. This increase has been facilitated by improved feed formulas, automated feeding systems, temperature controls and cutting-edge disease management techniques. Innovative methods for monitoring animal health are needed as the poultry business develops. The needs of an increasingly organised and technically advanced business cannot be addressed by the existing dependence on manual, labour-intensive techniques. For this reason, improving productivity and assuring the long-term viability of chicken farming in India require the development and adoption of automated and data-driven health monitoring systems.
       
Surve et al., (2023) employed transfer learning using VGG16, ResNet50 and MobileNet CNNs to enhance chicken disease detection and classification. The proposed model achieved a reported accuracy of 98%. The dataset, collected from poultry farms and open sources, consisted of chicken images classified as healthy or sick. A restricted dataset of 78 images was used to train the model. The study favoured a VGG16-based model, but detection was constrained to whole-chicken images in which only morphological observations can be classified. Most diseases exhibit visible symptoms only at a late stage, whereas digestive signs such as diarrhoea and variations in dropping colour and consistency can be detected early. Nakrosis et al., (2023) also employed image-processing algorithms to classify faecal images into six distinct categories. Their multi-class classification combined the K-means algorithm with YOLOv5 to achieve an accuracy of around 92%. The work shows significant improvement in classification tasks but offers low disease-specific predictability for immediate remedies. Several other approaches, including unsupervised and supervised ML techniques, have been utilised for poultry-dropping classification, yet the complexity of multi-class classification makes the overall task very challenging. Still, deep learning architectures exhibit pronounced potential in this context. The current paper presents a sequential CNN model capable of automatically learning complex data patterns, and the results are promising. Further, massive parallelisation of computations is readily achievable in models based on CNNs and RNNs (Nyalala et al., 2020; Junaidi et al., 2021; Vrindavanam et al., 2024; Semara et al., 2024; Maltare et al., 2023; AlZubi, 2023; Hai and Duong, 2024). Another similar study by Okinda et al., (2019) was based on a machine-vision monitoring system for chickens.
The extraction of features was done via 2D posture shape descriptors and mobility features. The model, developed on a Support Vector Machine (SVM) with a radial basis kernel function, showed an accuracy of 0.975. Though the model was promising, as it provided early warning and prevention of disease outbreaks in a non-invasive way, it still needed validation across different chicken breeds and infection types. The dataset was limited, and the reported accuracy values therefore reflect this constraint. Various industries, including the animal industry, extensively use ML techniques to handle huge amounts of data via artificial-intelligence-based computations (Kim and AlZubi, 2024; Min et al., 2024).
       
The sequential CNN model was selected for this study due to its effectiveness in image-based classification tasks, particularly in disease detection. CNNs are well-suited for feature extraction from images, as they automatically learn spatial hierarchies of features, making them ideal for analyzing faecal images in poultry disease diagnosis. The sequential model, in particular, provides a straightforward and efficient architecture that allows for easy layer stacking, optimization and deployment. Several DL models can be used for image classification. ResNet solves vanishing gradient issues with skip connections but increases computational complexity. VGGNet uses deep layers with small kernel sizes, ensuring high accuracy but needing more resources. EfficientNet balances accuracy and efficiency through compound scaling but requires more memory and processing power. Vision Transformers (ViT) perform well on large datasets but need extensive training and high computation. CNNs use convolutional layers to automatically extract spatial features, reducing the need for manual feature engineering. Unlike traditional ML models, CNNs capture local patterns and spatial hierarchies, enhancing classification accuracy. Compared to transformer-based models, CNNs are computationally more efficient for moderate-sized datasets and real-time applications. CNNs outperform classical ML approaches (e.g., SVM, Random Forest) in image-based tasks due to their ability to recognize patterns and textures directly from raw images. In this study, the sequential CNN model was chosen for its balance between accuracy, computational efficiency and ease of implementation, making it a practical choice for poultry disease detection.
               
This study aims to develop and evaluate a CNN-based model for detecting common poultry diseases using chicken faecal images. A diverse dataset was sourced from the online database, including samples from different breeds affected by Newcastle Disease, Coccidiosis and Salmonella. The research focuses on implementing and fine-tuning a sequential CNN model with transfer learning for improved classification. The model’s performance is assessed using accuracy, precision, recall and F1-score.
Data source
 
A diverse dataset is essential for training and assessing machine learning models. It facilitates the development of precise algorithms that are important for the identification and classification of diseases in chickens. The reliability and variety of the dataset enhance the generalisation and practical use of the model. The open-source Zenodo database provided the collection of chicken faecal images utilised in the present study (Machuve et al., 2021). The dataset includes images of both healthy chickens and those diseased with Coccidiosis, Newcastle disease and Salmonella (Fig 1). The dataset includes faecal images from layers, crosses and native chicken breeds. These breeds are more susceptible to infections due to their longer lifespans: up to 18 months on farms, compared to about 5 weeks for broiler breeds. The poultry faecal images were collected from the Arusha and Kilimanjaro regions in Tanzania between September 2020 and February 2021 using the Open Data Kit (ODK) app on mobile phones. Images of normal faecal material, representing the “healthy” class, and those affected by Coccidiosis, representing the “cocci” class, were taken from poultry farms. Chickens were inoculated with Salmonella and faecal images for the “salmo” class were captured after one week. Similarly, chickens were inoculated with Newcastle disease and faecal images for the “ncd” class were taken within three days. The chicken dataset contains a total of 6,812 images. The dataset was split into 80% for training and 20% for testing in the experiments. Additionally, 10% of the training set was reserved for validation.
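The 80/20 train/test split with a further 10% validation hold-out can be sketched as simple arithmetic (a minimal illustration; how fractional counts are rounded in practice is an assumption of this sketch):

```python
# Sketch of the dataset split described above: 80% train / 20% test,
# then 10% of the training set held out for validation.
# The total of 6,812 images comes from the dataset description;
# truncating fractional counts with int() is an assumption here.
def split_counts(n_total, test_frac=0.20, val_frac=0.10):
    n_test = int(n_total * test_frac)   # 20% reserved for testing
    n_train = n_total - n_test          # remaining 80% for training
    n_val = int(n_train * val_frac)     # 10% of training for validation
    n_train -= n_val
    return n_train, n_val, n_test

train, val, test = split_counts(6812)
print(train, val, test)  # → 4905 545 1362
```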

Fig 1: Healthy and diseased chickens.


 
Preparation of data for modeling (Preprocessing)
 
The first stage of image-based machine learning is preprocessing, which improves image quality by removing unwanted noise. To ensure uniformity, images are resized to 256 by 256 pixels, which helps the model analyze data more effectively. The next step is normalization, which scales the pixel values of all images to the range 0 to 1 (Fig 2). This procedure improves convergence during model training. Each image is tagged with its corresponding health category, making it possible to distinguish between four classes: Newcastle, Salmonella, Coccidiosis and Healthy.
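The normalization step can be illustrated with a short NumPy sketch (the random image below is a stand-in for a real faecal photograph; in the actual pipeline the Keras image dataset generator performs this rescaling):

```python
import numpy as np

# Minimal sketch of the normalization step: 8-bit pixel values in
# [0, 255] are rescaled to [0, 1]. The 256x256x3 shape matches the
# resized images described above; the random image is a placeholder.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

normalised = image.astype(np.float32) / 255.0

print(normalised.shape, float(normalised.min()), float(normalised.max()))
```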

Fig 2: Data normalization process.


       
Data augmentation is the process of creating additional images from an existing dataset to increase its variety and diversity. This method involves flipping, rotating, shifting and zooming images. Data augmentation is essential for reducing overfitting and enhancing the model’s capacity to generalize to new data.
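The flip and rotation transforms can be sketched conceptually with NumPy (in the actual pipeline the Keras image data generator applies these on the fly; shift and zoom are omitted here for brevity):

```python
import numpy as np

# Conceptual sketch of the augmentation transforms described above.
# Each input image yields several geometric variants.
def augment(image):
    yield np.fliplr(image)      # horizontal flip
    yield np.flipud(image)      # vertical flip
    yield np.rot90(image)       # 90-degree rotation

# A tiny 2x2 single-channel image stands in for a real photograph.
image = np.arange(2 * 2 * 1).reshape(2, 2, 1)
variants = list(augment(image))
print(len(variants))  # → 3
```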
 
Computational convolutional neural network (CNN) details
 
The computational CNN model with a sequential architecture is used to classify the images. Built on a pre-trained feature extractor, this approach scales well to large datasets.
 
Transfer learning
 
This computational method applies transfer learning, i.e., employing a pre-trained CNN model as a feature extractor. During this step, only the weights of the later layers are updated for the poultry disease dataset; the weights of the first few layers remain unaltered. In this way, knowledge acquired from a huge dataset can be used to train the model efficiently. The detailed procedure of this CNN model approach is shown in Fig 3.

Fig 3: Detailed procedure of computational convolutional neural network model for disease prediction.
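The freeze-early-layers idea above can be shown schematically (a pure-Python sketch, not the actual Keras code; the layer names are hypothetical placeholders):

```python
# Schematic illustration of the transfer-learning step: early layers
# keep their pre-trained weights (frozen), while later layers remain
# trainable on the poultry dataset. Layer names are hypothetical.
layers = [
    {"name": "conv_block_1", "trainable": False},  # frozen pre-trained features
    {"name": "conv_block_2", "trainable": False},  # frozen pre-trained features
    {"name": "dense_head",   "trainable": True},   # fine-tuned on poultry images
    {"name": "classifier",   "trainable": True},   # fine-tuned on poultry images
]

# Only the trainable layers receive weight updates during training.
updated = [layer["name"] for layer in layers if layer["trainable"]]
print(updated)  # → ['dense_head', 'classifier']
```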


       
The model setup begins by defining parameters: an input image size of 256 × 256 pixels and a batch size of 32, the number of images processed in each training step. The training process runs for 25 epochs, i.e., 25 complete passes over the dataset.
       
The Sequential model defines the architecture, alternating Conv2D layers for feature extraction with MaxPooling2D layers for spatial dimension reduction (Fig 4). There are five convolutional layers (conv2d_1 through conv2d_5) and five max-pooling layers (max_pooling2d_1 through max_pooling2d_5). A Flatten layer converts the multi-dimensional feature maps into a one-dimensional array, preparing them for the Dense layers. The Dense layers handle classification and produce the output predictions. A thorough analysis of the model reveals 2,124,996 trainable parameters, which are the weights and biases adjusted during training. The careful design and precise parameterization (Table 1) highlight the model’s capacity to efficiently interpret and categorize images of poultry disease.
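The trainable-parameter total is the sum of per-layer counts; for a Conv2D layer the count follows (kernel_h × kernel_w × in_channels + 1) × filters. A small sketch with hypothetical filter sizes (the model’s actual configuration is listed in Table 1):

```python
# Trainable parameter count for one Conv2D layer:
# (kernel_h * kernel_w * in_channels + 1) * filters, the +1 being the bias.
# The kernel and filter numbers below are illustrative assumptions,
# not the exact values from Table 1.
def conv2d_params(kernel, in_channels, filters):
    return (kernel * kernel * in_channels + 1) * filters

# e.g. a 3x3 convolution mapping a 3-channel RGB input to 32 feature maps:
print(conv2d_params(3, 3, 32))    # → 896
# and a following 3x3 layer mapping those 32 maps to 64:
print(conv2d_params(3, 32, 64))   # → 18496
```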

Fig 4: Convolutional neural network architecture.



Table 1: List of hyperparameters used for computation.


 
Performance parameters
 
Several measures, including F1-score, Accuracy, Recall and Precision, are used to assess the model’s performance. These metrics are computed from the counts of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). Accuracy is the ratio of correctly classified samples to all samples. Precision is the ratio of correctly predicted positive outcomes to all positive predictions. Recall quantifies the proportion of correctly predicted positive outcomes among all actual positives. The F1-score balances Precision and Recall into a single statistic that summarizes model performance.

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)
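The four metrics can be computed directly from the confusion counts (a minimal sketch; the TP/TN/FP/FN values below are illustrative, not the study’s actual numbers):

```python
# Performance metrics from confusion counts (TP, TN, FP, FN).
# The counts passed in below are illustrative examples only.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=90, tn=85, fp=10, fn=15)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.875 0.9 0.857 0.878
```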
The model’s ability depends on many variables, including the features of the dataset, the selected hyperparameters and the details of the classification work. To get the best outcomes, experimentation and fine-tuning with various designs and settings can be required.
       
The model’s performance metrics after the 25th training epoch were as follows: the loss value, representing the average loss computed over the entire set of training samples, was 0.1582 (Fig 5); a lower loss value indicates better performance. The model correctly identified roughly 94.19% of the training samples, as shown by the accuracy of 0.9419. On validation, the accuracy was 0.9405 and the loss value was 0.1797. These metrics show that the model performed well, with reasonably low loss values and excellent accuracy on both the training and unseen validation data.

Fig 5: Model training and validation accuracy and loss curves.


       
Fig 6 displayed a set of pictures depicting predicted and actual diseases. This provided information about the trained model’s efficacy and degree of confidence. The predictions shown in the images reflected accuracy and certainty in their values. This visual depiction made it easier to determine the model’s effectiveness in identifying diseases.

Fig 6: Sample predictions of the model.


       
The model’s performance in identifying various categories was shown visually in the confusion matrix (Fig 7). With only one incorrect classification out of 220 cases, the model demonstrated excellent accuracy in detecting coccidiosis. However, it misclassified 15 Healthy samples, indicating lower accuracy for this category. Newcastle disease showed moderate performance, with multiple misclassifications across different categories. In contrast, the model exhibited very low misclassification rates and performed exceptionally well in recognizing Salmonella. Overall, while the model accurately classified coccidiosis and Salmonella, it struggled more with distinguishing Newcastle disease and Healthy samples.
       
The parameters for model evaluation are summarized in Table 2. The precision for coccidiosis was high at 97.77%, suggesting a low rate of false positives. Recall, representing the model’s ability to detect true positives, was also strong at 99.55%, indicating that most cases of coccidiosis were correctly identified. The F1-score was similarly high at 98.65%, reflecting well-balanced performance. The support, representing the total number of instances in each class, was 220, showing that the dataset was reasonably balanced.
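The reported F1-score follows directly from the stated precision and recall as their harmonic mean, which can be verified for the coccidiosis class:

```python
# The F1-score is the harmonic mean of precision and recall.
# Plugging in the coccidiosis values reported above (precision 97.77%,
# recall 99.55%) reproduces the stated F1-score of 98.65%.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(97.77, 99.55), 2))  # → 98.65
```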

Table 2: Classification matrices for all classes.


       
Conversely, for healthy samples, the recall was slightly lower at 92.79%, indicating some false negatives, but the precision was high at 94.50%, showing accurate positive predictions. As a result, the F1-score was 93.64%. The support for healthy samples was 222, suggesting that the dataset for this class was well-balanced. The precision for Newcastle disease was much lower at 81.82%, indicating a higher rate of false positives. Moreover, the recall was poor at 32.14%, reflecting a high proportion of false negatives. As a result, the disparity between recall and accuracy was evident in the comparatively low F1-score of 46.15%. The support for Newcastle disease was 28, showing fewer cases in this class.
       
Salmonella had a high recall rate of 98.29%, indicating that real positives were well captured and a high precision rate of 91.63%, suggesting minimal false positives. Consequently, the F1-score stood at 94.85%, demonstrating strong overall performance. The support for Salmonella was 234, suggesting a well-balanced dataset for this class. The model’s overall accuracy across all classes was 94.32%, demonstrating its ability to categorize a large portion of the dataset accurately. The macro average showed balanced performance across classes, with a precision of 91.43%, recall of 80.69% and an F1-score of 83.32%. The weighted average had a precision of 94.06%, recall of 94.32% and an F1-score of 93.72%, illustrating the model’s performance while considering class imbalance.
       
A comparative table (Table 3) presents the findings of this study alongside previous research, highlighting the accuracy levels of different models used for poultry disease detection. Wang et al., (2019) employed a DCNN-based approach to identify digestive disorders in broiler chickens using Faster R-CNN and YOLO-V3. Their study achieved a recall rate of 99.1% and a mean average precision (mAP) of 93.3% with Faster R-CNN, while YOLO-V3 attained a recall rate of 88.7% and an mAP of 84.3%. Similarly, Mbelwa et al., (2021) developed a CNN-based model to classify chicken feces into three disease categories, with the XceptionNet model achieving 94% accuracy, slightly outperforming a fully trained CNN model with 93.67% accuracy. Okinda et al., (2019) used a machine vision system integrating video surveillance and depth cameras to track movement and posture-related features for disease prediction. Their SVM-based model achieved accuracies of 97.5% and 97.8% when incorporating all feature variables.

Table 3: Comparison of presented work with literature.


       
In comparison, the Sequential CNN model in this study attained an accuracy of 94.32%, which is competitive with the reported CNN-based approaches. However, unlike studies that incorporated additional features such as movement patterns (Okinda et al., 2019) or optimized anchor boxes (Wang et al., 2019), our approach relies solely on fecal images. While this simplifies implementation and reduces the need for complex hardware setups, it also introduces limitations, as fecal color alone is not always a reliable disease indicator. The results indicate that model selection and feature engineering significantly impact classification accuracy. Approaches integrating multiple features, such as movement analysis or anchor box optimization, have demonstrated superior performance.
       
However, there are limitations to the approach. Fecal color as a diagnostic indicator is influenced by various factors, such as diet, stress, lighting conditions and the presence of multiple diseases that cause similar color changes. Moreover, mixed droppings from group-reared poultry make it difficult to attribute specific colors to individual birds. As such, fecal color alone is not a definitive diagnostic tool and should be viewed as a complementary method. Future work will focus on integrating additional parameters, such as clinical symptoms, behavioral observations and microbiological testing, to improve the system’s robustness and reliability. Additionally, expanding the dataset and testing the model in more diverse poultry environments will help enhance its generalization capabilities.
The proposed technique aimed to address challenges associated with human error in external examinations and the complexity of laboratory testing in poultry disease diagnosis. It was designed to assist in identifying common poultry diseases, such as Newcastle Disease, Coccidiosis and Salmonella, using images of chicken feces. While this method may support farmers and veterinarians in disease detection, it should be considered a complementary tool rather than a replacement for traditional diagnostic methods, including clinical signs, post-mortem analysis and laboratory tests. Future research should focus on expanding the dataset by incorporating more diseases and integrating additional parameters, such as clinical symptoms and microbiological testing, to enhance accuracy and reliability.
Funding details
 
This research was funded by a 2024 Research Grant from Sangmyung University (2024-A000-0089).
 
Data availability
 
The data analysed/generated in the present study will be made available by the corresponding author upon reasonable request.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
The author declares that this work is original and that the manuscript has not been published in any other journal.
The author declares no conflict of interest.

  1. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. doi: 10.18805/IJAR.BF-1684.

  2. FAO. (2022). World Food and Agriculture-Statistical Yearbook 2022. Rome. https://doi.org/10.4060/cc2211en.

  3. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  4. Junaidi, A., Lasama, J., Adhinata, F.D. and Iskandar, A.R. (2021). Image Classification for Egg Incubator using Transfer Learning of VGG16 and VGG19. In: IEEE International Conference on Communication, Networks and Satellite (COMNETSAT). Purwokerto, Indonesia. (pp. 324-328). https://doi.org/10.1109/COMNETSAT53002.2021.9530826.

  5. Kim, S.Y. and AlZubi, A.A. (2024). Blockchain and artificial intelligence for ensuring the authenticity of organic legume products in supply chains. Legume Research. 47(7): 1144-1150. doi: 10.18805/LRF-786.

  6. Koike, T., Yamamoto, S., Furui, T., Miyazaki, C., Ishikawa, H. and Morishige, K.I. (2023). Evaluation of the relationship between equol production and the risk of locomotive syndrome in very elderly women. International Journal of Probiotics and Prebiotics. 18(1): 7-13. https://doi.org/10.37290/ijpp2641-7197.18:7-13.

  7. Machuve, D., Nwankwo, E., Mduma, N., Mbelwa, H., Maguo, E. and Munisi, C. (2021). Machine learning dataset for poultry diseases diagnostics (Version 2) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4628934.

  8. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujrat, India. International Journal of Environmental Sciences. 9(1): 1-17. https://www.theaspd.com/resources/v9-1-1-Nilesh%20N.%20Maltare.pdf.

  9. Mbelwa, H., Mbelwa, J. and Machuve, D. (2021). Deep convolutional neural network for chicken diseases detection. International Journal of Advanced Computer Science and Applications. 12(2). https://doi.org/10.14569/IJACSA.2021.0120295.

  10. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. 58(10): 1793-1798. doi: 10.18805/IJAR.BF-1742.

  11. Nakrosis, A., Paulauskaite-Taraseviciene, A., Raudonis, V., Narusis, I., Gruzauskas, V., Gruzauskas, R. and Lagzdinyte-Budnike, I. (2023). Towards early poultry health prediction through non-invasive and computer vision-based dropping classification. Animals. 13: 3041. https://doi.org/10.3390/ani13193041.

  12. Nyalala, I., Okinda, C., Korohou, T., Wang, J., Achieng, T., Wamalwa, P., Mang, T. and Shen, M. (2020). A review on computer vision systems in monitoring of poultry: A welfare perspective. Artificial Intelligence in Agriculture. 4: 184-208. https://doi.org/10.1016/j.aiia.2020.09.002.

  13. NAP. (2022). National Action Plan for Egg and Poultry-2022 for Doubling Farmers’ Income by 2022. Department of Animal Husbandry, Dairying and Fisheries, Ministry of Agriculture and Farmers Welfare, Government of India.

  14. Okinda, C., Lu, M., Liu, L., Nyalala, I., Muneri, C., Wang, J., Zhang, H. and Shen, M. (2019). A machine vision system for early detection and prediction of sick birds: A broiler chicken model. Biosystems Engineering. 188: 229-242. https://doi.org/10.1016/j.biosystemseng.2019.09.015.

  15. Semara, I.M.T., Sunarta, I.N., Antara, M., Arida, I.N.S. and Wirawan, P.E. (2024). Tourism sites and environmental reservation. International Journal of Environmental Sciences. 10(1): 44-55. https://www.theaspd.com/resources/4.%20Tourism%20Sites%20and%20Environmental%20Reservation%20objects.pdf.

  16. Srivastava, K. and Pandey, P. (2023). Deep learning-based classification of poultry disease. International Journal of Automation and Smart Technology. 13(1): 24-39. https://doi.org/10.5875/ausmt.v13i1.2439.

  17. Surve, J., Kanwade, B. and Patil, S. (2023). Enhancing chicken disease detection and classification using transfer learning and convolutional neural networks. Journal of Southwest Jiaotong University. 58(1): 1462-1473.

  18. Vrindavanam, J., Kumar, P., Kamath, G., Chandrashekhar, N. and Patil, G. (2024). Poultry disease identification in faecal images using vision transformer. Medicon Agriculture and Environmental Sciences. 6(1): 5-15. https://doi.org/10.55162/MCAES.06.150.

  19. Wang, J., Shen, M., Liu, L., Xu, Y. and Okinda, C. (2019). Recognition and classification of broiler droppings based on deep convolutional neural network. Journal of Sensors. pp. 1-10. https://doi.org/10.1155/2019/3823515.

  20. Zhao, T., Jiang, X., Yu, L., Zhang, W., Zhan, H., Wang, Q., Zhang, X. and Xu, F. (2024). Comparison of nutritional status between migrant and nonmigrant school-age children in Kunming, Yunnan Province, China. Current Topics in Nutraceutical Research. 22(2): 617-623. https://doi.org/10.37290/ctnr2641-452X.22:617-623.

Exploration and Implementation of Computational Convolutional Neural Network Model for Poultry Disease Detection

O
Ok-Hue Cho1,*
1Sangmyung University, Jongno-gu, Seoul, Republic of Korea.

Background: Despite a remarkable growth track of the poultry industry over the last two decades, the challenges remain significantly dominant. The production decreased due to disease outbreaks. The stacking and confinements of fowl lead to widespread infections and a sharp decline in the productivity of farms. Rapid detection of diseases to prevent such outbreaks requires constant monitoring with minimum dependence on human intervention. Computer vision and the use of AI-enabled algorithms facilitate timely screening and surveillance of the flock to prevent disease spread.

Methods: In this work, the sequential CNN model is used to explore the classification of diseases in poultry. The dataset is sourced from the Zenodo database and contains nearly 7,721 images. The preprocessing of the images is done via the Keras image dataset generator. The model focuses on the identification of three distinct classes of diseases from the faecal images of the chicken droppings. The use of a smaller convolutional filter minimises the network’s tendency to overfit during training.

Result: The study evaluated the efficacy of a classification model by considering many criteria such as dataset features, hyperparameters and model architecture. The model has a remarkable overall accuracy of 94.32%, indicating its strong categorisation capabilities across several categories. Although the model has a high overall accuracy, it exhibits differing degrees of performance across several classes, suggesting that there is potential for development in some areas.

Research on agriculture and animals continues to prioritise ensuring food security and providing enough healthy nourishment for the world’s growing population. The demand for land and resources increases as the population grows, reaching an estimated 9.6 billion by mid-century. There is an apparent transition towards animal-based dietary proteins in light of these problems and the growing emphasis on sustainability (FAO, 2022). Forecasts indicate that global consumption of meat and chicken will rise significantly, with an approximate 70% increase by 2050 (NAP, 2022; Koike et al., 2023; Zhao et al., 2024).
       
According to the Economic Survey 2022-23, India’s livestock industry, including dairy, poultry meat, eggs and fisheries, grew at an annual rate of 7.9% between 2014-15 and 2020-21 (FAO, 2022). Additionally, the sector’s share of agricultural GVA increased from 24.3% in 2014-15 to 30.1% in 2020-21. Innovations in technology and improvements in commercial production processes have significantly changed the chicken business in particular. This increase has been facilitated by improved feed formulas, automated feeding systems, temperature controls and cutting-edge disease management techniques. Innovative methods for monitoring animal health are needed as the poultry business develops. The needs of an increasingly organised and technically advanced business cannot be addressed by the existing dependence on manual, labour-intensive techniques. For this reason, improving productivity and assuring the long-term viability of chicken farming in India need the development and use of automated and data-driven health monitoring systems.
       
Surve et al., (2023) employed transfer learning using VGG16, ResNet50 and MobileNet CNN to enhance chicken disease detection and classification. The proposed model achieved an accuracy of 98% as reported. The dataset was collected from poultry farms and open sources, consisting of chicken images classified as healthy and sick. A restricted dataset of 78 images was used to train the model. The study predicted the use of a VGG16-based model but the approach to detection was constrained by chicken images where only morphological observations can be classified. Most diseases would exhibit visible symptoms lately while digestive symptoms like diarrhoea and droppings colour and consistency variations are early to detect. The published paper by Nakrosis et al., (2023) also employed image-processing algorithms to classify faecal images into six distinct categories. The multi-class classification utilised the K-means algorithm with YOLOv5 to achieve an accuracy of around 92%. The work shows significant improvement in the classification tasks but has low disease-specific predictability for immediate remedies. Several other approaches including unsupervised and supervised ML techniques have been utilised for poultry-dropping classification yet the complexity of the multi-class classification makes the overall task very challenging. Still, the deep learning architecture exhibits pronounced potential in this context. The current paper also presents a VGG16-based CNN model capable of performing automatic learning from complex data patterns. The results are quite prospective. Further, massive parallelisation of computations is easily possible in the models developed based on CNN and RNN (Nyalala et al., 2020, Junaidi et al., 2021, Vrindavanam et al., 2024; Semara et al., 2024; Maltare et al., 2023; AlZubi, 2023; Hai and Duong, 2024). Another similar study by Okinda et al., (2019) was based on a machine-vision-based monitoring system for chickens. 
Features were extracted via 2D posture shape descriptors and mobility features, and a Support Vector Machine (SVM) with a radial basis kernel achieved an accuracy of 0.975. Although the model was promising, providing non-invasive early warning and prevention of disease outbreaks, it still needed validation across different chicken breeds and infection types; the limited dataset also constrained the reported accuracy values. Various industries, including the animal industry, extensively use ML techniques to handle huge amounts of data via artificial-intelligence-based computation (Kim and AlZubi, 2024; Min et al., 2024).
       
The sequential CNN model was selected for this study due to its effectiveness in image-based classification tasks, particularly in disease detection. CNNs are well-suited for feature extraction from images, as they automatically learn spatial hierarchies of features, making them ideal for analyzing faecal images in poultry disease diagnosis. The sequential model, in particular, provides a straightforward and efficient architecture that allows for easy layer stacking, optimization and deployment. Several DL models can be used for image classification. ResNet solves vanishing gradient issues with skip connections but increases computational complexity. VGGNet uses deep layers with small kernel sizes, ensuring high accuracy but needing more resources. EfficientNet balances accuracy and efficiency through compound scaling but requires more memory and processing power. Vision Transformers (ViT) perform well on large datasets but need extensive training and high computation. CNNs use convolutional layers to automatically extract spatial features, reducing the need for manual feature engineering. Unlike traditional ML models, CNNs capture local patterns and spatial hierarchies, enhancing classification accuracy. Compared to transformer-based models, CNNs are computationally more efficient for moderate-sized datasets and real-time applications. CNNs outperform classical ML approaches (e.g., SVM, Random Forest) in image-based tasks due to their ability to recognize patterns and textures directly from raw images. In this study, the sequential CNN model was chosen for its balance between accuracy, computational efficiency and ease of implementation, making it a practical choice for poultry disease detection.
               
This study aims to develop and evaluate a CNN-based model for detecting common poultry diseases using chicken faecal images. A diverse dataset was sourced from the online database, including samples from different breeds affected by Newcastle Disease, Coccidiosis and Salmonella. The research focuses on implementing and fine-tuning a sequential CNN model with transfer learning for improved classification. The model’s performance is assessed using accuracy, precision, recall and F1-score.
Data source
 
A diverse dataset is essential for training and assessing machine learning models, as it facilitates the development of precise algorithms for the identification and classification of diseases in chickens. The reliability and variety of the dataset enhance the generalisation and practical use of the model. The chicken faecal images used in the present study were obtained from the open-source Zenodo database (Machuve et al., 2021). The dataset includes images of both healthy and diseased chickens with Coccidiosis, Newcastle disease and Salmonella (Fig 1), drawn from layers, crosses and native chicken breeds. These breeds are more susceptible to infections because of their longer lifespans: up to 18 months on farms, compared with about 5 weeks for broiler breeds. The poultry faecal images were collected from the Arusha and Kilimanjaro regions of Tanzania between September 2020 and February 2021 using the Open Data Kit (ODK) app on mobile phones. Images of normal faecal material, representing the “healthy” class, and those affected by Coccidiosis, representing the “cocci” class, were taken from poultry farms. Chickens were inoculated with Salmonella and faecal images for the “salmo” class were captured after one week; similarly, chickens were inoculated with Newcastle disease virus and faecal images for the “ncd” class were taken within three days. The dataset contains a total of 6,812 images. It was split into 80% for training and 20% for testing in the experiments, and 10% of the training set was additionally reserved for validation.
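As a minimal sketch of the split described above (assuming counts are rounded to whole images; the exact partition may differ by an image or two depending on rounding), the 80/20 train/test split with a 10% validation hold-out works out as follows:

```python
# Approximate partition sizes for the 6,812-image dataset:
# 80/20 train/test split, then 10% of the training pool
# held out for validation.
total_images = 6812

test_size = round(total_images * 0.20)   # held-out test set
train_full = total_images - test_size    # remaining training pool
val_size = round(train_full * 0.10)      # validation hold-out
train_size = train_full - val_size       # final training set

print(train_size, val_size, test_size)   # 4905 545 1362
```

Under these assumptions the model trains on roughly 4,905 images, validates on 545 and tests on 1,362.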

Fig 1: Healthy and diseased chickens.


 
Preparation of data for modeling (Preprocessing)
 
The first stage of image-based machine learning is preprocessing, which improves image quality by removing unwanted noise. To ensure uniformity, images are resized to 256 by 256 pixels, which helps the model analyze data more effectively. The next step is normalization, scaling the pixel values of all images into the range 0 to 1 (Fig 2). This procedure improves convergence during model training. Each image is tagged with its corresponding health category, making it possible to distinguish between four classes: Newcastle, Salmonella, Coccidiosis and Healthy.
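The normalization step can be sketched with plain NumPy (a simplified stand-in for the Keras image pipeline; the sample array below is synthetic, standing in for an image already resized to 256 × 256):

```python
import numpy as np

# Synthetic stand-in for a loaded RGB image resized to 256 x 256.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Normalization: map 8-bit pixel values [0, 255] into [0, 1],
# which stabilises gradients and speeds up convergence.
img_norm = img.astype("float32") / 255.0
```

In a Keras pipeline the same rescaling is typically expressed as `ImageDataGenerator(rescale=1./255)`.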

Fig 2: Data normalization process.


       
Data augmentation is the process of creating additional images from an existing dataset to increase its variety. The method involves flipping, rotating, shifting and zooming images. Data augmentation is essential for reducing overfitting and enhancing the model’s capacity to generalize to new data.
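Two of these augmentations (horizontal flips and quarter-turn rotations) can be sketched in plain NumPy; this is a deliberately simplified stand-in, since in practice the Keras `ImageDataGenerator` provides continuous `rotation_range`, `width_shift_range`, `height_shift_range`, `zoom_range` and flip options:

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and rotate an H x W x C image array.

    A simplified sketch: real pipelines also apply shifts,
    zooms and small-angle rotations.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]                              # horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))      # 0/90/180/270 degrees
    return img

rng = np.random.default_rng(42)
sample = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
augmented = augment(sample, rng)
```

Because the input is square, every rotation preserves the 256 × 256 × 3 shape expected by the network.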
 
Computational convolutional neural network (CNN) details
 
The computational CNN model with a sequential architecture is used to classify the images. Initialised with pre-trained weights, this model has the potential to work well with large datasets.
 
Transfer learning
 
This computational method applies transfer learning, the process of employing a pre-trained CNN model as a feature extractor. During this step, only the weights of the subsequent layers are modified for the poultry disease dataset. The weights of the first few layers remain unaltered. By doing this, the information acquired from a huge dataset may be used to train the model in an efficient manner. The detailed procedure of this CNN model approach is shown in Fig 3.

Fig 3: Detailed procedure of computational convolutional neural network model for disease prediction.


       
The model setup begins by defining parameters such as an image size of 256 for the input dimensions and a batch size of 32 for the number of images processed in each training step. The training process runs for 25 epochs, i.e., 25 full passes over the dataset.
       
The Sequential model architecture alternates Conv2D layers for feature extraction with MaxPooling2D layers for spatial dimension reduction (Fig 4). The network contains five convolutional layers (conv2d_1 through conv2d_5), each paired with a max-pooling layer (max_pooling2d_1 through max_pooling2d_5). A Flatten layer converts the multi-dimensional feature maps into a one-dimensional array, preparing them for the Dense layers, which handle classification and produce the output predictions. A thorough analysis of the model reveals 2,124,996 trainable parameters, the weights and biases adjusted during training. The careful design and precise parameterization (Table 1) highlight the model’s capacity to efficiently interpret and categorize images of poultry disease.
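The trainable-parameter count of such a stack can be checked by hand: each Conv2D layer contributes (kh × kw × c_in + 1) × filters parameters and each Dense layer (n_in + 1) × n_out. A small sketch (the 3 × 3 kernel and the filter counts below are illustrative assumptions, not the paper’s exact configuration):

```python
def conv2d_params(kernel, in_ch, filters):
    """Weights + biases of a Conv2D layer: (kh*kw*c_in + 1) * filters."""
    kh, kw = kernel
    return (kh * kw * in_ch + 1) * filters

def dense_params(n_in, n_out):
    """Weights + biases of a Dense layer: (n_in + 1) * n_out."""
    return (n_in + 1) * n_out

# Example: familiar Keras counts for 3x3 convolutions.
first = conv2d_params((3, 3), 3, 32)    # RGB input -> 32 filters: 896
second = conv2d_params((3, 3), 32, 64)  # 32 -> 64 filters: 18,496
```

Summing such terms over the five convolutional layers and the Dense layers of the actual configuration yields the 2,124,996 trainable parameters reported in Table 1.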

Fig 4: Convolutional neural network architecture.



Table 1: List of hyperparameters used for computation.


 
Performance parameters
 
Several measures, including F1-score, Accuracy, Recall and Precision, are used to assess the model’s performance. These metrics are computed from the numbers of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). Accuracy is the ratio of correctly classified samples to all samples. Precision is the ratio of correctly predicted positive outcomes to all positive predictions. Recall quantifies the proportion of actual positive cases that are correctly predicted. The F1-score balances Precision and Recall into a single statistic that summarizes model performance.

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)
The model’s ability depends on many variables, including the features of the dataset, the selected hyperparameters and the details of the classification work. To get the best outcomes, experimentation and fine-tuning with various designs and settings can be required.
       
The model’s performance metrics after the 25th training period were as follows: The loss value, representing the average loss computed over the entire set of training samples, was 0.1582 (Fig 5). Better performance was indicated by a lower loss value. The model accurately identified roughly 94.19% of the training data samples, as shown by the accuracy of 0.9419. Regarding validation, the accuracy was 0.9405 and the loss value was 0.1797. These metrics showed that the model had good performance with reasonably low loss values and excellent accuracy on both the training and unseen validation data.

Fig 5: Model training and validation accuracy and loss curves.


       
Fig 6 displayed a set of pictures depicting predicted and actual diseases. This provided information about the trained model’s efficacy and degree of confidence. The predictions shown in the images reflected accuracy and certainty in their values. This visual depiction made it easier to determine the model’s effectiveness in identifying diseases.

Fig 6: Sample predictions of the model.


       
The model’s performance in identifying various categories was shown visually in the confusion matrix (Fig 7). With only one incorrect classification out of 220 cases, the model demonstrated excellent accuracy in detecting coccidiosis. However, it misclassified 15 Healthy samples, indicating lower accuracy for this category. Newcastle disease showed moderate performance, with multiple misclassifications across different categories. In contrast, the model exhibited very low misclassification rates and performed exceptionally well in recognizing Salmonella. Overall, while the model accurately classified coccidiosis and Salmonella, it struggled more with distinguishing Newcastle disease and Healthy samples.
       
The parameters for model evaluation are summarized in Table 2. The precision for coccidiosis was high at 97.77%, suggesting a low rate of false positives. Recall, representing the model’s ability to detect true positives, was also strong at 99.55%, indicating that most cases of coccidiosis were correctly identified. The F1-score was similarly high at 98.65%, reflecting well-balanced performance. The support, representing the total number of instances in each class, was 220, showing that the dataset was reasonably balanced.

Table 2: Classification matrices for all classes.


       
Conversely, for healthy samples, the recall was slightly lower at 92.79%, indicating some false negatives, but the precision was high at 94.50%, showing accurate positive predictions; the resulting F1-score was 93.64%. The support for healthy samples was 222, suggesting that this class was well represented. The precision for Newcastle disease was much lower at 81.82%, indicating a higher rate of false positives, and the recall was poor at 32.14%, reflecting a high proportion of false negatives. The disparity between precision and recall is evident in the comparatively low F1-score of 46.15%. The support for Newcastle disease was only 28, showing far fewer cases in this class.
       
Salmonella had a high recall of 98.29%, indicating that true positives were well captured, and a high precision of 91.63%, suggesting few false positives; the F1-score stood at 94.85%, demonstrating strong overall performance. The support for Salmonella was 234, suggesting a well-balanced dataset for this class. The model’s overall accuracy across all classes was 94.32%, demonstrating its ability to categorize a large portion of the dataset correctly. The macro average showed a precision of 91.43%, recall of 80.69% and F1-score of 83.32%, while the weighted average had a precision of 94.06%, recall of 94.32% and F1-score of 93.72%, illustrating the model’s performance when class imbalance is taken into account.
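These class-level figures follow directly from the confusion-matrix counts. As a worked check for coccidiosis (219 of 220 true cases identified; roughly 5 false positives is an assumption inferred from the reported precision, since the exact count is not tabulated):

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Coccidiosis: 219/220 true cases detected; ~5 false positives
# (inferred from the reported 97.77% precision).
p = precision(219, 5)   # ~0.9777
r = recall(219, 1)      # ~0.9955
score = f1(p, r)        # ~0.9865
```

The computed values reproduce the 97.77% precision, 99.55% recall and 98.65% F1-score reported in Table 2 for this class.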
       
A comparative table (Table 3) presents the findings of this study alongside previous research, highlighting the accuracy levels of different models used for poultry disease detection. Wang et al., (2019) employed a DCNN-based approach to identify digestive disorders in broiler chickens using Faster R-CNN and YOLO-V3. Their study achieved a recall rate of 99.1% and a mean average precision (mAP) of 93.3% with Faster R-CNN, while YOLO-V3 attained a recall rate of 88.7% and an mAP of 84.3%. Similarly, Mbelwa et al., (2021) developed a CNN-based model to classify chicken feces into three disease categories, with the XceptionNet model achieving 94% accuracy, slightly outperforming a fully trained CNN model with 93.67% accuracy. Okinda et al., (2019) used a machine vision system integrating video surveillance and depth cameras to track movement and posture-related features for disease prediction. Their SVM-based model achieved accuracies of 97.5% and 97.8% when incorporating all feature variables.

Table 3: Comparison of presented work with literature.


       
In comparison, the Sequential CNN model in this study attained an accuracy of 94.32%, which is competitive with the reported CNN-based approaches. However, unlike studies that incorporated additional features such as movement patterns (Okinda et al., 2019) or optimized anchor boxes (Wang et al., 2019), our approach relies solely on fecal images. While this simplifies implementation and reduces the need for complex hardware setups, it also introduces limitations, as fecal color alone is not always a reliable disease indicator. The results indicate that model selection and feature engineering significantly impact classification accuracy. Approaches integrating multiple features, such as movement analysis or anchor box optimization, have demonstrated superior performance.
       
However, there are limitations to the approach. Fecal color as a diagnostic indicator is influenced by various factors, such as diet, stress, lighting conditions and the presence of multiple diseases that cause similar color changes. Moreover, mixed droppings from group-reared poultry make it difficult to attribute specific colors to individual birds. As such, fecal color alone is not a definitive diagnostic tool and should be viewed as a complementary method. Future work will focus on integrating additional parameters, such as clinical symptoms, behavioral observations and microbiological testing, to improve the system’s robustness and reliability. Additionally, expanding the dataset and testing the model in more diverse poultry environments will help enhance its generalization capabilities.
The proposed technique aimed to address challenges associated with human error in external examinations and the complexity of laboratory testing in poultry disease diagnosis. It was designed to assist in identifying common poultry diseases, such as Newcastle Disease, Coccidiosis and Salmonella, using images of chicken feces. While this method may support farmers and veterinarians in disease detection, it should be considered a complementary tool rather than a replacement for traditional diagnostic methods, including clinical signs, post-mortem analysis and laboratory tests. Future research should focus on expanding the dataset by incorporating more diseases and integrating additional parameters, such as clinical symptoms and microbiological testing, to enhance accuracy and reliability.
Funding details
 
This research was funded by a 2024 Research Grant from Sangmyung University (2024-A000-0089).
 
Data availability
 
The data analysed/generated in the present study will be made available by the corresponding author upon reasonable request.
 
Use of artificial intelligence
 
Not applicable.
 
Declarations
 
Author declares that all works are original and this manuscript has not been published in any other journal.
Author declares that they have no conflict of interest.

  1. AlZubi, A.A. (2023). Artificial intelligence and its application in the prediction and diagnosis of animal diseases: A review. Indian Journal of Animal Research. 57(10): 1265-1271. doi: 10.18805/IJAR.BF-1684.

  2. FAO. (2022). World Food and Agriculture-Statistical Yearbook 2022. Rome. https://doi.org/10.4060/cc2211en.

  3. Hai, N.T. and Duong, N.T. (2024). An improved environmental management model for assuring energy and economic prosperity. Acta Innovations. 52: 9-18. https://doi.org/10.62441/ActaInnovations.52.2.

  4. Junaidi, A., Lasama, J., Adhinata, F.D. and Iskandar, A.R. (2021). Image Classification for Egg Incubator using Transfer Learning of VGG16 and VGG19. In: IEEE International Conference on Communication, Networks and Satellite (COMNETSAT). Purwokerto, Indonesia. (pp. 324-328). https://doi.org/10.1109/COMNETSAT53002.2021.9530826.

  5. Kim, S.Y. and AlZubi, A.A. (2024). Blockchain and artificial intelligence for ensuring the authenticity of organic legume products in supply chains. Legume Research. 47(7): 1144-1150. doi: 10.18805/LRF-786.

  6. Koike, T., Yamamoto, S., Furui, T., Miyazaki, C., Ishikawa, H. and Morishige, K.I. (2023). Evaluation of the relationship between equol production and the risk of locomotive syndrome in very elderly women. International Journal of Probiotics and Prebiotics. 18(1): 7-13. https://doi.org/10.37290/ijpp2641-7197.18:7-13.

  7. Machuve, D., Nwankwo, E., Mduma, N., Mbelwa, H., Maguo, E. and Munisi, C. (2021). Machine learning dataset for poultry diseases diagnostics (Version 2) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4628934.

  8. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujrat, India. International Journal of Environmental Sciences. 9(1): 1-17. https://www.theaspd.com/resources/v9-1-1-Nilesh%20N.%20Maltare.pdf.

  9. Mbelwa, H., Mbelwa, J. and Machuve, D. (2021). Deep convolutional neural network for chicken diseases detection. International Journal of Advanced Computer Science and Applications. 12(2). https://doi.org/10.14569/IJACSA.2021.0120295.

  10. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. 58(10): 1793-1798. doi: 10.18805/IJAR.BF-1742.

  11. Nakrosis, A., Paulauskaite-Taraseviciene, A., Raudonis, V., Narusis, I., Gruzauskas, V., Gruzauskas, R. and Lagzdinyte-Budnike, I. (2023). Towards early poultry health prediction through non-invasive and computer vision-based dropping classification. Animals. 13: 3041. https://doi.org/10.3390/ani13193041.

  12. Nyalala, I., Okinda, C., Korohou, T., Wang, J., Achieng, T., Wamalwa, P., Mang, T. and Shen, M. (2020). A review on computer vision systems in monitoring of poultry: A welfare perspective. Artificial Intelligence in Agriculture. 4: 184-208. https://doi.org/10.1016/j.aiia.2020.09.002.

  13. National Action Plan (NAP) for Egg and Poultry-2022 For Doubling Farmers’ Income by 2022. Department of Animal Husbandry, Dairying and Fisheries, Ministry of Agriculture and Farmers Welfare, Government of India.

  14. Okinda, C., Lu, M., Liu, L., Nyalala, I., Muneri, C., Wang, J., Zhang, H. and Shen, M. (2019). A machine vision system for early detection and prediction of sick birds: A broiler chicken model. Biosystems Engineering. 188: 229-242. https://doi.org/10.1016/j.biosystemseng.2019.09.015.

  15. Semara, I.M.T., Sunarta, I.N., Antara, M., Arida, I.N.S. and Wirawan, P.E. (2024). Tourism sites and environmental reservation. International Journal of Environmental Sciences. 10(1): 44-55. https://www.theaspd.com/resources/4.%20Tourism%20Sites%20and%20Environmental%20Reservation%20objects.pdf.

  16. Srivastava, K. and Pandey, P. (2023). Deep learning-based classification of poultry disease. International Journal of Automation and Smart Technology. 13(1): 24-39. https://doi.org/10.5875/ausmt.v13i1.2439.

  17. Surve, J., Kanwade, B. and Patil, S. (2023). Enhancing chicken disease detection and classification using transfer learning and convolutional neural networks. Journal of Southwest Jiaotong University. 58(1): 1462-1473.

  18. Vrindavanam, J., Kumar, P., Kamath, G., Chandrashekhar, N. and Patil, G. (2024). Poultry disease identification in faecal images using vision transformer. Medicon Agriculture and Environmental Sciences. 6(1): 5-15. https://doi.org/10.55162/MCAES.06.150.

  19. Wang, J., Shen, M., Liu, L., Xu, Y. and Okinda, C. (2019). Recognition and classification of broiler droppings based on deep convolutional neural network. Journal of Sensors. pp. 1-10. https://doi.org/10.1155/2019/3823515.

  20. Zhao, T., Jiang, X., Yu, L., Zhang, W., Zhan, H., Wang, Q., Zhang, X. and Xu, F. (2024). Comparison of nutritional status between migrant and nonmigrant school-age children in Kunming, Yunnan Province, China. Current Topics in Nutraceutical Research. 22(2): 617-623. https://doi.org/10.37290/ctnr2641-452X.22:617-623.
Published in: Indian Journal of Animal Research