Indian Journal of Animal Research

  • Chief Editor: M. R. Saseendranath

  • Print ISSN 0367-6722

  • Online ISSN 0976-0555

  • NAAS Rating 6.40

  • SJR 0.233, CiteScore 0.606

  • Impact Factor 0.4 (2024)

Frequency:
Monthly (January, February, March, April, May, June, July, August, September, October, November and December)
Indexing Services:
Science Citation Index Expanded, BIOSIS Preview, ISI Citation Index, Biological Abstracts, Scopus, AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus

Sequential Convolutional Neural Network Techniques to Improve Detection of Lesions in Dogs

Ok Hue Cho1,*
1Department of Animation, Sangmyung University, 37, Hongjimun 2-gil, Jongno-gu, Seoul, Republic of Korea.

Background: Animal health and well-being depend on the early identification and treatment of diseases, especially skin conditions. Machine learning (ML) techniques, such as convolutional neural networks (CNNs), offer promising tools for automating disease detection based on visual symptoms.

Methods: This study employed a sequential CNN algorithm for skin lesion detection using digital images. The dog lesion dataset, consisting of 551 images categorized into five lesion classes (bumps, hair loss, hot spots, rashes and sores), was utilized for training and testing of the CNN model. The outcomes of the proposed model after training and testing were obtained in terms of performance metrics such as recall, accuracy, precision and F1-score. The adaptability and effectiveness of the model in classifying dog skin lesions were assessed by the overall accuracy of positive prediction.

Result: The developed model achieved a remarkable overall accuracy rate of 84.38% in classifying lesion classes. Its precision and recall results validate its dependability, underscoring its potential to improve disease detection and treatment in dogs, ultimately increasing their well-being.

An essential physical barrier and an accurate indicator of general health, the skin is the largest organ in an animal’s body. Dogs are exposed to a variety of skin conditions that impact their skin and hair (Griffin et al., 2001; Bourguignon et al., 2013). These ailments might be simple, short-term issues or long-term, complex disorders, each needing a unique diagnosis and course of care. Skin lesions in dogs can arise from various causes, including environmental factors, genetic predispositions, infections, allergies, or autoimmune responses. The skin, which acts as a barrier to protect the body from dangerous chemicals, microbiological infections and environmental weather conditions, is a vital organ (Candi et al., 2005; Elias and Steinhoff, 2008). Various skin diseases in dogs can emerge with symptoms such as itching, redness, swelling, hair loss and other lesions. Dogs can develop a variety of skin conditions, including bacterial, fungal, parasitic and allergy-related skin diseases. Techniques including skin scraping, acetate-tape preparation, impression smears and fine-needle aspiration cytology are frequently employed to diagnose dermatological issues. Histological investigation is used for skin infections that do not respond to conventional treatment and is typically used to confirm the diagnosis of cutaneous neoplasms (Mandhare et al., 2022). The present diagnostic method for skin diseases depends on physician-assisted biopsies, which might miss early indicators of the condition. In response to these obstacles, a hybrid approach that integrates deep learning techniques has the potential to yield timely and precise results, hence decreasing the necessity for human assessment. Artificial intelligence (AI) can offer useful tools to veterinary medicine to help with persistent disease management issues and diagnosis challenges (Min et al., 2024; Maltare et al., 2023).
One kind of neural network that is commonly used for medical image processing is the convolutional neural network (CNN) (Yu et al., 2021; Cho, 2024). These networks are capable of segmenting images, detecting objects and classifying images. CNN models have been trained to recognize and categorize a wide range of objects using large datasets containing hundreds to millions of images. CNNs have emerged as a promising tool in veterinary medicine, providing novel insights into and treatments for skin problems in dogs. Utilizing the computational capability of CNNs, researchers hope to better understand dog skin conditions, providing veterinarians and pet owners with an effective tool for early identification and customized treatments.
       
Skin diseases make up an estimated 21% of veterinary surgeons’ caseload in general small animal practice, making them a frequently seen ailment in clinical practice (Hill, 2006). Many pet owners consider their pet’s skin and coat status to be a sign of its overall health, so a decline in these conditions can be cause for concern. Skin diseases are typically caused by allergies to parasites (especially fleas), environmental allergens and adverse food reactions (AFRs). Hair loss (alopecia) and the presence of bumps on a dog’s skin can be indicative of a range of dermatological issues, including allergies, infections, inflammatory responses, or even more serious underlying conditions.
       
Deep Learning (DL) is the branch of Artificial Intelligence (AI) in which a computer program analyzes raw data and learns the features needed to identify hidden patterns in it. Recently, there has been a noticeable advancement in the ability of deep learning-based computation to analyze various types of data, especially images and natural language. Essential image processing techniques, such as morphological procedures for skin detection, are also used to classify skin disorders. Several research works have highlighted the versatility of CNNs in accurately identifying and classifying dermatological conditions based on visual cues. According to Liu et al. (2020), a deep learning algorithm built on the Inception-v4 architecture took into account all major causes of skin diseases and 45 metadata variables about the patient’s medical history, concatenated one to six images of skin conditions and assessed the metadata and image-processing output together. Arifin et al. (2012) created color gradients, classified the illnesses using feedforward backpropagation artificial neural networks (ANNs) and used K-means clustering to identify the spread of diseases. Yasir et al. (2014) suggested a technique for extracting disease-related characteristics and recognizing colored skin photos using a convolutional neural network. Srinivasu et al. (2021) introduced a deep learning system that makes use of long short-term memory (LSTM) and MobileNet V2; the MobileNet V2-based model demonstrated improved accuracy and efficiency for small processing devices. To classify precancerous and cancerous tumors of the uterine cervix, an automated framework for colposcopy image processing was described (Buiu et al., 2020).
       
With the use of CNN-based models, medical imaging techniques like computed tomography and magnetic resonance imaging have been effectively used to identify and diagnose disorders (Yadav and Jadhav, 2019). In one study, a multispectral imaging device was used to gather 95 images from diseased or non-diseased dog skin; the original images were resized, rotated and repositioned and data augmentation was used to expand the data size by a factor of 1000. Recent CNN architectures have grown deeper and wider to obtain greater accuracies (Hsu et al., 2020; Narin et al., 2021). Haar et al. (2023) demonstrated the efficacy of CNNs as feature extractors for object recognition, even though they have been utilized extensively for photographic images. Xie et al. (2020) suggested a feature block to extract channel-wise dimensions and boundary information, which enhanced accurate boundary and geographic information extraction. Based on a lightweight CNN called MobileNet, a classification model was created in another investigation (Sae-Lim et al., 2019); the modified MobileNet performed better than regular MobileNet, as indicated by the F1-score, and the suggested model comprised thirteen depthwise convolution layers. According to Yu et al. (2016), the suggested system employed contrast enhancement and fully convolutional residual networks (FCRN) for segmentation, and CNN and FCRN for classification. In further work, characteristics including color, form and texture were first retrieved from photos using data augmentation, enhancement and segmentation (Aijaz et al., 2022); two deep learning algorithms, CNN and Long Short-Term Memory (LSTM), demonstrated classification accuracies of 84.2% and 72.3%, respectively, for different forms of psoriasis. Han et al. (2018) classified clinical photos of 12 skin disorders using CNN-based ResNet-152.
On the ImageNet test set, the error rate for the residual network was 3.7%. Another study identified three categories of diseases: dermatitis, psoriasis and herpes (Wei et al., 2018). Their segmentation method was based on a grey-level co-occurrence matrix (GLCM), and the various kinds of skin disorders were identified and recognized by vertical image segmentation analysis. Support vector machines (SVMs) were used to classify the diseases. Ballerini et al. (2012) used an algorithm for the categorization of non-melanoma skin lesions based on a hierarchical K-nearest-neighbor (K-NN) classifier.
       
In this work, an ML computational method is used to detect lesions in dogs. The dataset consists of five classes: bumps, hair loss, hot spots, rashes and sores. The sequential CNN was trained on the preprocessed data to detect the lesions. Preprocessing increases the quality of the data and improves the performance of the proposed model. The results are evaluated using classification metrics and a confusion matrix.
The Sequential CNN model is developed using a Jupyter Notebook in the Anaconda environment. The training process is performed on a high-performance PC with Python version 3.11, powered by a TPU-v3:8 with 16 GB of RAM. TensorFlow, an open-source deep learning framework, is used for training and deploying models. The Keras API is used to build and manage neural networks.
 
Dataset
 
The dog lesions dataset consists of 551 images classified into five categories (Table 1). This dataset is a valuable resource for training and evaluating machine learning models, specifically convolutional neural networks (CNNs), that are designed to diagnose dogs’ health conditions using visual symptoms. The dataset provides detailed and diverse training data, with each image representing a unique instance of a dog’s condition. Researchers and veterinarians can use this dataset to develop ML models that can accurately classify and diagnose different dog lesions. This is useful for veterinary care and improved health outcomes for dogs.

Table 1: Dataset description.


 
Data preprocessing
 
Data preprocessing for Convolutional Neural Network (CNN) input comprises multiple essential steps to ensure optimal model performance. First, the images are uniformly resized to a dimension of 256x256 to maintain consistency throughout the dataset and to be compatible with the CNN architecture. After resizing, normalization is performed to adjust the pixel values so that they fall within a standardized range, usually 0 to 1. This helps to stabilize the training process and improve the convergence of the model. In addition, data augmentation techniques, such as rotation, flipping and cropping, are used to create a variety of training samples, which helps to reduce overfitting and improves the ability to apply the learned information to new data. Researchers also have the option to incorporate additional information, such as labels or metadata, to enhance the input and provide context for better comprehension and performance of the model. These preprocessing steps have a significant impact on preparing input data for CNN models: they improve the learning and inference processes and enhance the model’s ability to operate in different applications.
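A minimal sketch of the normalization and flip-augmentation steps described above, on a toy grayscale image (the helper names are illustrative, not from the study’s actual pipeline):

```python
def normalize(image):
    """Scale 8-bit pixel values from [0, 255] into [0, 1]."""
    return [[pixel / 255.0 for pixel in row] for row in image]

def flip_horizontal(image):
    """Simple augmentation: mirror each row of the image."""
    return [row[::-1] for row in image]

# A tiny 2x3 grayscale "image" of raw pixel intensities.
img = [[0, 128, 255],
       [64, 32, 16]]

norm = normalize(img)       # values now lie in [0, 1]
aug = flip_horizontal(img)  # a mirrored training sample
```

In practice, frameworks such as Keras provide these operations as built-in layers or generators, but the effect on each image is the same.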
 
Sequential convolution neural network
 
A convolutional neural network (CNN) is an advanced computer program specifically developed to analyze and process visual data, such as images. Unlike less complex neural networks, convolutional neural networks (CNNs) possess additional layers that enhance their ability to efficiently process images. At its core, a CNN consists of several specialized layers.
 
Input layer
 
The input layer of a neural network receives the unprocessed data, usually in the format of a two-dimensional array that represents an image. If the image is in grayscale, each element in the array corresponds to the brightness level of a single pixel. A color image consists of three arrays that represent the red, green and blue channels.
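In terms of plain arrays, the difference between the two input formats looks like this (toy 2x2 example, values already normalized):

```python
# A grayscale input is a single H x W array of brightness values,
# while a color input carries three H x W arrays (red, green, blue).

grayscale = [[0.1, 0.9],
             [0.5, 0.3]]            # shape: (2, 2)

color = [[[0.1, 0.2], [0.9, 0.8]],  # red channel
         [[0.0, 0.1], [0.5, 0.4]],  # green channel
         [[0.3, 0.3], [0.2, 0.1]]]  # blue channel -> shape: (3, 2, 2)

assert len(grayscale) == 2 and len(grayscale[0]) == 2
assert len(color) == 3              # one array per RGB channel
```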
 
Convolution layers
 
In this layer, the input image undergoes convolution with a collection of adaptable filters (also known as kernels). This operation involves performing an element-wise multiplication between the filter and the input image patch and then summing up the results. Mathematically, the output feature map O_{i,j} at position (i, j) is computed as:

O_{i,j} = Σ_{m=0}^{F-1} Σ_{n=0}^{F-1} I(i+m, j+n) · K_{m,n} + b

Where,
I(i+m, j+n) = Pixel value at position (i+m, j+n) in the input image.
K_{m,n} = Filter kernel coefficient at position (m, n).
F = Size of the filter and b is the bias term.
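This computation can be checked with a minimal pure-Python sketch (illustrative only; the study itself relies on TensorFlow’s built-in convolution layers):

```python
def conv2d(image, kernel, bias=0.0):
    """Valid 2D convolution (cross-correlation form):
    O[i][j] = sum_m sum_n I[i+m][j+n] * K[m][n] + b."""
    F = len(kernel)                       # filter size F x F
    H, W = len(image), len(image[0])
    out = []
    for i in range(H - F + 1):
        row = []
        for j in range(W - F + 1):
            s = bias
            for m in range(F):
                for n in range(F):
                    s += image[i + m][j + n] * kernel[m][n]
            row.append(s)
        out.append(row)
    return out

# A 3x3 image convolved with a 2x2 kernel gives a 2x2 feature map.
I = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
K = [[1, 0],
     [0, 1]]
print(conv2d(I, K))  # [[6, 8], [12, 14]]
```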
 
Pooling layer
 
After the convolution process, the feature maps undergo a pooling layer to decrease their spatial dimensions. One frequently used pooling operation is max pooling, which keeps the highest value in a specific region (such as a 2x2 window). The output is computed as:

O'_{i,j} = max_{m,n ∈ {0,1}} O_{2i+m, 2j+n}

Where,
O_{2i+m, 2j+n} = Value in row 2i+m and column 2j+n of the original feature map.
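A corresponding sketch of 2x2 max pooling (illustrative, not the study’s implementation):

```python
def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2:
    O'[i][j] = max over fmap[2i+m][2j+n] for m, n in {0, 1}."""
    H, W = len(fmap), len(fmap[0])
    return [[max(fmap[2*i + m][2*j + n] for m in (0, 1) for n in (0, 1))
             for j in range(W // 2)]
            for i in range(H // 2)]

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 0],
        [7, 2, 9, 8],
        [3, 1, 4, 6]]
print(max_pool_2x2(fmap))  # [[6, 4], [7, 9]]
```

Each 2x2 block of the input contributes one value, so the 4x4 feature map shrinks to 2x2.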
 
Dense layer
 
The flattened output from the previous layers is linked to neurons in the dense layer, also known as the fully connected layer. Every neuron in a given layer receives input from every neuron in the preceding layer. The dense layer computes its output by taking the weighted sum of the inputs and applying an activation function, usually ReLU (Rectified Linear Unit) or sigmoid. The output y_k of neuron k in the dense layer is given by:

y_k = f(Σ_i w_{i,k} x_i + b_k)

Where,
x_i = Input from the previous layer.
w_{i,k} = Weight connecting neuron i in the previous layer to neuron k in the dense layer.
b_k = Bias of neuron k.
f = ReLU activation function.
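A single dense-layer neuron, as defined above, in a minimal sketch (weights and inputs are hypothetical):

```python
def relu(z):
    """Rectified Linear Unit: passes positives, clips negatives to 0."""
    return max(0.0, z)

def dense_neuron(x, w, b):
    """y = f(sum_i w_i * x_i + b) with f = ReLU."""
    return relu(sum(xi * wi for xi, wi in zip(x, w)) + b)

x = [2.0, 4.0, -1.0]
print(dense_neuron(x, [0.5, 0.25, 1.0], 0.0))   # 1.0
print(dense_neuron(x, [0.5, 0.25, 1.0], -3.0))  # 0.0 (ReLU clips the negative sum)
```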
 
Output layer
 
It is usual practice to utilize a sigmoid activation function for binary classification and softmax activation for multi-class classification. The workflow for the classification of dog lesion detection using the CNN model is depicted in Fig 1.
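For the multi-class case used here, softmax turns the output layer’s raw scores into class probabilities; a minimal sketch (the five scores below are hypothetical):

```python
import math

def softmax(logits):
    """Convert raw output-layer scores into class probabilities."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for the five lesion classes.
probs = softmax([2.0, 1.0, 0.5, 3.0, 0.1])
print(round(sum(probs), 6))      # 1.0: probabilities sum to one
print(probs.index(max(probs)))   # 3: index of the predicted class
```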

Fig 1: Workflow for the classification of dogs’ disease using the CNN model.


 
Algorithm of the proposed CNN model
 
A neural network architecture was created in the study utilizing the Keras Sequential API. This architecture classifies images of different categories using convolutional layers and fully linked layers (Fig 2). The architecture includes multiple convolutional layers.

Fig 2: Feature extraction and classification of instances using the CNN model.


       
The first layer, named conv2d, starts with 64 filters of size 3x3 and utilizes the Rectified Linear Unit (ReLU) activation function. It operates on input data with dimensions of 256x256 pixels and three color channels. Following the initial layer, subsequent convolutional layers, numbered 2 through 6 (conv2d_1 to conv2d_5), progressively add more filters and employ ReLU activation. Max pooling layers (max_pooling2d_1 to max_pooling2d_5) are inserted between these convolutional layers to gradually reduce the size of the feature maps. The 2D output from the convolutional and max pooling layers is then flattened into a 1D array with 2048 items using a Flatten layer. Subsequently, two fully connected layers, referred to as Dense Layer 1 and Dense Layer 2 (dense and dense_1), are employed. These layers connect all neurons from the preceding layer. Dense Layer 1 comprises 64 neurons with ReLU activation. Dense Layer 2 serves as the output layer, with one neuron per lesion class, and utilizes the softmax activation function to derive class probabilities. To summarise, the architecture is designed to perform well in tasks involving image classification. It achieves this by utilizing convolutional layers for extracting features from the images and fully connected layers for making predictions.
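Tracing the spatial dimensions through the layer stack described above shows how the 256x256 input shrinks to the 2048-unit flattened vector. This sketch assumes unpadded 3x3 convolutions, 2x2 stride-2 pooling, and 128 filters in the final convolutional layer; the filter count is an assumption chosen so that the flattened size matches the reported 2048 units:

```python
def conv_out(size, kernel=3):
    return size - kernel + 1     # valid (unpadded) convolution

def pool_out(size):
    return size // 2             # 2x2 max pooling, stride 2

size = 256                       # input images are 256x256
for layer in range(6):           # conv2d ... conv2d_5
    size = conv_out(size)
    if layer < 5:                # max_pooling2d_1 ... max_pooling2d_5
        size = pool_out(size)

flattened = size * size * 128    # assumed 128 filters in the last conv layer
print(size, flattened)           # 4 2048
```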
 
Performance matrices
 
Performance metrics are important parameters for evaluating the efficacy of Convolutional Neural Networks (CNNs) in applications such as image categorization. These metrics offer essential insight into the model’s capacity to accurately categorize images and make predictions. Accuracy, which quantifies the ratio of correctly classified cases, is an essential metric for evaluating the overall performance of a model. Precision and recall measure the model’s capacity to precisely predict positive outcomes and to capture all positive instances, respectively, providing valuable information on different elements of performance. The F1-score is a composite metric that takes into account both precision and recall, providing a balanced evaluation of performance. Confusion matrices provide a comprehensive view of model performance by offering a detailed breakdown of predictions in comparison to true labels. In addition, ROC curves and AUC offer insights into the balance between sensitivity and specificity at various threshold values. Together, these performance measures allow researchers and practitioners to evaluate, compare and enhance CNN models for different image analysis tasks.
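The precision, recall, F1-score and accuracy described above can all be derived directly from a confusion matrix; a minimal sketch with a hypothetical two-class matrix (rows = actual labels, columns = predicted labels), not the study’s actual matrix:

```python
def metrics(cm, k):
    """Precision, recall and F1-score for class k of a confusion matrix."""
    tp = cm[k][k]                                          # true positives
    fp = sum(cm[r][k] for r in range(len(cm)) if r != k)   # false positives
    fn = sum(cm[k][c] for c in range(len(cm)) if c != k)   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

cm = [[8, 2],
      [1, 9]]
p, r, f1 = metrics(cm, 0)
accuracy = (cm[0][0] + cm[1][1]) / sum(sum(row) for row in cm)
print(r, accuracy)   # 0.8 0.85
```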
In this research work, a sequential CNN architecture was developed to identify lesions in dogs by analyzing a dataset containing images of lesioned dogs. After 75 training cycles, the model finished with a training accuracy of 89.60% and a training loss of 0.3847. The model appears to have performed rather well on the training set, based on the approximate match between the model’s predictions and actual values. However, the validation accuracy was 87.50% and the validation loss was 0.4339. The model does quite well on the training set but less well on the validation set. This disparity indicates the need for further design work or parameter adjustments to increase the model’s capacity to generalise to new data (Fig 3).

Fig 3: Loss and accuracy performance over epochs.


       
Further, the predictions and corresponding confidence scores for multiple instances are presented in Fig 4. The model consistently performs well across different classes. However, slight variations in accuracy and confidence scores offer useful insights into its ability to distinguish between each class.

Fig 4: Predictions of classes.


       
The confusion matrix presents an analysis of the accuracy of the classification of the model for five specific categories: bumps, hair loss, hot spots, rashes and sores. Each row in the matrix corresponds to the actual class labels, while each column represents the predicted class labels (Fig 5).

Fig 5: Confusion matrix: Actual and predicted labels.


       
The matrix showed that, with true positive counts of 32 and 16, respectively, the model had good classification accuracy for hot spots and rashes. There were no misclassifications of rashes; all cases were handled appropriately. Hot spots were correctly detected most of the time, but were sometimes mistaken for other conditions such as rashes and hair loss. Bumps and sores also showed notable performance, with 5 and 22 true positives, respectively. However, the model sometimes mistook certain sores for bumps or rashes and misclassified cases of bumps as hot spots or sores. These findings show the general efficacy of the model but also point to some shortcomings, namely in differentiating between similar lesion types. Refinement techniques such as additional training data and model tuning may help the model minimise these misclassifications and increase overall diagnostic accuracy.
       
The effectiveness of the model can be understood through the output metrics that distinguish between the various types of skin lesions in dogs. In the bump category, the precision of the model was 0.7143, indicating that about 71.43% of the cases predicted to be bumps actually were bumps; the recall, however, was 0.5556, indicating that 55.56% of the cases that were actually bumps were identified. An F1-score of 0.6250 reflects this combination of precision and recall. Hair loss had a higher recall of 0.6667 and a precision of 0.8571, with an F1-score of 0.7500. The model performed particularly well for hot spots, with a precision of 0.8889 and a recall of 0.8000, producing an F1-score of 0.8421 and indicating an outstanding capacity to properly identify this kind of lesion. The model achieved the highest recall of 1.0000 for rashes, accurately identifying every instance of rashes, with a precision of 0.7619 and an F1-score of 0.8649. Sores had the highest precision (1.0000), with a recall of 0.8462 and an F1-score of 0.9167. The model’s macro averages for precision, recall and F1-score were 0.8444, 0.7737 and 0.7997, respectively. The total accuracy of the model was 0.8438. As can be seen from the weighted averages of 0.8573 for precision, 0.8438 for recall and 0.8409 for the F1-score, the model performs well across the different kinds of lesions. These numbers also indicate potential areas for model improvement.
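The per-class F1-scores reported above follow from precision and recall as their harmonic mean, F1 = 2PR/(P + R); checking the sores and bumps figures as examples:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(1.0000, 0.8462), 4))  # 0.9167 (sores)
print(round(f1_score(0.7143, 0.5556), 4))  # 0.625  (bumps, reported as 0.6250)
```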
       
The performance of the algorithm is assessed using ROC curves and the accompanying AUC values, which provide a thorough understanding of the model’s classification effectiveness for the various dog skin lesions (Fig 6). The AUC of 0.9359 for hot spots suggests a high degree of accuracy in distinguishing them from other forms of lesions. With AUC values of 1.0, the model showed remarkable classification accuracy for rashes and hair loss, indicating complete differentiation between these classes and non-target lesions. With an AUC of 0.9911, sores showed good performance. An AUC of 0.95 on the ROC curve for bumps indicates good effectiveness in differentiating bumps from other lesions. With good or near-perfect AUC scores in every category, these findings highlight the model’s strong classification skills and suggest that it is quite successful in differentiating between the different types of skin lesions in dogs.

Fig 6: ROC (AUC) Curve for different lesions.


       
There are few papers published on skin lesion detection in dogs. A comparison with available literature is plotted in Fig 7. Hwang et al., (2022) developed DL models for classifying three dog skin diseases using normal and multispectral images. Single models achieved accuracies of 0.80, 0.70 and 0.82 for bacterial dermatosis, fungal infection and hypersensitivity allergic dermatosis, respectively. Consensus models, combining both image types, achieved higher accuracies of 0.89, 0.87 and 0.87 for the same conditions. The consensus approach improved accuracy and balanced performance, enhancing both sensitivity and specificity. Rathnayaka et al., (2022) presented an intelligent system for detecting skin diseases in dogs by integrating ontology-based clinical information extraction with advanced technologies. The system comprises a mobile application featuring disease identification, severity level detection, a domain-specific knowledge base and an AI-based chatbot. Utilizing Convolutional Neural Networks (CNNs) and Natural Language Processing (NLP), the application extracts features from lesion images to classify skin conditions and assess disease severity. The system achieved disease classification accuracy ranging from 77.78% to 100% across four CNN models and a high severity level identification accuracy of 99.62%. This work using the CNN model achieved an accuracy of 84.38%, indicating good performance.

Fig 7: Comparison of work with existing literature.

The study focuses on the utilization of image processing and ML methodologies in identifying dog skin lesions. The application of Convolutional Neural Networks (CNNs) in digital image processing follows a specific procedure that includes the collection of image datasets, preprocessing, representation, interpretation and detection. This involves performing specialized tasks such as channel ordering, normalization, image resizing and data augmentation. The efficacy of the created classifier is assessed by evaluating its adaptability through metrics such as recall, accuracy, precision and F1-score. The precision and recall tests assess the model’s ability to reliably predict each class within the dog skin lesion study, and the classification report evaluates the model’s efficacy in distinguishing between different lesions. The overall accuracy rate is 84.38%. All these metrics collectively verify the model’s accuracy in identifying instances within the designated classes, and this appraisal of its overall efficacy underscores its potential for practical implementation.
       
ML has the potential to predict dog lesions, but it has some limitations such as data accessibility, inconsistent or biased data and integrating data from different sources. Further research will focus on developing hybrid algorithms for analyzing complex datasets and understanding lesion dynamics. DL algorithms can extract complex characteristics from various datasets, improving predictions and lesion dynamics. Real-time surveillance systems powered by ML can identify early signs of skin lesions in dogs, allowing veterinarians and policymakers to take immediate action to prevent spread. Therefore, future research will extend ML techniques to improve understanding and management of dog skin conditions.
This research was funded by a 2022 Research Grant from Sangmyung University (2023-A000-0013).
 
Data availability statement
 
Not applicable.
 
Declarations
 
The author declares that all works are original and that this manuscript has not been published in any other journal.
The author declares that they have no conflict of interest.

  1. Aijaz, S.F., Khan, S.J., Azim, F., Shakeel, C.S. and Hassan, U. (2022). Deep learning application for effective classification of different types of psoriasis. Journal of Healthcare Engineering. 15: 7541583. doi: 10.1155/2022/7541583. 

  2. Arifin, M.S., Kibria, M.G., Firoze, A., Amini, M.A. and Yan, H. (2012).  Dermatological Disease Diagnosis using Color-skin Images. In: 2012 International Conference on Machine Learning and Cybernetics. IEEE. 5: 1675-1680.

  3. Ballerini, L., Fisher, R.B., Aldridge, B. and Rees, J. (2012). Non- melanoma Skin Lesion Classification using Colour Image Data in a Hierarchical K-NN Classifier. In: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI). IEEE. pp 358-361.

  4. Bourguignon, E., Guimarães, L.D., Ferreira, T.S. and Favarato, E.S. (2013). Dermatology in dogs and cats. Insights from Veterinary Medicine. 1: 3-34.

  5. Buiu, C., Dănăilă, V.R. and Răduță, C.N. (2020). MobileNetV2 ensemble for cervical precancerous lesions classification. Processes. 8(5): 595. https://doi.org/10.3390/pr8050595.

  6. Candi, E., Schmidt, R. and Melino, G. (2005). The cornified envelope: A model of cell death in the skin. Nature Reviews Molecular  Cell Biology. 6(4): 328-340.

  7. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. 47(4): 619-627. https://doi.org/10.18805/LRF-787.

  8. Elias, P.M. and Steinhoff, M. (2008). “Outside-to-inside” (and now back to “outside”) pathogenic mechanisms in atopic dermatitis. Journal of Investigative Dermatology. 128(5): 1067-1070.

  9. Griffin, C.E., Miller, W.H., Scott, D.W. (2001). Small Animal Dermatology  (6th ed.). W.B. Saunders Company.

  10. Haar, L.V., Elvira, T. and Ochoa, O. (2023). An analysis of explainability methods for convolutional neural networks. Engineering Applications of Artificial Intelligence. 117: 105606. https://doi.org/10.1016/j.engappai.2022.105606.

  11. Han, S.S., Kim, M.S., Lim, W., Park, G.H., Park, I. and Chang, S.E. (2018). Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. Journal of Investigative Dermatology.  138(7): 1529-1538.

  12. Hwang, S., Shin, H.K., Park, J.M., Kwon, B. and Kang, M. (2022). Classification of dog skin diseases using deep learning with images captured from multispectral imaging device. Molecular and Cellular Toxicology. 18(3): 299-309. https://doi.org/10.1007/s13273-022-00249-7.

  13. Hill, P., Lo, A., Eden, C.A.N., Huntley, S., Morey, V., Ramsey, S. et al. (2006). Survey of the prevalence, diagnosis and treatment of dermatological conditions in small animals in general practice. Veterinary Record. 158(16): 533-539.

  14. Hsu, C.C., Zhuang, Y.X. and Lee, C.Y. (2020). Deep fake image detection based on pairwise learning. Applied Sciences.  10(1): 370. https://doi.org/10.3390/app10010370.

  15. Liu, Y., Jain, A., Eng, C., Way, D.H., Lee, K., Bui, P., et al. (2020). A deep learning system for differential diagnosis of skin diseases. Nature Medicine. 26(6): 900-908.

  16. Mandhare, S.S., Kadam, D.P., Gadhave, P.D., Garud, K.V., Galdhar, C.N. and Thorat, V.D. (2022). Prevalence and clinico- pathological studies of inflammatory canine skin infections  in Mumbai. Indian Journal of Veterinary Pathology. 46(2): 116-120.

  17. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. 58(10): 1793-1798. https://doi.org/10.18805/IJAR.BF-1742.

  18. Maltare, N.N., Sharma, D. and Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the district of Banaskantha, Gujarat, India. International Journal of Environmental Sciences. 9(1): 1-17.

  19. Narin, A., Kaya, C. and Pamuk, Z. (2021). Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Analysis and Applications. 24: 1207-1220.

  20. Rathnayaka, R.M.N.A., Anuththara, K.G.S.N., Wickramasinghe, R.J.P., Gimhana, P.S., Weerasinghe, L. and Wimalaratne, G. (2022). Intelligent System for Skin Disease Detection of Dogs with Ontology-based Clinical Information Extraction. In: Proceedings of the 2022 IEEE 13th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). IEEE. pp 59-66. https://doi.org/10.1109/UEMCON54665.2022.9965696.

  21. Sae-Lim, W., Wettayaprasit, W. and Aiyarak, P. (2019). Convolutional Neural Networks using Mobile Net for Skin Lesion Classification. In: 2019 16th International Joint Conference on Computer Science and Software Engineering (JCSSE). IEEE. pp 242-247. 

  22. Srinivasu, P.N., SivaSai, J.G., Ijaz, M.F., Bhoi, A.K., Kim, W. and Kang, J.J. (2021). Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors. 21(8): 2852. https://doi.org/10.3390/s21082852.

  23. Wei, L.S., Gan, Q. and Ji, T. (2018). Skin disease recognition method based on image color and texture features.  Computational and Mathematical Methods in Medicine. 8145713. https://doi.org/10.1155/2018/8145713. 

  24. Xie, F., Yang, J., Liu, J., Jiang, Z., Zheng, Y. and Wang, Y. (2020). Skin lesion segmentation using high-resolution convolutional neural network. Computer Methods and Programs in Biomedicine. 186: 105241. doi: 10.1016/j.cmpb.2019.105241.

  25. Yadav, S.S. and Jadhav, S.M. (2019). Deep convolutional neural network based medical image classification for disease diagnosis. Journal of Big Data. 6(1): 1-18.

  26. Yasir, R., Rahman, M.A. and Ahmed, N. (2014). Dermatological Disease Detection using Image Processing and Artificial Neural Network. In: 8th International Conference on Electrical and Computer Engineering. IEEE. pp 687-690.

  27. Yu, H., Yang, L. T., Zhang, Q., Armstrong, D. and Deen, M.J. (2021). Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives.  Neurocomputing. 444: 92-110.

  28. Yu, L., Chen, H., Dou, Q., Qin, J. and Heng, P.A. (2016). Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Transactions on Medical Imaging. 36(4): 994-1004.
