Indian Journal of Animal Research

  • Chief Editor: K.M.L. Pathak

  • Print ISSN 0367-6722

  • Online ISSN 0976-0555

  • NAAS Rating 6.50

  • SJR 0.263

  • Impact Factor 0.4 (2024)

Frequency: Monthly
Indexing Services:
Science Citation Index Expanded, BIOSIS Preview, ISI Citation Index, Biological Abstracts, Scopus, AGRICOLA, Google Scholar, CrossRef, CAB Abstracting Journals, Chemical Abstracts, Indian Science Abstracts, EBSCO Indexing Services, Index Copernicus
Indian Journal of Animal Research, Volume 58, Issue 9 (September 2024): 1614-1621

Application of Machine Learning in Drone Technology for Tracking of Tigers

Ahmad Ali AlZubi1,*, Abdulrhman Alkhanifer2
ORCID: https://orcid.org/0000-0001-8477-8319
1Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia.
2Department of Computer Science, King Saud University, Riyadh, Saudi Arabia.
Cite article: AlZubi Ali Ahmad, Alkhanifer Abdulrhman (2024). Application of Machine Learning in Drone Technology for Tracking of Tigers. Indian Journal of Animal Research. 58(9): 1614-1621. doi: 10.18805/IJAR.BF-1759.

Background: Tigers, as iconic apex predators and symbols of biodiversity conservation, face numerous threats to their existence. Effective tracking and monitoring are essential for understanding and preserving these majestic creatures and their habitats. The convergence of machine learning and drone technology has emerged as a transformative tool in the field of tiger tracking. Drones, or Unmanned Aerial Vehicles (UAVs), have rapidly become invaluable assets in wildlife conservation. Machine learning algorithms, with their capacity to analyze complex datasets, make predictions and automate decision-making processes, offer a novel approach to processing the massive amounts of data generated by drones, including images, sounds and sensor readings.

Methods: This paper explores the historical significance of tiger tracking, the pivotal role of drones in conservation and the transformative capabilities of machine learning in wildlife monitoring. In this work, an accurate framework for tiger detection based on YOLOv8 is utilized.

Result: By examining the interplay between machine learning, drone technology and tiger conservation, this paper highlights the potential for innovation and the challenges that lie ahead, promising a brighter future for these iconic creatures and their ecosystems. The fine-tuned YOLOv8 model demonstrates exceptional object detection performance, boasting a mAP50 of 0.9820 and a mAP50-95 of 0.6856, coupled with precise classification (precision 0.9646) and robust instance capture (recall 0.9580).

The monitoring and preservation of tigers pose a crucial and long-lasting problem for both environmentalists and conservationists (Sarkar et al., 2021). Technological advancements have greatly impacted wildlife conservation efforts, with drones, also known as Unmanned Aerial Vehicles (UAVs), emerging as a particularly valuable tool for conservationists (Ancin-Murguzur et al., 2020). Machine learning utilizes the capabilities of artificial intelligence (AI) to analyze complex patterns, generate forecasts and automate decision-making procedures (Aguilar-Lazcano et al., 2023).
       
This paper explores the convergence of machine learning and drone technology in tiger tracking. It discusses the significance of tiger tracking, the role of drones in conservation and the transformative capabilities of machine learning in wildlife monitoring, and highlights the relevance of AI-powered drone technology in ensuring the survival of tigers and their ecosystems. Detection experiments are carried out with YOLOv8 on a dataset obtained from the Kaggle database. The number of individual animals that humans can observe directly is restricted by physical and cognitive limitations (Browning, 2022). Integrating data from various sources, such as ground sensors and local knowledge, ensures a thorough approach to tiger monitoring in difficult environments and enables researchers to study tigers in their own habitats while minimizing disruption (Kamran et al., 2021). Furthermore, the unobtrusive character of drones minimizes stress in tigers and preserves their natural behaviour without disturbance (Li et al., 2023), which addresses a prevalent issue with conventional land-based tracking techniques. Drones are also widely recognized for their capacity to traverse extensive regions rapidly, rendering them efficient and economical for gathering data (Hodge et al., 2021).
 
Literature review
 
Drones have become essential instruments in the field of tiger conservation, allowing conservationists to view and track the activities of these elusive apex predators in ways that were previously difficult or invasive (Hossain, 2022; Choi et al., 2023; Min et al., 2024). Artificial intelligence (AI) has proven useful in a wide range of fields, from big data analysis to animal research. AI algorithms are increasingly used to handle enormous volumes of data effectively, providing insights and forecasts that support decision-makers across many industries. In animal research, AI methods are applied to investigate behaviour patterns, genetic variables and health outcomes, among other topics, advancing our knowledge of animal welfare and efforts to improve it (Na et al., 2024; Kim and Kim, 2023; Porwal et al., 2024; Wasik and Pattinson, 2024). Quadcopters and hexacopters can take off and land vertically, offering a combination of stability and agility that makes them ideal for capturing close-up photos and videos over water or in densely forested areas. Their observation time varies with factors such as battery capacity and flight-mission conditions (Hildmann et al., 2019; Wilson et al., 2022); in a single flight, quadcopters and hexacopters typically achieve observation times of 20 to 30 minutes. Table 1 presents the advantages and disadvantages of quadcopters/hexacopters compared to traditional methods.
 

Table 1: Comparison of quadcopters/hexacopters with traditional methods.


 
Machine learning and its applications
 
Machine learning is widely used in various fields, such as healthcare, finance and, notably, animal tracking (Taye, 2023). Table 2 lists some machine and deep learning techniques developed for wildlife conservation.
 

Table 2: Resources for machine and deep learning based wildlife conservation.


 
The integration of machine learning with drone technology
 
Drones collect large volumes of imagery and sensor data, and machine learning algorithms analyze these data, enabling instantaneous decision-making and automation. Machine learning enables the recognition of tigers in photographs taken by drones and offers insight into their behaviour (Alrayes et al., 2022; Cho, 2024; Maltare et al., 2023). The methods employed for the collection of data are:
 
Cameras
 
Cameras provide crucial visual data for the purpose of monitoring (Tuia et al., 2022). High-quality cameras capture detailed photos and videos, facilitating the identification of individual tigers through distinctive characteristics (Shi et al., 2022). Thermal imaging cameras detect variations in temperature, making them highly effective for following nocturnal activities and locating tigers in settings with little illumination (Butcher et al., 2021).
 
Sensors
 
Various types of sensors are employed to gather diverse sets of information, enhancing the understanding of tiger behaviour, movements and ecological interactions (Ram et al., 2023). LiDAR (Light Detection and Ranging) technology produces detailed 3D maps of the landscape, which can assist in evaluating habitats and delineating tiger territory (Shanley et al., 2021); in that study, the LiDAR-based habitat model had the lowest classification error (OOB = 5.8%, κ = 0.77). Multispectral and hyperspectral sensors gather data beyond the range of wavelengths visible to the human eye, uncovering specific information about vegetation health and the surrounding environmental conditions (Adão et al., 2017).
 
Satellite imagery
 
Satellite imagery offers a bird’s-eye view of tiger habitats and can be used to assess changes in land cover and habitat fragmentation (Ahmad et al., 2023). In that study, the accuracy evaluation revealed a Kappa value of 0.87 and an overall classification accuracy of 88.5%.
 
Data pre-processing and feature extraction
 
Several machine learning techniques and algorithms are frequently used in the field of wildlife monitoring, with a specific focus on tigers:
 
i). Supervised learning algorithms
 
Support Vector Machines (SVM)
 
SVMs are commonly employed for species classification. They operate by identifying the optimal hyperplane that effectively separates categories of data, such as tigers from other animals or background (Vidal et al., 2021).
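To make this concrete, here is a minimal scikit-learn sketch (illustrative only, not the authors' pipeline) that trains an SVM to separate "tiger" from "background" feature vectors; the 128-dimensional features and labels are random placeholders standing in for real image descriptors.

```python
# Minimal sketch: SVM species classification on hypothetical image features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))            # placeholder feature vectors
y = rng.integers(0, 2, size=200)           # 1 = tiger, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# An RBF-kernel SVM finds the maximum-margin separating hyperplane
# in the induced feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```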
 
Decision trees
 
Decision trees are highly efficient for species identification. Animals are classified by traversing a hierarchical tree of decisions based on characteristics such as size, stripes and colour patterns (Song and Lu, 2015).
 
ii). Unsupervised learning techniques
 
Clustering algorithms
 
Clustering techniques such as k-means are useful for categorizing tigers according to their activities. For instance, they can assist in identifying social hierarchies or detecting atypical behavioural patterns that could indicate sickness or stress (Tabianan et al., 2022).
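As an illustration only, the sketch below clusters hypothetical per-tiger activity features with k-means and flags the individual farthest from its cluster centre; every feature and value is invented for the example.

```python
# Minimal sketch: k-means clustering of hypothetical tiger activity features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
activity = rng.normal(size=(60, 3))        # 60 tigers x 3 behavioural features

# Standardize so no single feature dominates the Euclidean distance.
X = StandardScaler().fit_transform(activity)

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
print(km.labels_)                          # cluster id per tiger

# A tiger far from its cluster centre may warrant a closer look
# (possible atypical behaviour in this toy setting).
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
print("most atypical individual:", dist.argmax())
```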
 
iii). Deep learning
 
Convolutional neural networks (CNNs)
 
CNNs are highly proficient in image analysis and are commonly employed to detect and monitor tigers in photos and videos obtained from camera traps or drones (Kishore et al., 2021). By discerning distinctive characteristics, they can distinguish specific individuals, enabling the continuous tracking of individual tigers (Fergus et al., 2023). That experiment showed that high animal detection accuracy, 99.31%, is feasible across 12 species.
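The following PyTorch sketch shows the shape of such a CNN classifier; the architecture and input size are illustrative and not taken from the cited studies.

```python
# Minimal sketch: a tiny CNN image classifier (tiger vs. not-tiger).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Two conv blocks learn local visual features (edges, stripes, ...).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling + linear head maps features to class scores.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinyCNN()
dummy = torch.randn(4, 3, 224, 224)        # batch of 4 RGB images
print(model(dummy).shape)                  # torch.Size([4, 2])
```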
 
Recurrent neural networks (RNNs)
 
RNNs are utilized in the study of time-series data, enabling the monitoring of actions and movements over time. They can assist in comprehending tiger behaviours such as mating, hunting or territorial patrolling (Zhang, 2012).
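Again as a sketch rather than the cited method, the snippet below runs an LSTM (a common RNN variant) over hypothetical GPS track segments, with each time step encoded as assumed (x, y, speed) features, and classifies each track into a behaviour class.

```python
# Minimal sketch: LSTM behaviour classification over GPS track sequences.
import torch
import torch.nn as nn

class TrackLSTM(nn.Module):
    def __init__(self, n_behaviours: int = 3):
        super().__init__()
        # Assumed per-step features: (x, y, speed).
        self.lstm = nn.LSTM(input_size=3, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_behaviours)

    def forward(self, seq):
        _, (h_n, _) = self.lstm(seq)       # final hidden state summarizes the track
        return self.fc(h_n[-1])

model = TrackLSTM()
tracks = torch.randn(8, 50, 3)             # 8 tracks, 50 fixes each
print(model(tracks).shape)                 # torch.Size([8, 3])
```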
 
Tracking and localization algorithms
 
Tiger monitoring relies on tracking and localization algorithms, which offer up-to-date data on the whereabouts and movements of these animals. Several algorithms are used for this purpose:
 
Kalman filters
 
Kalman filters are recursive estimators that forecast a tiger’s future position from its past coordinates. They are extremely useful for accurately tracking and localizing animals in real time, especially in scenarios where the data may be noisy or ambiguous. In a related distribution-modelling study, leopard distribution maps showed notably higher discrimination (AUC = 0.90, TSS = 0.80) than those for tigers (AUC = 0.83, TSS = 0.66) (Rather et al., 2020).
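A minimal constant-velocity Kalman filter in NumPy illustrates the predict/update cycle described above; the time step and noise covariances are illustrative assumptions.

```python
# Minimal sketch: constant-velocity Kalman filter for noisy 2-D position fixes.
import numpy as np

dt = 1.0                                   # assumed time step between fixes
F = np.array([[1, 0, dt, 0],               # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],                # we observe position only
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 0.5                        # measurement noise (assumed)

x = np.zeros(4)                            # initial state
P = np.eye(4)                              # initial uncertainty

def kalman_step(z):
    """One predict/update cycle for a new position measurement z = (x, y)."""
    global x, P
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                           # filtered position estimate

for z in np.random.default_rng(2).normal(size=(10, 2)).cumsum(axis=0):
    print(kalman_step(z))
```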
 
Particle filters
 
Particle filters are capable of estimating the probability distribution of a tiger’s location, which makes them well-suited for situations where there is uncertainty or variability in the tracking data. They are especially beneficial for monitoring numerous tigers concurrently (Kambhampati et al., 2004).
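The bootstrap particle filter sketch below shows how a cloud of weighted particles approximates the distribution over a tiger's position; the random-walk motion model and Gaussian measurement model are simple illustrative assumptions.

```python
# Minimal sketch: bootstrap particle filter over a 2-D position.
import numpy as np

rng = np.random.default_rng(3)
N = 1000
particles = rng.normal(0, 5, size=(N, 2))  # initial position hypotheses
weights = np.full(N, 1.0 / N)

def pf_step(z, motion_std=1.0, meas_std=2.0):
    """Propagate particles, reweight by the measurement z, then resample."""
    global particles, weights
    # Predict: random-walk motion model (assumed).
    particles += rng.normal(0, motion_std, size=particles.shape)
    # Update: weight by Gaussian likelihood of observing z.
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2 * meas_std**2))
    weights /= weights.sum()
    # Resample to concentrate particles in high-probability regions.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)
    return particles.mean(axis=0)          # posterior mean position

for z in rng.normal(size=(5, 2)).cumsum(axis=0):
    print(pf_step(z))
```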
 
Hidden Markov models (HMMs)
 
Hidden Markov models (HMMs) are employed to represent the locomotion patterns of tigers. By analyzing observed data, it is possible to infer concealed states, such as a tiger’s whereabouts, as well as the transitions that occur between these states (Joo et al., 2013).
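A minimal sketch using the hmmlearn library (an assumed choice of implementation) infers hidden behavioural states from observed step lengths and turning angles of a synthetic track.

```python
# Minimal sketch: Gaussian HMM over synthetic movement observations.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
# Observations: [step_length, turning_angle] per hourly GPS fix (synthetic).
obs = np.vstack([
    rng.normal([0.2, 0.0], [0.1, 0.5], size=(100, 2)),   # resting-like regime
    rng.normal([2.5, 0.0], [0.8, 1.5], size=(100, 2)),   # travelling-like regime
])

# Three hidden states, e.g. resting / foraging / travelling.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(obs)
states = model.predict(obs)                # most likely hidden-state sequence
print(np.bincount(states))                 # time spent in each inferred state
```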
 
Image and video analysis techniques
 
Convolutional neural networks (CNNs) play a crucial role in the detection of tigers and can accurately distinguish individual tigers by recognizing their unique stripe patterns and facial traits (Shi et al., 2020). In addition, tigers may be rapidly detected and localized in photos or videos using object detection techniques: YOLO (You Only Look Once) models provide swift detection and delineation of tigers, facilitating expedient analysis (Srivastava et al., 2021). On the basis of such data, predictive algorithms, which frequently employ recurrent neural networks (RNNs), can forecast future tiger behaviour; these predictions are useful for organizing conservation strategies and mitigating conflicts between humans and tigers (Chatterjee et al., 2022). In Sumatra, Indonesia, machine learning and thermal imaging drones were utilized to monitor leopards at night, revealing crucial behavioural insights such as foraging patterns and territorial movements (Rietz et al., 2023). In Chitwan National Park, Nepal, machine learning algorithms were employed to analyze LiDAR data collected by drones; the resulting precise 3D maps of the park’s landscape significantly improved habitat preservation efforts (Wu et al., 2023).
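For readers unfamiliar with the YOLO workflow, the sketch below shows the basic Ultralytics detection call; the checkpoint name and image path are placeholders, not artifacts from this paper.

```python
# Minimal sketch: object detection with an Ultralytics YOLO model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # pretrained checkpoint (placeholder)
results = model("drone_frame.jpg", conf=0.5)  # hypothetical drone image

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])              # predicted class index
        conf = float(box.conf[0])             # detection confidence
        x1, y1, x2, y2 = box.xyxy[0].tolist() # bounding-box corners
        print(f"class={cls_id} conf={conf:.2f} "
              f"box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```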
Dataset description
 
The dataset, sourced from Kaggle (https://www.kaggle.com/datasets/gauravpendharkar/tiger-detection-dataset), contains a total of 4413 images of tigers. It is divided into training, validation and test sets in an 80:10:10 ratio.
 
YOLOv8 overview
 
In this work, the tiger detection method is implemented using YOLOv8 from Ultralytics.
       
YOLOv8, the most recent version of the YOLO object detection model at the time of this work, keeps the same overall architecture as its predecessors while introducing several notable improvements. Two noteworthy components are the feature pyramid network (FPN) and the path aggregation network (PAN). In addition, a new labeling tool with features such as customizable hotkeys, labeling shortcuts and auto-labeling expedites the annotation process; together, these tools simplify image annotation for model training. The FPN creates feature maps that can detect objects at a variety of scales and resolutions by methodically decreasing spatial resolution while expanding feature channels. The PAN design, in turn, improves the network’s capacity to capture features at various scales and resolutions by aggregating features from several network levels through skip connections. This capacity is essential for accurately identifying objects that differ in size and shape.
       
The backbone, neck and head are the three primary parts of the YOLOv8 architecture. The backbone network extracts relevant features from the input image. The neck functions as a bridge between the backbone and head networks, improving feature resolution while decreasing feature-map dimensions. The head network comprises three detection branches, one for each object scale (small, medium and large), that work together to provide a comprehensive and adaptable object detection system.
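A minimal Ultralytics training sketch for the setup described here; the dataset YAML name and most hyperparameters are assumptions, with the initial learning rate set to the 0.0001 reported later in the paper.

```python
# Minimal sketch: fine-tuning YOLOv8 on a tiger detection dataset.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")                 # start from pretrained weights (assumed size)
model.train(
    data="tiger.yaml",                     # hypothetical dataset config for the 80:10:10 split
    epochs=100,                            # illustrative value
    imgsz=640,                             # illustrative input resolution
    lr0=0.0001,                            # initial learning rate, per the paper
)
```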
 
Parameters and metrics
 
Learning rates (lr/pg0, lr/pg1, lr/pg2)
 
The learning rate determines how much the model modifies its parameters during training. A balanced learning rate lets the model converge effectively without oscillating or becoming stuck; these rates may need tuning to get the best performance out of the tiger tracking model.
 
Metrics
 
mAP50-95(B) and mAP50(B)
 
One important metric for object detection is mean average precision (mAP), which reflects the model’s object localization accuracy. Higher mAP values, especially over the stricter 50-95% intersection-over-union (IoU) threshold range, signify improved tiger tracking precision.
 
Precision (B) and Recall (B)
 
Precision measures the accuracy of tiger predictions, while recall assesses the model’s ability to detect all actual tigers. A balance is crucial: high precision ensures accurate predictions, while high recall prevents missed tigers.
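For reference, these metrics have standard definitions (restated here, not derived from the paper). With TP, FP and FN denoting true positives, false positives and false negatives:

```latex
\mathrm{Precision} = \frac{TP}{TP+FP}, \qquad \mathrm{Recall} = \frac{TP}{TP+FN}
```

Average precision (AP) is the area under the precision-recall curve for one class; mAP50 averages AP over classes at an IoU threshold of 0.50, and mAP50-95 further averages over IoU thresholds from 0.50 to 0.95 in steps of 0.05:

```latex
\mathrm{mAP}_{50\text{-}95} = \frac{1}{10}\sum_{t\in\{0.50,\,0.55,\,\ldots,\,0.95\}} \mathrm{mAP}_{t}
```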
 
Features of the model
 
Model/GFLOPs
 
GFLOPs reflects the model’s computational complexity as the number of billions of floating-point operations required per inference. For real-time applications, like tracking tigers, a lower GFLOPs value ensures more efficient processing.
 
Model/parameters
 
The complexity of the model is indicated by the number of parameters. To capture the finer details of tigers without overfitting the training set, complexity must be balanced.
 
PyTorch model/speed_ms
 
The processing efficiency of the model is indicated by speed in milliseconds. For real-time tiger tracking, faster processing is preferred because it allows for quicker reactions to environmental changes.
 
Loss Values (val/box_loss, val/cls_loss, val/dfl_loss, train/box_loss, train/cls_loss, train/dfl_loss)
 
The model’s learning efficiency from the tiger tracking dataset is indicated by loss values experienced during training and validation. Lower loss values signify successful training, ensuring that the model performs better when applied to new data.
Graph 1 plots the metrics and loss values tracked during the YOLOv8 model’s training and evaluation for tiger tracking.
 

Graph 1: Plots of the training and validation metrics.


       
The outputs of tiger tracking using YOLOv8 are presented in Table 3. The learning rates (lr/pg0, lr/pg1, lr/pg2) are set to 0.0001. The model has outstanding object detection capabilities, with a mean average precision of 0.9820 at a 50% intersection-over-union threshold (mAP50). Over the broader range of IoU thresholds (50-95%), the mAP50-95 of 0.6856 indicates strong performance across detection accuracy levels. With high precision and recall of 0.9646 and 0.9580, respectively, the model demonstrates its accuracy in identifying and capturing tigers.
 

Table 3: Results of tiger tracking using YOLOv8.


       
The computational efficiency of the model is reflected in its parameter count (26,854,899) and GFLOPs (79.10). In terms of performance, the model shows a relatively fast inference time, processing an image in 6.60 milliseconds with PyTorch. In the training (train/box_loss, train/cls_loss, train/dfl_loss) and validation (val/box_loss, val/cls_loss, val/dfl_loss) phases, the specified losses indicate the model’s performance in bounding-box regression, class prediction and distribution focal loss (dfl) based localization. The results show that the model is well trained and achieves a good balance between precision and recall.
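As a sketch of how such numbers are read back in practice, the Ultralytics API exposes validation metrics directly; the attribute names below follow recent Ultralytics releases and the file paths are placeholders, so treat both as assumptions.

```python
# Minimal sketch: retrieving the validation metrics reported in Table 3.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical weights path
metrics = model.val(data="tiger.yaml")             # hypothetical dataset config

print("mAP50:    ", metrics.box.map50)   # cf. 0.9820 in the paper
print("mAP50-95: ", metrics.box.map)     # cf. 0.6856
print("precision:", metrics.box.mp)      # cf. 0.9646
print("recall:   ", metrics.box.mr)      # cf. 0.9580
```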
       
Fig 1 shows sample results from the YOLOv8 model, which detects each target with a high degree of confidence.
 

Fig 1: Sample results of tiger tracking using YOLOv8.

The combination of drone technology and machine learning has fundamentally transformed the field of tiger tracking, providing novel and effective solutions to the obstacles encountered in conservation efforts. The welfare and protection of tigers are of utmost importance for conservationists and researchers. To ensure their ethical treatment, it is essential to minimize stress and disturbance, strictly adhere to appropriate protocols and collaborate with local communities and stakeholders, where necessary, to obtain their consent and involve them in conservation efforts (Rieder et al., 2021). Ethical tiger tracking should focus on conservation outcomes that safeguard tiger populations and their habitats, with transparent reporting through accurate documentation and conscientious use of tracking data for conservation purposes (Isabelle and Westerlund, 2022).
       
In this paper, tiger detection is done using YOLOv8. Exhibiting outstanding object detection capabilities, the fine-tuned YOLOv8 model achieves a remarkable mAP50 of 0.9820 and a mAP50-95 of 0.6856. It excels in precise classification (precision 0.9646) and adeptly captures instances with a strong recall of 0.9580.
 
Future directions
 
The integration of drone technology and machine learning offers a promising solution to the conservation difficulties faced by tigers, thereby shaping the future of tiger tracking. Moreover, there is an increasing emphasis on incorporating cutting-edge sensor technologies such as LiDAR, hyperspectral imaging and thermal imaging into drone systems. Furthermore, improvements in the real-time data processing capabilities of drones are needed to facilitate prompt analysis and decision-making in addressing emergent conservation concerns.
Acknowledgement
 
The authors would like to thank the editors and reviewers for their review and recommendations, and to extend their thanks to King Saud University for funding this work through the Researchers Supporting Project number (RSP2024R395), King Saud University, Riyadh, Saudi Arabia.
 
Funding statement
 
This work was supported by the Researchers Supporting Project number (RSP2024R395), King Saud University, Riyadh, Saudi Arabia.
 
Author contributions
 
All authors contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
 
Data availability statement
 
Not applicable.
 
Declarations
 
Authors declare that all works are original and this manuscript has not been published in any other journal.
The authors declare that they have no conflict of interest.

  1. Adão, T., Hruška, J., Pádua, L., Bessa, J., Peres, E., Morais, R. and Sousa, J.J. (2017). Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sensing. 9(11). https://doi.org/10.3390/rs9111110.

  2. Aguilar-Lazcano, C.A., Espinosa-Curiel, I.E., Ríos-Martínez, J.A., Madera-Ramírez, F.A. and Pérez-Espinosa, H. (2023). Machine learning-based sensor data fusion for animal monitoring: Scoping review. Sensors. 23(12): 1-28. https://doi.org/10.3390/s23125732.

  3. Ahmad, A., Kanagaraj, R. and Gopi, G.V. (2023). Wildlife habitat mapping using Sentinel-2 imagery of Mehao Wildlife Sanctuary, Arunachal Pradesh, India. Heliyon. 9(3): e13799. https://doi.org/10.1016/j.heliyon.2023.e13799.

  4. Ahumada, J.A., Fegraus, E., Birch, T., Flores, N., Kays, R., O’Brien, T.G., Palmer, J., Schuttler, S., Zhao, J.Y., Jetz, W., Kinnaird, M., Kulkarni, S., Lyet, A. and Thau, D. (2019). Wildlife Insights: A platform to maximize the potential of camera trap and other passive sensor wildlife data for the planet. Environmental Conservation. https://doi.org/10.1017/S0376892919000298.

  5. Alrayes, F.S., Alotaibi, S.S., Alissa, K.A., Maashi, M., Alhogail, A., Alotaibi, N., Mohsen, H. and Motwakel, A. (2022). Artificial intelligence-based secure communication and classification for drone-enabled emergency monitoring systems. Drones.  6(9). https://doi.org/10.3390/drones6090222.

  6. Ancin-Murguzur, F.J., Munoz, L., Monz, C. and Hausner, V.H. (2020).  Drones as a tool to monitor human impacts and vegetation changes in parks and protected areas. Remote Sensing in Ecology and Conservation. 6(1): 105-113. https://doi.org/10.1002/rse2.127.

  7. Berger-Wolf, T.Y., Rubenstein, D.I., Stewart, C.V. and Holmberg, J.A. (2017). Wildbook: Crowdsourcing, computer vision and data science for conservation.

  8. Browning, H. (2022). Assessing measures of animal welfare. Biology and Philosophy. 37(4): 36. https://doi.org/10.1007/s10539-022-09862-1.

  9. Butcher, P.A., Colefax, A.P., Gorkin, R.A., Kajiura, S.M., López, N.A., Mourier, J., Purcell, C.R., Skomal, G.B., Tucker, J.P., Walsh, A.J., Williamson, J.E. and Raoult, V. (2021). The drone revolution of shark science: A review. Drones. 5(1): 1-28. https://doi.org/10.3390/drones5010008.

  10. Chatterjee, M., Chatterjee, N., Chandel, P., Bhattacharya, T. and Kaul, R. (2022). Predicting negative human-tiger (Panthera  tigris) interactions in mosaic landscapes around Dudhwa and Pilibhit tiger reserves in India. Frontiers in Conservation  Science. 3(October): 1-12. https://doi.org/10.3389/fcosc.2022.999195.

  11. Cho, O.H. (2024). An evaluation of various machine learning approaches for detecting leaf diseases in agriculture. Legume Research. https://doi.org/10.18805/LRF-787.

  12. Choi, H.W., Kim, H.J., Kim, S.K. and Na, W.S. (2023). An overview of drone applications in the construction industry. Drones. 7(8). https://doi.org/10.3390/drones7080515.

  13. Taye, M.M. (2023). Understanding of machine learning with deep learning: Architectures, workflow, applications and future directions. Computers. 12(5): 91. https://doi.org/10.3390/computers12050091.

  14. Fergus, P., Chalmers, C., Longmore, S., Wich, S., Warmenhove, C., Swart, J., Ngongwane, T., Burger, A., Ledgard, J. and Meijaard, E. (2023). Empowering wildlife guardians: An equitable digital stewardship and reward system for biodiversity conservation using deep learning and 3/4G camera traps. Remote Sensing. 15(11): 1-29. https://doi.org/10.3390/rs15112730.

  15. Graving, J.M., Chae, D., Naik, H. and Li, L. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. eLife. 8: e47994. https://doi.org/10.7554/eLife.47994.

  16. Hildmann, H., Kovacs, E., Saffre, F. and Isakovic, A.F. (2019). Nature-inspired drone swarming for real-time aerial data- collection under dynamic operational constraints. Drones. 3(3): 1-25. https://doi.org/10.3390/drones3030071.

  17. Hodge, V.J., Hawkins, R. and Alexander, R. (2021). Deep reinforcement learning for drone navigation using sensor data. Neural Computing and Applications. 33(6): 2015-2033. https://doi.org/10.1007/s00521-020-05097-x.

  18. Hossain, R. (2022). A short review of the drone technology. International Journal of Mechatronics and Manufacturing Technology. 7(2): 53-68.

  19. Isabelle, D.A. and Westerlund, M. (2022). A review and categorization of artificial intelligence-based opportunities in wildlife, ocean and land conservation. Sustainability (Switzerland).  14(4): 1-22. https://doi.org/10.3390/su14041979.

  20. Joo, R., Bertrand, S., Tam, J. and Fablet, R. (2013). Hidden Markov Models: The Best Models for Forager Movements? PLoS ONE. 8(8). https://doi.org/10.1371/journal.pone.0071246.

  21. Kambhampati, S.S., Tangirala, K.V, Namuduri, K.R. and Jayaweera, S.K. (2004). Particle filtering for target tracking. Wireless Personal and Multimedia Communications. 3-5.

  22. Kamran, S., Ullmann, W. and Linde, A. (2021). Development, testing and implementation of insect-catching drones. In Asia-Pacific forest sector outlook: Innovative forestry for a sustainable future. Youth contributions from Asia and the Pacific. 10: 12.

  23. Kellenberger, B., Tuia, D. and Morris, D. (2020). AIDE: Accelerating image-based ecological surveys with interactive machine learning. Methods in Ecology and Evolution. 11(12): 1716- 1727. https://doi.org/10.1111/2041-210X.13489.

  24. Kim, E.J., Kim, J.Y. (2023). Exploring the Online News Trends of the Metaverse in South Korea: A Data-Mining-Driven Semantic Network Analysis. Sustainability. 15: 16279. https://doi.org/10.3390/su152316279

  25. Kishore, T., Jha, A., Kumar, S., Bhattacharya, S. and Sultana, M. (2021). Deep CNN based automatic detection and identification of bengal tigers. In International Conference on Computational Intelligence in Communications and Business Analytics Cham: Springer International Publishing.  pp. 189-198. https://doi.org/10.1007/978-3-030-75529-4_15.

  26. Li, S., Wang, G., Zhang, H. and Zou, Y. (2023). Observing individuals and behavior of hainan gibbons (Nomascus hainanus) using drone infrared and visible image fusion technology. Drones. 7(9): 543. https://doi.org/10.3390/drones7090543.

  27. Maltare, N.N., Sharma, D., Patel, S. (2023). An exploration and prediction of rainfall and groundwater level for the District of Banaskantha, Gujrat, India. International Journal of Environmental Sciences. 9(1): 1-17.

  28. Mathis, A., Mamidanna, P., Cury, K.M., Abe, T., Murthy, V.N., Mathis, M.W. and Bethge, M. (2021). DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience. https://doi.org/10.1038/s41593-018-0209-y.

  29. Min, P.K., Mito, K. and Kim, T.H. (2024). The evolving landscape of artificial intelligence applications in animal health. Indian Journal of Animal Research. https://doi.org/10.18805/IJAR.BF-1742.

  30. Na, J.C., Kim, E.J., Kim, J.Y. (2024). Unveiling Metaverse Social Trends: Analysing Big Data Regarding Online Sports News with LDA-Based Topic Modelling. Revista De Psicología Del Deporte (Journal of Sport Psychology). 33(1): 115-125. https://rpdonline.com/index.php/rpd/article/view/1533.

  31. Porwal, S., Majid, M., Desai, S.C., Vaishnav, J. and Alam, S. (2024). Recent advances, challenges in applying artificial intelligence and deep learning in the manufacturing industry.  Pacific Business Review (International). 16(7): 143-152.

  32. Ram, M., Sahu, A., Srivastava, N., Jhala, L., Zala, Y. and Venkataraman, M. (2023). Conservation Management of the Endangered Asiatic Lions in Gujarat, India, Using GPS Satellite Telemetry. Animals. 13(1): 1-13. https://doi.org/10.3390/ani13010125.

  33. Rather, T.A., Kumar, S. and Khan, J.A. (2020). Multi-scale habitat modelling and predicting change in the distribution of tiger and leopard using random forest algorithm. Scientific Reports. 10(1): 1-19. https://doi.org/10.1038/s41598-020-68167-z.

  34. Rieder, E., Larson, L.R., ‘t Sas-Rolfes, M. and Kopainsky, B. (2021). Using Participatory System Dynamics Modeling to Address Complex Conservation Problems: Tiger Farming as a Case Study. Frontiers in Conservation Science. 2 (September). https://doi.org/10.3389/fcosc.2021.696615.

  35. Rietz, J., van Beeck Calkoen, S.T.S., Ferry, N., Schlüter, J., Wehner, H., Schindlatz, K.H., Lackner, T., von Hoermann, C., Conraths, F.J., Müller, J. and Heurich, M. (2023). Drone-based thermal imaging in the detection of wildlife carcasses and disease management. Transboundary and Emerging Diseases. 1-12. https://doi.org/10.1155/2023/5517000.

  36. Sarkar, M.S., Amonge, D.E., Pradhan, N., Naing, H., Huang, Z. and Lodhi, M.S. (2021). A review of two decades of conservation efforts on tigers, co-predators and prey at the junction of three global biodiversity hotspots in the transboundary Far-Eastern Himalayan landscape. Animals. 11(8). https://doi.org/10.3390/ani11082365.

  37. Shandilya, S.K., Srivastav, A., Yemets, K., Datta, A. and Nagar, A.K. (2023). YOLO-based segmented dataset for drone vs. bird detection for deep and machine learning algorithms. Data in Brief. 50: 109355. https://doi.org/10.1016/j.dib.2023.109355.

  38. Shanley, C.S., Eacker, D.R., Reynolds, C.P., Bennetsen, B.M.B. and Gilbert, S.L. (2021). Using LiDAR and Random Forest to improve deer habitat models in a managed forest landscape. Forest Ecology and Management. 499(July): 119580. https://doi.org/10.1016/j.foreco.2021.119580.

  39. Shi, C., Liu, D., Cui, Y., Xie, J., Roberts, N.J. and Jiang, G. (2020). Amur tiger stripes: Individual identification based on deep convolutional neural network. Integrative Zoology. 15(6): 461-470. https://doi.org/10.1111/1749-4877.12453.

  40. Song, Y.Y. and Lu, Y. (2015). Decision tree methods: applications for classification and prediction. Shanghai Archives of Psychiatry. 27(2): 130-135. https://doi.org/10.11919/j.issn.1002-0829.215044.

  41. Srivastava, S., Divekar, A.V., Anilkumar, C., Naik, I., Kulkarni, V. and Pattabiraman, V. (2021). Comparative analysis of deep learning image detection algorithms. Journal of Big Data. 8(1). https://doi.org/10.1186/s40537-021-00434-w.

  42. Tabianan, K., Velu, S. and Ravi, V. (2022). K-Means clustering approach for intelligent customer segmentation using customer purchase behavior data. Sustainability (Switzerland).  14(12): 1-15. https://doi.org/10.3390/su14127243.

  43. Tuia, D., Kellenberger, B., Beery, S., Costelloe, B.R., Zuffi, S., Risse, B., Mathis, A., Mathis, M.W., van Langevelde, F., Burghardt,  T., Kays, R., Klinck, H., Wikelski, M., Couzin, I.D., van Horn, G., Crofoot, M.C., Stewart, C.V. and Berger-Wolf, T. (2022). Perspectives in machine learning for wildlife conservation. Nature Communications. 13(1): 1-15. https://doi.org/10.1038/s41467-022-27980-y.

  44. Vidal, M., Wolf, N., Rosenberg, B., Harris, B.P. and Mathis, A. (2021). Perspectives on individual animal identification from biology and computer vision. Integrative and Comparative Biology. 61(3): 900-916. https://doi.org/10.1093/icb/icab107.

  45. Wang, C., Yu, X., Xia, S., Liu, Y., Huang, J. and Zhao, W. (2022). Potential habitats and their conservation status for swan geese (Anser cygnoides) along the East Asian flyway. Remote Sensing. 14(8): 1-14. https://doi.org/10.3390/rs14081899.

  46. Wasik, S. and Pattinson, R.  (2024). Artificial intelligence applications in fish classification and taxonomy: Advancing our understanding of aquatic biodiversity. Fish Taxa. 31: 11-21. 

  47. Wilson, A.M., Boyle, K.S., Gilmore, J.L., Kiefer, C.J. and Walker, M.F. (2022). Species-specific responses of bird song output in the presence of drones. Drones. 6(1): https://doi.org/10.3390/drones6010001.

  48. Wu, Z., Zhang, C., Gu, X., Duporge, I., Hughey, L.F., Stabach, J.A., Skidmore, A.K., Hopcraft, J.G.C., Lee, S.J., Atkinson, P.M., McCauley, D.J., Lamprey, R., Ngene, S. and Wang, T. (2023). Deep learning enables satellite-based monitoring of large populations of terrestrial mammals across heterogeneous landscape. Nature Communications. 14(1): https://doi.org/10.1038/s41467-023-38901-y.

  49. Zhang, G.P. (2012). Neural Networks for Time-Series Forecasting. Handbook of Natural Computing. 1-4(October): 461-477. https://doi.org/10.1007/978-3-540-92910-9_14.
