Indian Journal of Animal Research

Indian Journal of Animal Research, Volume 57, Issue 12 (December 2023): 1717-1724

Application of Machine Learning in Drone Technology for Tracking Cattle Movement

Ahmad Ali AlZubi1,*
1Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia.
Cite article: AlZubi Ahmad Ali (2023). Application of Machine Learning in Drone Technology for Tracking Cattle Movement. Indian Journal of Animal Research. 57(12): 1717-1724. doi: 10.18805/IJAR.BF-1697.

Background: Drones and other unmanned aerial vehicles have expanded the freedom to control and observe operations from distant areas. This paper presents a comprehensive review of the application of machine learning in drone technology for monitoring and analyzing cattle movement patterns. Traditional methods of tracking cattle movement, such as manual surveys or satellite imagery, are time-consuming and often lack precision. With the integration of machine learning algorithms, drones offer a cost-effective and efficient solution for accurately monitoring large grazing areas. This project uses such algorithms to test the viability and possible advantages of merging machine learning and drones for tracking cattle movement.

Methods: This study makes use of a dataset of images collected from open data initiatives and crowd-sourced ground truth. Support vector machines (SVMs), one of the machine learning approaches, are used as the classifier. The encouraging findings demonstrate that if a low precision (10 to 25%) is acceptable, true positive rates in the range of 70 to 85% are feasible. The study also covers data acquisition-related characteristics, such as image resolution.

Result: The integration of machine learning algorithms in drone technology for tracking cattle movement represents a promising approach to revolutionizing the livestock industry.

Drone technology refers to the use of unmanned aerial vehicles (UAVs) for various purposes, such as data collection, surveillance, aerial photography and delivery (Elmeseiry et al., 2021). A UAV can range in size from a few grams to hundreds of kilograms (AV Nano Hummingbird, NASA Ikhana) and its weight is frequently proportional to its possible payload. The cost-benefit analysis for agricultural applications is typically optimal with intermediate sizes (Schad and Fischer, 2022). Drone technology, coupled with the power of machine learning, is now used in agriculture, specifically in tracking the movement of cattle (Benos et al., 2021). This is important for effectively managing cattle in a farm set-up. Traditional methods of monitoring cattle are labour-intensive and time-consuming. Wearable devices such as GPS collars using sensor technology, as well as remote-sensing satellites, have been used to monitor animal movement and health, but they come with many challenges (Handcock et al., 2009; Neethirajan, 2017; Asmare, 2022). The majority of satellite sensors lack the spatial resolution necessary to distinguish between individual animals, making satellite images unsuitable for this task (Barbedo and Koenigkan, 2018). Due to the extensive geographic coverage of satellites, it may be difficult to focus on individual animals (Brown et al., 2022), producing images lacking clarity.
       
Also, cloud cover can obscure important features during cattle farm surveys. While using manned aircraft for the surveys is technically possible, it comes with several disadvantages, such as high operating costs, increased noise levels that can disturb animals, the risk of human fatalities due to accidents and airframes that may not be suitable platforms for imaging sensors. To overcome these limitations, unmanned aerial systems (UAS) are a more practical solution for cattle monitoring. Drones are lightweight, affordable aircraft platforms comprising an unmanned aerial vehicle (UAV), sensor payloads and a ground control centre. This makes them a better option than manned aircraft or satellites (EASA, 2018).
       
Using drones to observe animals is becoming increasingly common. For example, there have been studies on deer, elk, hippopotamuses (Ceballos et al., 2020), rhinoceroses, elephants and other terrestrial mammals. Some patents have also been filed for AI-based monitoring of livestock (Spencer et al., 2023; Kuper et al., 2018, 2020, 2021; Kurimoto, 2023). Few studies, however, have focused on using drones for cattle round-up, identification, grazing behaviour and health monitoring (Neethirajan, 2017; Hughey et al., 2018; Norouzzadeh et al., 2018; Burke et al., 2019; Asmare, 2022). The limited number of studies on cattle can be attributed to issues such as limited dataset availability, the complexity of cattle behaviour and regulatory restrictions such as data privacy, certification and compliance with industry standards (Cravero et al., 2022; Akhigbe et al., 2021; Manning et al., 2021; Carr, 2013).
       
Andrew et al., (2017) were the first to use deep neural networks for automatic Holstein Friesian cattle detection using UAVs, achieving 99.79% accuracy in training and 98.13% in testing. Rivas et al., (2018) explored multirotor drones for cattle detection, obtaining up to 98.78% accuracy with fewer than 10 targets in frames. Barbedo et al., (2019) utilized CNN models with UAVs to detect Canchim cattle with accuracies above 95%, even in varying lighting conditions. Recent advancements in machine learning algorithms have improved species recognition in aerial videos, enhancing drone survey accuracy (Eikelboom et al., 2019). Yang et al., (2021) employed 3D visualization to track cows from drones but faced challenges with dynamic scenes and large groups of cows, suggesting continuous model training. Ojo et al., (2022) introduced a cost-effective drone-based system for monitoring livestock activity and farm operations, eliminating the need for multiple stationary cameras on large farms.
 
Problem statement
 
Drones can track and monitor cattle and measure animal characteristics with thermal and visible-spectrum cameras (Lahoz-Monfort and Magrath, 2021). Several software programs have been created to monitor the movement, position and estimated fields of view of animals from aerial photographs (McFarlane and Schofield, 1995; Norstrøm, 2001). These programs can also automate the process of identifying species (Falzon et al., 2019) and have the potential to be valuable tools for studying collective movement and behaviour, such as groups of birds or fish (Hughey et al., 2018). Drones also show great potential for autonomously tracking and monitoring groups, as can be seen in Fig 1.

Fig 1: Analysis of aerial video footage to estimate species richness, group mobility and individual postures.


       
However, when it comes to cattle, which often move and stay together as a group, their body size can limit the effectiveness of these techniques. Traditionally, data gathering for the management and protection of cattle is done by human field workers who count the animals, observe their behaviour and monitor natural areas. These initiatives are costly, laborious and time-consuming (Hodgson et al., 2018). In addition, there is an increased risk of generating biased datasets due to the complexities involved in mitigating observer subjectivity and ensuring strong inter-observer consistency, along with animal reactions to observer presence (Rollinson et al., 2021). Human activity in the field poses threats to both wildlife and people, as well as their habitats, especially during wildlife preservation operations conducted from aircraft (Junker et al., 2020). The number of animals that can be observed at once, the level of detail and temporal accuracy of the information that can be collected and the area that can be effectively monitored are all restricted by the physical and cognitive limitations of humans (Sherman et al., 2020). The automatic detection and counting of animals by drones not only has the potential to improve population estimates but also offers an innovative approach to enhancing the accuracy and efficiency of cattle monitoring, contributing to more effective management and conservation efforts.
 
Research objectives
 
To study the role of machine learning in drone technology for tracking the movement of cattle.
The study was performed from April 2023 to August 2023 in the Computer Science Department, Community College, King Saud University.
 
Dataset
 
The present study makes use of a dataset available on the crowdsourcing platform MicroMappers, which collects information on cattle detection projects. For this study, the 2014 dataset was utilized, which contains information on animals that were tagged by volunteers. It allows for spotting and tagging animal movement in RGB photographs to generate a ground truth (Ofli et al., 2016; Rey, 2016; Kellenberger et al., 2018). This investigation exclusively utilized RGB images captured by a Canon PowerShot S110 RGB camera mounted on a UAV. These RGB photos, with a resolution of 3000 × 4000 pixels, constitute the primary components of our image acquisition system. We specifically focused on RGB photos due to a lack of empirical support for other image types, as recognizing animals in false-colour photographs would be a significantly more challenging and time-consuming task for a broader audience. Moreover, RGB cameras are not only more commonly used but also more cost-effective (Rey, 2016).
       
Through crowd-sourcing, a total of 6,800 RGB pictures (each measuring 2500 by 4000 pixels) were evaluated by multiple participants. The main goal was to construct a polygon around each animal seen in the photographs, without differentiating between species. The crowd successfully created 7,482 polygons in a total of 675 photos from 7 different flights, providing the ground truth for the study.
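As a rough illustration of how such polygon annotations can be turned into training data, the sketch below cuts a fixed-size positive patch around each annotated animal and samples random background patches from the same image. All names and the negative-sampling strategy are hypothetical; the study's actual (Matlab-based) pipeline may differ.

```python
# Hypothetical sketch: deriving labelled training patches from the
# crowd-sourced polygon annotations. Each annotation is assumed to be a
# list of (x, y) polygon vertices per image; names are illustrative.
import numpy as np
from PIL import Image

PATCH = 25  # patch side in pixels (the ~10 cm GSD region used in the study)

def polygon_centroid(polygon):
    """Centroid of a polygon given as an iterable of (x, y) vertices."""
    pts = np.asarray(polygon, dtype=float)
    return pts.mean(axis=0)

def extract_patches(image_path, polygons, n_negatives=10, rng=None):
    """Cut a positive patch around each annotated animal and sample
    random background patches elsewhere in the same image."""
    rng = rng or np.random.default_rng(0)
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    half = PATCH // 2
    positives, negatives = [], []
    for poly in polygons:
        cx, cy = polygon_centroid(poly).astype(int)
        if half <= cx < w - half and half <= cy < h - half:
            positives.append(img[cy - half:cy + half + 1,
                                 cx - half:cx + half + 1])
    for _ in range(n_negatives):
        # naive sampling; real code should reject patches that overlap
        # an annotated polygon
        x = rng.integers(half, w - half)
        y = rng.integers(half, h - half)
        negatives.append(img[y - half:y + half + 1, x - half:x + half + 1])
    return positives, negatives
```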
 
Feature extraction
 
To extract features from the dataset, histograms were generated for each image. A histogram is a graph that displays the distribution of pixel intensity values in an image, essentially counting how many pixels have specific intensity levels or colours. While simple to compute, these features can be surprisingly effective in certain situations. The histograms for the red, green and blue bands were calculated over a region centred on the object, covering an area of roughly 10 cm in ground sampling distance (GSD) and 25 × 25 pixels. The size of the region was adjusted when a different GSD was used. To ensure that histograms are comparable even though the region's size varies for each item, the bin counts are divided by the region's size. The histograms, which were defined using 15 bins, produced 38 characteristics. The steps involved are shown in Fig 2.

Fig 2: Steps involved in feature extraction.
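The following minimal Python sketch illustrates the colour-histogram feature just described: one per-band histogram over the object region, normalised by region size. It is illustrative only, not the authors' code; note that 15 bins over 3 bands yields 45 raw counts here, whereas the study reports 38 features, so the exact binning scheme evidently differs.

```python
# Illustrative sketch of the colour-histogram feature: a 15-bin histogram
# per RGB band over the object region, with counts divided by the region
# size so that patches at different GSDs remain comparable.
import numpy as np

N_BINS = 15

def histogram_features(patch):
    """patch: (H, W, 3) uint8 RGB region centred on a candidate object."""
    area = patch.shape[0] * patch.shape[1]
    feats = []
    for band in range(3):  # red, green, blue
        counts, _ = np.histogram(patch[:, :, band],
                                 bins=N_BINS, range=(0, 256))
        feats.append(counts / area)  # normalise by region size
    return np.concatenate(feats)
```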


 
Support vector machine (SVM)
 
The SVM classifier locates a linear boundary dividing the two classes in the feature space. Once this boundary has been established, any additional data point can be categorized by examining which side of the boundary it lies on. The mathematical foundation and framework for SVMs are described below.
       
Given a dataset of n objects labelled either positive or negative and characterized by D attributes, the SVM classifier finds a boundary in the D-dimensional feature space that best separates the positive and negative classes. In the 2D case, several different lines could be constructed and used as boundaries, as they cause no classification errors on the training set. For example, a negative object lying very close to the green line makes that choice appear risky. The black line, in contrast, keeps all objects as far away from it as possible, improving the probability that it will correctly predict the classes of incoming objects from a test set (Fig 3). Consequently, the SVM attempts to find the line that maximizes the margin, that is, the distance between the boundary and the objects that are closest to it.
       
Consider a vector w that is perpendicular to the boundary. An object lies on the positive side of the boundary if the following condition holds:

w · x + b > 0

Where
x = the object's feature vector.
b = a constant quantity known as the bias.
When the feature vector x is projected onto w using the dot product, and w is a unit vector, the bias identifies the location of the boundary along w.
The scoring function, denoted by the equation

f(x) = w · x + b

takes positive values for objects on the positive side and negative values for those on the negative side. The scoring function has the required property that objects lying exactly in the "gutters" (the margin boundaries) take a value of -1 or +1, while objects farther from the boundary take values below -1 or above +1.
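A minimal sketch of this formulation using scikit-learn's linear SVM is shown below; the study's own implementation was in Matlab, and the synthetic features here merely stand in for the histogram features. The learned w and b realise the scoring function f(x) = w · x + b, whose sign gives the predicted class.

```python
# Minimal linear-SVM sketch with scikit-learn on synthetic data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
X_pos = rng.normal(loc=1.0, scale=0.5, size=(100, 45))   # "animal" objects
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(100, 45))  # background objects
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x_new = rng.normal(size=45)          # an unseen object
score = w @ x_new + b                # f(x) = w . x + b
print("positive side" if score > 0 else "negative side", f"score={score:.2f}")
```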
 
The results of various visual word characteristics and histograms are presented in terms of ROC curves. The training and testing were performed on datasets that had an equal number of animals and background items. Two feature types, namely the histogram of colours (HOC) and the histogram of visual words (HOV), were used to classify the data. Fig 4 displays the ROC curves for these two feature types.

Fig 4: ROC curves for the classification using the colour histogram (red line), the visual word histogram (blue line) and the combination of both feature types (green line).
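ROC curves such as those in Fig 4 are obtained by sweeping a decision threshold over the classifier scores and recording the true- and false-positive rates at each setting. A self-contained sketch, assuming scikit-learn and using synthetic scores in place of the SVM outputs, is given below.

```python
# Sketch of ROC-curve construction from classifier scores.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(50), np.zeros(50)])
scores = np.concatenate([rng.normal(1.0, 1.0, 50),    # animal scores
                         rng.normal(-1.0, 1.0, 50)])  # background scores

# roc_curve sweeps all thresholds and returns the (FPR, TPR) pairs
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"area under the ROC curve: {auc(fpr, tpr):.3f}")
```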


       
The outcomes of the classification are displayed in Fig 5, depicting the photos at various levels of rescaling. It should be noted that the resolution of the photographs was consistently reduced by a factor of at least 2. This reduction was motivated by the fact that a ground sampling distance (GSD) finer than 6 cm was deemed unnecessary for achieving the desired performance. Moreover, the Matlab program developed for this study would have slowed down and required more resources to process colour data in full-resolution images.

Fig 5: Classification of ROC curves at half, third and fourth of the original image resolution are shown in the diagram below (blue, red and green lines, respectively). The approximate ground sampling distances are 6, 10 and 12 cm, respectively.
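The rescaling itself is straightforward downsampling; a small sketch of the idea follows (file name and factor are illustrative, and the study's processing was done in Matlab rather than Python).

```python
# Integer-factor downsampling to simulate a coarser ground sampling
# distance (GSD), as used for the Fig 5 comparison.
from PIL import Image

def downscale(path, factor):
    """Reduce each image dimension by `factor` (2 -> half resolution)."""
    img = Image.open(path)
    return img.resize((img.width // factor, img.height // factor),
                      Image.LANCZOS)

# Halving a frame roughly doubles the GSD, e.g. 3 cm -> 6 cm:
# half_res = downscale("flight_frame.jpg", 2)
```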


 
Classification accuracy
 
The process of classification is executed with the purpose of distinguishing the targeted object from all other objects within its surroundings. In other words, it assigns an object to one category based on some features, leading to the object being represented in a more meaningful and informative way and providing further insights into its characteristics and properties. The classification accuracy describes how well the machine learning model can correctly classify cattle and non-cattle objects or regions in the images or data collected by the drones. Classification accuracy is calculated as follows:

Classification accuracy = (Number of correctly classified samples / Total number of samples) × 100%

Here, the number of correctly classified samples is the count of regions or objects where the SVM correctly predicted whether they contain cattle or not, and the total number of samples is the total count of regions or objects that were classified by the SVM. A high classification accuracy indicates that the SVM is doing a good job of distinguishing cattle from non-cattle objects in the drone images. Conversely, a low classification accuracy suggests that the model is making a significant number of incorrect predictions.
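Expressed in code, the formula reduces to a single comparison over predicted and true labels (synthetic values shown for illustration):

```python
# The accuracy formula above as code, over synthetic labels.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # ground-truth cattle/non-cattle
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 0])  # SVM predictions
accuracy = np.mean(y_true == y_pred) * 100   # correct / total x 100%
print(f"classification accuracy: {accuracy:.1f}%")
```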
       
This study's classification problem is strictly binary, which means there are just two classes at play. Binary classification is typically easier, even though some techniques are easily extended to multi-class classification. The background class will also be referred to as the negative class, while the animal class will be known as the positive class. High visual heterogeneity is frequently found in the background class. More unusually for this dataset, the positive class is also quite varied, as seen in Fig 6 (Rey, 2016). The majority of the animals are light-furred, while others are darker and browner, and the cattle come in two colours: grey and black. The differing shapes are also significant. The animal frequently, but not always, has a shadow nearby.

Fig 6: Visual diversity of animals (Rey, 2016).


       
However, detecting cattle in the field through image analysis is like finding needles in a haystack. Animals are extremely rare in these datasets, making up a tiny fraction of the images. This results in a skewed positive-to-negative sample ratio, with very few images containing animals compared to those without, which significantly impacts the classifier's performance. In machine learning, classifiers aim for high precision, prioritizing accurate positive predictions among all predictions. Yet, in this scenario, achieving high precision is challenging due to the scarcity of animal images. Classifiers tend to be overly cautious, producing fewer positive predictions to minimize false positives, ultimately maximizing precision but diminishing recall, the ability to correctly identify all animal instances in the dataset.
       
In this specific task, prioritizing the recall rate takes precedence over precision. High recall ensures that the system identifies as many animals as possible, even if it means accepting a certain level of false positives. This approach aligns with the principle that, given sufficiently high recall, users can directly confirm the validity of each detection and subsequently eliminate any false positives. In practice, even if the precision of these detections hovers at a relatively modest rate, such as 15%, it would still be advantageous: reviewing and verifying these findings would be a significantly less time-consuming process than manually inspecting each image within the entirety of the dataset. This trade-off between correctness and efficiency underscores the practicality and utility of emphasizing recall over precision in this specific context.
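The sketch below illustrates this trade-off on synthetic, heavily imbalanced data (animals rare, background abundant): lowering the decision threshold raises recall, so fewer animals are missed, at the cost of precision, meaning more false positives to screen out manually.

```python
# Recall-over-precision trade-off on synthetic imbalanced data.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(20), np.zeros(980)])  # 20 animals, 980 background
scores = np.concatenate([rng.normal(1.0, 1.0, 20),
                         rng.normal(-1.5, 1.0, 980)])

for threshold in (1.0, 0.0, -1.0):           # stricter -> looser
    y_pred = (scores > threshold).astype(int)
    p = precision_score(y_true, y_pred, zero_division=0)
    r = recall_score(y_true, y_pred)
    print(f"threshold {threshold:+.1f}: precision {p:.2f}, recall {r:.2f}")
```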
       
One of the primary focuses in cattle monitoring with drones is automating the process of cattle detection. Information about cattle movement is crucial to understanding their access to pasture lands for grazing, ensuring the availability of fresh and nutritious vegetation that benefits their health. However, the degradation of grasslands has been posing a problem for grazing animals (Horn and Isselstein, 2022), as they may wander off to faraway regions, which can be dangerous for them. Drone technology coupled with artificial intelligence has proved helpful in keeping count of animals in a herd. Researchers have explored the use of machine learning algorithms to identify and track individual cattle within a herd with greater accuracy.
       
Mücher et al., (2022) achieved impressive cow detection accuracy exceeding 95%. They also did well in distinguishing individual cows (around 91% accuracy) and recognizing different cow postures (approximately 88% accuracy). Another study by Xu et al., (2020) used a mask R-CNN model to count cattle from aerial imagery. Their results showed high accuracy, reaching 94% in pastures and 92% in feedlots. This research opened the door for automated cattle counting and tracking using drones or quadcopters.
       
Beyond simple detection, drones equipped with cameras and sensors have been utilized to analyze cattle behavior and assess their health using drone-collected data. The potential for early detection of anomalies, such as signs of illness or distress, through changes in behavior has been documented (Al-Thani et al., 2020). This approach was extended by integrating thermal imaging to detect variations in body temperature, contributing to the early diagnosis of health issues (Kays et al., 2019; Burke et al., 2019).
       
Geospatial tracking of cattle is a vital component of drone-based monitoring systems. Researchers have integrated GPS and GIS technologies into drone operations to track cattle movements and optimize grazing strategies. Turner et al., (2000) demonstrated how geospatial data could inform farmers about the efficiency of grazing patterns and help prevent overgrazing.
       
Data integration from various sources, including drone imagery, environmental sensors and RFID tags, has become a significant area of research. Won et al., (2020) proposed a comprehensive monitoring system that integrates such data streams for holistic insights into health, behaviour and environmental conditions. This integrated approach enables data-driven decision-making for livestock management.
       
Recent studies have used aerial photos from open data initiatives to detect cattle. To label animals in remote sensing imagery, volunteers are recruited through crowdsourcing platforms like eMammal (emammal.si.edu), Agouti (agouti.eu) and Zooniverse (www.zooniverse.org). These platforms allow volunteers to annotate images with species labels of the individuals in them (Tuia et al., 2022). These human classifications can help train deep learning models for better performance in the future.
 
Drone technology and machine learning in the world
 
The combination of machine learning, drones and UAVs produces results that are more precise, accurate and effective for object detection, cattle movement tracking and image classification. Fig 7 displays the results of combining drone and machine learning research using the data gathered throughout this investigation. In the fields of drone, UAV and ML research, the USA holds the lead. Asia Pacific comes in second with 40.0% of the research. The least drone, UAV and machine learning research is done in the African and Latin American regions, where technology is still lagging. The Asia-Pacific region has an advantage over Europe, since Japan and Korea have been leaders in robotics and machine innovation. It is anticipated that more drones will be used for product delivery, aerial remote sensing, precision agriculture, monitoring cattle movement, surveying and mapping (Khan and Al-Mulla, 2019).

Fig 7: Regional research relating to drone and machine learning.


 
Machine learning methods used in drone and UAV research
 
Fig 8 also demonstrates the utilization of ML strategies over the previous four years in several fields. Research linking UAVs, drones and machine learning has employed numerous algorithms. The SVM has the largest share (38%) of all algorithms; owing to its capacity to deal with noisy data, it is the most widely used. CNN is in second place with a 26% share, and k-nearest neighbours follows with a 20% share. The other algorithms used are Naive Bayes, liquid state machines, ANNs and multi-agent learning.

Fig 8: Machine learning algorithms used in drone technology (self-created).


       
Finally, as drone-based cattle monitoring becomes more widespread, ethical considerations and regulatory challenges have emerged. The ethical implications of constant surveillance of livestock and guidelines for responsible drone-based cattle monitoring were discussed by Neethirajan (2023), who highlighted the need for regulations that balance the benefits of technology with animal welfare and privacy concerns.
Low flying altitude, compact size, excellent resolution, light weight and adaptability are benefits of drones. Research on the application of drones with machine learning has been synthesised in this work. SVMs have been employed for classification to handle the unique characteristics of the dataset. The study of combining drones, UAVs and ML is still in its early stages. The use of CNNs and support vector machines in UAV applications is unmatched. Most UAV research and usage is concentrated in the USA and the Asia-Pacific region. Because of the rise of commercial unregistered UAVs, security and privacy issues exist. The objective of further research in this area is to develop a model for detecting and classifying unregistered consumer drones. This research is still ongoing. In the near future, trained machine learning models will be developed for recognizing objects in satellite and UAV data.
The authors would like to thank the editors and reviewers for their review and recommendations and also to extend their thanks to King Saud University for funding this work through the Researchers Supporting Project (RSP2023R395), King Saud University, Riyadh, Saudi Arabia.
This work was supported by the Researchers Supporting Project (RSP2023R395), King Saud University, Riyadh, Saudi Arabia.
The author contributed toward data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
Not applicable.
Author(s) declare that all works are original and this manuscript has not been published in any other journal.
All authors declare that they have no conflict of interest.

  1. Akhigbe, B. I., Munir, K., Akinade, O., Akanbi, L. and Oyedele, L. O. (2021). IoT technologies for livestock management: a review of present status, opportunities and future trends. Big Data and Cognitive Computing. 5(1): 10.

  2. Al-Thani, N., Albuainain, A., Alnaimi, F. and Zorba, N. (2020). Drones for sheep livestock monitoring. In 2020 IEEE 20th Mediterranean Electrotechnical Conference (MELECON) IEEE. (pp. 672-676). 

  3. Andrew, W., Greatwood, C. and Burghardt, T. (2017). Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. (pp. 2850-2859).

  4. Asmare, B. (2022). A review of sensor technologies applicable for domestic livestock production and health management. Advances in Agriculture. https://doi.org/10.1155/2022/1599190.

  5. Barbedo, J.G.A. and Koenigkan, L.V. (2018). Perspectives on the use of unmanned aerial systems to monitor cattle. Outlook on Agriculture. 47(3): 214-222. 

  6. Barbedo, J.G.A., Koenigkan, L.V., Santos, T.T. and Santos, P.M. (2019). A study on the detection of cattle in UAV images using deep learning. Sensors. 19(24): 5436.

  7. Benos, L., Tagarakis, A. C., Dolias, G., Berruto, R., Kateris, D. and Bochtis, D. (2021). Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors. 21(11): 3758. https://doi.org/10.3390/s21113758

  8. Brown, J., Qiao, Y., Clark, C., Lomax, S., Rafique, K. and Sukkarieh, S. (2022). Automated aerial animal detection when spatial resolution conditions are varied. Computers and Electronics in Agriculture. 193: 106689.

  9. Burke, C., Rashman, M., Wich, S., Symons, A., Theron, C. and Longmore, S. (2019). Optimizing observing strategies for monitoring animals using drone-mounted thermal infrared cameras. International Journal of Remote Sensing. 40: 439-467.

  10. Carr, E.B. (2013). Unmanned aerial vehicles: Examining the safety, security, privacy and regulatory issues of integration into US airspace. National Centre for Policy Analysis (NCPA). Retrieved on September.

  11. Ceballos, G., Ehrlich, P. R. and Raven, P. H. (2020). Vertebrates on the Brink as Indicators of Biological Annihilation and the Sixth Mass Extinction. Proceedings of the National Academy of Science. 117(24): 13596-13602.

  12. Cravero, A., Pardo, S., Sepúlveda, S. and Muñoz, L. (2022). Challenges to use machine learning in agricultural big data: A systematic literature review. Agronomy. 12(3): 748. http://dx.doi.org/10.3390/agronomy12030748.

  13. Eikelboom, J. A.J., Wind, J., van de Ven, E., Kenana, L.M., Schroder, B., de Knegt, H.J., van Langevelde, F. and Prins, H.H. T. (2019). Improving the precision and accuracy of animal population estimates with aerial image object detection. Methods in Ecology and Evolution, 10: 1875-1887.

  14. Elmeseiry, N., Alshaer, N. and Ismail, T. (2021). A detailed survey and future directions of unmanned aerial vehicles (UAVs) with potential applications. Aerospace. 8(12): 363. https://doi.org/10.3390/aerospace8120363.

  15. European Aviation Safety Agency (EASA) (2018). Opinion no 01/2018 - introduction of a regulatory framework for the operation of unmanned aircraft systems in the 'open' and 'specific' categories. Available at: https://www.faa.gov/uas/getting_started/part_107/.

  16. Falzon, G., Lawson, C., Cheung, K. W., Vernes, K., Ballard, G. A., Fleming, P. J. and Meek, P. D. (2019). ClassifyMe: a field- scouting software for the identification of wildlife in camera trap images. Animals, 10(1): 58.

  17. Handcock, R., Swain, D., Bishop-Hurley, G., Patison, K., Wark, T., Valencia, P., Corke, P. and O'Neill, C. (2009). Monitoring animal behaviour and environmental interactions using wireless sensor networks, GPS collars and satellite remote sensing. Sensors. 9(5): 3586-3603. https://doi.org/10.3390/s90503586.

  18. Hodgson, J. C., Mott, R., Baylis, S.M., Pham, T.T., Wotherspoon, S., Kilpatrick, A. D. and Koh, L. P. (2018). Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution. 9(5): 1160-1167.

  19. Horn, J. and Isselstein, J. (2022). How do we feed grazing livestock in the future? A case for knowledge driven grazing systems. Grass and Forage Science, 77(3): 153-166.

  20. Hughey, L.F., Hein, A. M., Strandburg-Peshkin, A. and Jensen, F. H. (2018). Challenges and solutions for studying collective animal behaviour in the wild. Philosophical Transactions of the Royal Society B: Biological Sciences. 373: 20170005.

  21. Junker, J., Petrovan, S. O., Arroyo-RodrÍguez, V., Boonratana, R., Byler, D., Chapman, C. A. and KÜhl, H. S. (2020). A severe lack of evidence limits effective conservation of the world’s primates. BioScience. 70(9): 794-803.

  22. Kays, R., Sheppard, J., Mclean, K., Welch, C., Paunescu, C., Wang, V., Kravit, G. and Crofoot, M. (2019). Hot monkey, cold reality: Surveying rainforest canopy mammals using drone-mounted thermal infrared sensors. International Journal of Remote Sensing. 40: 407-419.

  23. Kellenberger, B., Marcos, D. and Tuia, D. (2018). Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote sensing of environment. 216: 139-153.

  24. Khan, A.I. and Al-Mulla, Y. (2019). Unmanned aerial vehicle in the machine learning environment. Procedia Computer Science. 160: 46-53.

  25. Kuper, D.T., Balsley, D.C. and Blair, T. N. (2018). U.S. Patent No. 9,924,700. Washington, DC: U.S. Patent and Trademark Office.

  26. Kuper, D.T., Balsley, D.C., Gray, P. and Sexten, W.J. (2020). U.S. Patent No. 10,628,756. Washington, DC: U.S. Patent and Trademark Office.

  27. Kuper, D.T., Balsley, D.C., Gray, P. and Sexten, W.J. (2021). U.S. Patent No. 11,055,633. Washington, DC: U.S. Patent and Trademark Office.

  28. Kurimoto, M. (2023). U.S. Patent Application No. 17/910,571.

  29. Lahoz-Monfort, J.J. and Magrath, M. J. (2021). A comprehensive overview of technologies for species and habitat monitoring and conservation. BioScience. 71(10): 1038-1062.

  30. Manning, J., Power, D. and Cosby, A. (2021). Legal complexities of animal welfare in Australia: Do on-animal sensors offer a future option?. Animals. 11(1): 91.

  31. McFarlane, N.J. and Schofield, C.P. (1995). Segmentation and tracking of piglets in images. Machine vision and applications. 8: 187-193.

  32. Mücher, C.A., Los, S., Franke, G.J. and Kamphuis, C. (2022). Detection, identification and posture recognition of cattle with satellites, aerial photography and UAVs using deep learning techniques. International Journal of Remote Sensing. 43(7): 2377-2392.

  33. Neethirajan, S. (2017). Recent advances in wearable sensors for animal health management. Sensing and Bio-Sensing Research. 12: 15-29.

  34. Neethirajan, S. (2023). The significance and ethics of digital livestock farming. AgriEngineering. 5(1): 488-505.

  35. Norouzzadeh, M. S., Nguyen, A., Kosmala, M., Swanson, A., Palmer, M. S., Packer, C. and Clune, J. (2018). Automatically identifying, counting and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, 115(25): E5716-E5725.

  36. Norstrøm, M. (2001). Geographical Information System (GIS) as a tool in surveillance and monitoring of animal diseases. Acta Veterinaria Scandinavica. 42(1): 1-7.

  37. Ofli, F., Meier, P., Imran, M., Castillo, C., Tuia, D., Rey, N. and Joost, S. (2016). Combining human computing and machine learning to make sense of big (aerial) data for disaster response. Big Data. 4(1): 47-59.

  38. Ojo, J. I., Tu, C., Owolawi, P. A., Du, S. and Plessis, D. D. (2022). Review of Animal Remote Managing and Monitoring System. In Proceedings of the 2022 5th Artificial Intelligence and Cloud Computing Conference (pp. 285-291).

  39. Rey, N. (2016). Combining UAV imagery and machine learning for wildlife conservation. Master Thesis. Ecole Polytechnique Federale De Lausanne.

  40. Rivas, A., Chamoso, P., González-Briones, A. and Corchado, J. M. (2018). Detection of cattle using drones and convolutional neural networks. Sensors, 18(7): 2048.

  41. Rollinson, C. R., Finley, A. O., Alexander, M. R., Banerjee, S., Dixon Hamil, K. A., Koenig, L. E. and Zipkin, E. F. (2021). Working across space and time: nonstationarity in ecological research and application. Frontiers in Ecology and the Environment.    19(1): 66-72.

  42. Schad, L. and Fischer, J. (2022). Opportunities and risks in the use of drones for studying animal behaviour. Methods in Ecology and Evolution.

  43. Sherman, J., Ancrenaz, M. and Meijaard, E. (2020). Shifting apes: Conservation and welfare outcomes of Bornean orangutan rescue and release in Kalimantan, Indonesia. Journal for Nature Conservation. 55: 125807.

  44. Spencer, C., Wallener, D. and Sember, J. (2023). U.S. Patent Application No. 18/088,819.

  45. Torney, C. J., Lamont, M., Debell, L., Angohiatok, R. J., Leclerc, L.M. and Berdahl, A. M. (2018). Inferring the rules of social interaction in migrating caribou. Philosophical Transactions of the Royal Society B: Biological Sciences. 373: 20170385.

  46. Tuia, D., Kellenberger, B., Beery, S., Costelloe, B. R., Zuffi, S., Risse, B. and Berger-Wolf, T. (2022). Perspectives in machine learning for wildlife conservation. Nature Communications.  13(1): 792.

  47. Turner, L.W., Udal, M.C., Larson, B.T. and Shearer, S.A. (2000). Monitoring cattle behavior and pasture use with GPS and GIS. Canadian Journal of Animal Science, 80(3): 405-413.

  48. Won, D., Chi, S. and Park, M. W. (2020). UAV-RFID integration for construction resource localization. KSCE Journal of Civil Engineering, 24: 1683-1695.

  49. Xu, B., Wang, W., Falzon, G., Kwan, P., Guo, L., Chen, G. and Schneider, D. (2020). Automated cattle counting using Mask R-CNN in quadcopter vision system. Computers and Electronics in Agriculture, 171: 105300.

  50. Yang, F., Zhu, N., Pei, S. and Cheng, I. (2021). Real-time open field cattle monitoring by drone: A 3D visualization approach. ISBN: 978-989-8704-32-0.
