
Creation of Aerial Arecanut Dataset and Aerial Arecanut Ripeness Level Detection using Deep Neural Networks

V.M. Aparanji1,*, M. Chaitanya1, M.T. Gurukiran1, A.S. Guruprasad1, T.M. Manjunath1, Sumalatha Aradhya1, H.K. Ravi1
1Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Tumkur-572 103, Karnataka, India.

Background: Arecanut is a commercially significant crop grown across various regions of the country and contributes substantially to India's global ranking as the second-largest producer. Harvesting arecanut at the precise stage is paramount to achieving optimal yields. Typically, this task requires a minimum of two skilled individuals: a professional tree climber proficient in nut plucking and another adept at assessing ripeness. Before harvesting all the arecanuts, the climber picks one or two nuts and passes them to the other person for a ripeness check. The assessor verifies ripeness by peeling the nut with the teeth and, if it is ripened, uses a roti/dhoti (a tool used to harvest arecanut) to harvest the crop. Inaccurate predictions can lead to significant crop losses. Manual harvesting, while effective, presents challenges such as labour shortages, time consumption and the potential for life-threatening accidents.

Methods: To address these challenges, this paper presents the creation of an aerial arecanut dataset and accurate ripeness detection of arecanut using Deep Neural Networks (DNN), namely AlexNet and VGG-16. Images of arecanut bunches are fed as input to the DNN. Features such as colour, shape and texture are extracted and given as input to the classifier, which categorizes the arecanut into three classes: unripened, intermediate and ripened.

Result: The classification accuracy of arecanut ripeness level is 94.58% using AlexNet and 96.87% using VGG-16. If this model is integrated into the roti/dhoti, the risk to human life can be avoided. Alternatively, if the model is integrated with a climber unit and a cutter unit, it will transform arecanut harvesting into a fully automated system, reducing the manpower required, overcoming the life threats involved in harvesting and making the process faster than the conventional method.

Agriculture plays a significant role in the socioeconomic development of India and serves as the backbone of the Indian economy. Approximately 50% of the Indian workforce depends on agriculture for their livelihood (Shalini, 2014). In the global context, India holds a significant position as both a major producer and consumer of arecanut. Notably, states such as Karnataka, Tamil Nadu, Kerala, Assam, Meghalaya and West Bengal contribute significantly to arecanut production in the country. During the 2013-14 period, arecanut production in India exceeded 7 lakh tonnes, with Karnataka leading with 457,560 tonnes from an area of 218,010 hectares (Deshmukh et al., 2019).
       
India accounts for a major share of global arecanut production. It is very important to harvest the arecanut at the right time to obtain good quality produce and to reduce losses. Experienced persons are required to identify the correct stage of the arecanut. Broadly, arecanut can be classified as unripe, intermediate and ripe. Farmers usually harvest the arecanut at the intermediate or ripe stage depending on the intended use (Prakash, 2012).
       
Arecanut is a commercial crop and various soft computing and artificial intelligence methods and tools play a crucial role in identifying and classifying defects, grades and maturity levels of arecanuts. However, this research has been carried out on arecanut available on the ground, that is, after harvesting; no research work is available on aerial arecanut. There is no automated system or technique to detect whether the areca is ripened so that a decision can be taken to harvest it. Traditionally, harvesting of arecanut has relied on manual methods. The conventional method requires two people with the necessary skills: a professional tree climber climbs the tree, plucks an arecanut and passes it to the person on the ground to examine its ripeness. The person on the ground peels the nut with his teeth to confirm the ripeness and decides whether the areca can be harvested; if it is ripened, he uses a roti/dhoti (a tool used to harvest arecanut) to harvest the crop. This conventional method is tedious as well as time consuming, requires manpower and poses a life threat to the person who climbs the tree; many deaths are reported during this process. Nowadays it is also very difficult to find labourers. An automated system that can detect whether the arecanut is ripened would make the harvesting decision much easier. This work is an attempt at finding the arecanut ripeness level.
       
In order to train any deep learning model, a dataset is required. Datasets are available for arecanut after harvesting, but there is no dataset available for aerial arecanut. Therefore, as a starting point for this work, an attempt is made to create a dataset of aerial arecanut covering three stages: ripened, unripened and intermediate.
       
Once the dataset is created, the images from the dataset are given as input to the Deep Neural Networks (DNNs) AlexNet and VGG-16. These networks extract features such as colour, texture, shape and size. The extracted features are given as inputs to the classifiers within the DNNs to predict the ripeness level. If this model is implemented in the dhoti/roti (used to harvest areca), it eliminates the need for a person to climb the tree and the risk to human life can be avoided. Alternatively, by integrating this model with climbing and cutter units, the arecanut harvesting system becomes completely automated and the time and manpower required to harvest arecanut can be reduced. Further, this ripeness detection model can be extended to detect the different stages of coconut and, eventually, to harvest coconut. The objectives of the work are as follows: 1. To create a dataset consisting of images of aerial arecanut at three different stages (unripened, intermediate and ripened). 2. To accurately classify arecanut by analysing the extracted features using Deep Neural Networks (DNN), and to evaluate and compare their performance.
 
Related work
 
In the recent literature, it is difficult to find conventional methods to detect (i) maturity levels of arecanut, (ii) quality of arecanut and (iii) diseases in arecanuts. Nowadays, Artificial Intelligence (AI) is used in almost all fields, such as automobiles, business, education, finance and healthcare. Machine learning and deep learning are branches of Artificial Intelligence. Machine learning is used in drone technology to track the movement of animals (AlZubi et al., 2023; AlZubi, 2024). Deep neural networks and complex neural networks are also used to estimate the angle of arrival and angle of departure and for target detection and recognition in wireless communication (Naoumi et al., 2023; Naoumi et al., 2024; Delamou et al., 2023). Machine learning models are used to detect plant, crop and leaf diseases (Lee and Kim, 2024; Metagar and Walikar, 2024; Cho, 2024; Bong-Hyun, 2024). From the above discussion, it is clear that Artificial Intelligence plays an important role in the recognition and classification of images and videos. In this section, an attempt is made to review work related to the above problems using machine learning and deep learning algorithms.
       
CNN has been used to automatically classify and grade arecanuts into good and bad quality based on their size, colour and texture. The CNN model consists of Conv2D, MaxPooling2D, dropout, flatten and dense layers. The system aims to improve the accuracy and efficiency of arecanut classification and grading, eliminating human errors and biases. With data pre-processing, augmentation and a customized CNN architecture, an accuracy of 97% was achieved (Amrutha Bhat et al., 2023). Raw arecanuts have been classified into four classes, ape, bette, mille and gorabalu, using colour features and a K-NN (K-Nearest Neighbour) classifier model across three stages: segmentation, feature extraction and classification (Anilkumar et al., 2021).
       
The utilization of pre-trained CNNs for image classification has also been studied, focusing on the comparison between AlexNet combined with a support vector machine (SVM) classifier and transfer learning. Transfer learning involves fine-tuning the last few layers of a pre-trained network to enhance classification accuracy. CNNs such as AlexNet have shown superior performance in image classification compared to traditional methods. The methodology involves feeding image data to a pre-trained CNN for feature extraction, followed by classification using SVM or transfer learning. The benefits of using pre-trained CNN models include eliminating manual feature extraction and simplifying the learning process for new tasks (Huang et al., 2023).
       
Raw arecanut has been classified into four classes (ape, bette, mille and gorabalu) using colour features and a K-NN (K-Nearest Neighbour) classifier model with three stages: segmentation, feature extraction and classification. Segmentation is performed using K-means clustering to remove unwanted background and shadows, followed by the extraction of colour histogram and colour moment features. The four colour moments computed are mean, standard deviation, skewness and kurtosis. The K-NN classifier is then used for classification with varying K values and distance measures such as Euclidean, Manhattan, cosine and Chebyshev distances to achieve better accuracy (Bharadwaj et al., 2017).
       
A convolutional neural network (CNN) has been used to identify and categorize disease in arecanut. The dataset consists of 1,100 images of healthy and diseased arecanut combined. The approach achieved an accuracy of 93.05% (Ajit Hegde et al., 2023).
       
From the above discussion, it can be concluded that there is no existing technology to determine the maturity level of aerial arecanut before harvesting. Various soft computing and artificial intelligence methods, deep neural networks, machine learning techniques and tools play a crucial role in identifying and classifying defects in arecanuts, determining maturity levels, assessing quality and detecting diseases. However, these works were carried out on arecanut that had already been harvested. In this work, an attempt is made to classify aerial arecanut into three classes, namely unripe, intermediate and ripe, using the deep neural networks AlexNet and VGG-16 for both feature extraction and classification. DNNs are preferred for solving complex problems because the deeper the network grows, the more sophisticated the patterns it can learn. AlexNet and VGG-16 are selected because they have yielded good accuracy in image classification tasks.
A dataset is available for harvested arecanut, but there is no dataset available for aerial arecanut. Therefore, to start this work, an attempt is made to create a dataset of aerial arecanut; its creation is explained in the next section. Once the dataset is created, the ripeness level of arecanuts is detected using Deep Neural Networks (DNNs), as explained in the subsequent sections.
 
Creation of aerial arecanut dataset
 
There is no existing dataset for aerial arecanut. The dataset was collected in Tumkur district, Karnataka, India, from three areca farms located at Gubbi, Madhugiri and Kunigal. Trees of small height bearing arecanut at all three stages (unripe, intermediate and ripe) were selected so that one of the authors could capture images of the arecanut bunches by climbing a ladder. A large number of images were captured for each of the three stages, out of which 1000 good quality images per stage were retained. The ground truth for these 3000 images was provided by the farmers/owners of the areca farms. Sample images of the three stages are shown in Fig 1 to 3. The farmers identified the stage of the arecanut from its colour: green arecanuts belong to the unripened stage; an unripened arecanut changes to the intermediate stage by turning orange while still retaining some green; and orange arecanuts are identified as the ripened stage. The images in the dataset differ in resolution, so they are resized to 224 x 224 pixels to ensure that all images have a consistent size. We plan to increase the size of the dataset by collecting more images from different regions and in different seasons, lighting and weather conditions.
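The resizing step can be reproduced with a few lines of Python. The sketch below assumes the captured photographs are stored in one folder per ripeness stage; the folder names and paths are illustrative and are not taken from the paper.

```python
# Minimal sketch: resize the captured photographs to 224 x 224 pixels, assuming
# one sub-folder per ripeness stage (folder layout is an assumption).
from pathlib import Path
from PIL import Image

RAW_DIR = Path("raw_images")        # hypothetical location of the captured photos
OUT_DIR = Path("aerial_arecanut")   # hypothetical output dataset root
TARGET_SIZE = (224, 224)            # resolution used for the dataset

for stage_dir in RAW_DIR.iterdir():
    if not stage_dir.is_dir():
        continue
    out_stage = OUT_DIR / stage_dir.name
    out_stage.mkdir(parents=True, exist_ok=True)
    for img_path in stage_dir.glob("*.jpg"):
        img = Image.open(img_path).convert("RGB")
        img = img.resize(TARGET_SIZE)            # give every image a consistent size
        img.save(out_stage / img_path.name)
```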

Fig 1: Sample images of unripened arecanut.



Fig 2: Sample images of intermediate stage arecanut.



Fig 3: Sample images of ripened arecanut.


 
Deep neural networks for arecanut ripeness level detection
 
Deep Neural Networks (DNNs) are an extension of neural networks used to solve complex problems. DNNs are preferred for complex problems because the deeper the network grows, the more sophisticated the patterns it can learn. The Convolutional Neural Network (CNN) is one type of deep neural network used for image analysis, image classification, object detection, medical image processing, natural language processing (NLP), etc. (Naik et al., 2020; Hamlili et al., 2022; Huang et al., 2022; Huang et al., 2023; Naik et al., 2024). There is no research work available on aerial arecanut; research on the image analysis or classification of harvested arecanuts has been carried out using AlexNet and VGG (Kumar et al., 2019; Chandrashekhara, 2019; Kusumadhara et al., 2020; Bharadwaj et al., 2021). AlexNet and VGG-16 are both CNNs and are selected here to classify aerial arecanut as unripe, intermediate and ripe.
 
Arecanut ripeness detection using AlexNet
 
The architecture of AlexNet comprises five convolutional layers, each tasked with extracting hierarchical features from input images. As illustrated in Fig 4, the arecanut image is given as input to AlexNet. The first convolutional layer applies 96 kernels of size 11x11 pixels with a stride of 4 pixels to the input image to produce feature maps. The second convolutional layer employs 256 kernels of size 5x5 pixels with a stride of 1 pixel, facilitating finer feature extraction. The third, fourth and fifth convolutional layers further refine the extracted features, using 384 kernels of size 3x3 pixels in the third and fourth layers and 256 kernels of size 3x3 pixels in the fifth layer. Each convolutional layer is followed by a Rectified Linear Unit (ReLU) activation function, which introduces non-linearity and accelerates convergence by mitigating the vanishing gradient problem. Overlapping max-pooling layers follow the first and second convolutional layers, using a 3x3 kernel with a stride of 2 pixels, which reduces the spatial dimensions of the feature maps. This overlapping pooling enhances the capture of spatial dependencies and strengthens the network's ability to extract invariant features across diverse input regions. After the convolutional layers, AlexNet integrates three fully connected layers, each with 4096 neurons, providing high-level feature representation and enabling complex non-linear mappings between input and output. Dropout regularization is employed in the fully connected layers during training, randomly deactivating neurons to prevent overfitting and to encourage the learning of more robust, generalizable features. The final fully connected layer serves as the output layer, with neurons corresponding to the number of classes in the classification task.
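A minimal PyTorch sketch of an AlexNet-based classifier for the three ripeness classes is given below. The paper does not state which framework was used; torchvision's AlexNet implementation and the replacement of the final layer with a 3-class output are assumptions made for illustration.

```python
# Minimal sketch (assumed setup): torchvision's AlexNet variant (5 conv + 3 FC layers)
# with the 1000-class output layer replaced by a 3-class ripeness output.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3                                     # unripened, intermediate, ripened

model = models.alexnet(weights=None)                # randomly initialized AlexNet
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # 3-way ripeness output layer

x = torch.randn(1, 3, 224, 224)                     # one resized arecanut image (dummy tensor)
logits = model(x)                                   # raw scores for the three ripeness classes
print(logits.shape)                                 # torch.Size([1, 3])
```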

Fig 4: Architecture of AlexNet used in arecanut ripeness level detection.


 
Arecanut ripeness detection using VGG-16
 
VGG-16 improves performance compared to AlexNet (Chaitanya, 2024). Fig 5 shows the VGG-16 model to which the arecanut images are fed as input. The network contains 16 weight layers: 13 convolutional layers and 3 fully connected layers. The structure is characterized by a series of convolutional layers followed by max-pooling layers, arranged in stacks of increasing depth. This configuration enables the acquisition of complex hierarchical representations of visual features, fostering the model's ability to make precise predictions with high reliability and accuracy. Each arecanut image is processed by a first stack of convolutional layers with a small receptive field of 3 x 3, each containing 64 filters and followed by a Rectified Linear Unit (ReLU) activation function. A fixed convolution stride of 1 pixel and padding of 1 pixel preserve the spatial resolution, so the output activation maps are identical in size to the input image. These activation maps are then subjected to spatial max pooling over a 2 x 2 pixel window with a stride of 2 pixels, halving the size of the activations. Thus, at the end of the first stack, the activations measure 112 x 112 x 64. The activations then pass through a similar second stack, albeit with 128 filters instead of the initial 64, so the size after the second stack reduces to 56 x 56 x 128. The third stack comprises three convolutional layers and a max-pool layer, applying 256 filters and producing an output of size 28 x 28 x 256. Finally, two further stacks, each comprising three convolutional layers with 512 filters, are employed; the output after both stacks is 7 x 7 x 512.
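Similarly, a hedged sketch of a VGG-16 model adapted to the three ripeness classes is shown below. Starting from ImageNet weights and freezing the convolutional stacks are assumptions, since the paper does not describe its exact training configuration.

```python
# Minimal sketch (assumed setup): torchvision's VGG-16 (13 conv + 3 FC layers)
# with a 3-class ripeness output and frozen convolutional stacks.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # assumed pretrained start
for p in vgg.features.parameters():
    p.requires_grad = False                          # optionally freeze the conv stacks
vgg.classifier[6] = nn.Linear(4096, NUM_CLASSES)     # 3-way ripeness output layer
```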

Fig 5: Architecture of VGG-16 used in arecanut ripeness level detection.

The dataset creation was explained in the previous section. For each stage, 1000 images were considered, giving a total of 3000 images. Of these, approximately 80% were used to train and 20% to test the deep learning models.
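The sketch below shows one way the approximately 80:20 split could be implemented, assuming the resized images are arranged in one sub-folder per class; the folder layout, batch size and random seed are illustrative assumptions.

```python
# Minimal sketch: load the 3000-image dataset and split it roughly 80:20 into
# training and test sets (folder layout and hyperparameters are assumptions).
import torch
from torchvision import datasets, transforms
from torch.utils.data import random_split, DataLoader

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
full = datasets.ImageFolder("aerial_arecanut", transform=tfm)   # 3 class sub-folders

n_train = int(0.8 * len(full))                                  # ~2400 training images
train_set, test_set = random_split(
    full, [n_train, len(full) - n_train],
    generator=torch.Generator().manual_seed(0))                 # reproducible split

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)
```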
       
During the testing phase, the effectiveness of these models is assessed through a range of performance metrics: accuracy, precision, recall and F1-score. Accuracy is the ratio of correctly classified instances to the total number of instances, computed using equation (1). Precision is the ratio of true positive predictions to all positive predictions made by the model, ideally falling within the [0, 1] range and determined by equation (2). Recall is the ratio of true positive instances to all positive instances in the dataset, also ranging between [0, 1] and calculated using equation (3). The F1 score, the harmonic mean of precision and recall, likewise ranges between [0, 1] and is computed using equation (4).
 
Accuracy = (TP + TN) / (TP + TN + FP + FN)   ....(1)

Precision = TP / (TP + FP)   ....(2)

Recall = TP / (TP + FN)   ....(3)

Where,
TP = Number of true positives (correctly predicted positives).
TN = Number of true negatives (correctly predicted negatives).
FP = Number of false positives (negatives incorrectly predicted as positive).
FN = Number of false negatives (positives incorrectly predicted as negative).

F1 score = 2 x (Precision x Recall) / (Precision + Recall)   ....(4)
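For reference, the metrics of equations (1)-(4) can be computed from the test-set predictions as in the sketch below; the use of scikit-learn is an assumption, and macro averaging over the three classes is one reasonable choice for this multi-class problem.

```python
# Minimal sketch: compute the metrics of equations (1)-(4) with scikit-learn
# (an assumed tool choice), macro-averaged over the three ripeness classes.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score


def report_metrics(y_true, y_pred):
    """y_true: farmer-provided ground-truth labels; y_pred: model predictions."""
    print("Accuracy :", accuracy_score(y_true, y_pred))
    print("Precision:", precision_score(y_true, y_pred, average="macro"))
    print("Recall   :", recall_score(y_true, y_pred, average="macro"))
    print("F1-score :", f1_score(y_true, y_pred, average="macro"))
```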
       
The AlexNet and VGG-16 models achieved accuracies of 94.58% and 96.87%, respectively, in classifying unripe, intermediate and ripe arecanuts, as illustrated in Fig 6.

Fig 6: Graph of accuracy and loss obtained during the classification of arecanut ripeness level using (a) AlexNet (b) VGG-16.


       
Table 1 shows the confusion matrices obtained during testing with AlexNet and VGG-16. For AlexNet, 160 test images were considered for each stage, of which 153 were correctly identified as unripened, 144 were correctly predicted as intermediate and 157 were correctly classified as ripened. Three images of the ripened stage were misclassified as intermediate, whereas 7 unripened images were misclassified as intermediate. The largest number of misclassifications occurred for images belonging to the intermediate stage. The ground truth was obtained from farmers, who made their decisions based on the colour of the areca bunch on the tree. The model misclassified a few images only into the neighbouring stage: the unripe stage was misclassified as intermediate but never as ripe, the intermediate stage was misclassified as either unripe or ripe, and the ripe stage was misclassified as intermediate but never as unripe. This occurs because the extracted features of these images lie close to the decision boundaries, so misclassification happens only between neighbouring stages, with the intermediate stage being affected the most. Fig 7 shows the decision boundaries between the stages of arecanut. Fig 8 and 9 show sample images of correctly classified and misclassified bunches, respectively.

Table 1: Confusion matrix of areca ripeness level classification using AlexNet and VGG-16.
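A confusion matrix such as the one in Table 1 can be produced and visualized as in the sketch below; the stage label names and the use of scikit-learn and matplotlib are illustrative assumptions.

```python
# Minimal sketch: build and plot a 3 x 3 confusion matrix of ripeness stages
# (assumed tooling; labels follow the three stages used in this work).
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

STAGES = ["unripened", "intermediate", "ripened"]


def plot_confusion(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)   # rows: true stage, columns: predicted stage
    ConfusionMatrixDisplay(cm, display_labels=STAGES).plot(cmap="Blues")
    plt.title("Arecanut ripeness confusion matrix")
    plt.show()
    return cm
```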



Fig 7: Decision boundaries between different stages of arecanut ripeness.



Fig 8: Sample images of correctly classified arecanut bunches.



Fig 9: Sample images of misclassified arecanut bunches.


       
Table 2 gives a comparison of the performance metrics obtained using AlexNet and VGG-16. Analysis of the accuracy, precision, recall and F1 score shows that VGG-16 outperforms AlexNet. The number of misclassifications is lower with VGG-16, so its performance metrics are higher than those of AlexNet. This is because VGG-16 has more convolutional layers than AlexNet, which yields more precise features as input to the classifier and reduces the problem of overfitting.

Table 2: Comparison of performance metrics obtained using AlexNet and VGG-16.


       
As there is no prior research on aerial arecanut, we are unable to compare our results with other work.
There is no automated system to detect the ripeness level of aerial arecanut. Detecting arecanut ripeness currently relies on the traditional method, which requires manpower, is time consuming and involves a threat to life. In this work, an attempt is made to create a dataset of aerial arecanut and to classify arecanuts according to their ripeness level. Two deep neural networks, AlexNet and VGG-16, are trained and tested on the dataset and their performance metrics are compared. The results show that VGG-16 outperforms AlexNet in classifying arecanuts into the unripe, intermediate and ripe categories. In future, we plan to increase the size of the dataset by collecting more images from different regions and in different seasons, lighting and weather conditions to improve the performance of VGG-16. We also plan to integrate a camera, a microcontroller and the VGG-16 model with the roti/dhoti. Once the roti/dhoti is brought near an arecanut bunch on the tree, the microcontroller can signal the camera to capture an image of the bunch. The image can then be analysed using the VGG-16 model to determine whether the arecanut is ripened and a signal indicating the ripeness level can be sent to the farmer, who can then operate the roti/dhoti to harvest the arecanut. This set-up would remove the threat to life; it requires the farmer to have a mobile application and, depending on the microcontroller, a power supply interfaced to the roti/dhoti. If the VGG-16 model is instead integrated with a climber unit and a cutter unit, the arecanut harvesting system becomes fully automated, eliminating the manpower required, reducing the harvesting time and removing the life threats involved in harvesting.
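A hedged sketch of this proposed on-device workflow is given below: capture an image of the bunch, classify it with the trained VGG-16 model and report the ripeness level. The camera index, weights file name and notification step are illustrative assumptions, as the paper describes this set-up only as future work.

```python
# Hedged sketch of the proposed future workflow (all device details are assumptions):
# capture a frame, classify it with the trained VGG-16 model, report the ripeness level.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

STAGES = ["unripened", "intermediate", "ripened"]

model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, len(STAGES))
model.load_state_dict(torch.load("vgg16_arecanut.pt", map_location="cpu"))  # hypothetical weights file
model.eval()

tfm = transforms.Compose([transforms.ToPILImage(),
                          transforms.Resize((224, 224)),
                          transforms.ToTensor()])

cap = cv2.VideoCapture(0)                 # camera mounted on the roti/dhoti (assumed device index 0)
ok, frame = cap.read()
cap.release()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model(tfm(rgb).unsqueeze(0)).argmax(dim=1).item()
    print("Ripeness level:", STAGES[pred])  # placeholder for the signal sent to the farmer's mobile app
```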
The authors thank the Director, Chief Executive Officer and Principal of Siddaganga Institute of Technology, Tumakuru, Karnataka, for the research facilities. The authors are thankful to the Biotechnology Ignition Grant (BIG), awarded by the Biotechnology Industry Research Assistance Council (BIRAC), Department of Biotechnology (DBT), Government of India, for providing funds under the Agriculture and Allied Areas category to carry out this research work.
This work was supported by Biotechnology Ignition Grant (BIG), awarded by the Biotechnology Industry Research Assistance Council (BIRAC), Department of Biotechnology (DBT), Government of India.
All authors contributed toward data set preparation, data analysis, drafting and revising the paper and agreed to be responsible for all aspects of this work.
Not applicable.
Authors declare that all works are original and this manuscript has not been published in any other journal.
The views and conclusions expressed in this article are solely those of the authors and do not necessarily represent the views of their affiliated institutions. The authors are responsible for the accuracy and completeness of the information provided, but do not accept any liability for any direct or indirect losses resulting from the use of this content.
The authors declare that there are no conflicts of interest regarding the publication of this article.

  1. Amrutha, B., Shetty, A.S., Suvarna, D.M., Goutham, Anchan, R.S. (2023). A CNN based approach to classify areca nuts based on grades. Journal of Emerging Technologies and Innovative Research. 10: 469-472.

  2. Ajit Hegde, Vijaya Shetty Sadanad, Chinmay Ganapati Hegde, Krishnamurthy Manjunath Naik, Kanaad Deepak Shastri. (2023). Identification and categorization of disease in arecanut: a machine learning approach. Indonesian Journal of Electrical Engineering and Computer Science. 31: 1803-1810.

  3. AlZubi, A.A. (2023). Application of machine learning in drone technology for tracking cattle movement. Indian Journal of Animal Research. 57(12): 1717-1724. doi: 10.18805/IJAR.BF-1697.

  4. AlZubi, A.A., Abdulrhman,  A. (2024). Application of Machine Learning in Drone Technology for Tracking of Tigers. Indian Journal of Animal Research. 58(9): 1614-1621. doi: 10.18805/IJAR.BF-1759.

  5. Anilkumar, M.G., Karibasaveshwara, T.G., Pavan, H.K., Urankar, S., Deshpande, A. (2021). Detection of Diseases in Arecanut Using Convolutional Neural Networks. International Research  Journal of Engineering and Technology. 8: 4282- 4286.

  6. Bharadwaj and Dinesh, R. (2017). Possible Approaches to Arecanut Sorting/Grading using Computer Vision: A Brief Review. Proceedings of International Conference on Computing, Communication and Automation.

  7. Bharadwaj, N.K., Kumar, V.N. (2021). Classification and grading of arecanut using texture based block wise local binary patterns. Turkish Journal of Computer and Mathematics Education. 12: 575-586. 

  8. Bong-Hyun, K., Alamri, A.M. and AlQahtani, S.A. (2024). Leveraging Machine Learning for Early Detection of Soybean Crop Pests. Legume Research. 47(6): 1023-1031. doi: 10.18805/ LRF-794.

  9. Chaitanya, M., Gurukiran, M.T., Guruprasad, A.S., Manjunath, T.M., Aparanji, V.M. (2024). Ripeness Detection of Areca Nut Using VGG-16, 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT), Bengaluru, India. pp. 1-5.

  10. Aparanji, V.M. (2024). Ripeness Detection of Areca Nut Using VGG-16, 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT), Bengaluru, India. pp. 1-5.

  11. Chandrashekhara, H. (2019). Classification of arecanut using neural networks with feed forward techniques. International Journal of Research in Advent Technology. 7: 998-1003.

  12. Cho, O.H. (2024). An Evaluation of Various Machine Learning Approaches for Detecting Leaf Diseases in Agriculture. Legume Research.  47(4): 619-627. doi: 10.18805/LRF-787.

  13. Deshmukh, P.S., Patil, P.G., Shahare, P.U., Bhanage, G.B., Dhekale, J.S., Dhande, K.G., Aware, V.V. (2019). Effect of mechanical and chemical treatments of arecanut (Areca catechu L.) Fruit husk on husk and its fibre. Waste Management. 95: 458-465.

  14. Delamou, M., Bazzi, A., Chafii, M., Amhoud, E.M. (2023). Deep Learning-based Estimation for Multitarget Radar Detection. pp. 1-5.

  15. Hamlili, F.Z., Beladgham, M., Khelifi, M., Ahmed, B. (2022). Transfer learning with Resnet 50 for detecting COVID 19 in chest X ray images. Indonesian Journal of  Electrical Engineering and Computer Science. 25:1458-1468. 

  16. Huang, F., Yu, L., Shen, T., Jin, L. (2019). Chinese Herbal Medicine Leaves Classification Based on Improved AlexNet Convolutional Neural Network. Proceedings of IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference.

  17. Huang, G.H., et al. (2022). Deep transfer learning for the multilabel classification of chest X ray images. Diagnostics. 12. 

  18. Huang, S.Y., Wan-Jia, A.N., De-Shun, Z. (2023). Image classification and adversarial robustness analysis based on hybrid quantum-classical convolutional neural network. Optics Communication. 533. 

  19. Kumar, K.P.M., Gubbi, V. (2019). Arecanut grade analysis using image processing techniques.  Int. J. Recent Technol. Eng. 7: 1-6.

  20. Kusumadhara, S., Ravikumar, M., Raghavendra, P. (2020). A frame work for grading of White Chali Type arecanuts with machine learning algorithms. Int. J. Recent Technol. Eng. 8: 2782-2789.

  21. Lee, W.B., Bong-Hyun, K. (2024). Machine Learning Innovations for Precise Plant Disease Detection: A Review. Legume Research. 47(10): 1633-1638. doi: 10.18805/LRF-799.

  22. Metagar, S.M. and Walikar, G.A. (2024). Machine learning models for plant disease prediction and detection: A Review.  Agricultural Science Digest. 44(4): 591-602. doi: 10.18805/ ag.D-5893.

  23. Naik, P.M., Rudra, B., Mohammadi, R. (2020). Transfer learning based automatic detection of coronavirus disease 2019 (COVID-19) from chest X-ray images. J. Biomed. Phys. Eng. 10: 559-568.

  24. Naik, P.M., Rudra, B. (2024). Quantum inspired Arecanut X ray image classification using transfer learning. IET Quant. Comm. 1-7.

  25. Naoumi, S., Bazzi, A., Bomfin, R., Chafii, M. (2024). Complex Neural Network based Joint AoA and AoD Estimation for Bistatic ISAC. IEEE Journal of Selected Topics in Signal Processing. pp. 1-15.

  26. Naoumi, S., Bazzi, A., Bomfin, R. and Chafii, M. (2023). Deep Learning-Enabled Angle Estimation in Bistatic ISAC Systems. 2023 IEEE Globecom Workshops (GCWkshps), Kuala Lumpur, Malaysia. 854-859.

  27. Prakash, K. (2012). Arecanut Economy at the Cross Roads. Report of Special Scheme on Cost of Cultivation of  Arecanut in Karnataka (GOI). 

  28. Shalini, M. (2014). Role of Agriculture In Indian Economy. International Journal of Engineering, Science and Mathematics. 5: 284-288.
