Automatic detection and classification of objects is an important capability of image analysis. The varied nature, size, and visual features of objects make it challenging to detect and classify them in aerial images. Manual detection of objects in these images is very time consuming because of the volume and nature of the captured data, so it is desirable to automate the detection of various features or objects from these images.
The conventional methods for object classification involve two stages:
(i) Identify the regions with object presence in the image and
(ii) Classify the objects in the regions.
Additionally, object detection becomes challenging in the presence of complex backgrounds, varying object sizes, noise, and distance. This work proposes a customized YOLO model to detect and classify different objects such as garbage waste, plastic waste, and vehicles in images.
I. INTRODUCTION
Municipal waste handling is one of the major challenges faced by city authorities, especially in a developing country like India, where the population density is very high. The Municipal Solid Waste (Management and Handling) Rules, 2000 define the collection, segregation, transportation, and suitable disposal of municipal waste as obligatory duties of the municipal authorities.
Segregation is rarely undertaken, and no incentives are given to encourage people to practice it. With such limited efforts to educate citizens, the lack of knowledge about environmental consequences leaves them apathetic to segregation.
When segregation is poor, recycling becomes ineffective and almost impossible. The scale of waste to be handled is very high compared to the budget allotted for municipal waste management, and the rate of generation increases every year, which the budget fails to keep up with. Proper waste handling is possible within the budget only if the public cooperates by actively taking part in waste segregation and recycling.
II. EXISTING SYSTEM
Although a number of systems have been developed for waste sorting or management, they are expensive, require high skill to operate and maintain, target specific types of waste, and have been developed mainly for developed countries. In India, no such technology is available to sort the massive quantity of waste cost-effectively and efficiently. Although research has been conducted in the Indian context, there is still no practical implementation.
Effective recycling relies on effective sorting.
X-ray technology provides a highly effective means of sorting waste based on density or atomic number, with the ability to distinguish materials that other technologies cannot. X-ray detection systems offer a range of imaging solutions for waste sorting, including advanced dual-energy solutions for material discrimination based on atomic number.
III. OBJECTIVES
The purpose of the YOLO detection system is to classify different objects, such as garbage waste and vehicles. The system detects the objects present in the image and then classifies them into different categories.
The developed system aims to achieve the following five primary goals:
Affordable: The technologies must be affordable, as the cost is one of the most important considerations during the design phase.
Portable: The solution should be mobile and easy to use.
Secure: The system should be secure by ensuring that components are in a safe area.
Performance: Loading image data through the program and handling live streaming make performance a crucial consideration.
Accurate: Because the system must be precise, we have chosen the most accurate detection model among the available options.
IV. METHODOLOGY
A. Dataset Preparation
The first component of building a deep learning network is to gather our initial dataset. We need the images themselves as well as the labels associated with each image.
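As an illustration, a minimal sketch of this step in Python is given below. The directory names dataset/images and dataset/labels and the YOLO-format .txt label files are assumptions made for illustration, not part of the described system.

import os

# Hypothetical layout (for illustration only): dataset/images holds the image
# files and dataset/labels holds matching YOLO-format .txt files, one line per
# object: "class_id cx cy w h", with coordinates normalized to [0, 1].
IMAGE_DIR = "dataset/images"
LABEL_DIR = "dataset/labels"

def load_dataset(image_dir=IMAGE_DIR, label_dir=LABEL_DIR):
    """Return (image_path, label_path) pairs for every image that has a label file."""
    samples = []
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        label_path = os.path.join(label_dir, os.path.splitext(name)[0] + ".txt")
        if os.path.exists(label_path):
            samples.append((os.path.join(image_dir, name), label_path))
    return samples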
B. Split the Dataset
A training set: A training set is used by our classifier to "learn" what each category looks like by making predictions on the input data and then correcting itself when the predictions are wrong.
A testing set: After the classifier has been trained, we can evaluate its performance on a testing set.
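A minimal sketch of such a split, assuming the (image, label) pairs produced by the previous sketch and an illustrative 80/20 ratio, could be:

import random

def split_dataset(samples, train_fraction=0.8, seed=42):
    """Shuffle the (image, label) pairs and split them into training and testing sets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Example usage with the load_dataset() sketch from the previous step:
# train_set, test_set = split_dataset(load_dataset())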
C. Train the Model
The goal here is for our network to learn how to recognize each of the categories in our labeled data. When the model makes a mistake, it learns from this mistake and improves itself.
D. Classify Waste in Images
For each of the images in our testing set, we present them to the network and ask it to predict what it thinks the label of the image is. We then tabulate the predictions of the model for an image in the testing set.
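A simple way to tabulate the predictions, sketched here under the assumption that each test image has a single ground-truth class label, is:

from collections import Counter

def tabulate_predictions(true_labels, predicted_labels):
    """Count correct and wrong predictions per class and compute overall accuracy."""
    per_class = {}
    for truth, pred in zip(true_labels, predicted_labels):
        stats = per_class.setdefault(truth, Counter())
        stats["correct" if pred == truth else "wrong"] += 1
    accuracy = sum(t == p for t, p in zip(true_labels, predicted_labels)) / len(true_labels)
    return per_class, accuracy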
V. IMPLEMENTATION
A. Object Detection using YOLOv3
Step 1: Download the models. The following files are required (a download sketch follows this list):
a. The yolov3.weights file (containing the pre-trained network’s weights)
b. The yolov3.cfg file (containing the network configuration)
c. The coco.names file which contains the 80 different class names used in the COCO dataset
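A minimal download sketch is shown below; the URLs are the locations where these files are commonly hosted (pjreddie.com and the darknet GitHub repository) and should be verified before use.

import urllib.request

# Commonly used hosting locations for the official YOLOv3 files; verify before use.
FILES = {
    "yolov3.weights": "https://pjreddie.com/media/files/yolov3.weights",
    "yolov3.cfg": "https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg",
    "coco.names": "https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names",
}

for filename, url in FILES.items():
    # Download each file into the working directory.
    urllib.request.urlretrieve(url, filename)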
Step 2: Initialize the parameters
The YOLOv3 algorithm generates bounding boxes as the predicted detection outputs. Every predicted box is associated with a confidence score. In the first stage, all the boxes below the confidence threshold parameter are ignored for further processing.
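A sketch of this initialization is given below; the specific values are illustrative defaults commonly used with YOLOv3, not values mandated by our system.

# Illustrative default values commonly used with YOLOv3.
confThreshold = 0.5   # boxes with a confidence below this are discarded
nmsThreshold = 0.4    # IoU threshold used later by non-maximum suppression
inpWidth = 416        # width of the network's input blob
inpHeight = 416       # height of the network's input blob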
Step 3: Load the model and classes
The file coco.names contains the names of all the objects the model was trained on; we read these class names at startup. The network itself is loaded from the following files (a loading sketch follows this list):
a. yolov3.weights : The pre-trained weights.
b. yolov3.cfg : The configuration file.
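Using OpenCV's dnn module, this step can be sketched as follows, with the file names as listed above:

import cv2

# Read the class names, one per line, from coco.names.
with open("coco.names", "rt") as f:
    classes = f.read().rstrip("\n").split("\n")

# Load the Darknet network from the configuration file and pre-trained weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)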
Step 4: Read the input
In this step we read the image, video stream or the webcam.
In addition, we also open the video writer to save the frames with detected output bounding boxes.
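A minimal sketch of this step using OpenCV is shown below; the file names input.mp4 and output.avi and the MJPG codec at 30 fps are illustrative assumptions.

import cv2

# Input source: an image/video file, or cv2.VideoCapture(0) for the webcam.
cap = cv2.VideoCapture("input.mp4")

# Video writer used to save frames annotated with detected bounding boxes.
frame_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                         30, (frame_w, frame_h))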
Step 5: Process each frame
The input image to a neural network needs to be in a certain format called a blob.
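Continuing the earlier sketches (cap, net, inpWidth, inpHeight), a frame can be converted to a blob and fed to the network as follows:

ret, frame = cap.read()            # grab the next frame from the input source
if ret:
    # Scale pixels to [0, 1], resize to the network input size, swap BGR to RGB.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (inpWidth, inpHeight),
                                 [0, 0, 0], swapRB=True, crop=False)
    net.setInput(blob)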
Step 6: Getting the names of output layers
The forward function in OpenCV's Net class needs to know the final layer up to which it should run the network. Since we want to run through the whole network, we need to identify its last (output) layers, which is done with net.forward(getOutputsNames(net)).
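A sketch of getOutputsNames, written to handle the differing index shapes returned by different OpenCV versions, is:

import numpy as np

def getOutputsNames(net):
    """Return the names of the network's unconnected (i.e. final) output layers."""
    layer_names = net.getLayerNames()
    # getUnconnectedOutLayers() returns 1-based indices; shape varies by OpenCV version.
    out_indices = np.array(net.getUnconnectedOutLayers()).flatten()
    return [layer_names[int(i) - 1] for i in out_indices]

# Forward pass through the whole network up to its output layers.
outs = net.forward(getOutputsNames(net))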
Step 7: Post-processing the network’s output
Each bounding box output by the network is represented by a vector of (number of classes + 5) elements.
The first four elements represent center_x, center_y, width, and height. The fifth element represents the confidence that the bounding box encloses an object, and the remaining elements are the per-class scores.
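A post-processing sketch, continuing the variables from the earlier sketches (frame, outs, confThreshold, nmsThreshold), could be:

boxes, confidences, class_ids = [], [], []
frame_h, frame_w = frame.shape[:2]

for out in outs:                       # one output array per YOLO detection layer
    for detection in out:              # row: [cx, cy, w, h, objectness, class scores...]
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > confThreshold:
            cx, cy, w, h = detection[0:4] * np.array([frame_w, frame_h, frame_w, frame_h])
            boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping boxes that describe the same object.
indices = cv2.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)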
VI. OUTCOMES
VII. ACKNOWLEDGMENT
We are grateful to the members of the Major Project Committee of the CSE Department, REVA, who provided us with extensive and skilled guidance on our research activity.
Finally, we wish to express our deep gratitude to our parents for their constant words of encouragement throughout our studies.
VIII. CONCLUSION
Finally, utilizing the YOLO framework, we designed a GUI-based waste classification system that can classify various types of objects such as dry waste, wet waste, and vehicles. The system is dependable for object classification and identification, reducing human intervention and preventing infection and contamination. The accuracy of the system can be increased by adding more images to the database. We will strive to improve the system in the future so that it can differentiate a wider range of waste products by altering some of its characteristics.