Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Aman Mishrikotkar, Ayush Kamdi, Pratik Maske, Hardik Barbatkar, Dr. Ravindra Jogekar
DOI Link: https://doi.org/10.22214/ijraset.2022.40463
The concept of this paper was inspired by the recent surge in the automated car industry. The designed car is capable of detecting road signals and taking right and left turns accordingly. Object detection is a key capability required by the computing systems used in automated vehicles, and recent research in this area has made great progress in many directions. Object detection and tracking have a variety of uses; this paper explains how to use a convolutional neural network (CNN) for object detection in autonomous vehicles. Autonomous cars have the potential to solve traffic problems with the help of CNNs. However, complete autonomy is yet to be achieved, although today's CNNs have brought us closer to it than ever before. A CNN contains artificial neurons that are trained using preset rules, and these rules determine whether a neuron produces an output for a given set of inputs. The CNN analyzes various road footage covering scenarios such as collisions, empty roads, and traffic, and sends appropriate instructions to the car such as brake, accelerate, or slow down.
I. INTRODUCTION
A self-driving car can be defined as a vehicle that can guide itself without human interaction. It combines sensors, cameras, artificial intelligence, and other components. To qualify as fully autonomous, a vehicle must be able to navigate without human involvement to a predetermined destination over roads that have not been adapted for its use. An autonomous car offers reduced costs due to less fuel wastage, increased safety, increased mobility, and increased customer satisfaction, which gives it advantages over traditional cars. The biggest advantage of a self-driving car is significantly fewer traffic accidents: according to a recent technical report by the National Highway Traffic Safety Administration, more than 94% of road accidents are caused by human errors such as poor decision making and distraction. In the last few years, there has been a significant increase in research on the development of autonomous vehicles. The task of environment sensing in an autonomous vehicle is known as perception. In many autonomous driving systems, object detection is itself one of the most important tasks, as it allows the car to control itself and tackle nearby obstacles. Therefore, we need an object detection algorithm that is as accurate as possible.
One of the most accurate and high-speed algorithms is the You Only Look Once (YOLO) algorithm. YOLO is a prediction technique that gives correct results with few background errors. The algorithm has excellent learning capabilities that allow it to learn representations of objects and apply them in object detection. YOLO uses a single bounding box regression to determine the height, width, and center of objects [3].
A. How Does Object Detection Work?
To explore the concept of object detection, it is necessary to start with image classification, which goes through levels of incremental complexity.
In a real-life scenario, we need to go far beyond locating just one object and instead locate multiple objects in one image. For example, an autonomous car has to find the locations of other cars, trees, traffic lights, animals, signs, and humans, and take appropriate action based on this data. [14]
Beyond bounding boxes, there are also cases where we want to find the exact boundaries of the objects. This process is called instance segmentation. [4]
II. LITERATURE REVIEW
An autonomous vehicle consists of several sensors, which can be handled using different algorithms based on different technologies. Currently, many countries are working on the development of autonomous vehicles, and a country's growth in the automobile industry is partly determined by the advancements made on them. Self-driving car development involved many research efforts and issues in its early days. Hence, in this paper we present information about the enhancements made compared to the last century and new resource development for autonomous vehicles, and we explain the technical and non-technical issues and challenges that autonomous vehicle developers will have to face in the future.
A. Comparison of Machine Learning Algorithm’s on Self-Driving Car Navigation
Many statistical reports point out that more than 94% of accidents arise from direct human causes such as violating the speed limit, illegal overtaking, and suddenly cutting in. Therefore, autonomous driving was rapidly prototyped starting from a scaled RC-car platform. In this research, the authors built a self-driving car for collecting data. The Nvidia Jetson Nano is a small single-board computer for developing and training models, using a 128-core Maxwell GPU to rapidly process AI frameworks and models for applications such as image classification, object detection, and segmentation [17]. For the training model, they used three models, Support Vector Machine (SVM), Artificial Neural Network Multilayer Perceptron (ANN-MLP), and Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) [18], and compared them to find the best accuracy for a self-driving car model (SDCM).
The SVM can handle both classification and regression problems, including linear and non-linear hyperplanes, by using a kernel function to reduce complicated feature spaces. The ANN-MLP [18] is an artificial neural network and a nonparametric estimator used for classifying and detecting objects. The CNN-LSTM is a model well suited to classification problems and consists of five main stages: a convolution stage, a detector stage, a pooling stage, an LSTM stage, and a fully connected stage [18]. The authors propose three speed levels and three scenarios for comparing the accuracy of each algorithm: SVM, ANN-MLP, and CNN-LSTM. In the first experiment, they set up three speed levels of 1, 2, and 3 km/h, respectively, without any obstacle on the road; the accuracy rate of the CNN-LSTM algorithm is the highest of all models at every speed level. In the second experiment, although an obstacle is added to the scenario, CNN-LSTM is still the most accurate algorithm. In the final experiment, even with more obstacles on the road, the accuracy rate of the CNN-LSTM algorithm remains higher than that of any other algorithm. From this comparison of SVM, ANN-MLP, and CNN-LSTM across different scenarios and speed levels, it can be seen that the CNN-LSTM algorithm achieves the highest accuracy, both with and without obstacles.
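To illustrate how the convolution, detector, pooling, LSTM, and fully connected stages fit together, the following is a minimal Keras sketch only; the input shape, number of steering classes, and layer sizes are assumptions for illustration, not the model used in the cited work.

```python
# Minimal CNN-LSTM sketch (assumed shapes and sizes, not the cited paper's exact model).
# Input: a short sequence of camera frames; output: a steering command class.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 5, 64, 64, 3   # assumed: 5 frames of 64x64 RGB
NUM_COMMANDS = 3                  # assumed: left / straight / right

model = models.Sequential([
    # Convolution + detector (ReLU) stage applied to every frame in the sequence
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"),
                           input_shape=(SEQ_LEN, H, W, C)),
    # Pooling stage
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM stage aggregates features across the frame sequence
    layers.LSTM(64),
    # Fully connected stage produces the driving command
    layers.Dense(NUM_COMMANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```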
III. METHODOLOGY
Object detection is one of the most important tasks in autonomous driving for achieving customer safety. In this paper, we explain an implementation of object detection using TensorFlow [15], which makes detection more efficient. We also introduce end-to-end learning using a deep reinforcement learning method; with this method we count the total number of collisions and measure the success rate of reaching the goal. The main objective of this model is to increase the rate at which the destination is reached. We use the YOLO method to identify objects by creating bounding boxes, finding the class probabilities for each box, and then locating objects in the captured images [16].
To build an autonomous car, detection of objects in the driving environment is a key feature. Different sensors, such as cameras, are required for this, and each of these sensors detects objects in a different way; the working flow of these sensors is explained in this paper. We explain an implementation of vehicle detection on the road using a convolutional neural network, and an implementation method for lane and object detection using the YOLO and CNN algorithms [1]. Deep learning and machine learning techniques are used for object detection [3] [14]. We also introduce a new hybrid method, the Local Multiple System, which combines the features of a Convolutional Neural Network (CNN) and a Support Vector Machine. The objective of this paper is to give deep knowledge about autonomous vehicles, such as which sensors to use, the challenges in implementing autonomous vehicles, and the classification of the vehicles.
Object detection involves several steps, such as feature extraction from the image, extraction of RGB values, and bounding box creation [2]. We use a bounding box method for object identification by considering the height, width, and color of the object, together with an integrated approach for real-time object identification on captured images that combines Faster R-CNN and the YOLO algorithm. Objects can be detected in both images and videos.
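The paper does not include code for this step, so the following is a minimal sketch of the bounding box approach using OpenCV's DNN module with pretrained YOLO weights; the file names (yolov3.cfg, yolov3.weights), the input image name, and the thresholds are assumptions for illustration.

```python
# Sketch: YOLO-style bounding box detection with OpenCV's DNN module.
# File names and thresholds are assumed, not taken from the paper.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed files
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("road_scene.jpg")                              # assumed input
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

boxes, confidences, class_ids = [], [], []
for output in net.forward(layer_names):
    for det in output:                 # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression keeps the best box per detected object
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```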
A. CNN based Traffic Sign Detection and Recognition for Outdoor Travel
Generally, traditional computer vision methods were developed to detect and recognize traffic signs [2], but they require time-consuming manual work to extract important features from images. By applying deep learning, we create a model that efficiently classifies traffic sign images and learns on its own to identify the most appropriate features for this problem. A plain deep neural network would require a large amount of data and huge matrix multiplication operations demanding substantial computational power; the Convolutional Neural Network was developed to tackle this. It has been observed that a Convolutional Neural Network is more efficient and faster than a regular deep neural network for problems related to computer vision, and CNNs are very important in the computer vision field [1]. Convolutional network models are also easier and faster to train on images than traditional models. To train and test the model, we use the German Traffic Sign Dataset, which contains more than 50,000 traffic sign images divided into 43 classes. This dataset is big enough to help train the model accurately and achieve better results. [4]
B. Data Preprocessing
Firstly, we need to bring every image to a single size. To avoid compressing away too much data or stretching the image too much, we need to choose the dimensions carefully and save the resized image accurately. So, we have decided to resize every image to 32 x 32 x 3. [3]
Next, we generate augmented images, which help our model find more features in the images. Hence preprocessing is a very crucial step, as it reduces the number of features and the execution time.
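As a sketch of this preprocessing step, the images can be resized to 32 x 32 x 3, normalized, and augmented, for example with Keras' ImageDataGenerator; the augmentation parameters below are assumed values, not the paper's exact settings.

```python
# Sketch of the preprocessing step: resize to 32x32x3, normalize, and augment.
# Augmentation parameters are assumed values for illustration.
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess(image_bgr):
    """Resize a BGR image to 32x32x3 and scale pixel values to [0, 1]."""
    resized = cv2.resize(image_bgr, (32, 32))
    return resized.astype(np.float32) / 255.0

augmenter = ImageDataGenerator(
    rotation_range=10,       # small rotations
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
    zoom_range=0.1,          # slight zoom in/out
)

# Example usage: X_train is an array of preprocessed images, y_train the class labels
# train_flow = augmenter.flow(X_train, y_train, batch_size=64)
```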
C. Model Architecture
The CNN architecture consists of three types of layers: convolutional layers, pooling layers, and fully connected layers. [14]
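A minimal Keras sketch of such an architecture for the 43-class traffic sign problem on 32 x 32 x 3 inputs is shown below; the filter counts and dense-layer sizes are assumptions chosen for illustration, not the paper's exact model.

```python
# Minimal CNN sketch for 43-class traffic sign classification on 32x32x3 inputs.
# Filter counts and dense sizes are assumed, not the paper's exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),  # convolutional layer
    layers.MaxPooling2D(2),                                            # pooling layer
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                              # fully connected layer
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),                            # one output per sign class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```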
D. Automatic Lane Detection and Tracking
Being able to detect lane lines is a crucial task for any self-driving autonomous vehicle. In this paper, OpenCV is used to identify lane lines on the road. The OpenCV pipeline takes input images, finds candidate lane lines, and renders an illustration of the lane. OpenCV tools such as color selection, region-of-interest selection, grayscaling, Gaussian smoothing, Canny edge detection, and Hough transform line detection are employed [16]. A color detection algorithm identifies pixels in a picture that match a given color or color range. Region-of-interest selection allows you to select a rectangle in an image, crop the rectangular region, and finally display the cropped image. Grayscaling is the method of changing an image from a color space such as RGB or CMYK to shades of grey. In the Gaussian blur operation, the image is convolved with a Gaussian filter rather than a box filter; the Gaussian filter is a low-pass filter that removes high-frequency components. Canny edge detection is used to detect the edges in a picture: it accepts a grayscale image as input and uses a multi-stage algorithm. The Hough transform is a method used in image processing to detect any shape that can be represented in mathematical form. The goal is to piece together a pipeline that detects the line segments in the image, then averages/extrapolates them and draws them onto the image for display.
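A minimal sketch of this pipeline with OpenCV is shown below; the input image name, threshold values, and region-of-interest polygon are assumptions for illustration.

```python
# Sketch of the lane detection pipeline: grayscale -> Gaussian blur -> Canny ->
# region of interest -> Hough line detection. Thresholds and ROI are assumed values.
import cv2
import numpy as np

image = cv2.imread("road.jpg")                       # assumed input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # grayscaling
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # Gaussian smoothing
edges = cv2.Canny(blurred, 50, 150)                  # Canny edge detection

# Keep only a trapezoidal region in front of the car (assumed coordinates)
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                 (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
masked = cv2.bitwise_and(edges, mask)

# Hough transform line detection, then draw the segments onto the image
lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 3)
cv2.imwrite("lanes.jpg", image)
```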
E. For Automatic Traffic Light Monitoring and Control
A high-speed camera operating at 500 fps is mounted on the car because it can capture five images during a single period of lamp blinking. The proposed detection system consists of six modules: loading, band-pass filtering, binarization, buffering, detection, and classification. The binarization module first estimates the state of the traffic light dynamics, which includes the blinking amplitude, offset, and phase, using a Kalman filter for state estimation. It then determines an appropriate threshold for binarizing the filtered image based on the estimated state and finally binarizes the filtered image. The buffering module relays the image that has stronger signals compared to the previous images, because recognizing colors and areas from an image with non-maximum brightness is difficult. The detection module extracts contours from the peak binarized image; the size and shape of the contours are then used to exclude candidates and prevent false detections. The classification module classifies the lamp color using the contours and the RGB images.
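The following is a much simplified sketch of only the binarization, detection, and classification steps: it omits the band-pass filter, the Kalman-filter state estimation, and the buffering module, and the threshold, contour-size limits, and file name are assumed values.

```python
# Simplified sketch of binarization -> contour detection -> lamp color classification.
# The band-pass filter, Kalman-filter state estimation, and buffering are omitted;
# the threshold and contour-size limits are assumed values.
import cv2
import numpy as np

frame = cv2.imread("traffic_light.jpg")                      # assumed input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Binarization with an assumed fixed threshold (the described system derives the
# threshold from the estimated blinking state instead)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Detection: extract contours and filter candidates by size
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    if not (20 < area < 2000):                               # assumed size limits
        continue
    x, y, w, h = cv2.boundingRect(cnt)
    b, g, r = [float(c) for c in cv2.mean(frame[y:y + h, x:x + w])[:3]]
    # Classification: crude color decision from the mean RGB of the candidate region
    if r > g and r > b:
        label = "red"
    elif g > r and g > b:
        label = "green"
    else:
        label = "yellow/unknown"
    print(f"lamp candidate at ({x}, {y}): {label}")
```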
REFERENCES
[1] Dr. Ravindra Jogekar and Dr. Nandita Tiwari, "Unconventional technique for improving farmer yields by exposing and mitigating foliage diseases in an extensively adaptable deep learning and computational model through microbiological vegetation assessment", International Journal of Future Generation and Communication Networks, 2020; 13(3): 3516-3526. ISSN: 2233-7857.
[2] Dr. Ravindra Jogekar and Dr. Nandita Tiwari, "Summary of leaf-based plant disease detection systems: A compilation of systematic study findings to classify the leaf disease classification schemes", 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4).
[3] Dr. Ravindra Jogekar and Dr. Nandita Tiwari, "A Review of Deep Learning Techniques for Identification and Diagnosis of Plant Leaf Disease", Smart Trends in Computing and Communications: Proceedings of SmartCom 2020, Smart Innovation, Systems and Technologies 182.
[4] Dr. Ravindra Jogekar and Dr. Nandita Tiwari, "Leaf-based plant disease detection systems depiction: An overview of outcomes of statistical studies for the identification of classification schemes for leaf diseases".
[5] Dr. Ravindra Jogekar and Dr. Nandita Tiwari, "Enhanced adaptive creation of visualisation network by detection of leaves", Materials Today: Proceedings.
[6] https://researchleap.com/research-in-autonomous-driving-a-historic-bibliometric-view-of-the-research-development-in-autonomous-driving
[7] https://appsilon.com/object-detection-yolo-algorithm/
[8] https://www.seeedstudio.com/blog/2019/04/16/introduction-to-artificial-intelligence-for-makers/
[9] https://blog.tensorflow.org/2018/07/move-mirror-ai-experiment-with-pose-estimation-tensorflow-js.html?m=1
[10] https://www.lfedge.org/2020/10/15/pushing-ai-to-the-edge-part-two-edge-ai-in-practice-and-whats-next/
[11] https://www.lfedge.org/2020/10/15/pushing-ai-to-the-edge-part-two-edge-ai-in-practice-and-whats-next/
[12] https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Projects:2017s1-181_BMW_Autonomous_Vehicle_Project_Camera_Based_Lane_Detection_in_a_Road_Vehicle_for_Autonomous_Driving
[13] https://www.semanticscholar.org/paper/The-key-technology-toward-the-self-driving-car-Zhao-Liang/7b890762fd952aa45e785a8fff8eb80bdeb87478
[14] Ekim Yurtsever, Jacob Lambert, Alexander Carballo, and Kazuya Takeda, "A Survey of Autonomous Driving: Common Practices and Emerging Technologies", IEEE, 2020.
[15] Mike Daily, Swarup Medasani, Daimler Protics, Trivedi, San Diego, "Self-Driving Cars", IEEE, 2019.
[16] R. Poonkuzhali, Vineet Kumar Singh, "Virtual Self Driving Car using the Techniques of Image Processing and Deep Neural Networks", International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958 (Online), Volume-9 Issue-5, June 2020.
[17] Sadanand Howal, Aishwarya Jadhav, Chandrakirti Arthshi, Sapana Nalavade, Sonam Shinde, "Object Detection for Autonomous Vehicle Using TensorFlow", 2019.
[18] NVIDIA Autonomous Machine (Jetson Nano Development Kits). Retrieved 2020. 17th International Conference on Electrical Engineering, Computer, Telecommunications and Information Technology (ECTI-CON).
Copyright © 2022 Aman Mishrikotkar, Ayush Kamdi, Pratik Maske, Hardik Barbatkar, Dr. Ravindra Jogekar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET40463
Publish Date : 2022-02-22
ISSN : 2321-9653
Publisher Name : IJRASET