This paper describes a project on autonomous cars. An autonomous car is a vehicle capable of sensing its environment and operating without human involvement. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. Autonomous cars are anticipated to be the driverless, efficient, crash-avoiding ideal urban cars of the future. In this regard, the first challenge is to customize and embed existing technology in a conventional vehicle so as to translate it into something close to the expected autonomous car.
I. INTRODUCTION
In a new automotive application, we have used convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to steering commands for a self-driving car.
This powerful end-to-end approach means that, with minimal training data from humans, the system learns to steer, with or without lane markings, on both local roads and highways. The system can also operate in areas with unclear visual guidance such as parking lots or unpaved roads. CNNs have revolutionized computational pattern recognition. Before the widespread adoption of CNNs, most pattern recognition tasks were performed using an initial stage of hand-crafted feature extraction followed by a classifier.
The important breakthrough of CNNs is that features are now learned automatically from training examples. The CNN approach is especially powerful when applied to image recognition tasks because the convolution operation captures the 2D nature of images. By using the convolution kernels to scan an entire image, relatively few parameters need to be learned compared to the total number of operations.
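To make the parameter-sharing point concrete, here is a minimal plain-NumPy sketch of the convolution scan (not code from the project); the 64x64 image size and 3x3 kernel are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # the same 9 kernel weights are reused at every position
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 3x3 kernel scanned over a 64x64 image learns only 9 weights,
# whereas a fully connected layer mapping the same input to the
# same 62x62 output would need 64*64*62*62 weights.
kernel_params = 3 * 3
dense_params = 64 * 64 * 62 * 62
```

The same small set of weights is applied at every image position, which is exactly why the parameter count stays low relative to the number of operations.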
II. SYSTEM OVERVIEW
The system consists of an L298N motor driver, a Raspberry Pi 3 Model B+, a Raspberry Pi Camera Module V2, a Robocraze DIY 4-wheel drive robot, an Arduino Uno microcontroller board, and a lithium polymer battery.
III. COMPONENTS
A. L298N Motor Driver
The L298 is a high-current, high-voltage IC. It receives TTL logic signals and drives loads such as motors, solenoids and relays. It is mostly used in motor driver designs. It has two dedicated enable pins for enabling or disabling the particular device attached to each of its outputs.
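As an illustration of this control logic, a small sketch (the command names are our own labels; IN1/IN2/EN follow the common naming for one L298N channel) mapping drive commands to logic levels:

```python
# Minimal sketch of the direction logic for one L298N motor channel.
# IN1/IN2 are the two TTL inputs of the channel; EN is the enable pin,
# which is often driven with PWM to set motor speed.

def l298n_channel_state(command):
    """Map a drive command to (IN1, IN2, EN) logic levels."""
    states = {
        "forward": (1, 0, 1),  # current flows one way through the motor
        "reverse": (0, 1, 1),  # current reversed
        "brake":   (0, 0, 1),  # both inputs low, enable high: fast stop
        "coast":   (0, 0, 0),  # channel disabled: motor free-runs
    }
    return states[command]
```

In the actual robot these levels would be written to GPIO pins; the mapping itself is what the enable pins make possible.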
B. Raspberry Pi 3 Model B+
The Raspberry Pi 3 Model B is the third-generation Raspberry Pi. This powerful credit-card-sized single-board computer can be used for many applications and supersedes the original Raspberry Pi Model B+ and the Raspberry Pi 2 Model B. While maintaining the popular board format, the Raspberry Pi 3 Model B brings a more powerful processor, 10x faster than the first-generation Raspberry Pi. It also adds wireless LAN and Bluetooth connectivity, making it an ideal solution for powerful connected designs.
C. Raspberry Pi Camera Module V2
The Raspberry Pi NoIR Camera Module V2 is a high-quality 8-megapixel Sony IMX219 image sensor on a custom-designed add-on board for the Raspberry Pi, featuring a fixed-focus lens. It is capable of 3280 x 2464 pixel static images and also supports 1080p30, 720p60 and 640x480p60/90 video. It attaches to the Pi via one of the small sockets on the board's upper surface and uses the dedicated CSI interface, designed especially for interfacing to cameras. The board itself is tiny, at around 25 mm x 23 mm x 9 mm, and weighs just over 3 g, making it ideal for mobile applications.
D. Robocraze DIY 4-wheel Drive Robot
This is a smart car chassis (4WD), usable as a racing car or robot car chassis, supplied with wheels and motors.
Its four geared DC motors give agile cornering, good directional stability and ample power; the large, stable chassis is very easy to expand.
Mechanical specifications: 1. Motor power supply: 3 V to 6 V (all parameters are measured without load). 2. DC gear motors. 3. No-load speed (6 V): 200 RPM.
The mechanical structure is simple and very easy to install.
A universal castor avoids slippage, and placeholders are provided for different sensors, servos and modules.
A 4xAA battery holder and the required brackets, screws and nuts are included.
E. Arduino Uno R3 Microcontroller board
The Arduino UNO is a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. You can tinker with your UNO without worrying too much about doing something wrong; in the worst case you can replace the chip for a few dollars and start over again.
F. Lithium Polymer battery
A lithium polymer battery, or more correctly lithium-ion polymer battery (abbreviated as LiPo, LIP, Li-poly, lithium-poly and others), is a rechargeable battery of lithium-ion technology using a polymer electrolyte instead of a liquid electrolyte. Very high conductivity semisolid (gel) polymers form this electrolyte. These batteries provide higher specific energy than other lithium battery types and are used in applications where weight is a critical feature, such as mobile devices, radio-controlled aircraft and some electric vehicles.
IV. FUTURE SCOPE
Autonomous sensors play an essential role in automated driving: they allow cars to monitor their surroundings, detect oncoming obstacles, and safely plan their paths. In combination with automotive software and computers, they will soon allow the automation system to take over full control of the vehicle, thereby saving drivers a significant amount of time by doing tasks in much more efficient and safe ways. Given the fact that the average driver spends approximately 50 minutes in a car daily, just imagine how valuable autonomous vehicles could be for the fast-paced world we live in.
While autonomous vehicle technology appears to be developing at a steady pace, no commercially available vehicle has yet reached the Level 4 ranking required for road-safe autonomous driving. A huge amount of technological improvement still needs to be taken seriously by manufacturers in order to ensure autonomous vehicle safety on the roads.
V. OUTCOME/RESULT
We used a simulation provided by Udacity to collect the data samples. The first step is then to import all the images from their location. Here we use the centre-camera images only, not the left and right ones.
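Assuming the standard simulator log layout (columns: center, left, right, steering, throttle, brake, speed), the import step could be sketched as:

```python
import csv

def load_center_samples(log_path):
    """Read a Udacity-style driving_log.csv, keeping only the centre
    camera image path and the steering angle from each row.

    Column order assumed: center, left, right, steering, throttle,
    brake, speed (the usual simulator log layout).
    """
    paths, angles = [], []
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            paths.append(row[0].strip())   # centre image path
            angles.append(float(row[3]))   # steering angle
    return paths, angles
```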
The next part is visualisation of the data. This is important because if the data contain many steering angles for left curves but very few for right curves, the model will generalize to steer mostly to the left. We therefore balance the data so that each class has roughly the same number of samples, splitting the angles into bins and plotting a histogram.
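A sketch of this binning-and-capping step; the bin count and per-bin cap are illustrative values, not taken from the paper:

```python
import numpy as np

def balance_steering(angles, n_bins=25, samples_per_bin=300, seed=0):
    """Cap the number of samples in each steering-angle bin.

    Returns the indices to keep, so that images and angles can be
    filtered with the same index list.
    """
    rng = np.random.default_rng(seed)
    angles = np.asarray(angles)
    bins = np.linspace(angles.min(), angles.max(), n_bins + 1)
    keep = []
    for i, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
        if i == n_bins - 1:
            # close the last bin on the right so the max angle is kept
            in_bin = np.flatnonzero((angles >= lo) & (angles <= hi))
        else:
            in_bin = np.flatnonzero((angles >= lo) & (angles < hi))
        rng.shuffle(in_bin)
        keep.extend(in_bin[:samples_per_bin])  # drop the excess samples
    return sorted(keep)
```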
Then we remove the redundant data. We get a very large number of samples where the angle is 0, so we remove most of these values; along with that, we shuffle the samples so that the steering angles are distributed evenly. In the next step we store the image paths and steering angles in two separate arrays, e.g. print(image_paths[0], steerings[0]). The fourth step is splitting the data into training and validation sets. The validation data will be used to test the performance of our created model after each epoch. The next step is augmentation: we slightly alter the pictures, for example shifting them a little to the left or right, so that we obtain many more data samples.
The next step is pre-processing of the images. In the original picture everything is visible, but we use only the track region, so the rest is cropped away. The last step is creating the model, whose first convolutional layer has size 31*48 with 24 filters.
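The cropping and scaling can be sketched as below; the crop margins are illustrative and depend on where the track sits in the simulator's camera frame:

```python
import numpy as np

def preprocess(image, crop_top=60, crop_bottom=25):
    """Crop away the sky and the car hood, then scale pixels to [0, 1].

    crop_top / crop_bottom are illustrative values chosen so that only
    the track region remains in the frame.
    """
    h = image.shape[0]
    cropped = image[crop_top:h - crop_bottom]
    return cropped.astype(np.float32) / 255.0
```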
VI. CONCLUSION
For our autonomous driving car architecture, we have divided the system into two main parts, software and hardware. The software part includes the control block, which is the main part of our architecture. The control block includes the localization, detection, motion and mission planning modules. We have used state-of-the-art algorithms for each module. On the hardware side, we have used a drive kit for controlling the car, which receives input through the PID controller from the planning module. We have also integrated different sensors with the simulation software for acquiring the perception of the environment. Future work includes the improvement of data fusion algorithms and driving at high speeds.