People who are visually impaired face great difficulty in recognizing banknotes for day-to-day transactions, as the texture and shape of many banknotes are very similar. In addition, the Indian currency system does not have a separate banknote in circulation designed to help people who are visually disabled. This paper presents a real-time Indian currency detection system for visually impaired persons. The proposed work uses a vision-based system to enable visually impaired people to recognize banknotes reliably. The recognition system is implemented using various machine learning and deep learning approaches and achieves an accuracy of up to 90%. Two experiments are carried out on the banknote images: the first analyses the different regions of the banknote, and the second examines the banknote image captured by a camera in a controlled environment. The system is extended to support visually impaired people through a "voice-based assistance system" that produces spoken output. It can identify all seven banknotes used in India and presents the result as both textual and auditory output, so that people who are visually impaired can easily use it in daily transactions.
I. INTRODUCTION
Humans have a great capacity for recognition, whether by sight or by touch: any object can be identified with the naked eye and memorized by the brain. For people with vision impairment, however, identifying most things is not an easy task, especially those used on a daily basis. Coins, which are made of metal and carry embossed imprints, can be touched and recognized easily, whereas the Indian currency system does not have a separate banknote in circulation that helps people who are visually disabled. Banknotes come in different colours and sizes, but their tactile marks can fade away over time, so these cues may not help and people may be left without proper support. Banknote detection can be done in two ways: with sensors (a sensor-based recognition system) or with a camera (a vision-based system). The main drawback of sensors compared to a camera is the number of electrical components involved in building the system.
The aim of this project is to evaluate the feasibility of machine learning and deep learning approaches in recognizing Indian banknotes. The system is divided into two stages:
Using conventional machine learning, we analyse the effect of different regions of Indian Rupee banknotes on recognition.
Using a deep learning approach, we evaluate different orientations of the banknote and their effect on recognition.
One of the major highlights of the proposed system is the voice-based output. After a banknote is recognized, the system announces the monetary value of the note loudly and clearly through the computer's built-in speaker.
In conclusion, this project contributes new ideas for developing assistive technologies that help visually impaired and blind people complete their tasks and lead their daily lives without hassle.
While accuracy is an important aspect of the proposed system, classifying notes correctly is the central task, and image pre-processing helps in achieving better accuracy. Notes may be withered or of low quality, so the image quality should first be improved; the histogram equalization technique can be applied to enhance the banknote image.
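For illustration, a minimal NumPy sketch of histogram equalization on a low-contrast grayscale image; this is written out directly rather than through a library call, and is only an assumed sketch of how the pre-processing step could work:

```python
import numpy as np

def equalize_histogram(gray):
    """Spread the intensity histogram of an 8-bit grayscale image so a worn,
    low-contrast banknote photo uses the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)  # per-intensity pixel counts
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # first non-empty bin
    # Map each intensity through the normalized CDF (standard equalization formula).
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A low-contrast stand-in image: intensities squeezed into [100, 120].
img = np.linspace(100, 120, 64).astype(np.uint8).reshape(8, 8)
out = equalize_histogram(img)   # now stretched to cover 0..255
```

After equalization the darkest pixels map to 0 and the brightest to 255, which makes subsequent feature extraction less sensitive to a note's wear and lighting.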
II. EXISTING SYSTEM
So far, much work has been done on banknote recognition systems, and these systems have done an excellent job of detecting banknotes. One example of similar work is a Hungarian banknote recognition system in which images of the banknotes were captured with a phone camera. Although the system performed well in recognizing the Hungarian currency, it suffered from lower accuracy because it was built as a sensor-based system. A sensor-based system involves various electrical components, which makes the system heavy and bulky. The machine learning algorithms used in that work also produced less accurate results. Moreover, the system did not have voice output, which would have helped visually impaired people. These points are the drawbacks of the mentioned system. Another system was built to recognize the Malaysian currency, the Ringgit. In that work, several algorithms such as KNN, SVM, BN, and DT were used, but the system lacked features such as voice-based output.
III. PROPOSED SYSTEM
The proposed system has been built as a vision-based system. Using various machine learning and deep learning approaches, it achieves an accuracy of up to 90%. Two experiments are performed on the banknote image: the first is the analysis of different regions of the banknote, and the second is the examination of the banknote image captured by a camera in a controlled environment. The system is extended to support visually impaired people through a "voice-based assistance system" that produces spoken output.
IV. MATERIALS & METHODS
A. Datasets
In this work, Indian banknotes are used as the dataset. The dataset contains notes of Rs.10, Rs.20, Rs.50, Rs.100, Rs.200, Rs.500 and Rs.2000. As is well known, Indian currency has various security features. Photographs of these seven classes of notes were taken in 12 different positions.
The 12 positions are listed below:
P1: Picture with the straight focus
P2: Picture with the backside focused.
P3: Picture with left flip
P4: Picture with right flip
P5: Half image of the note
P6: Inverted image of the note
P7: Picture folded in half from the right side.
P8: Picture folded in half from the left side.
P9: Blurry Image of the note
P10: Vertical Image of front face of the note.
P11: Vertical image of back face of the note.
P12: Picture folded to 1/4th from the lower edge at straight focus.
B. Modules
OpenCV: The Open Source Computer Vision Library, commonly known as OpenCV, is used as one of the ML libraries. It provides real-time optimized computer vision tools. In this work, it is used for grayscale conversion of the image.
Mahotas: Mahotas is a computer vision and image processing library for Python. It provides a collection of functions that perform numerous operations without the need to write many lines of code.
Hu moments: The Hu moments form an image descriptor used to characterize the shape of an object in an image. The image to be described can be either a segmented binary image or the outline of an object.
Random Forest: Random forest is a supervised ML algorithm that is broadly used for classification and regression problems.
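An illustrative random forest sketch using scikit-learn (the paper does not name a specific implementation, so this is an assumption); the two synthetic "denomination" classes and their three made-up colour features stand in for features extracted from real banknote images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Two synthetic classes of notes, each described by three made-up features.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.2, scale=0.05, size=(50, 3))   # e.g. one denomination
class_b = rng.normal(loc=0.8, scale=0.05, size=(50, 3))   # e.g. another
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

# Train an ensemble of decision trees and classify two unseen samples.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0.21, 0.19, 0.22], [0.79, 0.81, 0.80]])
```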
K-means clustering: K-means is an unsupervised ML algorithm. It takes an unlabeled dataset and partitions its items into clusters; if K=3, three clusters are created.
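A short K-means sketch with K=3, using scikit-learn (assumed here; the paper does not name the implementation) on synthetic unlabeled points:

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated blobs of unlabeled 2-D points; with K=3 the
# algorithm assigns each point to one of three clusters, as described above.
rng = np.random.default_rng(1)
blobs = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(30, 2)) for c in (0.0, 5.0, 10.0)
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(blobs)
```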
gTTS: Google Text-to-Speech (gTTS) is a Python library and CLI tool that interfaces with the Google Translate text-to-speech API. In the proposed system, this library produces the voice output.
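A minimal sketch of the voice-output step. The announcement wording and helper name are assumptions, not the paper's exact code, and the actual gTTS call needs a network connection, so it is shown only as a comment:

```python
# Hypothetical helper: builds the spoken message for a recognized note.
def announcement(denomination_rupees):
    return f"This is a {denomination_rupees} rupee note"

msg = announcement(500)

# Producing the actual audio would use the gTTS package, roughly as
# follows (requires the gtts package and network access; not executed here):
#   from gtts import gTTS
#   gTTS(text=msg, lang="en").save("note.mp3")
```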
V. SYSTEM ARCHITECTURE
The above diagram illustrates the system architecture. First, the system captures an image of the banknote using the built-in camera. The captured image is then fed to the system.
The random forest and K-means clustering algorithms help in classifying the dataset.
After classification of the dataset, feature extraction is performed using OpenCV, which extracts the features of the banknote, namely its RGB colours.
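One simple example of an RGB colour feature of the kind described here is the per-channel mean of the note image; the image below is a synthetic stand-in, and this is only an assumed sketch of the feature, not the paper's exact extraction code:

```python
import numpy as np

# A stand-in "note image": 10x10 pixels, strongly green.
note = np.zeros((10, 10, 3), dtype=np.uint8)
note[..., 1] = 200

# Collapse the image to one 3-value colour feature vector:
# the mean intensity of each colour channel.
mean_rgb = note.reshape(-1, 3).mean(axis=0)
```

Such a compact colour vector can then be matched against the colour features stored for each trained denomination.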
Further, the captured image is matched against the trained dataset. Once the system matches the banknote to a particular dataset entry, it shows the result.
VI. FUTURE WORK
Apart from the voice-based output, using a transfer learning (TL) strategy and algorithms such as CNNs, the proposed system can be further developed to include a fake-note detection component.
VII. CONCLUSION
In the Hungarian banknote detection system, the system was built using many sensor devices, which made it bulky and yielded less accurate results. Since our proposed system is built using a vision-based method, it is able to produce accurate results. In the Malaysian banknote detection system, algorithms such as K-nearest neighbour, support vector machine, decision tree, and Bayesian network were used. That work reports higher accuracy than the system proposed by us because it was built for the Malaysian currency, the Ringgit, which has fewer security traits than the Indian Rupee. In addition, the existing system did not have a voice-based output feature. Using the Google Text-to-Speech API (gTTS), our proposed system is able to produce voice-based output, which helps visually impaired people recognize banknotes even more easily.
REFERENCES
[1] Z. Solymár et al., "Banknote recognition for visually impaired," in Proc. 20th European Conference on Circuit Theory and Design (ECCTD), 2011.
[2] L. Sun, "Banknote Image Collection System Design Based on TMS320C6416 DSK," in Proc. International Conference on Control Engineering and Communication Technology, 2012.
[3] A. Mohamed, M. I. Ishak, and N. Buniyamin, "Development of a Malaysian Currency Note Recognizer for the Vision Impaired," in Proc. 2012 Spring Congress on Engineering and Technology (S-CET), 2012.
[4] T. D. Pham et al., "Banknote recognition based on optimization of discriminative regions by genetic algorithm with one-dimensional visible-light line sensor," Pattern Recognition, vol. 72, pp. 27-43, 2017.
[5] C. M. Costa, G. Veiga, and A. Sousa, "Recognition of Banknotes in Multiple Perspectives Using Selective Feature Matching and Shape Analysis," in Proc. 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2016.
[6] S. Kamal et al., "Feature extraction and identification of Indian currency notes," in Proc. 2015 Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2015.
[7] O. Jin et al., "Recognition of New and Old Banknotes Based on SMOTE and SVM," in Proc. 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing, 12th Intl Conf on Autonomic and Trusted Computing, and 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), 2015.
[8] S. Rahman, P. Banik, and S. Naha, "LDA based paper currency recognition system using edge histogram descriptor," in Proc. 2014 17th International Conference on Computer and Information Technology (ICCIT), 2014.
[9] S. Duraisamy and S. Emperumal, "Computer-aided mammogram diagnosis system using deep learning convolutional fully complex valued relaxation neural network classifier," IET Computer Vision, vol. 11, no. 8, pp. 656-662, 2017.
[10] Y. Yang, D. Li, and Z. Duan, "Chinese vehicle license plate recognition using kernel-based extreme learning machine with deep convolutional features," IET Intelligent Transport Systems, 2018.