Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Uttam R. Patole, Krishnagopal Rajesh Kumar Sinha, Gaurav Prakash Khairnar, Ritik Shivram Zinjurde, Sanika Jitendrasingh Chauhan
DOI Link: https://doi.org/10.22214/ijraset.2023.50662
Human-Machine Interface (HMI) devices, such as keyboards and mice, have been the primary means of input for computers since their inception. However, these devices have limitations that make them unsuitable for individuals with motor disabilities, motor impairment, or diseases such as paralysis, muscular dystrophy, polio, cerebral palsy, and others. This limits their ability to fully engage in computer-related activities, which can have a significant impact on their quality of life. This research paper proposes that new and emerging technologies, such as Brain-Machine Interface (BMI) and Machine Learning (ML), could be utilized to design a more convenient and accessible HMI solution that improves the quality of life for individuals with physical disabilities. BMI technology enables communication between the brain and external devices, while ML can analyse data and make predictions based on patterns and models. Combining these technologies can provide a more intuitive and adaptive interface that can detect and respond to the user's intentions and needs. These new HMI methods could either replace or supplement the existing ones, offering an alternative or backup when needed. Moreover, utilizing BMI and ML can ensure that the new HMI solution is user-friendly for all individuals, regardless of their physical abilities. The proposed HMI solution could potentially enhance the user's independence, reduce the need for assistance, and promote inclusion and accessibility for all. In conclusion, this research paper proposes a more convenient and accessible HMI solution that leverages emerging technologies such as BMI and ML. The proposed solution could potentially offer an alternative or backup to traditional HMI methods and promote inclusivity for individuals with physical disabilities.
I. INTRODUCTION
The Brain-Machine Interface (BMI) is a new and emerging technology that utilizes an Electroencephalogram (EEG) headset to read the brain waves produced in the human brain [2]. The collected data can be analyzed using advanced techniques such as Machine Learning (ML) or other AI techniques to find correlations between the brain wave patterns produced by a person and the actions they performed when the brain wave pattern was recorded. However, due to the unique thinking patterns of each individual, there is a possibility that the implementation may not function accurately for everyone. To mitigate this, an ML model can be trained with the help of a Neural Network (NN) to generate a generalized model that is likely to be accurate for most people. Classification of EEG signals is typically performed in the following bands: α, β, δ, θ, and γ, which are used to classify and name the signals from various areas of the head, recorded by each EEG electrode. However, various artifacts can be present in EEG signals, such as Electrocardiogram (ECG), Electromyography (EMG), and eye movement artifacts. Therefore, pre-processing of raw brain signals, extraction of significant features, and classification play a crucial role in the performance of the BMI system [3]. EEG headsets integrated with the ThinkGear chip facilitate signal processing and send the collected data to an open network socket due to the chip [5]. For convenience and ease of use, an EEG headset equipped with the same chip is chosen for this system. The proposed BMI system has the potential to provide a user-friendly and convenient human-machine interface for people suffering from neurogenic diseases, motor impairment, or disabilities, as well as for able-bodied individuals, surpassing traditional HMIs in terms of ease of use and convenience.
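As an illustration of the network-socket approach mentioned above, the following listing is a minimal Python sketch of reading packets from a ThinkGear-style socket. It assumes the vendor's ThinkGear Connector service is running locally on its usual TCP port (13854), that it streams carriage-return-delimited JSON packets, and that the configuration string shown is accepted; these are assumptions about the vendor software rather than details reported in this paper.

import json
import socket

HOST, PORT = "127.0.0.1", 13854  # assumed ThinkGear Connector defaults

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the connector for JSON output rather than the binary stream (assumed config format).
    sock.sendall(b'{"enableRawOutput": false, "format": "Json"}\n')
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        *packets, buffer = buffer.split(b"\r")  # packets assumed to be CR-delimited
        for raw in packets:
            raw = raw.strip()
            if not raw:
                continue
            packet = json.loads(raw)
            # The eSense block carries the chip's 0-100 attention/meditation estimates.
            if "eSense" in packet:
                print("attention:", packet["eSense"].get("attention"))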
II. LITERATURE SURVEY
A. Wireless Gyro-mouse for Text Input on a Virtual Keyboard, 2022 45th International Spring Seminar on Electronics Technology (ISSE), 2022.
In their paper entitled 'Wireless Gyro-mouse for Text Input on a Virtual Keyboard', presented at the 2022 45th International Spring Seminar on Electronics Technology (ISSE), Rares Pogoreanu and Radu Gabriel Bozomitu presented a novel Human-Machine Interface (HMI) system [1]. The system uses a three-axis gyroscope sensor, a microprocessor, and the OptiKey on-screen keyboard to function as a pointing device for people with disabilities.
However, the authors note that this implementation is not suitable for individuals with upper body paralysis. Additionally, since the system relies on the OptiKey keyboard software, which only runs on the Windows platform, it is limited to use as a pointing device for devices running Windows and is not portable.
To summarize, Pogoreanu and Bozomitu's study presents a new HMI system using a wireless gyro-mouse and on-screen keyboard as a pointing device. While the system is user-friendly for people with certain disabilities, it is limited in its functionality and portability.
B. A Single Electrode Blink for Text Interface, 2020 IEEE International Conference for Innovation in Technology (INOCON), 2020.
Natarajan et al. implemented a system that detected blinks using a single-electrode electroencephalogram (EEG) headset and processed them to trigger keypresses on the default Windows on-screen keyboard [5].
This implementation has the drawback of high latency; it takes too long to type sentences. Also, since it relies on OpenVibe, which runs only on Windows, together with the Windows built-in on-screen keyboard, the implementation is limited to the Windows operating system and is therefore not portable. Eye blinks can also be detected easily with camera input and machine learning models, without the need to invest in an expensive EEG headset.
C. Wearable Multifunctional Computer Mouse Based on EMG and Gyro for Amputees, 2020 2nd International Conference on Advanced Information and Communication Technology (ICAICT), 2020.
Rokib Raihan et al. implemented a portable electromyogram (EMG) detection circuit that operates on a single supply, and introduced an auto-thresholding algorithm and a muscle-contraction detection algorithm to help amputees control the mouse cursor [4]. However, this system has some drawbacks: it cannot be fully utilized by amputees or handicapped people who have lost either their biceps or triceps muscles, and if both muscles are lost, it cannot be used at all. Individuals with muscular dystrophy or muscle atrophy may also find it difficult to use. Additionally, the system is not suitable for people suffering from paralysis of the upper body, as neck movements are required for the gyro output, and EMG depends on signals from motor neurons, which may be absent in individuals with neurogenic diseases. Furthermore, during inflammatory and dystrophic muscle diseases, this system may not function as intended.
III. OBJECTIVES
IV. IMPLEMENTATION
The proposed system was implemented in two main phases: training the neural-network-based model and testing the built model, as shown in Fig. 1 and Fig. 2 respectively. To implement the system, the following steps were performed.
A. EEG Signal Acquisition
Using an EEG headset, it was possible to gather brain signals that can be used to detect whether the user is trying to focus. This is typically achieved by measuring the power spectral density (PSD) of the EEG signal in specific frequency bands, such as the alpha and beta bands, which are known to be associated with cognitive processes like attention and focus.
To gather the signals, the EEG headset is placed on the user's head, ensuring that the non-invasive electrodes sit at the positions given by the placement guidelines for that particular headset, and then turned on. To transmit the gathered signals, the headset is connected to a computer; in our case the headset supported Bluetooth, so we paired it with our system using the standard Windows Bluetooth pairing, which exposed it as a COM port. To read the EEG data, we wrote a Python program that reads and processes the information arriving on that COM port. This data was pre-processed using the methods described below and stored in a .csv file. The data was recorded in small batches, depending on whether the user was focusing or not, so that it could be labelled easily.
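The following is a minimal sketch of this acquisition step, assuming the pyserial package; the port name, baud rate, batch length, and the parse_packet() decoder are placeholders introduced here for illustration, not the exact program used.

import csv
import serial  # pyserial

PORT = "COM5"        # placeholder: the COM port Windows assigns to the Bluetooth headset
BAUD = 57600         # placeholder baud rate
BATCH_SECONDS = 30   # each labelled batch covers a short, known mental state

def parse_packet(raw: bytes) -> dict:
    # Hypothetical stand-in for a real ThinkGear packet decoder; it should
    # return one dictionary of band-power values per read.
    return {"delta": 0.0, "theta": 0.0, "alpha": 0.0, "beta": 0.0, "gamma": 0.0}

def record_batch(label: str, path: str = "eeg_data.csv") -> None:
    # Append BATCH_SECONDS rows to the CSV file, each tagged with the label
    # ("focus" or "rest") announced to the user before recording started.
    with serial.Serial(PORT, BAUD, timeout=1) as link, open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(BATCH_SECONDS):
            values = parse_packet(link.read(512))
            writer.writerow([label] + list(values.values()))

record_batch(label="focus")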
B. Pre-processing
The signal obtained from the EEG headset needs to go through several preprocessing steps. These include temporal filtering, stimulation-based epoching, time-based epoching, and calculation of the logarithmic band power. Filtering is necessary to eliminate noise from the signal and is achieved by applying a fourth-order temporal Butterworth bandpass filter. Stimulation-based epoching involves slicing the signal into chunks of a specified length that follow a stimulation event, while time-based epoching segments the signal into blocks at regular intervals, with the duration selected based on the length of an eye blink, which is typically between 0.5 s and 1.5 s, averaging 1 s. Finally, logarithmic band power calculation assigns a single number that summarizes the contribution of a given frequency band to the overall power of the signal. Low noise is essential to avoid the underfitting and overfitting issues that plague Deep Neural Network (DNN) models.
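A minimal sketch of this preprocessing chain is shown below, using SciPy; the sampling rate, band edges, and the 1 s epoch length are illustrative values rather than the exact parameters used.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 512             # assumed headset sampling rate (Hz)
EPOCH_SECONDS = 1.0  # time-based epoch length, matching an average eye blink

def bandpass(signal, low=1.0, high=30.0, order=4):
    # Fourth-order temporal Butterworth bandpass filter to suppress noise.
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, signal)

def epochs(signal, seconds=EPOCH_SECONDS):
    # Time-based epoching: split the signal into fixed-length blocks.
    step = int(seconds * FS)
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

def log_band_power(epoch):
    # One number summarising the power of the (already band-limited) epoch.
    return float(np.log(np.mean(np.square(epoch)) + 1e-12))

raw = np.random.randn(10 * FS)  # stand-in for 10 s of raw EEG
band_powers = [log_band_power(e) for e in epochs(bandpass(raw))]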
C. Training and Testing the Neural Network
In order to create a deep neural network on the EEG dataset, several design choices were made with regards to the architecture of the network. This involved selecting the number and types of layers, activation functions, loss functions, and optimization algorithms. Feature engineering was also performed on the EEG dataset. This involved selecting and extracting relevant features from the raw EEG data that could be used to train the deep neural network. Some of the features that were extracted included the power spectral density (PSD) of specific frequency bands, such as alpha and beta, which are known to be associated with cognitive processes like attention and focus. Other features included the calculation of time-domain statistical features such as mean, standard deviation, and variance. These features were selected based on their relevance to the task of detecting focus, as well as their ability to provide meaningful information to the deep neural network. Feature engineering was an iterative process, with different combinations of features being tested and evaluated for their effectiveness in improving the performance of the deep neural network. Once these choices were made, the network was trained on the EEG training dataset that was previously created. During training, the weights of the network were updated iteratively until the loss function was minimized, indicating that the network was accurately predicting the focus state of the individual.
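The listing below is a minimal sketch of this feature set and of a small dense network for the binary focus/no-focus decision, using SciPy and Keras. The layer sizes, band edges, and training settings are illustrative choices and not the exact architecture or hyperparameters arrived at in this work.

import numpy as np
from scipy.signal import welch
from tensorflow import keras

FS = 512  # assumed sampling rate (Hz)

def band_power(freqs, psd, low, high):
    # Integrate the PSD over one frequency band.
    mask = (freqs >= low) & (freqs < high)
    return float(np.trapz(psd[mask], freqs[mask]))

def extract_features(epoch):
    # PSD-based band powers plus simple time-domain statistics.
    freqs, psd = welch(epoch, fs=FS, nperseg=min(256, len(epoch)))
    return np.array([
        band_power(freqs, psd, 8, 13),   # alpha
        band_power(freqs, psd, 13, 30),  # beta
        epoch.mean(), epoch.std(), epoch.var(),
    ])

model = keras.Sequential([
    keras.layers.Input(shape=(5,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(focused)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: one feature vector per epoch; y: 1 = focusing, 0 = not focusing.
# Random arrays stand in here for the labelled epochs from the .csv dataset.
X = np.stack([extract_features(np.random.randn(FS)) for _ in range(64)])
y = np.random.randint(0, 2, size=64)
model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)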
After training, the network was tested on a separate testing dataset to evaluate its performance. The performance was evaluated using several metrics, such as accuracy, precision, recall, and F1 score. If the performance was not satisfactory, the network architecture was modified; specifically, the hyperparameters were tuned to improve the performance.
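Continuing the sketch above, the evaluation step could look like the following, with X_test and y_test standing for a hypothetical held-out split of the labelled epochs; scikit-learn's standard metric functions are used.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_pred = (model.predict(X_test, verbose=0) > 0.5).astype(int).ravel()
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))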
Once the trained network had achieved satisfactory performance, it was ready to be used to make predictions on new EEG data in real-time. This would involve the live, real-time gathering of data from the EEG headset, which would then be fed into the network to determine whether the person was focusing or not. In summary, the process involved selecting a deep neural network architecture, training the network on the EEG dataset, testing the network's performance, tuning the network's hyperparameters if necessary, and finally, using the trained network to make predictions on new EEG data in real-time.
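A minimal sketch of that real-time loop, reusing the bandpass, extract_features, and model objects from the sketches above, is shown below; read_epoch() is a hypothetical helper that returns the latest one-second window from the live EEG stream.

import numpy as np

def read_epoch():
    # Hypothetical: the most recent 1 s of samples from the live stream.
    return np.random.randn(FS)

while True:
    epoch = bandpass(read_epoch())
    p_focus = model.predict(extract_features(epoch)[np.newaxis, :], verbose=0)[0, 0]
    # In the full system a "focused" prediction would be mapped to an action
    # such as a mouse click; here it is only printed.
    print("focused" if p_focus > 0.5 else "not focused")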
D. Implementing the System
The architecture of the prototype of the proposed system is shown in Fig. 3. A microcontroller was coupled with a gyroscope and an accelerometer to obtain multi-axis 3D motion data, allowing the user to input spatial (continuous and multi-dimensional) data to a machine. These inputs can be used to simulate a pointing device such as a mouse and are used to control the mouse cursor, or pointer, of a computer.
An EEG headset worn by the user is connected to the system, which runs the model and passes real-time EEG data to it so that it can make predictions. These predictions can then be mapped to predefined functions such as click, selection, enter, and other commands that a machine may support, effectively simulating a Human-Machine Interface (HMI) device. The EEG headset worn by the user is coupled with the aforementioned microcontroller setup; these devices are powered by portable batteries and communicate with the target machine, a computer in our case, via Bluetooth.
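The listing below is a minimal sketch of the host-side loop that could tie these pieces together: gyroscope/accelerometer readings arriving over the Arduino's serial link move the cursor, and a "focus" prediction triggers a click. The port name, the "dx,dy" line format, the gain, and the predict_focus() wrapper around the trained model are assumptions made for illustration; pyserial and pyautogui are used for the serial link and the cursor.

import pyautogui
import serial  # pyserial

arduino = serial.Serial("COM6", 115200, timeout=0.1)  # placeholder port and baud rate
GAIN = 2.0  # cursor-speed scaling factor

def predict_focus() -> bool:
    # Hypothetical wrapper that feeds the latest EEG epoch through the
    # trained model and returns True when the user is focusing.
    return False

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    if line:
        try:
            dx, dy = (float(v) for v in line.split(","))  # assumed "dx,dy" line format
            pyautogui.moveRel(int(GAIN * dx), int(GAIN * dy))
        except ValueError:
            pass  # skip malformed lines
    if predict_focus():
        pyautogui.click()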
V. EXPERIMENTAL SETUP
The experiment was conducted on people between the ages of 16 and 53 years of different genders. The user's mood was also considered as a feature, but later testing showed it to be of no importance, so it was dropped. The proposed HMI system's architecture is shown in Fig. 3. To check that the proposed HMI system functioned correctly, a program was written that tested both the system's spatial input and the correctness of its prediction of whether the user is focusing. This program is referenced below.
The testing was carried out incrementally: unit testing first, then integration testing, and finally system testing. Test-1 was a simple test that checked only whether the core modules, that is, the EEG headset and the deep neural network model, worked correctly together and detected in real time whether the user was focusing. Test-2 dealt with testing the movement of the pointing device, driven by an accelerometer and gyroscope coupled with a microcontroller (an Arduino in our case), according to the 3D motion of the user's head. Finally, both modules were coupled and Test-3, the system-level test, was performed on the entire system. Test-3 was a manual testing procedure in which "Test3 Passed!" had to be typed using the proposed HMI system.
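As an illustration, the system-level check in Test-3 can be reduced to a short script like the one below, in which the tester types the target phrase using only the proposed HMI (head motion for the cursor, focus detection for clicks, and an on-screen keyboard for characters) and the script verifies the result; this is a simplified stand-in for the test program mentioned above.

TARGET = "Test3 Passed!"

typed = input("Type the target phrase using the proposed HMI: ")
print("Test-3 passed" if typed == TARGET else "Test-3 failed")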
VII. ADVANTAGES
VIII. DISADVANTAGES
IX. FUTURE SCOPE
The study has shown promising results in utilizing EEG-based Brain-Machine Interfaces for revolutionizing Human-Computer Interaction. However, there is still a lot of potential for further research and improvement in this field.
Future studies could focus on increasing the accuracy of the prediction model by incorporating more advanced algorithms and techniques. Additionally, the study could be expanded to include more participants to increase the diversity of the data and account for individual differences. As manufacturing advances, EEG headsets can be integrated with other devices worn on the head, such as headphones, earphones, or other headsets, which would make EEG headsets more convenient for everyday use while also reducing their manufacturing cost. Additionally, EEG headsets with a high number of channels can be used to extract more data from a person's brain. This additional data can be used to find new patterns and correlations between a person's actions and brain activity, which in turn can lead to new applications and use cases for EEG-based brain-machine interface technology. Furthermore, this technology can be extended to animals with brain structures similar to humans, such as chimpanzees, monkeys, and other primates, opening new avenues of research into the neural activity and behavior of these animals.
Overall, the results of this study indicate a promising future for the development and utilization of EEG-based Brain-Machine Interfaces for improving Human-Computer Interaction. Further research in this area has the potential to unlock new opportunities for individuals with disabilities and improve the efficiency and convenience of interactions between humans and technology.
X. ACKNOWLEDGMENT
We are deeply grateful for the opportunity to delve into this intriguing topic and broaden our knowledge and perspectives through this project. Our heartfelt gratitude goes out to Imonsar Technologies Pvt. Ltd. for their unwavering support and sponsorship, which has made this research possible.
We would also like to extend our sincere appreciation to our professor, Uttam R. Patole, for his invaluable guidance, mentorship, and encouragement throughout the project. Without his profound expertise and wisdom, this research would not have been possible.
Furthermore, we would like to express our gratitude to the faculty of Sir Visvesvaraya Institute of Technology for their constant support and feedback throughout this project. Their constructive criticism, encouragement, and guidance have been crucial in shaping our research and enabling us to achieve our goals. This project would not have been possible without their remarkable support.
In conclusion, Brain-Machine Interface (BMI) technology has emerged as a promising approach that utilizes the Electroencephalogram (EEG) headset to read the brain waves produced in the human brain. By analyzing the collected data using advanced techniques such as Machine Learning (ML) or other AI techniques, correlations between the brain wave patterns and actions performed can be identified. The proposed BMI system has the potential to provide a user-friendly and convenient human-machine interface for people suffering from neurogenic diseases, motor impairment, or disabilities, as well as for able-bodied individuals. To implement the proposed system, two main phases, namely training the neural network-based model and testing the built model, were performed. The data was preprocessed by temporal filtering, stimulation-based epoching, time-based epoching, and calculation of the logarithmic band power, and then used to train and test the neural network. The performance of the network was evaluated using several metrics such as accuracy, precision, recall, and F1 score. The results showed that the proposed BMI system has the potential to provide a reliable and accurate method for detecting focus in individuals. However, further research is required to address the limitations and challenges associated with the unique thinking patterns of each individual and the presence of various artifacts in EEG signals. Overall, the proposed system has the potential to revolutionize the field of human-machine interfaces and improve the quality of life for individuals with motor impairment, neurogenic diseases, or disabilities.
[1] R. Pogoreanu and R. G. Bozomitu, "Wireless Gyromouse for Text Input on a Virtual Keyboard," 2022 45th International Spring Seminar on Electronics Technology (ISSE), 2022, pp. 1-4, doi: 10.1109/ISSE54558.2022.9812793.
[2] S. Becker, K. Dhindsa, L. Mousapour and Y. Al Dabagh, "BCI Illiteracy: It's Us, Not Them. Optimizing BCIs for Individual Brains," 2022 10th International Winter Conference on Brain-Computer Interface (BCI), 2022, pp. 1-3, doi: 10.1109/BCI53720.2022.9735007.
[3] M. M. Wankhade and S. S. Chorage, "Eye-Blink Artifact Detection and Removal Approaches for BCI using EEG," 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), 2021, pp. 718-721, doi: 10.1109/RTEICT52294.2021.9574024.
[4] M. R. Raihan, A. B. Shams and M. Ahmad, "Wearable Multifunctional Computer Mouse Based on EMG and Gyro for Amputees," 2020 2nd International Conference on Advanced Information and Communication Technology (ICAICT), 2020, pp. 129-134, doi: 10.1109/ICAICT51780.2020.9333476.
[5] H. D, R. M, R. Jadon and Natarajan, "A Single Electrode Blink for Text Interface (BCI)," 2020 IEEE International Conference for Innovation in Technology (INOCON), 2020, pp. 1-5, doi: 10.1109/INOCON50539.2020.9298387.
Copyright © 2023 Uttam R. Patole, Krishnagopal Rajesh Kumar Sinha, Gaurav Prakash Khairnar, Ritik Shivram Zinjurde, Sanika Jitendrasingh Chauhan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET50662
Publish Date : 2023-04-19
ISSN : 2321-9653
Publisher Name : IJRASET