Drowsiness and intoxication are major contributors to car accidents, posing significant risks to road safety. The implementation of effective drowsiness detection technologies could help prevent numerous fatal accidents by alerting fatigued drivers in advance. Various techniques can be adopted to monitor driver attentiveness while driving and provide timely notifications.
In the context of self-driving cars, sensors play a crucial role in identifying signs of sleepiness, anger, or extreme emotional changes in drivers. These sensors continuously monitor facial expressions and detect facial landmarks to assess the driver's state and ensure safe driving. When such changes are detected, the system promptly assumes control of the vehicle, reduces its speed, and alerts the driver through alarms to draw attention to the situation. To enhance accuracy, the proposed system integrates with the vehicle's electronics, tracking its statistics to provide more precise results.
In this research, we have implemented real-time image segmentation and drowsiness detection using machine learning methodologies. Specifically, an emotion detection method based on Support Vector Machines (SVM) has been employed, utilizing facial expressions. The algorithm underwent testing under varying luminance conditions and exhibited superior performance compared to existing research, achieving an 83.25% accuracy rate in detecting facial expression changes.
I. INTRODUCTION
Drowsiness poses a significant threat to road safety and is responsible for a substantial number of serious car accidents. According to the National Highway Traffic Safety Administration, driver fatigue leads to approximately 1,550 fatalities and 71,000 injuries per year in the United States alone, causing losses of up to 12.5 billion dollars. Additionally, a recent report indicates that the U.S. government and businesses collectively spend around 60.4 billion dollars annually to address accidents related to drowsiness.
The impact of drowsiness extends beyond financial costs, as it also incurs a toll on individuals. Consumers bear approximately 16.4 billion dollars in property damage, healthcare expenses, lost time, and reduced productivity due to drowsiness-related incidents.
II. RELATED WORK
A. Eye Localization
In the context of driver drowsiness, our research focuses on a specific area of the face known as the Eye Region of Interest (EROI), which lies between the forehead and the mouth. This region is chosen due to the consistent position of the eyes in facial anthropometric properties. Leveraging the symmetrical nature of the eyes, we employ a detection technique that involves vertically sweeping a rectangular mask across the face. The mask has an estimated height corresponding to the eye and a width equal to the face's width. By calculating the symmetry within this region, we can identify the area with the highest symmetry measurement, which corresponds to the eye region. Subsequently, within this identified region, we perform symmetry calculations separately for the left and right sides of the face. The point with the highest symmetry value indicates the center of the eye.
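The vertical sweep described above can be sketched as follows. This is a minimal illustration with NumPy on a synthetic grayscale face image, not the paper's implementation: the band height, the mirror-difference symmetry score, and the synthetic test image are all assumptions made for demonstration.

```python
import numpy as np

def symmetry_score(strip: np.ndarray) -> float:
    """Negated mean absolute difference between a strip and its horizontal
    mirror: a perfectly left-right symmetric strip scores 0, the maximum."""
    mirrored = strip[:, ::-1]
    return -float(np.mean(np.abs(strip.astype(float) - mirrored.astype(float))))

def locate_eye_band(face: np.ndarray, band_height: int) -> int:
    """Sweep a full-width band of `band_height` rows down the face image and
    return the top row of the band with the highest symmetry score."""
    best_row, best_score = 0, -np.inf
    for top in range(face.shape[0] - band_height + 1):
        score = symmetry_score(face[top:top + band_height, :])
        if score > best_score:
            best_row, best_score = top, score
    return best_row

# Synthetic example: a noisy 60x40 "face" whose rows 20..27 are made
# perfectly symmetric, standing in for the eye region.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(60, 40)).astype(np.uint8)
half = rng.integers(0, 256, size=(8, 20))
face[20:28, :] = np.hstack([half, half[:, ::-1]])

print(locate_eye_band(face, 8))  # the symmetric band starts at row 20
```

Once this band is found, the same score can be computed separately over the left and right halves of the band to localize each eye's center, as described above.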
B. Use-Case Diagram
A use-case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals, and any dependencies between those use cases. The main purpose of a use-case diagram is to show which system functions are performed for which actor; the roles of the actors in the system can also be depicted.
III. METHODOLOGIES
A. Proposed Method
Machine Learning, a subfield of Artificial Intelligence (AI), involves the development of computer programs that can learn from experience. These programs are commonly referred to as machine learning programs or learning programs. Machine Learning encompasses four main types of learning methods:
a. Supervised Learning: In this approach, the learning program is provided with labeled training data, where each data point is associated with a corresponding target or output. The program learns to make predictions or classify new data based on the patterns and relationships it discovers in the labeled examples.
b. Unsupervised Learning: Unlike supervised learning, unsupervised learning deals with unlabeled data, where the learning program explores the data to identify inherent patterns, structures, or relationships without explicit guidance. Clustering and dimensionality reduction are common techniques used in unsupervised learning.
c. Semi-Supervised Learning: This learning method combines elements of both supervised and unsupervised learning. It leverages a small amount of labeled data along with a larger set of unlabeled data to improve the learning program's performance in making predictions or classifying new instances.
d. Reinforcement Learning: In reinforcement learning, the learning program interacts with an environment and learns to make decisions or take actions based on feedback received in the form of rewards or punishments. Through trial and error, the program aims to maximize its cumulative reward by discovering optimal strategies or policies.
These four types of learning methods provide diverse approaches to tackling different problem domains and learning scenarios.
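Since the paper's drowsiness classifier is a supervised SVM trained on facial expressions, a minimal supervised-learning sketch with scikit-learn may help. The two-dimensional feature vectors below (eye-openness and mouth-openness ratios) are hypothetical stand-ins invented for illustration, not the paper's actual features or data.

```python
from sklearn.svm import SVC

# Toy labelled training set: [eye_openness, mouth_openness] -> 0 = alert, 1 = drowsy.
# These numbers are invented for demonstration only.
X_train = [[0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # alert examples
           [0.20, 0.60], [0.15, 0.70], [0.25, 0.65]]   # drowsy examples
y_train = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear")   # a linear SVM, one common choice for such features
clf.fit(X_train, y_train)    # learn a decision boundary from labelled examples

# Classify two unseen frames: one wide-eyed, one heavy-lidded and yawning.
print(clf.predict([[0.88, 0.12], [0.18, 0.68]]))  # -> [0 1]
```

In a real pipeline the features would be extracted from the detected facial landmarks for each video frame before being passed to the classifier.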
In the second test, the camera is positioned directly in front of the driver while the driver performs head rotations. The driver starts by turning their head to the right, returning to the starting position, and then rotating their head to the left before returning to the center position. The objective of this test is to determine the angle at which our head/eye detection algorithm begins to falter and to assess its consistency across frames.
The test setup and results are presented, indicating that the difference in detection angles is relatively small compared to the previous test, measuring only around 5 degrees. This suggests that the algorithm consistently and reliably detects the driver's eyes as long as the driver is facing the camera directly and their gaze does not deviate beyond 35 degrees in either direction from the front-facing position.
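The consistency check described above can be sketched as follows. This is a hypothetical harness, not the paper's test code: `detection_ok` is a stand-in for the real eye detector, hard-coded with the reported ~35-degree limit, and the simulated yaw sweep is invented for illustration.

```python
MAX_YAW_DEG = 35.0  # reported detection limit from the test above

def detection_ok(yaw_deg: float) -> bool:
    """Stand-in for the eye detector: assume detection holds while the head
    stays within MAX_YAW_DEG of the frontal position."""
    return abs(yaw_deg) <= MAX_YAW_DEG

def failure_angles(yaws):
    """Given per-frame yaw estimates (+ right, - left), return the first
    angle at which detection fails on each side, or None if it never fails."""
    right = next((y for y in yaws if y > 0 and not detection_ok(y)), None)
    left = next((y for y in yaws if y < 0 and not detection_ok(y)), None)
    return right, left

# Simulated sweep: head turns right to +45 deg, back past center to -45 deg,
# then returns to frontal, sampled every 5 degrees.
sweep = list(range(0, 46, 5)) + list(range(45, -46, -5)) + list(range(-45, 1, 5))
r, l = failure_angles(sweep)
print(r, l, abs(abs(r) - abs(l)))  # left/right asymmetry of the failure angle
```

The last printed value corresponds to the left/right difference in detection angles, which the test above found to be only about 5 degrees in practice.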
IV. CONCLUSION
1) The proposed research demonstrates the robustness of the real-time drowsiness detection technique regardless of changes in illumination. Our study incorporates support vector machine classification and image-processing clustering methods for real-time classification and video analysis, utilizing input from the corresponding hardware.
2) The algorithm has been tested extensively with various input parameters. Notably, it exhibited superior accuracy under varying illumination conditions, specifically when the distance from the camera was optimal.