Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Aakansh Garg, Abhinav Dagar, Chetanya Khurana, Dr. Rajesh Singh, Ms. Kalpana Anshu
DOI Link: https://doi.org/10.22214/ijraset.2022.42947
Various industries around the world were affected by the COVID-19 pandemic. Some sectors, such as the construction industry, remained open despite the closures. The WHO has advised workers to wear masks and to avoid working in areas with a high risk of infection. This paper develops a computer vision system that automatically detects whether workers on construction sites are wearing masks and maintaining physical distance. Over a thousand images were collected and added to a dataset, and the algorithm was trained and tested on various object detection models. People are detected using the Faster R-CNN Inception V2 model, and the distance between people is computed using the Euclidean distance. The model was then trained and evaluated on various pictures and videos to detect the presence of masks.
I. INTRODUCTION
The spread of COVID-19 had resulted in the deaths of more than 1,841,000 people worldwide and 351,000 people in the US as of December 31, 2020 [1]. The spread of the virus can be reduced by minimizing exposure to contaminated environments [2], [3] or by preventing person-to-person transmission through physical distancing and wearing masks. The WHO describes physical distancing as keeping six feet or two meters away from others and recommends that maintaining this distance and wearing a mask can significantly reduce the spread of the COVID-19 virus [4]-[7]. Like other sectors, the construction industry has been affected, with non-essential projects suspended or scaled back. However, many infrastructure projects cannot be postponed because they play an important role in people's lives; bridge rehabilitation, road widening, highway repairs, and other critical infrastructure projects keep the transport system running. As infrastructure projects continue, the safety of construction workers must not be overlooked. Due to the high density of construction workers, the risk of infection spreading on construction sites is high [8]. Therefore, systematic safety monitoring of infrastructure projects that ensures physical distancing and mask wearing can improve the safety of construction workers.
In some cases, safety officers are assigned to infrastructure projects to inspect workers and detect situations in which either social distancing or mask wearing is not observed. However, when there are numerous workers on a construction site, it is difficult for officers to identify hazardous situations. Also, assigning safety officers increases the number of people on site, raising the risk of transmission even further and putting both workers and officers in a more dangerous situation. Recently, online video capture on construction sites has become quite common, which makes an automatic system to monitor physical distancing and mask wearing among construction workers during the COVID-19 pandemic feasible.
II. LITERATURE REVIEW
Object detection aims to locate and classify objects within an image [12]. Mask and physical distance detection can be categorized as problems of this kind. Object detection algorithms have been under development for the last twenty years. Since 2014, the intensive application of deep learning to object detection has led to remarkable success, improving both accuracy and speed [13]. Object detectors fall into two categories: one-stage detectors, such as You Only Look Once (YOLO), and two-stage detectors, such as Region-based Convolutional Neural Networks (R-CNN). Both are used because two-stage detectors achieve high localization and recognition accuracy, while one-stage detectors achieve high inference speed [14].
R-CNN, or Regions with CNN features, presented by Girshick et al. [15], works in several steps. First, it selects a number of regions in the image as candidate boxes and rescales them to a fixed size. Second, it uses a CNN to extract features from all regions. Finally, the features of each region are used to predict the class of each bounding box using an SVM classifier [15], [16]. However, feature extraction over all regions is very resource-intensive and time-consuming because the candidate boxes overlap, forcing the model to perform redundant computations. Fast R-CNN addresses this difficulty by taking the whole image as the input to the CNN for feature extraction [17]. Faster R-CNN speeds up the Fast R-CNN network by replacing selective search with the Region Proposal Network (RPN) to reduce the number of candidate boxes [18]-[20]; Faster R-CNN is therefore a fast, near real-time detector [18]. Mask detection determines whether or not a person in an image is wearing a mask [21], [22]. Real-time distance detection first finds the people in the image and then estimates the distance between them [23]. Since the outbreak of the COVID-19 epidemic, many studies have been carried out to detect face masks and physical distancing in crowds. Jiang et al. [21] proposed a single-stage detector, called RetinaFaceMask, that uses a feature pyramid network to integrate high-level semantic information and adds an attention layer to quickly detect masks in an image. They reported higher detection accuracy compared to previously developed models. Militante and Dionisio [24] used the VGG-16 CNN model and obtained 96% accuracy in detecting whether people were wearing a mask. Rezaei and Azarmi [25] developed a YOLOv4-based model to detect whether a person is wearing a mask and following physical distancing. They trained their model on some of the largest publicly available datasets and achieved 99.8% detection accuracy in real time. Ahmed et al. [23] employed YOLOv3 to detect people in a crowd and then used the Euclidean distance formula to compute the real distance between two people. They also used a tracking algorithm to follow people who violate physical distancing throughout a video. They obtained an initial accuracy of 92% and later increased it to 98%. Drones are used in construction projects to capture real-time video and provide online data that helps identify problems quickly. Several studies have gathered large-scale image data from drones and applied deep learning methods to detect objects. Asadi and Han [26] developed a system to enhance data acquisition with drones on construction sites. Kamari and Ham [10] used drone imagery of construction sites to quickly identify areas at risk of wind-induced damage in the workplace. Li et al. [27] used deep learning on video taken from drones to track moving objects. However, detections from drones are often inaccurate because drones do not always see workers' faces or reveal whether they are wearing a face mask and maintaining physical distance.
III. METHODOLOGY
This research obtains a facemask dataset available online and increases the amount of data by adding more images. Deep convolutional networks are effective at extracting and assessing the relevant features of images, and the proposed system is composed entirely of such detection models. The paper trains multiple Faster R-CNN object detection models and chooses the most accurate model for mask detection. For physical distance detection, the paper uses a Faster R-CNN model to detect people and then uses the Euclidean distance to estimate the distance between people based on pixel coordinates in the image. Transfer learning is employed to increase accuracy. The model was applied to several videos of road maintenance projects in Houston, TX, to demonstrate its performance.
A. Dataset
A part of the face mask dataset was obtained from the MakeML website [28], which contains 853 images; each image contains one or more faces with various illuminations and poses. The images are already annotated with three labels: faces with a mask, without a mask, and with incorrect mask wearing. To extend the training data, 1,000 additional images with their annotations were added to the database, so that a total of 1,853 images was used as the facemask dataset. Some sample images with their annotations are illustrated in Figure 1, where the three types of mask wearing are annotated: correct mask wearing, incorrect wearing, and no mask.
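As an illustration, the annotations can be inspected programmatically. The following minimal sketch counts the three mask-wearing labels, assuming Pascal VOC-style XML annotations stored in an "annotations" folder with class names such as with_mask, without_mask, and mask_weared_incorrect; the folder layout and label names are assumptions for illustration, not details stated in the paper.

# Minimal sketch: count the mask-wearing labels in a Pascal VOC-style dataset.
# The "annotations" folder and the exact label names are assumptions.
import glob
import xml.etree.ElementTree as ET
from collections import Counter

def count_mask_labels(annotation_dir="annotations"):
    counts = Counter()
    for xml_path in glob.glob(f"{annotation_dir}/*.xml"):
        root = ET.parse(xml_path).getroot()
        for obj in root.findall("object"):
            counts[obj.findtext("name")] += 1  # one entry per annotated face
    return counts

if __name__ == "__main__":
    print(count_mask_labels())  # e.g. Counter({'with_mask': ..., 'without_mask': ..., 'mask_weared_incorrect': ...})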
B. Face Mask Detection
For the mask detection task, five different object detection models from the TensorFlow Object Detection Model Zoo [29] were trained and tested on the mask dataset to compare their accuracy and select the best model for mask detection. Table 1 shows the models and their accuracies.
The Faster R-CNN Inception ResNet V2 800×1333 was selected due to its highest accuracy, i.e., 99.8%.
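For reference, inference with a model exported from the TensorFlow 2 Detection Model Zoo typically follows the pattern sketched below; the SavedModel path is a placeholder, and the dummy image stands in for a real construction-site frame.

# Hedged sketch: running a TF2 Detection Model Zoo SavedModel on one image.
# The model directory and the 0.5 confidence threshold are assumptions.
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")

image = np.zeros((800, 1333, 3), dtype=np.uint8)            # stand-in for a site frame
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]
detections = detect_fn(input_tensor)

boxes = detections["detection_boxes"][0].numpy()            # normalized [y1, x1, y2, x2]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)
keep = scores > 0.5                                          # keep confident detections only
print(boxes[keep], classes[keep])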
C. Physical Distance Detection
This article used the physical distance detector model developed by Roth [31]. The model determines physical distance in three steps: people detection, image transformation, and distance measurement. The model used by Roth [32] is available in the TensorFlow Object Detection Model Zoo and was trained on the COCO dataset, which contains 120,000 images.
Faster R-CNN Inception V2 with COCO weights was chosen for people detection through model evaluation because it had the highest detection performance score among all models. The image transformation step converts the image taken by the camera into a bird's-eye view. Figure 2 shows an original image taken from a camera and its bird's-eye view transformation, in which the dimensions of the image scale linearly with the actual dimensions [30].
The relationship between a pixel (x, y) in the bird's-eye view image and a pixel (u, v) in the original image is defined by a 3×3 perspective transformation matrix M:

[x′, y′, w′]ᵀ = M · [u, v, 1]ᵀ

in which x = x′/w′ and y = y′/w′. The OpenCV library in Python was used to compute the transformation matrix [33]. Finally, the distance between every pair of individuals is measured by estimating the distance between the bottom-center points of their bounding boxes in the bird's-eye view. The required physical distance, i.e., six feet, was approximated as 120 pixels in the image [31].
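A minimal sketch of this distance step is given below, using OpenCV's perspective transform; the four ground-plane points and their bird's-eye-view targets are illustrative placeholders rather than values from the paper, while the 120-pixel threshold follows the text.

# Hedged sketch of the bird's-eye-view distance check described above.
# src/dst points are illustrative placeholders; 120 px approximates the
# six-foot threshold as stated in the text.
import cv2
import numpy as np
from itertools import combinations

src = np.float32([[480, 400], [900, 400], [1280, 720], [0, 720]])  # ground plane in the camera view
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])         # same region seen from above
M = cv2.getPerspectiveTransform(src, dst)                          # 3x3 transformation matrix

def to_birds_eye(points):
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, M).reshape(-1, 2)

def violations(bottom_centers, threshold_px=120):
    warped = to_birds_eye(bottom_centers)
    return [(i, j) for i, j in combinations(range(len(warped)), 2)
            if np.linalg.norm(warped[i] - warped[j]) < threshold_px]

# bottom-center points of detected person boxes (illustrative values)
print(violations([(600, 650), (640, 660), (1100, 700)]))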
D. Faster R-CNN Model
Figure 3 shows a schematic architecture of the Faster R-CNN model. Faster R-CNN consists of the Region Proposal Network (RPN) and Fast R-CNN as the detector network. The input image is passed through the Convolutional Neural Network (CNN) backbone to extract features. The RPN then proposes bounding boxes, which are applied in the Region of Interest (ROI) pooling layer to perform pooling on the image features.
Then, the network passes the output of the ROI pooling layer through two fully connected (FC) layers feeding a pair of FC heads: one determines the class of each object and the other performs a regression to refine the proposed bounding boxes [17], [18], [34].
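The ROI pooling and detection heads can be illustrated with a short TensorFlow sketch; the feature-map shape, proposal boxes, and layer sizes below are toy values chosen for illustration, not the configuration used in the paper.

# Illustrative sketch of ROI pooling followed by the two FC heads.
# All shapes and sizes here are toy values, not the paper's configuration.
import tensorflow as tf

feature_map = tf.random.normal([1, 50, 50, 256])           # backbone output for one image
proposals = tf.constant([[0.1, 0.1, 0.5, 0.6],             # RPN proposals, normalized [y1, x1, y2, x2]
                         [0.3, 0.2, 0.9, 0.8]])
box_indices = tf.zeros([2], dtype=tf.int32)                # both proposals come from image 0
roi_features = tf.image.crop_and_resize(feature_map, proposals, box_indices, crop_size=[7, 7])

flat = tf.keras.layers.Flatten()(roi_features)
fc = tf.keras.layers.Dense(1024, activation="relu")(flat)
class_logits = tf.keras.layers.Dense(4)(fc)                # e.g. 3 mask classes + background
box_deltas = tf.keras.layers.Dense(4 * 4)(fc)              # per-class box refinement
print(class_logits.shape, box_deltas.shape)                # (2, 4) (2, 16)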
E. Transfer Learning
The size of the training dataset for mask detection is limited. Since the mask detection model is a complex network, the scarce data could lead to an overfitted model. Transfer learning is a common approach in machine learning when the dataset is limited and the training procedure is computationally expensive. Transfer learning uses the weights of a model pre-trained on a large dataset as the starting point for training a similar network [35]. In this study, we used the weights of the TensorFlow pre-trained object detection models for model training.
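As a conceptual illustration of transfer learning (the paper itself reuses Object Detection API checkpoints through its training configuration, so the Keras classifier below is only a sketch of the idea, not the authors' pipeline):

# Conceptual sketch of transfer learning: reuse pre-trained weights, train a new head.
# This is not the paper's detection pipeline, only an illustration of the idea.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                      # keep pre-trained weights as the starting point
head = tf.keras.layers.Dense(3, activation="softmax")(base.output)  # 3 mask-wearing classes
model = tf.keras.Model(base.input, head)
model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=...)  # train only the new head on the limited mask dataset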
IV. RESULTS AND DISCUSSION
Both the mask detection network and the physical distance detection network were trained on Google Colaboratory. Google Colaboratory is a cloud service from Google Research that provides a Python environment for running machine learning and data analysis code and offers free access to a variety of GPUs, including Nvidia K80s, T4s, P4s and P100s, together with pre-installed deep learning libraries [36]. In this experiment, for mask detection, we used a batch size of 1, a momentum of 0.9 with a cosine decay learning rate (base learning rate of 0.008), and an image size of 800×1333. The maximum number of steps was 200,000, but training was stopped once the total loss dropped below 0.07, which occurred around step 42,000. For the physical distance detection model, we used a batch size of 1, a total of 200,000 steps, a momentum of 0.9, and a manual step learning rate schedule: the first stage from 0 to 90,000 steps with a learning rate of 2e-4, the second stage from 90,000 to 120,000 steps with a learning rate of 2e-5, and the third stage from 120,000 to 200,000 steps with a learning rate of 2e-6.
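The two learning-rate schedules described above can be written compactly with Keras schedule objects, as in the sketch below; the paper configures them through the Object Detection API pipeline config rather than in code.

# Hedged sketch of the two learning-rate schedules described in the text.
import tensorflow as tf

# Mask detection: cosine decay from a base learning rate of 0.008 over 200,000 steps.
cosine_lr = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.008, decay_steps=200_000)

# Person detection: manual step schedule 2e-4 -> 2e-5 -> 2e-6 at 90k and 120k steps.
step_lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[90_000, 120_000], values=[2e-4, 2e-5, 2e-6])

for step in (0, 95_000, 150_000):
    print(step, float(step_lr(step)))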
The two models, mask detection and physical distance detection, were then combined. Four videos of a road maintenance project filmed in Houston, Texas, were used to test the combined model. Figure 5 shows a one-second screenshot of each case. In Case 1, the model detected an unmasked worker with 99% accuracy and a masked worker with 97% accuracy. The model also highlighted workers who do not maintain physical distance from others, with red boxes denoting a high risk of infection and green boxes denoting safe distances. For Case 2, the model detected incorrect and correct mask wearing with 89% and 94% accuracy, respectively.
The model was also shown to identify workers keeping a proper distance. The slightly lower accuracy for incorrect mask wearing is due to the small amount of training data for incorrect mask wearing in the dataset. Cases 3 and 4 show high accuracy in detecting both physical distance and face masks. In Case 3, the model detected the worker's mask on the left side of the image with relatively low accuracy because of the limited training data for a turned head with this type of mask wearing.
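A simple visualization step like the one described above can be sketched as follows; the person boxes and the violation pairs are placeholders produced by the detection and distance steps shown earlier.

# Illustrative sketch of the colour-coding step: red boxes for workers flagged as
# too close, green boxes otherwise. Inputs are placeholders from earlier steps.
import cv2

def draw_risk_boxes(frame, person_boxes, violation_pairs):
    at_risk = {i for pair in violation_pairs for i in pair}
    for i, (x1, y1, x2, y2) in enumerate(person_boxes):
        color = (0, 0, 255) if i in at_risk else (0, 255, 0)  # BGR: red = high risk, green = safe
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
    return frame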
V. CONCLUSION
This paper developed a model to detect mask wearing and physical distancing among construction workers to help guarantee their safety during the COVID-19 pandemic. The paper used a facemask dataset containing images of people wearing masks, not wearing masks, and wearing masks incorrectly. To increase the training dataset, 1,000 images with different types of mask wearing were collected and added to the dataset, making a dataset of 1,853 images. A Faster R-CNN Inception ResNet V2 network was chosen among several models because it yielded the highest accuracy of 99.8% for mask detection. For physical distance detection, the Faster R-CNN Inception V2 was used to detect people, and a transformation matrix was used to remove the effect of the camera angle on the real distance. The Euclidean distance was then used to convert pixels in the transformed image into the actual distance between people. A threshold of six feet was set to flag physical distance violations. Transfer learning was used for training. The combined model was evaluated on four videos of a real road maintenance project in Houston, Texas. The results for the four cases showed that the model detected the different types of mask wearing by maintenance workers with an average accuracy of about 90%. The model also accurately identified people who were too close to each other and did not maintain the proper physical distance. Road construction owners and contractors can use the results of this work to regularly monitor workers to prevent infection and enhance worker safety. Future studies can apply the model to other types of construction projects, such as building construction.
[1] Johns Hopkins University, “COVID-19 Map - Johns Hopkins Coronavirus Resource Center,” Johns Hopkins Coronavirus Resource Center, 2020. https://coronavirus.jhu.edu/map.html (accessed Jul. 30, 2020). [2] WHO, “Water, sanitation, hygiene and waste management for COVID-19: technical brief, 03 March 2020,” World Health Organization, 2020. [3] R. Jahromi, V. Mogharab, H. Jahromi, and A. Avazpour, “Synergistic effects of anionic surfactants on coronavirus (SARS-CoV-2) virucidal efficiency of sanitizing fluids to fight COVID-19,” bioRxiv, p. 2020.05.29.124107, Jun. 2020, doi: 10.1101/2020.05.29.124107. [4] R. Ellis, “WHO Changes Stance, Says Public Should Wear Masks,” 2020.https://www.webmd.com/lung/news/20200608/who-changes-stance-says-public-should-wear- masks (accessed Jul. 31, 2020). [5] S. Feng, C. Shen, N. Xia, W. Song, M. Fan, and B. J. Cowling, “Rational use of face masks in the COVID 19 pandemic,” The Lancet Respiratory Medicine, vol. 8, no. 5. Lancet Publishing Group, pp. 434–436, May 01, 2020, doi: 10.1016/S2213- 2600(20)30134-X. [6] WHO, “Advice on the use of masks in the context of COVID-19,” 2020. Accessed: Jul. 31, 2020. [Online]. Available: https://www.who.int/publications-. [7] WHO, “COVID-19 advice - Know the facts | WHO Western Pacific,” 2020. https://www.who.int/westernpacific/emergencies/covid-19/information/physical-distancing (accessedJul. 31, 2020). [8] M. Afkhamiaghda and E. Elwakil, “Preliminary modeling of Coronavirus (COVID-19) spread in construction industry,” J. Emerg. Manag., vol. 18, no. 7, pp. 9–17, Jul. 2020, doi: 10.5055/JEM.2020.0481. [9] M. Kamari and Y. Ham, “Automated Filtering Big Visual Data from Drones for Enhanced Visual Analytics in Construction,” in Construction Research Congress 2018, Mar. 2018, vol. 2018- April, pp. 398–409, doi: 10.1061/9780784481264.039. [10] M. Kamari and Y. Ham, “Analyzing Potential Risk of Wind-Induced Damage in Construction Sites and Neighboring Communities Using Large-Scale Visual Data from Drones,” in Construction Research Congress 2020: Computer Applications - Selected Papers from the Construction Research Congress 2020, 2020, pp. 915–923, doi: 10.1061/9780784482865.097. [11] Y. Ham and M. Kamari, “Automated content-based filtering for enhanced vision-based documentation in construction toward exploiting big visual data from drones,” Autom. Constr., vol. 105, p. 102831, Sep. 2019, doi: 10.1016/j.autcon.2019.102831. [12] L. Liu et al., “Deep Learning for Generic Object Detection: A Survey,” Int. J. Comput. Vis., vol. 128, no. 2, pp. 261–318, Feb. 2020, doi: 10.1007/s11263-019-01247-4. [13] Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu, “Object Detection with Deep Learning: A Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11. Institute of Electrical and Electronics Engineers Inc., pp. 3212–3232, Nov. 01, 2019, doi: 10.1109/TNNLS.2018.2876865. [14] L. Jiao et al., “A survey of deep learning-based object detection,” IEEE Access, vol. 7, pp. 128837– 128868, 2019, doi: 10.1109/ACCESS.2019.2939201. [15] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Sep. 2014, pp. 580–587, doi: 10.1109/CVPR.2014.81. [16] Z. Zou, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” 2019, Accessed: Aug. 02, 2020. [Online]. Available: http://arxiv.org/abs/1905.05055. [17] R. Girshick, “Fast R-CNN,” 2015. Accessed: Aug. 
03, 2020. [Online]. Available: https://github.com/rbgirshick/. [18] S. Ren, K. He, R. Girshick, and J. Sun, “Faster RCNN: Towards Real-Time Object Detection with Region Proposal Networks.” pp. 91–99, 2015. [19] V. Carbune et al., “Fast multi-language LSTMbased online handwriting recognition,” Int. J. Doc.Anal. Recognit., vol. 23, no. 2, pp. 89–102, 2020, doi: 10.1007/s10032-020-00350-4. [20] H. Jiang and E. Learned-Miller, “Face detection with the faster R-CNN,” in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017, pp. 650–657. [21] M. Jiang, X. Fan, and H. Yan, “RetinaMask: A Face Mask detector,” 2020, [Online]. Available: http://arxiv.org/abs/2005.03950. [22] Z. Wang et al., “Masked Face Recognition Dataset and Application,” Mar. 2020, Accessed: Aug. 02, 2020. [Online]. Available: http://arxiv.org/abs/2003.09093. [23] I. Ahmed, M. Ahmad, J. J. P. C. Rodrigues, G. Jeon, and S. Din, “A deep learning-based social distance monitoring framework for COVID-19,” Sustain. Cities Soc., p. 102571, Nov. 2020, doi: 10.1016/j.scs.2020.102571. [24] S. V. Militante and N. V. Dionisio, “Real-Time Facemask Recognition with Alarm System using Deep Learning,” in 2020 11th IEEE Control and System Graduate Research Colloquium, ICSGRC 2020 - Proceedings, Aug. 2020, pp. 106–110, doi: 10.1109/ICSGRC49013.2020.9232610. [25] M. Rezaei and M. Azarmi, “DeepSOCIAL: Social Distancing Monitoring and Infection Risk Assessment in COVID-19 Pandemic,” Appl. Sci., vol. 10, no. 21, p. 7514, Oct. 2020, doi: 10.3390/app10217514. [26] K. Asadi and K. Han, “An Integrated Aerial and Ground Vehicle (UAV-UGV) System for Automated Data Collection for Indoor Construction Sites,” in Construction Research Congress 2020: Computer Applications Selected Papers from the Construction Research Congress 2020, Nov. 2020, pp. 846–855, doi: 10.1061/9780784482865.090. [27] C. Li, X. Sun, and J. Cai, “Intelligent Mobile Drone System Based on Real-Time Object Detection,” JAI, vol. 1, no. 1, pp. 1–8, 2019, doi: 10.32604/jai.2019.06064. [28] MakeML, “Mask Dataset | MakeML – Create Neural Network with ease,” 2020. https://makeml.app/datasets/mask (accessed Nov. 11, 2020). [29] V. Rathod, A-googler, S. Joglekar, Pkulzc, and Khanh, “TensorFlow 2 Detection Model Zoo,” 2020. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_z oo.md (accessed Dec. 23, 2020). [30] I. S. Kholopov, “Bird’s Eye View Transformation Technique in Photogrammetric Problem of Object Size Measuring at Low-altitude Photography,” vol. 133, no. Aime, pp. 318–324, 2017, doi: 10.2991/aime- 17.2017.52. [31] B. Roth, “A social distancing detector using a Tensorflow object detection model, Python and OpenCV” Towards Data Science, 2020.https://towardsdatascience.com/a-socialdistancing- detector-using-a- tensorflow-objectdetection- model-python-and-opencv-4450a431238 (accessed Dec. 22, 2020). [32] B. Roth, “GitHub - basileroth75/covid-socialdistancing- detection: Personal social distancing detector using Python, a Tensorflow model and OpenCV,” 2020. https://github.com/basileroth75/covid- socialdistancing- detection (accessed Dec. 22, 2020). [33] A. Rosebrock, “4 Point OpenCV getPerspective Transform Example - PyImageSearch,” 2014. https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/ (accessed Dec. 23, 2020). [34] S. Ananth, “Faster R-CNN for object detection, Towards Data Science,” 2019. 
https://towardsdatascience.com/faster-r-cnn-for-object-detection-a-technical-summary-474c5b857b46 (accessed Nov. 11, 2020). [35] A. R. Zamir, A. Sax, W. Shen, L. Guibas, J. Malik, and S. Savarese, “Taskonomy: Disentangling Task Transfer Learning,” 2018. Accessed: Jan. 03, 2021. [Online]. Available: http://taskonomy.vision/. [36] Colaboratory, “Frequently Asked Questions,” 2022. https://research.google.com/colaboratory/faq.html (accessed Nov. 11, 2020).
Copyright © 2022 Aakansh Garg, Abhinav Dagar, Chetanya Khurana, Dr. Rajesh Singh, Ms. Kalpana Anshu . This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET42947
Publish Date : 2022-05-19
ISSN : 2321-9653
Publisher Name : IJRASET