Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Santhosh K, Sushil Kumar G N, Umesh R G, Suraksha M S, Dr. Praveen Kumar K V
DOI Link: https://doi.org/10.22214/ijraset.2023.51522
5G is poised to support new emerging service types that enable futuristic applications. These services include enhanced Mobile BroadBand (eMBB), ultra-Reliable Low-Latency Communication (uRLLC), and massive Machine-Type Communication (mMTC). 5G New Radio (NR) is envisioned to efficiently support URLLC for new services and applications that demand high reliability, availability, and low latency, such as factory automation and autonomous vehicles. 5G promises massive increases in traffic volume and data rates. Next-generation wireless networks are expected to be extremely complex due to their massive heterogeneity in terms of the network architectures they incorporate, the types and numbers of smart IoT devices they serve, and the emerging applications they support. In such large-scale networks, radio resource allocation and management (RRAM) becomes one of the major challenges encountered during system design and deployment. In this context, emerging Deep Reinforcement Learning (DRL) techniques are expected to be one of the main enabling technologies for RRAM in future wireless networks. The paper provides a detailed analysis of the impact of various parameters on system performance, including the number of users and the signal-to-interference-plus-noise ratio. The proposed approach has the potential to significantly improve the performance of 5G networks and enable new applications and services that require high data rates, low latency, and reliable communication. We propose an algorithm for data bearers in the millimeter-wave (mmWave) frequency band.
I. INTRODUCTION
Radio resource allocation and management (RRAM) is regarded as one of the essential challenges encountered in modern wireless communication networks. Modern wireless networks are becoming more heterogeneous and complex in terms of the emerging radio access networks (RANs) they integrate, the explosive number and types of smart devices they serve, and the disruptive applications and services they support.
Traffic volume and data-rate demands continue to grow with the introduction of the fifth generation of wireless communications (5G). We conduct a comprehensive and in-depth literature review and classify existing related works based on both the radio resources they address and the type of wireless networks they investigate. To this end, we carefully identify the types of DRL algorithms utilized in each related work, the elements of these algorithms, and the main findings of each. Finally, we highlight important open challenges and provide insights into several future research directions in the context of DRL-based RRAM. To learn the implied characteristics of inter-cell and inter-beam interference, we propose a learning-based algorithm built on a reinforcement learning (RL) framework. We use this framework to derive a near-optimal policy that maximizes the end-user signal-to-interference-plus-noise ratio (SINR) and sum-rate capacity.
A. Background
The massive growth in traffic volume and data rate continues to evolve with the introduction of the fifth generation of wireless communications (5G). Deep Reinforcement Learning (DRL) is a subfield of machine learning that involves training agents to make decisions based on their interactions with an environment. In recent years, DRL has shown promising results in a variety of applications, including robotics, gaming, and autonomous driving. One area where DRL has shown great potential is in optimizing wireless communication networks, specifically 5G networks. In a 5G network, multiple devices communicate with each other over radio waves, and the network infrastructure must manage the allocation of resources such as frequency bands, power, and transmission beams to ensure efficient communication. One of the key challenges in 5G networks is the need to coordinate the actions of multiple devices to minimize interference and improve network performance. This requires joint optimization of beamforming, power control, and interference coordination. Traditional optimization methods for these tasks can be complex and computationally expensive, making them impractical for real-time applications. Future wireless networks are therefore expected to meet this massive demand for data rates. The importance of reinforcement learning in power control has been demonstrated in [1]-[3]; it also enhances the usability of the network and increases cellular capacity. For data bearers, joint beamforming, power control, and interference coordination can improve the robustness of these bearers, improve the data rates received by end-users, and avoid retransmissions.
B. Problem Statement
In this paper, we answer the question of whether a method exists that can perform joint beamforming, power control, and interference coordination, by introducing a different approach to power control in wireless networks. In such a setting, not only the transmit power of the serving BS is controlled, as in standard implementations, but also the transmit powers of the interfering base stations, all from a central location as shown in Fig. 1.
A major question here is whether there exists a method that
(1) jointly solves for the beamforming and interference coordination,
(2) achieves the upper bound on SINR, and
(3) avoids an exhaustive search in the action space for data bearers.
The aim of this paper is to propose an algorithm for this joint solution by exploiting the ability of reinforcement learning to explore the solution space through learning from interaction.
C. Objectives
The ultimate goal is to improve overall network performance by maximizing the quality of service (QoS) for all users while minimizing the interference between them. We propose a DRL-based algorithm in which the beamforming vectors and transmit powers at the base stations are jointly controlled to maximize the objective function. The goal of the project is to develop a DRL-based solution that achieves significant improvements in network performance in terms of throughput, energy efficiency, and interference management compared to existing solutions. The project evaluates the proposed solution through simulations and compares the results with existing methods to demonstrate its effectiveness.
II. LITERATURE SURVEY
Paper [1] proposes a closed-loop power control algorithm for the downlink of the VoLTE radio bearer, using reinforcement learning (RL) to address performance-tuning issues in an indoor cellular network. The paper demonstrates that a lower bound on the loss in effective signal-to-interference-plus-noise ratio resulting from neighboring-cell failure is sufficient for VoLTE power control in practical cellular networks.
The RL-based algorithm outperforms current industry standards in simulations by maintaining an effective downlink signal-to-interference-plus-noise ratio in the presence of network operational issues and faults. In paper [2], the authors investigate the combination of millimeter-wave (mm-wave) communications and non-orthogonal multiple access (NOMA) as key enabling technologies for 5G wireless mobile communication. They consider a two-user uplink mm-wave-NOMA system where a base station equipped with an analog beamforming structure with a single radio-frequency chain serves two NOMA users. The paper formulates an optimization problem to maximize the achievable sum rate of the two users while ensuring a minimal rate constraint for each user. This problem involves joint power control and beamforming to steer toward the two users simultaneously, subject to the analog beamforming structure. Extensive simulations show that the proposed sub-optimal solution achieves close-to-bound uplink sum-rate performance.
In [3], the authors discuss the evolution of wireless communication systems to 5G and how non-line-of-sight (NLOS) transmission is a common issue, particularly with millimeter-wave (mmWave) communications. While beamforming techniques have been employed in previous works to improve NLOS transmission, the high cost of controlling antennas is a drawback. Therefore, the authors propose a dynamic transmission power control scheme using a deep Q-network (DQN) with a convolutional neural network. They demonstrate the effectiveness of their proposed scheme through simulation results.
In [4], the authors address the issue of inter-cell interference in heterogeneous ultra-dense networks with millimeter-wave macro cells and small cells. While directional links provide spatial diversity, the concurrent directional transmissions of adjacent base stations (BSs) lead to severe inter-cell interference and downlink performance degradation. Managing inter-cell interference in such a dynamic and unpredictable environment is challenging. To tackle this, the authors propose an online learning-based transmission coordination algorithm based on the multigame framework. The effectiveness of their proposed scheme is verified through numerical simulations.
Paper [5] addresses a dynamic multichannel access problem where users need to select the channel to transmit data from multiple correlated channels following an unknown joint Markov model. To overcome the challenge of unknown system dynamics, the paper proposes an adaptive deep Q-network (DQN) approach that can achieve the same optimal performance as the fixed pattern channel switching policy. The paper shows that DQN can learn and adapt to time-varying scenarios through simulations.
In Paper [6], the authors propose a deep learning-based method for automatic modulation recognition (AMR) to achieve higher accuracy in cognitive radio (CR) systems. They use two convolutional neural networks (CNNs) trained on different datasets to classify modulation modes that are easy or difficult to distinguish. The paper also adopts dropout instead of pooling operation to improve recognition accuracy.
Paper [7] proposes unsupervised deep learning-based power control schemes for non-orthogonal random access (NORA) in wireless communication systems, where multiple nodes use the same preamble simultaneously to transmit data over the same time-frequency resources. The proposed schemes maximize the minimum rate based only on the timing advance (TA) information, since full channel knowledge is not available. Numerical results show the effectiveness of the proposed DL-based NORA over conventional methods.
In Paper [8], the authors propose a deep learning-based beam management and interference coordination (BM-IC) method for dense millimeter-wave (mmWave) networks to address the challenge of severe signal pathloss. Simulation results show that the proposed method can achieve comparable sum-rate to conventional BM-IC algorithms with much less computation time.
Paper [9] proposes a method to enhance the downlink user-throughput distribution in heterogeneous base-station networks using the sub-6 GHz band in frequency-division multiplexing (FDM). The method resolves the race condition between the involved base stations in sub-exponential time at a central location, based on the user-reported downlink signal-to-interference-plus-noise ratio (SINR) and coordinates.
Finally, Paper [10] proposes an algorithm to predict handover success or failure in wireless communication systems, leading to improved handover success rates and inter-radio access technology (RAT) handover success rates. The proposed algorithm shows failure rates similar to those of the standard handover algorithm with the least number of users and the shortest duration.
In [11], by analysing the signal-to-interference-plus-noise ratio (SINR), it was discovered that the equal-gain transmission scheme in a nonlinear MISO system achieves the maximum SINR only at the point of locally optimal transmit power, and an approximated optimal transmit power was derived. To maximize the SINR in all transmit power regions, a precoding and power control algorithm was proposed for a MISO system with nonlinear power amplifiers under a power constraint.
In [12], the performance of 5G communication networks consisting of SBSs with coordinated beamforming operating in the mmWave frequency band and MBSs operating at sub-6 GHz was analysed. A clustering method was proposed to eliminate intra-cell interference by selecting some SBSs. The average distance from the K-th SBS to a user was used to obtain signal-to-interference-plus-noise ratio (SINR) and rate-coverage-probability expressions. The analysis was conducted on a 5G mmWave network where MBSs operate at sub-6 GHz and SBSs perform coordinated beamforming at 28 GHz.
In paper [13], combining beamforming with a differential modulation scheme in the downlink (DL) was proposed to overcome the problem of UEs equipped with only a single antenna. Frequency diversity was also used to enable the integration of the differential modulation scheme in the DL, allowing UEs to be equipped with a small number of antennas (a single antenna in that work). In [14], a DNN was trained using data generated by solving the offline power control problem, and simulations showed that the proposed approach provides excellent performance. The sensitivity of mmWave signals to blockage and the use of narrow beams, which greatly impact the coverage and reliability of highly mobile links, are discussed in [15].
In [16], tuning cellular network performance against wireless impairments that inevitably occur was proposed as a way to improve reliability for end-users. Cellular network performance tuning was formulated as a reinforcement learning (RL) problem, and a solution was provided to improve performance in both indoor and outdoor environments.
In [17], beamforming is treated as an effective means to enhance the quality of the received signals in multiuser MISO systems; iterative algorithms are traditionally used to find the optimal beamforming solution, but they introduce high computational delay and are not suitable for real-time implementation. In [18], an intelligent algorithm was proposed for performance optimization of massive MIMO beamforming. Its key novelty is the combination of three neural networks that cooperatively implement a deep adversarial reinforcement learning workflow.
III. METHODOLOGY
A. Network Model
The network being considered is an orthogonal frequency division multiplexing (OFDM) multi-access downlink cellular network with L base stations (BSs). The network includes a serving BS and one or more interfering BSs, and operates in a downlink scenario where a BS transmits to a single user equipment (UE). User association with their serving BS is based on distance, and each UE is served by at most one BS. The BSs are spaced at an intersite distance of R, and the UEs are randomly distributed within their service area. The cell radius is greater than half of the intersite distance (r > R/2) to allow for overlapping coverage. Data bearers use mmWave frequency bands.
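As an illustration of this layout, the following minimal sketch drops two BSs at intersite distance R, places a UE uniformly at random inside a cell of radius r > R/2, and associates it with the nearest BS. All numeric values and helper names here are our assumptions for illustration, not taken from the paper.

import numpy as np

# Illustrative layout: two BSs at intersite distance R; a UE is dropped
# uniformly at random in a cell of radius r > R/2 and associated with the
# nearest BS. All numeric values are assumed.
R = 200.0                       # intersite distance in meters (assumed)
r = 1.1 * R / 2                 # cell radius, slightly larger than R/2

rng = np.random.default_rng(0)
bs_positions = np.array([[0.0, 0.0], [R, 0.0]])   # serving and interfering BS

def drop_ue(bs_xy, radius, rng):
    """Place a UE uniformly at random inside a disc around a BS."""
    rho = radius * np.sqrt(rng.random())   # sqrt gives uniform density in area
    phi = 2 * np.pi * rng.random()
    return bs_xy + rho * np.array([np.cos(phi), np.sin(phi)])

ue_xy = drop_ue(bs_positions[0], r, rng)
# Distance-based association: each UE is served by at most one (nearest) BS.
serving_bs = int(np.argmin(np.linalg.norm(bs_positions - ue_xy, axis=1)))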
B. System Model
We adopt a multi-antenna setup in which each BS employs a uniform linear array (ULA) of M antennas and each UE has a single antenna.
Beamforming vectors: We compute a beamforming (BF) vector for a uniform linear array (ULA) antenna system based on a given angle of arrival (AoA) θ. A ULA is a type of antenna array where multiple antennas are arranged along a line. The spacing between two adjacent antennas is usually half the wavelength of the electromagnetic signal of interest. This configuration gives the array directional properties, i.e., it can transmit/receive signals more effectively in certain directions. The array steering vector is

$$\mathbf{a}(\theta_n) = \left[1,\ e^{jkd\cos\theta_n},\ \ldots,\ e^{j(M-1)kd\cos\theta_n}\right]^{\mathsf{T}},$$

where d and k denote the antenna spacing and the wavenumber, while θn represents the steering angle; a(θn) is the array steering vector in the direction of θn. The value of θn is obtained by dividing the angular space between 0 and π radians by the number of antennas M, as in the following sketch.
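The steering vector and the resulting beam codebook follow directly from this definition. Below is a minimal implementation under the half-wavelength spacing assumption; the helper names (ula_steering_vector, beam_codebook) are ours, not the paper's.

def ula_steering_vector(theta, M, d_over_lambda=0.5):
    """Steering vector a(theta) of an M-element ULA.

    Element m equals exp(j*k*d*m*cos(theta)), with k*d = 2*pi*(d/lambda);
    half-wavelength spacing (d/lambda = 0.5) is assumed by default.
    """
    kd = 2 * np.pi * d_over_lambda
    m = np.arange(M)
    return np.exp(1j * kd * m * np.cos(theta))

def beam_codebook(M, d_over_lambda=0.5):
    """One unit-norm beam per steering angle, splitting [0, pi) into M angles."""
    thetas = np.pi * np.arange(M) / M
    beams = np.stack([ula_steering_vector(t, M, d_over_lambda) for t in thetas])
    return beams / np.sqrt(M)   # each row has unit norm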
Then we compute the channel for the wireless communication system. The channel is modeled as a linear combination of the contributions of individual signal paths between the transmitter and the receiver. The function first sets parameters such as the path-loss exponent (PLE), antenna gain, and the steering angle θ for each signal path. It also determines whether the carrier frequency is in the mmWave range. Based on the path-loss model and a Bernoulli distribution for the probability of line-of-sight (LOS) transmission, the function computes the path loss and the channel coefficient for each path. The channel coefficient is modelled as a complex Gaussian variable with zero mean and variance proportional to the inverse of the path loss. The beamforming vectors are combined with the channel coefficients to obtain the channel vector. Finally, the channel gain is normalized by the square root of the number of antennas in the ULA, yielding the resulting channel vector, as in the sketch below.
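A minimal sketch of such a channel routine follows, reusing ula_steering_vector from the previous sketch. The path-loss constants, number of paths, and LOS probability are illustrative assumptions; the paper does not specify them here.

def channel_vector(M, dist_m, f_c_ghz=28.0, n_paths=3, p_los=0.8,
                   ple_los=2.0, ple_nlos=3.5, rng=None):
    """Multipath channel between a ULA BS and a single-antenna UE (sketch).

    Each path draws LOS/NLOS from a Bernoulli distribution, computes a
    distance-based path loss, and adds a zero-mean complex Gaussian
    coefficient (variance shrinking with path loss) times the steering
    vector of a random angle. All constants here are assumed values.
    """
    rng = rng or np.random.default_rng()
    h = np.zeros(M, dtype=complex)
    for _ in range(n_paths):
        ple = ple_los if rng.random() < p_los else ple_nlos   # Bernoulli LOS
        pl_db = (32.4 + 20 * np.log10(f_c_ghz)
                 + 10 * ple * np.log10(max(dist_m, 1.0)))     # path loss in dB
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        alpha *= 10.0 ** (-pl_db / 20.0)
        h += alpha * ula_steering_vector(rng.uniform(0.0, np.pi), M)
    return h / np.sqrt(M)   # normalize by the square root of the array size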
We then implement a method that computes the received power, interference power, and signal-to-interference-plus-noise ratio (SINR) for a user in the wireless communication system. The method takes in several parameters, such as the user's location, the powers transmitted by the two base stations (BSs), and whether the user is associated with the first or second BS. It first computes the noise power based on the temperature and bandwidth. It then computes the channels for the user from the serving and interfering BSs using the beamforming technique (if enabled). Based on the user's association with a BS, it computes the received power and interference power for that BS, then the interference-plus-noise power and the SINR. Finally, the method returns the received power, interference power, and SINR as a list, as sketched below.
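The following sketch shows one way to organize this computation, assuming single-antenna UEs and the beamformed channels from the sketches above; the function name and parameter defaults are our assumptions.

def sinr_report(h_serv, f_serv, p_serv_w, h_intf, f_intf, p_intf_w,
                bandwidth_hz=180e3, temperature_k=290.0):
    """Received power, interference power, and SINR (dB) for one UE."""
    k_boltzmann = 1.380649e-23
    noise_w = k_boltzmann * temperature_k * bandwidth_hz   # thermal noise power
    rx_w = p_serv_w * abs(np.vdot(h_serv, f_serv)) ** 2    # |h^H f|^2, serving BS
    intf_w = p_intf_w * abs(np.vdot(h_intf, f_intf)) ** 2  # |h^H f|^2, interferer
    sinr_db = 10.0 * np.log10(rx_w / (intf_w + noise_w))
    return [rx_w, intf_w, sinr_db]                          # returned as a list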
IV. DEEP REINFORCEMENT LEARNING
Deep reinforcement learning (DRL) is a branch of machine learning that combines reinforcement learning (RL) with deep neural networks to enable agents to learn from their environment and take actions that maximize a cumulative reward signal. In DRL, an agent interacts with an environment by receiving observations, selecting actions, and receiving rewards. The environment is defined by a set of states, actions, and a reward function. The agent's goal is to learn a policy that maps states to actions that maximize the expected cumulative reward over time.
The key elements of the DRL framework are described below.

Deep Q-Learning Agent: A Deep Q-Learning (DQL) agent uses a neural network to approximate the optimal Q-function, which maps states to the expected utility of taking each possible action in that state. The agent interacts with an environment by taking actions, receiving a reward, and observing the resulting state. Its behavior is guided by the Q-function approximation, which is learned by minimizing the mean squared error between the Q-values predicted by the neural network and the expected Q-values computed using the Bellman equation. The agent stores its experiences in a replay buffer and samples mini-batches of experiences from the buffer to update the Q-function approximation.
The DQL agent uses an exploration-exploitation strategy to balance between exploring new actions and exploiting the knowledge learned so far. The exploration rate starts high and decays over time, encouraging the agent to initially explore new actions, but gradually rely more on the Q-function approximation.
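A minimal PyTorch sketch of this update rule is shown below: experiences are sampled from a replay buffer and the network is trained by minimizing the mean squared error against Bellman targets. The network size and hyperparameters are illustrative assumptions, not values from the paper.

import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small fully connected network approximating Q(s, a) for all actions."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s):
        return self.net(s)

replay = deque(maxlen=10_000)   # experience replay buffer

def dqn_update(q_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
    """One gradient step: MSE between predicted Q-values and Bellman targets."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (torch.as_tensor(np.array(x), dtype=torch.float32)
                         for x in zip(*batch))
    q_pred = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                       # Bellman target from target net
        target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()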
Policy Selection: Q-learning is classified as an off-policy reinforcement learning algorithm, meaning that it can find a policy close to optimal even if it uses a different exploratory policy to choose actions.
The action selection policy for Q-learning aims to balance exploration and exploitation, operating in two modes. During exploration, the agent selects actions randomly to discover more effective actions at each time step; during exploitation, it selects the actions that maximize the state-action value function based on previous experience. The epsilon-greedy policy used in Q-learning lets the agent explore new actions with a given probability while exploiting its current knowledge otherwise. The value of epsilon controls the exploration-exploitation trade-off and can be adjusted to change the level of exploration, as in the sketch below.
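A compact sketch of this epsilon-greedy rule with a typical decay schedule follows, reusing the PyTorch QNetwork from the previous sketch; the decay constants are assumptions.

def epsilon_greedy(q_net, state, epsilon, n_actions, rng):
    """Explore a random action with probability epsilon, otherwise exploit."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))             # exploration mode
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
    return int(torch.argmax(q).item())                  # exploitation mode

# Assumed decay schedule: start fully exploratory, become increasingly greedy.
epsilon, eps_min, eps_decay = 1.0, 0.01, 0.995
epsilon = max(eps_min, epsilon * eps_decay)             # applied once per step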
C. Proposed Algorithm
We propose a DRL-based algorithm that performs both power control and interference coordination without any explicit commands from the UE. This use of the DQN may provide lower computational overhead. The main steps of the algorithm are illustrated below.
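The following is only an illustrative sketch, under our reading of the system model, of one training episode: the agent's action jointly indexes a beam from the codebook and the transmit power levels of the serving and interfering BSs, and the reward is the achieved effective SINR. The env wrapper and its interface (reset, step, n_actions) are hypothetical stand-ins, not the authors' code.

def run_episode(env, q_net, target_net, optimizer, replay, epsilon, rng,
                max_steps=100):
    """Illustrative DRL loop for joint beam selection and power control.

    env is a hypothetical wrapper around the network model above: each
    action indexes a (beam, serving power, interfering power) combination
    and the reward is the achieved effective SINR. Sketch only, not the
    authors' implementation.
    """
    state = env.reset()
    for _ in range(max_steps):
        action = epsilon_greedy(q_net, state, epsilon, env.n_actions, rng)
        next_state, reward, done = env.step(action)   # reward: effective SINR
        replay.append((state, action, reward, next_state, float(done)))
        dqn_update(q_net, target_net, optimizer, replay)
        state = next_state
        if done:                                      # e.g., SINR target met
            break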
V. RESULTS
The performance of the beamforming algorithm was evaluated in terms of convergence, run time, coverage, and sum-rate for a uniform linear array (ULA) with M elements, used to improve signal strength and reduce interference in the wireless communication system. The results show that as the ULA size M increases, the number of episodes required for convergence also increases: the algorithm takes longer to find the beamforming weights that maximize the signal-to-interference-plus-noise ratio (SINR). The effect of the constant SINR threshold on convergence remains minimal, however, indicating that the algorithm is relatively robust to changes in this parameter. The run-time complexity also grows with M because of the larger number of beams the algorithm must search through to increase the joint SINR, so computing the optimal beamforming weights takes longer as the array grows. Coverage for data bearers improves with increasing M, since the SINR increases monotonically with the beamforming array gain. Finally, the sum-rate capacity increases logarithmically with M: the algorithm improves the overall data rate, but the rate of improvement diminishes as the array grows. Overall, the results indicate that the beamforming algorithm is effective at improving convergence, coverage, and sum-rate, although its run-time complexity grows with M, which may limit its practical implementation in systems with very large arrays.
Fig. 5 compares the proposed approach against fixed power allocation (FPA), which does not use a dedicated machine-learning algorithm. As demonstrated, the achieved signal-to-interference-plus-noise ratio (SINR) grows with the size of the ULA, M. This relationship is expected, since the beamforming array gain increases with M. The transmit power is nearly maximized under the proposed approach, whereas it is held constant in FPA.
In Fig. 6, we plot the complementary cumulative distribution function (CCDF) of the coverage. As M grows, so does the probability of achieving a given effective SINR, because the effective SINR is determined by the beamforming array gain, which increases with M.
Fig. 7 shows the convergence time per episode as a function of the number of antennas M: as the number of antennas increases, the learning time for each episode also increases.
Fig. 8 shows that as the number of antennas in the ULA increases, the sum-rate capacity typically also increases, at least up to a certain point. This is because additional antennas provide more spatial diversity, which can improve the reliability and performance of the wireless communication system.
VI. ACKNOWLEDGEMENT
Any achievement does not depend solely on the individual efforts but on the guidance, encouragement and co-operation of intellectuals, elders and friends. We extend our sincere thanks to Head of Department of Computer Science and Engineering, Sapthagiri College of Engineering, and our guide, Professor of Department of Computer Science and Engineering, Sapthagiri College of Engineering, for constant support, advice and regular assistance throughout the work. Finally, we thank our parents and friends for their moral support.
VII. CONCLUSION
This paper proposes a novel approach to address the challenge of power control and interference coordination in wireless networks. By formulating the beamforming and interference coordination problem as a non-convex optimization problem, the paper maximizes the signal-to-interference-plus-noise ratio (SINR) using deep reinforcement learning. The paper makes significant contributions by handling the race condition between base stations and by proposing an alternative to beamforming for improving non-line-of-sight (NLOS) transmission performance. The approach also solves the power allocation problem to maximize the sum-rate of user equipments (UEs) under transmission-power and quality-target constraints using deep reinforcement learning. A convolutional neural network is utilized to estimate the Q-function of the deep reinforcement learning problem. The central location handling the race condition is similar to the coordinated multipoint approach: it uses the user-reported downlink SINR and coordinates to resolve the race condition in time sub-exponential in the number of antennas. This research is significant because it addresses a critical problem in wireless communication networks and proposes a novel approach to tackle it. The findings have practical implications for the development and optimization of 5G networks, which require efficient and effective techniques to improve the network's QoS, and they provide a starting point for further research toward more advanced and sophisticated techniques for optimizing wireless communication networks. Overall, the proposed deep reinforcement learning-based approach shows a significant improvement in the quality of service (QoS) for user equipment in 5G networks. It enables joint beamforming and power control while coordinating interference from other base stations to optimize the users' SINR. This optimization improves the network's spectral efficiency, resulting in better network performance and more satisfied users.
REFERENCES
[1] F. B. Mismar and B. L. Evans, "Q-Learning Algorithm for VoLTE Closed Loop Power Control in Indoor Small Cells," in Proc. Asilomar Conference on Signals, Systems, and Computers, Oct. 2018.
[2] L. Zhu, J. Zhang, Z. Xiao, X. Cao, D. O. Wu, and X. Xia, "Joint Power Control and Beamforming for Uplink Non-Orthogonal Multiple Access in 5G Millimeter-Wave Communications," IEEE Transactions on Wireless Communications, vol. 17, no. 9, pp. 6177–6189, Sep. 2018.
[3] C. Luo, J. Ji, Q. Wang, L. Yu, and P. Li, "Online Power Control for 5G Wireless Communications: A Deep Q-Network Approach," in Proc. IEEE International Conference on Communications, May 2018.
[4] R. Kim, Y. Kim, N. Y. Yu, S. Kim, and H. Lim, "Online Learning-based Downlink Transmission Coordination in Ultra-Dense Millimeter Wave Heterogeneous Networks," IEEE Transactions on Wireless Communications, vol. 18, no. 4, pp. 2200–2214, Mar. 2019.
[5] S. Wang, H. Liu, P. H. Gomes, and B. Krishnamachari, "Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 2, pp. 257–265, Jun. 2018.
[6] Y. Wang, M. Liu, J. Yang, and G. Gui, "Data-Driven Deep Learning for Automatic Modulation Recognition in Cognitive Radios," IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 4074–4077, Apr. 2019.
[7] S. Jang, H. Lee, and T. Q. S. Quek, "Deep Learning-Based Power Control for Non-Orthogonal Random Access," IEEE Communications Letters, Aug. 2019.
[8] P. Zhou, X. Fang, X. Wang, Y. Long, R. He, and X. Han, "Deep Learning Based Beam Management and Interference Coordination in Dense mmWave Networks," IEEE Transactions on Aerospace and Electronic Systems.
[9] F. B. Mismar and B. L. Evans, "Deep Learning in Downlink Coordinated Multipoint in New Radio Heterogeneous Networks," IEEE Wireless Communications Letters, vol. 8, no. 4, pp. 1040–1043, Aug. 2019.
[10] F. B. Mismar and B. L. Evans, "Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling," in Proc. IEEE International Conference on Communications Workshops, May 2018.
[11] J. Jee, G. Kwon, and H. Park, "Precoding Design and Power Control for SINR Maximization of MISO System With Nonlinear Power Amplifiers," IEEE Transactions on Vehicular Technology, vol. 69, no. 11, Nov. 2020.
[12] S. Fang, G. Chen, X. Xu, S. Han, and J. Tang, "Millimeter-Wave Coordinated Beamforming Enabled Cooperative Network: A Stochastic Geometry Approach," IEEE Transactions on Communications, 2021.
[13] K. Chen-Hu, Y. Liu, and A. García Armada, "Non-Coherent Massive MIMO-OFDM Downlink Based on Differential Modulation," IEEE Transactions on Vehicular Technology, 2020.
[14] M. K. Sharma, A. Zappone, M. Debbah, and M. Assaad, "Deep Learning Based Online Power Control for Large Energy Harvesting Networks," IEEE Transactions on Cognitive Communications and Networking, 2019.
[15] A. Alkhateeb, S. Alex, P. Varkey, Y. Li, Q. Qu, and D. Tujkovic, "Deep Learning Coordinated Beamforming for Highly-Mobile Millimeter Wave Systems," IEEE Signal Processing, 2019.
[16] F. B. Mismar, J. Choi, and B. L. Evans, "A Framework for Automated Cellular Network Tuning with Reinforcement Learning," IEEE Transactions on Communications, 2019.
[17] W. Xia, G. Zheng, Y. Zhu, J. Zhang, and J. Wang, "A Deep Learning Framework for Optimization of MISO Downlink Beamforming," IEEE Transactions on Communications, 2020.
[18] T. Maksymyuk, J. Gazda, O. Yaremko, and D. Nevinskiy, "Deep Learning Based Massive MIMO Beamforming for 5G Mobile Network," in Proc. 2018 IEEE 4th International Symposium on Wireless Systems.
Copyright © 2023 Santhosh K, Sushil Kumar G N, Umesh R G, Suraksha M S, Dr. Praveen Kumar K V. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET51522
Publish Date : 2023-05-03
ISSN : 2321-9653
Publisher Name : IJRASET