Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Raghav Bhutada, Shashwat Vora
DOI Link: https://doi.org/10.22214/ijraset.2023.52370
Technologies based on lithium-ion batteries are essential in transforming the economy and lowering reliance on fossil fuels. Electricity is increasingly used in manufacturing, transportation, and services; according to the European Commission, within a decade everything in Europe that can be electrified will be. To ensure the safe operation of battery-powered electronic devices and to guide users towards behaviours that can increase battery life and reusability, it is essential to be able to estimate the state of charge (SOC) accurately. In this study, we explore how machine learning models can forecast the SOC of cylindrical Li-ion batteries while taking into account a range of cells and charge-discharge cycles.
I. INTRODUCTION
The transition from fossil fuels to renewable energy is widely recognised as a desired change in our society. To reduce the emission of carbon dioxide (CO2) from conventional transportation, the development of Electric Vehicles (EVs) is growing rapidly. Battery technology will be one of the main key enablers of the renewable energy transition.
Lithium-ion batteries have been widely used in electric vehicles. It is projected that the worldwide EV stock will grow to 140 million by 2030 [1]. The lithium-ion (Li-ion) battery is the most commonly adopted power supply for EVs because of its high energy density, long lifespan, light weight, and low self-discharge rate [19]. Several factors can affect the performance and safety of a Li-ion battery, such as ambient temperature, over-charge, or over-discharge. Misuse of the battery can lead to a shorter battery life. To overcome these issues, Battery Management Systems (BMS) are applied to guarantee the reliability and stability of Li-ion battery usage.
One important parameter for BMS battery health management is the battery State of Charge (SOC), whose estimation helps keep the battery from over-charge and over-discharge. SOC indicates how much charge is available in the battery and is expressed as a percentage. This value is expected to stay between 0 and 100 percent, although it is possible to exceed these limits in an over-discharge or over-charge situation [24]. The battery itself does not directly provide information on its SOC value.
The estimation of the SOC value is complicated and error-prone due to the indirect measurement and the non-linear nature of the electrochemical reactions in the battery. Relevant information, such as the measured discharge current, voltage, and ambient temperature, can be used to estimate the SOC indirectly [5].
Inaccurate estimation of SOC can lead to unstable EV performance and even shorten battery life, thereby reducing the environmental benefits of electrification.
In general, the SOC estimation techniques studied in the literature can be divided into three categories: direct methods, model-based methods, and data-driven methods [25]. The direct methods look for a relationship between SOC and measurable battery characteristic parameters, so that the SOC value can be estimated from the observed parameters. The model-based SOC estimation methods mostly focus on modelling the chemical and electrical properties of the battery. Typically, the model-based methods are used in combination with adaptive filters such as the Kalman filter, the H-infinity filter, and the particle filter. Model-based methods require an extensive understanding of the electrochemical properties of the battery domain, which limits their practical use for SOC prediction.
This work proposes a data-driven approach for SOC estimation based on Deep Learning techniques. Deep learning, which can approximate non-linear functions, is a widely adopted data-driven method for tackling the battery SOC estimation problem [27].
Given an adequate amount of training data and an appropriate configuration, the SOC value can be predicted accurately without the need for a sophisticated electrochemical model. Various kinds of neural networks (NNs), such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have been applied to this problem; in this work we adopt a recurrent architecture, described in the next section.
II. METHODOLOGY
A. RNN
Recurrent neural networks (RNNs) are a subset of neural networks that allow information to be retained across time. In contrast to feedforward neural networks, which are acyclic directed graphs, RNNs feature connections within layers that create cyclic directed graphs. This gives the network a state and hence memory: along with the current time step, the data from the previous state is used as input. Because the links between the present and the past are taken into account, RNNs are effective for sequential data prediction. Fig. 1 illustrates the architecture of an RNN for SOC estimation unrolled over time.
Battery characteristics such as voltage, current, and temperature are contained in the input vector at time step t, designated as Input_t; h_t represents the hidden state at time step t, while SOC_t stands for the output SOC value at time step t. A typical approach for time series, termed many-to-many, is shown in Fig. 1, where the network receives multiple input steps and produces one prediction at each step. There are additional configurations, such as many-to-one and one-to-many: in the first case multiple time steps are fed in to produce a single output, while in the second case a single input is used to generate multiple time steps. We adopted the many-to-many technique for the first model (low frequency) since the two battery datasets have very different sampling frequencies.
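For reference, the recurrence computed by a simple RNN cell at each step can be written as follows (a generic formulation; the weight matrices W_h, W_x, W_y, the biases, and the non-linearity \phi are illustrative and not taken from the paper):

\begin{aligned}
h_t &= \phi\left(W_h\, h_{t-1} + W_x\, \mathrm{Input}_t + b_h\right) \\
\mathrm{SOC}_t &= W_y\, h_t + b_y
\end{aligned}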
B. LSTM
The Long Short-Term Memory (LSTM) network is a type of RNN widely used to learn long-term dependencies without suffering from the exploding and vanishing gradient problems. The forward pass of an LSTM cell can be defined by the following steps.
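The equations below give the standard LSTM forward pass [11], reproduced here for reference and written with the symbols defined next; the exact gate parameterisation used in the implementation is assumed to follow this common form:

\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right) \\
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right) \\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right) \\
\tilde{c}_t &= \tanh\!\left(W_c\,[h_{t-1}, x_t] + b_c\right) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}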
In these equations, f_t, i_t, and o_t are the forget gate, input gate, and output gate; c_t and h_t are the cell state and hidden state at time step t, respectively; σ is the sigmoid function; ⊙ is the Hadamard product; W denotes a weight matrix; x_t is the input vector at time step t; and b is a bias term. The first step in the LSTM cell is to determine what information will be forgotten from the previous cell state c_{t−1}. The forget gate uses a sigmoid function, whose outputs always lie between 0 and 1.
The result therefore represents how much should be forgotten, with 0 discarding everything and 1 keeping everything from the previous cell state. As shown in the equations, the decisions of the gates are based on the current input and hidden state as well as on the network's weights and biases.
C. Proposed LSTM Approach
The paper proposes two different deep LSTM models for two different datasets, since the datasets have different cycle lengths. The models use the Scaled Exponential Linear Units (SELU) activation function in all LSTM cells and hidden dense layers. In the output layer, a linear activation function is applied to generate the final SOC value.
The first model is used for the UNIBO dataset. It consists of three LSTM layers followed by two dense layers that map the learned states to the desired SOC output. The three LSTM layers have 256, 256, and 128 cells, respectively. The architecture of the first model is shown in Fig. 2. The input layer contains battery parameters, including voltage, current, and temperature at each step. As it is a deep LSTM network, each LSTM layer returns a sequence, and the SOC value is estimated at every step using the many-to-many approach.
The input time series for the deep LSTM network is defined as [Input_t0, Input_t1, …, Input_tn], where n is the number of steps in the entire discharge cycle, and Input_t = [V_t, I_t, T_t] represents voltage, current, and temperature at each time step, respectively.
Although the entire discharge cycle is fed to the network, only the part that precedes the step under examination is available as input for SOC estimation. In other words, the hidden state from previous steps and the current input at step t are used to estimate the output at step t.
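A minimal Keras sketch of this architecture is given below, consistent with the description above; the hidden dense width of 64 and the use of tf.keras are assumptions, since the paper only specifies the LSTM widths, the SELU activations, and the linear output.

import tensorflow as tf
from tensorflow.keras import layers, models

# Deep LSTM for many-to-many SOC estimation: each time step receives
# [voltage, current, temperature] and produces one SOC value.
model = models.Sequential([
    layers.Input(shape=(None, 3)),                               # variable-length discharge cycle, 3 features
    layers.LSTM(256, activation='selu', return_sequences=True),
    layers.LSTM(256, activation='selu', return_sequences=True),
    layers.LSTM(128, activation='selu', return_sequences=True),
    layers.Dense(64, activation='selu'),                         # hidden dense width is an assumption
    layers.Dense(1, activation='linear'),                        # SOC estimate at every step
])

Because every LSTM layer returns its full sequence, the dense layers are applied at each time step, which implements the many-to-many mapping described above.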
D. Data Normalization
Since the input features have different ranges (for example, temperature takes much larger values than voltage and current), the trained model could give more importance to a feature simply because of its larger magnitude. To avoid this problem, the min-max normalization method is used to scale all input features to a common range.
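A minimal sketch of this scaling step is shown below, assuming NumPy arrays; the per-feature minima and maxima would be computed on the training set and reused for the test data (variable names are illustrative).

import numpy as np

def min_max_normalize(x, x_min, x_max):
    # Scale each feature column of x into [0, 1] using training-set statistics.
    return (x - x_min) / (x_max - x_min)

# Illustrative usage:
# x_min, x_max = train_data.min(axis=0), train_data.max(axis=0)
# train_scaled = min_max_normalize(train_data, x_min, x_max)
# test_scaled = min_max_normalize(test_data, x_min, x_max)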
E. Model Training
The proposed models are implemented using the Keras library [6]. The Adam algorithm [15] is chosen as the optimizer to update the network weights and biases, with the learning rate set to 0.00001. All proposed models are trained for up to 1000 epochs, but the training process stops earlier if the validation loss does not improve for 50 consecutive epochs. The Huber loss [13] is used as the loss function; it behaves quadratically for small errors and linearly for large ones.
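The training setup described above can be expressed in Keras roughly as follows (a sketch: x_train, y_train, x_val, y_val and the batch size are placeholders, and restoring the best weights is an assumption).

# Compile with Adam (learning rate 1e-5) and the Huber loss, then train with early stopping.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss=tf.keras.losses.Huber())

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=50,
                                              restore_best_weights=True)  # assumption

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=1000,
                    callbacks=[early_stop])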
III. RESULTS
In this section, the proposed deep LSTM models are trained and tested using the two datasets mentioned earlier. The performance of the models on each dataset is discussed, and the source code and results of the implementation are available.
To evaluate the proposed models, the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE) are used. The Mean Square Error (MSE) is the sum of the squared differences between the predicted and target values divided by the number of samples. RMSE is the square root of MSE, which brings the metric back to the same scale as MAE; it is more sensitive to outliers because squaring the error penalizes large deviations more heavily. MAE, on the other hand, is more robust to outliers because the error is not squared: it is an L1-type metric that averages the absolute differences between the predicted and target values. MAE is therefore more appropriate for problems where the training data contain outliers.
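Written explicitly, with N the number of samples, \widehat{\mathrm{SOC}}_i the estimated value, and \mathrm{SOC}_i the target value:

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\widehat{\mathrm{SOC}}_i - \mathrm{SOC}_i\right)^{2}},
\qquad
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\widehat{\mathrm{SOC}}_i - \mathrm{SOC}_i\right|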
A. UNIBO Powertools Dataset
The proposed model's performance is evaluated on the constant-current discharge tests of the UNIBO dataset. The training set for this dataset contains 7738 discharge cycles, on which the proposed model is trained. For evaluation, one cell is selected from each combination of test type (standard, high current, pre-conditioned) and nominal capacity as testing data. The testing data is not seen during the training process. The overall MAE and RMSE on all testing data are 0.69% and 1.34%, respectively.
To further examine the proposed model’s performance, Table 2 presents the performance of each test type. The standard test type with 4.0Ah nominal capacity and high current test type with 2.85Ah nominal capacity have the worst performance, as the dataset contains only two cell tests of this kind.
However, in other test types with sufficient data, the model can achieve accurate results with an RMSE lower than 1%. Figure 4, Figure 5, and Figure 6 show examples of SOC estimation results of the proposed model on the standard, high current, and preconditioned test types, respectively. The first and last discharge cycles within the entire test of each battery cell are presented to demonstrate the SOC estimation performance under different health statuses. The black line represents the actual observed SOC value, and the red dashed line represents the SOC value estimated by the proposed model. The model estimates the SOC of the 3.0Ah nominal capacity cells correctly and without large fluctuation in each of the three test types. Furthermore, the estimations of standard test types with 2.0Ah and 2.85Ah nominal capacity are accurate too. The proposed model is capable of estimating SOC under different battery health statuses. Good performance is achieved from the preconditioned test type, demonstrating that the storage temperature before testing does not significantly affect the battery discharging behaviour in terms of SOC estimation. However, there are some errors during the ending steps of standard 4.0Ah nominal capacity and high current 2.85Ah nominal capacity battery cell cycles. It is acceptable as there is only one training example of that kind of setup.
B. LG Dataset
The proposed model's performance was assessed on the LG 18650HG2 Li-ion battery dataset, which involved dynamic discharge currents. The training set included six mixed driving cycles for three temperatures: 0°C, 10°C, and 25°C. Three different time series lengths, with 300, 500, and 700 steps (which roughly equate to 30, 50, and 70 seconds in duration) were tested. The test set comprised a UDDS, an LA92, a US06 driving cycle, and one mixed driving cycle for each of the three different temperatures in the dataset. The 300-step model achieved an MAE and RMSE of 1.47% and 1.99%, respectively, while the 500-step model reached an MAE and RMSE of 1.54% and 2.12%. The 700-step model had an MAE and RMSE of 1.94% and 2.72%. All results were tested on data at all temperatures. Table 3 presents the model's performance under each temperature for different input lengths.
The best performance was obtained at room temperature (25°C) with 300 input steps, indicating that the battery's operation is most stable under this condition. The model learned the battery's behaviour under room temperature with the provided driving cycles without the need for a long history. However, for temperatures below room temperature, better performance was achieved with the 500-input model, indicating that increasing input steps could improve estimation results. The worst results were obtained with 700 input steps, suggesting that an appropriate increment of input steps should be selected carefully to avoid performance degradation. Fig. 7 shows the SOC estimation results on mixed driving cycles under 0°C, 10°C, and 25°C temperatures. The estimation results were competitive and without significant errors under the three temperatures. However, errors were observed in the ending steps of mixed cycles under 0°C temperature due to their more dynamic discharge pattern.
IV. FINAL REMARKS
In this paper, a deep LSTM NN is proposed to estimate SOC over two different Li-ion battery datasets. Discharge cycles with both constant and dynamic current under various ambient temperatures are used to train and test the proposed models. The evaluation results show that the proposed models can learn the battery's dynamic behaviour during discharge. Battery SOC can be estimated accurately from the measured voltage, current, and temperature values, with 1.34% and 1.99% RMSE for constant-current and dynamic-current discharge cycles, respectively. We have also shown that the proposed estimation is robust with respect to different State of Health (SOH) statuses. SOH is another important parameter for battery management; as future work, we suggest using deep LSTM networks for SOH estimation, as we believe they can be effective there as well.
REFERENCES
[1] International Energy Agency. 2020. Global EV Outlook 2020. OECD Publishing, Paris. 276 pages. https://doi.org/10.1787/d394399e-en
[2] United States Environmental Protection Agency. 2020. EPA Urban Dynamometer Driving Schedule (UDDS). https://www.epa.gov/emission-standards-reference-guide/epa-urban-dynamometer-driving-schedule-udds
[3] Christian Campestrini, Thomas Heil, Stephan Kosch, and Andreas Jossen. 2016. A comparative study and review of different Kalman filters by applying an enhanced validation method. Journal of Energy Storage 8 (2016), 142–159. https://doi.org/10.1016/j.est.2016.10.004
[4] Ephrem Chemali, Phillip J. Kollmeyer, Matthias Preindl, Ryan Ahmed, and Ali Emadi. 2018. Long Short-Term Memory Networks for Accurate State-of-Charge Estimation of Li-ion Batteries. IEEE Transactions on Industrial Electronics 65, 8 (2018), 6730–6739. https://doi.org/10.1109/TIE.2017.2787586
[5] K. W. E. Cheng, B. P. Divakar, Hongjie Wu, Kai Ding, and Ho Fai Ho. 2011. Battery-Management System (BMS) and SOC Development for Electrical Vehicles. IEEE Transactions on Vehicular Technology 60, 1 (2011), 76–88. https://doi.org/10.1109/TVT.2010.2089647
[6] François Chollet and others. 2015. Keras. https://keras.io
[7] Shengmin Cui, Xiaowa Yong, Sanghwan Kim, Seokjoon Hong, and Inwhee Joe. 2020. An LSTM-Based Encoder-Decoder Model for State-of-Charge Estimation of Lithium-Ion Batteries. In Intelligent Algorithms in Software Engineering, Radek Silhavy (Ed.). Springer International Publishing, Cham, 178–188.
[8] Wim De Mulder, Steven Bethard, and Marie-Francine Moens. 2015. A survey on the application of recurrent neural networks to statistical language modeling. Computer Speech and Language 30, 1 (2015), 61–98. https://doi.org/10.1016/j.csl.2014.09.005
[9] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, Piscataway, 6645–6649. https://doi.org/10.1109/ICASSP.2013.6638947
[10] M.A. Hannan, M.S.H. Lipu, A. Hussain, and A. Mohamed. 2017. A review of lithium-ion battery state of charge estimation and management system in electric vehicle applications: Challenges and recommendations. Renewable and Sustainable Energy Reviews 78 (2017), 834–854. https://doi.org/10.1016/j.rser.2017.05.001
[11] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
[12] Dickson N. T. How, M. A. Hannan, M. S. Hossain Lipu, and Pin Jern Ker. 2019. State of Charge Estimation for Lithium-Ion Batteries Using Model-Based and Data-Driven Methods: A Review. IEEE Access 7 (2019), 136116–136136. https://doi.org/10.1109/ACCESS.2019.2942213
[13] Peter J. Huber. 1992. Robust Estimation of a Location Parameter. Springer New York, New York, NY, 492–518. https://doi.org/10.1007/978-1-4612-4380-9_35
[14] Asifullah Khan, Anabia Sohail, Umme Zahoora, and Aqsa Saeed Qureshi. 2020. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review 53, 8 (2020), 5455–5516.
Copyright © 2023 Raghav Bhutada, Shashwat Vora. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET52370
Publish Date : 2023-05-16
ISSN : 2321-9653
Publisher Name : IJRASET