Back Propagation (BP) algorithm research is now very active in the Artificial Neural Network (ANN) and machine learning communities. It has produced impressive results in a wide range of applications, including image compression, pattern recognition, time series prediction, sequence identification, data filtering, and other intelligent tasks carried out by the human brain. In this paper, we give a quick introduction to ANN and BP algorithms, explain how they operate, and highlight some of the ongoing research efforts and the difficulties they face.
I. INTRODUCTION
Artificial Neural Networks (ANNs) are computational techniques inspired by the way the human brain learns. An ANN consists of small processing units known as artificial neurons, which can be trained to carry out complex calculations and which process information much as biological neurons in the brain do. Humans learn to read, write, comprehend speech, recognise patterns, and distinguish between them largely by imitation; in a similar manner, ANNs are trained rather than programmed. Back Propagation (BP) networks form a large family of ANNs whose architecture is made up of numerous interconnected layers. BP ANNs are trained with a steepest-descent (gradient) learning algorithm, and given an appropriate number of hidden units they can minimise the error of highly complex nonlinear functions.
Many complex real-world problems have been solved effectively by ANNs, such as forecasting future trends from a company's vast historical data. ANNs have been adopted successfully across engineering and applied disciplines, including biological modelling, decision and control, health and medicine, and manufacturing.
Both the Feed Forward ANN and the Feedback (Recurrent) ANN are members of the BP family. We look only at the Feed Forward BP ANN in this paper, because it is crucial to understand it before studying Feedback BP.
II. LITERATURE SURVEY
In general, the output value produced by the feed-forward computation of a neural network is not close to the target (teacher) output value. The difference between the target and the actual feed-forward values gives rise to an error. For the neural network model to provide the best prediction output within an acceptable tolerance, this error must be kept to a minimum, and backpropagation can be used for this.
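As an illustrative sketch (not taken from the paper), the following Python fragment computes such an error using a squared-error measure; the target and output vectors are placeholder values.

import numpy as np

# Illustrative sketch: the error a backpropagation network tries to minimise,
# taken here as half the squared difference between the target (teacher)
# output and the actual feed-forward output. Values are placeholders.
target = np.array([0.01, 0.99])   # desired (teacher) output
actual = np.array([0.75, 0.77])   # output produced by the forward pass
error = 0.5 * np.sum((target - actual) ** 2)
print(f"error = {error:.4f}")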
Key backpropagation milestones:
Henry J. Kelley and Arthur E. Bryson derived the fundamental concept of continuous backpropagation in the context of control theory in 1961.
A multi-stage dynamic system optimisation approach was presented by Bryson and Ho in 1969.
Hopfield introduced his concept of a neural network in 1982.
Backpropagation gained wide recognition in 1986 thanks to the work of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams.
Wan was the first person to win an international pattern recognition contest using the backpropagation approach, in 1993.
III. METHODOLOGY
The conventional Back-Propagation Neural Network (BPNN) algorithm is commonly used to address a variety of real-world problems. The BPNN calculates the errors of the output layer in order to identify the errors in the hidden layers. Back-propagation is an extremely effective approach for problems where the relationship between the outputs and the inputs cannot be determined analytically. It has been used effectively in a variety of applications because of its adaptability and learning capability [7].
A Back-Propagation network consists, at a minimum, of an input layer, at least one intermediate hidden layer, and an output layer.
Units are typically wired in a feed-forward manner, with the input units fully connected to the hidden units and the hidden units fully connected to the units in the output layer.
The input pattern is presented to the input layer of the network. These inputs are propagated through the network until they reach the output units, and this forward pass produces the actual (predicted) output pattern. Because back propagation is a supervised learning process, the desired outputs are provided as part of the training vector. An error signal is generated by subtracting the actual network outputs from the desired outputs. The back propagation step uses this error signal as the starting point to propagate errors back through the neural network, calculating each hidden processing unit's contribution and determining the adjustment necessary to obtain the desired output. When the connection weights are modified, the neural network has just "learned" from an experience. Once the network has been trained, any input pattern will produce the appropriate output.
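A minimal sketch of this forward pass, error signal, and weight adjustment is given below, assuming sigmoid activations, a squared-error cost, and illustrative sizes and values that are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.05, 0.10])          # input pattern
t = np.array([0.01, 0.99])          # desired (target) output pattern
W_ih = rng.normal(size=(2, 2))      # input  -> hidden connection weights
W_ho = rng.normal(size=(2, 2))      # hidden -> output connection weights
b_h = np.zeros(2)                   # hidden-layer bias
b_o = np.zeros(2)                   # output-layer bias
eta = 0.5                           # learning rate

# Forward pass: the input is propagated until it reaches the output units.
h = sigmoid(x @ W_ih + b_h)
y = sigmoid(h @ W_ho + b_o)

# Error signal: actual network outputs compared with the desired outputs.
delta_o = (y - t) * y * (1.0 - y)

# Backward pass: each hidden unit's contribution to the error.
delta_h = (delta_o @ W_ho.T) * h * (1.0 - h)

# Weight and bias adjustments: the network "learns" from this experience.
W_ho -= eta * np.outer(h, delta_o)
b_o  -= eta * delta_o
W_ih -= eta * np.outer(x, delta_h)
b_h  -= eta * delta_h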
IV. LEARNING
Given these assumptions, it is useful to describe a BP's operation in detail. Consider a very basic BP with two input units (I1, I2), two hidden units (H1, H2), and two output units (O1, O2), i.e. a BP with three layers (Figure 1). Moreover, every unit in each layer is connected by weights to all the units in the next layer. The weights wI,H connect the Input units to the Hidden units; concretely, these are w1,1, w1,2, w2,1, w2,2. The Hidden units, in turn, are connected to the Output units through the weights wH,O, namely w3,5, w3,6, w4,5, w4,6. A BP requires several conditions in order to operate; the first of these concern its activation (a small code sketch of this setup appears after the list below):
The BP must be exposed to a specific type of input for at least a certain amount of time;
For the entire time that the BP is exposed to that input, the Output units must be assumed to tend towards a specific objective, known as the Target;
Every connection between its units, i.e. every weight, must carry a value, even a random one at the beginning.
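The sketch below sets up this three-layer BP using the weight labels quoted above; the input, target, and initial weight values are illustrative, and the indexing convention (w1,1 connecting I1 to H1, and so on) is an assumption since Figure 1 is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)

I1, I2 = 1.0, 0.0              # input units: the pattern shown to the BP
T1, T2 = 0.0, 1.0              # Targets the output units O1, O2 should tend to

# Condition 3: every connection carries a value, even a random one at the start.
w11, w12, w21, w22 = rng.uniform(-1, 1, 4)   # input  -> hidden weights w_{I,H}
w35, w36, w45, w46 = rng.uniform(-1, 1, 4)   # hidden -> output weights w_{H,O}

# Net input to the Hidden units H1, H2 (assumed indexing: w11 = I1 -> H1, etc.)
net_H1 = I1 * w11 + I2 * w21
net_H2 = I1 * w12 + I2 * w22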
V. BACK PROPAGATION ALGORITHM AND COMPUTATIONAL PROCESS
[Figure 1] above illustrates how the backpropagation mechanism operates in practice. The following calculation processes take place during the backward propagation (a code sketch of this loop follows the list):
Find the Error Rate: here, we compare the model output to the actual (desired) output.
Minimum Error: check whether the error has been minimised.
Update the Weights: if the error exceeds the permissible range, update the weights and biases, then check the error again. Repeat this process until the error becomes small.
Neural Network Model: The model is ready to be used for data forecasting once the error rate is within an acceptable range.
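A minimal sketch of this loop is given below. The callables forward and backward_update are stand-ins for the forward-pass and weight-update computations described in Sections IV and VII, and the tolerance is an illustrative choice.

import numpy as np

def train_until_tolerance(forward, backward_update, inputs, targets,
                          tolerance=1e-3, max_iterations=10_000):
    for iteration in range(max_iterations):
        outputs = forward(inputs)                        # 1. find the error rate
        error = 0.5 * np.sum((targets - outputs) ** 2)
        if error <= tolerance:                           # 2. minimum error reached?
            return iteration, error                      # 4. model ready for forecasting
        backward_update(inputs, targets, outputs)        # 3. update weights and biases
    return max_iterations, error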
VI. WORKING WITH BACK PROPAGATION
[Figure 2] above shows the Input, Hidden, and Output layers of the feed-forward artificial neural network. Each layer contains two nodes with their corresponding weights. The bias node is fully connected to all the other nodes in the model.
The network's notation is as follows (a code sketch using this notation follows the list):
X1, X2: Input nodes
W1 to W8: Weights of respective layers from input to output
H1, H2: Hidden Layer Nodes with net out from respective inputs
HA1, HA2: Hidden Layer Nodes with activation output
O1, O2: Output Layer Nodes with net out from respective inputs
OA1, OA2: Output Layer Nodes with activation output
B1, B2: Bias Nodes for Hidden and Output layers, respectively
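A minimal sketch of the forward (net input and activation) computation in this notation follows. Since Figure 2 is not reproduced here, the exact wiring of W1 to W8 and the numeric values are illustrative assumptions, and a sigmoid activation is assumed.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X1, X2 = 0.05, 0.10                      # input nodes
W1, W2, W3, W4 = 0.15, 0.20, 0.25, 0.30  # assumed input  -> hidden weights
W5, W6, W7, W8 = 0.40, 0.45, 0.50, 0.55  # assumed hidden -> output weights
B1, B2 = 0.35, 0.60                      # bias for hidden and output layers

H1 = X1 * W1 + X2 * W2 + B1              # hidden-layer net inputs
H2 = X1 * W3 + X2 * W4 + B1
HA1, HA2 = sigmoid(H1), sigmoid(H2)      # hidden-layer activation outputs

O1 = HA1 * W5 + HA2 * W6 + B2            # output-layer net inputs
O2 = HA1 * W7 + HA2 * W8 + B2
OA1, OA2 = sigmoid(O1), sigmoid(O2)      # output-layer activation outputs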
VII. BACK-PROPAGATION ALGORITHM TRAINING
Supervised training of the feed-forward back-propagation network uses a finite number of pattern pairs, each consisting of an input pattern and a desired (target) output pattern. An input pattern is presented at the input layer. The neurons here pass the pattern activations on to the neurons in the following layer, a hidden layer. The activations of the hidden-layer neurons are determined by the weights and the inputs, and their outputs are obtained by applying a bias and possibly a threshold function. These hidden-layer outputs become the inputs to the output neurons, which process them using an optional bias and a threshold function. The activations of the output layer constitute the network's overall output. The computed output pattern is compared with the target pattern, a function of this error is calculated for each component of the pattern, and the weights of the connections between the hidden layer and the output layer are then adjusted.
The connection weights between the input and hidden layers are adjusted in a manner analogous to the treatment of the output error. The process is repeated for each pattern pair assigned for network training; each iteration through all training patterns is called a cycle or epoch. The operation is repeated as many times as necessary until the error falls within the required tolerance. The adjustment to a neuron's threshold value is obtained by multiplying the calculated output error at that output neuron by the learning rate parameter used in this layer's weight adjustment calculation. After learning the correct classification for a set of inputs from a training set, a Back-Propagation network can be tested on a second set of inputs to assess how well it handles untrained patterns.
Hence, the network's ability to generalise is crucial when using back-propagation learning.
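A minimal sketch of this epoch-based training followed by a generalisation check on untrained patterns is given below. The callables train_on_pattern (assumed to update the weights and return the network output) and predict are stand-ins for the single-pattern update and forward pass sketched earlier; the data and tolerance are illustrative placeholders.

import numpy as np

def run_training(train_on_pattern, predict, train_pairs, test_pairs,
                 tolerance=1e-3, max_epochs=5_000):
    for epoch in range(max_epochs):
        # One epoch (cycle): present every training pattern pair once.
        epoch_error = 0.0
        for x, t in train_pairs:
            y = train_on_pattern(x, t)           # forward pass plus weight update
            epoch_error += 0.5 * np.sum((t - y) ** 2)
        if epoch_error <= tolerance:             # stop once within tolerance
            break

    # Generalisation check on a second set of patterns never used in training.
    test_error = sum(0.5 * np.sum((t - predict(x)) ** 2) for x, t in test_pairs)
    return epoch, epoch_error, test_error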
VIII. APPLICATIONS OF BACKPROPAGATION
Classification Problems: here the goal is to determine whether a particular data point belongs to Class 1, Class 2, or Class 3. The neural network is trained to find the pattern by assigning random points to a particular class. When training is complete, it applies what it has learned to classify new points accurately.
Function Approximation: here the network approximates a particular function. The goal is to learn the true mapping, and the network is trained on all the available data. After training, the network accurately estimates the value of the Gaussian function (a code sketch of this use case appears after these examples).
Time-Series Forecasting: here the goal is to build a neural network that can predict a value from a given piece of time-series data (for example, stock market forecasting based on past trends). To cast this as a learning problem, the network's inputs are arranged into chunks of the series, and the output is the piece of data that comes immediately after each chunk.
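As a minimal sketch of the function-approximation use case, the fragment below fits a small network to samples of a Gaussian. scikit-learn's MLPRegressor is used here only as a convenient stand-in for a hand-written BPNN, and the layer size, learning rate, and sample count are illustrative choices.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Samples of the target function: a Gaussian bump on [-3, 3].
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.exp(-x.ravel() ** 2)

# A small back-propagation-trained network with one hidden layer.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="sgd", learning_rate_init=0.05,
                   max_iter=10_000, random_state=0)
net.fit(x, y)

print("max absolute error:", np.max(np.abs(net.predict(x) - y)))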
IX. CONCLUSION
The back-propagation neural network (BPNN) is a supervised-learning neural network model that is widely used in numerous engineering sectors worldwide. While it is employed in most practical ANN applications and performs rather well, it suffers from certain issues, notably slow convergence and convergence to local minima. Because of this, applying artificial neural networks to complex problems remains quite difficult.
In this article, we have discussed how a backpropagation neural network works when dealing with large data sets. Its performance can be improved by altering the number of hidden neurons and the learning rate. Training on a large quantity of data takes a long time because of its iterative, gradient-based training, which is slower than one would like. We cannot claim that a single network suits every type of data set; one should keep testing the data on various neural networks to discover which one suits it best.
REFERENCES
[1] H. J. Kelley and A. E. Bryson, the fundamental concept of continuous backpropagation in relation to control theory, (1961).
[2] A. E. Bryson and Y.-C. Ho, a multi-stage dynamic system optimisation method, (1969).
[3] J. J. Hopfield, his neural network concept, (1982).
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, the work that brought backpropagation to wide recognition, (1986).
[5] Wan, the first use of the backpropagation approach to win an international pattern recognition contest, (1993).
[6] Budiharjo S. Triyuni W. Agus Perdana, and H. Tutut, “Predicting tuition fee payment problem using backpropagation neural network model,” (2018)
[7] M. Huan, C. Ming, and Z. Jianwei, “Study on the prediction of real estate price index based on hhga-rbf neural network algorithm,” International Journal of u- and e-Service, Science and Technology, SERSC Australia, ISSN: 2005-4246 (Print), ISSN: 2207-9718 (Online), vol.8, no.7, July, (2015) DOI: 10.14257/ijunesst.2015.8.7.11.
[8] Muhammad, A. Khubaib Amjad, and H. Mehdi, “Application of data mining using artificial neural network: survey,” International Journal of Database Theory and Application, vol.8, no.1, (2015) DOI: 10.14257/ijdta.2015.8.1.25.
[9] P. Jong, “The characteristic function of CoreNet (Multi-level single-layer artificial neural networks),” Asia-Pacific Journal of Neural Networks and Its Applications, vol.1, no.1, (2017) DOI: 10.21742/AJNNIA.2017.1.1.02.
[10] L. Wei, “Neural network model for distortion buckling behaviour of cold-formed steel compression members,” Helsinki University of Technology Laboratory of Steel Structures Publications 16, (2000)
[11] The concept of Back-Propagation Learning by examples, from http://hebb.cis.uoguelph.ca/~skremer/Teaching/27642/BP/node3.html