Ideally pure, noise-free signals exist only on paper. While several techniques can denoise a given signal to some degree, it is important that such techniques remain compatible with as many devices as possible. This article describes an approach to denoising with an autoencoder, using image-processing techniques and deep-learning algorithms. With an autoencoder, noise reduction is not accomplished by a conventional filter whose output is essentially the same signal that was supplied as input; instead, the autoencoder learns its own representation through backpropagation. The focus of this article is on techniques that are interchangeable, i.e., that work for any signal while offering reliability, efficiency, and compatibility with a wide range of devices.
I. INTRODUCTION
An autoencoder is an artificial neural network designed to automatically encode an image by compressing the input into a latent space representation (LSR) vector. Once the input image has been converted into this LSR vector, its features can be extracted. The primary goal is to make the image crystal clear by eliminating the extra turbulence: the network can be applied to a signal to reduce or eliminate image noise. The main reason for removing noise is to force the hidden layers of the autoencoder to learn more robust features, become more efficient, and reduce the risk of overfitting.
An autoencoder is built from two blocks: the first is the encoder section, also known as encoding, and the second is the decoder section, also known as decoding. [1] The encoding section takes a signal and produces a compressed signal in the form of an LSR vector. The decoder takes this LSR vector as input and produces an image as output.
According to Fig. 1, the input signal, denoted sig, passes through the encoding section and becomes E(sig); this output serves as the LSR vector s. [2] The vector s is then processed by the decoder, becoming D(s), which is the final output o.
Abstractly, the input and the output should be identical, but the output is not the original: some properties of the signal are lost along the way, so it is not exactly the same as the input.
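Written compactly, with E and D denoting the encoder and decoder (notation assumed here for clarity rather than taken from Fig. 1):

$$ s = E(\mathrm{sig}), \qquad o = D(s) = D(E(\mathrm{sig})) $$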
As mentioned above, such algorithms operate on the signal while it is in LSR format. This is because context can be derived from the signal at that stage, since everything has been reduced to a set of numbers.
When an image passes through the autoencoder, it is first compressed in the encoding block, where it is converted into an LSR vector. That vector is then passed along the fully connected layers of the decoding block, which increase its size and reconstruct the output image; some small features are lost in this compression and decompression. [3] When a noisy image passes through such an autoencoder, the output image is effectively a clean version of the same input. This is because the autoencoder omits some of the features, for example the added noise.
There are losses, but what is lost is precisely the noise present in the input signal. Building on this idea, this paper develops a system to remove or reduce noise from an image, confirms the plausibility of the idea, and discusses its extension.
II. MODEL FOR IMAGE NOISE REDUCTION
In an ICML paper from 2008, the authors explained that adding a certain amount of noise to the signal ultimately helps make the inner layers more robust. An autoencoder can recover the original signal even after turbulence has been purposely added to it. The same can be done with an autoencoder trained on a series of images: it returns the same image with the noise level attenuated. Denoising with an autoencoder not only improves image quality, it also improves the accuracy of a downstream recognizer.
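One standard way to write this denoising objective, with ε denoting the injected noise (a sketch; the exact formulation in the cited paper may differ in detail):

$$ \min_{E,\,D} \; \mathbb{E}\left[ \left\lVert x - D\big(E(x + \varepsilon)\big) \right\rVert^{2} \right] $$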
Depending on the signal's characteristics, noise may need to be added. If the signal already contains noise, this step is not necessary. The following are a few examples of external noise:
Noise caused by a poor-quality or defective image sensor
Any deviation in colour or brightness of the image
Quantization noise
Noise generated during format conversion
Noise introduced during scanning or post-processing
[2] This technique removes noise from the signal and can alternatively be viewed as a pre-processing filter. [4] A good pre-processing filter can remove a great deal of noise. To obtain the noise-free version, pre-processing is performed after the autoencoder has been trained on a collection of noisy images.
[5] The core of the suggested model is the decoder portion of the autoencoder. Fine features of the image block (in this case, the noise) are excluded due to the reconstruction loss. In essence, however, an encoder is still required to transform the image into a format the decoder can understand.
III. DATASET PREPARATION
[1] The MNIST database is chosen as the input for the autoencoder. This is a widely used database of 60,000 handwritten-digit images, each 28 x 28 pixels in grayscale.
This dataset is divided into two parts: a training set and a test set. Noise drawn from a normal distribution centred at 0.5 with a standard deviation of 0.5 is deliberately added to the test set. The noise is added so that the autoencoder can be trained to denoise its input image.
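A minimal sketch of loading the dataset, assuming the copy of MNIST bundled with Keras (the variable names are illustrative, not taken from the paper's code):

```python
from tensorflow.keras.datasets import mnist

# MNIST as bundled with Keras: 60,000 training and 10,000 test images,
# each a 28x28 grayscale handwritten digit (labels are not needed here).
(train_x, _), (test_x, _) = mnist.load_data()
```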
IV. CONVOLUTIONAL AUTOENCODER IMPLEMENTATION
A. Encoder (Compression)
A convolutional autoencoder implementation includes a static method that accepts the image dimensions (width, height, and depth), a tuple of filter values, and the required number of neurons in the fully connected LSR vector. [7] The input shape and channel dimension are then initialized with channels-last ordering. The input data is passed through the autoencoder's multiple layers, and the final output is flattened to form the LSR vector.
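A sketch of such an encoder in Keras; the builder name, the default filter tuple (32, 64), and the latent dimension of 16 are illustrative assumptions rather than values given in the paper:

```python
from tensorflow.keras import Input, Model, layers

def build_encoder(width=28, height=28, depth=1, filters=(32, 64), latent_dim=16):
    # Channels-last input shape: (height, width, depth).
    inputs = Input(shape=(height, width, depth))
    x = inputs
    for f in filters:
        # Strided convolutions progressively shrink the spatial dimensions.
        x = layers.Conv2D(f, (3, 3), strides=2, padding="same")(x)
        x = layers.LeakyReLU()(x)
        x = layers.BatchNormalization()(x)
    # Flatten the last feature map and project it down to the LSR vector.
    x = layers.Flatten()(x)
    latent = layers.Dense(latent_dim, name="lsr_vector")(x)
    return Model(inputs, latent, name="encoder")
```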
B. Decoder (Reconstruction)
The LSR vector is used as the decoder's input. The decoder then re-creates a three-dimensional volume from it with fully connected layers. This volume grows as it passes through transposed convolution layers; Leaky ReLU is then applied and normalization is performed.
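A matching decoder sketch under the same assumptions as the encoder above (the 7 x 7 volume follows from a 28 x 28 input passing through two stride-2 convolutions):

```python
from tensorflow.keras import Input, Model, layers

def build_decoder(width=28, height=28, depth=1, filters=(32, 64), latent_dim=16):
    # The LSR vector is the decoder's only input.
    latent_inputs = Input(shape=(latent_dim,))
    # A fully connected layer re-creates a small 3-D volume; with the two
    # stride-2 convolutions of the encoder sketch, 28x28 shrinks to 7x7.
    x = layers.Dense(7 * 7 * filters[-1])(latent_inputs)
    x = layers.Reshape((7, 7, filters[-1]))(x)
    for f in reversed(filters):
        # Transposed convolutions grow the volume back toward the image size.
        x = layers.Conv2DTranspose(f, (3, 3), strides=2, padding="same")(x)
        x = layers.LeakyReLU()(x)
        x = layers.BatchNormalization()(x)
    # Map back to the original channel depth with pixel values in [0, 1].
    outputs = layers.Conv2DTranspose(depth, (3, 3), padding="same", activation="sigmoid")(x)
    return Model(latent_inputs, outputs, name="decoder")
```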
V. AUTOENCODER TRAINING
A. Adding Noise to the MNIST Dataset
The pixel intensities of the images are scaled from zero to one. [8] Normally distributed random noise centred at 0.5 with a standard deviation of 0.5 is then added to the NumPy representation.
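A sketch of this step in NumPy, continuing from the arrays loaded in the Section III sketch; clipping the noisy pixels back to [0, 1] is an added assumption, not something stated in the paper:

```python
import numpy as np

# Scale pixel intensities to [0, 1] and add a trailing channel dimension.
train_x = np.expand_dims(train_x.astype("float32") / 255.0, axis=-1)
test_x = np.expand_dims(test_x.astype("float32") / 255.0, axis=-1)

# Add normally distributed noise centred at 0.5 with a standard deviation of 0.5,
# then clip back to the valid [0, 1] range.
noise_train = np.random.normal(loc=0.5, scale=0.5, size=train_x.shape)
noise_test = np.random.normal(loc=0.5, scale=0.5, size=test_x.shape)
train_noisy = np.clip(train_x + noise_train, 0.0, 1.0)
test_noisy = np.clip(test_x + noise_test, 0.0, 1.0)
```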
Looking at Figures 5 and 6, we can see that the images are quite heavily corrupted. [6] Now that the dataset is prepared, it can be fed to the autoencoder network.
B. Constructing a Powerful Autoencoder
When designing the autoencoder, the Adam optimizer with an initial learning rate of 1e-3 was chosen, and the model was compiled with a mean squared error (MSE) loss. The autoencoder is then trained on our dataset and performs well on it.
After training and validating the model on 55,000 and 15,000 samples respectively, we obtained the outputs shown in Figure 7.
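A sketch of this training setup, assembling the encoder and decoder from the earlier sketches; the epoch count, batch size, and use of the noisy test set for validation are illustrative assumptions rather than the paper's exact 55,000/15,000 split:

```python
from tensorflow.keras import Model
from tensorflow.keras.optimizers import Adam

# Assemble the full autoencoder: the encoder's LSR vector feeds the decoder.
encoder = build_encoder()
decoder = build_decoder()
autoencoder = Model(encoder.input, decoder(encoder.output), name="autoencoder")

# Adam optimizer with an initial learning rate of 1e-3 and mean squared error loss.
autoencoder.compile(optimizer=Adam(learning_rate=1e-3), loss="mse")

# Train on (noisy input, clean target) pairs.
history = autoencoder.fit(
    train_noisy, train_x,
    validation_data=(test_noisy, test_x),
    epochs=25,
    batch_size=32,
)
```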
[9] At first the validation loss increases, but as the number of epochs grows the curve begins to descend. The loss does not grow with further epochs, which shows that the model was not overfitted.
VI. RESULTS
An image of digits that originally contained noise was fed in first. Figure 6 shows the previously recorded autoencoder inputs and outputs. Looking at the input, the noise is so strong that the digits are barely visible to the naked eye. After the image passes through the autoencoder, the noise is removed and the digits become clearly visible, as shown in Fig. 8 and Fig. 9.
The noisy image had a PSNR of 29.52 dB and an MSE of 72.25. The model's accuracy was also confirmed by manually testing images from the MNIST dataset.
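For reference, PSNR can be computed from the MSE on the 8-bit (0-255) pixel scale as sketched below; an MSE of about 72.25 indeed corresponds to roughly 29.5 dB:

```python
import numpy as np

def psnr(reference, test, max_pixel=255.0):
    # Mean squared error between the two images on the 0-255 pixel scale.
    mse = np.mean((reference.astype("float64") - test.astype("float64")) ** 2)
    if mse == 0:
        return float("inf")
    # PSNR = 10 * log10(MAX^2 / MSE); an MSE of about 72.25 gives roughly 29.5 dB.
    return 10.0 * np.log10((max_pixel ** 2) / mse)
```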
VII. CONCLUSION
This work shows how noise can be appreciably reduced. An autoencoder has been trained that accepts an image as input, compresses it, and converts it into an LSR vector during the encoding stage. [10] In the decoding phase, the LSR vector is passed through fully connected layers, which enlarge it and convert the vector back into an image. Some of the image's properties are lost during decoding; when this is applied to a system built to denoise a chosen signal, the only property lost in the decoding process is the noise.
REFERENCES
[1] S. Calderara, P. Piccinini and R. Cucchiara, "Vision based smoke detection system using image energy and colour information," Machine Vision and Applications.
[2] V. Agrawal, S. Dhekane, N. Tuniya and V. Vyas, "Image Caption Generator Using Attention Mechanism," 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2021, pp. 1-6; A. Rosebrock, "Fire and smoke detection with Keras and Deep Learning," PyImageSearch, 18 Nov. 2019.
[3] L. Cestari, C. Worrell and J. Milke, "Advanced fire detection algorithms using data from the home smoke detector project," Fire Safety Journal, vol. 40, 2005, pp. 1-28, doi: 10.1016/j.firesaf.2004.07.004.
[4] O. Vinyals, A. Toshev, S. Bengio and D. Erhan, "Show and tell: A neural image caption generator," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3156-3164, doi: 10.1109/CVPR.2015.7298935; I. Jeena Jacob, "Capsule network based biometric recognition system," Journal of Artificial Intelligence, vol. 1, no. 02, 2019, pp. 83-94.
[5] P. Mathur, A. Gill, A. Yadav, A. Mishra and N. K. Bansode, "Camera2Caption: A real-time image caption generator," 2017 International Conference on Computational Intelligence in Data Science (ICCIDS), 2017, pp. 1-6, doi: 10.1109/ICCIDS.2017.8272660; X. Ye, L. Wang, H. Xing and L. Huang, "Denoising hybrid noises in image with stacked autoencoder," 2015 IEEE International Conference on Information and Automation, Lijiang, 2015, pp. 2720-2724.
[6] L. Yasenko, Y. Klyatchenko and O. Tarasenko-Klyatchenko, "Image noise reduction by denoising autoencoder," 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), Kyiv, Ukraine, 2020, pp. 351-355.
[7] S.-H. Han and H.-J. Choi, "Explainable Image Caption Generator Using Attention and Bayesian Inference," 2018 International Conference on Computational Science and Computational Intelligence (CSCI), 2018, pp. 478-481, doi: 10.1109/CSCI46756.2018.00098.
[8] M. M. A. Baig, M. I. Shah, M. A. Wajahat, N. Zafar and O. Arif, "Image Caption Generator with Novel Object Injection," 2018 Digital Image Computing: Techniques and Applications (DICTA), 2018, pp. 1-8, doi: 10.1109/DICTA.2018.8615810.
[9] S.-H. Han and H.-J. Choi, "Domain-Specific Image Caption Generator with Semantic Ontology," 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), 2020, pp. 526-530, doi: 10.1109/BigComp48618.2020.00-12.
[10] A. Singh and D. Vij, "CNN-LSTM based Social Media Post Caption Generator," 2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM), 2022, pp. 205-209, doi: 10.1109/ICIPTM54933.2022.9754189.