Old photographs are an integral part of people’s lives; they remind us of how a person has spent their life. Because photographs were previously kept as hard copies, many of them have suffered severe degradation. The degradation in real photographs is intricate, so typical restoration approaches based on supervised learning fail to generalize due to the domain gap between synthetic and real images. Therefore, this method uses variational autoencoders to restore and colourize old images. Furthermore, this model uses a unique triplet domain translation network trained on real photos and synthetic photo pairs.
Precisely, variational autoencoders (VAEs) are trained to transform old photos and clean photos into two latent spaces. The translation between these two latent spaces is then learned with synthetic paired data. This translation generalizes well to real photos because the domain gap is closed in the compact latent space. Moreover, to handle the multiple degradations present in one old photo, this model designs a global branch with a partial non-local block targeting structured defects, such as scratches and dust marks, and a local branch targeting unstructured defects, such as noise and blurriness.
The two branches are fused within the latent space, resulting in an improved ability to restore old photos from multiple defects. Additionally, a face refinement network is applied to recover the fine details of faces within old photos, thus generating photos of enhanced quality. For colourization, another autoencoder encodes colour images, and its decoder reconstructs the image from the features extracted by the encoder. Once the model is trained, testing is performed to colourize the photographs.
I. INTRODUCTION
Photos are taken to capture happy memories that would otherwise be gone. Although time flies by, one can still recall the moments of the past by viewing them. However, old photo prints deteriorate when kept in a poor environment, which permanently damages the photo content. Fortunately, as mobile cameras and scanners have become more convenient, people can now digitize their photos and hire a skilled specialist for restoration. However, manual retouching is typically laborious and time-consuming, which leaves stacks of old photos impossible to restore. Hence, it is appealing to design automatic algorithms that can instantly repair old photos for people who wish to bring them back to life. Before the deep learning era, there were some attempts to restore photos by automatically detecting localized defects such as scratches and blemishes and filling in the damaged areas with an inpainting process.
II. LITERATURE SURVEY
Siqi Zhang et al. [13] proposed a “unique Consecutive Context Perceive Generative Adversarial Networks (CCPGAN) for serial sections inpainting that can learn semantic information from its neighboring image and reinstate the damaged regions of serial sectioning images to the maximum extent.”
Lingbo Yang et al. [12] proposed “HiFaceGAN, a collaborative suppression and replenishment framework that works in a dual-blind fashion, reducing dependence on degradation prior or structural guidance for training.”
X. Lu et al. [10] proposed a feed-forward image inpainting CNN that can handle multiple arbitrary holes of various sizes at test time.
III. METHODOLOGY
In this paper, we propose a novel network to address the problem of old photo restoration via deep latent space translation. We use deep learning to restore old photos that have suffered severe degradation. Many approaches are currently available, but the main problem with previous conventional restoration techniques is that they do not generalize: they all rely on supervised learning and therefore suffer from the domain gap between real old photos and the ones synthesized for training. There is a big difference between synthesized old images and real old ones. A synthesized image is still in high definition even with fake scratches and colour changes, whereas a real old photo contains far less detail. This issue is addressed by creating a new network specifically for the task, built around two variational autoencoders (VAEs).
IV. PROPOSED SYSTEM
The architecture of the image restoration network is shown in Fig 2. Before an image is restored, it passes through several stages. To reduce the domain gap, the old photo restoration problem is formulated as learning the mapping between clean images and old photos, which are drawn from distinct domains.
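To make this formulation concrete, one way to write it (a sketch using the notation of Fig 4, where R denotes real old photos, X synthetic old photos, and Y is an assumed label for the clean-image domain) is:

    z_r = E_R(r),  z_x = E_X(x),  z_y = E_Y(y),

where E_R, E_X, and E_Y are the VAE encoders and the latent spaces Z_R and Z_X are aligned so that Z_R \approx Z_X. The translation T_Z : Z_X \to Z_Y is learned from synthetic pairs (x, y), for instance by minimizing

    \min_{T_Z} \; \mathbb{E}_{(x,y)} \, \lVert T_Z(E_X(x)) - E_Y(y) \rVert_1 ,

and a real old photo r is then restored as

    \hat{y} = G_Y(T_Z(E_R(r))),

where G_Y is the decoder of the clean-image VAE. Because the domain gap is closed in the latent space, a translation learned only on synthetic pairs transfers to real photos.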
V. IMPLEMENTATION
The issue with conventional restoration techniques is addressed by creating a new network specifically for the task; essentially, two variational autoencoders (VAEs) are used.
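As a minimal sketch of what one of these VAEs might look like (the PyTorch implementation, layer widths, and depth below are illustrative assumptions rather than the exact architecture of this work):

    import torch
    import torch.nn as nn

    class ConvVAE(nn.Module):
        """Minimal convolutional VAE: encodes an image into a compact
        latent space and decodes it back. Layer widths are illustrative."""

        def __init__(self, latent_channels=64):
            super().__init__()
            # Encoder: downsample the image into a latent feature map.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            )
            # Separate heads predict the mean and log-variance of q(z|x).
            self.to_mu = nn.Conv2d(64, latent_channels, 3, padding=1)
            self.to_logvar = nn.Conv2d(64, latent_channels, 3, padding=1)
            # Decoder: upsample the latent code back to an image.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterization trick: sample z = mu + sigma * eps.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar

    # KL term that keeps the latent space compact (standard VAE objective).
    def kl_loss(mu, logvar):
        return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

Two such VAEs are trained: one shared by real old photos and synthetic old photos, and one for clean photos, so that degraded and clean images each map into their own compact latent space.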
The translation between the latent spaces, “Tz” (Fig 4), is learned from synthetic paired data but generalizes well to real photos, since the domain gap is much smaller in such compact latent spaces.
The domain gap between the two latent spaces produced by the VAEs is closed by training an adversarial discriminator.
From Fig 4 we can observe that the latent-space domains “Zx” and “Zr” are much closer to each other than the original real pictures “R” and synthetic old pictures “X”.
The mapping that restores the degraded photos is done in this latent space.
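A rough sketch of these two pieces, the latent translation network and the adversarial alignment of the two latent spaces, might look as follows in PyTorch (block counts, channel widths, and loss choices are illustrative assumptions):

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        """Residual block operating on latent feature maps."""
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1),
            )
        def forward(self, z):
            return z + self.body(z)

    class LatentTranslator(nn.Module):
        """Tz: maps corrupted-photo latents (Zx/Zr) to clean-photo latents.
        Depth and width here are illustrative assumptions."""
        def __init__(self, ch=64, n_blocks=4):
            super().__init__()
            self.net = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        def forward(self, z):
            return self.net(z)

    # Adversarial alignment of Zr and Zx: a discriminator tries to tell
    # real-photo latents from synthetic-photo latents; the encoder is trained
    # to fool it, which shrinks the domain gap inside the latent space.
    latent_disc = nn.Sequential(
        nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 3, padding=1),
    )
    bce = nn.BCEWithLogitsLoss()

    def disc_loss(z_real, z_synth):
        # Discriminator step: label real-photo latents 1, synthetic latents 0.
        pred_r = latent_disc(z_real.detach())
        pred_s = latent_disc(z_synth.detach())
        return bce(pred_r, torch.ones_like(pred_r)) + bce(pred_s, torch.zeros_like(pred_s))

Note that the discriminator only sees latent codes, never images, which is what lets the alignment, and hence the learned mapping, transfer from synthetic to real photos.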
In order to recover the fine details of faces in old photos, starting from the latent representation “z” (Fig 7), we add a face refinement network that uses the degraded face as a condition at multiple stages of the network. This greatly enhances the perceptual quality of the faces.
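The conclusion describes this component as a coarse-to-fine generator with a spatially adaptive condition. A minimal sketch of one such spatially adaptive (SPADE-style) block, assuming the degraded face itself is the conditioning signal, could look like this:

    import torch.nn as nn
    import torch.nn.functional as F

    class SpatiallyAdaptiveNorm(nn.Module):
        """SPADE-style block: normalizes features, then re-modulates them
        with a scale and shift predicted from the degraded face image."""
        def __init__(self, feat_ch, cond_ch=3, hidden=64):
            super().__init__()
            self.norm = nn.InstanceNorm2d(feat_ch, affine=False)
            self.shared = nn.Sequential(
                nn.Conv2d(cond_ch, hidden, 3, padding=1), nn.ReLU(),
            )
            self.to_gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
            self.to_beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

        def forward(self, feat, degraded_face):
            # Resize the conditioning image to the feature resolution so the
            # same block can be reused at every coarse-to-fine stage.
            cond = F.interpolate(degraded_face, size=feat.shape[2:],
                                 mode='bilinear', align_corners=False)
            h = self.shared(cond)
            gamma, beta = self.to_gamma(h), self.to_beta(h)
            return self.norm(feat) * (1 + gamma) + beta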
The colourization model is trained and fitted with the following settings:
Number of epochs: 300
Batch size: 16
Output: coloured image
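A minimal sketch of how such a training run could be set up (the stand-in model, random dataset, loss, and learning rate below are illustrative assumptions, not the project’s actual code):

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    EPOCHS = 300      # as reported in this paper
    BATCH_SIZE = 16   # per-batch size as reported

    # Stand-ins for the real colourization autoencoder and dataset; the real
    # model and (grayscale, colour) image pairs would replace these.
    colorizer = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
    )
    dataset = TensorDataset(torch.rand(64, 1, 32, 32), torch.rand(64, 3, 32, 32))

    loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.Adam(colorizer.parameters(), lr=2e-4)
    criterion = nn.L1Loss()

    for epoch in range(EPOCHS):
        for gray, colour in loader:
            pred = colorizer(gray)            # predicted coloured image
            loss = criterion(pred, colour)    # pixel-wise reconstruction loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()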
VII. RESULTS
This model is mainly categorized into two parts: i) image restoration, consisting of a) enhancing unscratched images and b) removing scratches and folds from photos; and ii) image colorization. The model takes an image as input and outputs an image that is free of scratches and of higher resolution, or with improved colouring. The model is trained for 300 epochs and achieves 86% accuracy.
A. Image Colorization
1. Accuracy: The model in this project is evaluated using the accuracy measure, attaining 86% accuracy after 300 epochs.
Fig 17 shows that the number of correctly coloured images increases with the number of epochs; with 300 epochs, the model achieves 86% accuracy. As the number of epochs increases, the loss is reduced.
VIII. FUTURE ENHANCEMENTS
Nonetheless, as the dataset used contains only a few old photos with such defects, this method cannot handle complex shading. This could be addressed by including more such photographs in the training set or by explicitly modelling the shading effects during synthesis.
IX. CONCLUSION
This project concludes that the domain gap between synthetic photos and authentic old images is reduced, and the latent space is used to translate degraded photos to clean images. Compared with prior methods, this method suffers fewer generalization problems. Using this method, scratches can be inpainted with better structural consistency. To reconstruct the face areas of old images, a coarse-to-fine generator with a spatially adaptive condition is proposed. Black-and-white images are colourized with 86% accuracy. This method displays good performance in restoring severely damaged old photographs.
REFERENCES
[1] H. Liu, B. Jiang, Y. Xiao, and C. Yang, “Coherent semantic attention for image inpainting,” arXiv preprint arXiv:1905.12384, 2019.
[2] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in Advances in Neural Information Processing Systems, 2014, pp. 1790–1798.
[3] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “GANs trained by a two time-scale update rule converge to a local Nash equilibrium,” in Advances in Neural Information Processing Systems, 2017, pp. 6626–6637.
[4] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision. Springer, 2016, pp. 154–169.
[5] Y. Ren, X. Yu, R. Zhang, T. H. Li, S. Liu, and G. Li, “Structureflow: Image inpainting via structure-aware appearance flow,” arXiv preprint arXiv:1908.03852, 2019.
[6] F. Stanco, G. Ramponi, and A. De Polo, “Towards the automated restoration of old photographic prints: a survey,” in The IEEE Region 8 EUROCON 2003. Computer as a Tool., vol. 2. IEEE, 2003, pp. 370–374.
[7] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, “Image inpainting for irregular holes using partial convolutions,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 85–100.
[8] I. Giakoumis, N. Nikolaidis, and I. Pitas, “Digital image processing techniques for the detection and removal of cracks in digitized paintings,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 178–188, 2005.
[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 769–777.
[10] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5505–5514.
[11] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3113–3155, 2017.
[12] L. Yang, S. Wang, S. Ma, C. Liu, and P. Wang, “HiFaceGAN: Face renovation via collaborative suppression and replenishment,” in Proceedings of the IEEE Conference on Computer Vision, 2021.
[13] S. Zhang, L. Wang, J. Zhang, L. Gu, X. Zhai, X. Zhai, X. Sha, and S. Chang, “Consecutive context perceive generative adversarial networks for serial sections inpainting,” in Proceedings of the IEEE Conference on Big Data Research, 2020.
[14] X. Li, X. Zhu, Z. Zhou, Q. Sun, and Q. Liu, “Focusing on persons: Colorizing old images learning from modern historical movies,” in Proceedings of the IEEE Conference on Computer Vision, 2021.
[15] “Meitu,” https://www.meitu.com/en
[16] “Remini photo enhancer,” https://www.bigwinepot.com/index en.htm