IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Rekha G S, Shivanshu Pande, Samarth C Shetty, Shweta Patil, Rohan Siwach
DOI Link: https://doi.org/10.22214/ijraset.2023.50518
Predicting poverty levels from satellite imagery is a challenging task that has recently been tackled using deep neural networks (DNNs). The goal is to use satellite images of a specific area to predict the poverty level of the inhabitants living in that area. One approach is to use convolutional DNNs, which are well suited to image classification tasks. The network takes satellite images as input and extracts features from them through a series of convolutional layers, which learn the patterns present in the images, such as roads, buildings, and lights. Once the relevant features have been extracted, they are passed through fully connected layers to make the final poverty-level prediction. Training such a network requires a large dataset of satellite images with corresponding poverty-level labels, and its performance can be evaluated using metrics such as accuracy, precision, and recall. Overall, deep neural networks are a powerful tool for predicting poverty levels from satellite imagery, since they automatically learn the relevant features and can make accurate predictions; however, prediction accuracy is limited by the quality and size of the training dataset.
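To make the pipeline described above concrete, the following minimal sketch (in PyTorch) shows a small convolutional network that maps satellite image tiles to a poverty-level class. The architecture, tile size, and number of classes are illustrative assumptions, not a specific published model.

```python
# A minimal sketch of a CNN for poverty-level classification from image tiles.
# Layer sizes, tile size, and num_classes are illustrative assumptions.
import torch
import torch.nn as nn

class PovertyCNN(nn.Module):
    def __init__(self, num_classes=3):  # number of poverty levels is an assumption
        super().__init__()
        # Convolutional layers learn spatial patterns (roads, buildings, lights).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # A fully connected layer produces the final poverty-level prediction.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PovertyCNN()
logits = model(torch.randn(4, 3, 224, 224))  # a batch of 4 RGB tiles
```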
I. INTRODUCTION
The United Nations has prioritized the eradication of all forms of poverty, but poverty remains a major issue, especially in developing countries where data on key economic indicators is limited. Accurate data on poverty levels is essential for philanthropic agencies and governments to allocate resources and interventions where needed and to monitor progress towards achieving the Sustainable Development Goals. However, collecting data on poverty in developing countries is challenging due to the high cost of on-the-ground surveys. This lack of data hinders progress in reducing poverty and achieving the Sustainable Development Goals.
Predicting poverty levels from satellite imagery can provide valuable insights into the living conditions and socio-economic status of individuals and communities in a specific area. This information can aid in the allocation of resources and aid to those in need and inform government policy and development strategies. Additionally, satellite imagery provides a cost-effective and non-invasive way to obtain information about a specific area. In contrast to traditional methods such as surveys and door-to-door visits, satellite imagery can cover large geographical areas quickly and with minimal disruption to the inhabitants.
The objective of this survey paper is to automate the poverty detection process by devising a methodology for the detection of poverty in any given region of the earth using image processing techniques. This project aims to increase the accuracy of the prediction model and to better handle low-resolution satellite image datasets. By automating the poverty detection process, this project can provide accurate and reliable data on poverty levels, which can aid in the allocation of resources and interventions to reduce poverty and achieve the Sustainable Development Goals.
II. LITERATURE SURVEY
A. Use of Machine Learning and Satellite Imagery for Urban Environment Analysis
In the paper [1] by Chitturi et al., the authors study the use of convolutional neural networks for analyzing satellite imagery of entire cities to understand urban environments. A dataset of 140,000 samples from 6 cities in Europe was created and labeled for the machine learning community, and the researchers performed a two-step task: predicting urban land-use classes from the satellite imagery and then reducing the features extracted from the convolutional classifier into a lower-dimensional space. The experiments showed that some urban environments are easier to identify than others, both within and across cities: classifying high-, medium-, and low-density urban environments is more challenging due to their subjective nature, while agricultural lands, forests, and airports are easier to differentiate.
The study also found that the model performed poorly when trained on samples from one city and tested on samples from a different city, whereas better performance was observed when a more diverse set of cities was used. The model struggled most with the high-, medium-, and low-density urban fabric classes, while airports and forests were easier to distinguish.
In the paper [2] by A. Perez et al., the authors first pre-processed the satellite imagery by applying atmospheric correction and vegetation removal techniques to eliminate the effects of atmospheric scattering and vegetation on the image. They then extracted several features from the satellite imagery, such as NDBI (Normalized Difference Built-up Index), NDWI (Normalized Difference Water Index), and NDVI (Normalized Difference Vegetation Index), indices that are commonly used to identify and differentiate land cover types such as vegetation, water, and urban areas. The authors also obtained socio-economic data from the World Bank's Living Standards Measurement Study, which was used to label the training data. They then trained several machine learning algorithms, such as Random Forest, K-Nearest Neighbors, and Support Vector Machines, on the features extracted from the satellite imagery and the socio-economic data. The results of the study showed that the model achieved an overall accuracy of around 70% when tested on a held-out dataset. The authors also performed an ablation study to understand the contribution of each feature to the final accuracy and found that NDVI and NDBI were the most important features.
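For reference, the three spectral indices named above can be computed directly from the corresponding band arrays. The sketch below assumes NumPy arrays of reflectance values for the red, near-infrared, short-wave infrared, and green bands; the band variables are illustrative placeholders rather than a specific loader's output.

```python
# A minimal sketch of NDVI, NDBI, and NDWI from per-band reflectance arrays.
# A small epsilon guards against division by zero in empty regions.
import numpy as np

def ndvi(nir, red):
    # Vegetation index: high over vegetation, low over built-up areas.
    return (nir - red) / (nir + red + 1e-9)

def ndbi(swir, nir):
    # Built-up index: high over urban/built-up surfaces.
    return (swir - nir) / (swir + nir + 1e-9)

def ndwi(green, nir):
    # Water index: high over open water bodies.
    return (green - nir) / (green + nir + 1e-9)
```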
In the paper [3] by Simone Piaggesi et al., the authors first collected high-resolution satellite imagery of different cities and extracted several features from the images, such as NDVI, NDBI, NDWI, and texture features, to capture the characteristics of the urban environment. They also obtained socio-economic data from national and local statistics, which was used to label the training data. The authors then trained several machine learning algorithms, such as Random Forest, Gradient Boosting, and Support Vector Machines, on the dataset. They also used a combination of a convolutional neural network (CNN) and a Random Forest algorithm to extract features from the satellite imagery, which were then used in the final model. The results of the study showed that the model achieved an overall accuracy of around 80% on a held-out dataset. The authors also performed an ablation study to understand the contribution of each feature to the final accuracy and found that NDVI and NDBI were the most important features.
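The CNN-plus-Random-Forest combination can be sketched as follows. This is an illustrative reconstruction, not the authors' exact pipeline: a pretrained ResNet-18 serves as a fixed feature extractor, and the labels are random placeholders.

```python
# A minimal sketch: CNN features feeding a Random Forest classifier.
# The backbone choice and labels are illustrative assumptions.
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d pooled features
backbone.eval()

with torch.no_grad():
    # 16 placeholder image tiles; real inputs would be normalized satellite tiles.
    X = backbone(torch.randn(16, 3, 224, 224)).numpy()
y = [0, 1] * 8  # placeholder socio-economic labels

clf = RandomForestClassifier(n_estimators=200).fit(X, y)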
III. USING MACHINE LEARNING AND SATELLITE IMAGERY FOR MEASURING HUMAN DEVELOPMENT
In the paper [4] by Andrew Head et al., the authors aimed to use satellite imagery as an alternative to traditional survey-based methods for measuring human development, which can be costly and time-consuming. They used machine learning algorithms and image processing techniques to extract features from the satellite imagery that were indicative of human development levels. The authors first collected high-resolution satellite imagery of different regions and applied pre-processing techniques such as atmospheric correction and vegetation removal to eliminate the effects of atmospheric scattering and vegetation on the image. They then extracted several features from the satellite imagery, such as NDBI, NDWI, NDVI, and texture features, to capture the characteristics of the urban environment. These indices are commonly used to identify and differentiate land cover types such as vegetation, water, and urban areas. They also obtained socio-economic data from national and local statistics, which was used to label the training data. The authors then trained several machine learning algorithms, such as Random Forest, Gradient Boosting, and Support Vector Machines, on the dataset. They also used a combination of a convolutional neural network (CNN) and a Random Forest algorithm to extract features from the satellite imagery, which were then used in the final model. They evaluated the performance of the model by comparing its predictions of human development with traditional survey-based measurements.
In the paper [5] by Jean, Neal, et al., the authors aimed to use satellite imagery in combination with other socio-economic data to predict poverty levels at the household level. They used machine learning algorithms and image processing techniques to extract features from the satellite imagery that were indicative of poverty levels. The authors first collected night-time light intensity data from the Defense Meteorological Satellite Program's Operational Linescan System as a feature; night-time light intensity is correlated with economic activity and can therefore be used as an indicator of poverty. They also obtained socio-economic data from various sources, including household surveys and census data. The dataset included data from over 1,700 households in Peru, which were used to train and test the model. The authors then trained several machine learning algorithms, such as Random Forest, Gradient Boosting, and Support Vector Machines, on the dataset. They evaluated the performance of the model by comparing its predictions of poverty with ground-truth data from the World Bank's Living Standards Measurement Study. The results of the study showed that the model achieved an overall accuracy of around 80% when tested on a held-out dataset. They also found that the use of satellite imagery improved the performance of the model compared to using demographic/socio-economic data alone. The authors conclude that the combination of satellite imagery and other socio-economic data can be an effective means of predicting poverty levels at the household level, and that this approach could be useful in developing countries where poverty data is often limited or difficult to obtain.
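A hedged sketch of the feature-fusion step described here is shown below: night-time light statistics are concatenated with household survey features before a Gradient Boosting model is trained and evaluated on held-out data. All column meanings and data are illustrative placeholders, not the paper's actual dataset.

```python
# A minimal sketch of fusing night-time light features with survey features.
# The feature columns and labels are random placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
nightlights = rng.random((500, 4))       # e.g. mean/max/std/sum luminosity per area
survey = rng.random((500, 6))            # e.g. household census features
X = np.hstack([nightlights, survey])     # combined feature matrix
y = (rng.random(500) > 0.5).astype(int)  # placeholder poverty labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```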
IV. DEEP LEARNING APPROACH TO ACHIEVE SUPER-RESOLUTION BUILDING EXTRACTION FROM LOW-RESOLUTION SATELLITE IMAGES
In the paper "Making Low-Resolution Satellite Images Reborn: A Deep Learning Approach for Super-Resolution Building Extraction" by Lixian Zhang, Runmin Dong, et al. [13], the authors proposed a deep learning approach to enhance the resolution of low-resolution satellite images, specifically for building extraction. The goal of this approach is to improve the accuracy of building extraction from low-resolution images, which is a significant challenge in remote sensing. The authors first collected a dataset of low-resolution satellite images and corresponding high-resolution images. They then pre-processed the images by applying techniques such as atmospheric correction and pan-sharpening to enhance the spatial resolution. They used a deep convolutional neural network (CNN) architecture called SRResNet to generate super-resolution images from the low-resolution images. The authors also used a CNN-based object detector called RetinaNet to extract building information from the high-resolution images. They trained the SRResNet and RetinaNet models using the dataset and evaluated their performance using metrics such as PSNR and F1-score. The results of the study showed that the proposed approach improved the accuracy of building extraction from low-resolution images compared to traditional interpolation methods. The authors conclude that the proposed deep learning approach can effectively enhance the resolution of low-resolution satellite images.
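As a reference for the building blocks mentioned above, the sketch below shows an SRResNet-style residual block and the PSNR metric used to score reconstructions; the channel width is an illustrative choice rather than the paper's exact configuration.

```python
# A minimal sketch of one SRResNet-style residual block and the PSNR metric.
# Images for psnr() are assumed to be NumPy arrays scaled to [0, 1].
import numpy as np
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):  # channel width is an illustrative choice
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection preserves low-resolution content

def psnr(reference, reconstruction, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher means a closer reconstruction.
    mse = np.mean((reference - reconstruction) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)
```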
V. HIGH-RESOLUTION MAP GENERATION USING COLORIMETRY
In the paper [14] by Suresh Merugu and Kamal Jain, the authors propose a method for generating high-resolution maps from low-resolution maps using colorimetry. The method first extracts the spectral response curve of each low-resolution pixel and then uses this curve to interpolate the values of missing pixels in the high-resolution output; the method is reported to be effective, with an accuracy of 95%. The authors also considered other methods such as MLC and ANN. Obtaining a high-resolution image from low-resolution inputs therefore helps in feature extraction. The challenge of determining the relative proportions of different classes within a pixel has been a long-standing issue in the research community. Despite numerous methodologies being developed, there remains a gap in achieving complete accuracy, because it is difficult to establish a one-to-one correspondence between each pixel and its corresponding ground data. In this study, a new approach is proposed to address this issue by utilizing the mixing principles of colors as described in colorimetry. The methodology involves mapping the mixed-pixel color onto a chromaticity diagram and then using surrounding pixel information to estimate the class proportions. The resampling process becomes more precise when well-defined boundaries are considered. The contextual information can then be utilized to generate a resampled image containing only the actual colors present. This is done by multiplying the fraction of each class by the total number of pixels in the mixed pixel, resulting in a count of the number of pixels of each color.
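The final counting step described above admits a very small sketch: each class fraction within a mixed pixel is multiplied by the number of sub-pixels it is resampled into. The fractions and grid size below are illustrative.

```python
# A minimal sketch of the sub-pixel counting step: class fractions within a
# mixed pixel are converted into per-class sub-pixel counts. Values are
# illustrative, not taken from the paper.
fractions = {"vegetation": 0.5, "built-up": 0.3, "water": 0.2}
subpixels = 16  # e.g. one mixed pixel resampled to a 4x4 grid

counts = {cls: round(f * subpixels) for cls, f in fractions.items()}
print(counts)  # {'vegetation': 8, 'built-up': 5, 'water': 3}
```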
VI. SYNTHETIC MULTISPECTRAL SATELLITE IMAGE GENERATION USING GANS
In the paper [15], the authors propose a method for generating synthetic multispectral satellite images using generative adversarial networks (GANs). The goal of this approach is to increase the amount of training data available for tasks such as land-use and land-cover classification, which can be limited by the availability of real-world images. The authors used a GAN architecture called Pix2Pix to generate synthetic images from real-world images. They used a dataset of multispectral satellite images and trained the GAN to generate new synthetic images that were similar in appearance to the real-world images. They then used the synthetic images to train a supervised classification algorithm, such as a support vector machine (SVM), to classify land use and land cover. The authors evaluated the performance of the synthetic images in comparison to real-world images using metrics such as overall accuracy, kappa coefficient, and F1-score. They found that the synthetic images generated by the GAN improved the accuracy of land-use and land-cover classification when used as training data. In summary, the authors proposed a GAN-based approach to generate synthetic multispectral satellite images, which can be used to increase the amount of training data available for land-use and land-cover classification tasks.
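The evaluation metrics named above (overall accuracy, kappa coefficient, and F1-score) can be computed with scikit-learn as in the sketch below; the label vectors stand in for a classifier trained on GAN-generated imagery.

```python
# A minimal sketch of the classification metrics named above.
# y_true/y_pred are placeholder land-cover labels and predictions.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 1, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa coefficient:", cohen_kappa_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```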
VII. ROBUSTNESS OF NEURAL NETWORKS AGAINST IMAGE CORRUPTION AND IRREGULARITIES
In the paper [16] by E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel, the authors propose a simple approach for training neural networks to be more robust against diverse image corruptions by augmenting the training data with artificially corrupted images. The corrupted images are generated by applying various types of corruption to the original images, such as blur, noise, and color shift. By training the neural network on both the original and corrupted images, the model can learn to recognize and classify images despite the presence of corruptions.
This approach can be useful in a variety of applications where image quality may be compromised, such as satellite imagery or medical imaging. The proposed methodology has several advantages, including its simplicity, flexibility, and effectiveness. It is easy to implement and can be integrated into existing neural network training pipelines. The use of artificially corrupted images allows for the creation of a diverse and representative training set, improving model generalization and reducing the risk of overfitting. The approach also leads to improved performance on corrupted test images and can be applied to a wide range of image classification tasks, making it a useful tool.
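A minimal sketch of this corruption-augmentation idea, using torchvision transforms for blur, color shift, and additive noise, is shown below. The specific corruption set and parameters are illustrative assumptions, not the paper's exact protocol.

```python
# A minimal sketch of corruption augmentation in a training transform pipeline.
# Corruption types and strengths are illustrative choices.
import torch
from torchvision import transforms

# Additive Gaussian noise on a [0, 1] float tensor.
add_noise = transforms.Lambda(
    lambda img: (img + 0.05 * torch.randn_like(img)).clamp(0, 1)
)

# With probability 0.5, apply blur and a color shift to the image.
corrupt = transforms.RandomApply([
    transforms.GaussianBlur(kernel_size=5),
    transforms.ColorJitter(brightness=0.4, hue=0.1),
], p=0.5)

train_transform = transforms.Compose([
    transforms.ToTensor(),  # PIL image -> float tensor in [0, 1]
    corrupt,
    add_noise,
])
```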
VIII. DEEP LEARNING TECHNIQUES FOR SALIENT, CATEGORY-SPECIFIC OBJECT DETECTION
In the paper [17] by Han, Junwei, et al., the authors begin by defining salient object detection as the task of identifying objects that stand out from their surroundings, and category-specific object detection as the task of detecting objects belonging to a specific category, such as cars, people, or buildings. They then introduce various deep learning techniques for these tasks, including CNNs, RNNs, and hybrid CNN-RNN models. The authors highlight the strengths of CNNs in processing image data and extracting features, as well as their limitations in handling large-scale datasets and variations in object appearance. RNNs, on the other hand, are better suited to handling sequential data, but are limited in their ability to process large amounts of data and are computationally expensive. The hybrid CNN-RNN models address some of these limitations by combining the strengths of both CNNs and RNNs.

The authors then discuss the challenges and limitations of current object detection methods, such as dealing with variations in object appearance, handling large-scale datasets, and addressing the imbalance between the number of positive and negative samples in the data. They also provide an overview of the current state of the art in deep learning techniques for salient and category-specific object detection, including methods such as Multi-scale Convolutional Networks (MCN), Region-based CNNs (R-CNNs), and You Only Look Once (YOLO). Additionally, the paper highlights the importance of data pre-processing and data augmentation techniques, which are crucial for improving the performance of deep learning models for object detection. The authors also discuss the importance of transfer learning, where pretrained models can be fine-tuned for specific tasks, reducing the amount of data required for training and improving the overall accuracy of the model.

Furthermore, the authors evaluate various state-of-the-art methods for salient and category-specific object detection, including R-CNN, Faster R-CNN, YOLO, and RetinaNet, comparing their performance on different datasets and providing a quantitative analysis of their accuracy and speed. Overall, the paper provides a comprehensive review of current advancements in deep learning techniques for salient and category-specific object detection. It highlights the importance of data pre-processing, data augmentation, and transfer learning in improving the performance of deep learning models, and it provides insights into the strengths and limitations of different approaches, making it a valuable resource for researchers and practitioners in this field.
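The transfer-learning recipe highlighted in the survey can be sketched as follows: an ImageNet-pretrained backbone is frozen and only a new task-specific head is trained. The backbone and class count are illustrative assumptions, not a method from the survey itself.

```python
# A minimal sketch of fine-tuning a pretrained model for a new task.
# The backbone choice and number of target classes are illustrative.
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                        # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 4)      # new task-specific head

# Only the new head's parameters would be passed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
```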
In the paper [19] by M. Xie, N. Jean, et al., the authors provide a thorough analysis of current state-of-the-art deep learning techniques for salient and category-specific object detection. They summarize the strengths and limitations of various approaches, including CNNs, RNNs, and hybrid CNN-RNN models. Additionally, the paper highlights the challenges and limitations of current object detection methods, such as handling variations in object appearance and dealing with large-scale datasets. The authors' comprehensive approach provides a valuable resource for researchers and practitioners in the field, offering guidance on which methods may be more suitable for different tasks and applications. The findings of this paper can inform future research and help advance the development of deep learning techniques for object detection.
The paper [20] by J. Wang et al. highlights the potential of deep learning techniques in advancing the field of object detection, particularly in the areas of salient and category-specific object detection. The authors provide a comprehensive review of different deep learning approaches, including CNNs, RNNs, and hybrid models, and their performance on various datasets. They also discuss the challenges and limitations of current methods and provide insights into which methods may be more suitable for different tasks and applications. This paper serves as a valuable reference for researchers and practitioners in the field, providing guidance for future research and development in this area.
IX. ACKNOWLEDGEMENT
This paper’s research was carried out with the support of B.M.S College of Engineering and under the supervision of Rekha G S.
X. CONCLUSION

This survey paper makes an effort to devise a methodology for the detection of poverty using image processing techniques and deep learning, thus automating the detection process and reducing the manual labour, time, and expenditure involved in the survey process. The studies reviewed show the potential of using satellite imagery and machine learning to predict poverty. They proposed different methods, such as colorimetry and GANs for generating synthetic images, to improve the accuracy of land-use and land-cover mapping, which can be a valuable tool for poverty prediction. The direct use of deep learning architectures such as ANNs and CNNs was also very useful for this prediction, and different correlation indices and methods were discussed as well.
REFERENCES

[1] Chitturi, Varun, and Zaid Nabulsi. Predicting Poverty Level from Satellite Imagery using Deep Neural Networks. 2021.
[2] A. Perez, C. Yeh, G. Azzari, M. Burke, D. Lobell, and S. Ermon. Poverty prediction with public Landsat 7 satellite imagery and machine learning.
[3] Simone Piaggesi, Andrew Young, Laetitia Gauvin, Rihannan, Natalia Adler, Stefaan Verhulst, and Leo Ferres. Predicting City Poverty Using Satellite Imagery. 2020.
[4] Andrew Head, Mélanie Manguin, Nhat Tran, and Joshua E. Blumenstock. Can human development be measured with satellite imagery? 2018.
[5] Jean, Neal, et al. "Combining satellite imagery and machine learning to predict poverty." Science 353.6301 (2016): 790-794.
[6] Shashank Shekhar, Pratibha Singh, and Shailesh Tiwari. Predicting Poverty Index on Satellite Images DHS Data using Transfer Learning. 2021.
[7] S. P. Subash, Rajeev Ranjan Kumar, and K. Aditya. Satellite data and machine learning tools for predicting poverty in rural India. February 2019.
[8] N. Audebert, B. Le Saux, and S. Lefèvre. Joint learning from earth observation and OpenStreetMap data to get faster better semantic maps. CVPR Workshop, 2018.
[9] Shailesh M. Pandey, Tushar Agarwal, and Narayanan C. Krishnan. Multi-Task Deep Learning for Predicting Poverty from Satellite Images. 2018.
[10] Ye Ni, Xutao Li, and Yunming Ye. An Investigation on Deep Learning Approaches to Combining Nighttime and Daytime Satellite Imagery for Poverty Prediction. 2020.
[11] Kumar Ayush, Burak Uzkent, Marshall Burke, David Lobell, and Stefano Ermon. Generating Interpretable Poverty Maps using Object Detection in Satellite Images. 2020.
[12] J. P. R. Dave and H. A. Pandya. Satellite image classification with data augmentation and convolutional neural network. Springer Singapore, Singapore, 2020.
[13] Lixian Zhang, Runmin Dong, Shuai Yuan, Weidie Li, Juepeng Zheng, and Haohuan Fu. Making Low-Resolution Satellite Images Reborn: A Deep Learning Approach for Super-Resolution Building Extraction. 2021.
[14] Suresh Merugu and Kamal Jain. "Subpixel level mapping of remotely sensed image using colorimetry." The Egyptian Journal of Remote Sensing and Space Science 21.1 (2018): 65-72.
[15] Chitturi, Varun, and Zaid Nabulsi. Predicting Poverty Level from Satellite Imagery using Deep Neural Networks; Jean, Neal, et al. "Combining satellite imagery and machine learning to predict poverty." Science 353.6301 (2016): 790-794.
[16] E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel. A simple way to make neural networks robust against diverse image corruptions. 2020.
[17] Han, Junwei, et al. "Advanced deep-learning techniques for salient and category-specific object detection: a survey." IEEE Signal Processing Magazine 35.1 (2018): 84-100.
[18] E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel. A simple way to make neural networks robust against diverse image corruptions. 2020.
[19] M. Xie, N. Jean, M. Burke, D. Lobell, and S. Ermon. Transfer learning from deep features for remote sensing and poverty mapping. September 2015.
[20] J. Wang, Q. Qin, Z. Li, X. Ye, J. Wang, X. Yang, and X. Qin. Deep hierarchical representation.
Copyright © 2023 Rekha G S, Shivanshu Pande, Samarth C Shetty, Shweta Patil, Rohan Siwach. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET50518
Publish Date : 2023-04-16
ISSN : 2321-9653
Publisher Name : IJRASET