Sunday, November 27, 2016

Transfer Learning with Satellite Imagery

Looking for new Machine Learning papers this past week, I came across an interesting article. Titled "Combining Satellite Imagery and Machine Learning to Predict Poverty," it hits on some topics from class, specifically the data clustering in linear spaces we have been discussing over the last week. The basic premise of the article is that by using various types of satellite imagery as training data, the authors could accurately predict the poverty level (or wealth) of areas in the developing world. This mattered to the authors because there are still large economic "data gaps" within Africa, the continent of interest. The gist of the problem is that for most African nations, surveying household income is cost prohibitive. If these governments had more accurate financial reports for every area of their country, it would help them immensely in distributing aid.

One of the most interesting things about this article is the use of transfer learning. Transfer learning leverages the principle that Convolutional Neural Networks (CNNs) are layered, re-purposing layers that were previously trained on another data set. The article doesn't go into the specific details of the algorithms used; it just gives a high-level overview of the authors' process. The first step in building their CNN was, strangely enough, training on simple images: a set of labeled images spanning 1,000 everyday categories, which gives the CNN the ability to discern simple properties of images. The paper gives the example of "cat" as a possible label in the data. The data could hardly have less to do with the wealth distribution of nations. At this point in the training process, the CNN is still general purpose.
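The article doesn't include code, but a minimal sketch of this kind of re-use in PyTorch might look like the following, with ResNet-18 standing in for whichever ImageNet-trained network the authors actually used:

    import torch
    import torchvision.models as models

    # An ImageNet-pretrained network: 1,000 everyday categories ("cat" among them).
    cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained layers; their filters already capture generic
    # image properties (edges, textures, simple shapes) worth re-using.
    for param in cnn.parameters():
        param.requires_grad = False

The point of freezing is that the general-purpose layers stay intact while later steps bolt new tasks on top of them.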

The next step after general-purpose image recognition was to re-tune the CNN, training it to predict nighttime light intensities from daytime satellite images. Essentially, the model within the CNN is learning to break daytime images apart into a linear space and quantify, via a linear mapping, what the corresponding nighttime result would be. Fortunately, Google Maps has high-resolution daytime imagery that was available to the researchers for this task. Note that this step is already starting to form the information used by the next training phase: it is building predictive clusters to be re-used.
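Again, the authors' exact setup isn't spelled out in the article, but the re-tuning step could be sketched like this, assuming PyTorch, the frozen network from above, and a hypothetical binning of nighttime light intensity into five classes:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    N_LIGHT_BINS = 5  # hypothetical binning of nighttime light intensity

    cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in cnn.parameters():
        param.requires_grad = False

    # Swap the 1,000-way ImageNet classifier for a head that predicts which
    # nighttime-light bin a daytime satellite tile falls into.
    cnn.fc = nn.Linear(cnn.fc.in_features, N_LIGHT_BINS)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(cnn.fc.parameters(), lr=1e-3)

    def train_step(daytime_tiles, light_bins):
        # daytime_tiles: (B, 3, 224, 224) image batch; light_bins: (B,) labels.
        optimizer.zero_grad()
        loss = criterion(cnn(daytime_tiles), light_bins)
        loss.backward()
        optimizer.step()
        return loss.item()

Only the new head is trained in this sketch; the frozen layers keep supplying the general-purpose features learned in the previous step.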

Lastly, the authors tune the model one more time using what little data they had from the aforementioned wealth surveys. This re-correlates the daytime linear space formed from the satellite images with an actual poverty metric they possessed. The formed clusters now, somehow, map a picture of a piece of land to wealth and asset holdings. The reasoning the authors give for this being more informative than simply using nighttime lights is interesting as well. Lights, by their nature, are binary: is one on or off? Looking at roads is much more informative: how many roads are there, and how well are they maintained? Physical features simply carry more information. By using regression, the model is able to work out which features become the most dominant or, conversely, unimportant.
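This final fit amounts to a regression from image features to the survey measurements. Here is a sketch using a regularized regression (scikit-learn's Ridge; the article itself just says "regression"), where the feature file, wealth file, and their shapes are hypothetical stand-ins:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # One feature vector per surveyed location, extracted from the re-tuned
    # CNN, paired with that location's surveyed wealth index (hypothetical files).
    features = np.load("cnn_features.npy")   # shape (n_locations, n_features)
    wealth = np.load("survey_wealth.npy")    # shape (n_locations,)

    # Regularized linear regression: the learned weights show which image
    # features dominate the wealth prediction and which are effectively ignored.
    model = Ridge(alpha=1.0)
    scores = cross_val_score(model, features, wealth, cv=5, scoring="r2")
    print("cross-validated R^2:", scores.mean())

The regularization (the alpha term) is what keeps unimportant features from being over-weighted when the survey data is this scarce.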

I have seen pictures of the lights on the East Coast at night hundreds of times but never thought beyond the fact that they looked interesting. Seeing a group of people put that information to work reminds me that just because I haven't thought of anything interesting to do with some data doesn't mean there isn't anything that can be done with it.

The article I am referring to in this post can be found here:

http://science.sciencemag.org/content/353/6301/790
