Dstl Satellite Imagery Competition, 1st Place Winner's Interview: Kyle Lee

Kaggle Team

Dstl's Satellite Imagery competition, which ran on Kaggle from December 2016 to March 2017, challenged Kagglers to identify and label significant features like waterways, buildings, and vehicles from multi-spectral overhead imagery. In this interview, first place winner Kyle Lee gives a detailed overview of his approach in this image segmentation competition. Patience and persistence were key as he developed unique processing techniques, sampling strategies, and UNET architectures for the different classes.

The Basics

What was your background prior to entering this challenge?

During the day, I design high-speed circuits at a semiconductor startup - e.g. clock-data recovery, locked loops, high-speed I/O, etc. - and develop ASIC/silicon/test automation flows.

Kyle on Kaggle.

Even though I don’t have direct deep learning research or work experience, the main area of my work that has really helped me in these machine/deep learning competitions is planning and building (coding) lots and lots of design automation flows very quickly.

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

The key competition that introduced me to the tools and techniques needed to win was Kaggle’s “Ultrasound Nerve Segmentation” that ended in August 2016 (and I saw many familiar names from that competition in this one too!).

Knowledge accumulated from vision/deep learning related home projects and other statistical learning competitions has also helped me in this effort. The patience picked up from running and tweaking long circuit simulations at work over days and weeks was transferable too, being quite analogous to neural network training.

Like many of the competitors, I didn’t have direct experience with multi-spectral satellite imagery.

How did you get started competing on Kaggle?

I joined Kaggle after first trying to improve the shallow 3-layer networks I had built in Lasagne for stand-alone inference/classification on single-board computers (SBCs, e.g. Raspberry Pi) for various home/car vision hobbyist projects; I wanted a more state-of-the-art solution. Then I came across Kaggle’s State Farm Distracted Driver competition, which was a perfect fit. This was after completing various online machine learning courses - Andrew Ng’s Machine Learning course and Geoffrey Hinton’s course on Neural Networks, to name a few.

This was early 2016 - and it’s been quite a journey since then!

What made you decide to enter this competition?

As I mentioned earlier, I participated in one of the earliest segmentation challenges on Kaggle - the “Ultrasound Nerve Segmentation”. In that competition, I was ranked 8th on the public leaderboard but ended up 12th on the private LB - a cursed “top silver” position (not something any hard worker should get!). Immediately after that I was looking forward to the next image segmentation challenge, and this was the perfect opportunity.

More importantly, I joined to learn what neural/segmentation networks have to offer apart from medical imaging, and to have fun! Over the course of the competition, I definitely achieved this goal since this competition was extra fun - viewing pictures of natural scenery is therapeutic and kept me motivated every day to improve my methodology.

Let’s get technical

What was your general strategy?

In summary my solution is based on the following:

  1. Multi-scaled patch / sliding window generation (256x256 and 288x288 primary; 224x224 and 320x320 added for ensembling). At the image edges the windows overlapped so that the entire image was covered (a rough sketch of this patch generation follows the list).
  2. U-NET training & ensembling with a variety of models that permuted bands and scales
  3. Oversampling of rare classes - performed by sliding the window in smaller steps than the window size over positive frames, and in larger steps over negative frames.
  4. Index methods for waterways - namely a combination of the Normalized Difference Water Index (NDWI) and the Canopy Chlorophyll Content Index (CCCI).
  5. Post-processing on roads, standing water versus waterways, and small versus large vehicles. This post-processing resolved class confusion between standing water and waterways, cleaned up artifacts on the roads, and gave some additional points to the large vehicle score.
  6. Vehicles - I did some special work here to train and predict only on frames with roads and buildings. I also only used RGB bands, a lot of averaging, and used merged networks (large+small) for large vehicle segmentation.
  7. Crops - The image was first scaled to 1024x1024 (lowered resolution), then split into 256x256 overlapping sliding windows.
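To make step 1 concrete, here is a minimal sketch of overlapping sliding-window patch generation in Python. It only illustrates the idea - the function name, default step size, and edge handling are my own choices, not the exact code from this solution; the last row/column of windows is shifted back so the edges are covered without padding.

    import numpy as np

    def sliding_window_patches(img, patch=256, step=192):
        """Cut an (H, W, C) image into overlapping patch x patch windows.

        The step controls the overlap; sliding in smaller steps over positive
        frames is one way to oversample rare classes. At the right/bottom
        edges the window is shifted back so the whole image is covered.
        """
        h, w = img.shape[:2]
        assert h >= patch and w >= patch, "image must be at least one patch in size"
        ys = list(range(0, h - patch + 1, step))
        xs = list(range(0, w - patch + 1, step))
        if ys[-1] != h - patch:
            ys.append(h - patch)   # extra row of windows to cover the bottom edge
        if xs[-1] != w - patch:
            xs.append(w - patch)   # extra column of windows to cover the right edge
        coords, patches = [], []
        for y in ys:
            for x in xs:
                coords.append((y, x))
                patches.append(img[y:y + patch, x:x + patch])
        return np.array(patches), coords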
A VISUAL OVERVIEW OF THE COMPLETE SOLUTION (ALL CLASSES)

What preprocessing and feature engineering did you do?

I performed registration of A and M images, and used sliding window at various scales. In addition, I also oversampled some of the rare classes in some of the ensemble models. The sliding window steps are shown below:

PATCH DISTANCE FOR OVERSAMPLED CLASSES

Oversampling standing water and waterways together was a good idea, since it helped reduce class confusion between the two and cut down on artifacts (particularly in the standing water predictions).

As far as band usage is concerned, I mostly used the pan-sharpened RGB + M-band images and some of the SWIR (A) bands. For the A bands, I did not use all of them; I randomly skipped a few to save training time and RAM.

As mentioned earlier, for vehicles I trained and predicted only on patches/windows with roads and/or buildings - this helped cut down the number of images needed for training and allowed for significant oversampling of vehicle patches. The same scheme was applied to the test images, so the results are pipelined, as you can see from the flowchart.
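As a hedged illustration of that filtering idea (the function name and threshold are hypothetical, not from the original pipeline), a patch would only be kept for vehicle training/prediction if its road or building mask is non-empty:

    import numpy as np

    def keep_patch_for_vehicles(road_mask, building_mask, min_pixels=32):
        """Keep a patch for the vehicle models only if it contains road or
        building pixels; min_pixels is an illustrative tolerance that
        ignores tiny mask slivers."""
        return ((road_mask > 0).sum() >= min_pixels or
                (building_mask > 0).sum() >= min_pixels)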

Finally, preprocessing involved mean/standard deviation normalization using the training set - in other words, the training set mean was subtracted from each training/validation/test patch, which was then divided by the training set standard deviation.
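A minimal sketch of that normalization, assuming the patches are stacked as channels-last float arrays and that per-channel statistics are used (the original may well have computed a single global mean/std):

    import numpy as np

    def fit_normalizer(x_train):
        """Per-channel mean/std computed on the training patches only."""
        mean = x_train.mean(axis=(0, 1, 2), keepdims=True)
        std = x_train.std(axis=(0, 1, 2), keepdims=True)
        return mean, std + 1e-8   # epsilon guards against a zero-variance band

    def apply_normalizer(x, mean, std):
        """Subtract the training mean and divide by the training std,
        whether x is a training, validation, or test batch."""
        return (x - mean) / std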

What supervised learning methods did you use?

The UNET segmentation network from the “Ultrasound Nerve Segmentation” competition and other past segmentation competitions was used widely in my approach, since it is the most easily scalable/sizeable fully convolutional network (FCN) architecture for this purpose. In fact, if I am not mistaken, most - if not all - of the top competitors used some variant of the UNET.

I made tweaks to the original architecture: batch normalization on the downsampling paths, dropout on the post-merge paths, and all activation layers switched to the Exponential Linear Unit (ELU). Various widths (256x256, 288x288, etc.) and depths were used for the different classes, chosen via cross-validation scores.

For example, in my experiments, the structure class converged best - both in terms of train time and CV - with a UNET that had a wider width (288x288) and a shallow depth (3 groups of 2x conv layers + maxpool).
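As a rough Keras sketch of that kind of UNET - wider input, shallow depth (3 groups of 2x conv + maxpool), batch normalization on the contracting blocks, dropout after the skip-connection merges, and ELU activations throughout - the snippet below shows one possible layout. Filter counts, the dropout rate, and the channels-last ordering are my own illustrative assumptions, not the exact models used in the solution.

    from keras.models import Model
    from keras.layers import (Input, Conv2D, MaxPooling2D, UpSampling2D,
                              concatenate, BatchNormalization, Dropout)

    def conv_block(x, filters, batchnorm=False):
        """Two 3x3 convolutions with ELU activations, optionally batch-normalized."""
        for _ in range(2):
            x = Conv2D(filters, (3, 3), activation='elu', padding='same')(x)
            if batchnorm:
                x = BatchNormalization()(x)
        return x

    def shallow_unet(size=288, channels=3, base=32):
        inp = Input((size, size, channels))
        # Contracting path: batch normalization on the downsampling blocks.
        c1 = conv_block(inp, base, batchnorm=True)
        p1 = MaxPooling2D()(c1)
        c2 = conv_block(p1, base * 2, batchnorm=True)
        p2 = MaxPooling2D()(c2)
        c3 = conv_block(p2, base * 4, batchnorm=True)
        p3 = MaxPooling2D()(c3)
        # Bottleneck.
        c4 = conv_block(p3, base * 8)
        # Expanding path: dropout right after each skip-connection merge.
        u3 = Dropout(0.2)(concatenate([UpSampling2D()(c4), c3]))
        c5 = conv_block(u3, base * 4)
        u2 = Dropout(0.2)(concatenate([UpSampling2D()(c5), c2]))
        c6 = conv_block(u2, base * 2)
        u1 = Dropout(0.2)(concatenate([UpSampling2D()(c6), c1]))
        c7 = conv_block(u1, base)
        out = Conv2D(1, (1, 1), activation='sigmoid')(c7)   # one binary mask per class
        return Model(inp, out)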

VARIOUS UNET ARCHITECTURES FOR DIFFERENT CLASSES

Overall, I generated 40+ models of various scales/widths/depths, training data subsamples, and band selections.

FULL MODEL (WIDTH/DEPTH, SAMPLING, BANDS) LISTING OF ALL CLASSES

In terms of cross-validation, I used a random patch split of 10-20% across images (depending on the class - the rarer the class, the larger the split). For oversampled classes, only a 5% random patch split was used. In all cases, only one fold per model was used to cut down on runtime.

The training set was augmented at train time (image and mask together) with rotations of up to 45 degrees, 15-25% zooms/translations, shears, channel shifts (some models only), and vertical and horizontal flips. No augmentation-based ensembling was performed on validation or test data.
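One way to wire up paired image/mask train-time augmentation like this in Keras is with two ImageDataGenerator instances sharing a random seed. The snippet below is only a sketch; the parameter values are taken loosely from the description above, and the placeholder X_train/Y_train arrays stand in for the real patch stacks.

    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    # Placeholder patch stacks so the example runs stand-alone.
    X_train = np.random.rand(8, 256, 256, 3).astype('float32')
    Y_train = (np.random.rand(8, 256, 256, 1) > 0.5).astype('float32')

    # Identical geometric augmentation for images and masks keeps the pairs aligned.
    aug = dict(rotation_range=45, zoom_range=0.2, width_shift_range=0.2,
               height_shift_range=0.2, shear_range=10,
               horizontal_flip=True, vertical_flip=True, fill_mode='reflect')
    image_gen = ImageDataGenerator(channel_shift_range=10.0, **aug)  # channel shift on images only
    mask_gen = ImageDataGenerator(**aug)

    seed = 42  # the shared seed keeps the random transforms synchronized
    image_flow = image_gen.flow(X_train, batch_size=4, seed=seed)
    mask_flow = mask_gen.flow(Y_train, batch_size=4, seed=seed)
    train_flow = zip(image_flow, mask_flow)  # yields (augmented images, augmented masks)
    # model.fit_generator(train_flow, steps_per_epoch=len(X_train) // 4, epochs=100)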

Optimization-wise, I used the Jaccard loss directly with Adam as the optimizer (I did not get much improvement from NAdam). I also used a step learning rate policy that dropped the learning rate to around 0.2 of the initial rate every 30 epochs.
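A hedged Keras sketch of that setup follows; the soft Jaccard formulation and the reading of the drop schedule (multiplying by 0.2 every 30 epochs) are my own interpretation, not the author's code.

    from keras import backend as K
    from keras.callbacks import LearningRateScheduler

    def jaccard_loss(y_true, y_pred, smooth=1e-12):
        """Soft (differentiable) Jaccard loss: 1 - |intersection| / |union|."""
        intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
        union = K.sum(y_true + y_pred, axis=[1, 2, 3]) - intersection
        return 1.0 - K.mean((intersection + smooth) / (union + smooth))

    def step_decay(epoch):
        """Multiply the learning rate by 0.2 every 30 epochs (one reading of the policy)."""
        initial_lr, drop, epochs_per_drop = 1e-3, 0.2, 30
        return initial_lr * (drop ** (epoch // epochs_per_drop))

    lr_schedule = LearningRateScheduler(step_decay)
    # model.compile(optimizer='adam', loss=jaccard_loss)
    # model.fit(X_train, Y_train, callbacks=[lr_schedule], epochs=90)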

Ensembling involved mask arithmetic averaging (most classes), unions (only on standing water and large vehicles), and intersections (only on waterways, using NDWI and CCCI).
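In mask terms, those three operations are simple pixel-wise reductions. A minimal sketch, assuming each model's output is a probability map and using an illustrative 0.5 threshold for the averaged case:

    import numpy as np

    def average_masks(prob_maps, threshold=0.5):
        """Arithmetic averaging of per-model probability maps, then thresholding."""
        return (np.mean(prob_maps, axis=0) > threshold).astype(np.uint8)

    def union_masks(binary_masks):
        """Union (logical OR), as used for standing water and large vehicles."""
        return np.any(binary_masks, axis=0).astype(np.uint8)

    def intersect_masks(binary_masks):
        """Intersection (logical AND), as used for the NDWI/CCCI waterway masks."""
        return np.all(binary_masks, axis=0).astype(np.uint8)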

What was your most important insight into the data?

My understanding is that most competitors had either weak public or private scores on standing water and vehicles, so I spent extra effort on pre- and post-processing for these classes. I believe stabilizing these two (actually three) classes - standing water, plus large and small vehicles - made a large impact on my final score relative to other top competitors.

Standing Water Versus Waterways

One of the main issues with standing water was class confusion with waterways. As described earlier, oversampling both standing water and waterways helps to dissolve waterway artifacts in the standing water UNET predictions, but there were still a lot of waterway-like remnants, as shown below in the raw ensembled standing water predictions:

EXAMPLES OF MISCLASSIFIED POLYGONS IN STANDING WATER

The key to resolving this was to realize that, from a common-sense perspective, waterways always touch the boundary of the image, while standing water mostly does not (or overlaps only a small area/dimension). Moreover, the NDWI mask (generated as part of the waterways flow) could be overlaid on the raw standing water predictions, and very close broken segments could be merged (convexHull) to form a complete contour that may touch the boundary of the image. In short, boundary contact checking on merged water polygons was part of my post-processing flow, which pushed some misclassified standing water images into the waterway class.
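A rough OpenCV/NumPy sketch of those two post-processing ideas - merging close broken water segments via a convex hull, and checking whether the merged mask touches the image boundary - is shown below. The function names, border width, and tolerance are hypothetical choices for illustration only.

    import cv2
    import numpy as np

    def merge_broken_segments(mask):
        """Merge nearby broken water segments into one filled contour via the convex hull."""
        contours = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        if len(contours) == 0:
            return mask.astype(np.uint8)
        hull = cv2.convexHull(np.vstack(contours))
        merged = np.zeros(mask.shape[:2], dtype=np.uint8)
        cv2.drawContours(merged, [hull], -1, 1, thickness=-1)
        return merged

    def touches_image_boundary(mask, border=2, min_frac=0.01):
        """Flag a 2-D water mask as a waterway candidate if enough of it sits on the image edge."""
        h, w = mask.shape[:2]
        edge = np.zeros((h, w), dtype=bool)
        edge[:border, :] = True
        edge[-border:, :] = True
        edge[:, :border] = True
        edge[:, -border:] = True
        edge_pixels = np.count_nonzero(mask[edge])
        return edge_pixels > min_frac * 2 * border * (h + w)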

Vehicles - Large and Small

The other important classes which I spent a chunk of time on were the two vehicle classes. Firstly, I noticed - both from the training data and from simple common sense - that vehicles are almost always located on or near roads, and near buildings.

EXAMPLES OF SMALL VEHICLES RELATIVE TO ROADS AND BUILDINGS

By restricting training and prediction to only patches containing buildings and roads, I was naturally able to allow for oversampling of vehicle patches and narrow down the scope of scenery for the network to focus on. Moreover, I chose only the RGB images, since in all other bands vehicles were either not visible or displaced significantly.

Secondly, many vehicles were very hard to distinguish between large and small classes both in terms of visibility (blurred) and mask areas. For reference, their mask areas from training data are shown in the histogram below, and there is a large area overlap between large and small vehicles from around 50-150 pixels^2.

CONTOUR/MASK AREA HISTOGRAM OF SMALL VS LARGE VEHICLES

To deal with this, I trained additional networks on the merged small+large vehicle class, and took the union of these networks with the large-vehicle-only network ensemble. The idea is that networks trained on the merged class predict better polygons (since there is no class confusion). I then performed area filtering on this union (nominally at 200 pixels^2) to extract large vehicles only. For small vehicles, I basically took the average ensemble of the small vehicle predictions and removed whichever contours overlapped with large vehicles and/or exceeded the area threshold. Additionally, both vehicle masks were cleaned by negating them with the building, tree, and other class masks.
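A hedged sketch of the area-filtering step is given below (the contour-overlap removal and class-mask negation are omitted, and the function name and exact placement of the 200 pixels^2 threshold are illustrative, not the author's code):

    import cv2
    import numpy as np

    def split_vehicles_by_area(union_mask, area_threshold=200):
        """Split a union vehicle mask into large and small vehicle masks by
        contour area (in pixels^2)."""
        contours = cv2.findContours(union_mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        large = np.zeros(union_mask.shape[:2], dtype=np.uint8)
        small = np.zeros(union_mask.shape[:2], dtype=np.uint8)
        for cnt in contours:
            target = large if cv2.contourArea(cnt) >= area_threshold else small
            cv2.drawContours(target, [cnt], -1, 1, thickness=-1)
        return large, small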

Post-competition analysis showed that this approach helped the large vehicle private LB score - without it, that score would have dropped by 59%. On the other hand, small vehicles did not see any improvement from the area-threshold removal process above.

Were you surprised by any of your findings?

Surprisingly, waterways could be generated well using simple and fast index methods. I ended up with an intersection of the NDWI and CCCI masks (with boundary contact checking to filter out standing water / building artifacts) rather than a deep learning approach, thus freeing up training resources for other classes. The public and private LB scores for this class seemed competitive relative to other teams who may have used deep learning methods.
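For reference, the two indices are standard band-ratio formulas: NDWI = (Green - NIR) / (Green + NIR), and CCCI = [(NIR - RedEdge) / (NIR + RedEdge)] / [(NIR - Red) / (NIR + Red)]. The sketch below computes both from an M-band image; the band indices assume the WorldView-3 multispectral ordering (coastal, blue, green, yellow, red, red edge, NIR1, NIR2), and the thresholds t1/t2 and their comparison directions are placeholders that would need tuning - none of this is taken from the winning code.

    import numpy as np

    def ndwi(green, nir, eps=1e-8):
        """Normalized Difference Water Index: high over open water."""
        return (green - nir) / (green + nir + eps)

    def ccci(red, red_edge, nir, eps=1e-8):
        """Canopy Chlorophyll Content Index: a red-edge NDVI normalized by NDVI."""
        ndre = (nir - red_edge) / (nir + red_edge + eps)
        ndvi = (nir - red) / (nir + red + eps)
        return ndre / (ndvi + eps)

    def water_mask(m, t1, t2):
        """Intersect thresholded NDWI and CCCI masks for an (H, W, 8) float M image."""
        green, red, red_edge, nir = m[..., 2], m[..., 4], m[..., 5], m[..., 6]
        return ((ndwi(green, nir) > t1) & (ccci(red, red_edge, nir) > t2)).astype(np.uint8)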

Finally, here is my CV-public-private split per class.

FINAL LOCAL CV, PUBLIC LB, AND PRIVATE LB PER CLASS COMPARISONS

The asterisk (*) for the private LB score on crops indicates a bug with OpenCV’s findContours: had I used the correct WKT-generating script for that class, I would have had a crop private LB score of 0.8344 instead of 0.7089. As a result, this solution could have achieved an overall private LB score of 0.50434 (over 0.5 - yay!) rather than 0.49272.

The bug had to do with masks spanning the entire image not being detected as a contour - I only found this out after the competition, and would have done a WKT mask dump ‘diff’ if I had had the time. All other classes used the correct shapely version of the submission script.

My guess is that my vehicle and standing water scores (combined) were the ones that made a difference in this competition, since the other top competitors had either weak vehicle scores or weak standing water scores.

Which tools did you use?

Keras with Theano backend + OpenCV / Rasterio / Shapely for polygon manipulation.

No pretrained models were used in the final solution, although I did give fine-tuned (VGG16) classifier-coupling for merged vehicle networks a shot - to no avail.

How did you spend your time on this competition?

Since this was a neural network segmentation competition, most of my time (80%+) was spent on tuning and training the different networks and monitoring the runs. The remaining 20% went into developing the pre- and post-processing flows. From a per-class effort perspective, I spent over 70% of the overall time on vehicles, standing water, and structures, and the least time on crops.

In terms of submissions, I used the majority of my submissions trying to fine-tune polygon approximation. I first tried bounding boxes, then polygon approximation, and then polygons with erosion in OpenCV. Ultimately, I ended up using rasterio/shapely to perform the polygon to WKT conversion. All classes (except trees) had no approximation, while trees were first resized to 1550x1550 - effectively approximating the polygons - before being converted to WKT format.
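A minimal sketch of that rasterio/shapely conversion, in pixel coordinates: rasterio vectorizes the binary mask and shapely assembles and serializes the polygons. The real submission additionally needs the coordinates scaled into Dstl's grid coordinate system, which is omitted here, and the function name is my own.

    import numpy as np
    import rasterio.features
    import shapely.geometry
    import shapely.wkt

    def mask_to_wkt(mask):
        """Convert a binary (H, W) mask into a WKT MultiPolygon, with no
        polygon approximation (as in the final flow for most classes)."""
        shapes = rasterio.features.shapes(mask.astype(np.uint8), mask=mask.astype(bool))
        polygons = [shapely.geometry.shape(geom) for geom, value in shapes if value == 1]
        multi = shapely.geometry.MultiPolygon(polygons)
        if not multi.is_valid:
            multi = multi.buffer(0)   # repair self-intersections
        return shapely.wkt.dumps(multi, rounding_precision=8)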

What does your hardware setup look like?

I used three desktops for this contest. The first two were used for training/inference on all classes, while the last one (#3) was used only for crops.

  1. GTX1080 (8GB) + 48GB desktop system RAM
  2. GTX1070 (8GB) + 48GB desktop system RAM
  3. GTX960 (4GB) + 16GB desktop system RAM.

What was the run time for both training and prediction of your winning solution?

It took about three days to train and predict - assuming all models and all preprocessing scales can be run in parallel. One day for preprocessing, one day to train and predict, and another day to predict vehicles + generate submission.

Thank You

Once again, thank you to Dstl and Kaggle for hosting and organizing this terrific image segmentation competition - I believe this is by far the most exciting (and busy, due to the number of classes) competition I have had, and I am sure this is true for many others too.

It’s always interesting to see what neural networks can accomplish with segmentation - first medical imaging, now multi-spectral satellite imagery! I personally hope to see more of these type of competitions in the future.

Words of wisdom

What have you taken away from this competition?

A lot of experience training neural networks - particularly segmentation networks - working with multi-spectral images, and improving on traditional computer vision processing skills. Some of the solution sharing by the top competitors was absolutely fascinating as well - especially the clever tricks with multi-scale imagery in a single network.

Looking back, what would you do differently now?

I would have added some ensembling to crops, added heat-map-based averaging (and increased the test overlap windows at some expense of runtime), dilated the structures training mask (which helped the structure score for some competitors), and removed most of the expensive rare-scale ensembling (320x320, for example) on tracks.

I would also have fixed the contour submission issue on crops had I caught that earlier.

Do you have any advice for those just getting started in data science?

  1. Nothing beats learning by practice and competition, so just dive into a Kaggle competition that appeals to you - whether it involves numbers, words, images, videos, audio, satellite imagery, etc. - and that you can commit to early on if you want to do well.

  2. Moreover, data science is an ever-evolving field. In fact, this field wasn’t even on the radar a decade ago - so be sure to keep up to date on the architectural improvements year by year. Don’t worry, most other competitors are starting on the same ground as you, especially with some of the new developments.

  3. Having more systems helps in terms of creating experiments and ensemble permutations, but it’s not absolutely necessary if you have a strong flow or network.

  4. However, for this particular competition, having >= 2 GPU systems will definitely help due to the sheer number of classes and models involved.

  5. Most importantly, have fun during the competitions - it won’t even feel like work when you are having fun (!) Having said that, I am still a beginner in many areas in data science - and still learning, of course.

Bio

Kyle Lee works as a circuit and ASIC designer during the day. He has been involved in data science and deep learning competitions since early 2016 out of his personal interest for automation and machine learning. He holds a Bachelor’s degree in Electrical and Computer Engineering from Cornell University.