SmoothGrad

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg

What is a sensitivity mask?

When a machine learning model makes a prediction, we often want to know which features of the input (for images, pixels) were important to that prediction. If the model mispredicts, we might want to know which features contributed to the misclassification. We can visualize the feature-importance mask as a grayscale image with the same dimensions as the original, where brightness corresponds to each pixel's importance.
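As a minimal sketch of that rendering step (a helper of our own, not part of any release), the mask can be rescaled and saved as a grayscale image; the percentile clip is an assumption to keep a few extreme values from washing out the rest:

```python
import numpy as np
from PIL import Image

def mask_to_grayscale(mask, percentile=99):
    """Render a 2-D sensitivity mask as a grayscale image."""
    mask = np.abs(mask)
    # Clip outlying values so a handful of extreme pixels
    # do not wash out the rest of the mask.
    vmax = np.percentile(mask, percentile)
    mask = np.clip(mask / max(vmax, 1e-12), 0.0, 1.0)
    return Image.fromarray((mask * 255).astype(np.uint8), mode="L")
```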

Computing sensitivity

There are many techniques for computing a sensitivity mask for a particular prediction on an image. The simplest is to take the gradient of the class-prediction neuron with respect to the input pixels; this tells us how much a small change to each pixel would affect the prediction. Visually, this mask tends to be noisy.
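A rough sketch of this vanilla gradient mask, assuming a TensorFlow 2 `model` that maps a batch of images to class logits (an illustration, not the released implementation):

```python
import tensorflow as tf

def gradient_saliency(model, image, class_idx):
    """Gradient of one class logit with respect to the input pixels."""
    x = tf.convert_to_tensor(image[None], dtype=tf.float32)  # add batch dim
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[0, class_idx]  # logit for the class of interest
    grads = tape.gradient(score, x)[0]
    # Collapse to a 2-D mask: max absolute gradient across color channels.
    return tf.reduce_max(tf.abs(grads), axis=-1).numpy()
```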

The SmoothGrad technique often significantly denoises this sensitivity mask: it adds pixel-wise Gaussian noise to many copies of the image and simply averages the resulting gradient masks.
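Reusing the `gradient_saliency` sketch above, a SmoothGrad sketch might look like the following; the defaults follow the ranges the paper found to work well (noise level around 10-20% of the input range, roughly 50 samples):

```python
import numpy as np

def smoothgrad(model, image, class_idx, n_samples=50, noise_level=0.15):
    """Average gradient masks over noisy copies of the image."""
    # noise_level is sigma / (x_max - x_min), as in the paper.
    sigma = noise_level * (image.max() - image.min())
    total = np.zeros(image.shape[:2])
    for _ in range(n_samples):
        noisy = image + np.random.normal(0.0, sigma, size=image.shape)
        total += gradient_saliency(model, noisy, class_idx)
    return total / n_samples
```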

[Interactive figure: side-by-side comparison of the Gradient and SmoothGrad sensitivity masks.]

Augmenting other methods

SmoothGrad also augments other sensitivity techniques. With this release, we have provided implementations of several sensitivity techniques along with their SmoothGrad counterparts. The list is not comprehensive, and we are accepting pull requests to add new methods!
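Because the averaging step is independent of the underlying mask, it can wrap any base method. A hypothetical wrapper (names are ours for illustration, not the released library's API):

```python
import numpy as np

def smoothed(base_method, image, n_samples=50, noise_level=0.15):
    """Apply SmoothGrad averaging to any mask-producing function.

    base_method(image) should return a 2-D sensitivity mask.
    """
    sigma = noise_level * (image.max() - image.min())
    masks = [base_method(image + np.random.normal(0.0, sigma, size=image.shape))
             for _ in range(n_samples)]
    return np.mean(masks, axis=0)
```

For example, `smoothed(lambda im: gradient_saliency(model, im, class_idx), image)` recovers the SmoothGrad mask above, and swapping in an Integrated Gradients or Guided Backprop function yields their smoothed counterparts.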

Below, we show vanilla gradients, Integrated Gradients, and Guided Backpropagation applied to 200 randomly chosen images from the ImageNet dataset, using the Inception V3 model.

When the model makes a mistake, we show the row with a light red background and give the option of choosing which mask to visualize: the one for the true label or the one for the prediction. Often you can see why the model made a mistake!

[Interactive figure: for each image, columns show the original Image and the Gradient, Integrated Gradients, and Guided Backprop masks. Controls: visualize Gradient or Gradient × Image; "Only show mispredictions"; mouse over images to compare.]