Building Powerful Image Classification Models Using Very Little Data
In this tutorial, we will present a few simple yet effective methods that you can use to build a powerful image classifier, using only very few training examples -- just a few hundred or thousand pictures from each class you want to be able to recognize.
To follow along, you will need:

- a machine with Keras, SciPy and PIL installed. If you have an NVIDIA GPU that you can use (and cuDNN installed), that's great, but since we are working with few images that isn't strictly necessary.
- a training data directory and a validation data directory containing one subdirectory per image class, filled with .png or .jpg images:
data/
    train/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
    validation/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
In our examples we will use two sets of pictures, which we got from Kaggle (https://www.kaggle.com/c/dogs-vs-cats/data): 1000 cats and 1000 dogs (although the original dataset had 12,500 cats and 12,500 dogs, we just took the first 1000 images for each class). We also use 400 additional samples from each class as validation data, to evaluate our models.
That is very few examples to learn from, for a classification problem that is far from simple. So this is a challenging machine learning problem, but it is also a realistic one: in a lot of real-world use cases, even small-scale data collection can be extremely expensive or sometimes near-impossible (e.g. in medical imaging). Being able to make the most out of very little data is a key skill of a competent data scientist.
How difficult is this problem? When Kaggle started the cats vs. dogs competition (with 25,000 training images in total), a bit over two years ago, it came with the following statement:

"In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459. The current literature suggests machine classifiers can score above 80% accuracy on this task [ref] (http://xenon.stanford.edu/~pgolle/papers/dogcat.pdf)."
In the resulting competition, top entrants were able to score over 98% accuracy
by using modern deep learning techniques. In our case, because we restrict
ourselves to only 8% of the dataset, the problem is much harder.
A message that I hear often is that "deep learning is only relevant when you have a huge amount of data". While not entirely incorrect, this is somewhat misleading. Certainly, deep learning requires the ability to learn features automatically from the data, which is generally only possible when lots of training data is available --especially for problems where the input samples are very high-dimensional, like images. However, convolutional neural networks --a pillar algorithm of deep learning-- are by design one of the best models available for most "perceptual" problems (such as image classification), even with very little data to learn from. Training a convnet from scratch on a small image dataset will still yield reasonable results, without the need for any custom feature engineering. Convnets are just plain good. They are the right tool for the job.
But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes, as we will see in this post. Specifically in the case of computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used to bootstrap powerful vision models out of very little data.
In order to make the most of our few training examples, we will "augment" them via a number of random transformations, so that our model never sees the exact same picture twice. This helps prevent overfitting and helps the model generalize better.
In Keras this can be done via the keras.preprocessing.image.ImageDataGenerator class, which lets you configure random transformations and instantiate generators of augmented image batches. These generators can then be used with the Keras model methods that accept data generators as inputs: fit_generator, evaluate_generator and predict_generator.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
These are just a few of the options available (for more, see the documentation (http://keras.io/preprocessing/image/)). Let's quickly go over what we just wrote:

- rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures
- width_shift_range and height_shift_range are ranges (as a fraction of total width or height) within which to randomly translate pictures horizontally or vertically
- rescale is a value by which we multiply the data before any other processing. Our original images consist of RGB coefficients in the 0-255 range, but such values would be too high for our models to process (given a typical learning rate), so we target values between 0 and 1 instead by scaling with a 1/255 factor
- shear_range is for randomly applying shearing transformations
- zoom_range is for randomly zooming inside pictures
- horizontal_flip is for randomly flipping half of the images horizontally --relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures)
- fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift
Now let's start generating some pictures using this tool, to get a feel for what our augmentation strategy looks like. Note that rescale is left out here, so that the saved preview images remain directly viewable:

datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
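The snippet that actually generates the preview images did not survive extraction; here is a minimal sketch, assuming a sample image at data/train/cats/cat001.jpg and an existing preview/ directory (both paths are assumptions):

from keras.preprocessing.image import img_to_array, load_img

img = load_img('data/train/cats/cat001.jpg')  # a PIL image (path is an assumption)
x = img_to_array(img)  # a Numpy array of shape (height, width, channels)
x = x.reshape((1,) + x.shape)  # add a batch dimension: (1, height, width, channels)

# the .flow() command below generates batches of randomly transformed images
# and saves the results to the preview/ directory
i = 0
for batch in datagen.flow(x, batch_size=1,
                          save_to_dir='preview', save_prefix='cat',
                          save_format='jpeg'):
    i += 1
    if i > 20:
        break  # otherwise the generator would loop indefinitely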
Here's what we get --this is what our data augmentation strategy looks like.
Training a small convnet from scratch: 80% accuracy in 40 lines of code
The right tool for an image classification job is a convnet, so let's try to train one on our data, as an initial baseline. Since we only have few examples, our number one concern should be overfitting. Overfitting happens when a model exposed to too few examples learns patterns that do not generalize to new data, i.e. when the model starts using irrelevant features for making predictions. For instance, if you, as a human, only see three images of people who are lumberjacks, and three images of people who are sailors, and among them only one lumberjack wears a cap, you might start thinking that wearing a cap is a sign of being a lumberjack as opposed to a sailor. You would then make a pretty lousy lumberjack/sailor classifier.
Data augmentation is one way to fight overfitting, but it isn't enough since our augmented samples are still highly correlated. Your main focus for fighting overfitting should be the entropic capacity of your model --how much information your model is allowed to store. A model that can store a lot of information has the potential to be more accurate by leveraging more features, but it is also more at risk of storing irrelevant features. Meanwhile, a model that can only store a few features will have to focus on the most significant features found in the data, and these are more likely to be truly relevant and to generalize better.
There are different ways to modulate entropic capacity. The main one is the choice of the number of parameters in your model, i.e. the number of layers and the size of each layer. Another way is the use of weight regularization, such as L1 or L2 regularization, which consists of forcing model weights to take smaller values.
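As an illustration (this snippet is not from the original post), weight regularization is exposed in Keras through a layer's kernel_regularizer argument; the 0.001 factor below is an arbitrary example value:

from keras import regularizers
from keras.layers import Dense

# L2 regularization adds a penalty on large weights to the loss,
# pushing the model toward smaller, better-generalizing values
layer = Dense(64, activation='relu',
              kernel_regularizer=regularizers.l2(0.001))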
In our case we will use a very small convnet with few layers and few filters per layer, alongside data augmentation and dropout. Dropout also helps reduce overfitting, by preventing a layer from seeing twice the exact same pattern, thus acting in a way analogous to data augmentation (you could say that both dropout and data augmentation tend to disrupt random correlations occurring in your data).
The code snippet below is our first model, a simple stack of 3 convolution layers with ReLU activations, each followed by a max-pooling layer. This is very similar to the architectures that Yann LeCun advocated in the 1990s for image classification (with the exception of ReLU).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150)))  # assumes channels-first image data format
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
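The snippet above shows only the first of the three convolution blocks; a sketch of the remaining two, assuming the filter count doubles to 64 in the final block (a common pattern for small convnets):

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))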
On top of it we stick two fully-connected layers. We end the model with a single unit and a sigmoid activation, which is perfect for binary classification. To go with it we will also use the binary_crossentropy loss to train our model.
model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
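The generator definitions feeding fit_generator below did not survive extraction; here is a minimal sketch consistent with the data layout described earlier (the exact training augmentation parameters are an assumption):

from keras.preprocessing.image import ImageDataGenerator

batch_size = 16

# augmentation configuration for training (parameters are an assumption)
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# for validation, only rescaling -- no augmentation
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',  # target directory from the layout above
    target_size=(150, 150),  # resize all images to 150x150
    batch_size=batch_size,
    class_mode='binary')  # binary labels, for binary_crossentropy

validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')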
model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)
model.save_weights('first_try.h5')  # always save your weights after training or during training
Note that the variance of the validation accuracy is fairly high, both because
accuracy is a high-variance metric and because we only use 800 validation
samples. A good validation strategy in such cases would be to do k-fold cross-
validation, but this would require training k models for every evaluation round.
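As an aside (this sketch is not from the original post), such a k-fold loop could look like the following; build_model() is a hypothetical helper returning a freshly compiled copy of the convnet above, and X, y are assumed in-memory arrays holding all images and labels:

import numpy as np
from sklearn.model_selection import StratifiedKFold

# X: hypothetical array of images, y: hypothetical array of 0/1 labels
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in kfold.split(X, y):
    model = build_model()  # hypothetical helper: retrain from scratch on each fold
    model.fit(X[train_idx], y[train_idx], epochs=50, batch_size=16, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    scores.append(acc)
print('mean accuracy: %.3f (+/- %.3f)' % (np.mean(scores), np.std(scores)))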
Using the bottleneck features of a pre-trained network: 90% accuracy in a minute

We will use the VGG16 architecture, pre-trained on the ImageNet dataset --a model previously featured on this blog. Because the ImageNet dataset contains several "cat" classes (persian cat, siamese cat...) and many "dog" classes among its total of 1000 classes, this model will already have learned features that are relevant to our classification problem. In fact, it is possible that merely recording the softmax predictions of the model over our data rather than the bottleneck features would be enough to solve our dogs vs. cats classification problem extremely well. However, the method we present here is more likely to generalize well to a broader range of problems, including problems featuring classes absent from ImageNet.

Our strategy will be as follows: we instantiate only the convolutional part of the model (everything up to the fully-connected layers), run it once over our training and validation data, and record its output --the "bottleneck features", i.e. the last activation maps before the fully-connected layers-- in Numpy arrays. We then train a small fully-connected model on top of the stored features.
The reason why we are storing the features offline rather than adding our fully-connected model directly on top of a frozen convolutional base and running the whole thing, is computational efficiency. Running VGG16 is expensive, especially if you're working on CPU, and we want to only do it once. Note that this prevents us from using data augmentation.
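The instantiation of the VGG16 convolutional base and of the plain rescaling generator is not shown above; a minimal sketch using the keras.applications API:

import numpy as np
from keras import applications
from keras.preprocessing.image import ImageDataGenerator

# build the VGG16 network without its fully-connected top
model = applications.VGG16(include_top=False, weights='imagenet')

# plain rescaling only -- no augmentation for this method
datagen = ImageDataGenerator(rescale=1./255)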
batch_size = 16

generator = datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode=None,  # our generator will only yield batches of data, no labels
    shuffle=False)  # data stays in order: the first 1000 images are cats, then 1000 dogs
# the predict_generator method returns the output of a model, given
# a generator that yields batches of numpy data
# (2000 images / batch_size = number of batches to draw)
bottleneck_features_train = model.predict_generator(generator, 2000 // batch_size)
# save the output as a Numpy array
np.save('bottleneck_features_train.npy', bottleneck_features_train)
generator = datagen.flow_from_directory(
    'data/validation',
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
bottleneck_features_validation = model.predict_generator(generator, 800 // batch_size)
np.save('bottleneck_features_validation.npy', bottleneck_features_validation)
We can then load our saved data and train a small fully-connected model:
train_data = np.load('bottleneck_features_train.npy')
# the features were saved in order, so recreating the labels is easy
train_labels = np.array([0] * 1000 + [1] * 1000)

validation_data = np.load('bottleneck_features_validation.npy')
validation_labels = np.array([0] * 400 + [1] * 400)
model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_data, train_labels,
          epochs=50,
          batch_size=batch_size,
          validation_data=(validation_data, validation_labels))
model.save_weights('bottleneck_fc_model.h5')
Thanks to its small size, this model trains very quickly even on CPU (about 1s per epoch).
Fine-tuning the top layers of a pre-trained network

To further improve our previous result, we can try to "fine-tune" the last convolutional block of the VGG16 model alongside the top-level classifier. Fine-tuning consists of starting from a trained network, then re-training it on a new dataset using very small weight updates. In our case, this can be done in 3 steps:

1. instantiate the convolutional base of VGG16 and load its weights;
2. add our previously defined fully-connected model on top, and load its weights;
3. freeze the layers of the VGG16 model up to the last convolutional block.

Note that:
- in order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.
- we choose to only fine-tune the last convolutional block rather than the entire network in order to prevent overfitting, since the entire network would have a very large entropic capacity and thus a strong tendency to overfit. The features learned by low-level convolutional blocks are more general, less abstract than those found higher up, so it is sensible to keep the first few blocks fixed (more general features) and only fine-tune the last one (more specialized features).
- fine-tuning should be done with a very slow learning rate, and typically with the SGD optimizer rather than an adaptive learning rate optimizer such as RMSProp. This is to make sure that the magnitude of the updates stays very small, so as not to wreck the previously learned features.
After instantiating the VGG base and loading its weights, we add our previously trained fully-connected classifier on top:
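The corresponding code did not survive extraction; a minimal sketch, assuming the VGG16 base was built with the functional keras.applications API (the input shape and the layer-freezing cutoff below are assumptions):

from keras import applications
from keras.models import Model, Sequential
from keras.layers import Flatten, Dense, Dropout

# build the VGG16 convolutional base over fixed-size inputs
base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(150, 150, 3))

# build a classifier model to put on top of the convolutional base
top_model = Sequential()
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))

# it is necessary to start from a fully-trained classifier
# in order to successfully do fine-tuning
top_model.load_weights('bottleneck_fc_model.h5')

# stack the classifier on top of the convolutional base
model = Model(inputs=base_model.input,
              outputs=top_model(base_model.output))

# freeze everything up to the last convolutional block
# (the cutoff index is an assumption; verify with model.summary())
for layer in model.layers[:15]:
    layer.trainable = False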
Finally, we start training the whole thing, with a very slow learning rate:
batch_size = 16

# augmentation configuration for fine-tuning; these parameters are an
# assumption, mirroring the training augmentation used earlier
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
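The actual compile and fit calls are missing above; a minimal sketch using SGD with a very low learning rate, as the fine-tuning notes recommend (the exact hyperparameters are assumptions):

from keras import optimizers

# compile with SGD and a very slow learning rate so the updates
# stay small and do not wreck the pre-trained features
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# fine-tune the last conv block alongside the top-level classifier
model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)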
Here are a few more approaches you can try to get to above 0.95:

- more aggressive data augmentation
- more aggressive dropout
- use of L1 and L2 regularization (also known as "weight decay")
- fine-tuning one more convolutional block (alongside greater regularization)
This post ends here! To recap, here is where you can find the code for our three experiments:
If you have any comment about this post or any suggestion about future topics to
cover, you can reach out on Twitter (https://twitter.com/fchollet).
blog.keras.io (https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) by Francois Chollet