Problem with model accuracy (Stanford Car Dataset)


Toqi Tahamid

unread,
Oct 4, 2016, 1:18:47 PM10/4/16
to Keras-users
I am using InceptionV3 to pre-train the model. When I freeze all convolutional layers of InceptionV3 and train only the fully connected layers, I get 48% accuracy on the training data and 26% on the test data after 15 epochs.

Then, when I start fine-tuning the convnet, training accuracy is 76% and test accuracy is 46% after 15 epochs.

I am using the Stanford Car Dataset for training my network. The dataset contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly in a 50-50 split. Classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe.


This is my code:

base_model = InceptionV3(weights='imagenet', include_top=False)

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(196, activation='softmax')(x)

model = Model(input=base_model.input, output=predictions)

# Freeze the whole InceptionV3 base and train only the new top
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit_generator(train_generator,
                    samples_per_epoch=nb_train_samples,
                    nb_epoch=nb_epoch,
                    validation_data=validation_generator,
                    nb_val_samples=nb_validation_samples)

for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

# Unfreeze the layers from index 172 onwards, keep the rest frozen
for layer in model.layers[:172]:
    layer.trainable = False
for layer in model.layers[172:]:
    layer.trainable = True

from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])

nb_epoch = 15
model.fit_generator(train_generator,
                    samples_per_epoch=nb_train_samples,
                    nb_epoch=nb_epoch,
                    validation_data=validation_generator,
                    nb_val_samples=nb_validation_samples)


Can anyone suggest what I am doing wrong here? Why isn't the test accuracy increasing?



Daπid

unread,
Oct 4, 2016, 1:29:16 PM10/4/16
to Toqi Tahamid, Keras-users
On 4 October 2016 at 19:18, Toqi Tahamid <toqit...@gmail.com> wrote:
> x = GlobalAveragePooling2D()(x)

Why are you doing this? It reduces your input to only 2048
numbers and loses all spatial information.
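
(A quick numpy illustration of that reduction, assuming a channels-last layout and a hypothetical 8x8x2048 Inception output:)

```python
import numpy as np

# Hypothetical InceptionV3 output for a batch of 2 images:
# (batch, height, width, channels), channels-last layout assumed.
features = np.random.rand(2, 8, 8, 2048)

# GlobalAveragePooling2D averages over the two spatial axes,
# collapsing each 8x8 map into a single number per channel.
pooled = features.mean(axis=(1, 2))

print(pooled.shape)  # (2, 2048): the spatial layout is gone
```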

On a side note, Inception is a very large network; you may want to run
that part once, save the output, and use it as the input for training. That
way you don't have to recompute it again and again.

Toqi Tahamid

unread,
Oct 4, 2016, 11:45:15 PM10/4/16
to Keras-users, toqit...@gmail.com
1. So you are saying I should omit this line: x = GlobalAveragePooling2D()(x)?

2. Should I run Inception without its ImageNet weights on my dataset once, then save the weights and fine-tune the last fully connected layers with the weights I just saved?

Daπid

unread,
Oct 5, 2016, 2:00:26 AM10/5/16
to Toqi Tahamid, Keras-users
On 4 October 2016 at 22:52, Toqi Tahamid <toqit...@gmail.com> wrote:
> 1. So you are saying I should omit this line - x = GlobalAveragePooling2D()(x) ?

If all your images are the same size, yes. That may (or may not) help.

Another thing: *maybe* the last features of Inception are too
high level. They are good at telling a cat and a car apart, but
perhaps not at telling a Tesla from a BMW, because they
weren't trained for that. If that is the case, you may want to cut the
network further down. Or, as François did in his cat/dog classifier,
retrain the top layers of the network as well.

I haven't read the paper and don't know exactly how it was trained, so
I may be completely wrong here.

> 2. I should run the inception without its imageNet weights with my dataset once. Then I save the weights and fine tune the last fully connected layers with the weights that I just save?

What you are doing is correct: freeze the pre-trained weights, train
the top layer, and then fine-tune the whole thing. What I am suggesting
is a possible optimisation. Right now, I would guess most of your training
time is spent computing the InceptionV3 features, and since those layers are
frozen, the same numbers are computed over and over again. My
suggestion is to compute the output of the frozen layers once, save
it, and train directly on it. It involves a bit more code but
less computing time; consider whether it is worth the effort for you.
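
(For illustration, a minimal numpy sketch of the caching idea. The random projection below is a hypothetical stand-in for the frozen InceptionV3 base, not the real network; the point is only that the expensive features are computed once and the cheap top classifier is trained on the cached result.)

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical stand-in for the frozen conv base: a fixed, untrainable map.
W_frozen = rng.randn(64, 16) / np.sqrt(64)

def frozen_features(batch):
    # "Expensive" part: in the real setup this is InceptionV3's forward pass.
    return np.maximum(batch.dot(W_frozen), 0.0)

X = rng.randn(200, 64)            # fake "images"
y = rng.randint(0, 4, size=200)   # fake labels, 4 classes

# Step 1: compute the frozen features ONCE and cache them
# (in practice: np.save('features.npy', F), then np.load in later runs).
F = frozen_features(X)

# Step 2: train only the top softmax classifier on the cached features;
# the frozen base is never evaluated again.
W_top = np.zeros((16, 4))
for _ in range(300):
    logits = F.dot(W_top)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0        # softmax cross-entropy gradient
    W_top -= 0.1 * F.T.dot(p) / len(y)
```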


/David.

Toqi Tahamid

unread,
Oct 6, 2016, 3:55:35 PM10/6/16
to Keras-users, toqit...@gmail.com
I am getting this error when I omit GlobalAveragePooling2D:

Using Theano backend.
Found 8144 images belonging to 196 classes.
Found 8041 images belonging to 196 classes.
Traceback (most recent call last):

  File "<ipython-input-1-78d528fa154a>", line 1, in <module>
    runfile('D:/Machine Learning/Models/Cal All/Inception.py', wdir='D:/Machine Learning/Models/Cal All')

  File "D:\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)

  File "D:\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)

  File "D:/Machine Learning/Models/Cal All/Inception.py", line 81, in <module>
    x = Dense(1024, activation='relu')(x)

  File "d:\git\keras\keras\engine\topology.py", line 470, in __call__
    self.assert_input_compatibility(x)

  File "d:\git\keras\keras\engine\topology.py", line 411, in assert_input_compatibility
    str(K.ndim(x)))

Exception: Input 0 is incompatible with layer dense_1: expected ndim=2, found ndim=4

Daπid

unread,
Oct 7, 2016, 1:37:12 AM10/7/16
to Toqi Tahamid, Keras-users
On 6 October 2016 at 21:55, Toqi Tahamid <toqit...@gmail.com> wrote:
> Exception: Input 0 is incompatible with layer dense_1: expected ndim=2,
> found ndim=4

You need to flatten it before passing it to dense layers.
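
(In Keras that means replacing the pooling line with x = Flatten()(base_model.output). A numpy sketch of why the shapes mismatch, using a hypothetical 8x8x2048 feature map:)

```python
import numpy as np

# Without GlobalAveragePooling2D, the conv base emits a 4-D tensor:
conv_out = np.random.rand(2, 8, 8, 2048)   # (batch, h, w, channels) -> ndim=4

# A Dense layer expects (batch, features) -> ndim=2.
# Flattening keeps every spatial value instead of averaging them away:
flat = conv_out.reshape(conv_out.shape[0], -1)

print(conv_out.ndim, flat.ndim)   # 4 2
print(flat.shape)                 # (2, 131072) == (2, 8 * 8 * 2048)
```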