Conv1d with Error: expected dense to have shape (None, 800, 1) but got array with shape (200, 1, 1)


kuangs...@gmail.com

Aug 7, 2017, 5:47:14 AM
to Keras-users

I try to build a conv1d model, and my code is as follows:

# My training data is (X, Y) with shapes X.shape = (200, 2000, 1), Y.shape = (200, 1, 1)

from keras.models import Sequential
from keras.layers import Conv1D, Dropout, Activation, MaxPooling1D, Dense

# Create model
model = Sequential()
model.add(Conv1D(filters=5, kernel_size=400, activation='relu', input_shape=X.shape[1:]))
model.add(Dropout(0.4))
model.add(Activation('sigmoid'))
model.add(MaxPooling1D())
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)


But the error is as follows:

ValueError Traceback (most recent call last)
in ()
33 model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
34 # Fit the model
---> 35 model.fit(X, Y, epochs=150, batch_size=10, verbose=2)
36 # calculate predictions
37 #predictions_NCS = model.predict(X2)

C:\Program Files\Anaconda3\lib\site-packages\keras\models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
861 class_weight=class_weight,
862 sample_weight=sample_weight,
--> 863 initial_epoch=initial_epoch)
864
865 def evaluate(self, x, y, batch_size=32, verbose=1,

C:\Program Files\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
1356 class_weight=class_weight,
1357 check_batch_axis=False,
-> 1358 batch_size=batch_size)
1359 # Prepare validation data.
1360 if validation_data:

C:\Program Files\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
1236 output_shapes,
1237 check_batch_axis=False,
-> 1238 exception_prefix='target')
1239 sample_weights = _standardize_sample_weights(sample_weight,
1240 self._feed_output_names)

C:\Program Files\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
138 ' to have shape ' + str(shapes[i]) +
139 ' but got array with shape ' +
--> 140 str(array.shape))
141 return arrays
142

ValueError: Error when checking target: expected dense_63 to have shape (None, 800, 1) but got array with shape (200, 1, 1)



I don't understand this error and don't know how to solve the problem. Can someone help me? Thank you!

Daπid

Aug 7, 2017, 6:39:29 AM
to kuangs...@gmail.com, Keras-users
Inspect your model using .summary():

Layer (type)                 Output Shape              Param #  
=================================================================
conv1d_1 (Conv1D)            (None, 1601, 5)           2005     
_________________________________________________________________
dropout_1 (Dropout)          (None, 1601, 5)           0        
_________________________________________________________________
activation_1 (Activation)    (None, 1601, 5)           0        
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 800, 5)            0        
_________________________________________________________________
dense_1 (Dense)              (None, 800, 1)            6        
=================================================================


As you can see, the MaxPooling pools over a window, reducing your data to 800 time steps. The Dense layer is then applied per time step, giving you one number for each of the 800 steps, which is why the target is expected to have shape (None, 800, 1).

You probably want a global max pooling instead, which would give you just 5 numbers (one per filter).

Also, you'll want to replace the huge kernel in the first layer with several stacked layers using smaller kernels.
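
Something along those lines would look roughly like this (a rough sketch only; the filter counts and kernel sizes are arbitrary placeholders, not tuned values):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense

model = Sequential()
# Stack of small-kernel convolutions instead of a single kernel_size=400 layer
model.add(Conv1D(32, kernel_size=7, activation='relu', input_shape=(2000, 1)))
model.add(MaxPooling1D(2))
model.add(Conv1D(32, kernel_size=7, activation='relu'))
model.add(MaxPooling1D(2))
model.add(Conv1D(32, kernel_size=7, activation='relu'))
# Collapse the time dimension entirely: output shape becomes (None, 32)
model.add(GlobalMaxPooling1D())
model.add(Dense(1, activation='sigmoid'))
model.summary()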


kuangs...@gmail.com

Aug 7, 2017, 7:22:11 AM
to Keras-users, kuangs...@gmail.com
Hi David,

I replaced the MaxPooling with GlobalMaxPooling, and then it worked.
I also tried the other settings you advised. But I can't tell yet whether the results are better or not. Maybe I should learn more about Keras. In any case, it was really kind of you. Thank you very much.

On Monday, August 7, 2017 at 12:39:29 PM UTC+2, David Menéndez Hurtado wrote:

kuangs...@gmail.com

Aug 7, 2017, 10:40:16 AM
to Keras-users, kuangs...@gmail.com
I have changed my code a little. It seemed to work, but the results are bad.
# My training data is (X, Y) with shapes X.shape = (200, 2000, 1), Y.shape = (200, 1, 1)
The code is as follows:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense, Activation

seed = 7
np.random.seed(seed)

# Create model
model = Sequential()
model.add(Conv1D(filters=1000, kernel_size=5,
                 padding='valid',
                 activation='relu',
                 strides=1,
                 input_shape=X.shape[1:]))

# We add a vanilla hidden layer:
# model.add(Activation('relu'))

# We use global max pooling:
model.add(GlobalMaxPooling1D())

# We project onto a single-unit output layer:
model.add(Dense(1))
model.add(Activation('linear'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=20, verbose=2)

But the results are:

Epoch 1/150
40s - loss: 821.9923 - acc: 0.0000e+00
Epoch 2/150
40s - loss: 302.9678 - acc: 0.0000e+00
Epoch 3/150
41s - loss: 205.8258 - acc: 0.0000e+00
Epoch 4/150
42s - loss: 148.6056 - acc: 0.0000e+00
Epoch 5/150
42s - loss: 100.2924 - acc: 0.0000e+00
Epoch 6/150
41s - loss: 49.8188 - acc: 0.0000e+00
Epoch 7/150
43s - loss: -9.6716e+00 - acc: 0.0000e+00

As can be seen, the accuracy is always 0. Can someone help me with the layer settings so that I can get a better result?

pranoy Radhakrishnan

Aug 8, 2017, 10:10:29 AM
to Keras-users, kuangs...@gmail.com
Try using a softmax activation function at the output layer. Use categorical cross entropy as the loss function.
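
A minimal sketch of that suggestion, assuming the targets are class labels (num_classes here is a hypothetical placeholder, and the targets would need to be one-hot encoded):

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

num_classes = 10  # placeholder: number of classes, assuming a classification problem

model = Sequential()
model.add(Conv1D(64, kernel_size=5, activation='relu', input_shape=(2000, 1)))
model.add(GlobalMaxPooling1D())
# Softmax output with one unit per class
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])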

kuangs...@gmail.com

Aug 10, 2017, 7:45:09 AM
to Keras-users, kuangs...@gmail.com
Thank you for your advice, but it didn't work. My output (training data) is Y = [373.6, 378.7, 380.2, ...], 200 real numbers, not values between 0 and 1, nor integers. So how exactly should I write my code?
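
A rough regression-style sketch of what this might look like, assuming mean squared error is an appropriate loss for real-valued targets like these (the filter count and kernel size are placeholders):

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(32, kernel_size=7, activation='relu', input_shape=(2000, 1)))
model.add(GlobalMaxPooling1D())
# Single linear output unit for a real-valued target
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
# The targets would need shape (200, 1) rather than (200, 1, 1), e.g.:
# model.fit(X, Y.reshape(200, 1), epochs=150, batch_size=20, verbose=2)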

On Tuesday, August 8, 2017 at 4:10:29 PM UTC+2, pranoy Radhakrishnan wrote: