ValueError: total size of new array must be unchanged

Viewed 527 times

neha makhija

Feb 26, 2016, 07:23:19
to theano-users
Hey,
          I am trying to run the code below on my images:

import network3
from network3 import Network
from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer

training_data, validation_data, test_data = network3.load_data_shared()
mini_batch_size = 10
#net = Network([
#        FullyConnectedLayer(n_in=784, n_out=100),
#        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
#net.SGD(training_data, 60, mini_batch_size, 0.1,
#        validation_data, test_data)

net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 200, 667),
                      filter_shape=(5, 1, 5, 5),
                      poolsize=(4, 4)),
        ConvPoolLayer(image_shape=(mini_batch_size, 5, 49, 166),
                      filter_shape=(10, 5, 5, 5),
                      poolsize=(2, 2)),
        FullyConnectedLayer(n_in=10*4*4, n_out=500),
        SoftmaxLayer(n_in=500, n_out=3)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1,
        validation_data, test_data)


When I run the following on the command line:
$ THEANO_FLAGS=floatX=float32,optimizer=None,exception_verbosity=high,optimizer_excluding=scan  python test.py
I get the following error:

ValueError: total size of new array must be unchanged
Apply node that caused the error: Reshape{4}(sigmoid.0, TensorConstant{[  1   5  49 166]})
Inputs types: [TensorType(float32, (True, False, False, False)), TensorType(int64, vector)]
Inputs shapes: [(1, 5, 49, 165), (4,)]
Inputs strides: [(161700, 32340, 660, 4), (8,)]
Inputs values: ['not shown', array([  1,   5,  49, 166])]

Debugprint of the apply node:
Reshape{4} [@A] <TensorType(float32, (True, False, False, False))> ''  
 |sigmoid [@B] <TensorType(float32, (True, False, False, False))> ''  
 | |Elemwise{add,no_inplace} [@C] <TensorType(float32, (True, False, False, False))> ''  
 |   |DownsampleFactorMax{(4, 4), (4, 4), True, (0, 0)} [@D] <TensorType(float32, (True, False, False, False))> ''  
 |   | |ConvOp{('imshp', (1, 200, 667)),('kshp', (5, 5)),('nkern', 5),('bsize', 1),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 1),('unroll_kern', 5),('unroll_patch', False),('imshp_logical', (1, 200, 667)),('kshp_logical', (5, 5)),('kshp_logical_top_aligned', True)} [@E] <TensorType(float32, (True, False, False, False))> ''  
 |   |   |Reshape{4} [@F] <TensorType(float32, (True, True, False, False))> ''  
 |   |   | |Subtensor{int64:int64:} [@G] <TensorType(float32, matrix)> ''  
 |   |   | | |<TensorType(float32, matrix)> [@H] <TensorType(float32, matrix)>
 |   |   | | |ScalarFromTensor [@I] <int64> ''  
 |   |   | | | |Elemwise{mul,no_inplace} [@J] <TensorType(int64, scalar)> ''  
 |   |   | | |   |<TensorType(int64, scalar)> [@K] <TensorType(int64, scalar)>
 |   |   | | |   |TensorConstant{1} [@L] <TensorType(int8, scalar)>
 |   |   | | |ScalarFromTensor [@M] <int64> ''  
 |   |   | |   |Elemwise{mul,no_inplace} [@N] <TensorType(int64, scalar)> ''  
 |   |   | |     |Elemwise{add,no_inplace} [@O] <TensorType(int64, scalar)> ''  
 |   |   | |     | |<TensorType(int64, scalar)> [@K] <TensorType(int64, scalar)>
 |   |   | |     | |TensorConstant{1} [@L] <TensorType(int8, scalar)>
 |   |   | |     |TensorConstant{1} [@L] <TensorType(int8, scalar)>
 |   |   | |TensorConstant{[  1   1 200 667]} [@P] <TensorType(int64, vector)>
 |   |   |<TensorType(float32, 4D)> [@Q] <TensorType(float32, 4D)>
 |   |DimShuffle{x,0,x,x} [@R] <TensorType(float32, (True, False, True, True))> ''  
 |     |<TensorType(float32, vector)> [@S] <TensorType(float32, vector)>
 |TensorConstant{[  1   5  49 166]} [@T] <TensorType(int64, vector)>

Storage map footprint:
 - Elemwise{add,no_inplace}.0, Shape: (1, 5, 49, 165), ElemSize: 4 Byte(s), TotalSize: 161700 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - sigmoid.0, Shape: (1, 5, 49, 165), ElemSize: 4 Byte(s), TotalSize: 161700 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{0}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{210}, Shape: (1,), ElemSize: 2 Byte(s), TotalSize: 2.0 Byte(s)
 - TensorConstant{2}, Shape: (1,), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - TensorConstant{[  1   5  49 166]}, Shape: (4,), ElemSize: 8 Byte(s), TotalSize: 32 Byte(s)
 - <TensorType(int64, scalar)>, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0.0}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - <TensorType(float32, matrix)>, Shape: (210, 133400), ElemSize: 4 Byte(s), TotalSize: 112056000 Byte(s)
 - <TensorType(float32, vector)>, Shape: (210,), ElemSize: 4 Byte(s), TotalSize: 840 Byte(s)
 - <TensorType(float32, 4D)>, Shape: (5, 1, 5, 5), ElemSize: 4 Byte(s), TotalSize: 500 Byte(s)
 - <TensorType(float32, vector)>, Shape: (5,), ElemSize: 4 Byte(s), TotalSize: 20 Byte(s)
 - <TensorType(float32, 4D)>, Shape: (10, 5, 5, 5), ElemSize: 4 Byte(s), TotalSize: 5000 Byte(s)
 - <TensorType(float32, vector)>, Shape: (10,), ElemSize: 4 Byte(s), TotalSize: 40 Byte(s)
 - <RandomStateType>, ElemSize: 64 Byte(s)
 - w, Shape: (160, 500), ElemSize: 4 Byte(s), TotalSize: 320000 Byte(s)
 - b, Shape: (500,), ElemSize: 4 Byte(s), TotalSize: 2000 Byte(s)
 - <RandomStateType>, ElemSize: 64 Byte(s)
 - w, Shape: (500, 3), ElemSize: 4 Byte(s), TotalSize: 6000 Byte(s)
 - b, Shape: (3,), ElemSize: 4 Byte(s), TotalSize: 12 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{[  1 160]}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize: 16 Byte(s)
 - TensorConstant{1.0}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{1}, Shape: (1,), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - TensorConstant{[  1 500]}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize: 16 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - Constant{-1}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0}, Shape: (1,), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - TensorConstant{[  1   1 200 667]}, Shape: (4,), ElemSize: 8 Byte(s), TotalSize: 32 Byte(s)
 - Reshape{4}.0, Shape: (1, 1, 200, 667), ElemSize: 4 Byte(s), TotalSize: 533600 Byte(s)
 - ConvOp{('imshp', (1, 200, 667)),('kshp', (5, 5)),('nkern', 5),('bsize', 1),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 1),('unroll_kern', 5),('unroll_patch', False),('imshp_logical', (1, 200, 667)),('kshp_logical', (5, 5)),('kshp_logical_top_aligned', True)}.0, Shape: (1, 5, 196, 663), ElemSize: 4 Byte(s), TotalSize: 2598960 Byte(s)
 - TensorConstant{0.10000000149}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - Constant{0}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

Please help me out

Thanks
Neha








Pascal Lamblin

Mar 1, 2016, 16:07:31
to theano...@googlegroups.com
The output of your first conv layer has size (1, 5, 49, 165), but the
second layer expects an input of size [ 1, 5, 49, 166].
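The mismatch can be checked with a quick back-of-the-envelope calculation. The sketch below (a hypothetical helper, assuming a 'valid' convolution followed by non-overlapping max-pooling with ignore_border=True, which is what the debugprint above shows) computes the spatial output size of a ConvPoolLayer:

```python
# Spatial output of a 'valid' convolution followed by non-overlapping
# max-pooling with ignore_border=True (border remainder is discarded,
# i.e. floor division).
def conv_pool_out(image, filt, pool):
    return tuple((i - f + 1) // p for i, f, p in zip(image, filt, pool))

# First layer: 200x667 image, 5x5 filter, 4x4 pooling.
# (200-5+1)//4 = 49 and (667-5+1)//4 = 165, not 166.
print(conv_pool_out((200, 667), (5, 5), (4, 4)))  # (49, 165)
```

So the image_shape of the second ConvPoolLayer should be (mini_batch_size, 5, 49, 165), and n_in of the FullyConnectedLayer needs to be recomputed accordingly.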


--
Pascal