Trying to build a fully convolutional neural network


emmanue...@gmail.com

Sep 13, 2016, 1:20:57 PM
to Keras-users
Dear Keras community,

I'm trying to perform segmentation on 3D images (width, length, depth), whose voxels contain gray values (i.e. channels=1). So far I have been using Elektronn (http://elektronn.org/documentation/), a Python library based on Theano, because it is well suited to 3D images and lets you build fully convolutional neural networks (FCNNs). I recently realized that Keras also has 3D convolution and pooling layers, and I want to give it a try. So I started porting my model from Elektronn to Keras, but ran into errors, which raised some questions.

With fully convolutional neural nets:
[1] The input can be of any size
[2] Instead of having a single label as an output, it outputs an array of labels

To train an FCNN with Elektronn, I provide the data and the targets, which are both arrays: 3D images and the corresponding segmented ground truths (which are also 3D images). I tried to do the same with Keras, but it doesn't seem to work the same way. Is it possible to achieve something similar with Keras?

Here are links to my script and the error, which is linked to the target format.

I have looked through the documentation, but did not find the precise format the training data and targets are supposed to have in Keras.
Does somebody have an example or tips on how to build an FCNN with Keras? That would be great!
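For reference, here is a minimal sketch of the array shapes Keras 1 expects with the Theano backend (channels-first ordering). The sizes are made-up placeholders, and out_dim=3 is just an assumed example of what model.output_shape might report:

```python
import numpy as np

# Hypothetical sizes, for illustration only.
N, channels, dim = 10, 1, 39   # samples, gray-value channel, cube edge length
n_classes, out_dim = 2, 3      # out_dim must equal the network's output size

# Input volumes: (samples, channels, depth, height, width)
data = np.random.random((N, channels, dim, dim, dim)).astype("float32")

# Targets for a fully convolutional net are volumes too, one channel per class.
# Their spatial size must match the model's OUTPUT, not the input.
target = np.zeros((N, n_classes, out_dim, out_dim, out_dim), dtype="float32")

print(data.shape, target.shape)
```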

Cheers,
Manu

Qixianbiao Qixianbiao

Sep 13, 2016, 10:14:31 PM
to Keras-users

I have three suggestions:
### To keep the convolved feature maps the same size as the original input, use border_mode='same'.
### The final activation should be 'sigmoid', not 'linear'.
### The loss should be 'binary_crossentropy', not 'mse'.

To build a really working demo, you should find some mature code on GitHub.
Please refer to the code below:

paper reference:

# ======================================================================
# Generate dummy data:
import h5py
import numpy as np

Nsamples = 100
Nclasses = 2
dim = 39

data  = np.random.random((Nsamples, 1, dim, dim, dim))

target = np.random.random((Nsamples, dim, dim, dim))
target[target>=0.5] = 1
target[target<0.5] = 0
target = target.astype(int) # randomly distributed 0s and 1s (the two classes)
# NOTE: with the 'valid' convolutions and pooling below, the model's output is
# smaller than dim x dim x dim, so this target shape will not match it as-is.


# ======================================================================
# Define my model:
from keras.models import Sequential
model = Sequential()

from keras.layers.convolutional import Convolution3D, MaxPooling3D

model.add(Convolution3D(input_shape=(1,None,None,None), 
                        nb_filter=32, 
                        kernel_dim1=6, kernel_dim2=6, kernel_dim3=6, 
                        init='uniform', 
                        activation='relu', 
                        bias=True, 
                        border_mode='valid'))   ### to keep the output the same size as the input, use border_mode='same'
                        
model.add(MaxPooling3D( pool_size=(2,2,2), 
                        strides=None, 
                        border_mode='valid'))   
                        
model.add(Convolution3D(nb_filter=32, 
                        kernel_dim1=4, kernel_dim2=4, kernel_dim3=4, 
                        init='uniform', 
                        activation='relu', 
                        bias=True, 
                        border_mode='valid'))
                        
model.add(MaxPooling3D( pool_size=(2,2,2), 
                        strides=None, 
                        border_mode='valid'))
                        
model.add(Convolution3D(nb_filter=32, 
                        kernel_dim1=3, kernel_dim2=3, kernel_dim3=3, 
                        init='uniform', 
                        activation='relu', 
                        bias=True, 
                        border_mode='valid'))
                        
model.add(Convolution3D(nb_filter=32, 
                        kernel_dim1=3, kernel_dim2=3, kernel_dim3=3, 
                        init='uniform', 
                        activation='relu', 
                        bias=True, 
                        border_mode='valid'))
                        
model.add(Convolution3D(nb_filter=Nclasses, 
                        kernel_dim1=1, kernel_dim2=1, kernel_dim3=1, 
                        init='uniform', 
                        activation='linear',     ### 'linear' should be 'sigmoid'
                        bias=True, 
                        border_mode='valid'))
  
                        
# ======================================================================                     
# Configure the training process:
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])  
### for the loss, 'mse' --> 'binary_crossentropy'


# ======================================================================
# Run training process:
model.fit(data, target, nb_epoch=150, batch_size=1)
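To see why the target in this demo cannot simply be dim x dim x dim, the output size of the layer stack above can be traced by hand. A small sketch (not from the original post) that walks one spatial dimension through the 'valid' convolutions and 2x2x2 poolings:

```python
def valid_conv(n, k):
    """Spatial size after a 'valid' convolution with kernel size k."""
    return n - k + 1

def pool(n, p=2):
    """Spatial size after non-overlapping max pooling with pool size p."""
    return n // p

# Trace dim=39 through the layers defined above.
n = 39
n = valid_conv(n, 6)   # conv 6x6x6
n = pool(n)            # pool 2x2x2
n = valid_conv(n, 4)   # conv 4x4x4
n = pool(n)            # pool 2x2x2
n = valid_conv(n, 3)   # conv 3x3x3
n = valid_conv(n, 3)   # conv 3x3x3
n = valid_conv(n, 1)   # conv 1x1x1
print(n)  # the per-sample target must be n x n x n, not 39x39x39
```

So the target in this demo would have to be 3x3x3 per sample; this is exactly the kind of mismatch the error in the original post comes from.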



On Wednesday, September 14, 2016 at 1:20:57 AM UTC+8, manu wrote:

manu

Sep 15, 2016, 12:42:28 PM
to Keras-users
Thanks for your tips and the examples!
In the end my problem was that my target arrays did not fit the output size of my network. It works now :-)
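For anyone hitting the same error: one common fix is to crop (or build) the ground-truth volumes to the model's output size. A hedged sketch in NumPy; the crop size of 3 here is an assumed example, read yours from model.output_shape:

```python
import numpy as np

def center_crop_3d(volume, out_size):
    """Center-crop a cubic 3D volume to out_size along each spatial axis."""
    d = volume.shape[0]
    start = (d - out_size) // 2
    stop = start + out_size
    return volume[start:stop, start:stop, start:stop]

gt = np.random.randint(0, 2, size=(39, 39, 39))  # dummy ground-truth labels
target = center_crop_3d(gt, 3)                   # 3 = assumed model output size
print(target.shape)  # (3, 3, 3)
```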

nico

Oct 27, 2016, 4:29:46 PM
to Keras-users
Hey manu, can you post your solution for the dimensions of the target array?

Thanks!

gioh....@gmail.com

Jan 18, 2017, 7:42:45 PM
to Keras-users
Hey manu, can you post your solution for the dimensions of the target array?

I get the same error :/

Thanks!