How to feed HDF5 with multiple labels to Caffe?

villiams leung

Jun 9, 2015, 5:46:58 AM
to caffe...@googlegroups.com

Hi guys,
I have two questions.

1. I would like to use Caffe for facial keypoint detection. I have a dataset of images and their corresponding facial keypoints (x1, y1), (x2, y2), ... How should I pack these keypoints as labels in HDF5? I have already packed discrete labels such as 0, 1, 2, where the label dataset has dimension N x 1. For the keypoints, should the label dataset have dimension N x (2 * number of keypoints)?
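For keypoint regression, one common approach is to store all coordinates of a face as one row of floats, which gives exactly the N x (2 * number of keypoints) layout described above. A minimal h5py sketch (the file name, keypoint count, and placeholder arrays are all made up for illustration):

```python
import h5py
import numpy as np

N, K = 100, 5  # number of images and keypoints per face (assumed values)

# Placeholder arrays; in practice these come from your dataset.
images = np.zeros((N, 1, 40, 40), dtype=np.float32)  # grayscale 40x40 inputs
keypoints = np.zeros((N, 2 * K), dtype=np.float32)   # x1, y1, x2, y2, ...

with h5py.File("train_keypoints.h5", "w") as f:
    f.create_dataset("data", data=images)       # matches top: "data"
    f.create_dataset("label", data=keypoints)   # N x (2*K) float labels
```

Since coordinates are continuous values, a regression loss such as EuclideanLoss fits them better than SoftmaxWithLoss.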

2. I am also doing multi-label training with two labels. I pack the labels into an N x 2 dataset (I have also tried N x 2 x 1 x 1) and add a Slice layer, but the accuracy is poor: about 40% for label 1 (values 0, 1) and about 70% for label 2 (values 0, 1, 2). When I trained the same data with the Toronto convnet as two single-label models, label 1 reached about 99% accuracy and label 2 about 70%.

The dimensions of my current HDF5 file are:
data: N x 1 x 40 x 40 (40 x 40 grayscale images)
label: N x 2 (column 1 is label 1, column 2 is label 2); I have also tried N x 2 x 1 x 1.
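Packing that two-label layout could look like the following sketch (dummy data; the dataset names `data` and `label` match the tops of the HDF5Data layer, while the file name is made up):

```python
import h5py
import numpy as np

N = 100
data = np.zeros((N, 1, 40, 40), dtype=np.float32)

# Two integer labels per sample, stored as floats side by side: N x 2.
gender = np.random.randint(0, 2, size=N)   # label 1: 0 or 1
ages = np.random.randint(0, 3, size=N)     # label 2: 0, 1 or 2
label = np.column_stack([gender, ages]).astype(np.float32)

with h5py.File("train_multilabel.h5", "w") as f:
    f.create_dataset("data", data=data)
    f.create_dataset("label", data=label)
```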

Here are my train prototxt and solver configuration. Could you tell me how to improve the accuracy? Thank you for your help.

------train.prototxt
name: "2FEA"
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "hdf5_classification/trainin1.txt"
    batch_size: 100
    shuffle: true
  }
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  hdf5_data_param {
    source: "hdf5_classification/testin1.txt"
    batch_size: 100
    shuffle: true
  }
}
layer {
  name: "slice_pair"
  type: "Slice"
  bottom: "label"
  top: "gender"
  top: "ages"
  slice_param {
    slice_dim: 1
    slice_point: 1
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 16
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 1
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 48
    pad: 5
    kernel_size: 5
    stride: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "pool2"
  top: "pool2"
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 3
    stride: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "pool3"
  top: "pool3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 2
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "conv4"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 100
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip1"
  bottom: "ages"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "ages"
  top: "loss"
}
layer {
  name: "accuracy2"
  type: "Accuracy"
  bottom: "ip1"
  bottom: "gender"
  top: "accuracy2"
  include {
    phase: TEST
  }
}
layer {
  name: "loss2"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "gender"
  top: "loss2"
}
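The Slice layer above cuts the N x 2 label blob along axis 1 (slice_dim: 1) at slice_point: 1, producing one N x 1 blob per label. A standalone numpy illustration of what that split does:

```python
import numpy as np

# A batch of 3 samples with two labels each (N x 2), as in the HDF5 layout.
label = np.array([[0., 2.],
                  [1., 0.],
                  [1., 1.]], dtype=np.float32)

gender = label[:, 0:1]  # first column  -> N x 1 blob "gender"
ages = label[:, 1:2]    # second column -> N x 1 blob "ages"
```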

---solver.prototxt

# The train/test net protocol buffer definition
net: "h5eg/HOG_Train_test.prototxt"
test_iter: 100
# Carry out testing every 100 training iterations.
test_interval: 100
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.00005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
display: 1000
# The maximum number of iterations
max_iter: 200000
# Snapshot intermediate results
snapshot: 20000
snapshot_prefix: "2FEA"
# Solver mode: CPU or GPU
solver_mode: GPU
# single feature: 0.41
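With lr_policy: "inv", Caffe decays the learning rate as base_lr * (1 + gamma * iter)^(-power). A quick sketch of what that schedule looks like with the values above:

```python
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(it):
    """Learning rate under Caffe's "inv" policy at iteration `it`."""
    return base_lr * (1.0 + gamma * it) ** (-power)

print(inv_lr(0))  # 0.01 at the start; roughly 0.0059 by iteration 10000
```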
