x = np.array([[[[0.5]], [[-0.5]]]], dtype=np.float32)  # float32, not the bare name float32
res = net.predict(x, oversample=False)
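The input array here has shape (1, 2, 1, 1), which matches the four `input_dim` lines in the deploy definition below (batch of 1, 2 channels, 1x1 spatial size). A quick check with plain numpy, no Caffe required:

```python
import numpy as np

# Same batch as in the predict call: 1 sample, 2 channels, 1x1 spatial size (NxCxHxW).
x = np.array([[[[0.5]], [[-0.5]]]], dtype=np.float32)
print(x.shape)  # (1, 2, 1, 1)
```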
name: "SimpleNet"
input: "data"
input_dim: 1
input_dim: 2
input_dim: 1
input_dim: 1
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
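The final `Softmax` layer turns the two `ip2` scores into class probabilities. In numpy terms it computes the following (a sketch of the math, not Caffe's actual implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability,
    # then normalize so the outputs sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 0.0]))
print(p)  # p ≈ [0.881, 0.119]
```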
name: "SimpleNet"
layer {
  name: "simple"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  data_param {
    source: "train_data_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "simple"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  data_param {
    source: "test_data_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
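To actually run training with this net definition you also need a solver file. A minimal sketch, where the file names and hyperparameters are assumptions for illustration, not taken from the original post:

```
net: "simple_train_test.prototxt"
test_iter: 10          # 10 test batches of 100 = 1000 test samples
test_interval: 500
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "fixed"
display: 100
max_iter: 5000
snapshot: 1000
snapshot_prefix: "simple"
solver_mode: CPU
```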
def load_csv_into_lmdb(csv_name, lmdb_name):
    df = pd.read_csv(csv_name)
    y = df.ix[:, 0].as_matrix()    # first column: labels
    x = df.ix[:, 1:].as_matrix()   # remaining columns: features
    x = x[:, :, None, None]        # (N, D) -> (N, D, 1, 1), Caffe's NxCxHxW layout
Btw. not a python pro, but what is happening here?
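Indexing with `None` (an alias for `np.newaxis`) inserts a singleton axis, so `x[:,:,None,None]` turns the 2-D (samples, features) matrix into the 4-D (N, D, 1, 1) blob shape Caffe expects. A small numpy illustration:

```python
import numpy as np

x = np.arange(6).reshape(3, 2)   # 3 samples, 2 features each
x4 = x[:, :, None, None]         # equivalent to x.reshape(3, 2, 1, 1)
print(x4.shape)  # (3, 2, 1, 1): NxCxHxW with H = W = 1
```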
def load_csv_into_lmdb(csv_name, lmdb_name):
    df = pd.read_csv(csv_name)
    y = df.ix[:, 0].as_matrix()
    x = df.ix[:, 1:].as_matrix()
    x = x[:, :, None, None]

Aren't you forgetting the labels at df.ix[:,2:] ?
I also don't see how you're adding both x and y to the datum float_data field. And then for the label you're using datum.label = int(y[i]) ?
datum.float_data.extend(x[i].flat)
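On the column question: `df.ix[:,0]` selects the label column and `df.ix[:,1:]` selects everything else, so no columns are lost. Only the features go into `float_data`; the label is stored separately in `datum.label`. A quick check of the split with a toy DataFrame, using `iloc`/`to_numpy` (the modern equivalents of the now-removed `ix`/`as_matrix`):

```python
import pandas as pd

df = pd.DataFrame([[0, 0.5, -0.5],
                   [1, 0.1, 0.9]],
                  columns=["label", "f1", "f2"])
y = df.iloc[:, 0].to_numpy()   # labels: first column only
x = df.iloc[:, 1:].to_numpy()  # features: all remaining columns
print(y.tolist())  # [0, 1]
print(x.shape)     # (2, 2)
```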