Hi oeb,
I redesigned the net as follows:
[('data', (1, 3, 64, 64)),
('conv1', (1, 64, 60, 60)),
('pool1', (1, 64, 30, 30)),
('conv2', (1, 96, 30, 30)),
('conv3', (1, 128, 30, 30)),
('conv4', (1, 256, 30, 30)),
('pool2', (1, 256, 15, 15)),
('ip1', (1, 500)),
('ip2', (1, 2)),
('prob', (1, 2))]
After re-training, accuracies went up as expected, but the filters still look random. I realise I kept the large fully connected layer at the bottom; could that still be the reason? The classification is binary, so could it be that there is so little difference between the two classes that the classification boundary is only vague? Here are examples of each class, class 0 on the left and class 1 on the right:
![](https://lh3.googleusercontent.com/-LQmf7GLazYI/V3Y2tn8yMYI/AAAAAAAAAFk/_vPlENyfKdojnJzbjQihhCNSQUJsCcZTACKgB/s320/without441_.out.png)
![](https://lh3.googleusercontent.com/-Ct00p4w-U5k/V3Yx0sei6dI/AAAAAAAAAFE/aybgsc0BIcgb_xtCcqBJR_HKItEoQMzYwCKgB/s320/with5_.out.png)
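For reference, this is roughly how I inspect the conv1 filters. It is a minimal numpy sketch, loosely following the normalize-and-tile approach from Caffe's filter-visualization example notebook; `tile_filters` is my own helper name, and the weights would come from `net.params['conv1'][0].data` in pycaffe:

```python
import numpy as np

def tile_filters(weights):
    """Normalize conv filters to [0, 1] and tile them into one grid image.

    weights: array of shape (num_filters, channels, h, w), e.g. the
    conv1 weights from net.params['conv1'][0].data in pycaffe.
    Returns an (grid_h, grid_w, channels) array ready for plt.imshow.
    """
    data = weights.transpose(0, 2, 3, 1)                  # -> (n, h, w, c)
    data = (data - data.min()) / (data.max() - data.min() + 1e-8)

    side = int(np.ceil(np.sqrt(data.shape[0])))           # filters per grid row
    pad = ((0, side ** 2 - data.shape[0]),                # pad to a full square
           (0, 1), (0, 1),                                # 1-pixel border per filter
           (0, 0))
    data = np.pad(data, pad, mode='constant', constant_values=1)

    # reshape the stack of filters into one (side*h, side*w, c) mosaic
    data = data.reshape((side, side) + data.shape[1:])
    data = data.transpose(0, 2, 1, 3, 4)
    return data.reshape((side * data.shape[1],
                         side * data.shape[3], -1))
```

If the output of `plt.imshow(tile_filters(w))` shows no oriented edges or colour blobs after training, the first layer genuinely has not learned structured features.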