Dear Alex,
Thank you very much again for your time.
It is worth mentioning that I am working from the following example:
http://arxiv.org/abs/1505.07427
Alex Kendall, Matthew Grimes and Roberto Cipolla "PoseNet: A
Convolutional Network for Real-Time 6-DOF Camera Relocalization."
Proceedings of the International Conference on Computer Vision (ICCV),
2015.
Regarding your questions:
1) As I am a new user I don't have much information either, but I am using images of the same size together with quaternion poses, which are exactly the same inputs as in Alex Kendall's example. Hence I don't think there will be a problem.
2) Yes, because I am using images of the same size.
3) I have checked net.blobs[blob_name].diff. I have two Euclidean loss layers; the outputs are 1 for loss_xyz and 500 for loss_wpqr.
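If by "1 for loss_xyz and 500 for loss_wpqr" you mean the loss weights, that matches the PoseNet setup, where the quaternion loss is scaled by the paper's beta factor. A rough sketch of the two layers in a training prototxt (layer and blob names here are illustrative, not copied from the released model, so check them against your own net):

```
layer {
  name: "loss_xyz"
  type: "EuclideanLoss"
  bottom: "fc_pose_xyz"   # predicted position
  bottom: "label_xyz"     # ground-truth position
  top: "loss_xyz"
  loss_weight: 1
}
layer {
  name: "loss_wpqr"
  type: "EuclideanLoss"
  bottom: "fc_pose_wpqr"  # predicted quaternion
  bottom: "label_wpqr"    # ground-truth quaternion
  top: "loss_wpqr"
  loss_weight: 500        # beta from the PoseNet paper
}
```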
4) I also provide images of the same size at deploy time.
5) I couldn't follow this one.
I have trained two models using the same layers. In the first one I used grayscale images and got an unexpectedly high validation error.
Then I collected RGB images with a different camera; the results were almost the same, although the color images were slightly better.
I am planning to train once more with a larger amount of data, expecting better results. But I still don't understand the deploy part, because it gives me much worse results than the validation results.
I am confused about the following: what exactly do the validation results represent? Since I tell my model which images to use for validation before training starts, is it normal that I get better results on the validation data than on an image the model has never seen before?
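On the validation question: validation images are held out from the weight updates, but if they come from the same camera trajectory as the training images, almost every validation frame has a near-duplicate in the training set, so validation error can underestimate the error on genuinely new views. A toy sketch of that effect (hypothetical 1-D data and a nearest-neighbour "model", nothing PoseNet-specific):

```python
# Each sample is (feature, pose); the camera walks along a line, so the
# pose is just the position t and the feature observes it perfectly.
def nearest_neighbour_predict(train, query_feature):
    """Return the pose of the training sample with the closest feature."""
    return min(train, key=lambda s: abs(s[0] - query_feature))[1]

train = [(t, t) for t in range(0, 100, 2)]   # even frames of the walk
val   = [(t, t) for t in range(1, 100, 2)]   # odd frames, interleaved
test  = [(t, t) for t in range(100, 120)]    # a stretch never visited

def mean_error(samples):
    return sum(abs(nearest_neighbour_predict(train, f) - p)
               for f, p in samples) / len(samples)

print(mean_error(val))   # small: every val frame has a close neighbour
print(mean_error(test))  # large: the model has to extrapolate
```

The interleaved validation frames look easy precisely because they sit between training frames; frames from an unvisited stretch do not.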
Could you check the script I am using to get the deploy results? I suspect I may be outputting incorrect data.
I am also attaching the script I use to get the validation results, which I took from Alex Kendall's example.
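A common reason deploy results are much worse than validation results is a preprocessing mismatch: the deploy script must apply exactly the same transforms the training data layer applied (channel order, mean subtraction, axis order, resize/crop). A minimal numpy sketch of Caffe-style input preparation (the function name is mine, and the assumption that training used BGR, mean-subtracted, C×H×W inputs should be checked against your train prototxt):

```python
import numpy as np

def caffe_style_preprocess(img_rgb, mean_bgr):
    """Hypothetical helper: turn an RGB HxWxC uint8 image into the BGR,
    mean-subtracted, CxHxW float32 blob a Caffe net typically expects.
    Must mirror whatever the *training* data layer actually did."""
    img = img_rgb[:, :, ::-1].astype(np.float32)   # RGB -> BGR
    img -= np.asarray(mean_bgr, dtype=np.float32)  # per-channel mean
    return img.transpose(2, 0, 1)                  # H,W,C -> C,H,W

# Example on a tiny 2x2 "image" with only the red channel set
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 100
blob = caffe_style_preprocess(img, mean_bgr=(10.0, 20.0, 30.0))
print(blob.shape)  # (3, 2, 2)
```

If the validation script does this via the network's data layer while the deploy script feeds raw images, the two will disagree even with identical weights.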
I am looking forward to hearing from you.
Regards
On Friday, 6 October 2017 at 13:02:52 UTC+2, Alex Ter-Sarkisov wrote: