requesting simple tutorial for mortal home users to toy with DIY bvlc_googlenet deepdream eyecandy

Marie Duopoint

Jul 10, 2015, 10:21:06 AM
to caffe...@googlegroups.com

Caffe is very comprehensive; it can do all sorts of things that have little to no connection to each other (audio, images, OCR, etc.), and there really isn't any rigorous and exhaustive documentation you can work with (quote: "All questions about usage, installation, code, and applications should be searched for and asked on the caffe-users mailing list."). Without a concrete goal to start from, the learning barrier is extremely high, especially if you have no degree in computer science and don't know much about machine learning.


Many people have become interested in caffe now because of the impressive visuals from the pre-trained bvlc_googlenet. But this really teaches nothing about caffe itself. All you do is install it, copy-paste and press enter. And the pre-trained model only outputs birds, dogs, buildings and a couple of other pre-defined things, which has become very boring by now if you have been following the popular content.

So you can imagine that all those people who were accidentally drawn in by the recent publicity try to use caffe, but then hit a wall where the conclusion is:

- you preferably need a bunch of GPUs costing $15,000
- you need a zillion images to make this work, and they aren't available for unrestricted download anywhere
- there are no examples of what any of this actually has to look like if you DIY
- there are no pre-compiled image sets covering anything you actually care about
- it takes forever just to test whether you even used proper settings
- no one can give you usable advice, because whatever you are doing is hyper-specific in application, while caffe conversely is hyper-general by design


That's just pretty frustrating and annoying. The problem here is: caffe is not a toy. But people love toys and learn really fast from using toys.

My rationale is that if I could somehow rig the setup/model into learning essentially nonsense (i.e. not distinguishing anything meaningful), just to get the impressive eyecandy output first, then I could go on from there and teach it something useful. That would obviously eat more computational power, but along the way I could also slowly learn more about using caffe properly.

Maybe that sounds entirely backwards, maybe it is impossible, what do I know; but it is just how any newcomer intuitively gains experience.


So it would be really nice and helpful if there were tutorials showing how you can in fact train a less-usable "nonsense" model, just for the purpose of getting some fast, pretty-looking results with the deepdream code or similar, on a mediocre home GPU/CPU. The faster anything works, the faster people can intuitively learn and get a feeling for the capabilities of the software. If you are supposed to wait 12-48 hours each time you run a simple test ... that's just inhumane to any good learning process.


Thank you for your attention and consideration.

Mario Klingemann

Jul 13, 2015, 11:06:15 AM
to caffe...@googlegroups.com
Well, I was where you are now about a week ago. I had wanted to dive into deep learning for a long time, and deepdream finally gave me the motivation to suffer through the horrible installation process. In the end, after a lot of googling, I got a running system.

I also thought that I'd need an expensive new machine with ideally several GPUs to train new models, but then someone pointed out to me that it's much more economical (at least if you just want to experiment for a while) to fire up an Amazon EC2 instance with GPU support and run caffe and ipython notebook there. It's a little bit of a hassle to set it all up, but there are some tutorials on how to do it. This one worked for me to get all the CUDA stuff running on EC2: https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN) And this explains how to get a notebook server running: https://gist.github.com/iamatypeofwalrus/5183133 Once you have this set up, you can open ipynb files running on your server in the browser and start writing and running python directly there.
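
Once the instance is up, it's worth doing a quick sanity check in the notebook before trying to train anything. A minimal sketch; the file paths are just placeholders for wherever your own deploy prototxt and caffemodel live:

    # Sanity check: pycaffe sees the GPU and can load a model.
    # Replace the paths with your own deploy.prototxt / .caffemodel.
    import caffe

    caffe.set_device(0)   # select the first GPU
    caffe.set_mode_gpu()

    net = caffe.Net('deploy.prototxt', 'bvlc_googlenet.caffemodel', caffe.TEST)
    print(net.blobs['data'].data.shape)   # input blob shape, e.g. (10, 3, 224, 224)

If that runs without errors, the CUDA side is fine and any later problems are in your own data or prototxt.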

I've started to train my own models for exactly the purpose of getting a different output for deepdream, but it's not that simple. The results depend a lot on what your categories look like and how they differ from each other. Sometimes I do see something that reminds me of something I've trained the model with, but overall the results are rather abstract and "convolutionary". Right now I'm also only trying this with models that have been trained for maybe 10000 iterations, so I can't say whether it starts looking more realistic after 100000 or 1000000 iterations.
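
In case it helps anyone trying the same thing: the part of the deepdream code you actually point at your own model is tiny. Here is a stripped-down sketch of its gradient-ascent step (no jitter or clipping), assuming a net loaded via pycaffe as above; 'inception_4c/output' is just the googlenet default, and with your own model you'd substitute one of its layer names:

    import numpy as np

    def make_step(net, end='inception_4c/output', step_size=1.5):
        # One gradient-ascent step: amplify whatever the chosen
        # layer responds to in the current input image.
        src = net.blobs['data']   # the input image lives in this blob
        dst = net.blobs[end]
        net.forward(end=end)      # activations up to the target layer
        dst.diff[:] = dst.data    # L2 objective: gradient = the activations themselves
        net.backward(start=end)   # backprop that "objective" down to the input
        g = src.diff[0]
        src.data[:] += step_size / np.abs(g).mean() * g   # normalized ascent step

Which layer you pick as 'end' seems to matter as much as the training data: lower layers give exactly those abstract "convolutionary" textures, higher layers give more object-like shapes.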

Mario

npit

Jul 14, 2015, 3:34:05 AM
to caffe...@googlegroups.com
Bumping your great post.