A few new papers from Caffe authors on arXiv

Daniel Golden

Nov 24, 2014, 12:01:24 PM
to caffe...@googlegroups.com
Donahue, Jeff, et al. "Long-term Recurrent Convolutional Networks for Visual Recognition and Description." arXiv preprint arXiv:1411.4389 (2014).
http://arxiv.org/abs/1411.4389v2

Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully Convolutional Networks for Semantic Segmentation." arXiv preprint arXiv:1411.4038 (2014).
http://arxiv.org/abs/1411.4038

Stefano Fabri

Nov 24, 2014, 12:50:20 PM
to caffe...@googlegroups.com

Daniel Golden

Nov 24, 2014, 1:00:31 PM
to caffe...@googlegroups.com
Done!

Jason Yosinski

Nov 24, 2014, 3:27:44 PM
to Daniel Golden, caffe...@googlegroups.com
Oh well, while we're plugging papers made possible by Caffe...


How transferable are features in deep neural networks?
Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson
NIPS 2014, http://papers.nips.cc/paper/5427-how-transferable-are-features-in-deep-neural-networks
(see also arXiv preprint with supplementary: http://arxiv.org/abs/1411.1792)

</shameless>

Summary below.


Actually there are probably quite a few Caffe users going to NIPS in
two weeks. Anyone up for a meetup some evening there to get to know
other users?

cheers,
jason

Summary:

Many people have noticed that the first layers of neural nets trained
on images tend to produce Gabor features and color blobs, prompting
the suspicion that such features are generic to many image datasets
and tasks. But to what extent is this true? And to what extent are
higher layers generic?

In this study we measure the generality of features as the extent to
which they are transferable from one task to another, and in the
process come across a few interesting results (a rough sketch of the
transfer setup follows the list):

- Transferability is negatively affected by two distinct issues: not
only the specialization of higher layer neurons to their original
task, but also optimization difficulties encountered when chopping
neural nets in half, severing connections between co-adapted neurons.
- Which of these two effects dominates can depend on whether features
are transferred from the bottom, middle, or top of the network.
- Features in the middle of a network can transfer well to other
semantically similar tasks but much more poorly to semantically
distant tasks.
- We also observe a surprising effect that initializing a network
with transferred features from almost any number of layers can produce
a boost to generalization that lingers even after extensive
fine-tuning to the target dataset.
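
For anyone curious, the basic "transfer the first n layers" setup is
easy to reproduce in Caffe. Below is a minimal pycaffe sketch, not our
exact scripts: the solver/model file names are placeholders, and
whether the copied layers are frozen or fine-tuned is controlled by the
per-layer learning rates in the target prototxt.

import caffe

caffe.set_mode_gpu()

# Solver for the *target* task. In the target train prototxt, the first
# n layers keep the same names as in the base (source-task) net so that
# copy_from() matches their weights; layers above n are renamed so they
# start from random initialization. Setting a copied layer's learning
# rates to 0 in the prototxt freezes it instead of fine-tuning it.
solver = caffe.SGDSolver('transfer_solver.prototxt')  # placeholder name

# Copy weights for every layer whose name matches the base net.
solver.net.copy_from('base_task_A.caffemodel')        # placeholder name

# Train (fine-tune) on the target task.
solver.solve()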


---------------------------
Jason Yosinski, Cornell Computer Science Ph.D. student
http://yosinski.com/ +1.719.440.1357

Jeff Donahue

Nov 24, 2014, 5:12:12 PM
to Jason Yosinski, Daniel Golden, caffe...@googlegroups.com
Thanks for the plugs, Daniel, and thanks, Jason, for posting your awesomely useful work! I helped with a review of your paper and remember thinking what a great resource it was for anyone who wants to fine-tune an ImageNet-pretrained CNN -- i.e., every vision researcher/practitioner in 2014 :). Great to see that it got into NIPS -- well-deserved!

Evan and I and other Berkeley folks will be at NIPS; would be great to meet up at some point!

Jason Yosinski

Nov 24, 2014, 7:11:54 PM
to Jeff Donahue, Daniel Golden, caffe...@googlegroups.com
> I helped with a review of your paper and remember thinking what
> a great resource it was for anyone who wants to fine-tune an
> ImageNet-pretrained CNN -- i.e., every vision researcher/practitioner in
> 2014 :).

I hope so! It's a pretty simple experiment, right? I was just
surprised no one else had run it yet...

> Evan and I and other Berkeley folks will be at NIPS; would be great to meet
> up at some point!

Sounds good! Maybe we can coordinate on this list closer to then.

jason


---------------------------
Jason Yosinski, Cornell Computer Science Ph.D. student
http://yosinski.com/ +1.719.440.1357


Evan Shelhamer

Nov 26, 2014, 6:14:18 PM
to Jason Yosinski, Jeff Donahue, Daniel Golden, caffe...@googlegroups.com
Thanks for the mentions, Daniel, and thanks, Jason, for the fine-tuning analysis.

I agree that it'd be great to have a meetup at NIPS! Let's start a thread the weekend before -- Montréal's a fine city for caffeine.

Evan Shelhamer
