Visual signature for reducing image search space


Ben Leong

unread,
Jul 28, 2010, 6:51:43 PM7/28/10
to ImageNet Community
Hi,

I know this question has been raised briefly, but there seem to be no
answers given in the community so far.

Does ImageNet currently provide representative images for each concept
(say, the top K, with K = 5 or 20, determined by AMT voters and
confidence scores), or some representation in a feature space (e.g., an
average image)? For a given image, I am interested in efficiently
searching through all possible images to find its closest neighbor and
deduce the corresponding synset. As the number of images grows, this
could become rather expensive.
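[The two-stage lookup being asked about (compare against one representative per synset instead of every image) could be sketched as follows; the function, descriptor dimensions, and synset IDs here are all hypothetical, not part of any ImageNet API:]

```python
import numpy as np

def nearest_synset(query, prototypes, synset_ids):
    """Compare a query descriptor to one prototype descriptor per synset,
    returning the ID of the synset whose prototype is closest."""
    # prototypes: (S, d) array, one representative descriptor per synset
    dists = np.linalg.norm(prototypes - query, axis=1)  # Euclidean distance
    return synset_ids[int(np.argmin(dists))]

# Toy usage with random stand-in descriptors
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))
ids = ["n01440764", "n01443537", "n01484850"]  # illustrative WordNet-style IDs
q = protos[1] + 0.01 * rng.normal(size=8)      # query near the second prototype
print(nearest_synset(q, protos, ids))          # prints n01443537
```

This reduces the search from the number of images to the number of synsets; the result could then be refined within the winning synset if needed.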

Ben

Fei-Fei Li

unread,
Aug 7, 2010, 12:25:02 AM8/7/10
to imagenet-...@googlegroups.com
This is a very good question, and we'd also love to provide this
service; we have had it in mind. But currently we do not have the
bandwidth to take on this project.

If anyone in the community would like to do this, we'd be happy to
provide help and support!

Best,
Fei-Fei

---------------------------------------------
Li, Fei-Fei Ph.D.
(publish under L. Fei-Fei)
Assistant Professor
Computer Science Dept.
Stanford University
353 Serra Mall, 2A Room 246
Stanford, CA 94305-9025

Tel: (650)725-3860
Website: http://vision.stanford.edu
---------------------------------------------

Thomas Deselaers

unread,
Aug 9, 2010, 5:45:06 AM8/9/10
to imagenet-...@googlegroups.com, Fei-Fei Li, Vittorio Ferrari
Dear all,

We have performed a study in that direction.

We identify a prototype $\mu_s$ for every category $s$ in ImageNet by
selecting the image $I$ with the smallest sum of distances (SSD) to all
images within the category:

\mu_s = \arg\min_{I \in s} \sum_{I' \in s} D(I, I')

This SSD can also be taken as a measure of the quality of the prototype $\mu_s$:

q(\mu_s) = \sum_{I' \in s} D(\mu_s, I').


We performed this study using GIST descriptors, bag-of-SURF features,
and color histograms.
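[A minimal sketch of this medoid-style selection, assuming only that each image is represented by a fixed-length descriptor (e.g. GIST); this is an illustration, not the authors' code:]

```python
import numpy as np

def prototype_and_quality(features):
    """Pick the image whose summed distance to all images in the category
    is smallest (a medoid), and return that sum as its quality score."""
    # features: (n, d) matrix, one descriptor per image in the category
    diffs = features[:, None, :] - features[None, :, :]
    D = np.linalg.norm(diffs, axis=2)   # pairwise distances D(I, I')
    sums = D.sum(axis=1)                # sum over I' of D(I, I') for each I
    idx = int(np.argmin(sums))          # mu_s: the arg-min image
    return idx, float(sums[idx])        # lower q(mu_s) = tighter category

# Toy usage on random stand-in descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 16))
idx, q = prototype_and_quality(X)
```

Note the quadratic cost in the number of images per category; for large synsets one would compute the distance matrix in blocks or on a sample.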

If there is interest in this data, I could provide a list of the
prototype images and their quality measures for all synsets in
ImageNet (on the September 2009 release).

Best regards,
Thomas Deselaers and Vittorio Ferrari
--
http://thomas.deselaers.de


Gordon Chalmers

unread,
Aug 9, 2010, 12:23:31 PM8/9/10
to imagenet-...@googlegroups.com
 
All,

Technically speaking, any small amount of paint invisible to the naked
eye (with some 10,000 distinct variations of frequency, i.e.
emitting/absorbing properties) applied to any object should allow
immediate object identification.

Nothing spatial is needed; identification only. It wouldn't really cost
anyone any money to do so.
 
  Gordon Chalmers
 
 
 

 

Gordon Chalmers

unread,
Aug 9, 2010, 12:19:14 PM8/9/10
to imagenet-...@googlegroups.com
 
Dear All,

Out of curiosity (I don't mean to irritate your news group):

How come industry, which spends quite some money on its products,
doesn't use some small amount of color to identify each product?

That is to say, a one-inch square, invisible to the naked eye, is
applied to a product for a camera to identify. From identifying the
square, the shape and orientation of the object can be found: find the
square, find its size and proportions, then rotate, resize, etc. to
find the location (viewing angle), if one wants location in addition to
identity.

Then one can use the industry datasheet to identify all qualities of
the object.

Identifying the object first should allow localizing it without any use
of the one-inch square.

It would actually cost less than 0.001% (I don't know exactly) to do
so, especially when so much money is already spent on the normal
painting/dyeing of objects.
 

 
 

Fei-Fei Li

unread,
Aug 9, 2010, 1:14:38 PM8/9/10
to tho...@deselaers.de, imagenet-...@googlegroups.com, Vittorio Ferrari
Thomas and Vitto,
Excellent! This is very interesting. Yes, we'd be interested in your
data. We should talk (probably offline) of how to advertise this work of
yours via the ImageNet web/dataset.

On a slightly broader note, we're definitely thinking hard of ways to
create an ImageNet community where we could provide a platform for
people to do creative and interesting things on the dataset.

So your work is a good motivation for us to jump start this!

If not earlier, we can chat about this at ECCV in a serious way.

Best,
Fei-Fei


Ben Leong

unread,
Aug 9, 2010, 1:29:23 PM8/9/10
to imagenet-...@googlegroups.com, Fei-Fei Li, Vittorio Ferrari
That is great, Thomas. Could you please release the list and the
quality measures, along with the name of the paper we can cite?

Gordon Chalmers

unread,
Aug 9, 2010, 1:35:20 PM8/9/10
to imagenet-...@googlegroups.com
 
Perhaps all of you didn't understand the comment. I won't send any more
email today.

A frequency with some dye, accurate to 0.00001, allows one to find the
image in the INDUSTRY database of ALL objects. It actually makes all
image identification extremely easy.

It's like reading 'color' optically, except in a different frequency
range.

Have a nice day (only to academia).

Gordon Chalmers

unread,
Aug 9, 2010, 1:47:46 PM8/9/10
to imagenet-...@googlegroups.com

I apologize for the email. I was trying to learn SSD.

I noticed that there is something called a 'challenge' on your internet
page.

The vision identification that I propose to all of you does not ruin
the earlier work.

All the work on images (without applying any frequency dye) can still
be identified from the images without the frequency dye. So when the
dye is implemented, it won't wreck any of your work or whatever you do.

Like I said, a good day to all of you. Thanks for the ImageNet work.

Thomas Deselaers

unread,
Aug 10, 2010, 12:56:59 PM8/10/10
to Fei-Fei Li, imagenet-...@googlegroups.com, Vittorio Ferrari
Dear Fei-Fei, Dear all,

Fei-Fei Li <feif...@cs.stanford.edu> wrote:
> Excellent! This is very interesting. Yes, we'd be interested in your data.
> We should talk (probably offline) of how to advertise this work of yours via
> the ImageNet web/dataset.

Our prototypes according to GIST descriptors are now online at
http://www.vision.ee.ethz.ch/~calvin/imagenet/prototypes.html

In the next couple of days we will also put the prototypes according
to SURF bag of words and color histograms online.

> If not earlier, we can chat about this at ECCV in a serious way.

Yes, this is great. We can talk at ECCV; both Vitto and I will be there.

Best,
thomas
--
http://thomas.deselaers.de

Gordon Chalmers

unread,
Aug 12, 2010, 7:37:39 PM8/12/10
to imagenet-...@googlegroups.com
 
For some reason, I don't want anyone to thank me for the good work.

Please appreciate it instead.

Gordon


 

Gordon Chalmers

unread,
Aug 12, 2010, 7:36:54 PM8/12/10
to imagenet-...@googlegroups.com
 
I was thinking of the paper of D. Lowe, with the SIFT algorithm.

If one uses approximately 10,000 distinct frequencies to insert into
objects (with the usual painting; every object gets some), then one can
identify the actual items, with range and pointing direction also found
from the color placements.

Isn't that good? I attach part of my proposed grad work, as I am
looking to enter a normal program somewhere.
 
Gordon Chalmers
 
 


 
Attachment: adaptivevision.wps