If anyone in the community would like to do this, we'd be happy to
provide help and support!
Best,
Fei-Fei
---------------------------------------------
Li, Fei-Fei Ph.D.
(publish under L. Fei-Fei)
Assistant Professor
Computer Science Dept.
Stanford University
353 Serra Mall, 2A Room 246
Stanford, CA 94305-9025
Tel: (650)725-3860
Website: http://vision.stanford.edu
---------------------------------------------
We have performed a study that goes in that direction.
We identify a prototype $\mu_s$ for every category $s$ in ImageNet by
selecting the image $I$ with the smallest sum of distances (SSD) to all
images within the category:
\mu_s = \arg\min_{I\in s} \sum_{I' \in s} D(I,I')
The SSD can also be considered a measure of the quality of the prototype $\mu_s$:
q(\mu_s) = \sum_{I' \in s} D(\mu_s,I').
We performed this study using GIST descriptors, a bag of SURF features,
and color histograms.
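The prototype selection described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: it assumes each image in a synset is already represented by a feature vector (GIST, a SURF bag of words, or a color histogram), and it uses squared Euclidean distance as an example choice for $D$.

```python
import numpy as np


def find_prototype(features: np.ndarray):
    """Pick the prototype of one category.

    features: (n_images, dim) array of per-image descriptors
              (e.g. GIST vectors) for a single synset.
    Returns (index of mu_s, quality q(mu_s)); lower quality = tighter category.
    """
    # Pairwise squared Euclidean distances D(I, I') between all images.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)        # shape (n, n)
    # For each candidate I, the SSD to all images I' in the category.
    totals = dists.sum(axis=1)
    # mu_s minimizes the SSD; that minimal SSD is the quality measure.
    proto_idx = int(np.argmin(totals))
    quality = float(totals[proto_idx])
    return proto_idx, quality
```

For example, with three 2-D descriptors `[[0, 0], [2, 0], [0, 2]]`, the first point has the smallest summed squared distance to the others, so it is returned as the prototype.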
If there is interest in this data, I could provide a list of the
prototype images and their quality measures for all synsets in
ImageNet (on the September 2009 release).
Best regards,
Thomas Deselaers and Vittorio Ferrari
--
http://thomas.deselaers.de
Dear All,

Out of curiosity (I don't mean to irritate your news group): how come industry, which spends quite some money on its products, doesn't use some small amount of color to identify each product?

That is to say, a 1-inch square, invisible to the naked eye, is placed on a product for a camera to identify. From identifying the square, the shape and also the orientation of the object can be found: detect the square, find its size and all proportions, then rotate, resize, etc. to find the location (viewing angle), if one wants location in addition to the object. Then one can use the industry datasheet to identify all qualities of the object.

Identifying the object first should give object localization without any use of the 1 inch^2 square. It actually costs less than .001% (I don't know the exact figure) to do so, especially when so much money is spent in the normal painting/dyeing of any object.
On a slightly broader note, we're definitely thinking hard about ways to
create an ImageNet community where we could provide a platform for
people to do creative and interesting things on the dataset.
So your work is a good motivation for us to jump start this!
If not earlier, we can chat about this at ECCV in a serious way.
Best,
Fei-Fei
Fei-Fei Li <feif...@cs.stanford.edu> wrote:
> Excellent! This is very interesting. Yes, we'd be interested in your data.
> We should talk (probably offline) of how to advertise this work of yours via
> the ImageNet web/dataset.
Our prototypes according to GIST descriptors are now online at
http://www.vision.ee.ethz.ch/~calvin/imagenet/prototypes.html
In the next couple of days we will also put the prototypes according
to SURF bag of words and color histograms online.
> If not earlier, we can chat about this at ECCV in a serious way.
Yes, this is great. We can talk at ECCV; both Vitto and I will be there.
Best,
thomas
--
http://thomas.deselaers.de
I was thinking of the paper of D. Lowe, with the SIFT algorithm. If one uses approximately 10,000 distinct frequencies to insert into objects (with the usual painting; every object does get some), then one can identify the actual items, with range and pointing direction as well, from the color placements. Isn't that good? I attach part of my proposed grad work, as I am looking to enter a normal program somewhere.

Gordon Chalmers