Kinect Depth image dataset for hand pose recognition

Zafar Ansari

Dec 3, 2012, 5:10:28 AM
to openn...@googlegroups.com
Hello all,
Can anyone point me to a depth image dataset that contains only hand poses from a known sign language (e.g. ASL)? If there are depth datasets like the ones at http://www.iis.ee.ic.ac.uk/icvl/ges_db.htm and http://www.idiap.ch/resource/gestures/ (both of those are RGB datasets), please reply as well. I am interested in static poses, preferably with the hand segmented above the wrist. I need these for my project on sign language recognition using the Kinect.
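[Editor's illustration: a minimal sketch of segmenting a hand from a raw Kinect depth frame, assuming the hand is the object nearest the camera — a common capture setup for static poses. The `segment_hand` helper and the 150 mm depth band are assumptions for illustration, not part of any dataset discussed in this thread.]

```python
import numpy as np

def segment_hand(depth_mm, band_mm=150):
    """Keep only pixels within band_mm of the closest surface.

    Assumes the hand is the object nearest the camera (hypothetical
    helper; band_mm is an illustrative guess, not a published value).
    """
    valid = depth_mm > 0                 # Kinect reports 0 where it has no reading
    if not valid.any():
        return np.zeros_like(depth_mm, dtype=bool)
    nearest = depth_mm[valid].min()      # closest valid surface to the sensor
    return valid & (depth_mm <= nearest + band_mm)

# Toy 4x4 depth frame (millimetres); the 800-850 mm region plays the hand.
frame = np.array([[0,    2000, 2000, 0],
                  [800,  820,  2000, 2000],
                  [810,  850,  2000, 0],
                  [0,    2000, 2000, 2000]])
mask = segment_hand(frame)
print(mask.sum())  # 4 pixels lie within 150 mm of the nearest point
```

In practice one would also cut the mask at the wrist, e.g. using the hand position reported by a tracker, but the depth-band threshold alone already separates a raised hand from the body behind it.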

Vahag

Dec 4, 2012, 12:48:47 AM
to openn...@googlegroups.com
Hi Zafar,
The only web resource where I have found a big dataset of depth images is http://gesture.chalearn.org/data
I think they'll have depth images of ASL too, but you'll need to have a look.

Vahagn

Zafar Ansari

Dec 5, 2012, 7:26:58 AM
to openn...@googlegroups.com
Thanks Vahag, but this dataset has very few samples of ASL; it is a more general-purpose dataset. I am now thinking of making a dataset of my own. :)

erayberger

Dec 5, 2012, 5:26:42 PM
to openn...@googlegroups.com
Hi Zafar,

We have already done what you need. It's included in SigmaNIL framework, which will be available in a few days.
You can see a video of how it works here http://vimeo.com/45073111

The SigmaNIL framework also includes a tool for creating your own datasets and defining your own custom hand-shape recognition targets. This tool will also be in the package.

Best,

Eray Berger

Zafar Ansari

Dec 5, 2012, 11:06:51 PM
to openn...@googlegroups.com
The video is very promising indeed. I am excited and have a few queries.
Am I correct in assuming that you are using a machine learning framework for classification? Have you published your findings? Are you using OpenNI framework for skeleton tracking? Have you also looked at two-handed signs?

Eray BERGER

Dec 10, 2012, 5:20:57 AM
to openn...@googlegroups.com
Hi Zafar,

Sorry for the late response. Yes, we're using machine learning algorithms (our own implementation). The tool I mentioned is mainly a training tool that creates data for real-time classification.
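[Editor's illustration: SigmaNIL's actual algorithm is a private implementation, as stated above. Purely to show the kind of pipeline such a training tool could feed, here is a nearest-neighbour classifier on flattened binary hand masks — every name and shape below is hypothetical.]

```python
import numpy as np

def nearest_label(train_masks, train_labels, query_mask):
    """Return the label of the training mask with the smallest
    Hamming distance to the query (illustrative, not SigmaNIL's method)."""
    flat = query_mask.ravel()
    dists = [(m.ravel() != flat).sum() for m in train_masks]
    return train_labels[int(np.argmin(dists))]

# Two toy 2x2 "hand shapes" standing in for ASL poses.
fist = np.array([[1, 1], [1, 1]])
point = np.array([[1, 0], [1, 0]])
labels = ["fist", "point"]

query = np.array([[1, 1], [0, 1]])  # 1 bit from "fist", 3 bits from "point"
print(nearest_label([fist, point], labels, query))  # fist
```

A real system would of course match on larger masks or extracted features and need many training samples per pose, but the train-then-match loop is the same shape.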

Yes, we're currently using OpenNI and NiTE for skeleton tracking and hand tracking. There's also a KinectSDK option in SigmaNIL, so you can choose to make it work with KinectSDK as well. We checked two-handed signs and it works, but to make it work with the library we had to change the hand-tracking handling (something like: when the two hands come close together, interpret that image as a single shape, etc.).
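[Editor's illustration: the merging heuristic described above can be sketched as a simple distance check on the two tracked hand positions. The `merged_hand_box` helper, the (x, y, z) inputs, and the 120 mm threshold are all illustrative assumptions, not SigmaNIL's actual handling.]

```python
import numpy as np

def merged_hand_box(hand_a, hand_b, merge_dist_mm=120):
    """If the two tracked hand positions are closer than merge_dist_mm,
    return one bounding region covering both; otherwise return None.

    hand_a/hand_b are (x, y, z) positions in millimetres, e.g. as a
    hand tracker would report them; the threshold is an illustrative guess.
    """
    a, b = np.asarray(hand_a, float), np.asarray(hand_b, float)
    if np.linalg.norm(a - b) >= merge_dist_mm:
        return None                       # far apart: treat as two separate shapes
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    return lo, hi                         # close together: one combined region

print(merged_hand_box((0, 0, 800), (100, 30, 810)) is not None)  # True: merged
print(merged_hand_box((0, 0, 800), (400, 0, 800)))               # None: separate
```

The returned region would then be cropped from the depth frame and classified as a single two-handed shape instead of two independent hands.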

I'll let you know about the release soon.

Best,

Eray


--
You received this message because you are subscribed to the Google Groups "OpenNI" group.
To view this discussion on the web visit https://groups.google.com/d/msg/openni-dev/-/q6ijl9iNO88J.
