http://www.pointclouds.org/documentation/tutorials/template_alignment.php#template-alignment
Cheers,
Radu.
--
Point Cloud Library (PCL) - http://pointclouds.org
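For anyone skimming the thread, that tutorial's pipeline boils down to roughly the sketch below: estimate normals, compute FPFH features for both clouds, then run SAC-IA. The radii, iteration count, and correspondence distance are placeholder values to tune; setInputSource()/setInputTarget() are the current method names (older PCL releases used setInputCloud() for the source), and the tutorial itself wraps the same calls in a small helper class.

#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;
typedef pcl::PointCloud<pcl::FPFHSignature33> Features;

// Normals + FPFH descriptors for one cloud; both radii are tunable guesses.
Features::Ptr
computeFeatures (const Cloud::Ptr &cloud)
{
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);
  ne.setRadiusSearch (0.02);   // 2 cm
  ne.compute (*normals);

  Features::Ptr features (new Features);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud (cloud);
  fpfh.setInputNormals (normals);
  fpfh.setRadiusSearch (0.05); // must be larger than the normal radius
  fpfh.compute (*features);
  return (features);
}

// SAC-IA: find the rigid transform that best aligns the template to the target.
Eigen::Matrix4f
alignTemplate (const Cloud::Ptr &templ, const Cloud::Ptr &target)
{
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac_ia;
  sac_ia.setInputSource (templ);
  sac_ia.setSourceFeatures (computeFeatures (templ));
  sac_ia.setInputTarget (target);
  sac_ia.setTargetFeatures (computeFeatures (target));
  sac_ia.setMinSampleDistance (0.05f);
  sac_ia.setMaxCorrespondenceDistance (0.01f * 0.01f); // squared distance
  sac_ia.setMaximumIterations (500);
  Cloud aligned;
  sac_ia.align (aligned);      // check getFitnessScore() to judge the match
  return (sac_ia.getFinalTransformation ());
}

The returned transform is only a rough initial alignment; the usual next step is to refine it with ICP.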
>>>>>>> using the Kinect, OpenNI & OpenCV. I plan on releasing more
Although the code is still in MATLAB, the results of this algorithm
are very impressive:
http://www.youtube.com/watch?v=1GhNXHCQGsM
http://info.ee.surrey.ac.uk/Personal/Z.Kalal/
Would love to see this algo in C/C++ :)
--
Regards,
buZz
--
Are you aware of the
http://sourceforge.net/projects/qopentld/ effort (a C++ implementation
of TLD using Qt and OpenCV)?
Regards,
Olivier
On 08/18/2011 08:43 AM, jm...@monkeystable.com wrote:
> Another update for anyone still following: I've been doing a
> bunch of research and no coding for the past couple of weeks. Here
> are the conclusions I've come to so far.
>
> Reverse engineering the OpenTLD algorithm is not a viable option, since
> the OpenTLD algorithm is designed to track & recognize only one face
> (or object) per Recognizer. This means that each user would have
> their own Recognizer, which would actually consist of 2 detectors: one
> for positive detections and another for negative ones. I think that this
> is simply too resource-intensive and creates too much memory overhead,
> since all of the images are loaded while the system is in use and
> each Recognizer is also learning independently of the other Recognizers.
>
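To make that objection concrete, here is a rough sketch of the structure being described; the names are mine, not OpenTLD's, and real TLD detectors store fern/feature statistics rather than raw patches. The point is simply that every user drags along two detectors' worth of in-memory training data, and every learn() call runs once per user:

#include <map>
#include <string>
#include <vector>
#include <opencv2/core/core.hpp>

// One detector = one pile of training patches, all resident in memory.
struct Detector
{
  std::vector<cv::Mat> patches;
  void learn (const cv::Mat &patch) { patches.push_back (patch.clone ()); }
};

// One Recognizer per user, holding a positive and a negative detector.
struct Recognizer
{
  Detector positives;   // detections that were this user
  Detector negatives;   // detections that were not
};

// Memory and learning cost grow linearly with the number of users.
std::map<std::string, Recognizer> recognizers;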
> 2D Face Recognition techniques seem to be either inaccurate or to need
> high-resolution images, which the Kinect can't provide effectively. It
> also seems that the most popular 2D technique is the Eigenfaces
> method, which was the method I used in my first implementation. This
> method takes the images it learns and uses them to create a set
> of average images to use for comparisons. This means that the more
> images you add to the training set, the less accurate your results
> become, since the averages across the images get more and more
> diluted. A possible solution to this may be to implement a system
> similar to OpenTLD, where each user has their own Eigenfaces
> Recognizer, but that seems like overkill.
>
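For what it's worth, Eigenfaces is essentially PCA over the flattened training images, so the "average images" are the PCA mean plus the top eigenvectors. A toy sketch with OpenCV's cv::PCA (illustrative only, not the earlier implementation mentioned above) shows where the averaging happens:

#include <vector>
#include <opencv2/core/core.hpp>

// Build an Eigenfaces-style model: every face image becomes one row of a
// float matrix, and PCA extracts the mean face plus the top eigenfaces.
cv::PCA
buildEigenfaceModel (const std::vector<cv::Mat> &faces, int numComponents)
{
  cv::Mat data ((int) faces.size (), (int) faces[0].total (), CV_32F);
  for (size_t i = 0; i < faces.size (); ++i)
    faces[i].reshape (1, 1).convertTo (data.row ((int) i), CV_32F);

  // The mean face (and every eigenface) is averaged over *all* training
  // images, which is where adding more faces dilutes the model.
  return cv::PCA (data, cv::Mat (), cv::PCA::DATA_AS_ROW, numComponents);
}

// Recognition then projects a probe image with pca.project() and compares
// coefficient vectors, e.g. nearest neighbour under cv::norm().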
> It appears that 3D Face Recognition is the way to go. I have seen
> some videos which claim to do Expression Recognition and
> Facial Feature tracking with the Kinect & PCL. The users are
> generally pretty close to the camera in these videos, so I'm not
> completely sure that PCL is a viable option, but it seems to be the
> best route so far. That said, the main hurdle I am currently dealing
> with is the lack of a PCL .NET wrapper. Can any PCL users tell me
> whether I will need to switch to C++ if I want to use PCL?
>
> Thanks - Jman
>
> On Aug 2, 10:34 am, "j...@monkeystable.com"<j...@monkeystable.com>
> wrote:
>> Radu - Can PCL do mesh deformations or would I need to use another
>> library to create a mesh from the Kinect's depth data?
>>
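On the meshing half of that question, PCL's surface module can triangulate a Kinect-style cloud; mesh deformation, as far as I know, is not something PCL provides, so that part would need another library. A sketch using pcl::GreedyProjectionTriangulation, with placeholder parameter values that would need tuning for real depth data:

#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>
#include <pcl/PolygonMesh.h>

// Triangulate a point cloud into a mesh via greedy projection.
pcl::PolygonMesh
meshFromCloud (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  // The triangulation needs per-point normals.
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);
  ne.setKSearch (20);
  ne.compute (*normals);

  // Concatenate XYZ coordinates and normals into one cloud.
  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals
    (new pcl::PointCloud<pcl::PointNormal>);
  pcl::concatenateFields (*cloud, *normals, *cloud_with_normals);

  pcl::search::KdTree<pcl::PointNormal>::Ptr tree
    (new pcl::search::KdTree<pcl::PointNormal>);

  pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
  gp3.setSearchRadius (0.025);          // max edge length (meters)
  gp3.setMu (2.5);                      // neighbour distance multiplier
  gp3.setMaximumNearestNeighbors (100);
  gp3.setSearchMethod (tree);
  gp3.setInputCloud (cloud_with_normals);

  pcl::PolygonMesh mesh;
  gp3.reconstruct (mesh);
  return (mesh);
}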
>> I've started reading about a 3D Face Recognition technique which (i
>> think) works as follows:
>> * A Database of 3D Face Meshes is shipped with the product
>> - When a New Face is Learned
>> 1. Depth Data & Label of the New Face are obtained
>> 2. Depth Data is converted to a Mesh
>> 3. Closest Match to New Face Mesh in Face Database is found and
>> recorded
>> 4. Closest Match Face is loaded into the Face Signature Mesh
>> 5. The Face Signature Mesh is deformed toward the New Face Mesh until
>> a Minimum Cohesion is met (i.e. the difference between the 2 meshes
>> falls below a certain threshold)
>> 6. The Database IDs of the Face Meshes used, combined with their
>> weights, become the New Face's Signature, which is stored in the
>> Recognition Database with the New Face's Label
>> - When a Face is Recognized
>> 1. Depth Data of Face is Obtained
>> 2. Depth Data is converted to a Mesh
>> 3. Steps 3 to 5 of the Learning Process are repeated
>> 4. The Database IDs of the Face Meshes, combined with their weights,
>> are made into the Face's Signature
>> 5. The Recognition Database is checked for the Face's Signature
>> (within a certain threshold; see the sketch after this list)
>>
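If it helps to see steps 4 and 5 in code, the matching might look something like the sketch below. Everything in it (the Signature typedef, the distance function, the threshold) is a guess at one way to realize the description above, not an existing API:

#include <cmath>
#include <map>
#include <string>

typedef std::map<int, float> Signature;  // database mesh ID -> blend weight

// Euclidean distance in weight space over the union of mesh IDs;
// an ID missing from one signature counts as weight 0.
float
signatureDistance (const Signature &a, const Signature &b)
{
  Signature diff (a);
  for (Signature::const_iterator it = b.begin (); it != b.end (); ++it)
    diff[it->first] -= it->second;
  float sum = 0.0f;
  for (Signature::const_iterator it = diff.begin (); it != diff.end (); ++it)
    sum += it->second * it->second;
  return (std::sqrt (sum));
}

// Step 5: find the closest stored signature within the threshold;
// returns an empty label when nothing is close enough.
std::string
recognize (const Signature &query,
           const std::map<std::string, Signature> &recognitionDatabase,
           float threshold)
{
  std::string bestLabel;
  float bestDistance = threshold;
  for (std::map<std::string, Signature>::const_iterator it =
         recognitionDatabase.begin (); it != recognitionDatabase.end (); ++it)
  {
    float d = signatureDistance (query, it->second);
    if (d < bestDistance)
    {
      bestDistance = d;
      bestLabel = it->first;
    }
  }
  return (bestLabel);
}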
>> I'm hoping that the 3D Face Recognition techniques will be more
>> accurate than the 2D techniques. Now that I write out the process,
>> though, it seems like it may be a bit processor & memory intensive.
>
> ...
>