RealMotion - Face Recognition with OpenNI & OpenCV


tempm...@mailinator.com

Jul 19, 2011, 10:39:28 PM
to OpenNI
I have gotten a rough tech demo of a Face Recognition System working
using the Kinect, OpenNI & OpenCV. I plan on releasing more
information (and possibly source code) once I polish the system a bit
more. I currently have Face Detection running on the entire RGB image
feed, which slows the system down to around 15 fps. I want to try to
optimize a subsystem for faces which will hopefully be able to
maintain 30 fps.

Anyway, here's a little demo of me setting up and using the system on
myself: http://www.youtube.com/watch?v=o3harjO7f_E

Thanks for watching - Jman

Joshua Blake

Jul 19, 2011, 11:21:00 PM
to openn...@googlegroups.com
Looks pretty interesting, although I'd hope to see a follow-up video where you learn at least 2 or 3 faces and recognize them independently.
 
It also seems pretty dark in your room. Maybe better lighting would improve reliability?
 
Are you planning to make it only analyze the pixels around where a head joint is located on a player? It should definitely speed up with subregions.
 
I'll also say that .NET code is fine for me. :)  Will you post it to github under BSD/Apache 2 or similar?
 
Thanks,
Josh

---
Joshua Blake
Microsoft Surface MVP
OpenKinect Community Founder http://openkinect.org

(cell) 703-946-7176
Twitter: http://twitter.com/joshblake
Blog: http://nui.joshland.org
Natural User Interfaces in .NET book: http://bit.ly/NUIbook






--
You received this message because you are subscribed to the Google Groups "OpenNI" group.
To post to this group, send email to openn...@googlegroups.com.
To unsubscribe from this group, send email to openni-dev+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/openni-dev?hl=en.


jm...@monkeystable.com

Jul 20, 2011, 12:16:02 AM
to OpenNI
I am definitely planning on a follow up video with more people,
unfortunately there was no one else available when I recorded. I have
tested 2 faces and they work very well, I hope to have 3 or 4 in the
next video.

Yeah, my room doesn't have good lighting at all. I'm going to try to
fix that for the next video too.

The optimization I had in mind for the Face subsystem is to only
analyze the pixels which a user occupies. I think there may be
functionality in OpenNI (and hopefully supported by the Kinect) to
help me determine these pixels. I'm not sure what kind of
functionality OpenKinect or the KinectSDK gives you, but if you're
able to get the skeleton set up before the face recognition, then I
think that using the location of the head joint would be a great
optimization. I hope to use my Face Recognition to load the Skeleton
Configuration data for a User automatically, so that they don't need
to use the calibration pose - which means I can't use joint locations.

Licensing... I'll need to be reading up on my options there. That
being said, I will likely release the source code in some form.

Thanks for voicing your interest - Jman


MichaelK

Jul 20, 2011, 4:33:40 AM
to OpenNI
I don't know exactly what you are doing in the code, but I'll tell you
what I would do to optimize the performance.

We get an event that is called when a new user is found. When this
event fires, I would load the calibration data for this user, so
the user is automatically calibrated. Then I would check the position
of the Head joint and do some template matching. If the user's face is
known, I would save the association between this skeleton and the
username. That way we know this skeleton is that user - no more
matching is needed! If the user is not recognized, I would save his
face to make sure he is recognized the next time.

The framerate should not drop, because once the user is in the
scene no more matching has to be done ;) Only when a user enters the
scene could the framerate dip.
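
The recognize-once idea above can be sketched in a few lines. This is a hedged illustration in Python; `recognize_face` and the placeholder naming scheme are stand-ins, not a real OpenNI API:

```python
# Cache of skeleton id -> username, filled once per new-user event.
known_users = {}

def on_new_user(skeleton_id, face_image, recognize_face):
    """Run the expensive face match a single time, when the tracker
    reports a new user, then remember the result."""
    name = recognize_face(face_image)
    if name is None:
        # Unknown face: enroll it under a placeholder so the user
        # can be recognized the next time they appear.
        name = "user-%d" % skeleton_id
    known_users[skeleton_id] = name
    return name

def lookup(skeleton_id):
    """Per-frame path: no matching at all once the user is cached."""
    return known_users.get(skeleton_id)
```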


jm...@monkeystable.com

Jul 20, 2011, 10:04:19 AM
to OpenNI
Hehehe, I guess great minds think alike. What you described is
essentially the same process I had planned on implementing. My only
question is this: how would you load the calibration data for the new
user if you don't know who the user is yet (you loaded the calibration
data before the face recognition is performed)? I want to avoid
making Users who have already gone through the skeleton calibration
process do the calibration pose again. So I don't think I will be
able to use the Head joint for the location of the face, since no
calibration data will be loaded when I am running the face
recognition.

I should also point out that the Face Recognition part of the system
doesn't cause much lag, since the Face Recognition is run on a
cropped image of the User's face. The lag is actually caused by the
Face Detection part of the system, because it runs on the entire
640x480 image with a minimum face size of 1/12th the size of the
entire image. This gives detection coverage for distances between
0 and 8 feet from the Kinect. I tried using 1/8th the size of the
entire image, which is what OpenCV recommends for real-time detection,
but this shrank the detection coverage to about 0 to 5 feet from the
Kinect, which I wasn't happy with.
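
The coverage trade-off described above follows directly from the minimum-size parameter. As a quick sketch (the pixel arithmetic is mine; the distance figures are from the post):

```python
FRAME_W, FRAME_H = 640, 480

def min_face_px(fraction):
    """Smallest face (w, h) in pixels the detector will report when
    the minimum face size is 1/fraction of the frame."""
    return FRAME_W // fraction, FRAME_H // fraction

# 1/12th of the frame: faces down to ~53x40 px, i.e. roughly
# 0-8 feet of coverage from the Kinect.
assert min_face_px(12) == (53, 40)
# 1/8th of the frame (OpenCV's real-time recommendation): ~80x60 px
# minimum, shrinking coverage to about 0-5 feet.
assert min_face_px(8) == (80, 60)
```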

MichaelK - I would love the opportunity to pick your brain a little
regarding your Custom Gesture system in Unity, if you're able to
allow it. Once I polish up the Face Recognition System, the next
module to work on will be Custom Gestures, and your system seems
very robust and easy to use. I noticed you made your system for a
company though, so I understand if you need to keep it under wraps.

Thanks again guys - Jman

MichaelK

Jul 20, 2011, 12:29:48 PM
to OpenNI
I would use a standard calibration file saved to disk. You could use
a person who is ~1.7 meters tall as a reference; that calibration
should work for users between 1.5 m and 1.9 m.
That way you can use the Head joint and the user pixels to create an
image of the head boundaries: simply use the leftmost, topmost
and rightmost pixels as three of the boundaries. For the bottom I
would use the neck - but this should be checked, because I don't
know exactly how the Neck joint is positioned.
You could also use the head orientation to know exactly where the user
is looking and which image template you should use! I think that would
improve the correctness of the detection, because the face is not
always aligned frontally to the Kinect...
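
A sketch of that bounding-box construction in Python/NumPy. This is illustrative only; how the Neck joint projects to an image row is an assumption:

```python
import numpy as np

def head_box(user_mask, neck_row):
    """Bound the head using the user's pixels above the neck row:
    the leftmost, topmost and rightmost mask pixels give three sides,
    and the neck row gives the bottom."""
    rows, cols = np.nonzero(user_mask[:neck_row])
    if rows.size == 0:
        return None  # no user pixels above the neck
    return int(cols.min()), int(rows.min()), int(cols.max()), neck_row
```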

Just keep asking :)
I also want to implement a face detection, after I have finished the
gesture recognition.

Regards from Germany.


jm...@monkeystable.com

Jul 20, 2011, 9:52:05 PM
to OpenNI
A quick update - I've spent most of the day playing with training the
Face Recognition system to recognize multiple faces and to
discriminate between faces it recognizes and those it doesn't. I've
concluded that simply using the OpenCV Eigen Recognition system will
not suffice: there is either too much distortion when resizing the
face image to 100x100, or the Histogram Equalization is overpowered.
There are also serious problems with recognition as you add more
faces, even for a single User. I would have expected that the more
images of a User's face you have, the more accurate the application
should be, but with my implementation this doesn't seem to be the
case.

I am considering investigating other forms of Face Recognition. Any
suggestions will be appreciated.

Thanks - Jman

Radu B. Rusu

Jul 21, 2011, 2:25:35 AM
to openn...@googlegroups.com, jm...@monkeystable.com


http://www.pointclouds.org/documentation/tutorials/template_alignment.php#template-alignment

Cheers,
Radu.
--
Point Cloud Library (PCL) - http://pointclouds.org


MichaelK

Jul 21, 2011, 4:36:21 AM
to OpenNI
I would use the orientation value of the head to make sure I get the
front of the face. Then I would update the locally saved image when
the user gets nearer to the Kinect, because the image then has more
detail. And I would save only one image of the face. In the video you
are taking 4-5 pictures?! That's pretty inconvenient...


Bastiaan van den Berg

Jul 21, 2011, 7:39:00 AM
to openn...@googlegroups.com

Although the code is still in Matlab, the results of this algorithm
are very impressive:

http://www.youtube.com/watch?v=1GhNXHCQGsM
http://info.ee.surrey.ac.uk/Personal/Z.Kalal/

Would love to see this algo in C/C++ :)

--
Regards,
buZz

jm...@monkeystable.com

Jul 21, 2011, 9:55:04 AM
to OpenNI
I am already only detecting the fronts of faces using the Haar Object
Detector in OpenCV. The Haar Cascade is trained only to recognize
frontal faces.

I doubt this method would work very well. The EigenFaces system is
very sensitive to lighting, orientation and size. The reason I
take 5 pictures in the video is because of the different lighting and
face sizes at different places in the room. Also, if you have only 1
image in your recognition database, then any detected object passed
through the Recognition System will be recognized as that 1 image,
because the EigenFaces algorithm is based on averages.

Also, while I will agree that manually taking the 5 pictures is
inconvenient, I would say it is not out of the question. When you
calibrate the Xbox for Face Recognition, the Xbox does essentially
the same thing, but with an automated training system. You are asked
to stand in certain places in the Kinect's visible area and match
certain poses; the entire calibration process takes about 1 to 2
minutes.

I am starting to look into 3D Face Recognition as an alternative
(although I may need to write my own implementation, which will be
tricky). According to the few articles I've read, it has a higher
accuracy rate than its 2D cousins.

Thanks again for all the feedback; it's good to see people interested
in the project.

- Jman

jm...@monkeystable.com

Jul 21, 2011, 10:10:49 AM
to OpenNI
Wow, this is awesome. I wonder if I can get the same results from a
Kinect. I also like how it learns what is NOT the object it is
looking for. This also seems very useful for finger gestures & poses.

I'm not too familiar with Matlab unfortunately, but I'm going to dive
into this project as soon as I can.

Thanks - Jman


Naëm Baron

Jul 22, 2011, 4:20:13 AM
to OpenNI
Really awesome!
It's all Matlab... dreaming of a C++ version (or even C#)!

Amir Hirsch

Jul 22, 2011, 6:30:21 AM
to openn...@googlegroups.com
Zdenek started a mailing list for OpenTLD: http://groups.google.com/group/opentld

porting to C++ is happening too: https://sourceforge.net/projects/qopentld/

--

jm...@monkeystable.com

Jul 25, 2011, 1:05:59 PM
to OpenNI
I took a break from Kinect Development over the weekend to relax, play
video games and get some drinking done. I spent a while thinking
about the future of the RealMotion Project and have decided that
having a single set of eyes on the source code is going to be
ineffective and simply doesn't make sense. The scope of the project
as a whole is very large and I am only one man. So I have decided to
release my current build under the GPL License. You can download it
from this URL: http://www.monkeystable.com/RealMotion/RealMotion-v0.01.zip

This release of RealMotion includes the RealMotion Library DLL &
Source Code and a Compiled Executable & Source Code of the Face
Recognition Application I used to make the video from above. As I've
mentioned, this release will require quite a lot of re-writing,
optimizing and expansion. But it is my hope that by getting more
people looking at the source code we will all be able to move forward
together.

Any and all suggestions are always appreciated. Thanks for your
feedback.

- Jman


jm...@monkeystable.com

Jul 28, 2011, 1:26:29 PM
to OpenNI
Another quick update - I've started to chip away at the OpenTLD MatLab
code. Progress is slow (sometimes crawling) at best; MatLab is fairly
cryptic for a newcomer, and I don't have money for a license to play
around with it, which makes things even harder. That being said, I've
read through about 85% of the Initialization Code and I have a pretty
good handle on the syntax, operators & special characters. I'm finally
getting to some of the real juicy bits of the code, currently looking
at Generating Features.

I am still debating whether I will use OpenTLD's detection algorithm
or OpenCV's EigenFaces. Either way I am pretty sure that I will need
to use more than one Recognizer Class. It seems that both OpenTLD &
OpenCV work best if they are trying to recognize one object per class,
so it may make sense to have a Recognizer Class for each User in the
Database. Either that or I need to find a completely different
recognition technique. *sigh*

- Jman


Olivier Aubert

Jul 28, 2011, 4:22:35 PM
to openn...@googlegroups.com
Hello

Are you aware of the http://sourceforge.net/projects/qopentld/ effort
(a C++ implementation of TLD using Qt and OpenCV)?

Regards,
Olivier

jm...@monkeystable.com

Aug 2, 2011, 10:34:29 AM
to OpenNI
Radu - Can PCL do mesh deformations or would I need to use another
library to create a mesh from the Kinect's depth data?

I've started reading about a 3D Face Recognition technique which (I
think) works as follows:
* A Database of 3D Face Meshes is shipped with the product
- When a New Face is Learned
1. The Depth Data & Label of the New Face are obtained
2. The Depth Data is converted to a Mesh
3. The Closest Match to the New Face Mesh in the Face Database is
found and recorded
4. The Closest Match Face is loaded into the Face Signature Mesh
5. The Face Signature Mesh is deformed toward the New Face Mesh until
a Minimum Cohesion is met (i.e. the difference between the 2 meshes is
below a certain threshold)
6. The Database IDs of the Face Meshes used, combined with their
weights, become the New Face's Signature and are stored in the
Recognition Database with the New Face's Label
- When a Face is Recognized
1. The Depth Data of the Face is obtained
2. The Depth Data is converted to a Mesh
3. Steps 3 to 5 of the Learning Process are repeated
4. The Database IDs of the Face Meshes, combined with their weights,
are made into the Face's Signature
5. The Recognition Database is checked for the Face's Signature
(within a certain threshold)
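
As a toy numerical sketch of steps 3 to 6: my own simplification below replaces the deform-until-cohesion loop with a one-shot least-squares fit over flattened vertex arrays, which is far cruder than real mesh deformation but shows the weighted-combination-as-signature idea:

```python
import numpy as np

def face_signature(db_meshes, new_mesh):
    """Weights that make the database meshes best approximate the
    new face mesh; the weight vector plays the role of the Signature."""
    A = np.stack([m.ravel() for m in db_meshes], axis=1)
    weights, *_ = np.linalg.lstsq(A, new_mesh.ravel(), rcond=None)
    return weights

def recognize(signature_db, signature, threshold):
    """Return the nearest stored signature within the threshold,
    or None if no stored face is close enough."""
    best = min(signature_db,
               key=lambda name: np.linalg.norm(signature_db[name] - signature))
    if np.linalg.norm(signature_db[best] - signature) < threshold:
        return best
    return None
```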

I'm hoping that the 3D Face Recognition techniques will be more
accurate than 2D techniques. Although now that I write out the
process, it seems like it may be a bit processor & memory intensive.

- Jman



jm...@monkeystable.com

Aug 18, 2011, 11:43:34 AM
to OpenNI
Another Update for anyone still following - Ok, I've been doing a
bunch of research and no coding for the past couple of weeks. Here
are the conclusions I've come to so far.

Reverse engineering the OpenTLD algorithm is not a viable option,
since the OpenTLD algorithm is designed to track & recognize only one
face (or object) per Recognizer. This means that each user would have
their own Recognizer, which would actually consist of 2 detectors: one
for positive detections and another for negative. I think that this
is simply too resource intensive and creates too much memory overhead,
since all of the images are loaded while the system is in use and
each Recognizer learns independently of the other Recognizers.

2D Face Recognition techniques seem to be either inaccurate or to
need high resolution images, which the Kinect can't provide
effectively. It also seems that the most popular 2D technique is the
EigenFaces method, which was the method I used in my first
implementation. This method takes the images it learns and uses them
to create a set of average images to use for comparisons. This means
that the more images you add to the training set, the less accurate
your results are, since the averages between the images become more
and more finely tuned. A possible solution may be to implement a
system similar to OpenTLD, where each user has their own EigenFace
Recognizer, but that seems like overkill.

It appears that 3D Face Recognition is the way to go. I have seen
some videos which claim to do Expression Recognition and Facial
Feature tracking with the Kinect & PCL. The users are generally
pretty close to the camera in these videos, so I'm not completely
sure PCL is a viable option, but it seems to be the best route so
far. That being said, the main hurdle I am currently dealing with is
the lack of a PCL .NET wrapper. Can any PCL users tell me whether I
will need to switch to C++ if I want to use PCL?

Thanks - Jman


Radu B. Rusu

Aug 18, 2011, 12:33:17 PM
to openn...@googlegroups.com, jm...@monkeystable.com
PCL is C++, yes. There are Python bindings in the works, but no C# or any other .NET wrappers at the moment.

Cheers,
Radu.
--
Point Cloud Library (PCL) - http://pointclouds.org


jm...@monkeystable.com

Aug 18, 2011, 5:00:20 PM
to OpenNI
Gah... I was hoping that wasn't the case and that I was just missing
the .NET wrapper. I'm reluctant to abandon .NET, since I already have
a nice code base set up.

Well, I have some experience with calling Unmanaged Code in C#. Do
any PCL Users see any major blockers for the development of a .NET
Wrapper for PCL?

I'm guessing that the general process for making a Wrapper is as
follows:
- Identify necessary public functions in Unmanaged Code to be called
from .NET
- Create Intermediary .NET Objects to represent the Unmanaged Data
Structures used in Unmanaged Functions
- Create Interop Functions for Unmanaged Functions
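
The thread is about .NET, but those three steps apply to any managed-to-unmanaged bridge. As a tiny cross-language illustration, here is the same pattern with Python's ctypes against the C math library - nothing PCL-specific, purely to show identify, describe types, then wrap:

```python
import ctypes
import ctypes.util

# Step 1: identify the unmanaged function to expose (here, libm's cos).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Step 2: describe the unmanaged data types so marshalling is correct.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# Step 3: hide the interop call behind an idiomatic wrapper function.
def cosine(x):
    return libm.cos(float(x))
```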

It's been a long time since I've called Unmanaged Code from .NET, any
pointers would be appreciated.

- Jman


Joshua Blake

Aug 18, 2011, 7:01:38 PM
to openn...@googlegroups.com
I saw a thread where someone was able to compile PCL as a C++/CLI assembly:

MichaelK

Aug 19, 2011, 5:42:09 AM
to OpenNI
Hi Jman,

I also used PCL in my implementation and it works pretty well. The
only problem is that the maximum distance for the user's face is
about 1 m :( And that is indeed not practical in real usage... But I
think 3D face recognition will only work at a small distance anyway,
because when the user is 2 m away from the Kinect, the depth image is
not good enough.


Muhammad Azeem Nawaz

Jul 12, 2012, 6:02:12 AM
to openn...@googlegroups.com
Hi Jman,
Hope you are doing well. I am currently building an application similar to the one you showed in the demo. I was wondering if you could share the code! It would help me understand it and save me time.

Hope to hear from you soon.

Thank You.

Regards,
-Azeem