Fwd: Regarding GSoC project "Control of simulations with gesture input"


Nathan Kimball

Mar 11, 2015, 11:13:39 AM
to cc-dev...@googlegroups.com, Ankit Bansal, Piotr Janik
Hi,

This is my first time as a GSoC mentor. In the past couple of weeks, I have received two inquiries about the Control of Simulations with Gesture Input project that were sent directly to me. I responded directly to the candidates, who have taken up the work earnestly. Shortly afterward, I included my co-mentor Piotr Janik in my responses.

It would have been better if these conversations had taken place on the more public cc-developers@googlegroups.com list. I'm sorry the procedure was not clear to me from the beginning. So, to correct the situation, I am forwarding the conversations I have had with the two applicants to the list, and will continue them on the list.

This is the second conversation.

Best,  -Nathan Kimball

Forwarded conversation
Subject: Regarding GSoC project "Control of simulations with gesture input"
------------------------

From: Ankit Bansal <ankitba...@gmail.com>
Date: Fri, Mar 6, 2015 at 2:15 PM
To: nkim...@concord.org


Sir

I am Ankit Bansal from the Birla Institute of Technology and Science, India. I am a computer science student with a keen interest in research and development in the field of HCI. In this regard, I am working on a social outreach research project that aims to make the lives of autistic children simpler and better.

My project centers on creating an immersive environment for children with autism, using devices like the Kinect, smartphones, and tablets to help them learn and grow. (More details can be found at http://www.bits-pilani.ac.in/pilani/ProjectCommunicate/home).

Through this project, I have gained a good grasp of the basics of making visually stimulating, engaging interactive games, as well as of the basics of augmented reality.

One of my projects, a Kinect-based painting app, was one of the winners of a hackathon conducted by Microsoft. Being a painting app, control was an important aspect I needed to take care of, so gesture recognition formed an integral part of the application.

I also have experience using JavaScript and jQuery, from developing a website.

Sir, I believe that with my prior experience and knowledge (of designing attractive interactive games), my technical skills (familiarity with JavaScript, Python, C#, Kinect, etc.), and my willingness to learn, I can make a valuable contribution to your projects.

Hence, Sir, I want to express my interest in working on the project "Control of simulations with gesture input".

Sir, I just need some help figuring out whether I am on the right track. For example, in your video on metal forces, if an individual makes a gesture that implies pressing, is the sense of pressure supposed to be derived from depth, or from height, or does that depend entirely on the developer?

I have started exploring various aspects of the project:

Phase 1: I have explored the gesture recognition capabilities of the Kinect and have experience working with them.

Phase 2: Once the gestures are decided, their recognition and the corresponding linking need to be worked on using Concord's APIs, which I'll be studying. I have already worked on a project that required real-time integration on screen based on gesture inputs.


Looking forward to your favorable response.

Thank You

Ankit Bansal
B.E.(Hons.) Computer Science
Minor: M.Sc.(Tech) Finance
Undergraduate Student
BITS Pilani

----------
From: Nathan Kimball <nkim...@concord.org>
Date: Fri, Mar 6, 2015 at 4:14 PM
To: Ankit Bansal <ankitba...@gmail.com>


Hello Ankit,

You certainly have very relevant experience for this project. I encourage you to apply for it. 

You seem to be familiar with the Kinect API. I want to know more about its ability to be used with web pages — in a browser, particularly with JavaScript. It would be helpful to look into Kinect JavaScript APIs.

We have not decided which input device to support. We are also looking at the Leap system, which I know has a JavaScript API. It would be interesting to compare the APIs of those two systems. You may not have access to the hardware, but I believe it is free to become a Leap developer through their website and download documentation and APIs.

You might also start looking at some of the Concord Consortium simulations that can be browsed from http://lab.concord.org/ . The simulations you see there are created using JSON data that calls a simulation engine. The original simulation engine was created in Java and can be seen here: http://mw.concord.org/modeler/ . Much of this engine has been ported to HTML5, but that work continues. Very likely we will be working with the HTML5 codebase for this project.

Thank you for writing.

-Nathan Kimball


----------
From: Nathan Kimball <nkim...@concord.org>
Date: Fri, Mar 6, 2015 at 4:17 PM
To: Piotr Janik <janikp...@gmail.com>


Hi,

Here is the second candidate. His first email should be under mine.

Best, -Nathan

----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Sat, Mar 7, 2015 at 1:08 AM
To: Nathan Kimball <nkim...@concord.org>


Sir

Thank You for your reply.

I believe that the Kinect does provide an API for use with web pages (HTML5 and JavaScript). I shall study the depth of that API and the functionality it provides, and try to get back to you, preferably today.

As far as the Leap Motion controller is concerned, I haven't had any exposure to it yet, but I have always found it an interesting concept. I'll look through their JavaScript APIs, go through both systems, and let you know which seems better to work with from the point of view of the project.

I did look at a few simulations, and I will go through the links you have provided to get a better view of which input device can be considered more intuitive. I have experience using JSON as well, as I worked on a Chrome application before, so I hope to be able to learn quickly.

I will try to get back to you as soon as possible after exploring all three areas in detail.

Thank You
--
Ankit Bansal
B.E.(Hons.) Computer Science
Minor: M.Sc.(Tech) Finance
Undergraduate Student
BITS Pilani

----------
From: Nathan Kimball <nkim...@concord.org>
Date: Sat, Mar 7, 2015 at 2:36 PM
To: Ankit Bansal <ankitba...@gmail.com>
Cc: Piotr Janik <janikp...@gmail.com>


Hello Ankit,

Your plan sounds good.

Our goal for gesture input to control simulations is a research tool, rather than a product for everyone to use (at least right away). We will use it with students in small interview situations to see what works best and whether the gestures themselves help kids learn, and we will modify it based on that research. Therefore, we want to work with a system that is the easiest to modify and that also has enough capability to actually capture the gestures we want. Those are the criteria we want to use to evaluate the two systems. If both systems meet those criteria, then we will look to other criteria. For instance, the Leap system is cheaper and simpler to set up. That has advantages, but for research it is not the most important consideration.

I am adding Piotr Janik to this email chain; he is our technical expert and the other GSoC mentor for this work. Please include him in your future emails.

-Nathan



----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Sun, Mar 8, 2015 at 1:38 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>


Sir

Thank you for the insight. I am in the process of exploring the various APIs for both input devices. As you pointed out, the gestures are meant to help kids learn. Since the two devices are very different, could you please tell me what kind of gestures we are trying to target? Would it require precision down to the finger level, i.e., what each finger does, and be limited to the hands alone? Or would gestures like jumping to apply force (for the metal-sheet pressure example I mentioned) also be incorporated? Basically, at this stage would we restrict ourselves to understanding students' hand gestures, or would we like to use other body parts as well?

This would be crucial in deciding which device to consider, to what extent, and what the hardware limitations are.
Looking forward to your response.

Thank You

----------
From: Nathan Kimball <nkim...@concord.org>
Date: Sun, Mar 8, 2015 at 10:40 AM
To: Ankit Bansal <ankitba...@gmail.com>
Cc: Piotr Janik <janikp...@gmail.com>


Hello Ankit,

Very good question. The emphasis right now should be on detailed tracking of the hands and fingers. I'm aware that the Kinect has capabilities for the whole body, and that could be very exciting to use; however, it may be impractical for most school situations. Clenched fist, flat hand, fingers together and separately, and motions of the hands together have all been imagined.

-Nathan

----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Sun, Mar 8, 2015 at 11:52 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>


Hello Sir

I have been through the APIs and capabilities of both sensors, and both seem viable options. As you mentioned a few mails back, we need to look into a few criteria before choosing one. In that regard, I have thought of a few aspects that can be considered.

Firstly, what is the age group of children we are looking to target? Going through the site, I gathered from the videos and animations that the student group being targeted is teenagers (secondary school / high school, i.e., around class 10 here in India).

The Leap Motion controller's architecture provides two spaces of interaction, the Hover Zone and the Touch Zone. As per my understanding of its gesture recognition capabilities, in order to interact with the simulations we will not require pinpoint accuracy, but rather a sense of how the animation is affected by the gesture as a whole (i.e., the act of two hands joining, rather than how accurately two fingers are in contact with each other). Given the age of the children involved, I believe the Leap Motion controller can be a good option, as it would allow taking more gestures into account and integrating them into the simulations.


If, however, we choose to extend this gesture-recognition, learning-by-doing scenario to younger children, I would prefer the Kinect. The reason is that the Leap Motion controller requires you to move your hands within a fixed area, while the Kinect allows full-body interaction. As I mentioned before, I have had a chance to work with autistic children and to compare their performance against that of other children. What we observed was that using the entire body seemed more intuitive and engaging for both groups. Another aspect is that younger children need more room for error than teenagers, which would limit our approach when incorporating gestures with the Leap Motion: the Leap has very high accuracy for skeletal tracking of fingers, and even a slight change can lead to a different interpretation.

I also wanted to mention that the Kinect v2 recognises three kinds of hand positions — flat hand, closed fist, and two fingers out — so using those from a distance can also be an option.

Looking forward to your reply.

Thank You


----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Tue, Mar 10, 2015 at 10:04 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>


Sir

I hope you have gone through my previous e-mail, where I tried to highlight a few aspects that can be taken into consideration when choosing an input device.

Meanwhile, I have been through the APIs of the Leap Motion controller and have tried to understand and work with them as much as I can without the hardware being available for now. Since I am familiar with the Kinect, going through the JS libraries was relatively quick. If we could make a final decision on the choice of hardware, I could begin to focus my efforts and try to make a few applications.

Looking forward to your response

Thank You


Piotr Janik

Mar 11, 2015, 1:08:25 PM
to cc-dev...@googlegroups.com, Ankit Bansal
Hi Ankit,

thanks for that comparison. I would be interested in technical details too. For example:
- API comparison, what gestures are supported out of the box
- what is necessary to access those devices in the web browser (browser plugins, drivers, web server?)
- flexibility, extensibility
- which device seems to be more popular in the web-based projects

There is another email thread on cc-developers group regarding this GSoC project where Rahul posted his comparison between LeapMotion and Kinect. You don't have to duplicate his work, but some comments or additional info can be useful. Especially taking into account that you've already worked with Kinect.

Could you present a piece of code that you are proud of? :) It can be a pull request, your own project, anything. It would be nice if there were some JavaScript in it, but it doesn't have to be. You mentioned the Kinect app that won the Microsoft hackathon — it sounds good, and I would be interested in seeing it too.

I think it's also relevant to this project, as at some point in time we will have to integrate gestures support with the Lab Framework. You can try to implement one of the listed features, it's a nice way to present your coding skills. I'm familiar with the Lab Framework, so I can help if you get stuck with something.

Thanks,
Piotr

Ankit Bansal

Mar 12, 2015, 1:08:38 PM
to cc-dev...@googlegroups.com, ankitba...@gmail.com
Hello

A few details are:
1) As Rahul mentioned, the Kinect has a few openly available gesture libraries, but none of them are official. We only get the coordinates of the various joints, and those can be fed into an algorithm to recognise a gesture. Just as the Kinect has reference points on the body, the Leap Motion has points on the hands, making it more accurate. The Leap provides a few gestures by default — circle, swipe, screen tap, and key tap — and new ones can be defined as well. If we are considering only hands and fingers, we can think of using the Leap Motion and pre-coding a few custom gestures.
2) In the case of the Kinect, we need to include the JavaScript library in our code and then start interacting. For an app to work, nothing additional needs to be installed on the host computer; simply connecting the Kinect configures the device on Windows, and it can then be used.
The same principle holds for Leap Motion development: HTML5 browsers can directly access the data from the controller and use it. There are quite a few interesting plugins for Leap JS, though not as many for the Kinect on the web, although some libraries are available.
3) For the web, the Leap Motion is generally considered better, as the person can interact with the browser while seated comfortably. But for our research purposes, there are a few points in favour of the Kinect, stated below.
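To make the browser-side handling concrete, here is a minimal sketch of how the Leap's built-in gestures could be routed to simulation actions. The action names and the mapping are hypothetical; the frame/gesture shape follows the public leap.js documentation, but I have not run this against a real device, so only the routing logic is exercised here with a mock frame.

```javascript
// Sketch: route Leap's built-in gestures (circle, swipe, screenTap, keyTap)
// to simulation actions. Action names (onSwipe, onCircle) are made up.
function makeGestureRouter(actions) {
  return function handleFrame(frame) {
    var handled = [];
    (frame.gestures || []).forEach(function (g) {
      if (g.type === 'swipe' && actions.onSwipe) {
        // direction[0] > 0 means a left-to-right swipe in Leap's coordinates
        actions.onSwipe(g.direction && g.direction[0] > 0 ? 'right' : 'left');
        handled.push(g.type);
      } else if (g.type === 'circle' && actions.onCircle) {
        actions.onCircle();
        handled.push(g.type);
      }
    });
    return handled;
  };
}

// With the real library this would be wired up via:
//   Leap.loop({ enableGestures: true }, handleFrame);
// Here we just exercise the router with a mock frame:
var log = [];
var route = makeGestureRouter({
  onSwipe: function (dir) { log.push('swipe:' + dir); },
  onCircle: function () { log.push('circle'); }
});
route({ gestures: [{ type: 'swipe', direction: [0.9, 0.1, 0] },
                   { type: 'circle' }] });
console.log(log.join(','));  // → swipe:right,circle
```

The point of the indirection is that the same router could sit behind either device: only the code that produces the gesture objects would change.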

As far as the comparison is concerned, the two primarily differ in scope. The Kinect has excellent skeletal detection, but as mentioned before, that extends to the entire body: the Kinect tracks 25 points on the body (and also recognises a seated skeleton), and with the locations of these points we can run algorithms to get particular results. For example, a dance-based application I have worked on triggers a particular event when the hands and legs are moved in a particular way. Using the Kinect can help if we want to extend the scope beyond just hands.
To put this in perspective, in an email a few days back Nathan wrote: "Clenched fist, flat hand, fingers together and separately, and motions of the hands together have been imagined." All of these are supported by the Kinect (Xbox One) sensor, and in addition, drag-and-drop can also be incorporated using them.
One thing that is important to note is that when we think of hand gestures for interacting with the simulation, it is more intuitive for a child to move the whole hand rather than make a few finger movements in a specific way (which would in any case have to be coded separately for the Leap Motion controller). For example, the gesture of push and pull requires hand movement, yes, but that hand movement can give equally acceptable results with both devices. What I am trying to say is that although the Leap Motion would give a higher degree of accuracy, flexibility and extensibility suffer.
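As an illustration of the joint-coordinate approach described above, here is a sketch of a "hands brought together" check. The joint names, the object shape, and the 10 cm threshold are my assumptions for illustration, not the official Kinect SDK identifiers; real sensor data would replace the mock skeletons below.

```javascript
// Sketch: detect a "hands brought together" gesture from skeleton joints.
// The Kinect v2 tracks ~25 joints per body; here we assume a plain object
// keyed by hypothetical joint names, each with {x, y, z} in metres.
function distance(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// True when the two hand joints are closer than `threshold` metres.
function handsTogether(joints, threshold) {
  threshold = threshold || 0.1;  // 10 cm; would need tuning with real data
  return distance(joints.handLeft, joints.handRight) < threshold;
}

// Mock skeleton frames standing in for real sensor data:
var apart = { handLeft:  { x: -0.3,  y: 1.0, z: 2.0 },
              handRight: { x:  0.3,  y: 1.0, z: 2.0 } };
var together = { handLeft:  { x: -0.02, y: 1.0, z: 2.0 },
                 handRight: { x:  0.02, y: 1.0, z: 2.0 } };
console.log(handsTogether(apart), handsTogether(together));  // → false true
```

The same distance-and-threshold pattern generalises to other joint pairs, which is what makes the whole-body approach easy to extend.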

As I have mentioned above, the Leap Motion supports a few hand-based gestures like circle, swipe, and screen tap, which depend on movement of the entire hand and can be replicated on the Kinect as well. Almost all the gestures we could consider using the Leap Motion controller (except detail down to each finger joint, which I believe won't be needed either, because too much accuracy would mean unintended actions and would be annoying for the child) can be imitated by the Kinect. The advantage now is scalability: if for research purposes we later decide to incorporate full-body movement — say, jumping to increase pressure — we can add that too :), without losing any functionality we already have. We can use the full arm to make a bigger swipe, which can lead to a drastically different action. Basically, the scope of gestures that can be incorporated increases manifold, and for research purposes that should be a great sign. :)

Even the voice commands that are supported can further help in interacting with the simulations. The painting app I mentioned uses both voice and gestures, giving a cool user experience.

The Leap Motion is a great device, no doubt, and it would serve our purpose, but it would restrict us in a way. In the off chance that finger tracking is essential to us, there are third-party libraries for the Kinect that allow us to track fingers as well, and we can always use them.
Basically, we would then be able to reproduce the Leap Motion controller's functionality and improve on it. We can keep these features in mind before settling on a particular device.

I shall send you some of the code I have written by tomorrow, as it is on the lab computer where I generally work, and Git is inaccessible from our hostel internet for technical reasons. I have the demo video for my paint application ready, which you can have a look at.


I'll also look into the lab framework and try to implement one of the features, and get back to you as soon as possible.

Looking forward to your reply. 

Thank You
Ankit Bansal
B.E.(Hons.) Computer Science
Minor: M.Sc.(Tech) Finance
Undergraduate Student
BITS Pilani

Ankit Bansal

Mar 13, 2015, 12:53:30 PM
to cc-dev...@googlegroups.com
Hello 

I hope you have been through the analysis, and that we will soon be able to choose a final input device for the project.

Here are a few of my code samples:
1) Doodle Art: Kinect Paint App: Main Code. 
One of the winners of Microsoft Code.Fun.Do Hackathon held at BITS-Pilani.




2) A few JavaScript (jQuery and Ajax) snippets written for pumpapp, a dynamic URL sharing tool:
       
Please let me know if you have any questions or need to see any more.

Thank You

Nathan Kimball

Mar 13, 2015, 4:46:41 PM
to cc-dev...@googlegroups.com, Piotr Janik
Hi Ankit,

Thanks very much for your device comparison and your code samples. You certainly have relevant experience. I will leave it to Piotr to give more insight on your code samples. In the comparison, you have outlined a classic trade-off between the two devices — extensibility (Kinect) vs. ease of use (Leap) — among other factors. I am presently checking with other project members (our researchers) to see whether the question of extensibility is important at this point in the research. I'm quite certain we won't be making a hard decision on the device right away.

But there are many things for you and the others interested in this topic to do. As Piotr mentioned, looking at our molecular modeling HTML5 libraries will be very useful.  From his earlier email:

I think it's also relevant to this project, as at some point in time we will have to integrate gestures support with the Lab Framework. You can try to implement one of the listed features, it's a nice way to present your coding skills. I'm familiar with the Lab Framework, so I can help if you get stuck with something.

Best, -Nathan

--
--
----
post message :cc-dev...@googlegroups.com
unsubscribe: cc-developer...@googlegroups.com
more options: http://groups.google.com/group/cc-developers?hl=en
---
You received this message because you are subscribed to the Google Groups "Concord Consortium Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cc-developer...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Ankit Bansal

Mar 14, 2015, 12:53:41 PM
to cc-dev...@googlegroups.com, Piotr Janik
Hello

There is another point that came to my mind that I wanted to share. From the perspective of a child who is already learning while seated in front of a screen, whether he interacts by clicking and typing or by moving his fingers above the Leap device, he is unlikely to notice any major difference, since he is working within that limited space either way (speaking from experience with a few children, from when we were selecting a device for our project). As opposed to this, standing up to learn and play would add a new perspective altogether. On extensibility, yes, the Kinect wins; but as far as ease of use is concerned, it would be roughly equal on both devices for the child, because he just has to make the appropriate gestures, which is independent of the device — only the distance would differ.
So apart from extensibility, engaging the child in the learning process also becomes an important aspect.

I will be looking at the molecular libraries and working on the problems mentioned, starting today.

Looking forward to your reply

Thank You


Nathan Kimball

Mar 15, 2015, 10:37:09 AM
to cc-dev...@googlegroups.com, Piotr Janik
Hi Ankit,

I think you have made a very interesting point regarding the activity of the student and the scale of the interaction.  It is a researchable point, and I have asked the research team to think about it. I don't have an answer to it right away.  (This happens to be the time of the national science teachers convention in the US, so a lot of people are busy or away.)

I encourage you to continue to look at how either one of the systems that we have discussed will interface with our models, as in previous posts.  This may well be the more difficult issue of HCI. 

Best, -Nathan Kimball



Ankit Bansal

Mar 15, 2015, 10:48:13 AM
to cc-dev...@googlegroups.com, Piotr Janik
Hello

Yes, I have been thinking about that aspect as well. But since that largely depends on the simulations, I will first view all of them and then start deciding how each can be approached with an input device.

Thank You

Ankit Bansal

Mar 16, 2015, 11:48:11 AM
to cc-dev...@googlegroups.com
Dear Piotr

I was going through the project idea that asks us to extend the Lab Framework to support a spinner UI, and I just need a little help getting started. I understand that any changes to the Lab Framework will have to be made in the src folder of the "lab" project, as that is the part that deals with the UI elements. I now know the code that needs to be added to incorporate this functionality.

I did try adding it to one of the online simulations and here is a screenshot. I changed one of the inputs to incorporate the spinner.

This can be done individually for every interactive that requires number inputs, but I just want to know whether that is the way to go about it (since this is interactive-specific) or whether you expect something else. Please do let me know so I can try again.

Thank You

On Mon, Mar 16, 2015 at 2:48 AM, Ankit Bansal <ankitba...@gmail.com> wrote:
Dear Piotr

I was going through the project idea that asks us to extend the Lab Framework to support a spinner UI, and I just need a little help getting started. I understand that any changes to the Lab Framework will have to be made in the src folder of the "lab" project, as that is the part that deals with the UI elements. I now know the code that needs to be added to incorporate this functionality. But the example given here already contains it (cml file).
Should I try to add support for the rest of the individual examples?

Thank You

On Sun, Mar 15, 2015 at 8:18 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello

Yes, I have been thinking of that aspect as well. But since that is largely dependent on the videos, I will first see all of them, and then start deciding on how each can be approached using an input device.

Thank You
On Sun, Mar 15, 2015 at 8:07 PM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

I think you have made a very interesting point regarding the activity of the student and the scale of the interaction.  It is a researchable point, and I have asked the research team to think about it. I don't have an answer to it right away.  (This happens to be the time of the national science teachers convention in the US, so a lot of people are busy or away.)

I encourage you to continue to look at how either one of the systems that we have discussed will interface with our models, as in previous posts.  This may well be the more difficult issue of HCI. 

Best, -Nathan Kimball



On Sat, Mar 14, 2015 at 12:53 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello

There is another point that came to my mind, and I just wanted to share it. From the perspective of a child, if he is already learning sitting on a chair in front of a screen, whether he does it by clicking or by moving his fingers above the leap device, he is likely not to observe any major difference if he anyway has to click and type or move and gesture in that limited space(speaking from experience with a few children, when we were trying to select a device for our project). As opposed to this, standing and learning and playing would add a new perspective altogether. Extensibility yes Kinect, but as far as ease of use is concerned, it would remain equal on both devices for the user/child because he just has to do appropriate gestures, which is independent of the device, only the distance would differ.
So apart from extensibility, engaging the child into the learning process also becomes an important aspect.

I will be looking at the molecular libraries and working on the problems mentioned from today itself.

Looking forward to your reply

Thank You
On Sat, Mar 14, 2015 at 2:16 AM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

Thanks very much for your device comparison and your code samples. You certainly have relevant experience. I will leave it to Piotr for more insight about your code samples. For the comparison, you have outlined a classic trade-off between the two devices, one of extensiblity (kinect) vs. ease of use (leap), among other factors.  I am presently checking with other project members (our researchers) to see if the question of extensibility is important at this point in the research.  I'm quite certain we won't be making a hard decision right away on the device.

But there are many things for you and the others interested in this topic to do. As Piotr mentioned, looking at our molecular modeling HTML5 libraries will be very useful.  From his earlier email:

I think it's also relevant to this project, as at some point in time we will have to integrate gestures support with the Lab Framework. You can try to implement one of the listed features, it's a nice way to present your coding skills. I'm familiar with the Lab Framework, so I can help if you get stuck with something.

Best, -Nathan
On Fri, Mar 13, 2015 at 12:53 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello 

I hope you have been through the analysis, and we are soon able to choose a final input device for the project.

These are a few of my codes:
1) Doodle Art: Kinect Paint App: Main Code. 
One of the winners of Microsoft Code.Fun.Do Hackathon held at BITS-Pilani.




2) A few javaScript (jQuery and Ajax) codes written for pumpapp, a dynamic URL sharing tool:
       
Please let me know if you have any questions or need to see any more.

Thank You

----------
----------
From: Nathan Kimball <nkim...@concord.org>
Date: Sun, Mar 8, 2015 at 10:40 AM
To: Ankit Bansal <ankitba...@gmail.com>
Cc: Piotr Janik <janikp...@gmail.com>


Hello Ankit,

Very good question.  The emphasis right now should be on detail tracking of the hands and fingers.  I'm aware that the Kinect has capabilities for the whole body, and that could be very exciting to use, however it may be impractical for most school situations. Clenched fist, flat hand, fingers together and separately, and motions of the hands together have been imagined. 

-Nathan

----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Sun, Mar 8, 2015 at 11:52 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>


Hello Sir

I have been through both APIs and capabilites of both the sensors, and both seem as viable options. As you had mentioned a few mails back, we need to look into a few criteria before choosing one. In regard to this, I have thought of a few aspects that can be considered.

Firstly, what is the age group of children we are looking to target? Going through the site, I gathered that based on the videos and animations, the student group being targeted here would be teenagers (secondary school / high school , i.e. around class 10 here in India)

The leap motion controller architecture is such that there are two spaces of interaction, Hover zone and Touch Zone as shown by the image below. As per my understanding of the gesture recognition capabilities, in order to interact with the videos, we will not require pin point accuracy but rather how the animation would be affected by the gesture as a whole (i.e. the act of two hands joining rather than how accurately two fingers are in contact with each other). Looking at the age of the children involved, I believe that the leap motion controller can be a good option as would allow taking more gestures into account, and integrating them into the simulations.


If we choose to extend this gesture recognition and learning-by-doing scenario to younger children however, I would prefer the Kinect. The reason for this being that the leap motion controller requires you to move your hands in a fixed area, while the Kinect allows full body interaction. As I had mentioned before, I have had a chance to work with autistic children, and use their performances against normal children as a measure. What was observed was that using their entire body seemed more intuitive and engaging for both kinds of children. Another aspect is that they require more room for error as compared to teenagers, which would limit our approach when incorporating gestures using leap-motion, as leap motion as very high accuracy for skeletal tracking of fingers, and even a slight change can lead to a different interpretation.

I also wanted to mention that the Kinect v2 can recognise three kinds of hand positions (flat hand, closed fist, and two fingers out), so using those from a distance is also an option.

Looking forward to your reply.

Thank You


----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Tue, Mar 10, 2015 at 10:04 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>


Sir

I hope you have gone through my previous e-mail, in which I tried to highlight a few aspects to consider while choosing an input device.

Meanwhile, I have been through the Leap Motion controller APIs and tried to understand and work with them as much as I can without the hardware being available yet. Since I am familiar with the Kinect, going through the JS libraries was relatively quick. If we could make a final decision on the choice of hardware, I could focus my efforts and try to build a few applications.

Looking forward to your response

Thank You





Nathan Kimball

unread,
Mar 16, 2015, 11:55:02 AM3/16/15
to cc-dev...@googlegroups.com
Hi,

I have learned that Piotr will be away for a few days (till Thursday). Perhaps Dan or someone else could answer Ankit.

Thanks, -Nathan

Daniel Damelin

unread,
Mar 16, 2015, 12:38:02 PM3/16/15
to cc-dev...@googlegroups.com
Hi Ankit,

We would want the author to be able to choose when to use this style of user input rather than replacing an existing input type (not sure if you were recommending changing this on all interactives for a particular numerical input type). For this to be useful it would need some minimal configuration options:
  • maxValue (optional)
  • minValue (optional)
  • initialValue
  • stepSize (how much each click of the up/down arrows would change the spinner value)
  • width
  • height
  • numberFormat (which would use format strings like the line graph’s xFormatter/yFormatter)

Other properties that would make sense would include the type of things all components work with:
  • id
  • type
  • property (for connecting it to a model property or custom parameter)
  • label
  • units
  • tooltip
  • helpIcon

You can see examples of these properties in use for the current numericOutput widget which is pretty close to the spinner. It’s just an output rather than an input like the spinner, but is similar in many other ways.
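Putting those options together, a hypothetical spinner definition in an interactive's JSON might look like the sketch below. The property names follow the lists above; the exact schema would be pinned down in interactive-metadata.js, and the specific values here are purely illustrative.

```javascript
// Hypothetical spinner component definition (a sketch, not the final schema).
const spinnerExample = {
  // generic component properties shared by all widgets:
  id: "temperature-spinner",
  type: "spinner",
  property: "targetTemperature", // model property or custom parameter to bind
  label: "Target temperature",
  units: "K",
  tooltip: "Set the target temperature",
  // spinner-specific configuration:
  initialValue: 300,
  minValue: 0,        // optional
  maxValue: 1000,     // optional
  stepSize: 10,       // change per click of the up/down arrows
  width: "6em",
  height: "2em",
  numberFormat: ".1f" // format string, like the graph's xFormatter/yFormatter
};

console.log(spinnerExample.type, spinnerExample.stepSize);
```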

-Dan
On Mar 16, 2015, at 11:48 AM, Ankit Bansal <ankitba...@gmail.com> wrote:

Dear Piotr

I was going through the task that asks us to set up the Lab framework to support a spinner UI, and I just need a little help getting started. If changes are to be made to the Lab framework, I understand that I will have to make them in the src folder of the "lab" project, as that is the part that deals with the UI elements. I now know the code that needs to be added to incorporate this functionality.

I did try adding it to one of the online simulations and here is a screenshot. I changed one of the inputs to incorporate the spinner.
<Screenshot (3).png>

Ankit Bansal

unread,
Mar 16, 2015, 12:57:28 PM3/16/15
to cc-dev...@googlegroups.com
Hello

Thanks for your reply Dan.
If I understood correctly, I am supposed to implement an entirely new component that would allow the spinner to be set and used, with some attributes predefined and some settable by the author?
Yes, my interpretation had been to change the input type in each of the individual simulations. So should I add a new widget to the framework instead, like the numeric output widget, or was my original approach correct?

Thanks
...

Daniel Damelin

unread,
Mar 16, 2015, 1:03:46 PM3/16/15
to cc-dev...@googlegroups.com
On Mar 16, 2015, at 12:57 PM, Ankit Bansal <ankitba...@gmail.com> wrote:

Hello

Thanks for your reply Dan.
If I understood correctly, I am supposed to implement an entirely new component that would allow the spinner to be set and used, with some attributes predefined and some settable by the author?
Yes, my interpretation had been to change the input type in each of the individual simulations. So should I add a new widget to the framework instead, like the numeric output widget, or was my original approach correct?

You would create an entirely new widget, like the numeric output widget, that an author could choose to use while authoring an interactive. You should not change existing interactives to use this new spinner. There is an example interactive that demonstrates the use of various widgets; you should add the spinner to that interactive so it can be demoed.

Thanks,
-Dan

P.S. One other issue to consider: we want our interactives to work on tablets, so the spinner buttons have to be large enough to work with a touch interface. If you don't have access to a tablet, don't worry about it; we can possibly tweak this later.

Ankit Bansal

unread,
Mar 16, 2015, 1:06:13 PM3/16/15
to cc-dev...@googlegroups.com
OK, I think I get the idea. I'll work on making a new JS file for the spinner and try to put it within the project.

Thanks

You received this message because you are subscribed to a topic in the Google Groups "Concord Consortium Developers" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/cc-developers/K0_fsgPxfjw/unsubscribe.
To unsubscribe from this group and all its topics, send an email to cc-developer...@googlegroups.com.

For more options, visit https://groups.google.com/d/optout.




Scott Cytacki

unread,
Mar 16, 2015, 1:08:09 PM3/16/15
to cc developers
--
Scott Cytacki
The Concord Consortium

Daniel Damelin

unread,
Mar 16, 2015, 1:13:14 PM3/16/15
to cc-dev...@googlegroups.com
And update the interactive-metadata file:

Not sure what else needs to be updated in order to serialize the interactive state as well.

-Dan

Ankit Bansal

unread,
Mar 16, 2015, 1:13:23 PM3/16/15
to cc-dev...@googlegroups.com
Hello Scott

Yes, that was the directory I was referring to (/controllers, to be accurate). Thank you for the insight; the details will surely be helpful as I continue working.
I will try to get back with working code soon.

Thank You





Ankit Bansal

unread,
Mar 16, 2015, 1:14:43 PM3/16/15
to cc-dev...@googlegroups.com
Ok Dan

I will take care of that aspect as well, and of anything else that needs attention.

Thank You

Piotr Janik

unread,
Mar 19, 2015, 7:30:09 AM3/19/15
to cc-dev...@googlegroups.com
2015-03-16 18:13 GMT+01:00 Daniel Damelin <ddam...@concord.org>:
And update the interactive-metadata file:

Not sure what else needs to be updated in order to serialize the interactive state as well.

The component needs to implement a .serialize() method.

In general, the required interface of a widget controller is described here:

The numeric output can be a good example, although it has only one-way binding between the widget and a model property (it displays the property but can't change it).
The radio button has two-way binding (it displays the property and can also change it), which is what the spinner would need.
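To make the one-way vs. two-way distinction concrete, here is a minimal, self-contained sketch. The model stub and all the names here (makeModel, makeSpinnerController, addPropertiesListener) are illustrative stand-ins, not Lab's actual API:

```javascript
// Tiny stand-in for a Lab model: get/set plus property-change listeners.
function makeModel(initial) {
  var props = Object.assign({}, initial);
  var listeners = {};
  return {
    get: function (name) { return props[name]; },
    set: function (name, v) {
      props[name] = v;
      (listeners[name] || []).forEach(function (fn) { fn(); });
    },
    addPropertiesListener: function (names, fn) {
      names.forEach(function (n) {
        (listeners[n] = listeners[n] || []).push(fn);
      });
    }
  };
}

// Sketch of a two-way-bound spinner controller (names are illustrative).
function makeSpinnerController(component, model) {
  var displayed;
  // one-way direction: model -> widget (like the numeric output)
  function updateView() { displayed = model.get(component.property); }
  model.addPropertiesListener([component.property], updateView);
  updateView();
  return {
    // second direction: widget -> model (like the radio button)
    increment: function () {
      model.set(component.property,
                model.get(component.property) + component.stepSize);
    },
    getDisplayedValue: function () { return displayed; },
    // serialize() lets the interactive's state be saved and restored
    serialize: function () { return Object.assign({}, component); }
  };
}

var model = makeModel({ temperature: 300 });
var spinner = makeSpinnerController({ property: "temperature", stepSize: 10 }, model);
spinner.increment();
console.log(spinner.getDisplayedValue()); // 310
```

Clicking the up arrow pushes a new value into the model, and the property listener immediately reflects it back into the widget, which is the round trip the spinner needs.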

Ankit Bansal

unread,
Mar 19, 2015, 1:09:46 PM3/19/15
to cc-dev...@googlegroups.com

Thank you Piotr

I was thinking along these lines as well. I have been reading other controllers for reference, like the button controller. I will continue working and will contact you as needed.


Ankit Bansal

unread,
Mar 21, 2015, 2:07:45 PM3/21/15
to cc-dev...@googlegroups.com
Hello Piotr

I had been busy for the past few days due to my exams; now I can start working and putting in time.
As you suggested, I went through the code for the radio button, and I observed that everything depends on the 'checked' option. Both the numeric output controller and the radio button have a lot of functions.
As I began to read the basic skeleton in interactives-controller.js, I was thinking that, for now, I could begin with a simple definition of the spinner UI: just the instantiation of the model, without any update functions etc., as per the requirement. The controller for now would just display. Would this approach be fine?
For this simple element, apart from the JS file I'll be writing, what files will I need to edit to get the basic code running? Just a little push would be helpful.

Thank You


Ankit Bansal

unread,
Mar 21, 2015, 4:30:33 PM3/21/15
to cc-dev...@googlegroups.com
Hello

I am having some trouble setting up the development environment. For some reason, I am getting this error:
ruby script/check-development-dependencies.rb
script/check-development-dependencies.rb:57:in `rescue in nodejs_check': undefined method `[]=' for nil:NilClass (NoMethodError)
from script/check-development-dependencies.rb:42:in `nodejs_check'
from script/check-development-dependencies.rb:113:in `<main>'
make[1]: *** [clean] Error 1
make[1]: Leaving directory `/home/ankit/lab'
make: *** [everything] Error 2

Can somebody help me resolve this problem?

Despite this, I have begun to write the files by studying the logic so I don't waste any time. The skeleton of spinner.js is almost ready, and I will shortly proceed to write the code for the spinner's view controller.

Thank You

Ankit Bansal

unread,
Mar 22, 2015, 10:53:36 AM3/22/15
to cc-dev...@googlegroups.com
Meanwhile, I was also looking into the slider orientation aspect, which I think I can work on by making changes to the interactive-metadata.js file and handling them in the controller.
Please let me know if there is any error in this approach.

Thank You

Piotr Janik

unread,
Mar 23, 2015, 7:57:13 AM3/23/15
to cc-dev...@googlegroups.com
Hi,

2015-03-22 15:53 GMT+01:00 Ankit Bansal <ankitba...@gmail.com>:
Meanwhile, I was also looking into the slider orientation aspect, which I think I can work on by making changes to the interactive-metadata.js file and handling them in the controller.
Please let me know if there is any error in this approach.

Yes, that's a good approach. 
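As a sketch of what that metadata change could look like: the shape below mimics an interactive-metadata.js entry, but the field names, defaults, and validator are illustrative, not the actual Lab schema.

```javascript
// Sketch: adding a hypothetical orientation option to the slider's metadata.
var sliderMetadata = {
  slider: {
    required: {
      id:   { type: "string" },
      type: { type: "string" }
    },
    optional: {
      min: { defaultValue: 0 },
      max: { defaultValue: 10 },
      // hypothetical new option, to be read by the slider controller:
      orientation: {
        defaultValue: "horizontal",
        validate: function (v) {
          return v === "horizontal" || v === "vertical";
        }
      }
    }
  }
};

var orientation = sliderMetadata.slider.optional.orientation;
console.log(orientation.defaultValue, orientation.validate("vertical"));
```

The controller would then read the validated orientation value and lay out the track and handle accordingly.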
 
I am having some trouble setting up the development environment. For some reason, I am getting this error:
ruby script/check-development-dependencies.rb
script/check-development-dependencies.rb:57

If you take a look at the line that is failing, you should notice that it's related to the NodeJS dependency check. Do you have NodeJS installed?
There was a bug in this script that prevented it from printing a meaningful message. I've fixed that, so you can pull the changes and run it again.

As I began to read the basic skeleton in interactives-controller.js, I was thinking that, for now, I could begin with a simple definition of the spinner UI: just the instantiation of the model, without any update functions etc., as per the requirement. The controller for now would just display. Would this approach be fine?

Yes, that's a very reasonable approach. The first step can be to display the spinner UI, then you can try to connect it to a Lab model property, and finally support modification of the property value.
 
For this simple element, apart from the JS file I'll be writing, what files will I need to edit to get the basic code running? Just a little push would be helpful.

I hope all the necessary info is in this thread; take a look at Scott's and Dan's messages. You have to create a new JS file (SpinnerController), then add it to the list of widget controllers in InteractivesController, and finally add its options to interactive-metadata.js.

Best regards,
Piotr
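The second of those steps, adding the new controller to the list of widget controllers, can be pictured as a type-to-constructor map. This is a rough sketch: the real dispatch lives inside Lab's interactives controller, and the constructor bodies below are placeholders.

```javascript
// Sketch: how an interactives controller can dispatch component
// definitions to widget controllers by type. Names are illustrative.
var ComponentConstructors = {
  button:        function (c) { return { kind: "button",        id: c.id }; },
  slider:        function (c) { return { kind: "slider",        id: c.id }; },
  numericOutput: function (c) { return { kind: "numericOutput", id: c.id }; },
  // the new widget is registered alongside the existing ones:
  spinner:       function (c) { return { kind: "spinner",       id: c.id }; }
};

function createComponent(component) {
  var constructor = ComponentConstructors[component.type];
  if (!constructor) {
    throw new Error("Unknown component type: " + component.type);
  }
  return constructor(component);
}

console.log(createComponent({ type: "spinner", id: "s1" }).kind); // spinner
```

With the entry in place, any interactive whose JSON declares a component of type "spinner" gets routed to the new controller automatically.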

Ankit Bansal

unread,
Mar 23, 2015, 5:56:40 PM3/23/15
to cc-dev...@googlegroups.com
Hello Piotr

Thanks for the reply. I will work on these and contact you tomorrow if I face any more difficulty.
Just a question though: should I keep working on these, or already try linking some simulations to the Leap or Kinect?
The latter should not take as much time once I am familiar with your code structure, which is why I had decided to focus on the given problems first.

Thanks

Piotr Janik

unread,
Mar 24, 2015, 8:47:32 AM3/24/15
to cc-dev...@googlegroups.com
2015-03-23 22:56 GMT+01:00 Ankit Bansal <ankitba...@gmail.com>:
Just a question though: should I keep working on these, or already try linking some simulations to the Leap or Kinect?

I think it's better to focus on the small coding tasks rather than trying to integrate Leap or Kinect right now. It requires broader discussion and design decisions.

- Piotr

Ankit Bansal

unread,
Mar 24, 2015, 6:23:28 PM3/24/15
to cc-dev...@googlegroups.com
Hi Piotr

Thanks for your reply
I have sent a pull request and am working on the two tasks at hand. I just need a way to test them, though I've made progress on the files.

Thanks!


Ankit Bansal

unread,
Mar 24, 2015, 6:56:54 PM3/24/15
to cc-dev...@googlegroups.com
Also, I had NodeJS installed, but there is a bug in Ubuntu 14.04 that hampers compatibility with Ruby 2.0, and Concord's simulations require Ruby 2.0.0-p195 to 2.0.0-p247.

Thanks

Piotr Janik

unread,
Mar 25, 2015, 5:56:06 AM3/25/15
to cc-dev...@googlegroups.com
Hey,

I think it's a bit too early for a pull request. This looks like a new file that doesn't bring any functionality yet.
But if you want to keep this pull request open to show progress and ask for feedback, that's perfectly fine. It's a good start, and the file is in the right place.
By the way, please make sure that whitespace and code layout are consistent.

Thanks,
Piotr


Ankit Bansal

unread,
Mar 25, 2015, 7:55:54 AM3/25/15
to cc-dev...@googlegroups.com
Hi

I had sent a wrong request yesterday, after which I made a commit. The new spinner.js file is in the new pull request.
I would love to get some feedback on it.
Also included are the changes to interactive-metadata.js for slider orientation, and to slider-controller.js. Please let me know if I am on the right track.

Thank You

Piotr Janik

unread,
Mar 25, 2015, 8:03:00 AM3/25/15
to cc-dev...@googlegroups.com
2015-03-25 12:55 GMT+01:00 Ankit Bansal <ankitba...@gmail.com>:
I had sent a wrong request yesterday, after which I made a commit. The new spinner.js file is in the new pull request.
I would love to get some feedback on it.

I don't see anything new, just https://github.com/concord-consortium/lab/pull/77 which I've already seen (comments in the previous mail).
 
Also included are the changes to interactive-metadata.js for slider orientation, and to slider-controller.js. Please let me know if I am on the right track.

I can't see it.

Ankit Bansal

unread,
Mar 25, 2015, 8:12:04 AM3/25/15
to cc-dev...@googlegroups.com


Piotr Janik

unread,
Mar 30, 2015, 9:21:31 AM3/30/15
to cc-dev...@googlegroups.com
Ankit,

I've had a chance to take a look at your pull requests again.

Re #77:
Nothing has changed there; it's still just a copy-pasted piece of code.

Re #78:
To be honest, I can't understand the motivation behind this pull request. The code doesn't even build. Did you test it locally before creating the pull request? What's more:
- It also includes some changes from other commits (mine, actually).
- It (theoretically) includes both spinner functionality and vertical orientation for the slider; that shouldn't be one pull request but two separate ones.
- Regarding the slider orientation functionality: you added one malformed .css() call and a few typos, so I'm not sure what the intention was.
- Whitespace.

Best regards,
Piotr

2015-03-27 9:48 GMT+01:00 Ankit Bansal <ankitba...@gmail.com>:
Hello

Thanks for your reply.
This is the link to project proposal I have made. Please let me know if any changes are required.

Looking forward to your feedback

Thank You

On Fri, Mar 27, 2015 at 3:14 AM, Nathan Kimball <nkim...@concord.org> wrote:
Hi,

Yes, it is fine to propose to have models separate from the usual ones for gesture input, so it would be the author's choice. 

Best, -Nathan


On Thu, Mar 26, 2015 at 4:44 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Sir
Just to confirm.
The choice to allow or disallow gesture interaction would be the author's, right? It doesn't need to be there compulsorily, just like the spinner UI and orientation options I was working on (the author's choice). So the author may or may not want to invoke it. Also, how each gesture is handled would be up to the author as well; we just need to provide the tools, define how gestures interact with each HTML element, and decide which gesture is supported for which widget.
That is the plan, if I am correct.
Please let me know if I have missed anything.

Thanks

On Fri, Mar 27, 2015 at 12:44 AM, Ankit Bansal <ankitba...@gmail.com> wrote:

And yes, I do understand how the Leap is better at this point.
I had given the arguments for Kinect usage because Piotr wanted more details about its API, advantages, and functionality, which I could highlight since I have experience working with it.

Thank You

On Fri, Mar 27, 2015 at 12:36 AM, Ankit Bansal <ankitba...@gmail.com> wrote:
Sir

Thank you for your reply. Since I have explored both APIs, I was thinking of writing a proposal that considers both alternatives and also describes how the devices would interact with Concord's APIs.
I was also thinking of covering the various gestures that the Leap/Kinect support and building from there: how each gesture can be detected, and then where it can be used in the simulations, with the author choosing whether or not to add it to a simulation.

Thank You

On Fri, Mar 27, 2015 at 12:19 AM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

I had not wanted to commit on the input device at this time. If you feel strongly about one device over the other, you may cast your proposal toward that device.  You have done the comparison and are in a good position to make a decision. 

I think it is fair to say that, based on your and others' investigations, our research needs will be better served by the Leap (even taking into account your argument about scale and extensibility). However, I will give serious consideration to a proposal focused on either device.

Best,

-Nathan


On Thu, Mar 26, 2015 at 8:20 AM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello Sir

I was thinking about getting the project proposal ready now, since tomorrow is the final date. I just had a question regarding the input device: since it has not been finalised yet, how should I put the specifics of taking input into the proposal? Any guidance on this matter?

Thank You

Piotr Janik

unread,
Apr 3, 2015, 2:51:00 PM4/3/15
to Ankit Bansal, Nathan Kimball, cc-dev...@googlegroups.com
Hi,

2015-04-02 12:02 GMT+02:00 Ankit Bansal <ankitba...@gmail.com>:
I have edited three files. This is the comparison link to what I made. Does it look good enough for a pull request now?

First and foremost, you need to make sure that your code is at least syntactically valid JavaScript. This code still doesn't build; it has clearly visible typos and errors.
I'm betting that you haven't set up Lab locally, have you? That's definitely the first step to take before you start coding and creating pull requests.

Regarding the other bits:
- the metadata looks good,
- the changes to the interactives controller are right,
- the spinner-controller.js file is in the right place,
- the spinner-controller.js content doesn't look very good at the moment (the errors mentioned above; I can't see a proper implementation idea yet),
- remember to keep whitespace clean (!)

Of course it's okay to ask for feedback on your local branch (which is what I'm doing now), but if you're asking about a pull request specifically, it should provide new functionality that is fully implemented (or almost, as some issues may obviously be revealed during the review process). So this branch doesn't look like a proper pull request (yet).

Thanks,
Piotr