--
--
----
post message :cc-dev...@googlegroups.com
unsubscribe: cc-developer...@googlegroups.com
more options: http://groups.google.com/group/cc-developers?hl=en
---
You received this message because you are subscribed to the Google Groups "Concord Consortium Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cc-developer...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
You received this message because you are subscribed to a topic in the Google Groups "Concord Consortium Developers" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/cc-developers/K0_fsgPxfjw/unsubscribe.
To unsubscribe from this group and all its topics, send an email to cc-developer...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Dear Piotr,

I was going through the project idea that asks us to extend the Lab Framework to support a spinner UI, and I just need a little help getting started. If any changes are to be made in the Lab Framework, I understand that I will have to make them in the src folder of the "lab" project, as that is the part that deals with the UI elements. I now know the code that needs to be added to incorporate this functionality. But the example given here already does contain it (cml file). Should I try to add support for the rest of the individual examples?

Thank you
On Sun, Mar 15, 2015 at 8:18 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello,

Yes, I have been thinking about that aspect as well. But since that is largely dependent on the videos, I will first watch all of them and then start deciding how each one can be approached using an input device.

Thank you
On Sun, Mar 15, 2015 at 8:07 PM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

I think you have made a very interesting point regarding the activity of the student and the scale of the interaction. It is a researchable point, and I have asked the research team to think about it. I don't have an answer to it right away. (This happens to be the time of the national science teachers convention in the US, so a lot of people are busy or away.)

I encourage you to continue to look at how either one of the systems that we have discussed will interface with our models, as in previous posts. This may well be the more difficult issue of HCI.

Best,
-Nathan Kimball
On Sat, Mar 14, 2015 at 12:53 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello,

There is another point that came to my mind that I wanted to share. From the perspective of a child who is already learning while sitting on a chair in front of a screen, whether he interacts by clicking or by moving his fingers above the Leap device, he is unlikely to notice any major difference: either way he has to click and type, or move and gesture, within that limited space (speaking from experience with a few children, from when we were trying to select a device for our project). As opposed to this, standing up to learn and play would add a new perspective altogether. On extensibility, yes, the Kinect wins; but as far as ease of use is concerned, the two devices would be roughly equal for the user/child, because he just has to make the appropriate gestures, which is independent of the device; only the distance would differ. So apart from extensibility, engaging the child in the learning process also becomes an important aspect.

I will be looking at the molecular libraries and working on the problems mentioned starting today.

Looking forward to your reply.

Thank you
On Sat, Mar 14, 2015 at 2:16 AM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

Thanks very much for your device comparison and your code samples. You certainly have relevant experience. I will leave it to Piotr for more insight about your code samples. As for the comparison, you have outlined a classic trade-off between the two devices, extensibility (Kinect) vs. ease of use (Leap), among other factors. I am presently checking with other project members (our researchers) to see whether the question of extensibility is important at this point in the research. I'm quite certain we won't be making a hard decision on the device right away.

But there are many things for you and the others interested in this topic to do. As Piotr mentioned, looking at our molecular modeling HTML5 libraries will be very useful. From his earlier email:

Please take a look at Dan's post: https://groups.google.com/forum/#!topic/cc-developers/FhVu0jZsSFQ
I think it's also relevant to this project, as at some point in time we will have to integrate gesture support with the Lab Framework. You can try to implement one of the listed features; it's a nice way to present your coding skills. I'm familiar with the Lab Framework, so I can help if you get stuck with something.

Best,
-Nathan
On Fri, Mar 13, 2015 at 12:53 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello,

I hope you have been through the analysis, and that we will soon be able to choose a final input device for the project. These are a few of my code samples:

1) Doodle Art: Kinect Paint App: Main Code. One of the winners of the Microsoft Code.Fun.Do Hackathon held at BITS Pilani.
2) A few JavaScript (jQuery and Ajax) files written for pumpapp, a dynamic URL sharing tool.

Please let me know if you have any questions or need to see anything more.

Thank you
----------
----------
From: Nathan Kimball <nkim...@concord.org>
Date: Sun, Mar 8, 2015 at 10:40 AM
To: Ankit Bansal <ankitba...@gmail.com>
Cc: Piotr Janik <janikp...@gmail.com>

Hello Ankit,

Very good question. The emphasis right now should be on detailed tracking of the hands and fingers. I'm aware that the Kinect has capabilities for the whole body, and that could be very exciting to use; however, it may be impractical for most school situations. Clenched fist, flat hand, fingers together and separately, and motions of the hands together have been imagined.

-Nathan
----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Sun, Mar 8, 2015 at 11:52 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>
Hello Sir,

I have been through the APIs and capabilities of both sensors, and both seem like viable options. As you mentioned a few mails back, we need to look into a few criteria before choosing one. In that regard, I have thought of a few aspects that can be considered.

Firstly, what is the age group of children we are looking to target? Going through the site, I gathered from the videos and animations that the student group being targeted here would be teenagers (secondary school / high school, i.e. around class 10 here in India).

The Leap Motion controller architecture has two spaces of interaction, a hover zone and a touch zone, as shown by the image below. As per my understanding of its gesture recognition capabilities, in order to interact with the videos we will not require pinpoint accuracy, but rather care about how the animation is affected by the gesture as a whole (i.e. the act of two hands joining, rather than how accurately two fingers are in contact with each other). Looking at the age of the children involved, I believe the Leap Motion controller can be a good option, as it would allow taking more gestures into account and integrating them into the simulations.

If we choose to extend this gesture recognition, learning-by-doing scenario to younger children, however, I would prefer the Kinect. The reason is that the Leap Motion controller requires you to move your hands in a fixed area, while the Kinect allows full-body interaction. As I mentioned before, I have had a chance to work with autistic children and to use their performance against that of other children as a measure. What we observed was that using their entire body seemed more intuitive and engaging for both groups of children.

Another aspect is that younger children need more room for error compared to teenagers, which would limit our approach when incorporating gestures using the Leap Motion: it has very high accuracy for skeletal tracking of fingers, and even a slight change can lead to a different interpretation. I also wanted to mention that Kinect v2 allows three kinds of hand positions to be recognised (flat hand, closed fist, and two fingers out), so using those from a distance can also be an option.

Looking forward to your reply.

Thank you
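To make the "gesture as a whole, not pinpoint accuracy" idea concrete, here is a minimal sketch of mapping a tracked palm height to a normalized control value a simulation could consume. This is a hypothetical helper, not part of any real Leap Motion or Kinect API; the zone bounds are illustrative assumptions.

```javascript
// Hypothetical sketch: map a tracked palm height (in mm above the sensor)
// to a normalized 0..1 value that a simulation control could consume.
// The interaction-zone bounds are assumptions, not real device constants.
function makeHandMapper(minHeight, maxHeight) {
  return function (palmHeight) {
    // Clamp into the interaction zone, then normalize to 0..1.
    var clamped = Math.min(Math.max(palmHeight, minHeight), maxHeight);
    return (clamped - minHeight) / (maxHeight - minHeight);
  };
}

// Example: assume a hover zone from 100 mm to 400 mm above the device.
var mapper = makeHandMapper(100, 400);
// mapper(100) -> 0, mapper(250) -> 0.5, mapper(400) -> 1
```

Coarse mappings like this tolerate jitter in the raw tracking data, which matters for younger users who gesture less precisely.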
----------
From: Ankit Bansal <ankitba...@gmail.com>
Date: Tue, Mar 10, 2015 at 10:04 AM
To: Nathan Kimball <nkim...@concord.org>
Cc: Piotr Janik <janikp...@gmail.com>

Sir,

I hope you have gone through my previous e-mail, where I tried to highlight a few aspects that can be taken into consideration while choosing an input device. Meanwhile, I have been through the APIs of the Leap Motion controller and have tried to understand and work with them as much as I can without the hardware device being available as of now. Since I am familiar with the Kinect, going through the JS libraries was relatively less time consuming. If we could make a final decision on the choice of hardware, I could begin to focus my efforts and try to make a few applications.

Looking forward to your response.

Thank you
On Mar 16, 2015, at 11:48 AM, Ankit Bansal <ankitba...@gmail.com> wrote:
Dear Piotr,

I was going through the project idea that asks us to extend the Lab Framework to support a spinner UI, and I just need a little help getting started. If any changes are to be made in the Lab Framework, I understand that I will have to make them in the src folder of the "lab" project, as that is the part that deals with the UI elements. I now know the code that needs to be added to incorporate this functionality.
I did try adding it to one of the online simulations and here is a screenshot. I changed one of the inputs to incorporate the spinner.
<Screenshot (3).png>
...
On Mar 16, 2015, at 12:57 PM, Ankit Bansal <ankitba...@gmail.com> wrote:
Hello
Thanks for your reply, Dan.

If I understood correctly, I am supposed to implement a new data structure altogether, one which would allow me to set and use the spinner, with some predefined attributes and some settable by the user? Yes, what I had interpreted it as was changing the input type in each of the individual simulations. So do I add a new setup to the framework then, like the numeric output widget, or was the original approach correct?
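The "predefined attributes plus user-settable ones" idea could be sketched as a metadata entry with defaults that author-provided properties are merged over. This is only a rough sketch; the property names (defaultValue, immutable) and the spinner fields are assumptions loosely patterned on how component metadata tends to look, not the actual Lab Framework definitions.

```javascript
// Hypothetical spinner metadata entry: each property has a default, and
// "immutable" marks attributes the author cannot override. All names here
// are assumptions, not the real interactive-metadata.js schema.
var spinnerMetadata = {
  type:  { defaultValue: "spinner", immutable: true },
  min:   { defaultValue: 0 },
  max:   { defaultValue: 10 },
  step:  { defaultValue: 1 },
  value: { defaultValue: 0 }
};

// Merge author-provided settings over the defaults, keeping immutable
// properties fixed at their default value.
function buildComponent(metadata, authorProps) {
  var component = {};
  Object.keys(metadata).forEach(function (key) {
    var spec = metadata[key];
    if (!spec.immutable && authorProps && authorProps.hasOwnProperty(key)) {
      component[key] = authorProps[key];
    } else {
      component[key] = spec.defaultValue;
    }
  });
  return component;
}

// Usage: the author only supplies the attributes they care about.
var spinner = buildComponent(spinnerMetadata, { min: 1, max: 5 });
```

With this shape, adding the spinner once to the framework means each simulation's JSON only names the widget and its overrides, instead of each simulation changing its input type by hand.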
And update the interactive-metadata file. I am not sure what else needs to be updated in order to serialize the interactive state as well.
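One common serialization approach, sketched below under assumed names and defaults (none of this is the actual Lab Framework serialization code), is to write out only the properties that differ from the metadata defaults, so the saved interactive stays minimal:

```javascript
// Hypothetical sketch: serialize a spinner's state by emitting only the
// properties that differ from assumed defaults. Field names and default
// values are illustrative assumptions.
var spinnerDefaults = { min: 0, max: 10, step: 1, value: 0 };

function serializeSpinner(state) {
  var out = { type: "spinner" };
  Object.keys(spinnerDefaults).forEach(function (key) {
    if (state[key] !== undefined && state[key] !== spinnerDefaults[key]) {
      out[key] = state[key];
    }
  });
  return out;
}

// Only max and value differ from the defaults, so only they are saved.
var saved = serializeSpinner({ min: 0, max: 100, step: 1, value: 42 });
```

If the framework round-trips interactives through JSON, a sparse representation like this keeps authored files readable and keeps defaults in one place.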
Thank you Piotr
I was thinking along these lines as well. I had been reading other controllers for reference, like the button controller. I will continue the work and contact you as and when required.
Meanwhile, I was also looking into the slider orientation aspect, which I think I can implement by making changes in the interactive-metadata.js file and handling the option in the controller. Please let me know if there is any error in this approach.
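As a rough illustration of that split, a metadata default plus controller-side handling of the option might look like the sketch below. The property name "orientation", its values, and the CSS-class scheme are all assumptions for illustration, not the actual slider-controller.js code.

```javascript
// Hypothetical sketch: an "orientation" option defaulting in metadata,
// validated and turned into a CSS class by the controller. Names and
// values are assumptions.
var sliderDefaults = { orientation: "horizontal" };

function sliderCssClass(authorProps) {
  var orientation =
    (authorProps && authorProps.orientation) || sliderDefaults.orientation;
  if (orientation !== "horizontal" && orientation !== "vertical") {
    throw new Error("Unknown slider orientation: " + orientation);
  }
  return "slider-" + orientation;
}
```

Validating the option where it is consumed keeps authoring mistakes loud instead of silently rendering a broken widget.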
I am having some trouble setting up the development environment. For some reason, when I run

ruby script/check-development-dependencies.rb

I get an error at script/check-development-dependencies.rb:57.
As I began to read the basic skeleton in interactives-controller.js, I was thinking that, for now, I could begin with a simple definition of the spinner UI: just the instantiation of the model, without any update functions etc. as per the requirement. The controller would, for now, just display it. Will this approach be fine?
For this simple element, apart from the JS file I'll be writing, which files will I need to start editing to get the basic code running? Just a little push would be helpful.
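A "display-only first" controller could start roughly as below. This is a minimal sketch loosely patterned on how simple controllers such as the button controller expose their view; every name here (the constructor, setValue/getValue, getViewContainer) is an assumption, not the actual Lab Framework controller API, and the view is a plain string stand-in rather than a real DOM element.

```javascript
// Hypothetical minimal spinner controller skeleton: holds a clamped value
// and renders a placeholder view. No model update wiring yet, matching the
// "just instantiate and display" first step. All names are assumptions.
function SpinnerController(component) {
  var min = component.min !== undefined ? component.min : 0;
  var max = component.max !== undefined ? component.max : 10;
  var value = component.value !== undefined ? component.value : min;

  return {
    // Clamp and store a new value; returns the value actually stored.
    setValue: function (v) {
      value = Math.min(Math.max(v, min), max);
      return value;
    },
    getValue: function () { return value; },
    // Display-only for now: a plain string instead of a DOM element.
    getViewContainer: function () {
      return "[spinner " + min + ".." + max + " = " + value + "]";
    }
  };
}

var c = SpinnerController({ min: 1, max: 5 });
```

Starting with the model and a stub view lets the registration and layout plumbing be tested before any real widget rendering is written.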
Just a question, though: should I work on these, or try linking some simulations to the Leap or Kinect already?
I had sent a wrong pull request yesterday, after which I made a commit. The new spinner.js file is in the new pull request. I would love to get some feedback on it.
Also included are the changes in the interactive-metadata.js file for slider orientation, and the slider-controller.js. Please let me know if I am on the right track.
Hello,

Thanks for your reply. This is the link to the project proposal I have made. Please let me know if any changes are required.

Looking forward to your feedback.

Thank you

On Fri, Mar 27, 2015 at 3:14 AM, Nathan Kimball <nkim...@concord.org> wrote:

Hi,

Yes, it is fine to propose having models for gesture input that are separate from the usual ones, so it would be the author's choice.

Best,
-Nathan

On Thu, Mar 26, 2015 at 4:44 PM, Ankit Bansal <ankitba...@gmail.com> wrote:

Sir,

Just to confirm: the choice to allow or not allow interactions using gestures would be the author's, right? It doesn't need to be there compulsorily, just like the spinner UI orientation option I was working on (the author's choice). So the author may or may not want to invoke the call. Also, how to handle each gesture would be up to him as well; we need only provide the tools to do so, along with how it would interact with each HTML element and which gesture will be supported for which widget. This is the plan, if I am correct. Please let me know if I have missed anything.

Thanks

On Fri, Mar 27, 2015 at 12:44 AM, Ankit Bansal <ankitba...@gmail.com> wrote:

And yes, I do understand how the Leap is better at this point. I had given the arguments for Kinect usage because Piotr wanted some more details about the API, advantages, and functionality, which I could highlight since I had experience working with it.

Thank you

On Fri, Mar 27, 2015 at 12:36 AM, Ankit Bansal <ankitba...@gmail.com> wrote:

Sir,

Thank you for your reply. Since I have explored the APIs of both, I was thinking of writing a proposal that considers both alternatives, and that also describes how the devices will interact with Concord's APIs. In my proposal I was also thinking of writing about the various gestures that the Leap/Kinect support, and then building from there: how each can first be detected, followed by where each can be used in the simulations, with the author choosing whether or not to add it to a simulation.

Thank you
On Fri, Mar 27, 2015 at 12:19 AM, Nathan Kimball <nkim...@concord.org> wrote:
Hi Ankit,

I had not wanted to commit to an input device at this time. If you feel strongly about one device over the other, you may cast your proposal toward that device. You have done the comparison and are in a good position to make a decision. I think it is fair to say that, based on your and others' investigations, our research needs will be better suited to the Leap (even taking into account your argument about scale and extendability). However, I will give serious consideration to a proposal focused on either device.

Best,
-Nathan

On Thu, Mar 26, 2015 at 8:20 AM, Ankit Bansal <ankitba...@gmail.com> wrote:

Hello Sir,

I was thinking about getting the project proposal ready now, since tomorrow is the final date. I just had a question regarding the input device: since it has not been finalised yet, how should I put the specifics of taking input into the proposal? Any guidance on this matter?

Thank you
I have edited three files. Here is the comparison link for what I made. Does it look sufficient for a pull request now?