On Mon, Aug 8, 2011 at 11:53 AM, Greg <grj...@gmail.com> wrote:
How did you overcome that in your app? Got any tips on how someone
else can get the same great results you did?
Actually, this app isn't doing any audio sampling. Instead, it's doing audio synthesis with the new Web Audio API that is being developed. I'm using the Audiolet.js library for the synthesis, which basically produces a sine wave and applies effects to it to try to simulate the sound of a note. Check out this presentation from the Mozilla Summit that explains the new Audio APIs: http://www.youtube.com/watch?v=1Uw0CrQdYYg
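To give a rough idea of what "producing a sine wave" means under the hood (this is plain JavaScript, not Audiolet's actual API — the function name and the decay envelope are my own, purely for illustration):

```javascript
// Illustrative only: the raw math that synthesis libraries like Audiolet wrap.
// Returns an array of samples for a sine tone with a simple decaying
// envelope, which is what makes it sound a bit like a plucked note.
function synthesizeNote(frequency, durationSec, sampleRate) {
  const sampleCount = Math.floor(durationSec * sampleRate);
  const samples = new Array(sampleCount);
  for (let n = 0; n < sampleCount; n++) {
    const t = n / sampleRate;
    const envelope = Math.exp(-3 * t / durationSec); // exponential decay
    samples[n] = envelope * Math.sin(2 * Math.PI * frequency * t);
  }
  return samples;
}

// Example: a 10 ms burst of A440 at a 44.1 kHz sample rate.
const note = synthesizeNote(440, 0.01, 44100);
```

In a real app you'd hand samples like these to the browser's audio output rather than compute them by hand — that plumbing is exactly what the library takes care of.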
One tip I have: when developing a project that's a mashup of several different technologies, get each technology working separately first and then mash them together. With our Cloud Composer app, we first got the audio working in a basic web page, then got the canvas click events working in a separate web page, and then got the add-notes feature working with a basic VexFlow canvas in another separate web page. Once we had all of these features working separately, we mashed them together and added in the jQuery Mobile framework. With this approach it's much easier to find bugs and troubleshoot your code.
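For the canvas-click step, the core of it is just mapping a mouse event's page coordinates into canvas-local coordinates. A minimal sketch (the function name is mine, not from the actual app):

```javascript
// Convert a click's page coordinates into canvas-local coordinates,
// given the canvas's bounding rectangle on the page.
function toCanvasCoords(clientX, clientY, canvasRect) {
  return {
    x: clientX - canvasRect.left,
    y: clientY - canvasRect.top,
  };
}

// In the browser you'd call it from a click listener, e.g.:
// canvas.addEventListener('click', (e) => {
//   const rect = canvas.getBoundingClientRect();
//   const pos = toCanvasCoords(e.clientX, e.clientY, rect);
//   // ...map pos.x / pos.y to a note position on the staff...
// });
```

Because the coordinate math is a pure function, it's easy to test in isolation before wiring it up to the real canvas — which is the whole point of building each piece separately first.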
-Greg
I look to other people to help me out with audio when it's necessary. :)
And great advice - that's pretty much the exact same thing our team
did with the "picture guessing game" thing. I worked on gathering
images, the "randomizer" PHP script, metadata for each image, and finally
checking for matches in the voice recognition. Another teammate did
the game-loop and canvas elements with countdown and masking, another
kept the ideas flowing, and our final teammate ensured the wireframes
were created. Keeping it isolated in sections (and/or by
technologies) is without a doubt a priceless practice. :)
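For what it's worth, the "randomizer" was a PHP script in our case, but the idea is simple enough to sketch in JavaScript (the image list, metadata shape, and function names here are made up for illustration):

```javascript
// Hypothetical image list with per-image metadata, as used by the
// picture guessing game. The real data lived server-side in PHP.
const images = [
  { file: "cat.jpg",  answer: "cat" },
  { file: "tree.jpg", answer: "tree" },
  { file: "car.jpg",  answer: "car" },
];

// Pick one image at random for the next round.
function pickRandomImage(list) {
  const index = Math.floor(Math.random() * list.length);
  return list[index];
}

// Check a voice-recognition guess against the image's metadata,
// normalizing case and surrounding whitespace.
function isMatch(guess, answer) {
  return guess.trim().toLowerCase() === answer.toLowerCase();
}

const next = pickRandomImage(images);
```

Each piece (random selection, metadata lookup, match checking) can be tested on its own before the game loop and canvas code ever touch it.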