Olivier,
Thanks for continuing to update Pyo. It is an amazing system with some incredibly powerful features.
For some projects where I'm using Pyo, I'd like some additional features. I realize that everyone has wishes and your time is limited. I've donated in the past, but would be glad to donate toward specific new features, if that were an option.
1. My first group of projects uses Pyo to render virtual sound environments for different purposes. I use the HeadSpace object for this a lot! Since HeadSpace isn't a native object, though, it can't take parametric input from other Pyo objects. For virtual sound environments, parametric input would allow HeadSpace parameters to scale automatically as the parameters of other objects are adjusted. I also haven't worked out a clean way to smooth the movement of HeadSpace objects. Perhaps cross-fading between impulse responses would get rid of the clicking when the position changes repeatedly? Anyway, I'm sure that you must know how to do this the right way. A native HeadSpace object would certainly require a bit less processing power as well.
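(For what it's worth, here is the kind of click removal I have in mind, sketched in plain Python rather than as Pyo objects. The idea is an equal-power crossfade between a block of audio rendered with the old position's impulse response and the same block rendered with the new one; the function name and arguments are just my own illustration, not anything from Pyo.)

```python
import math

def equal_power_crossfade(old_block, new_block):
    """Blend from old_block to new_block over one block of samples.

    old_block / new_block: equal-length lists of samples, rendered with
    the previous and the new HRTF position respectively. Equal-power
    (cosine/sine) ramps keep the perceived loudness roughly constant and
    avoid the click that a hard switch between filters produces.
    """
    n = len(old_block)
    out = []
    for i in range(n):
        t = i / (n - 1)  # ramps 0 -> 1 across the block
        fade_out = math.cos(t * math.pi / 2)  # 1 -> 0
        fade_in = math.sin(t * math.pi / 2)   # 0 -> 1
        out.append(old_block[i] * fade_out + new_block[i] * fade_in)
    return out
```

Done per block whenever the position changes, something like this should keep repeated movement smooth; a native object could of course do the same thing far more efficiently in C.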
2. My second group of projects is focused on musical ideas. Pyo is generally good for this, as long as sounds are completely synthesized. For some situations, though, I'd like to be able to use multi-sample instruments. It would be very good if we could use sampled instruments in a popular format, such as SoundFont or SFZ, that carries key/velocity mapping information. At one time, you wrote about connecting to FluidSynth as a possible solution. I'm not particularly attached to SoundFont or SFZ as a format, nor to FluidSynth as a solution, but it would be great to have some way to easily incorporate multi-sample instruments.
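(To make concrete what I mean by key/velocity mapping, here is a minimal sketch. The zone fields are loosely modeled on SFZ's lokey/hikey/lovel/hivel region opcodes, but the function and the file names are purely illustrative.)

```python
def pick_sample(zones, key, velocity):
    """Select the sample file for a (MIDI key, velocity) pair.

    Each zone is a dict describing one sampled region, roughly like an
    SFZ <region>: the sample to play and the inclusive key/velocity
    ranges it covers. Returns the first matching sample name, or None
    if no zone covers this key/velocity combination.
    """
    for z in zones:
        if (z["lokey"] <= key <= z["hikey"]
                and z["lovel"] <= velocity <= z["hivel"]):
            return z["sample"]
    return None
```

A native player would then simply load and pitch-shift the returned sample, which is the part that is currently awkward to do by hand in Pyo.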
#1 is most important to me. The first practical use of this tech is my effort to add positional sound as an additional information channel to the NVDA open source screen reading program for the blind (nvaccess.org). Currently, all information about the computer's user interface is presented in a narrative style by a speech synthesizer. Attempting to narrate everything that happens in a UI is a challenge, partly because speech doesn't intuitively carry any positional information, but also because speech, even at high speed, is a slow channel for conveying complex information. While most people can only focus on a single conversation at a time, our brains are able to process non-spoken audio cues in parallel. My goal is to use audio cues to move as much information away from the speech channel as possible, thereby dramatically increasing the speed at which blind people can operate a computer through a screen reading program (currently quite slow). Positional audio will also naturally be able to express spatial relationships that can only be roughly and slowly expressed through speech. The NVDA screen reader is almost entirely written in Python and integrates easily with Pyo. My efforts will be freely available as part of the larger NVDA project.
Would you, or someone else on the list, have time to work on either of these? My C++ skills are poor and rusty. I would probably be able to help more by providing a little financial incentive.
Bryan