Simon Stapleton
Oct 28, 2024, 4:36:44 PM
to extemp...@googlegroups.com
It's been - a while - since I last used extempore. Life got in the way. For about 12 years.
And I suddenly find myself with a new performance project, something that's been in the back of my mind for about 25 years: a follow-on from a series of juggling pieces I did 1997-1999, all done with super-8, which went down pretty well.
The basic conceit was to juggle as a team with myself, exploring ideas of the hyper-reality of super-saturated film color versus a somewhat shabbier reality, the boundary between reality and image, and the physical boundary of the screen. I was very happy with it as it stood. However, the performance itself was, not to put too fine a point on it, bloody hard: tied to a rigidly scheduled 24 frames per second and an image-to-music sync that was somewhat approximate. Even a more "conventional" juggling performance locked to a piece of music is rigid, but in the case of an error or a drop it offers the possibility of simply skipping part of the routine and eventually sliding more or less seamlessly back into the piece's timeline. When the performance also has to sync to a projected image that continues no matter what, options become very limited, and you have to be really concentrating - after all, most of the time the action you're supposed to be working in sync with is happening behind you, and, blinded by the projector, you can't see it.
But I somewhat digress. I find myself with a "need" to upgrade that performance to the brave new digital world: to go from flicker to glitch. We have technology available now that was nothing short of science fiction 25 years ago, and it opens up some very interesting possibilities. Without going into too many boring details, I am thinking of multiple modular projection-screen objects on stage, tracked via opencv, with a video projector used to project parts of (potentially multiple) video clips onto those screens simultaneously with live performance happening around them; the creation of virtual spaces around those screens; and the like. And generative music.
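To give a feel for the moving parts, here's a rough sketch of the tracking and warping side in plain OpenCV C++ (emphatically not xtlang, and none of it is anything Extempore ships - the aruco dictionary, the clip path, and the one-marker-per-screen shortcut are all placeholder assumptions on my part):

// Rough sketch: fiducials on each screen object give a quad per frame, and a
// homography warps a clip frame onto that quad. Uses the opencv_contrib aruco
// module (OpenCV <= 4.6 style API; newer releases prefer cv::aruco::ArucoDetector).
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);               // camera watching the stage
    cv::VideoCapture clip("clip.mov");     // pre-rendered clip (placeholder path)
    auto dict = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

    cv::Mat frame, clipFrame;
    while (cam.read(frame) && clip.read(clipFrame)) {
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(frame, dict, corners, ids);

        // Canvas that eventually goes full-screen on the projector output.
        cv::Mat canvas = cv::Mat::zeros(1080, 1920, clipFrame.type());

        if (!ids.empty()) {
            // Treat the first marker's four corners as the screen quad. A real
            // version would fuse several markers per screen object and push the
            // quad through a camera-to-projector calibration; here the warp is
            // done straight into camera coordinates to keep the sketch short.
            cv::Point2f src[4] = {
                {0.f, 0.f},
                {(float)clipFrame.cols, 0.f},
                {(float)clipFrame.cols, (float)clipFrame.rows},
                {0.f, (float)clipFrame.rows}};
            cv::Point2f dst[4] = {corners[0][0], corners[0][1],
                                  corners[0][2], corners[0][3]};
            cv::Mat H = cv::getPerspectiveTransform(src, dst);
            cv::warpPerspective(clipFrame, canvas, H, canvas.size(),
                                cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);
        }

        cv::imshow("projector", canvas);
        if (cv::waitKey(1) == 27) break;    // ESC to quit
    }
    return 0;
}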
Being able to track objects on stage means being able to sequence parts of the performance. I'm not sure, but I might even be able to detect failures on my part ("drops", as they are known in juggling parlance) and integrate them into the performance. At the least, I should be able to remove most, although probably not all, of the overall timing stress, making the whole damn thing more fun to do.
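And a similarly hand-wavy sketch of what I mean by sequencing off the tracking: marker visibility driving cues, with a timeout standing in for crude "drop" detection. Every name in it (advance_cue, handle_drop, the 500 ms threshold) is hypothetical.

// Hypothetical sketch: tracked-marker visibility drives the sequencer, plus a
// crude "something vanished for too long" heuristic. Not Extempore code.
#include <chrono>
#include <map>
#include <vector>

using Clock = std::chrono::steady_clock;

struct MarkerState {
    Clock::time_point last_seen;   // last time this marker id was detected
    bool visible = false;
};

constexpr auto DROP_TIMEOUT = std::chrono::milliseconds(500);  // guessed threshold

// Called once per captured frame with the ids detectMarkers() returned.
void update_sequence(const std::vector<int>& visible_ids,
                     std::map<int, MarkerState>& states) {
    const auto now = Clock::now();
    for (int id : visible_ids) {
        MarkerState& s = states[id];
        if (!s.visible) {
            s.visible = true;
            // A marker (re)appearing could be the trigger to advance a cue.
            // advance_cue(id);   // hypothetical sequencer hook
        }
        s.last_seen = now;
    }
    for (auto& [id, s] : states) {
        if (s.visible && now - s.last_seen > DROP_TIMEOUT) {
            s.visible = false;
            // A marker vanishing for too long might mean a drop: branch the
            // performance instead of fighting the clock.
            // handle_drop(id);   // hypothetical
        }
    }
}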
So, overall, this seems like a win-win-win situation: more possibilities to explore and a less stressful result. But for any of it to work I need to be able to script opencv and to build a sequencing engine. Now, I could do that in python, but hell, I loathe python, even if I could probably leverage zarf's "boodler" to do a generative-soundscape type thing as well. But extempore is nice to code for, and handles timing properly, so it's back to extempore for Simon.
Which is where my question comes in. I run OSX, and have an even more severe allergic reaction to windows than I do to python. Looking at the git repo, it seems the opencv stuff currently only works on windows, and with a statically linked opencv to boot. I'm guessing the former is down to nobody having bothered to make an OSX-compatible CMakeLists.txt - I can fix that - but I was wondering if there is any particular reason why opencv needs to be statically linked, other than "it was easiest to do that way"?
Anyway, cheers all in advance for any ideas you might have.
Simon