This opens up an interesting creative opportunity: by hacking the sample rate stored in the file's header you can make it play back at a different speed, albeit destructively. SoundHack can do this; I wonder whether there is a simple command-line utility that can do it (without trying to actually _convert_ the file)? Then I could write a little script to pitch a cue down a semitone. Now if only you could interact with that header information from within QLab...
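For what it's worth, the "little script" doesn't need much: a canonical RIFF/WAV file stores its sample rate as a little-endian 32-bit integer at byte offset 24, with the byte rate right after it at offset 28. The sketch below (my own, not an existing utility) patches both fields in place to drop the pitch by one semitone, i.e. scales the rate by 2^(-1/12). It assumes the plain 44-byte header layout; real-world files can carry extra chunks before "fmt ", so treat this as a proof of concept, not a tool.

```python
import struct

SEMITONE_DOWN = 2 ** (-1 / 12)  # ~0.9439: a lower stored rate plays slower and flatter

def detune_wav(path, factor=SEMITONE_DOWN):
    """Rewrite the sample-rate field of a canonical RIFF/WAV header in place.

    Destructive, like the SoundHack trick: the audio data is untouched,
    only the header's idea of playback speed changes.
    Assumes the standard 44-byte header (fmt chunk immediately after 'WAVE').
    Returns (old_rate, new_rate).
    """
    with open(path, "r+b") as f:
        header = f.read(44)
        if header[:4] != b"RIFF" or header[8:12] != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        (rate,) = struct.unpack_from("<I", header, 24)        # nSamplesPerSec
        (block_align,) = struct.unpack_from("<H", header, 32) # bytes per frame
        new_rate = round(rate * factor)
        f.seek(24)
        f.write(struct.pack("<I", new_rate))                  # new sample rate
        f.seek(28)
        f.write(struct.pack("<I", new_rate * block_align))    # keep byte rate consistent
        return rate, new_rate
```

A 44.1kHz file comes out tagged at about 41,625Hz, which any player will then dutifully play a semitone low.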
Anyway, the question I did want to ask was which is more taxing for the computer: sample-rate conversion on the fly, or the increased disk activity of higher-sample-rate files? Say you find yourself working with a desk running at 96kHz, but your show has mainly been built from 44.1kHz material originating on CD: should you batch-convert all your files to 96kHz, thereby more than doubling the demand on the hard disk for no actual quality gain, or should you let the CPU do the conversion on the fly? (If you were really worried about the quality of Apple's SRC you probably wouldn't have got yourself into this position in the first place: you'd have made all your files at 96kHz, using iZotope's SRC where necessary...)
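To put a number on "more than doubling": disk throughput per track is just sample rate x channels x bytes per sample, so the ratio between 96kHz and 44.1kHz files is the same whatever the bit depth. A quick back-of-envelope check (assuming stereo 24-bit files, purely for illustration):

```python
def disk_rate(sample_rate, channels=2, bytes_per_sample=3):
    """Sustained read rate in bytes/second for one uncompressed track."""
    return sample_rate * channels * bytes_per_sample

r44 = disk_rate(44100)  # 264,600 B/s per stereo track
r96 = disk_rate(96000)  # 576,000 B/s per stereo track
print(r96 / r44)        # ~2.18: more than double the disk demand
```

Multiply that by however many cues play simultaneously and the 96kHz route eats a meaningfully bigger slice of the disk's bandwidth.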
Rich
________________________________________________________
WHEN REPLYING, PLEASE QUOTE ONLY WHAT YOU NEED. Thanks!
Change your preferences or unsubscribe here:
http://lists.figure53.com/listinfo.cgi/qlab-figure53.com
Follow Figure 53 on Twitter here: http://twitter.com/Figure53
Disk activity is almost always the first thing I'd try to reduce in a QLab system. Audio processing usually has a lot of CPU headroom, so it should typically be better to put the load there than on the disk.
-C