I'm just starting to investigate this type of project myself. I've succeeded in getting some Kinect/Processing examples to run, but I haven't yet connected anything to QLab.
However, there is a Syphon library for Processing, so you should be able to send your results to Syphon instead of (or in addition to) a window, and QLab should be able to pick it up just fine. In my initial projects my Mac is not working very hard to display a 25,000-particle window in conjunction with a Kinect-driven blob that constrains the particles (20% CPU load max). I suspect running that concurrently with QLab should be okay unless the complexity ramps up significantly.
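For what it's worth, publishing a sketch to Syphon is only a couple of lines. This is a minimal sketch assuming the Syphon library for Processing is installed via the Contribution Manager; the server name "Processing Syphon" is arbitrary, and the particle drawing is left as a stub:

```java
// Minimal Processing 3 sketch that publishes each frame to a Syphon server.
// Assumes the Syphon library (codeanticode.syphon) is installed.
import codeanticode.syphon.*;

SyphonServer server;

void settings() {
  size(640, 480, P3D);  // Syphon needs an OpenGL renderer (P2D or P3D)
}

void setup() {
  server = new SyphonServer(this, "Processing Syphon");  // name is arbitrary
}

void draw() {
  background(0);
  // ...your particle/blob drawing goes here...
  server.sendScreen();  // publish the current frame as a Syphon source
}
```

Once that's running, the sketch should show up as a selectable Syphon source on the QLab side.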
Note that the OpenNI and SimpleOpenNI libraries have changed dramatically with each version, and some sample projects out there break badly on the wrong one. Also note that SimpleOpenNI is not compatible with the most recent Kinect (the one that comes with the Xbox One), but that Kinect is WAY better at tracking. I suspect you already know this based on your #2 comment.
I've been using NI mate to send a ghost-detection video stream via Syphon into Processing 3 to get the sample apps working with my Kinect (Xbox One version). NI mate is quite good and sends multiple Syphon streams as well as OSC messages for skeleton tracking, color-based multi-user tracking, etc. (all the stuff that's easy to do on the Windows side of the house but not so much on macOS). You need to pay if you want to use more than one custom Syphon stream.
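If you go the OSC route, the oscP5 library makes receiving the skeleton data in Processing pretty painless. This is just a sketch of the idea — the port 7000 and the "/Left_Hand" address with three floats are assumptions based on NI mate's defaults, so check NI mate's OSC setup panel for the exact joint names and port on your system:

```java
// Minimal Processing 3 sketch: receive skeleton joint data over OSC with oscP5.
// Assumptions (verify against NI mate's OSC panel): port 7000, and a
// "/Left_Hand" message carrying three floats (x, y, z).
import oscP5.*;

OscP5 osc;
float handX, handY, handZ;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 7000);  // listen on NI mate's default OSC port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/Left_Hand") && msg.checkTypetag("fff")) {
    handX = msg.get(0).floatValue();
    handY = msg.get(1).floatValue();
    handZ = msg.get(2).floatValue();
  }
}

void draw() {
  background(0);
  // quick sanity check: map the joint coordinates into the window
  ellipse(handX * width, handY * height, 20, 20);
}
```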
I don't think you'll be able to get your video stream out of a Windows-based solution unless you pass it via hardware or some non-Syphon software route; Syphon is macOS-only as far as I know.
I'm sure others here have better-informed thoughts on the subject, but I'll share my successes if/when I get there.
Scott