Sorry for the late response. Yes, we're using machine learning algorithms (our own implementation). The tool I mentioned is mainly a training tool that creates data for real-time classification.
Yes, we're currently using OpenNI and NiTE for skeleton tracking and hand tracking. There's also a KinectSDK option in SigmaNIL, so you can choose to make it work with KinectSDK as well. We tested two-hand signs and they work, but to make them work with the library we had to change the hand-tracking handling (something like: when the two hands come close together, interpret that image as a single shape, etc.).
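To make the "two hands close together become one shape" idea concrete, here's a minimal sketch of that kind of merge rule. This is not SigmaNIL's actual code; the threshold value and function names are hypothetical, and the hand positions stand in for whatever the tracker reports.

```python
import math

# Hypothetical threshold: below this distance the two hands are
# treated as a single shape (value chosen for illustration only).
MERGE_DISTANCE_MM = 150.0

def merge_hands(hand_a, hand_b, threshold=MERGE_DISTANCE_MM):
    """Return one merged shape center if the two tracked hand
    positions are closer than `threshold`, otherwise return the
    two hands as separate shapes."""
    if math.dist(hand_a, hand_b) < threshold:
        center = tuple((a + b) / 2 for a, b in zip(hand_a, hand_b))
        return [center]           # hands close: one merged shape
    return [hand_a, hand_b]       # hands apart: two separate shapes

# Hands 100 mm apart -> merged; 500 mm apart -> kept separate.
print(merge_hands((0.0, 0.0, 800.0), (100.0, 0.0, 800.0)))
print(merge_hands((0.0, 0.0, 800.0), (500.0, 0.0, 800.0)))
```

A real implementation would merge the segmented hand regions in the depth image rather than just the tracked points, but the gating logic is the same.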
I'll let you know about the release soon.