I've been wondering about that for a while. I don't have a Structure Sensor so I can't test it, but if you look at the example models in the app and on their website, those look like they were generated through photogrammetry, which delivers photorealistic textures but inferior meshes unless you use at least a couple dozen crisp, high-res, well-positioned photos.
The app itself also offers two different scanning modes: one where you take a bunch of still shots (photogrammetry, which is what 123D Catch uses) and another similar to Skanect. Both seem to send the data off to the cloud to be converted into a mesh, though.
The video on their site shows the Skanect-style scanning workflow but a photogrammetry-quality result, so I'm curious what that app actually does and delivers. So far I haven't been able to find decent examples or comparisons.