Hi Mr Brekel.
I've been trying to match up some of the scans with footage taken from different angles on other cameras, and I've come across some inconsistencies, particularly in the depth axis.
I was quite shocked at how long my chin and nose were when I saw them for the first time in the 3D viewer ;)
It's quite an interesting problem. I can imagine that in the future people will have to calculate lens distortion for the depth sensor, the same way you do now before camera matching or stabilising video footage.
Maybe the same algorithms could be used? With video you can hold up a chart with straight edges drawn on it and then undistort the footage until the lines come out straight. Maybe you could hold some known geometry in front of the Kinect first and adjust the 3D scan to fit?
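Just to sketch what I mean by "adjust until the lines are straight", here's a toy example in Python. Everything in it is made up for illustration: it uses a one-coefficient radial distortion model and a brute-force grid search, nothing like a real calibration pipeline, but it shows the idea of tuning a distortion parameter until a known straight edge comes out straight.

```python
import numpy as np

def distort(pts, k1):
    """Apply one-coefficient radial lens distortion to normalized points."""
    r2 = (pts ** 2).sum(axis=1)
    return pts * (1.0 + k1 * r2)[:, None]

def undistort(pts, k1, iters=25):
    """Invert the distortion by simple fixed-point iteration."""
    p = pts.copy()
    for _ in range(iters):
        r2 = (p ** 2).sum(axis=1)
        p = pts / (1.0 + k1 * r2)[:, None]
    return p

def straightness(pts):
    """Sum of squared residuals of a least-squares line fit (0 = collinear)."""
    A = np.stack([pts[:, 1], np.ones(len(pts))], axis=1)
    _, res, *_ = np.linalg.lstsq(A, pts[:, 0], rcond=None)
    return float(res[0]) if res.size else 0.0

# Synthetic "chart edge": a vertical line off-centre in the frame.
edge = np.stack([np.full(20, 0.4), np.linspace(-0.5, 0.5, 20)], axis=1)
k1_true = -0.1                      # the lens parameter we pretend not to know
observed = distort(edge, k1_true)   # what the camera would actually record

# Adjust k1 until the undistorted edge comes out straight again.
candidates = np.linspace(-0.3, 0.3, 601)
k1_est = min(candidates, key=lambda k: straightness(undistort(observed, k)))
# k1_est lands close to k1_true
```

For a depth sensor the same search could presumably run in 3D, fitting the scan of a known flat or rigid object instead of fitting image lines.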
Cheers
John