Skanect Textures are low quality compared to ItSeez3D


Bart Rogers

Dec 22, 2014, 10:20:02 AM
to ska...@googlegroups.com
I have an iPad Air 2 that takes great photos with a lot of detail. The camera is so good that there is no flash on the iPad Air 2, because it is not needed. I've used both Skanect Pro and itSeez3D to scan objects, and the difference in the textures shows how poorly the Skanect textures are captured. I also tried the Scanner app with the same result; I believe it is because it uses the same engine.

I am wondering if I am doing something wrong, or if anyone else is seeing this problem. The itSeez3D textures are photo quality, while the Skanect textures are very blurry with no detail at all. I can see the difference easily using MeshLab and Blender. I notice that itSeez3D delivers a texture map associated with the .ply file, while Skanect embeds the color information in the scanned model as what appear to be vertex colors.
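The vertex-color vs. texture-map difference can be confirmed straight from the PLY header. A quick Python sketch — the property and comment names follow common PLY conventions (e.g. MeshLab output), and the two sample headers below are illustrative, not dumps of actual Skanect or itSeez3D files:

```python
# Distinguish per-vertex color from texture-mapped PLY files by
# inspecting the header (property/comment names are common conventions;
# your files may differ).

def ply_color_mode(header_text):
    """Return 'texture-map', 'vertex-color', or 'none' for a PLY header."""
    has_vertex_color = False
    has_texture = False
    for line in header_text.splitlines():
        parts = line.strip().split()
        if not parts:
            continue
        # Per-vertex color looks like: "property uchar red" (green/blue follow)
        if parts[0] == "property" and parts[-1] in ("red", "green", "blue"):
            has_vertex_color = True
        # Texture maps show up as a "comment TextureFile foo.jpg" line or UV props
        if parts[0] == "comment" and "TextureFile" in parts:
            has_texture = True
        if parts[0] == "property" and parts[-1] in ("texture_u", "texture_v", "s", "t"):
            has_texture = True
    if has_texture:
        return "texture-map"
    return "vertex-color" if has_vertex_color else "none"

# Illustrative header in the style described for Skanect (embedded vertex colors):
skanect_style = """ply
format binary_little_endian 1.0
element vertex 120000
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header"""

# Illustrative header in the style described for itSeez3D (external texture map):
itseez_style = """ply
format ascii 1.0
comment TextureFile model.jpg
element vertex 80000
property float x
property float y
property float z
end_header"""

print(ply_color_mode(skanect_style))  # vertex-color
print(ply_color_mode(itseez_style))   # texture-map
```

Opening the .ply in a text editor (or reading just the header bytes) is enough to see which scheme a given file uses.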

Is there somewhere in the source that controls how the textures are captured and mapped? Are they averaged over too wide a range? Can someone shed some light on this stark difference between a free program and the paid Skanect Pro?

I've attached 3 screenshots from Blender:
vertex.png is the vertex edit view of the cup from Skanect
texture.png is the texture edit view of the cup from Skanect
itseez3D.png is the same cup from the itSeez3D app.

Thanks all.

B
itseez3D.png
texture.png
vertex.png

Giaplay

Jan 5, 2015, 10:03:19 AM
to ska...@googlegroups.com
I have also tried to find a solution to this on the forum. I think itSeez3D maps the texture onto the mesh one photo at a time, while Skanect overlays multiple raw photos onto each vertex (or face?) layer by layer. So if the sensor's alignment during the scan is not precise, the blended photo mapping will always be blurry. If you take a single-angle scan you will get an almost photo-quality texture. Just don't move :P
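If that theory is right (the internals are not public, so this is only an assumption), the blur falls naturally out of averaging misaligned samples. A toy 1D sketch:

```python
# Toy illustration of the blurring effect described above. Assumption: the
# pipeline averages color samples from many frames per vertex. When frame
# alignment drifts by a pixel or two, the averaged color at an edge mixes
# both sides of the edge -> blur.

def sample(texture, pos):
    """Read one 'pixel' of a 1D texture, clamped at the borders."""
    pos = max(0, min(len(texture) - 1, pos))
    return texture[pos]

def fused_color(texture, true_pos, misalignments):
    """Average the color seen at slightly wrong positions, one per frame."""
    samples = [sample(texture, true_pos + d) for d in misalignments]
    return sum(samples) / len(samples)

# A sharp black/white edge:
edge = [0, 0, 0, 0, 255, 255, 255, 255]

perfect = fused_color(edge, 3, [0, 0, 0])    # all frames perfectly aligned
jittered = fused_color(edge, 3, [-1, 0, 2])  # tracking drift of 1-2 pixels

print(perfect)   # 0.0  -> the edge stays sharp
print(jittered)  # 85.0 -> a misaligned sample bleeds white across the edge
```

A single-frame texture (one photo per surface region, the itSeez3D-style approach described above) avoids this averaging entirely, which matches the "don't move" observation.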

Bart Rogers wrote on Monday, December 22, 2014 at 11:20:02 PM UTC+8:

Giaplay

Jan 8, 2015, 12:11:37 AM
to ska...@googlegroups.com


There is another way: by editing the Skanect raw data manually, you can get a slightly better texture map.


Giaplay wrote on Monday, January 5, 2015 at 11:03:19 PM UTC+8:

Giaplay

Jan 8, 2015, 12:19:12 AM
to ska...@googlegroups.com
The right image is the original and the left is the edited version.

Giaplay wrote on Thursday, January 8, 2015 at 1:11:37 PM UTC+8:

Voudas Toast-eater

Jan 8, 2015, 4:56:18 PM
to ska...@googlegroups.com
Can you explain your workflow?

Roel Veldhuyzen

Jan 14, 2015, 4:40:36 AM
to ska...@googlegroups.com
I have been wondering about that for a while. I don't have a Structure Sensor so I can't test it, but if you look at the example models in the app and on their website, it looks like those were generated through photogrammetry, which delivers photorealistic textures but inferior meshes unless you use at least a couple dozen crisp, high-res, well-positioned photos.
The app itself also offers two different ways of scanning: one where you take a bunch of still shots (photogrammetry, which is what 123D Catch uses) and one similar to Skanect, but both seem to send data off to the cloud to be converted into a mesh.
The video on their site shows the Skanect way of scanning yet gives a photogrammetry-quality result, so I'm curious what that app actually does and delivers. So far I haven't been able to find decent examples and comparisons.

TechLite

Jan 19, 2015, 7:57:53 PM
to ska...@googlegroups.com
That's odd. I looked in the scan directory and found two directories: meshes and images. The images directory has lots of folders, none of which contain any images in a recognizable format. I am running Skanect 1.7 (but since there is no way to see the version number, I can't be certain). It is the same as what is available as I write this.

So where are the raw image files located? I could build a new texture map in Blender with good raw images if they were available. My sense is that the texture images are just point data in files, not JPGs like some people mention. Were the JPG files available in earlier versions? I find that Skanect is nearly unusable for color texture work. ItSeez3D now does full-body, head-and-shoulders bust, and object scans with really good textures, and the texture UV map file is delivered with the mesh.

I'd prefer to use Skanect, but I can't find a way to get good textures with it.  Can someone offer any help in locating the raw image files?

Thanks.

Roel Veldhuyzen

Jan 20, 2015, 2:17:53 AM
to ska...@googlegroups.com
The images folder should have an "unknown" folder, which contains a whole bunch of "view-#####" folders. Each of those has a "raw" folder, and that "raw" folder contains the captured image.
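A small sketch to gather every captured color frame from a scan folder, following the layout described above. The folder names (images, view-#####, raw, color.jpg) are taken from this thread, so treat them as assumptions; not every view folder necessarily contains a color frame:

```python
# Collect every raw/color.jpg under a Skanect scan folder, assuming the
# layout discussed in this thread: <scan>/images/<sensor>/view-#####/raw/
import os
import tempfile

def find_color_frames(scan_root):
    """Return sorted paths of all raw/color.jpg files under a scan folder."""
    frames = []
    for dirpath, dirnames, filenames in os.walk(os.path.join(scan_root, "images")):
        if os.path.basename(dirpath) == "raw" and "color.jpg" in filenames:
            frames.append(os.path.join(dirpath, "color.jpg"))
    return sorted(frames)

# Demo with a synthetic folder tree mimicking that layout:
root = tempfile.mkdtemp()
raw1 = os.path.join(root, "images", "net", "view-000001", "raw")
os.makedirs(raw1)
open(os.path.join(raw1, "color.jpg"), "w").close()
open(os.path.join(raw1, "depth16bits.lzf"), "w").close()
# A view that only got a depth frame, no color:
raw2 = os.path.join(root, "images", "net", "view-000002", "raw")
os.makedirs(raw2)
open(os.path.join(raw2, "depth16bits.lzf"), "w").close()

frames = find_color_frames(root)
print(len(frames))  # 1
```

Pointed at a real scan folder, the resulting list could be fed to Blender or any photogrammetry tool for re-texturing.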

Bart

Jan 20, 2015, 7:04:37 PM
to ska...@googlegroups.com
Thanks Roel.  I had found these folders. The "unknown" folder is named "net". In each view-000##### folder, I see 7 files (calibration.yml, grabber-type, rgb-pose.avs, sensor_motion.txt, sensor_properties.txt, serial, and timestamp) plus a "raw" folder.  The raw folder has one file named "depth16bits.lzf" which appears to be a binary file of some sort. Where in all of this is the image? (I'm using Skanect 1.7).
Bart


Roel Veldhuyzen

Jan 21, 2015, 2:33:52 AM
to ska...@googlegroups.com
Bart, as you can see in the screenshot I posted, every "raw" folder should have not only a depth16bits.lzf file but also a color.jpg file, which is the photo taken at that frame.
Although this is with scans done with a wired scanner; maybe the iPad app stores the images somewhere on the device?


Nicolas Burrus

Jan 21, 2015, 6:38:47 AM
to Roel Veldhuyzen, ska...@googlegroups.com
Hi Bart,

If you are using Uplink, make sure you enabled Depth+Color transmission, and even then, only some view-XXXXXX folders will have raw/color.jpg files, because Uplink sends them sporadically to reduce the network traffic.

--
You received this message because you are subscribed to the Google Groups "Skanect" group.
To unsubscribe from this group and stop receiving emails from it, send an email to skanect+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Bart

Jan 21, 2015, 7:58:07 PM
to ska...@googlegroups.com, ro...@roelveldhuyzen.nl
Thanks so much, Nicolas.
I do use color, and that's why I'm so disappointed. Apparently I had looked in all the wrong folders, ones that did not have the photo file "color.jpg" alongside the .lzf file. I did a search for *.jpg and there they were.

So that leaves the question: why such low resolution on the iPad? The 640 x 480 RGB size seems to produce terrible textures. I also noticed that the photos do not do any exposure adjustment for light. Is there any way to use the iPad camera in its native mode with automatic settings? If the color snapshots are too big, you could wait until the scan is stopped and then transmit the color photo files separately. Just a thought for improving the textures. I really don't like having to use itSeez3D, but I haven't been able to get a decent texture with Skanect Pro. Any release plans for updates that you can share?
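For anyone wanting to verify the frame resolution mentioned above, the dimensions can be read straight from the color.jpg headers with only the stdlib. A minimal SOF-marker scan, not a full JPEG parser; the synthetic bytes below are illustrative, not a real captured frame:

```python
# Read pixel dimensions from JPEG bytes by scanning for a start-of-frame
# (SOF0..SOF3) marker. Minimal sketch; real files usually have APP0/JFIF
# segments before the SOF, which the length-based skip below walks past.
import struct

def jpeg_size(data):
    """Return (width, height) from JPEG bytes, or None if no SOF marker found."""
    i = 2  # skip the SOI marker (FF D8)
    while i + 9 <= len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        if 0xC0 <= marker <= 0xC3:  # SOF0..SOF3 carry the frame dimensions
            height, width = struct.unpack(">HH", data[i + 5:i + 9])
            return width, height
        # Other segments: skip using their 2-byte big-endian length field
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        i += 2 + seg_len
    return None

# Synthetic SOI + SOF0 header for a 640x480 frame, just for illustration:
fake_vga_jpeg = bytes([0xFF, 0xD8, 0xFF, 0xC0, 0x00, 0x11, 0x08,
                       0x01, 0xE0,   # height = 480
                       0x02, 0x80])  # width  = 640
print(jpeg_size(fake_vga_jpeg))  # (640, 480)
```

On a real scan, `jpeg_size(open(path, "rb").read())` over each found color.jpg shows exactly what resolution Uplink delivered.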

Thanks for the tip on where to find the color files. 

Bart

siva subramaniyem

Apr 5, 2017, 3:44:33 AM
to Skanect
Hi Giaplay,
Were you able to find out why the texture quality output differs between the Skanect and itSeez3D apps? It would be really helpful if you could share any of your findings in this area.

Thanks
Siva

Nicolas Burrus

Apr 5, 2017, 4:00:34 AM
to Bart, ska...@googlegroups.com, Roel Veldhuyzen
Hi Bart,

With Skanect 1.9 and the high-resolution color frames setting, you will get color images with 2x more resolution through Uplink. Sending even higher-resolution textures in a batch after the scan is indeed an interesting approach to solving the bandwidth issue, but we do not have such a feature yet.

The color exposure / white balance settings are locked once the scan starts by default, to improve color consistency, but this can be changed with the "Uplink Color Gain" setting. If you choose "Automatic", the camera will keep auto-exposing. It is usually recommended to use somewhat uniform lighting, though, and to start the scan at the position that has the most representative lighting conditions.


Tobby Ryan

Apr 9, 2017, 3:14:05 PM
to Skanect, 3dke...@gmail.com, ro...@roelveldhuyzen.nl
My personal opinion on itSeez3D: the $6.99 export fee for each model is a turn-off, which is why I abandoned it. Also, the "face" recognition used to create the bounding box fails most of the time when trying to find a person's face; it was better before they changed things. I will stick with Skanect. Now, if the offline "Scanner" app would save, or had height x2 bounding, it would be even more awesome.