Semantic Segmentation: how to set class legend for new classes?

Rob Trahms

Nov 22, 2016, 10:36:23 AM
to DIGITS Users
Hi all -
I have successfully followed the PASCAL-VOC semantic segmentation example, and can test images with it - fantastic!
I would like to apply this type of training to a whole new set of data, but am trying to figure out where the classes are defined.
The PASCAL-VOC ground truth consists of mask images colorized by class, and the classes are the roughly 20 predefined generic classes of the PASCAL-VOC dataset (e.g. 0 = background, 1 = aeroplane, etc.).

If I wanted to create a new set of classes mapped to different colors, where would I do that?  Help is much appreciated!
Rob

Rob Trahms

Nov 22, 2016, 10:42:01 AM
to DIGITS Users
Funny, right after I posted this, I found the optional label field in the create segmentation dataset section (pointing to pascal-voc-classes.txt). 
Thanks!
Rob

Rob Trahms

Nov 22, 2016, 10:58:29 AM
to DIGITS Users
Now that I have found pascal-voc-classes.txt, I am trying to figure out how the classes are mapped to colors.
Is it just based on the color number, lowest to highest?  I can see the following (in order):
background = "black" (0)
aeroplane = "red" (0x800000)
bicycle = "green" (0x008000)
bird = "olive" (0x808000)
boat = "blue" (0x000080)
bottle = "purple" (0x800080)
...and so on. It might be based on an HSV spectrum; I don't know.

What's the pattern?


Rob

Greg Heinrich

Nov 22, 2016, 11:25:43 AM
to DIGITS Users
Hello, the idea is to have adjacent indices map to very different colors. The details of how this is done can be found in the VOCdevkit in VOCcode/VOClabelcolormap.m.
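
For reference, here is a rough Python port of the bit-spreading logic in VOClabelcolormap.m (an untested sketch; the .m file is the authoritative version). Each class index has its bits distributed across the high-order bits of the three color channels, which is why adjacent indices get very different colors:

import numpy as np

def voc_colormap(num_classes=256):
    """Return the PASCAL VOC palette as a (num_classes, 3) uint8 array."""
    def bitget(value, bit):
        return (value >> bit) & 1

    cmap = np.zeros((num_classes, 3), dtype=np.uint8)
    for i in range(num_classes):
        r = g = b = 0
        c = i
        for shift in range(7, -1, -1):
            r |= bitget(c, 0) << shift   # bit 0 of the index feeds red
            g |= bitget(c, 1) << shift   # bit 1 feeds green
            b |= bitget(c, 2) << shift   # bit 2 feeds blue
            c >>= 3
        cmap[i] = (r, g, b)
    return cmap

# Indices 0..5 come out as black, dark red, green, olive, navy blue and
# purple, matching the legend Rob listed above.
print(voc_colormap()[:6])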

Rob Trahms

Nov 22, 2016, 1:28:57 PM
to DIGITS Users
Thanks - this helps.  I looked at VOClabelcolormap.m and found the algorithm that generates the well-separated colors from the class index.  Great!
To use a secret-decoder-ring analogy: if this is how classes are encoded when the ground-truth images are created, where is the decoding step that makes the legend display correctly in DIGITS?
In other words, how is pascal-voc-classes.txt read in? Is it only read during testing to show the legend, and is that where classes are decoded for display?

Apologies for all the questions - I think I am on the verge of getting this to work with our own data, and knowing this would help us get the training (and testing) going.
Rob

Rob Trahms

Nov 22, 2016, 1:32:59 PM
to DIGITS Users
Actually, to make the question simpler: if I were to provide my own label file in the optional field, would DIGITS use the same algorithm to map colors to those custom classes?
Thanks,
Rob

Greg Heinrich

Nov 22, 2016, 3:03:42 PM
to DIGITS Users
Label images in PASCAL VOC are palette images, meaning that pixel values are indices into a palette (the palette is embedded in the PNG file). Therefore, DIGITS does not need to do any conversion when creating the dataset: labels in the dataset hold exactly the same data as the original label images. Similarly, at the output of the network, DIGITS reads the predicted classification of every pixel to figure out its class ID, and then uses the palette from the PASCAL VOC labels to show a visualization of the segmentation in the same colors. If you were to provide your own class names, DIGITS would proceed in the same way. DIGITS does not need to know how the PASCAL VOC folks created their color map because that information is already provided in the palette of each label image.
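
To make that concrete, here is a minimal sketch (assuming Pillow and NumPy are available; the file name is just an example from the VOC SegmentationClass folder) showing that the pixel values of a VOC label are already class IDs and that the palette travels inside the PNG:

import numpy as np
from PIL import Image

label = Image.open("2007_000032.png")  # any VOC SegmentationClass label
print(label.mode)                      # 'P' -> palette (indexed) image

class_ids = np.array(label)            # pixel values are the class IDs
print(np.unique(class_ids))            # class IDs present, e.g. [0 1 255]

palette = label.getpalette()           # flat list [R0, G0, B0, R1, G1, B1, ...]
print(palette[:6])                     # [0, 0, 0, 128, 0, 0] -> background, aeroplane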

Alternatively, some datasets (such as SYNTHIA) have their label images in RGB. For those, you do need to tell DIGITS how to map colors to class IDs by providing a color map text file. The color map is used to create the label dataset. In any case, image segmentation datasets in DIGITS always have single-channel labels, where each pixel value is the ID of the target class.
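
For an RGB-labelled dataset such as SYNTHIA, the conversion boils down to something like the following sketch (illustrative only, not the actual DIGITS code; the file names are placeholders):

import numpy as np
from PIL import Image

# One "R G B" line per class, in class-ID order (row i -> class i),
# i.e. the same layout as the color map text file given to DIGITS.
color_map = np.loadtxt("synthia_colormap.txt", dtype=np.uint8)

rgb = np.array(Image.open("rgb_label.png").convert("RGB"))
class_ids = np.zeros(rgb.shape[:2], dtype=np.uint8)
for class_id, color in enumerate(color_map):
    class_ids[np.all(rgb == color, axis=-1)] = class_id

# class_ids is now a single-channel label image where each pixel value
# is the ID of the target class.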

I hope this helps.

Rob Trahms

Nov 23, 2016, 10:09:48 AM
to DIGITS Users
It does help - thank you (and how did you know I was going to train on SYNTHIA next? :) )
I am looking through the DIGITS scripts and trying to find where the "Test One Image" code is (to read how the legend is set up, how the segmentation image is displayed, etc.).  Can you direct me to that?
Thanks,
Rob

Greg Heinrich

Nov 23, 2016, 10:39:30 AM
to DIGITS Users
This is the location of the file that does the visualization of an image segmentation network in DIGITS: https://github.com/NVIDIA/DIGITS/blob/digits-5.0/digits/extensions/view/imageSegmentation/view.py

Rob Trahms

Nov 23, 2016, 10:40:07 AM
to DIGITS Users
Actually, I think I found it - under extensions/view/imageSegmentation/view.py.
Rob

Navjot Kaur

Jan 5, 2017, 5:24:50 AM
to DIGITS Users
How did you define the color map text file for the SYNTHIA dataset?

Greg Heinrich

Jan 5, 2017, 6:10:43 AM
to DIGITS Users
For SYNTHIA_RAND_CVPR_2016 this is given in the README.txt file. The file that you need to pass to DIGITS should have these contents:

0   0   0
128 128 128
128 0   0
128 64  128
0   0   192
64  64  128
128 128 0
192 192 128
64  0   128
192 128 128
64  64  0
0   128 192
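
For what it's worth, the line number in that file is the class ID: the first line is class 0, the second is class 1, and so on. The little sketch below pairs each row with a class name and prints the resulting legend; the names are only my reading of the README ordering, so double-check them against README.txt, and the file name is a placeholder:

class_names = ["void", "sky", "building", "road", "sidewalk", "fence",
               "vegetation", "pole", "car", "sign", "pedestrian", "cyclist"]

with open("synthia_colormap.txt") as f:
    for class_id, (name, line) in enumerate(zip(class_names, f)):
        r, g, b = (int(v) for v in line.split())
        print(f"{class_id:2d}  {name:12s}  rgb({r}, {g}, {b})")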

Ibrahim hassan

Nov 24, 2017, 4:30:26 AM
to DIGITS Users
Hello,
I am getting this error when I pass that file to DIGITS:
invalid literal for int() with base 10: 'Last'

Greg Heinrich

Nov 24, 2017, 8:06:54 AM
to DIGITS Users
That sounds legit: "Last" is not a number. Double-check the file contents.
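
For anyone who hits the same thing later: that error means DIGITS found the token "Last" where it expected an integer, most likely because some text from the README (e.g. a header line) was copied into the color map file along with the numbers. A quick pre-check like this (hypothetical file name) points at the offending line:

with open("synthia_colormap.txt") as f:
    for line_no, line in enumerate(f, start=1):
        tokens = line.split()
        if len(tokens) != 3 or not all(t.isdigit() for t in tokens):
            print(f"line {line_no} is not three integers: {line.rstrip()!r}")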