· Pixel Data must be decompressed and represented as Little Endian
· Monochrome Photometric Interpretations will be represented as MONOCHROME2
· All Pixel Data will be represented as an unsigned integer
· Color Pixel Data will always be represented color-by-pixel (R1, G1, B1, R2, G2, B2, …)
· All color representations will be converted to RGB per Part 3 Section C.7.6.3; the array shape (last dimension of 1 vs. 3) then distinguishes monochrome from color
· An error should be thrown for images with lossy compression whose data cannot be retrieved in this format, for example urn:dicom:wado:0007 - The requested instance(s) cannot be provided in the requested format or transfer syntax.
· For 3D arrays, Pixel Spacing and Slice Thickness must be the same for all slices or frames; the values are returned in the header
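To make the first few rules concrete, here is a minimal sketch in NumPy of the photometric and planar-configuration normalization. The function name normalize_frame and its parameters are hypothetical, not part of any standard API, and decompression is assumed to have already happened:

```python
import numpy as np

def normalize_frame(pixels: np.ndarray,
                    photometric: str,
                    planar_configuration: int = 0) -> np.ndarray:
    """Sketch of the normalization rules above: output follows the
    MONOCHROME2 convention, with color data stored color-by-pixel."""
    if photometric == "MONOCHROME1":
        # Invert so the minimum value maps to black (MONOCHROME2 convention)
        pixels = pixels.max() - pixels
    elif photometric == "RGB" and planar_configuration == 1:
        # Color-by-plane (R1 R2 ... G1 G2 ... B1 B2 ...) -> color-by-pixel
        # (R1, G1, B1, R2, G2, B2, ...): move the plane axis to the end
        pixels = np.moveaxis(pixels, 0, -1)
    return pixels
```

A full implementation would also cover YBR and palette-color inputs per Part 3 Section C.7.6.3; this only shows the two simplest cases.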
Open issues to discuss are:
· How will the conversion to unsigned integers be handled?
· Do we use raw pixel data representations only, or pixel data rendered with machine-applied LUTs?
· How will pixel padding be defined?
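For the first open question, one possible approach (an assumption, not a settled design; to_unsigned and bits_stored are illustrative names) is to shift signed data into the unsigned range using Bits Stored (0028,0101):

```python
import numpy as np

def to_unsigned(pixels: np.ndarray, bits_stored: int = 16) -> np.ndarray:
    """Possible handling of the signed-to-unsigned conversion: shift
    signed values up by 2**(bits_stored - 1) into the unsigned range.
    A Pixel Padding Value (0028,0120), if present, would need the same
    offset applied so padded pixels can still be recognized."""
    if np.issubdtype(pixels.dtype, np.signedinteger):
        offset = 2 ** (bits_stored - 1)
        pixels = (pixels.astype(np.int32) + offset).astype(np.uint16)
    return pixels
```

This preserves relative pixel values but changes absolute ones, so any Rescale Intercept (0028,1052) would need a matching adjustment.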
I was hoping for some feedback from this group on items you may want to see, issues you have encountered, or other best practices I need to ensure are addressed.
I am thinking of a structure something like the following:
· number of dimensions – 3 for 2D monochrome or RGB, 4 for 3D (multi-frame or series) monochrome or RGB
· shape:
  2D image = (0028,0010) Rows, (0028,0011) Columns, (1 = monochrome or 3 = RGB)
  Multi-frame = Frames, (0028,0010) Rows, (0028,0011) Columns, (1 = monochrome or 3 = RGB)
  Series = Instances, (0028,0010) Rows, (0028,0011) Columns, (1 = monochrome or 3 = RGB)
· size – total number of elements in the array
· data type – from Bits Stored (0028,0101); for example uint8 or uint16 (8, 12, or 16 bits stored, with 12-bit data promoted to uint16, since all pixel data is unsigned)
· item size – the size (in bytes) of each element of the NumPy array
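The fields above map directly onto standard NumPy array attributes; a short sketch (the array shape and header keys here are illustrative, not a proposed wire format):

```python
import numpy as np

# Hypothetical 3D monochrome series: 10 instances of 512 x 512, 1 sample per pixel
arr = np.zeros((10, 512, 512, 1), dtype=np.uint16)

header = {
    "ndim": arr.ndim,          # 4 for a 3D (multi-frame or series) image
    "shape": arr.shape,        # (Instances, Rows, Columns, 1 or 3)
    "size": arr.size,          # total number of elements in the array
    "dtype": str(arr.dtype),   # derived from Bits Stored (0028,0101)
    "itemsize": arr.itemsize,  # size in bytes of each element
}
print(header)
```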
To represent pixel spacing, there need to be additional header fields:
· Pixel Spacing (0028,0030)
· Slice Thickness (0018,0050)
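Since the 3D rule above requires uniform spacing across slices, the header could be built with a validation step along these lines (spacing_header is a hypothetical helper, shown only as a sketch):

```python
import numpy as np

def spacing_header(pixel_spacings, slice_thicknesses):
    """Verify Pixel Spacing (0028,0030) and Slice Thickness (0018,0050)
    are identical for all slices or frames, then return the single
    values to be placed in the header."""
    spacings = np.asarray(pixel_spacings, dtype=float)      # shape (N, 2): row, column
    thicknesses = np.asarray(slice_thicknesses, dtype=float)
    if not (np.all(spacings == spacings[0]) and np.all(thicknesses == thicknesses[0])):
        raise ValueError("Pixel Spacing / Slice Thickness must match across all slices")
    return {"PixelSpacing": spacings[0].tolist(),
            "SliceThickness": float(thicknesses[0])}
```

A mismatch would map naturally onto the same kind of error response as the lossy-compression case above.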
Any feedback is welcome.