I am trying to reconcile the above with my findings (and the myriad of partial and/or vague explanations found in the wild).
E.g., the image:
Code:
$ dcmj2pnm -v -im -o downloads/CT-MONO2-16-ankle
I: reading DICOM file: downloads/CT-MONO2-16-ankle
I: preparing pixel data
I: dumping image parameters
I: filename : downloads/CT-MONO2-16-ankle
I: transfer syntax : Little Endian Implicit
I: SOP class : SecondaryCaptureImageStorage
I: SOP instance UID : 1.2.840.113619.2.1.2411.1031152382.365.1.736169244
I: columns x rows : 512 x 512
I: bits per sample : 17
I: color model : MONOCHROME2
I: pixel aspect ratio : 1.00
I: number of frames : 1 (1 processed)
I: VOI LUT function : <default>
I: VOI windows in file : 1
I: - <no explanation>
I: VOI LUTs in file : 0
I: presentation shape : <default>
I: overlays in file : 0
I: maximum pixel value : 3056
I: minimum pixel value : -992
I: cleaning up memory
$
After extracting the PNG:
Code:
$ dcmj2pnm downloads/CT-MONO2-16-ankle t.png
Code:
$ identify -verbose t.png
Image: t.png
...
Colorspace: Gray
Depth: 16-bit
Channel depth:
gray: 16-bit
Channel statistics:
Pixels: 262144
Gray:
min: 32800 (0.500496)
max: 36848 (0.562264)
...
So, is the encoding of pixel values in monochrome 16-bit PNGs simply a constant offset applied to each pixel's stored value from the original DICOM file? Again, my understanding was that a modality transformation always takes place upon extraction, and for CT that means transforming to HU values before adding the offset.
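For what it's worth, the two dumps above do look consistent with a pure constant offset; here is a quick sanity check using only the numbers quoted above (the interpretation of the offset in the comments is my assumption, not something the tools printed):

```python
# Values copied from the dcmj2pnm dump and from identify -verbose above.
dcm_min, dcm_max = -992, 3056      # min/max pixel value reported by dcmj2pnm
png_min, png_max = 32800, 36848    # min/max gray value reported by identify

# The two spans are identical, so no window/level rescaling was applied,
# only a shift:
assert png_max - png_min == dcm_max - dcm_min == 4048

# The constant offset relative to the values dcmj2pnm reports:
offset = png_min - dcm_min
print(offset)  # 33792
```

Note that 33792 = 32768 + 1024; my (unverified) reading is that the signed values are biased by 32768 to fit the unsigned 16-bit PNG range, with the remaining 1024 plausibly coming from the modality rescale.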
Thanks!