JPEG Lossless compression with bit-depth=12 possible?

All other questions regarding DCMTK

David_F
Posts: 9
Joined: Mon, 2017-08-07, 16:06

JPEG Lossless compression with bit-depth=12 possible?

#1 Post by David_F »

Hello DCMTK Users,
I have a question regarding dcmcjpeg.
I would like to know if and how it is possible to compress a DICOM image (BitsAllocated=16, BitsStored=12, HighBit=11, PixelRepresentation=0, TS=Explicit VR Little Endian, PI=MONOCHROME2)
to JPEG Lossless SV1 (TS: 1.2.840.10008.1.2.4.70) using the 12-bit encoder (DJCompressIJG12Bit) from DCMTK.

I tried to do so with the latest DCMTK 3.6.2, but I was only able to compress images losslessly with the 16-bit encoder. I tried several unsigned 12-bit MR and CT images, but dcmcjpeg seems to forbid JPEG Lossless compression in combination with the 12-bit encoder. The option --bits-force-12 did not help either. I could, however, trigger the 12-bit encoder in combination with the JPEG Extended (Process 2 & 4) target transfer syntax.

Therefore my question: is it possible to compress 12-bit (BitsStored=12) images to the JPEG Lossless SV1 transfer syntax using DCMTK's 12-bit encoder (DJCompressIJG12Bit), and if so, which command-line arguments should I use?

Thanks in advance!
David

David_F
Posts: 9
Joined: Mon, 2017-08-07, 16:06

Re: JPEG Lossless compression with bit-depth=12 possible?

#2 Post by David_F »

Or let me pose the question differently:

Consider decompression of an unsigned JPEG Lossless (.70 syntax) compressed DICOM image with BitsAllocated=16, BitsStored=12, HighBit=11, PI=MONOCHROME2.
Does it matter for the choice of the correct decompressor whether the image was compressed considering the complete pixel cell, or only the part of the pixel cell making up the pixel data (here, 12 bits)?

Or, in other words, also taking into consideration the input from this related post on the DICOM forum:
https://groups.google.com/forum/#!searc ... TtHgeqUnIJ. Are the relevant bits defined by BitsStored and HighBit decompressed correctly if one always chooses a decompressor that assumes the image was compressed in "raw" mode (i.e., the complete pixel cell), which in this case would be the 16-bit decompressor, even if the image has in fact been compressed in "cooked" mode (i.e., only the part of the pixel cell making up the pixel data)?
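To make the "part of the pixel cell making up the pixel data" concrete, here is a small Python sketch (illustrative only, not DCMTK code) that masks out the bits a DICOM pixel cell actually uses, as defined by BitsStored and HighBit:

```python
def stored_bits_mask(bits_stored: int, high_bit: int) -> int:
    """Mask covering the bits of the pixel cell that hold pixel data.

    Per DICOM, the stored value occupies bits
    (high_bit - bits_stored + 1) .. high_bit of the pixel cell.
    """
    low_bit = high_bit - bits_stored + 1
    return ((1 << bits_stored) - 1) << low_bit

def extract_pixel_value(cell: int, bits_stored: int, high_bit: int) -> int:
    """Extract the stored (unsigned) pixel value from a pixel cell."""
    low_bit = high_bit - bits_stored + 1
    return (cell & stored_bits_mask(bits_stored, high_bit)) >> low_bit

# BitsAllocated=16, BitsStored=12, HighBit=11: the value sits in bits 0..11.
cell = 0xF5A3          # upper 4 bits are unused/overlay bits in the pixel cell
value = extract_pixel_value(cell, bits_stored=12, high_bit=11)
print(hex(value))      # 0x5a3: only the 12 "relevant" bits survive
```

A "cooked"-mode compressor would feed only these extracted 12-bit values to the codec, while a "raw"-mode compressor would feed the full 16-bit cells.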

And another question:
Does anybody know which command-line arguments to use with dcmcjpeg (preferably 3.6.2) to force "cooked"-mode compression?

Best wishes,
David

Marco Eichelberg
OFFIS DICOM Team
Posts: 1437
Joined: Tue, 2004-11-02, 17:22
Location: Oldenburg, Germany

Re: JPEG Lossless compression with bit-depth=12 possible?

#3 Post by Marco Eichelberg »

The bit depth is encoded in the JPEG bitstream, so the decoder can (and has to) determine from there what to do.
Since the process you are referring to is lossless, in both cases the pixel data will be compressed without modification.
With regard to the fact that DCMTK selects the 16-bit process in this case, the discussion continued offline (by mail): the probable reason is the presence of Rescale Slope/Intercept, which causes DCMTK to treat the sample image in question as a signed 13-bit image and thus select the 16-bit encoder.
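The effect described here can be illustrated with a short Python sketch. The rescale values below are hypothetical (a typical CT-style intercept of -1024, not taken from the sample image in question); the point is that applying Rescale Slope/Intercept to unsigned 12-bit stored values can yield a signed range that no longer fits into 12 bits:

```python
def bits_needed_signed(lo: int, hi: int) -> int:
    """Minimum two's-complement bit depth whose range covers [lo, hi]."""
    bits = 1
    while not (-(1 << (bits - 1)) <= lo and hi <= (1 << (bits - 1)) - 1):
        bits += 1
    return bits

# Unsigned 12-bit stored values cover 0..4095.
stored_min, stored_max = 0, 4095

# Hypothetical CT-style rescale (illustrative values only):
slope, intercept = 1.0, -1024.0
real_min = stored_min * slope + intercept   # -1024.0
real_max = stored_max * slope + intercept   #  3071.0

# -1024..3071 does not fit into signed 12 bits (-2048..2047), so the
# rescaled data must be treated as (at least) a signed 13-bit image,
# which in turn rules out the 12-bit encoder.
print(bits_needed_signed(int(real_min), int(real_max)))  # 13
```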

David_F
Posts: 9
Joined: Mon, 2017-08-07, 16:06

Re: JPEG Lossless compression with bit-depth=12 possible?

#4 Post by David_F »

Thanks for your answer. This was very helpful.

Just one question to make sure that I understood you correctly in your previous post. Did you actually want to say instead of:

"Since the process you are referring to is lossless, in both cases the pixel data will be compressed without modification."

this:

"Since the process you are referring to is lossless, in both cases the pixel data will be decompressed without modification."

?

In the meantime I managed to force DCMTK 3.6.2 to compress a whole range of images of various modalities (all grayscale, unsigned and signed, with and without Rescale Intercept/Slope, BitsStored=12 as well as 10) using the DJCompressIJG12Bit encoder, just to see what the result of a subsequent decompression would be, using both DJDecompressIJG12Bit and DJDecompressIJG16Bit as decoder.

With respect to the bits represented by Bits Stored and High Bit, the resulting pixel/user values were always correct; I could never see a difference between using DJDecompressIJG12Bit and DJDecompressIJG16Bit. Leaving aside overlay bits, are there any tricky cases where it would make a difference which decompressor one uses (referring only to decompression of lossless (.70 syntax) compressed images)?
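That both decoders reconstruct the samples exactly is what lossless predictive coding guarantees. As a toy illustration (plain Python, not the IJG codec), JPEG lossless with selection value 1 predicts each sample from its left neighbor and transmits the residual; decoding inverts this exactly, regardless of whether the samples are declared as 12- or 16-bit:

```python
def sv1_encode(row, precision):
    """Toy JPEG-lossless SV1 encoder for one row: predictor = left neighbor.

    The first sample is predicted as 2**(precision - 1), as in the JPEG
    lossless process; residuals are taken modulo 2**precision.
    """
    mod = 1 << precision
    pred = 1 << (precision - 1)
    residuals = []
    for x in row:
        residuals.append((x - pred) % mod)
        pred = x
    return residuals

def sv1_decode(residuals, precision):
    """Exact inverse of sv1_encode: add each residual back onto the predictor."""
    mod = 1 << precision
    pred = 1 << (precision - 1)
    row = []
    for r in residuals:
        x = (pred + r) % mod
        row.append(x)
        pred = x
    return row

row = [100, 101, 99, 4000, 4095, 0]   # unsigned 12-bit samples
for precision in (12, 16):
    assert sv1_decode(sv1_encode(row, precision), precision) == row
print("exact reconstruction at both precisions")
```

This sketch says nothing about which precision DCMTK's encoder selection logic picks; it only shows why, once the bitstream is written, a conforming lossless decoder at either precision recovers the original samples bit-for-bit.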

Regarding lossless compression with DCMTK: I used an unsigned grayscale MR image without Rescale Intercept/Slope, with BitsAllocated=16 and BitsStored=12. DJCompressIJG16Bit is always chosen in combination with the command-line args: dcmcjpeg +e1 ....
As far as I can see, this is because in encodeTrueLossless() mode, BitsAllocated (here 16) is used as bitsPerSample when calling createEncoderInstance().

Best wishes and greetings,
David

