short *pDataInt16 = new short[memsize/sizeof(short)];
unsigned long m_imgAllSize = 256*256;
[...]
result = dataset->putAndInsertSint16Array(DCM_PixelData, (const Sint16*)pDataInt16, m_imgAllSize);
if (result.bad())
    AfxMessageBox((LPCTSTR)result.text());
As you can read in the DICOM standard (part 6), "Pixel Data" can only be stored as OB (unsigned 8 bit) or OW (unsigned 16 bit). See part 5 for details on the encoding of pixel data.
As I read in the 2007 edition of the DICOM standard, both OB (a string of bytes) and OW (a string of 16-bit words) specify that "the encoding of the contents is specified by the negotiated Transfer Syntax".
With Pixel Representation (0028,0103) equal to 0001H, the pixel data can now be encoded as two's complement integers. And I did see such an example file in David Clunie's enhanced DICOM test dataset.
So is now the time to allow dataset->putAndInsertSint16Array(DCM_PixelData, ...)?
Last edited by Sic on Sat, 2011-02-12, 15:15, edited 2 times in total.
So is now the time to allow dataset->putAndInsertSint16Array(DCM_PixelData, ...)?
No. You have to understand that the encoding of uncompressed DICOM pixel data has two distinct levels.

The first level is the pixel cell, which is defined by BitsAllocated. The pixel data is actually a sequence of bits containing a list of cells, where each cell uses "BitsAllocated" bits, directly concatenated without any pad bits. This sequence of bits is then split into 8-bit or 16-bit fields and encoded as OB or OW (which implies byte swapping on big-endian architectures for OW). Note that typically the cell size is a multiple of 8, but this is not a requirement for all DICOM SOP classes. You might actually encounter 12-bit cells, although I am not aware of any commercial system producing such images.

The second level is the cell content, as defined by BitsStored, HighBit, PixelRepresentation and, possibly, the various (60xx,yyyy) Overlay attributes. Not all bits of the cell might actually be used for pixel data; some might be used for overlay data and some might be empty (used as pad bits). Only the bits actually used for pixel data (as defined by BitsStored and HighBit) must be interpreted as an integer number of "BitsStored" bits, either unsigned or two's complement, and all further transformations are applied to this value. In particular, you might very well find a 13-bit two's complement integer number, and some manipulation will be needed to actually make that look like a signed integral number on whatever platform you use.

For display purposes, all of this complexity is hidden when you use the dcmimage library (class DicomImage and related classes), for a good reason. It is not easy to completely understand and to correctly implement this rather complex encoding.
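To illustrate the second level, the extraction and sign extension described above can be sketched in plain C++. This is only an illustrative helper, not a DCMTK function; the name `decodeCell` and its parameter layout are my own for the example:

```cpp
#include <cassert>
#include <cstdint>

// Extract the stored pixel value from a 16-bit pixel cell, given
// BitsStored and HighBit, and sign-extend it when PixelRepresentation
// is 1 (two's complement). Illustrative sketch only.
int32_t decodeCell(uint16_t cell, unsigned bitsStored, unsigned highBit, bool isSigned)
{
    const unsigned shift = highBit + 1 - bitsStored;          // lowest used bit
    uint32_t value = (cell >> shift) & ((1u << bitsStored) - 1u);
    if (isSigned && (value & (1u << (bitsStored - 1))))
        value |= ~((1u << bitsStored) - 1u);                  // extend the sign bit
    return static_cast<int32_t>(value);
}
```

For example, a 13-bit two's complement cell (BitsStored=13, HighBit=12) holding the bit pattern 0x1FFF decodes to -1, while the same pattern interpreted as unsigned yields 8191.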
I understand it's tricky to handle the reading side correctly, but users who ask for dataset->putAndInsertSint16Array(DCM_PixelData, ...) are most probably just trying to write out plain pixel data with no overlays etc., and I think it would be nice if they could do this.
But even now users can still write a signed integer array as pixel data with the current DCMTK interface by cheating.
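A minimal sketch of what that "cheating" looks like: reinterpret the signed buffer as unsigned and insert it as OW, since the two's complement bit pattern passes through unchanged. The DCMTK call itself is shown in the comment (assuming a `dataset` and buffer as in the first post); `asWord` is just an illustrative helper demonstrating that the bit pattern survives the cast:

```cpp
#include <cstdint>
#include <cstring>

// With DCMTK this would look like (assuming 'dataset', 'pDataInt16' and
// 'numPixels' exist, as in the original post):
//   dataset->putAndInsertUint16Array(DCM_PixelData,
//       reinterpret_cast<const Uint16*>(pDataInt16), numPixels);
//
// The cast is safe for this purpose because a two's complement Sint16
// and a Uint16 share the same 16-bit pattern:
uint16_t asWord(int16_t s)
{
    uint16_t w;
    std::memcpy(&w, &s, sizeof w);  // bit-for-bit reinterpretation
    return w;
}
```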
putAndInsertSint16Array() would only ever be useful for people encoding images with BitsAllocated=16, BitsStored=16, HighBit=15, PixelRepresentation=1. This is a rare special case and certainly not worth special support in the API. Casting the pixel data to Uint16 is not even cheating - in your case this is exactly the right thing to do: Asking the system to interpret the array which was filled with Sint16 (signed short) values as an OW array and encode it as such. This is how DICOM represents pixel data in this case.
Thanks for the clarification. I need to save functional imaging calculation results in DICOM format. Besides scaling, sometimes saving the pixel data as signed short can be handy.
Hi, I am facing the same problem. I am trying to insert signed pixel data, but I get the same error when I use putAndInsertSint16Array, and when I use putAndInsertUint16Array the result seems to be wrong, it's not like the original image. Can anyone tell me the steps to overcome this problem?
The Bits Stored equals the Bits Allocated: it's CT, signed, MONOCHROME2, 16 bits stored, 16 bits allocated, and after inserting the pixel data, applying the original WW and WL makes the image look darker than the original.
I tried the above steps but they didn't help. Any suggestions?
Is there a Modality LUT Transformation (e.g. Rescale Slope/Intercept) in the DICOM image? Is it really appropriate for the signed pixel data, i.e. is the output in Hounsfield Units (HU)? Also, depending on what you mean by "original" Window Center/Width, the VOI LUT Transformation might be appropriate or not.
Yes, it contains Rescale Slope and Intercept, but the pixel data after insertion is different from the original data. I was wondering if I can insert the pixel data by looping over the array and inserting it element by element using putAndInsertSint16...