eurekha_ananth - 9 months ago

C# Question

I am a beginner in DICOM development. I need to create a localizer line on a DICOM image. Does anyone have any good ideas?

Answer Source

David Brabant already pointed you in the right direction (if you want to work with DICOM you should definitely read and treasure **dclunie's medical image FAQ**). Let's see if I can elaborate on it and make it easier for you to implement.

I assume you have a tool/library to extract tags from a DICOM file (Offis' DCMTK?). For the sake of exemplification I'll refer to a CT scan (many slices, i.e. many images) and a scout image, onto which you want to display localizer lines. Each DICOM image, including your CT slices and your scout, contains full information about its location in space, in these two tags:

```
Group,Elem  VR  Value                        Name of the tag
---------------------------------------------------------------------
(0020,0032) DS  [-249.51172\-417.51172\-821] # ImagePositionPatient    X0 Y0 Z0
(0020,0037) DS  [1\0\0\0\1\0]                # ImageOrientationPatient A B C D E F
```

ImagePositionPatient has the global coordinates in mm of the first pixel transmitted (the top left-hand corner pixel, to be clear) expressed as (x,y,z). I marked them X0, Y0, Z0. ImageOrientationPatient contains two vectors, both of three components, specifying the direction cosines of the first row of pixels and first column of pixels of the image. Understanding direction cosines doesn't hurt (see e.g. http://mathworld.wolfram.com/DirectionCosine.html), but the method suggested by dclunie works directly with them, so for now let's just say they give you the orientation in space of the image plane. I marked them A-F to make formulas easier.

Now, in the code given by dclunie (I believe it's intended to be C, but it's so simple it should work just as well in Java, C#, awk, Vala, Octave, etc.) the conventions are the following:

src_* = refers to the source image, i.e. the CT slice

dst_* = refers to the destination image, i.e. the scout

*_pos_x, *_pos_y, *_pos_z = the X0, Y0, Z0 above

*_row_dircos_x, *_row_dircos_y, *_row_dircos_z = the A, B, C above

*_col_dircos_x, *_col_dircos_y, *_col_dircos_z = the D, E, F above

After setting the right values just apply these:

```
/* Normal to the destination (scout) plane: cross product of its
   row and column direction cosines. */
dst_nrm_dircos_x = dst_row_dircos_y * dst_col_dircos_z
                 - dst_row_dircos_z * dst_col_dircos_y;
dst_nrm_dircos_y = dst_row_dircos_z * dst_col_dircos_x
                 - dst_row_dircos_x * dst_col_dircos_z;
dst_nrm_dircos_z = dst_row_dircos_x * dst_col_dircos_y
                 - dst_row_dircos_y * dst_col_dircos_x;

/* Translate the source point so the scout's TLHC becomes the origin. */
src_pos_x -= dst_pos_x;
src_pos_y -= dst_pos_y;
src_pos_z -= dst_pos_z;

/* Rotate into the scout's frame: dot products with the row, column
   and normal direction cosines. The resulting z is the distance of
   the point from the scout plane. */
dst_pos_x = dst_row_dircos_x * src_pos_x
          + dst_row_dircos_y * src_pos_y
          + dst_row_dircos_z * src_pos_z;
dst_pos_y = dst_col_dircos_x * src_pos_x
          + dst_col_dircos_y * src_pos_y
          + dst_col_dircos_z * src_pos_z;
dst_pos_z = dst_nrm_dircos_x * src_pos_x
          + dst_nrm_dircos_y * src_pos_y
          + dst_nrm_dircos_z * src_pos_z;
```

Or, if you have some fancy matrix class, you can build a single transformation matrix and multiply it with your (homogeneous) point coordinates. Note that the code above subtracts `dst_pos` *before* rotating, so the translation column must be the rotated offset, i.e. the dot products of the direction cosines with `-dst_pos`:

```
    [ dst_row_dircos_x  dst_row_dircos_y  dst_row_dircos_z  -dot(dst_row_dircos, dst_pos) ]
M = [ dst_col_dircos_x  dst_col_dircos_y  dst_col_dircos_z  -dot(dst_col_dircos, dst_pos) ]
    [ dst_nrm_dircos_x  dst_nrm_dircos_y  dst_nrm_dircos_z  -dot(dst_nrm_dircos, dst_pos) ]
    [ 0                 0                 0                  1                            ]
```

That would be like this:

`Scout_Point(x,y,z,1) = M * CT_Point(x,y,z,1)`

Said all that, **which points** of the CT should we convert to create a line on the scout? Also for this dclunie already suggests a general solution:

"*My approach is to project the square that is the bounding box of the source image (i.e. lines joining the TLHC, TRHC, BRHC and BLHC of the slice).*"

If you project the four corner points of the CT slice, you'll get a line for CT slices perpendicular to the scout, and a trapezoid for non-perpendicular slices. Now, if your CT slice is aligned with the coordinate axes (i.e. ImageOrientationPatient = [1\0\0\0\1\0]), the four points are trivial: you compute the width/height of the image in mm using the number of rows/columns and the pixel spacing along the x/y direction, and sum things up appropriately. If you want to implement the generic case, you need a little trigonometry... or maybe not. Maybe it's time you read the definition of the direction cosines, if you haven't yet.

I'll try to put you on track. E.g. working on the TRHC, you know where the voxel is in the image plane:

```
# Pixel location of the TRHC
x_pixel = number_of_columns - 1   # Counting from 0
y_pixel = 0
z_pixel = 0                       # We're on a plane!
```

The pixel spacing values in DICOM refer to the image plane, so you can simply multiply x and y by them to get their position in mm, while z is 0 (both in pixels and in mm). I am talking about these values:

```
(0028,0011) US 512                   #  2, 1 Columns
(0028,0010) US 512                   #  2, 1 Rows
(0028,0030) DS [0.9765625\0.9765625] # 20, 2 PixelSpacing
```

The matrix M above is a generic transformation from global to image coordinates, given the direction cosines. What you need now is something that does the inverse job (image to global) for the source images (the CT slices). I'll let you go and dig in the geometry books to be sure, but it should be something like this (the rotation part is transposed, the translation has no sign change, and of course we use the src_* values):

```
     [ src_row_dircos_x  src_col_dircos_x  src_nrm_dircos_x  src_pos_x ]
M2 = [ src_row_dircos_y  src_col_dircos_y  src_nrm_dircos_y  src_pos_y ]
     [ src_row_dircos_z  src_col_dircos_z  src_nrm_dircos_z  src_pos_z ]
     [ 0                 0                 0                  1        ]
```

Convert points in the CT slice (e.g. the four corners) to millimeters and then apply M2 to have them in global coordinates. Then you can feed them to the procedure reported by dclunie. Cross-check my maths before using it e.g. for patient diagnostics! ;-)

Hope this helps you understand dclunie's method better. Cheers