Abstract:
A method for processing image data includes obtaining a first set of 3D volumetric image data. The 3D volumetric image data includes a volume of voxels. Each voxel has an intensity. The method further includes obtaining a local voxel noise estimate for each of the voxels of the volume. The method further includes processing the volume of voxels based at least on the intensity of the voxels and the local voxel noise estimates of the voxels. An image data processor (124) includes a computer processor that performs at least one of: generating a 2D direct volume rendering from first 3D volumetric image data based on voxel intensity and individual local voxel noise estimates of the first 3D volumetric image data, or registering second 3D volumetric image data and first 3D volumetric image data based at least on individual local voxel noise estimates of the second and first 3D volumetric image data sets.
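The noise-aware processing described above can be sketched as a per-voxel weighting in which noisier voxels contribute less. This is a minimal illustration only; the abstract does not specify the weighting function, so the inverse-noise form and all names here are assumptions.

```python
def noise_weighted_intensities(intensities, noise_estimates, eps=1e-6):
    """Weight each voxel intensity by the inverse of its local noise
    estimate, so noisy voxels contribute less to downstream rendering
    or registration. Inputs are flat lists over the same voxel grid.
    The inverse-noise weighting is an illustrative choice, not the
    patented algorithm."""
    return [i / (n + eps) for i, n in zip(intensities, noise_estimates)]

# A voxel with 4x the local noise contributes proportionally less,
# even though its raw intensity is higher.
weighted = noise_weighted_intensities([100.0, 200.0], [1.0, 4.0])
```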
Abstract:
A method includes obtaining 3D pre-scan image data generated from a scan of a subject. The 3D pre-scan image data includes voxels that represent a tissue of interest. The method further includes generating a 2D planning projection image showing the tissue of interest based on the 3D pre-scan image data. A system includes a 2D planning projection image from 3D pre-scan image data generator (218). The 2D planning projection image from 3D pre-scan image data generator obtains 3D pre-scan image data generated from a scan of a subject. The 3D pre-scan image data includes voxels that represent a tissue of interest. The 2D planning projection image from 3D pre-scan image data generator further generates a 2D planning projection image showing the tissue of interest based on the 3D pre-scan image data.
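Generating a 2D planning image from a 3D pre-scan can be sketched as a projection along one axis. A maximum-intensity projection is one common choice; the abstract does not fix the projection method, so this is an assumption for illustration.

```python
def planning_projection(volume):
    """Collapse a 3D volume (a list of 2D slices, each a list of rows)
    into a 2D planning image via a maximum-intensity projection along
    the slice axis. The projection choice is illustrative only."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(s[r][c] for s in volume) for c in range(cols)]
            for r in range(rows)]

# Two 2x2 slices -> one 2x2 projection taking the brightest voxel
# along the slice direction at each (row, col).
image = planning_projection([[[1, 2], [3, 4]],
                             [[5, 0], [0, 9]]])
```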
Abstract:
A method includes determining a change in a volume of a tissue of interest between at least two image data sets in which the tissue of interest is located. The at least two image data sets include a first image data set acquired at a first time and a second image data set acquired at a second time, and the first and second times are different. The method includes generating a rendering which includes a region in which the tissue of interest is located and indicia that indicate a magnitude of the change across the region. The region is superimposed over the rendering, which is generated based on at least one of the at least two image data sets, and is linked to a corresponding image in each of the at least two image data sets that includes voxels representing the tissue of interest. The method includes visually presenting the rendering in a graphical user interface.
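The per-region change magnitude that drives the indicia can be sketched as a voxel-wise difference between two co-registered data sets. The function name and the use of flat voxel lists are assumptions for illustration; the abstract does not prescribe how the magnitude is computed.

```python
def change_magnitude(volume_t1, volume_t2):
    """Per-voxel absolute change between two co-registered image data
    sets acquired at different times. The resulting magnitude map can
    drive the indicia (e.g., a color overlay) across the region.
    Inputs are flat lists of voxel values of equal length."""
    return [abs(b - a) for a, b in zip(volume_t1, volume_t2)]

# Voxel 0 unchanged, voxel 1 grew by 2, voxel 2 shrank by 3.
magnitudes = change_magnitude([1, 2, 3], [1, 4, 0])
```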
Abstract:
An apparatus (10) for assessing radiologist performance includes at least one electronic processor (20) programmed to: during reading sessions in which a user is logged into a user interface (UI) (27), present (98) medical imaging examinations (31) via the UI, receive examination reports on the presented medical imaging examinations via the UI, and file the examination reports; and perform a tracking method (102, 202) including at least one of: (i) computing (204) concurrence scores (34) quantifying concurrence between clinical findings contained in the examination reports and corresponding computer-generated clinical findings for the presented medical imaging examinations which are generated by a computer aided diagnostic (CAD) process running as a background process during the reading sessions; and/or (ii) determining (208) reading times (38) for the presented medical imaging examinations wherein the reading time for each presented medical imaging examination is the time interval between a start of the presenting of the medical imaging examination via the user interface and the filing of the corresponding examination report; and generating (104) at least one time-dependent user performance metric (36) for the user based on the computed concurrence scores and/or the determined reading times.
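The reading-time portion of the tracking method can be sketched directly from the abstract's definition: the interval from presentation start to report filing, aggregated into a performance metric. The mean as the metric is an illustrative choice; the abstract leaves the metric open.

```python
from datetime import datetime

def reading_time_seconds(presented_at, filed_at):
    """Reading time as defined in the abstract: the interval between
    the start of presenting an examination via the UI and the filing
    of the corresponding examination report."""
    return (filed_at - presented_at).total_seconds()

def mean_reading_time(intervals):
    """One illustrative time-dependent performance metric: the mean
    reading time over a set of examinations."""
    return sum(intervals) / len(intervals)

# One exam presented at 09:00:00 and its report filed at 09:05:30.
t = reading_time_seconds(datetime(2024, 1, 1, 9, 0, 0),
                         datetime(2024, 1, 1, 9, 5, 30))
```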
Abstract:
A method includes obtaining contrast-enhanced image data having a plurality of voxels, each voxel having an intensity value. The method further includes determining a vesselness value for each voxel. The method further includes determining a hypo-density value for each voxel. The method further includes weighting each of the intensity values by a corresponding vesselness value. The method further includes weighting each of the hypo-density values by the corresponding vesselness value. The method further includes combining the weighted intensity values and the weighted hypo-density values, thereby generating composite image data. The method further includes visually displaying the composite image data.
Abstract:
A method includes displaying a background image on a display screen. The method further includes receiving, from an input device, a signal indicative of a free hand line being drawn over the background image. The signal includes coordinates of points of the free hand line with respect to the display screen. The free hand line is independent of content represented in the background image. The method further includes storing the signal in a storage device. The method further includes generating a smooth stiff line based on the stored signal. The method further includes displaying the smooth stiff line over the background image.
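Turning jittery free-hand input into a "smooth stiff line" can be sketched with a moving average over the stored coordinate points. The abstract does not name the smoothing algorithm, so the moving-average choice and the window parameter are assumptions.

```python
def smooth_line(points, window=3):
    """Generate a smoother polyline from free-hand (x, y) screen
    coordinates by averaging each point with its neighbours within
    'window'. A moving average is one simple smoothing sketch; a
    spline fit would be another reasonable choice."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

# The sharp spike at (2, 2) is flattened toward its neighbours.
s = smooth_line([(0, 0), (2, 2), (4, 0)])
```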
Abstract:
The invention relates to an apparatus configured to display an aortic valve image and an indicator of when the aortic valve is in its open-state and/or when the valve is in its closed-state. The indicator is presented as an overlay on the image of the aortic valve, such that a physician can see on the same display image the information needed to advance a guide wire or catheter through the aortic valve of a heart. This may prevent damage to the aortic valve. The physician receives the relevant information when the aortic valve is in its open-state and is thus in a state to be passed by the catheter. The information as to whether the aortic valve is in its open-state or in its closed-state corresponds to the systolic phase and the diastolic phase of the heart, respectively. The information on when the heart is in its systolic phase and when it is in its diastolic phase may be extracted from an ECG measurement. From the detection of these cardiac phases, the closed-state of the valve and/or the open-state of the valve can be estimated using general knowledge about blood flow during the cardiac cycle.
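The phase-to-state estimation stated in the abstract reduces to a simple mapping once the cardiac phase has been detected from the ECG. The function below is a trivial sketch of that mapping only; real phase detection from the ECG signal is outside its scope.

```python
def valve_state(cardiac_phase):
    """Map a detected cardiac phase (from ECG analysis, not shown)
    to the estimated aortic valve state, per the relation in the
    abstract: systole -> open-state, diastole -> closed-state."""
    states = {"systole": "open", "diastole": "closed"}
    return states[cardiac_phase]

# The overlay indicator would show "open" during systole, signalling
# that the catheter may be advanced through the valve.
state = valve_state("systole")
```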
Abstract:
The invention relates to a device for processing CT imaging data, comprising a processing unit, which is configured to receive a plurality of sets of CT imaging data recorded at different imaging positions and at different points in time. Furthermore, the processing unit is configured to provide a plurality of auxiliary sets of CT imaging data, each auxiliary set of CT imaging data comprising processed image data allocated to spatial positions inside a respective spatial section of the object space, wherein a given one of the spatial sections contains those spatial positions which are covered by those sets of CT imaging data acquired at a respective one of the imaging positions, and to generate the processed image data for a given spatial position using those of the sets of CT imaging data acquired at the respective one of the imaging positions.
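Building an auxiliary set from the data acquired at one imaging position can be sketched as a voxel-wise combination over the time points recorded there. The per-voxel mean is an assumed choice of "processing"; the abstract leaves the operation open, and the data layout below is illustrative.

```python
def auxiliary_set(ct_sets, position_index):
    """Build the processed image data for one spatial section by
    combining, voxel-wise, all CT data sets acquired at the same
    imaging position (here: a per-voxel mean over the time points).
    'ct_sets' maps imaging position -> list of flat voxel lists,
    one list per acquisition time; the layout is an assumption."""
    sets = ct_sets[position_index]
    return [sum(vals) / len(sets) for vals in zip(*sets)]

# Two acquisitions at position 0, averaged voxel by voxel.
aux = auxiliary_set({0: [[1, 2], [3, 4]]}, 0)
```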
Abstract:
A method includes determining a registration transform between first three dimensional pre-scan image data and second three dimensional pre-scan image data based on a predetermined registration algorithm. The method further includes registering first volumetric scan image data and second volumetric scan image data based on the registration transform. The method further includes generating registered image data. A system (100) includes a pre-scan registerer (122) that determines a registration transform between first three dimensional pre-scan image data and second three dimensional pre-scan image data based on a predetermined registration algorithm. The system further includes a volume registerer (126) that registers first volumetric scan image data and second volumetric scan image data based on the registration transform, generating registered image data.
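The key idea above is that a transform estimated on low-cost pre-scan data is reused to register the full volumetric scans. The sketch below stands in a centroid-matching translation for the abstract's unspecified "predetermined registration algorithm"; the rigid-translation model and all names are assumptions.

```python
def pre_scan_transform(pre1, pre2):
    """Estimate a translation registering pre-scan 2 to pre-scan 1 by
    matching point-cloud centroids. Points are (x, y, z) tuples. This
    centroid match is a stand-in for the abstract's predetermined
    registration algorithm."""
    c1 = [sum(c) / len(pre1) for c in zip(*pre1)]
    c2 = [sum(c) / len(pre2) for c in zip(*pre2)]
    return tuple(a - b for a, b in zip(c1, c2))

def apply_transform(points, t):
    """Reuse the pre-scan transform to register the second volumetric
    scan data to the first, generating registered image data."""
    return [tuple(p[i] + t[i] for i in range(3)) for p in points]

# Transform learned from pre-scans, then applied to volumetric data.
t = pre_scan_transform([(1, 1, 1)], [(0, 0, 0)])
registered = apply_transform([(0, 0, 0)], t)
```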
Abstract:
A system (100) for segmenting a coronary artery vessel tree (182) of a patient heart in a three dimensional (3D) cardiac image (120) includes a coronary volume definition unit (150) and a coronary artery segmentation unit (180). The coronary volume definition unit (150) sets a spatial boundary (210, 220) from internal and external surfaces of heart tissues in the 3D cardiac image based on a fitted heart model (200). The coronary artery segmentation unit (180) segments the coronary artery vessel tree (182) in the 3D cardiac image using a segmentation algorithm with a search space limited by the spatial boundary set from the internal and external surfaces of the heart tissues.
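Limiting the segmentation search space by the model-derived spatial boundary can be sketched as masking: voxels outside the boundary are excluded before segmentation runs. The binary-mask representation of the boundary set from the heart-model surfaces is an assumption for illustration.

```python
def limit_search_space(values, boundary_mask):
    """Zero out voxels outside the spatial boundary so a subsequent
    coronary segmentation algorithm only searches inside it.
    'boundary_mask' is 1 for voxels between the internal and external
    heart-tissue surfaces (per the fitted heart model), 0 elsewhere;
    both inputs are flat lists over the same voxel grid."""
    return [v if m else 0 for v, m in zip(values, boundary_mask)]

# Only voxels inside the boundary survive into the search space.
limited = limit_search_space([5, 6, 7], [1, 0, 1])
```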