Abstract:
A method includes determining a registration transform between first three-dimensional pre-scan image data and second three-dimensional pre-scan image data based on a predetermined registration algorithm. The method further includes registering first volumetric scan image data and second volumetric scan image data based on the registration transform. The method further includes generating registered image data. A system (100) includes a pre-scan registerer (122) that determines a registration transform between first three-dimensional pre-scan image data and second three-dimensional pre-scan image data based on a predetermined registration algorithm. The system further includes a volume registerer (126) that registers first volumetric scan image data and second volumetric scan image data based on the registration transform, generating registered image data.
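By way of illustration only, the registration step could be sketched in Python as follows; the translation-only estimate from intensity centres of mass is a stand-in for the unspecified predetermined registration algorithm, and the helper names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def estimate_translation(pre_scan_a, pre_scan_b):
    """Stand-in for the predetermined registration algorithm: estimate a
    translation-only transform from the intensity centres of mass of the
    two low-resolution pre-scan volumes."""
    com_a = np.array(ndimage.center_of_mass(pre_scan_a))
    com_b = np.array(ndimage.center_of_mass(pre_scan_b))
    return com_a - com_b  # shift that maps volume b onto volume a

def register_volumes(volume_a, volume_b, shift):
    """Apply the transform derived from the pre-scans to the second volumetric
    scan and return the pair as registered image data."""
    volume_b_registered = ndimage.shift(volume_b, shift, order=1, mode="nearest")
    return volume_a, volume_b_registered
```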
Abstract:
A method for processing image data includes obtaining a first set of 3D volumetric image data. The 3D volumetric image data includes a volume of voxels. Each voxel has an intensity. The method further includes obtaining a local voxel noise estimate for each of the voxels of the volume. The method further includes processing the volume of voxels based at least on the intensity of the voxels and the local voxel noise estimates of the voxels. An image data processor (124) includes a computer processor that performs at least one of: generating a 2D direct volume rendering from first 3D volumetric image data based on voxel intensity and individual local voxel noise estimates of the first 3D volumetric image data, or registering second 3D volumetric image data and first 3D volumetric image data based at least on individual local voxel noise estimates of the second and first 3D volumetric image data sets.
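A minimal sketch of the idea, assuming the local voxel noise estimate is a neighbourhood standard deviation and the "rendering" is an inverse-variance-weighted projection; both choices are illustrative, not the claimed method.

```python
import numpy as np
from scipy import ndimage

def local_voxel_noise(volume, size=5):
    """Local voxel noise estimate: standard deviation in a small neighbourhood
    around each voxel (one possible estimator, chosen for illustration)."""
    volume = np.asarray(volume, dtype=float)
    mean = ndimage.uniform_filter(volume, size)
    mean_sq = ndimage.uniform_filter(volume ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def noise_weighted_rendering(volume, noise, axis=0, eps=1e-6):
    """Toy 2D rendering: project along one axis, down-weighting voxels with
    high local noise (inverse-variance weighting of intensities)."""
    weights = 1.0 / (noise ** 2 + eps)
    return (volume * weights).sum(axis=axis) / weights.sum(axis=axis)
```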
Abstract:
A method includes obtaining 3D pre-scan image data generated from a scan of a subject. The 3D pre-scan image data includes voxels that represent a tissue of interest. The method further includes generating a 2D planning projection image showing the tissue of interest based on the 3D pre-scan image data. A system includes a generator (218) that produces a 2D planning projection image from 3D pre-scan image data. The generator obtains 3D pre-scan image data generated from a scan of a subject. The 3D pre-scan image data includes voxels that represent a tissue of interest. The generator further generates a 2D planning projection image showing the tissue of interest based on the 3D pre-scan image data.
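For illustration, a masked maximum-intensity projection is one way such a planning image could be formed; the tissue mask and the projection axis are assumptions, not details given in the abstract.

```python
import numpy as np

def planning_projection(pre_scan, tissue_mask, axis=1):
    """Toy 2D planning projection: maximum-intensity projection of the 3D
    pre-scan restricted to voxels flagged as tissue of interest."""
    masked = np.where(tissue_mask, pre_scan, pre_scan.min())
    return masked.max(axis=axis)
```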
Abstract:
A method includes determining a change in a volume of a tissue of interest between at least two image data sets in which the tissue of interest is located. The at least two image data sets include a first image data set acquired at a first time and a second image data set acquired at a second time, and the first and second times are different. The method includes generating a rendering which includes a region in which the tissue of interest is located and indicia that indicates a magnitude of the change across the region. The region is superimposed over the rendering, which is generated based on at least one of the at least two image data sets, and is linked to corresponding images, which include voxels representing the tissue of interest, in the at least two image data sets. The method includes visually presenting the rendering in a graphical user interface.
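A rough sketch of the volume-change computation and a simple overlay of the change magnitude, assuming binary segmentations of the tissue of interest at the two times and a 2D rendering of matching size; the blending scheme is illustrative only.

```python
import numpy as np

def volume_change(mask_t1, mask_t2, voxel_volume_mm3=1.0):
    """Voxel-wise growth/shrinkage map (+1 grown, -1 shrunk) and total volume
    change in mm^3 between two segmentations of the tissue of interest."""
    diff = mask_t2.astype(np.int8) - mask_t1.astype(np.int8)
    return diff, float(diff.sum()) * voxel_volume_mm3

def overlay_change_indicia(rendering_2d, diff, axis=0, alpha=0.5):
    """Superimpose a per-pixel change-magnitude indicium (absolute change
    summed along the viewing direction, normalised) over a 2D rendering."""
    indicia = np.abs(diff).sum(axis=axis).astype(float)
    if indicia.max() > 0:
        indicia /= indicia.max()
    return (1.0 - alpha) * rendering_2d + alpha * indicia
```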
Abstract:
System and related method to visualize image data. The system comprises an input port (IN) for receiving i) image data comprising a range of intensity values converted from signals acquired by an imaging apparatus in respect of an imaged object, and ii) a definition of a first transfer function configured to map a data interval within said range of said intensity values to an image interval of image values. A transition region identifier (TRI) identifies from among intensity values outside said data interval, one or more transition intensity values representative of a transition in composition and/or configuration of said object or of a transition in respect of a physical property in relation to said object. A transfer function generator (TFG) generates for said intensity values outside said data interval a second transfer function. The second transfer function is non-linear and has a respective gradient that is locally maximal around said transition intensity values. A renderer (RD) then renders, on a display unit (MT), a visualization of at least a part of said image data based on the two transfer functions.
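As a sketch only: transition intensities might be located where the mean spatial gradient magnitude peaks as a function of intensity, and the second transfer function built as a sum of sigmoids so that its slope is locally maximal at those intensities. The restriction to intensities outside the first transfer function's data interval, and the first transfer function itself, are omitted here.

```python
import numpy as np

def transition_intensities(volume, n_transitions=2, bins=256):
    """Crude transition finder: intensities at which the mean spatial gradient
    magnitude is largest (a proxy for boundaries in composition/configuration)."""
    volume = np.asarray(volume, dtype=float)
    grad = np.linalg.norm(np.gradient(volume), axis=0)
    edges = np.histogram_bin_edges(volume, bins=bins)
    idx = np.digitize(volume, edges[1:-1])
    mean_grad = np.array([grad[idx == b].mean() if np.any(idx == b) else 0.0
                          for b in range(bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argsort(mean_grad)[-n_transitions:]]

def second_transfer_function(intensities, transitions, width=50.0):
    """Non-linear mapping whose gradient is locally maximal around each
    transition intensity: a normalised sum of sigmoids."""
    tf = sum(1.0 / (1.0 + np.exp(-(intensities - t) / width)) for t in transitions)
    return tf / len(transitions)
```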
Abstract:
A method displays spectral image data reconstructed from spectral projection data with a first reconstruction algorithm and segmented image data reconstructed from the same spectral projection data with a second reconstruction algorithm, which is different from the first reconstruction algorithm. The method includes reconstructing the spectral projection data with the first reconstruction algorithm, which generates the spectral image data, and displaying the spectral image data. The method further includes reconstructing the spectral projection data with the second reconstruction algorithm, which generates segmentation image data, segmenting the segmentation image data, which produces the segmented image data, and displaying the segmented image data.
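The dual-reconstruction workflow could be organised as below; the three callables are placeholders for the unspecified reconstruction and segmentation algorithms, not an actual implementation.

```python
def dual_reconstruction(spectral_projection_data,
                        reconstruct_spectral,
                        reconstruct_for_segmentation,
                        segment):
    """Reconstruct the same spectral projection data twice: once with the first
    algorithm for spectral display, once with the second algorithm whose output
    is segmented. Both images are returned for display."""
    spectral_image = reconstruct_spectral(spectral_projection_data)
    segmentation_image = reconstruct_for_segmentation(spectral_projection_data)
    segmented_image = segment(segmentation_image)
    return spectral_image, segmented_image
```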
Abstract:
A method includes using a pre-scan image to define a scan field of view for a region of interest of a patient to be scanned for at least one image acquisition of a series of image acquisitions of a scan plan, performing an image acquisition of the series based on a corresponding scan field of view for the image acquisition, and determining, via a processor (120), a next field of view for a next image acquisition of the series based on available image related data.
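One plausible reading of determining a next field of view from available image related data is to take the bounding box of the region of interest seen so far, plus a safety margin; this sketch assumes a binary ROI mask and is illustrative only.

```python
import numpy as np

def next_field_of_view(roi_mask, margin=5):
    """Next acquisition's field of view: bounding box of the region of interest
    in the already-acquired image data, expanded by a margin (in voxels)."""
    coords = np.argwhere(roi_mask)
    lower = np.maximum(coords.min(axis=0) - margin, 0)
    upper = np.minimum(coords.max(axis=0) + margin + 1, roi_mask.shape)
    return tuple(slice(int(a), int(b)) for a, b in zip(lower, upper))
```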
Abstract:
An apparatus (10) for assessing radiologist performance includes at least one electronic processor (20) programmed to: during reading sessions in which a user is logged into a user interface (UI) (27), present (98) medical imaging examinations (31) via the UI, receive examination reports on the presented medical imaging examinations via the UI, and file the examination reports; and perform a tracking method (102, 202) including at least one of: (i) computing (204) concurrence scores (34) quantifying concurrence between clinical findings contained in the examination reports and corresponding computer-generated clinical findings for the presented medical imaging examinations which are generated by a computer aided diagnostic (CAD) process running as a background process during the reading sessions; and/or (ii) determining (208) reading times (38) for the presented medical imaging examinations wherein the reading time for each presented medical imaging examination is the time interval between a start of the presenting of the medical imaging examination via the user interface and the filing of the corresponding examination report; and generating (104) at least one time-dependent user performance metric (36) for the user based on the computed concurrence scores and/or the determined reading times.
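A toy version of the two tracked quantities and the derived metric, assuming findings can be compared as simple labels and using a moving average as the time-dependent metric; all of this is illustrative rather than the apparatus's actual scoring.

```python
from datetime import datetime

def concurrence_score(report_findings, cad_findings):
    """Fraction of CAD-generated findings that also appear in the radiologist's
    report (simple set overlap as a stand-in for the concurrence computation)."""
    cad = set(cad_findings)
    return 1.0 if not cad else len(set(report_findings) & cad) / len(cad)

def reading_time_seconds(presented_at: datetime, filed_at: datetime) -> float:
    """Reading time: interval from presentation of the examination to filing
    of the corresponding report."""
    return (filed_at - presented_at).total_seconds()

def time_dependent_metric(values, window=10):
    """Time-dependent performance metric: moving average over the most recent
    readings' concurrence scores or reading times."""
    return [sum(values[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
            for i in range(len(values))]
```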
Abstract:
A method includes obtaining contrast-enhanced image data having a plurality of voxels, each voxel having an intensity value. The method further includes determining a vesselness value for each voxel. The method further includes determining a hypo-density value for each voxel. The method further includes weighting each of the intensity values by a corresponding vesselness value. The method further includes weighting each of the hypo-density values by the corresponding vesselness value. The method further includes combining the weighted intensity values and the weighted hypo-density values, thereby generating composite image data. The method further includes visually displaying the composite image data.
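The combination step reduces to a per-voxel formula; how the vesselness and hypo-density values themselves are computed is not specified in the abstract, so they are taken as inputs here.

```python
import numpy as np

def composite_image(intensity, vesselness, hypo_density):
    """Composite image data: intensity and hypo-density values, each weighted
    by the corresponding voxel-wise vesselness, then summed."""
    intensity = np.asarray(intensity, dtype=float)
    hypo_density = np.asarray(hypo_density, dtype=float)
    return vesselness * intensity + vesselness * hypo_density
```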
Abstract:
A method includes displaying a background image on a display screen. The method further includes receiving, from an input device, a signal indicative of a free hand line being drawn over the background image. The signal includes coordinates of points of the free hand line with respect to the display screen. The free hand line is independent of content represented in the background image. The method further includes storing the signal in a storage device. The method further includes generating a smooth stiff line based on the stored signal. The method further includes displaying the smooth stiff line over the background image.
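A smoothing-spline fit is one way to turn the stored free-hand coordinates into a "smooth stiff" line; the smoothing factor and resampling density below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_stiff_line(points, smoothing=50.0, n_out=200):
    """Fit a smoothing spline to the stored free-hand screen coordinates and
    resample it, yielding a smooth stiff line to draw over the background."""
    pts = np.asarray(points, dtype=float)          # shape (N, 2): x, y in pixels
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing)
    u = np.linspace(0.0, 1.0, n_out)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```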