Abstract:
A method for generating a 3-dimensional reconstruction model of an object of interest that lies within a volume, the method executed at least in part by a computer, acquires a first set of projection images of the volume at a first exposure and a first field of view, and a second set of projection images of the object of interest within the volume at a second exposure that is higher than the first exposure and a second field of view that is narrower than the first field of view. The object of interest is reconstructed from the second set of projection images according to information related to portions of the volume that lie outside the object of interest. The reconstructed object of interest is displayed.
Abstract:
A method for reducing metal artifacts in a volume radiographic image reconstructs a first 3-D image using measured projection images and forms a 3-D image metal mask that contains metal voxels. For each measured projection image, a projection metal mask is formed as a projection of the 3-D image metal mask. A 3-D prior image contains voxels within the 3-D image metal mask. Voxel values of the first 3-D image outside the 3-D image metal mask are replaced with a value representative of air or soft tissue. Non-metal voxels of the 3-D prior image are modified according to a difference between a pixel value related to the non-metal voxel and the corresponding pixel value in a calculated projection image. Composite projection images are formed by replacing measured projection image data for pixels within the projection metal mask with calculated projection image data. A metal-artifact-reduced 3-D image is reconstructed from the composite projection images.
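The compositing step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, array shapes, and the source of the calculated (forward-projected) image are all assumptions for the example.

```python
import numpy as np

def composite_projection(measured, calculated, metal_mask):
    """Replace measured pixel values inside the projected metal mask
    with the corresponding calculated projection values.

    measured, calculated : 2-D float arrays (one projection image each)
    metal_mask           : 2-D boolean array, True where metal projects
    """
    composite = measured.copy()
    composite[metal_mask] = calculated[metal_mask]
    return composite

# Minimal example: a 4x4 projection with a 2x2 metal region.
measured = np.ones((4, 4))
calculated = np.full((4, 4), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = composite_projection(measured, calculated, mask)
```

Pixels inside the mask take the calculated value (0.5 here) while all other measured pixels are left untouched, which is the essence of forming a composite projection image.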
Abstract:
Method and/or apparatus embodiments can process volume image data of a subject. An exemplary method includes obtaining a first group of two-dimensional radiographic images of the subject, wherein each of the images is obtained with a detector and a radiation source at a different scan angle. The method arranges image data from the first group of images in an image stack so that corresponding pixel data from the detector is in register for each of the images in the image stack. Pixels that represent metal objects are segmented from the image stack, and data is replaced for at least some of the segmented pixels to generate a second group of modified two-dimensional radiographic images. The second group of images is combined with the first group to generate a three-dimensional volume image according to the combined images, and an image slice from the three-dimensional volume image is displayed.
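A minimal sketch of the segmentation-and-replacement step on an in-register image stack. The abstract does not specify the segmentation method, so a simple intensity threshold and a constant fill value stand in here; both are assumptions for illustration only.

```python
import numpy as np

def segment_metal(stack, threshold):
    """Return a boolean mask of pixels whose value exceeds a simple
    intensity threshold (a stand-in for the unspecified segmentation).

    stack : 3-D array (num_angles, rows, cols), projections in register
    """
    return stack > threshold

def replace_metal(stack, mask, fill_value):
    """Produce a modified copy of the stack with segmented metal
    pixels replaced by a fill value (e.g., a background estimate)."""
    modified = stack.copy()
    modified[mask] = fill_value
    return modified

# One 2x2 projection image in a stack of one angle.
stack = np.array([[[1.0, 5.0],
                   [2.0, 6.0]]])
mask = segment_metal(stack, threshold=4.0)
modified = replace_metal(stack, mask, fill_value=2.0)
```

The modified stack forms the "second group" of images; pixels below the threshold pass through unchanged.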
Abstract:
A method and apparatus for generating a color mapping for a dental object. The method includes generating a transformation matrix according to a set of spectral reflectance data for a statistically valid sampling of teeth. Illumination is directed toward the dental object over at least a first, a second, and a third wavelength band, one wavelength band at a time. For each of a plurality of pixels in an imaging array, an image data value is obtained, corresponding to each of the at least first, second, and third wavelength bands. The transformation matrix is applied to form the color mapping by generating a set of visual color values for each of the plurality of pixels according to the obtained image data values and according to image data values obtained from a reference object at the at least first, second, and third wavelength bands. The color mapping can be stored in an electronic memory.
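The core of the color-mapping step is a matrix applied to per-band image data. The sketch below assumes a 3×3 transformation matrix (in practice it would be derived from the spectral reflectance data for the sampled teeth) and assumes that the reference-object data is used to normalize each band; the matrix values and the normalization scheme are illustrative, not from the patent.

```python
import numpy as np

# Hypothetical 3x3 transformation matrix; a real matrix would be
# generated from spectral reflectance data for a sampling of teeth.
T = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

def map_color(pixel_bands, reference_bands):
    """Convert per-band image data values for one pixel into a set of
    visual color values.

    pixel_bands, reference_bands : length-3 sequences, one value per
    wavelength band. Dividing by the reference-object values is an
    assumed normalization step.
    """
    normalized = np.asarray(pixel_bands) / np.asarray(reference_bands)
    return T @ normalized

# One pixel imaged in three wavelength bands, with reference values.
rgb = map_color([0.4, 0.5, 0.6], [0.8, 1.0, 1.2])
```

Applying the same matrix to every pixel in the imaging array yields the full color mapping, which can then be stored in electronic memory.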
Abstract:
An imaging method accesses a set of low-energy projection images and performs a low-energy reconstruction using the low-energy projection images. A synthesized intermediate low-energy projection image is generated. A high-energy reconstruction is performed using a set of high-energy projection images. A synthesized intermediate high-energy projection image is generated. A dual-energy reconstruction is performed using at least one low-energy projection image, the synthesized intermediate low-energy projection image, at least one high-energy projection image, and the synthesized intermediate high-energy projection image.
Abstract:
A method for geometric calibration of a radiography apparatus disposes at least one radio-opaque marker in the field of view of the radiography apparatus. A series of tomosynthesis projection images of patient anatomy is acquired from the detector with the x-ray source at different positions along a scan path. For at least three projection images showing the position of the radio-opaque marker, the spatial and angular geometry of the x-ray source and detector are calculated according to the positions of the marker. A tomosynthesis image is reconstructed according to the calculated geometry. A rendering of the reconstructed image is displayed.
Abstract:
A method for tomosynthesis volume reconstruction acquires at least a prior projection image of a subject at a first angle and a subsequent projection image of the subject at a second angle. A synthetic image corresponding to an intermediate angle between the first and second angles is generated by a repeated process of relating an area of the synthetic image to a prior patch on the prior projection image and to a subsequent patch on the subsequent projection image according to a bidirectional spatial similarity metric, wherein the prior patch and subsequent patch each have n×m pixels, and combining image data from the prior patch and the subsequent patch to form a portion of the synthetic image. The generated synthetic image is displayed, stored, processed, or transmitted.
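The patch-matching-and-combining loop can be sketched as below. Sum-of-squared-differences matching and simple averaging stand in for the patent's bidirectional spatial similarity metric and combining rule; the function names and the exhaustive search are assumptions for the example.

```python
import numpy as np

def best_patch(image, template, n, m):
    """Return the top-left corner of the n-by-m patch in `image` that
    best matches `template` under sum-of-squared-differences (an
    illustrative stand-in for the bidirectional similarity metric)."""
    rows, cols = image.shape
    best_score, best_rc = np.inf, (0, 0)
    for r in range(rows - n + 1):
        for c in range(cols - m + 1):
            score = np.sum((image[r:r + n, c:c + m] - template) ** 2)
            if score < best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

def synth_patch(prior, subsequent, template, n, m):
    """Average the best-matching prior and subsequent patches to form
    one n-by-m portion of the synthetic intermediate-angle image."""
    pr, pc = best_patch(prior, template, n, m)
    sr, sc = best_patch(subsequent, template, n, m)
    return 0.5 * (prior[pr:pr + n, pc:pc + m]
                  + subsequent[sr:sr + n, sc:sc + m])

# Toy example: a bright 2x2 feature that shifts between the two views.
prior = np.zeros((4, 4)); prior[0:2, 0:2] = 1.0
subsequent = np.zeros((4, 4)); subsequent[2:4, 2:4] = 1.0
template = np.ones((2, 2))
portion = synth_patch(prior, subsequent, template, 2, 2)
```

Repeating this over every area of the synthetic image and tiling the resulting portions yields the full intermediate-angle image.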
Abstract:
An apparatus captures radiographic images of an animal standing proximate the apparatus. A moveable x-ray source and a digital radiographic detector are hidden from view of the animal and are revolved about a portion of the animal's body to capture one or a sequence of radiographic images of the animal's body.
Abstract:
A method for 3-D cephalometric analysis acquires reconstructed volume image data from a computed tomographic scan of a patient's head. The acquired volume image data is displayed simultaneously in at least a first 2-D view and a second 2-D view. For an anatomical feature of the head, an operator instruction positions a reference mark corresponding to the feature on either the first or the second displayed 2-D view, and the reference mark is displayed on each of the at least first and second displayed 2-D views. In at least the first and second displayed 2-D views, one or more connecting lines are displayed between two or more of the positioned reference marks. One or more cephalometric parameters are derived according to the positioned reference marks, and the derived parameters are displayed.