Abstract:
An apparatus captures radiographic images of an animal standing proximate to the apparatus. A movable x-ray source and a digital radiographic detector are hidden from the animal's view and are revolved about a portion of the animal's body to capture a single radiographic image or a sequence of radiographic images of the animal's body.
Abstract:
A method for reporting bone mineral density values for a patient, the method executed at least in part by a computer, includes accessing a 3-D volume image that includes at least bone content and background. A 3-D bone region is automatically segmented from the background to generate a 3-D bone volume image having a plurality of voxels. One or more bone mineral density values are computed from voxel values of the 3-D bone volume image. A 3-D mapping of the one or more computed bone mineral density values is generated and displayed, stored, or transmitted.
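For illustration only, the following minimal Python sketch follows the same pipeline under simplifying assumptions: bone is segmented from background with a plain intensity threshold, and the voxel-to-BMD conversion is an assumed linear calibration (the threshold, slope, and intercept are placeholders, not values from the abstract).

```python
import numpy as np

def bone_mineral_density_map(volume, bone_threshold=300.0, slope=0.8, intercept=0.0):
    """Sketch of the pipeline: threshold-based bone segmentation, then an
    assumed linear mapping from voxel value to a BMD value."""
    # 1. Segment a 3-D bone region from the background.
    bone_mask = volume > bone_threshold

    # 2. Compute BMD values from voxel values of the segmented bone volume.
    bmd_map = np.where(bone_mask, slope * volume + intercept, 0.0)

    # 3. Summary value that could be displayed, stored, or transmitted.
    mean_bmd = bmd_map[bone_mask].mean() if bone_mask.any() else 0.0
    return bmd_map, mean_bmd

# Example with a synthetic volume containing a mock "bone" block.
volume = np.random.normal(100.0, 50.0, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 500.0
bmd_map, mean_bmd = bone_mineral_density_map(volume)
print(f"mean BMD over segmented bone: {mean_bmd:.1f}")
```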
Abstract:
A method for geometric calibration of a radiography apparatus acquires tomosynthesis projection images of patient anatomy from a detector and an x-ray source translated along a scan path. An epipolar geometry is calculated according to the position of the detector relative to the scan path by estimating a direction for epipolar lines that extend along an image plane that includes the detector. A region of interest (ROI) of the patient anatomy is defined that is a portion of each projection image and is fully included within every projection image in the series. A consistency metric is calculated for the estimated epipolar lines extending within the ROI. The method iteratively adjusts the epipolar line estimate until the consistency metric indicates accuracy to within a predetermined threshold. A portion of a tomosynthesis volume is then reconstructed and displayed.
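A much-simplified sketch of the consistency check is given below, assuming parallel epipolar lines, projections already cropped to the ROI, and a search over candidate directions that stops once a threshold is met; the metric used here (squared difference of row-wise line integrals after rotation) is an assumption for illustration, not the patent's metric.

```python
import numpy as np
from scipy.ndimage import rotate

def epipolar_consistency(proj_a, proj_b, angle_deg):
    """Candidate consistency metric: rotate both ROI images so the candidate
    epipolar lines become rows, then compare row-wise line integrals."""
    ra = rotate(proj_a, angle_deg, reshape=False, order=1)
    rb = rotate(proj_b, angle_deg, reshape=False, order=1)
    return float(np.mean((ra.sum(axis=1) - rb.sum(axis=1)) ** 2))

def estimate_epipolar_direction(proj_a, proj_b, threshold=1.0,
                                angles=np.linspace(-10.0, 10.0, 81)):
    """Adjust the epipolar-line estimate over candidate angles; stop early
    once the metric falls below the threshold, else keep the best found."""
    best_angle, best_metric = angles[0], np.inf
    for angle in angles:
        metric = epipolar_consistency(proj_a, proj_b, angle)
        if metric < best_metric:
            best_angle, best_metric = angle, metric
        if metric < threshold:
            break
    return best_angle, best_metric

# Mock ROI projections for demonstration.
proj_a = np.random.rand(64, 64)
proj_b = np.roll(proj_a, 2, axis=0)
print(estimate_epipolar_direction(proj_a, proj_b))
```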
Abstract:
A wearable calibration target has a band configured to wrap about a patient's limb and one or more calibration patches coupled to the band, wherein each of the one or more calibration patches is formed from a material having a known attenuation to X-ray radiation.
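The abstract describes only the target itself; as one plausible, purely hypothetical use, the patches' known attenuation values could anchor a per-image calibration, sketched below with placeholder numbers.

```python
import numpy as np

# Known attenuation values (placeholder numbers, assumed units) for the
# calibration patches on the band, and the mean pixel value measured for
# each patch in an acquired radiograph.
known_attenuation = np.array([0.2, 0.5, 0.8])
measured_intensity = np.array([2050.0, 1400.0, 760.0])

# Least-squares linear fit mapping measured intensity to attenuation,
# which could then be applied to other pixels in the same image.
slope, intercept = np.polyfit(measured_intensity, known_attenuation, 1)
print(f"estimated attenuation at intensity 1000: {slope * 1000.0 + intercept:.3f}")
```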
Abstract:
A method for imaging a subject obtains a first set of acquired projection images of a subject volume, wherein each projection image in the first set has a corresponding acquisition angle, and forms an initial reconstructed volume image from the first set. A second set of synthetic projection images is generated by processing the acquired projection images and is combined with the first set to form a combined set of projection images. The method augments the initial reconstructed image to form an improved reconstructed image by at least a first iteration of an iterative reconstruction process that uses the initial reconstructed image with the combined set of acquired and synthetic projection images, and at least a subsequent iteration of the iterative reconstruction process that uses the first set of acquired projection images and fewer than, or none of, the second set of synthetic projection images. The improved reconstructed image is rendered on a display.
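A toy 2-D sketch of the two-stage scheme follows, assuming a simple parallel-beam projector, SIRT-style updates, and synthetic projections formed by averaging neighboring acquired projections; these specific choices (step size, synthesis by interpolation) are assumptions, not details from the abstract.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy parallel-beam forward projector: sum columns after rotation."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sino, angles_deg, shape):
    """Toy back projector: smear each projection and rotate back."""
    recon = np.zeros(shape)
    for row, a in zip(sino, angles_deg):
        smear = np.tile(row, (shape[0], 1))
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

def sirt(recon, sino, angles_deg, n_iters, step=0.01):
    """A few SIRT-style update iterations on the current reconstruction."""
    for _ in range(n_iters):
        residual = sino - forward_project(recon, angles_deg)
        recon = recon + step * back_project(residual, angles_deg, recon.shape)
    return recon

# Acquired projections of a simple phantom, plus crude synthetic projections
# at intermediate angles obtained by averaging neighbors.
acq_angles = np.linspace(-20.0, 20.0, 11)
phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0
acq = forward_project(phantom, acq_angles)
syn_angles = (acq_angles[:-1] + acq_angles[1:]) / 2.0
syn = (acq[:-1] + acq[1:]) / 2.0

recon = back_project(acq, acq_angles, phantom.shape)        # initial volume
recon = sirt(recon, np.concatenate([acq, syn]),
             np.concatenate([acq_angles, syn_angles]), n_iters=3)  # stage 1
recon = sirt(recon, acq, acq_angles, n_iters=5)             # stage 2, acquired only
```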
Abstract:
A method for forming an image reconstructs a volume image according to X-ray projection images acquired at a set of acquisition angles. The full volume image is partitioned to form at least a first and a second non-overlapping sub-volume. Within each sub-volume, forward projection images are calculated, with corresponding forward projection images computed at the acquisition angles and with intermediate forward projection images computed at angles between the acquisition angles. A weight factor is calculated that relates the contribution of each pixel in the X-ray projection images to each sub-volume at each acquisition angle. Synthesized sub-volume projection images are formed according to the calculated weight factors and the acquired projection images in each sub-volume, and these are combined to form synthesized projection images for the full volume image. A second volume image is reconstructed according to the acquired X-ray projection images and the synthesized projection images. The reconstructed second volume image is displayed, stored, or transmitted.
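A minimal sketch of the weighting idea, assuming the weight factor is simply the fraction of each detector pixel's forward-projected signal attributable to a given sub-volume; the toy 2-D projector and the omission of intermediate angles are simplifications, not details from the abstract.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(vol2d, angles_deg):
    """Toy parallel-beam projector (2-D slice standing in for a volume)."""
    return np.stack([rotate(vol2d, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

# Partition the "volume" into two non-overlapping sub-volumes.
vol = np.zeros((64, 64)); vol[10:30, 20:44] = 1.0; vol[40:55, 10:50] = 0.5
sub_a, sub_b = np.zeros_like(vol), np.zeros_like(vol)
sub_a[:32], sub_b[32:] = vol[:32], vol[32:]

angles = np.linspace(-15.0, 15.0, 7)
proj_full = forward_project(vol, angles)
proj_a = forward_project(sub_a, angles)

# Assumed weight factor: fraction of each projection pixel's signal that
# comes from sub-volume A at each acquisition angle.
w_a = proj_a / (proj_full + 1e-6)

# Synthesized sub-volume projections from the acquired projections and weights;
# here the full forward projection stands in for measured projection images.
acquired = proj_full
synth_a = w_a * acquired
synth_b = (1.0 - w_a) * acquired
```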
Abstract:
A method of automatic tooth segmentation, executed at least in part on a computer system, acquires volume image data for either or both upper and lower jaw regions of a patient and identifies image content for a specified jaw from the acquired volume image data. For the specified jaw, the method estimates the average tooth height for teeth within the specified jaw, finds a jaw arch region, detects one or more separation curves between teeth in the jaw arch region, defines an individual tooth sub-volume according to the estimated average tooth height and the detected separation curves, segments at least one tooth from within the defined sub-volume, and displays the at least one segmented tooth.
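The following greatly simplified 2-D sketch mimics the sequence of steps on a panoramic-style image, with thresholding standing in for arch detection and column-profile minima standing in for separation curves; the threshold values and the reduction to bounding boxes are assumptions for illustration only.

```python
import numpy as np

def tooth_bounding_boxes(panoramic, tooth_thresh=0.5):
    """Simplified stand-in for the abstract's steps: the thresholded region
    approximates the jaw arch, the span of occupied rows approximates tooth
    height, and minima of the column-wise profile approximate separation
    curves between neighboring teeth."""
    mask = panoramic > tooth_thresh
    if not mask.any():
        return []
    rows = np.where(mask.any(axis=1))[0]
    top, bottom = rows.min(), rows.max()            # estimated tooth height span
    profile = mask[top:bottom + 1].sum(axis=0).astype(float)

    # Separation "curves" reduce here to column positions of local minima.
    seps = [c for c in range(1, len(profile) - 1)
            if profile[c] < profile[c - 1] and profile[c] <= profile[c + 1]
            and profile[c] < 0.25 * profile.max()]

    # Each pair of adjacent separations bounds one tooth sub-region.
    edges = [0] + seps + [panoramic.shape[1] - 1]
    return [(top, bottom, left, right)
            for left, right in zip(edges[:-1], edges[1:])]

# Synthetic image with two "teeth".
img = np.zeros((40, 60)); img[10:30, 5:25] = 1.0; img[10:30, 32:55] = 1.0
print(tooth_bounding_boxes(img))
```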
Abstract:
A method for 3-D cephalometric analysis acquires reconstructed volume image data from a computed tomographic scan of a patient's head. The acquired volume image data is simultaneously displayed in at least a first 2-D view and a second 2-D view. For an anatomical feature of the head, an operator instruction positions a reference mark corresponding to the feature on either the first or the second displayed 2-D view, and the reference mark is then displayed on each of the at least first and second displayed 2-D views. In at least the first and second displayed 2-D views, one or more connecting lines are displayed between two or more of the positioned reference marks. One or more cephalometric parameters are derived according to the positioned reference marks, and the derived parameters are displayed.
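As a small illustration of deriving a cephalometric parameter from positioned reference marks, the sketch below computes the angle between two connecting lines that share a landmark; the landmark names and coordinates are hypothetical and not taken from the abstract.

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` between the connecting lines
    vertex->p1 and vertex->p2, with all points given as 3-D reference marks."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical landmark coordinates (mm); names follow common cephalometric
# usage (sella, nasion, A-point), not the patent text.
nasion, sella, a_point = (10.0, 80.0, 95.0), (10.0, 60.0, 90.0), (12.0, 82.0, 60.0)
print(f"SNA-like angle: {angle_at(nasion, sella, a_point):.1f} degrees")
```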
Abstract:
A method and apparatus for generating a color mapping for a dental object. The method includes generating a transformation matrix according to a set of spectral reflectance data for a statistically valid sampling of teeth. Illumination is directed toward the dental object over at least a first, a second, and a third wavelength band, one wavelength band at a time. For each of a plurality of pixels in an imaging array, an image data value is obtained, corresponding to each of the at least first, second, and third wavelength bands. The transformation matrix is applied to form the color mapping by generating a set of visual color values for each of the plurality of pixels according to the obtained image data values and according to image data values obtained from a reference object at the at least first, second, and third wavelength bands. The color mapping can be stored in an electronic memory.
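A minimal sketch of applying such a transformation matrix follows, assuming three wavelength bands, per-band normalization against the reference-object readings, and placeholder matrix values; in the method itself the matrix is derived from spectral reflectance data of a tooth sample set.

```python
import numpy as np

# Hypothetical 3x3 transformation matrix (placeholder values) derived offline
# from spectral reflectance data of a statistically valid sampling of teeth.
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.85, 0.10],
              [0.00, 0.10, 0.90]])

# Per-pixel image data for three wavelength bands, plus the same bands
# measured on a reference object and used for normalization.
bands = np.random.rand(4, 4, 3)                 # H x W x 3 band values
reference = np.array([0.95, 0.92, 0.90])        # reference-object readings

# Normalize against the reference object, then apply the matrix to map each
# pixel's band values to a set of visual color values.
normalized = bands / reference
color = normalized @ M.T                        # H x W x 3 color mapping
```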