Abstract:
K-space data obtained from a magnetic resonance imaging scan in which motion was detected is split into two parts according to the timing of the motion, producing first and second sets of k-space data corresponding to different poses. Sub-images are reconstructed from the first and second sets of k-space data and used as inputs to a deep neural network, which transforms them into a motion-corrected image.
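As an illustration of the splitting step, the sketch below zero-fills a toy 2-D k-space around a hypothetical motion-detection line and reconstructs the two pose-specific sub-images that would feed the network. The 64×64 phantom, the split position, and all helper names are invented for the example; the network itself is not modeled.

```python
import numpy as np

def split_kspace_by_motion(kspace, motion_line):
    # Zero-fill the lines acquired before/after the detected motion so
    # each set holds only the data from one pose (hypothetical helper).
    first, second = np.zeros_like(kspace), np.zeros_like(kspace)
    first[:motion_line] = kspace[:motion_line]
    second[motion_line:] = kspace[motion_line:]
    return first, second

def reconstruct_sub_image(partial_kspace):
    # Zero-filled inverse-FFT reconstruction of one pose's sub-image.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(partial_kspace)))

# Toy phantom and its k-space; motion assumed detected at phase-encode line 32.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(image))
k1, k2 = split_kspace_by_motion(kspace, motion_line=32)
# The two sub-images become the input channels of the correction network.
net_input = np.stack([reconstruct_sub_image(k1), reconstruct_sub_image(k2)])
```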
Abstract:
A method for sparse image reconstruction includes acquiring coil data from a magnetic resonance imaging device. The coil data includes undersampled k-space data corresponding to a subject. The method further includes processing the coil data using an image reconstruction technique to generate an initial undersampled image. The method also includes generating a reconstructed image based on the coil data, the initial undersampled image, and a plurality of iterative blocks of a flared network. A first iterative block of the flared network receives the initial undersampled image. Each of the plurality of iterative blocks includes a data consistency unit and a regularization unit, and the iterative blocks are connected both by direct connections from one iterative block to the following iterative block and by a plurality of dense skip connections to non-adjacent iterative blocks. The flared network is based on a neural network trained using previously acquired coil data.
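A minimal numerical sketch of the block structure described above, assuming an FFT-based data consistency step, a simple smoothing filter as a stand-in for the learned regularization unit, and averaging as the dense-skip combination rule. The actual flared network's learned components and connection details are not specified by the abstract.

```python
import numpy as np

def data_consistency(x, kspace, mask):
    # Enforce agreement with the measured coil data at sampled locations.
    k = np.fft.fft2(x)
    k[mask] = kspace[mask]
    return np.real(np.fft.ifft2(k))

def regularize(x, weight=0.1):
    # Stand-in for the learned regularization unit (simple smoothing).
    return (1 - weight) * x + weight * np.roll(x, 1, axis=0)

def flared_reconstruct(kspace, mask, n_blocks=4):
    # Initial undersampled image from zero-filled k-space.
    x = np.real(np.fft.ifft2(np.where(mask, kspace, 0)))
    outputs = []
    for _ in range(n_blocks):
        # Dense skip connections: each block also sees the average of
        # ALL earlier block outputs (non-adjacent blocks included),
        # not just its direct predecessor.
        if outputs:
            x = 0.5 * (x + np.mean(outputs, axis=0))
        x = regularize(data_consistency(x, kspace, mask))
        outputs.append(x)
    return x

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
full_k = np.fft.fft2(img)
mask = np.zeros((32, 32), dtype=bool); mask[::2] = True  # 2x undersampling
recon = flared_reconstruct(full_k, mask)
```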
Abstract:
A computer-implemented method for partial volume correction in Positron Emission Tomography (PET) image reconstruction includes receiving emission data related to an activity distribution, reconstructing the activity distribution from the emission data by maximizing a penalized-likelihood objective function to produce a reconstructed PET image, quantifying an activity concentration in a region of interest of the reconstructed PET image to produce an uncorrected quantitation, and correcting the uncorrected quantitation based on a pre-calculated contrast recovery coefficient value to account for a partial volume error in the uncorrected quantitation.
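The correction step itself reduces to dividing the measured activity concentration by the pre-calculated contrast recovery coefficient. The numbers below are purely illustrative:

```python
def correct_quantitation(uncorrected, crc):
    """Divide the measured activity concentration by the pre-calculated
    contrast recovery coefficient (CRC) to undo the partial-volume loss."""
    return uncorrected / crc

# Illustrative values: a small lesion measured at 7.2 kBq/mL whose size
# corresponds to a pre-calculated CRC of 0.6 (both numbers invented).
corrected = correct_quantitation(7.2, 0.6)   # ~ 12.0 kBq/mL
```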
Abstract:
An imaging system is provided that includes at least one detector configured to acquire imaging information, a processing unit, and a display unit. The processing unit is operably coupled to the at least one detector, and is configured to reconstruct an image using the imaging information. The image is organized into voxels having non-uniform dimensions. The processing unit is configured to perform a penalized likelihood (PL) image reconstruction using the imaging information. The PL image reconstruction includes a penalty function. Performing the penalty function includes interpolating a voxel size in at least one dimension from an original size to an interpolated size before determining a penalty function, determining the penalty function using the interpolated size to provide an initial penalty, interpolating the initial penalty to the original size to provide a modified penalty, and applying the modified penalty in the PL image reconstruction.
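The interpolate-penalize-interpolate sequence can be sketched as follows, assuming linear interpolation along the slice axis and a quadratic neighbour-difference penalty gradient; the actual penalty function and interpolation scheme are not specified by the abstract.

```python
import numpy as np

def interp_z(img, new_nz):
    # Linear interpolation along the slice (z) axis.
    old_nz = img.shape[0]
    zs = np.linspace(0, old_nz - 1, new_nz)
    lo = np.floor(zs).astype(int)
    hi = np.minimum(lo + 1, old_nz - 1)
    w = (zs - lo)[:, None, None]
    return (1 - w) * img[lo] + w * img[hi]

def modified_penalty(img, interp_nz):
    # 1) interpolate the non-uniform voxels to a uniform size,
    up = interp_z(img, interp_nz)
    # 2) evaluate a quadratic neighbour-difference penalty gradient there,
    pen = np.zeros_like(up)
    pen[1:] += up[1:] - up[:-1]
    pen[:-1] += up[:-1] - up[1:]
    # 3) interpolate the penalty back to the original voxel size for use
    #    in the PL image reconstruction.
    return interp_z(pen, img.shape[0])
```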
Abstract:
The present discussion relates to the use of deep learning techniques to accelerate iterative reconstruction of images, such as CT, PET, and MR images. The present approach utilizes deep learning techniques to provide a better initialization to one or more steps of the numerical iterative reconstruction algorithm by learning a trajectory of convergence from estimates at different convergence states, so that the algorithm can reach the maximum or minimum of a cost function faster.
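The benefit of a learned initialization can be seen even in a toy least-squares problem: starting gradient descent from a point near the solution (standing in here for the network's estimate) reaches a lower cost within the same iteration budget than starting from zero. Everything below, including the problem and step size, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true

def residual_after(x0, n_steps, lr=5e-3):
    # Plain gradient descent on the least-squares cost ||Ax - b||^2.
    x = x0.copy()
    for _ in range(n_steps):
        x -= lr * A.T @ (A @ x - b)
    return np.linalg.norm(A @ x - b)

# Cold start from zero vs. a warm start near the solution, standing in
# for an initialization predicted by a trained network.
cold = residual_after(np.zeros(20), 50)
warm = residual_after(x_true + 0.1 * rng.standard_normal(20), 50)
```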
Abstract:
Aspects of the invention relate to generating an emission activity image as well as an emission attenuation map using an iterative update based on both the raw emission projection data and the raw radiography projection data, and an optimization function. The outputs include an optimized emission activity image and at least one of an optimized emission attenuation map or an optimized radiography image. In some aspects, an attenuation-corrected emission activity image is obtained using the optimized emission activity image and the optimized emission attenuation map.
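A toy fixed-point version of such an alternating scheme, with a drastically simplified forward model (elementwise attenuation) and an invented update rule in place of the patented optimization function:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
a_true = 1.0 + np.abs(rng.standard_normal(n))    # true emission activity
u_true = 0.1 * np.abs(rng.standard_normal(n))    # true attenuation map
emission = a_true * np.exp(-u_true)              # toy emission data
ct = u_true + 0.01 * rng.standard_normal(n)      # toy radiography data on u

a, u = np.ones(n), np.zeros(n)
for _ in range(200):
    # Alternate updates of the activity image and the attenuation map,
    # each driven by BOTH data sets (illustrative scheme, not the
    # patented optimization function).
    a = emission / np.exp(-u)                # activity from emission + u
    u = 0.5 * (ct - np.log(emission / a))    # attenuation from both data
corrected_activity = emission * np.exp(u)    # attenuation-corrected image
```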
Abstract:
According to some embodiments, an emission tomography scanner may acquire emission scan data. One or more anatomical images may be generated using an anatomical imaging system, and the anatomical images may be processed to obtain an initial attenuation image. An emission image and a corrected attenuation image may be jointly reconstructed from the acquired emission scan data, the corrected attenuation image representing a deformation of the initial attenuation image. A final reconstructed emission image may then be calculated based on the reconstructed emission image and/or the corrected attenuation image. The final reconstructed emission image may then be stored in a data storage system and/or displayed on a display system.
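The idea that the corrected attenuation image is a deformation of the initial one can be illustrated with a 1-D toy in which the deformation is a simple translation and the joint reconstruction criterion is replaced by a uniformity score. Both simplifications are assumptions of the sketch, not the abstract's method.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_init = np.convolve(rng.random(32), np.ones(5) / 5, mode="same")
mu_true = np.roll(mu_init, 2)               # subject moved between scans
emission = np.ones(32) * np.exp(-mu_true)   # toy data: flat true activity

def find_deformation(emission, mu_init, shifts=range(-4, 5)):
    # Search for the deformation (here a translation) of the initial
    # attenuation map whose corrected activity is most uniform -- a
    # stand-in for the joint-reconstruction objective.
    scores = {s: np.var(emission * np.exp(np.roll(mu_init, s)))
              for s in shifts}
    return min(scores, key=scores.get)

best_shift = find_deformation(emission, mu_init)
mu_corrected = np.roll(mu_init, best_shift)          # corrected attenuation
final_emission = emission * np.exp(mu_corrected)     # final emission image
```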
Abstract:
Methods, systems and non-transitory computer readable media for imaging are disclosed. Emission projection data corresponding to a target region of a subject is acquired using an emission tomography system. Additionally, one or more magnetic resonance images of the target region are generated using a magnetic resonance imaging system operatively coupled to the emission tomography system. A partially-determined attenuation map is determined by identifying one or more regions in the partially-determined attenuation map with a designated confidence level based on the magnetic resonance images. Further, a complete attenuation map and/or a complete activity map is reconstructed from the emission projection data using the partially-determined attenuation map as a constraint. One or more images corresponding to the target region are then generated based on the partially-determined attenuation map, the complete attenuation map and/or the complete activity map.
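Using the partially-determined map as a constraint might look like the following, where voxels identified with the designated confidence level are simply held fixed during each update; the gradient-style update rule and the mask are illustrative, not taken from the abstract.

```python
import numpy as np

def update_attenuation(mu, grad, known_mask, step=0.1):
    """One update step on the attenuation map that leaves voxels whose
    values are known with the designated confidence (e.g. from the MR
    images) unchanged -- the partially-determined map acts as a hard
    constraint (illustrative scheme)."""
    mu_new = mu - step * grad
    mu_new[known_mask] = mu[known_mask]   # constraint: keep known regions
    return mu_new

mu = np.full((4, 4), 0.05)
grad = np.ones((4, 4))
known = np.zeros((4, 4), dtype=bool)
known[:2] = True            # e.g. regions identified on the MR images
mu_next = update_attenuation(mu, grad, known)
```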
Abstract:
The subject matter discussed herein relates to a fast magnetic resonance imaging (MRI) method to suppress fine-line artifact in Fast-Spin-Echo (FSE) images reconstructed with a deep-learning network. The network is trained using fully sampled NEX=2 (number of excitations equal to 2) data. In each case, the two excitations are combined to generate fully sampled ground-truth images with no fine-line artifact, which are compared with the network-generated image in the loss function. However, only one of the excitations is retrospectively undersampled and input into the network during training. In this way, the network learns to remove both undersampling and fine-line artifacts. At inference, only NEX=1 undersampled data are acquired and reconstructed.
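The training-pair construction described above can be sketched as follows, assuming simple averaging as the combination of the two excitations and a uniform retrospective undersampling mask; the fine-line artifact model and the network itself are omitted.

```python
import numpy as np

def make_training_pair(ex1_k, ex2_k, accel=2):
    """From fully sampled NEX=2 k-space (two excitations), build one
    training pair: the target combines both excitations (so the fine-line
    artifact cancels), while the input is one excitation retrospectively
    undersampled (hypothetical helper; averaging is an assumption)."""
    target = np.abs(np.fft.ifft2((ex1_k + ex2_k) / 2))
    mask = np.zeros(ex1_k.shape[0], dtype=bool)
    mask[::accel] = True        # keep every accel-th phase-encode line
    net_input = np.abs(np.fft.ifft2(np.where(mask[:, None], ex1_k, 0)))
    return net_input, target

# Toy stand-ins for the two excitations' k-space data.
rng = np.random.default_rng(3)
ex1 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
ex2 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
x_in, y_true = make_training_pair(ex1, ex2)
```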
Abstract:
An imaging system and method are presented. Emission scan (ES) and anatomical scan (AS) data corresponding to a target volume in a subject are received. One or more at least partial AS images are reconstructed using the AS data. An image-space certainty (IC) map representing a confidence level (CL) for attenuation coefficients of selected voxels in the AS images, and a preliminary attenuation (PA) map based on the AS images, are generated. One or more selected attenuation factors (AF) in projection space are initialized based on the PA map. A projection-space certainty (PC) map representing the CL for the selected AF is generated based on the IC map. An emission image of the target volume is initialized. The selected AF and the emission image are iteratively updated based on the ES data, the PC map, the initial AF, and/or the initial emission image. A desired emission image and/or desired AF values are determined based on the iteratively updated AF and/or emission image.
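One plausible reading of the PC-map-weighted update is a step size scaled by (1 - certainty), so that high-confidence attenuation factors stay close to their initialization. This soft weighting is an assumption of the sketch, not the abstract's stated scheme.

```python
import numpy as np

def update_attenuation_factors(af, grad, pc_map, step=0.5):
    """Iterative update of projection-space attenuation factors (AF),
    scaled by (1 - certainty): factors the PC map marks as confident
    barely move from their PA-map initialization (illustrative)."""
    return af - step * (1.0 - pc_map) * grad

af = np.full(6, 0.8)                              # initialized AF values
grad = np.ones(6)                                 # toy update direction
pc = np.array([1.0, 1.0, 0.5, 0.5, 0.0, 0.0])     # confidence per projection
af_next = update_attenuation_factors(af, grad, pc)
```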