Abstract:
This invention discloses methods and apparatuses for 3D imaging in Magnetoencephalography (MEG), Magnetocardiography (MCG), and electrical activity in any biological tissue such as neural/muscle tissue. This invention is based on the Field Paradigm, which is founded on the principle that the field intensity distribution in a 3D volume space uniquely determines the 3D density distribution of the field emission source and vice versa. Electrical neural/muscle activity in any biological tissue results in an electrical current pattern that produces a magnetic field. This magnetic field is measured in a 3D volume space that extends in all directions, including substantially along the radial direction from the center of the object being imaged. Further, magnetic field intensity is measured at each point along three mutually perpendicular directions. This measured data captures all the available information and facilitates a computationally efficient closed-form solution to the 3D image reconstruction problem without the use of heuristic assumptions. This is unlike prior art, where measurements are made only on a surface at a nearly constant radial distance from the center of the target object, and along a single direction. Therefore, necessary, useful, and available data are ignored and left unmeasured in prior art. Consequently, prior art does not provide a closed-form solution to the 3D image reconstruction problem and must rely on heuristic assumptions. The methods and apparatuses of the present invention reconstruct a 3D image of the neural/muscle electrical current pattern in MEG, MCG, and related areas, by processing image data in either the original spatial domain or the Fourier domain.
Abstract:
Apparatus and methods based on signal processing techniques are disclosed for determining the distance of an object from a camera, rapid autofocusing of a camera, and obtaining focused pictures from blurred pictures produced by a camera. The apparatus of the present invention includes a camera characterized by a set of four camera parameters: position of the image detector or film inside the camera, focal length of the optical system in the camera, the size of the aperture of the camera, and the characteristics of the light filter in the camera. In the method of the present invention, at least two images of the object are recorded with different values for the set of camera parameters. The two images are converted to a standard format to obtain two normalized images. The values of the camera parameters and the normalized images are substituted into an equation obtained by equating two expressions for the focused image of the object. The two expressions for the focused image are based on a new deconvolution formula which requires computing only the derivatives of the normalized images and a set of weight parameters dependent on the camera parameters and the point spread function of the camera. In particular, the deconvolution formula does not involve any Fourier transforms and therefore the present invention has significant advantages over prior art. The equation which results from equating two expressions for the focused image of the object is solved to obtain a set of solutions for the distance of the object. A third image of the object is then recorded with new values for the set of camera parameters. The solution for distance which is consistent with the third image and the new values for the camera parameters is determined to obtain the distance of the object. Based on the distance of the object, a set of values is determined for the camera parameters for focusing the object. The camera parameters are then set equal to these values to accomplish autofocusing. 
After determining the distance of the object, the focused image of the object is obtained using the deconvolution formula. A generalized version of the method of determining the distance of an object can be used to determine one or more unknown camera parameters. This generalized version is also applicable to any linear shift-invariant system for system parameter estimation and signal restoration.
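The derivative-based deconvolution described above can be illustrated with a minimal sketch. Assuming a rotationally symmetric point spread function whose per-axis second central moment m2 is known, a second-order truncated expansion recovers the focused image from the blurred image and its Laplacian, with no Fourier transforms. The m2/2 weight below is an illustrative assumption for this truncated expansion, not the patent's exact set of weight parameters:

```python
import numpy as np

def deconvolve_laplacian(blurred, m2):
    """Approximate the focused image for a symmetric PSF with per-axis second
    central moment m2, via the truncated expansion
        focused ~= blurred - (m2 / 2) * Laplacian(blurred).
    Higher-order PSF moments are neglected in this sketch."""
    lap = (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0) +
           np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1) - 4.0 * blurred)
    return blurred - 0.5 * m2 * lap
```

Here the derivatives are discrete differences and `np.roll` imposes periodic boundaries; a real implementation would treat image borders explicitly.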
Abstract:
Apparatus for measuring magnetic field intensity characteristics around a target object enclosed in a 3D volume space is disclosed. It comprises (a) a means for magnetically polarizing the target object with a known polarizing magnetic field to introduce a Magnetic Density Image (MDI) f(r1), (b) a means for measuring magnetic field characteristics g(r2) around the target object at a set of points r2 in a 3D volume space that in particular extends substantially along a radial direction pointing away from the approximate center of the object, (c) a means for setting up a vector-matrix equation, and (d) a means for solving this vector-matrix equation and obtaining a solution for f(r1) that provides a 3D tomographic image of the target object. This novel apparatus is integrated with the frequency and phase encoding methods of the prior-art Magnetic Resonance Imaging (MRI) technique to achieve different trade-offs.
Abstract:
A three-dimensional (3D) tomographic image of a target object, such as soft tissue in humans, is obtained in the method and apparatus of the present invention. The target object is first magnetized by a polarizing magnetic field pulse. The magnetization of the object is specified by a 3D spatial Magnetic Density Image (MDI). The magnetic field due to the magnetized object is measured in a 3D volume space that extends in all directions, including substantially along the radial direction, not just on a surface as in prior art. This measured data includes additional information overlooked in prior art, and this data is processed to obtain a more accurate 3D image reconstruction in less time than in prior art. The methods and apparatuses of the present invention are combined with the frequency and phase encoding techniques of the prior-art Magnetic Resonance Imaging (MRI) technique to achieve different trade-offs.
Abstract:
Field Image Tomography (FIT) is a fundamental new theory for determining the three-dimensional (3D) spatial density distribution of field emitting sources. The field can be the intensity of any type of field including (i) Radio Frequency (RF) waves in Magnetic Resonance Imaging (MRI), (ii) Gamma radiation in SPECT/PET, and (iii) the gravitational field of the earth, moon, etc. FIT exploits the property that field intensity decreases with increasing radial distance from the field source, so that the field intensity distribution measured in an extended 3D volume space can be used to determine the 3D spatial density distribution of the emitting source elements. A method and apparatus are disclosed for MRI of target objects based on FIT. Spinning atomic nuclei of a target object in a magnetic field are excited by beaming a suitable Radio Frequency (RF) pulse. These excited nuclei emit RF radiation while returning to their normal state. The intensity or amplitude distribution of the RF emission field g is measured in a 3D volume space that may extend substantially along the radial direction around the emission source. The measured data g is related to the 3D tomographic image f through a system matrix H that depends on the MRI apparatus, and to noise n, through the vector equation g=Hf+n. This equation is solved to obtain the tomographic image f of the target object by a method that reduces the effect of noise.
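One standard way to solve g=Hf+n while reducing the effect of noise is Tikhonov-regularized least squares; the abstract does not specify this particular regularizer, so the sketch below is an assumption:

```python
import numpy as np

def reconstruct(H, g, lam=1e-2):
    """Solve g = H f + n by Tikhonov-regularized least squares:
        f = argmin ||H f - g||**2 + lam * ||f||**2
          = (H^T H + lam I)^(-1) H^T g.
    The penalty lam damps the noise amplification caused by small
    singular values of the system matrix H."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)
```

For large H, an iterative solver (e.g. conjugate gradients on the normal equations) would replace the dense solve.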
Abstract:
A method based on image defocus information is disclosed for determining distance (or ranging) of objects from a camera system and autofocusing of camera systems. The method uses signal processing techniques. The present invention includes a camera characterized by a set of four camera parameters: position of the image detector inside the camera, focal length of the optical system in the camera, the size of the aperture of the camera, and the characteristics of the light filter in the camera. In the method of the present invention, at least two images of the object are recorded with different values for the set of camera parameters. The two images are converted to one-dimensional signals by summing them along a particular direction, whereby the effect of noise and the amount of computation are significantly reduced. Fourier coefficients of the one-dimensional signals and a log-by-rho-squared transform are used to obtain a calculated table. A stored table is calculated using the log-by-rho-squared transform and the Modulation Transfer Function of the camera system. Based on the calculated table and the stored table, the distance of the desired object is determined. In autofocusing, the calculated table and the stored table are used to calculate a set of focus camera parameters. The camera system is then set to the focus camera parameters to accomplish autofocusing.
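The summation and log-by-rho-squared steps can be sketched as follows, assuming a Gaussian blur model, under which the transform of the ratio of Fourier magnitudes is constant in the spatial frequency rho. This is an illustrative assumption; the exact table construction in the patent may differ:

```python
import numpy as np

def log_by_rho_squared(img1, img2, axis=0):
    """Sum two differently defocused images along one direction to form 1D
    signals, then compute ln(|G1(rho)| / |G2(rho)|) / rho**2 over nonzero
    frequencies rho. Under a Gaussian blur model this equals the constant
    (sigma2**2 - sigma1**2) / 2, which can index a precomputed (stored) table."""
    s1 = img1.sum(axis=axis)                 # 2D image -> 1D signal
    s2 = img2.sum(axis=axis)
    G1 = np.fft.rfft(s1)                     # Fourier coefficients
    G2 = np.fft.rfft(s2)
    rho = np.fft.rfftfreq(s1.size) * 2.0 * np.pi
    valid = rho > 0                          # skip the DC term
    return np.log(np.abs(G1[valid]) / np.abs(G2[valid])) / rho[valid] ** 2
```

In practice only frequencies where both magnitudes are well above the noise floor would be kept before averaging the result.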
Abstract:
A method of fast matrix multiplication and a method and apparatus for fast solving of a matrix equation are disclosed. They are useful in many applications including image blurring, deblurring, and 3D image reconstruction, in 3D microscopy and computer vision. The methods and apparatus are based on a new theoretical result—the Generalized Convolution Theorem (GCT). Based on GCT, matrix equations that represent certain linear integral equations are first transformed to equivalent convolution integral equations through change of variables. Then the resulting convolution integral equations are evaluated or solved using the Fast Fourier Transform (FFT). Evaluating a convolution integral corresponds to matrix multiplication and solving a convolution integral equation corresponds to solving the related matrix equation through deconvolution. Carrying-out these convolution and deconvolution operations in the Fourier domain using FFT speeds up computations significantly. These results are applicable to both one-dimensional and multi-dimensional integral equations.
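A classic special case of this idea: a circulant matrix represents a circular convolution, so its matrix-vector product can be evaluated, and the corresponding matrix equation solved by deconvolution, with the FFT in O(n log n) instead of O(n^2). The sketch below illustrates only this special case; the Generalized Convolution Theorem of the patent extends the principle to a broader class of matrices via change of variables:

```python
import numpy as np

def circulant_matvec_fft(c, x):
    """Multiply the circulant matrix C with first column c by the vector x.
    C @ x equals the circular convolution c * x, so it is computed as an
    elementwise product in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve_fft(c, g):
    """Solve C f = g by deconvolution: elementwise division in the Fourier
    domain (assumes no Fourier coefficient of c is zero)."""
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(c)))
```

A well-conditioned example is a diagonally dominant stencil such as c = (4, 1, 0, ..., 0, 1), whose Fourier coefficients 4 + 2cos(2*pi*k/n) are bounded away from zero.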
Abstract:
A method and apparatus are disclosed for high-sensitivity Single-Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET). The apparatus includes a two-dimensional (2D) gamma detector array that, unlike a conventional SPECT machine, moves to different positions in a three-dimensional (3D) volume space near an emission source and records a data vector g which is a measure of gamma emission field. In particular, the 3D volume space in which emission data g is measured extends substantially along a radial direction r pointing away from the emission source, and unlike a conventional SPECT machine, each photon detector element in the 2D gamma detector array is provided with a very large collimator aperture. Data g is related to the 3D spatial density distribution f of the emission source, noise vector n, and a system matrix H of the SPECT/PET apparatus through the linear system of equations g=Hf+n. This equation is solved for f by a method that reduces the effect of noise.
Abstract:
A method and apparatus for directly sensing both the focused image and the three-dimensional shape of a scene are disclosed. This invention is based on a novel mathematical transform named Rao Transform (RT) and its inverse (IRT). RT and IRT are used for accurately modeling the forward and reverse image formation process in a camera as a linear shift-variant integral operation. Multiple images recorded by a camera with different camera parameter settings are processed to obtain 3D scene information. This 3D scene information is used in computer vision applications and as input to a virtual digital camera which computes a digital still image. This same 3D information for a time-varying scene can be used by a virtual video camera to compute and produce digital video data.
Abstract:
The present invention concerns a method of determining the distance between a surface patch of a 3-D spatial scene and a camera system. The distance of the surface patch is determined on the basis of at least a pair of images, each image formed using a camera system with either a finite or infinitesimal change in the value of at least one camera parameter. A first and second image of the 3-D scene are formed using the camera system, which is characterized by a first and second set of camera parameters and a point spread function, where the first and second sets of camera parameters have at least one dissimilar camera parameter value. A first and second subimage are selected from the first and second images so formed, where the subimages correspond to the surface patch of the 3-D scene whose distance from the camera system is to be determined. On the basis of the first and second subimages, a first constraint is derived between the spread parameters of the point spread function corresponding to the first and second subimages. On the basis of the values of the camera parameters, a second constraint is derived between the spread parameters of the point spread function corresponding to the first and second subimages. Using the first and second constraints, the spread parameters are then determined. On the basis of at least one of the spread parameters and the first and second sets of camera parameters, the distance between the camera system and the surface patch in the 3-D scene is determined.
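For a Gaussian point spread function, the two constraints commonly take the forms sigma1^2 - sigma2^2 = C (from the subimage spectra) and sigma1 = alpha*sigma2 + beta (from the two camera parameter settings); these specific forms are an illustrative assumption, not necessarily the patent's exact formulation. Substituting the second constraint into the first gives a quadratic in sigma2:

```python
import numpy as np

def solve_spread_parameters(C, alpha, beta):
    """Solve the two (assumed, illustrative) constraints
        (1) sigma1**2 - sigma2**2 = C          # from the subimage spectra
        (2) sigma1 = alpha * sigma2 + beta     # from the camera parameters
    for the spread parameters. Substituting (2) into (1) yields
        (alpha**2 - 1)*s2**2 + 2*alpha*beta*s2 + (beta**2 - C) = 0,
    a quadratic in sigma2 (requires alpha != 1). Returns both candidate
    (sigma1, sigma2) pairs; the physically consistent pair is selected
    downstream, e.g. using additional measurements."""
    a = alpha**2 - 1.0
    b = 2.0 * alpha * beta
    c = beta**2 - C
    disc = np.sqrt(b * b - 4.0 * a * c)
    roots = [(-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)]
    return [(alpha * s2 + beta, s2) for s2 in roots]
```

Once a spread parameter is fixed, the object distance follows from the lens equation and the camera geometry that produced alpha and beta.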