Abstract:
Imaging or measurement methods and systems, including methods and systems for finding the three-dimensional orientation and position of multiple dipole-like particles and single molecules, methods and systems for generating helical beams and helical spread functions, and methods and systems for super-resolution and super-localization of dense arrays of emitters.
Abstract:
Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three-dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiducial particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector.
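A minimal sketch of the estimation step is shown below, assuming a simple symmetric Gaussian PSF model and a Poisson noise model; a real system would substitute the calibrated, engineered 3D PSF. The grid size, PSF width, and optimizer choice are illustrative assumptions, not details from the abstract.

```python
# Minimal sketch: maximum-likelihood localization of a single emitter.
# A symmetric Gaussian stands in for the engineered 3D PSF (assumption).
import numpy as np
from scipy.optimize import minimize

def psf_model(params, yy, xx):
    """Expected photon counts for an emitter at (x0, y0) with amplitude A and background b."""
    x0, y0, A, b = params
    sigma = 1.3  # PSF width in pixels (assumed calibration value)
    return A * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2)) + b

def neg_log_likelihood(params, image, yy, xx):
    """Negative Poisson log-likelihood of the observed image given the PSF model."""
    mu = np.clip(psf_model(params, yy, xx), 1e-9, None)  # avoid log(0)
    return np.sum(mu - image * np.log(mu))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:15, 0:15]
truth = (7.3, 6.8, 200.0, 5.0)                 # x0, y0, amplitude, background
image = rng.poisson(psf_model(truth, yy, xx))  # synthetic camera frame

# Initial guess from the brightest pixel, then refine by MLE.
row, col = np.unravel_index(np.argmax(image), image.shape)
fit = minimize(neg_log_likelihood, x0=[col, row, image.max(), 1.0],
               args=(image, yy, xx), method="Nelder-Mead")
print("estimated position (x, y):", fit.x[:2])
```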
Abstract:
Systems, methods, and computer program products are disclosed to localize and/or image a dense array of particles. In some embodiments, a plurality of particles may be imaged using an imaging device. A plurality of point spread function dictionary coefficients of the image may be estimated using a point spread function dictionary, where the dictionary can include a plurality of point spread function responses corresponding to different particle positions. From the point spread function dictionary coefficients, the number of particles in the image can be determined. Moreover, the location of each particle in the image can be determined from these coefficients.
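One way to estimate such dictionary coefficients is a sparse decomposition; the sketch below uses orthogonal matching pursuit over a 1D grid with Gaussian PSFs standing in for measured dictionary entries. The grid size, PSF width, noise level, and stopping threshold are all assumptions for illustration.

```python
# Minimal sketch: counting and localizing particles via a PSF dictionary,
# using orthogonal matching pursuit as one possible coefficient estimator.
import numpy as np

n_pixels, n_positions = 64, 64
grid = np.arange(n_pixels)

# Dictionary: one column per candidate position (PSF centered at that position).
D = np.stack([np.exp(-(grid - p)**2 / (2 * 2.0**2)) for p in range(n_positions)], axis=1)
D /= np.linalg.norm(D, axis=0)

# Synthetic image: two particles plus noise.
true_positions = [20, 41]
y = D[:, true_positions].sum(axis=1) + 0.01 * np.random.default_rng(1).standard_normal(n_pixels)

# Greedily pick dictionary atoms until the residual is small.
support, residual = [], y.copy()
while np.linalg.norm(residual) > 0.1 and len(support) < 10:
    support.append(int(np.argmax(np.abs(D.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coeffs

print("estimated particle count:", len(support))
print("estimated positions:", sorted(support))
```

The number of selected atoms gives the particle count, and the selected dictionary indices give the particle positions.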
Abstract:
Some embodiments of the invention include a system comprising a positioning device configured to hold a sample and adjust the position of the sample in response to receiving a drift compensation signal; a first light source disposed to transilluminate the sample; a second light source disposed to epi-illuminate the sample; an optical system configured to receive light from the sample and generate a three-dimensional point spread function from the light from the sample; an image sensor disposed relative to the optical system that produces an image from the light collected from the sample via the optical system; and logic electrically coupled with the image sensor and the positioning device, the logic configured to determine one or more drift compensation values from images produced by the image sensor, and configured to send one or more drift compensation signals to the positioning device.
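A minimal sketch of the drift-compensation logic follows, assuming drift is estimated by cross-correlating a fiducial image against a reference frame; the positioning-device interface (`stage.move_relative`) and the pixel-to-stage calibration are hypothetical stand-ins.

```python
# Minimal sketch: estimate image drift by cross-correlation and command a
# (hypothetical) positioning stage to compensate.
import numpy as np

def estimate_drift(reference, current):
    """Integer-pixel drift (dy, dx) from the peak of the circular cross-correlation."""
    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(reference)) * np.fft.fft2(current)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts larger than half the field into negative values.
    return tuple(int(p if p <= s // 2 else p - s) for p, s in zip(peak, reference.shape))

def send_drift_compensation(stage, dy, dx, nm_per_pixel=100.0):
    """Convert the pixel shift to stage units and command the positioner (stub)."""
    stage.move_relative(-dx * nm_per_pixel, -dy * nm_per_pixel)

# Example with synthetic data: the 'current' frame is the reference shifted by (3, -2).
rng = np.random.default_rng(2)
reference = rng.random((128, 128))
current = np.roll(reference, shift=(3, -2), axis=(0, 1))
print("estimated drift (dy, dx):", estimate_drift(reference, current))
```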
Abstract:
Systems and methods are disclosed to enhance three-dimensional photoacoustic imaging behind, through, or inside a scattering material. Embodiments of the invention can increase the optical fluence in an ultrasound transducer focus and/or enhance the optical intensity using wavefront shaping before the scatterer. The photoacoustic signal induced by an object placed behind the scattering medium can serve as feedback to optimize the wavefront, enabling an order-of-magnitude enhancement of the photoacoustic amplitude. Using the enhanced optical intensity, the object can be scanned in two dimensions and/or a spot can be scanned by re-optimizing the wavefront before post-processing of the data to reconstruct the image. The temporal photoacoustic signal provides the information needed to reconstruct the third dimension.
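The sketch below illustrates one possible feedback loop of this kind, assuming a random complex transmission vector as a stand-in for the scattering medium and the intensity at the ultrasound focus as a proxy for the photoacoustic amplitude; a real system would read the transducer signal instead. The stepwise (coordinate-ascent) optimizer is only one of several algorithms that could be used.

```python
# Minimal sketch: wavefront optimization with the photoacoustic amplitude as feedback.
import numpy as np

rng = np.random.default_rng(3)
n_segments = 64                                     # SLM segments (assumed)
t = rng.standard_normal(n_segments) + 1j * rng.standard_normal(n_segments)  # medium -> focus

def photoacoustic_signal(phases):
    """Proxy feedback: optical intensity delivered to the transducer focus."""
    return np.abs(np.sum(np.exp(1j * phases) * t))**2

# Cycle each segment through test phases and keep the phase maximizing the feedback.
phases = np.zeros(n_segments)
test_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
for seg in range(n_segments):
    scores = []
    for p in test_phases:
        trial = phases.copy()
        trial[seg] = p
        scores.append(photoacoustic_signal(trial))
    phases[seg] = test_phases[int(np.argmax(scores))]

print("enhancement over flat wavefront:",
      photoacoustic_signal(phases) / photoacoustic_signal(np.zeros(n_segments)))
```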
Abstract:
Systems, methods, and software for scanning an ocular structure of an eye are provided. A method includes projecting light onto the ocular structure and scanning the light over a region of the eye by means of the natural movements of the living eye. Applicable ocular structures can include an ocular surface, a cornea, a sclera, an iris, a crystalline lens, an ocular fundus, a retina, a choroid, and a vitreous humor. A system includes a light source to create light, and optics to focus the light from the light source onto the ocular structure and to collect secondary light coming from the eye towards a detector. The system also includes a tracking system to register positions of the eye at different times, and a computer system to receive signals from the tracking system representative of eye positions at different times and to associate the detected signals with the tracked positions.
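A minimal sketch of the registration step follows, assuming synthetic tracker and detector streams and a simple binning of detector samples onto a map indexed by the tracked eye position; the random-walk motion model, bin grid, and sample counts are illustrative assumptions.

```python
# Minimal sketch: build a scan by associating detector readings with tracked eye positions.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 5000

# Synthetic streams: tracked eye position (degrees) and detector signal per sample.
eye_positions = np.cumsum(0.05 * rng.standard_normal((n_samples, 2)), axis=0)  # random walk
detector_signal = rng.random(n_samples)

# Register each sample onto a map by binning on the tracked position.
bins = np.linspace(-2, 2, 41)
accum = np.zeros((len(bins) - 1, len(bins) - 1))
counts = np.zeros_like(accum)
ix = np.clip(np.digitize(eye_positions[:, 0], bins) - 1, 0, len(bins) - 2)
iy = np.clip(np.digitize(eye_positions[:, 1], bins) - 1, 0, len(bins) - 2)
np.add.at(accum, (iy, ix), detector_signal)
np.add.at(counts, (iy, ix), 1)

scan_map = np.divide(accum, counts, out=np.zeros_like(accum), where=counts > 0)
print("fraction of map covered by natural eye movements:", (counts > 0).mean())
```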
Abstract:
Controlling the propagation and interaction of light in complex media has sparked major interest. Unfortunately, spatial light modulation devices suffer from limited speed, precluding real-time applications (e.g., imaging in live tissue). To address this problem, various embodiments use a phase-control technique to characterize complex media based on fast 1D spatial modulators and a 1D-to-2D transformation performed by the same medium being analyzed. Some embodiments use a micro-electro-mechanical grating light valve (GLV) with 1088 degrees of freedom modulated at 350 kHz, enabling unprecedented high-speed wavefront measurements. Some embodiments continuously measure the transmission matrix, calculate the optimal wavefront, and project a focus through various dynamic scattering samples in real time (e.g., within 2.4 ms per cycle). As such, some embodiments improve on prior wavefront-shaping modulation speeds by more than an order of magnitude and open new opportunities for optical processing using 1D-to-2D transformations.
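A minimal sketch of measuring one row of a transmission matrix and projecting a focus follows, using four-step phase stepping against a static reference field. The random complex vector standing in for the medium, the mode count, and the reference-field model are assumptions; the abstract does not specify the measurement protocol.

```python
# Minimal sketch: transmission-matrix row measurement by four-step phase stepping,
# then focusing with the conjugate phases.
import numpy as np

rng = np.random.default_rng(5)
n_modes = 128                                        # 1D modulator degrees of freedom (assumed)
t = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)  # medium -> target pixel
E_ref = 10.0 + 0.0j                                  # static reference field at the target (assumed)

def intensity_at_target(segment, phase):
    """Camera reading when one segment is phase-stepped against the reference."""
    return np.abs(E_ref + t[segment] * np.exp(1j * phase))**2

# Four-step interferometry recovers each complex coefficient (up to conj(E_ref)).
measured = np.zeros(n_modes, dtype=complex)
for k in range(n_modes):
    I0, I90, I180, I270 = (intensity_at_target(k, p) for p in (0, np.pi/2, np.pi, 3*np.pi/2))
    measured[k] = (I0 - I180) / 4 + 1j * (I270 - I90) / 4

# Optimal wavefront: conjugate the measured phases so all contributions add in phase.
optimal_phases = -np.angle(measured)
focus = np.abs(np.sum(t * np.exp(1j * optimal_phases)))**2
flat = np.abs(np.sum(t))**2
print("focus enhancement vs. flat wavefront:", focus / flat)
```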
Abstract:
Recent remarkable progress in wave-front shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibers comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibers, and its applications, is in its infancy, opening opportunities to take advantage of complex mode interactions. Various embodiments of the present technology provide wave-front shaping for controlling nonlinear phenomena in multimode fibers. Using a spatial light modulator at the fiber's input and a genetic algorithm optimization, some embodiments control a highly nonlinear stimulated Raman scattering cascade and its interplay with four-wave mixing via flexible implicit control of the superposition of modes coupled into the fiber.
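The sketch below shows the shape of a genetic-algorithm loop over input phase patterns; the fitness function is a toy stand-in for a measured spectral metric (e.g., power in a chosen Stokes band) and does not model the fiber physics. Population size, mutation scale, and pixel count are illustrative assumptions.

```python
# Minimal sketch: genetic-algorithm search over SLM phase patterns with a toy fitness.
import numpy as np

rng = np.random.default_rng(6)
n_pixels, pop_size, n_generations = 32, 40, 60

target = rng.uniform(0, 2 * np.pi, n_pixels)          # hidden optimum (toy model)

def fitness(phases):
    """Toy stand-in for the measured feedback signal from the spectrometer."""
    return np.cos(phases - target).sum()

population = rng.uniform(0, 2 * np.pi, (pop_size, n_pixels))
for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[::-1]][:pop_size // 2]  # keep the better half
    # Breed offspring by uniform crossover plus small phase mutations.
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(n_pixels) < 0.5
        child = np.where(mask, a, b) + 0.1 * rng.standard_normal(n_pixels)
        children.append(np.mod(child, 2 * np.pi))
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("best fitness:", fitness(best), "of maximum", float(n_pixels))
```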