Abstract:
What is disclosed is a video system and method that accounts for differences in imaging characteristics of differing video systems used to acquire video of respective regions of interest of a subject being monitored for a desired physiological function. In one embodiment, video is captured using N video imaging devices, where N≥2, of respective regions of interest of a subject being monitored for a desired physiological function (e.g., a respiratory or cardiac function). Each video imaging device is different but has complementary imaging characteristics. A reliability factor f is determined for each of the devices in a manner more fully disclosed herein. A time-series signal is generated from each of the videos. Each time-series signal is weighted by its respective reliability factor, and the weighted signals are combined to obtain a composite signal. A physiological signal can then be extracted from the composite signal. The extracted physiological signal corresponds to the desired physiological function.
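The weighted combination described above can be illustrated with a minimal Python sketch; the function name, the use of NumPy, and the choice to normalize the reliability factors to sum to one are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def combine_signals(signals, reliabilities):
    """Weight each device's time-series signal by its reliability
    factor (normalized here so weights sum to 1, an assumed choice)
    and sum into a single composite signal."""
    w = np.asarray(reliabilities, dtype=float)
    return np.average(np.asarray(signals, dtype=float), axis=0, weights=w / w.sum())
```

With equal reliability factors this reduces to a plain average of the per-device signals.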
Abstract:
What is disclosed is a system and method for compensating for motion-induced artifacts in a physiological signal obtained from a video. In one embodiment, a video of a first and a second region of interest of a subject being monitored for a desired physiological function is captured by a video device. The first region is an area of exposed skin wherein a desired signal corresponding to the physiological function can be registered. The second region is an area where movement is likely to induce motion artifacts into that signal. The video is processed to isolate pixels in the image frames associated with these regions. Pixels of the first region are processed to obtain a time-series signal. A physiological signal is extracted from the time-series signal. Pixels of the second region are analyzed to identify motion. The physiological signal is processed to compensate for the identified motion.
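One simple way to realize the two-region scheme is sketched below in Python: frame differencing in the second (motion) region flags corrupted samples, which are then repaired by interpolation in the extracted signal. The thresholding and interpolation choices are assumptions for illustration, not the disclosed method.

```python
import numpy as np

def motion_mask(roi2_frames, thresh):
    """Flag frames where the motion region changes sharply: mean
    absolute frame-to-frame pixel difference above thresh."""
    diffs = np.abs(np.diff(roi2_frames.astype(float), axis=0)).mean(axis=(1, 2))
    return np.concatenate([[False], diffs > thresh])

def compensate(signal, mask):
    """Replace motion-corrupted samples by linear interpolation
    between neighbouring clean samples."""
    t = np.arange(len(signal))
    clean = ~mask
    return np.interp(t, t[clean], np.asarray(signal, dtype=float)[clean])
```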
Abstract:
What is disclosed is a method for monitoring a subject for a cardiac arrhythmia such as atrial fibrillation using an apparatus that can be comfortably worn by the subject around an area of exposed skin where a photoplethysmographic (PPG) signal can be registered. In one embodiment, the apparatus is a reflective or transmissive wrist-worn device with emitter/detector pairs fixed to an inner side of a band, with at least one illuminator emitting source light at a specified wavelength band. Each illuminator is paired to a respective photodetector comprising one or more sensors that are sensitive to the wavelength band of its paired illuminator. The photodetector measures the intensity of sensed light emitted by its respective illuminator. The signal obtained by the sensors comprises a continuous PPG signal. The continuous PPG signal is analyzed for peak-to-peak pulse points from which the existence of a cardiac arrhythmia event such as atrial fibrillation can be determined.
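A common way to turn peak-to-peak pulse points into an arrhythmia indicator is to measure the irregularity of the inter-peak intervals; the sketch below uses the coefficient of variation with an assumed threshold, which is an illustrative stand-in rather than the disclosed detection rule.

```python
import numpy as np

def irregular_rhythm(peak_times, cv_thresh=0.1):
    """Flag a possibly arrhythmic segment when the peak-to-peak
    intervals are highly irregular: coefficient of variation
    (std / mean) above cv_thresh (an assumed cutoff)."""
    intervals = np.diff(np.asarray(peak_times, dtype=float))
    return bool(intervals.std() / intervals.mean() > cv_thresh)
```

A perfectly regular pulse yields a coefficient of variation of zero, while atrial fibrillation typically produces markedly irregular intervals.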
Abstract:
What is disclosed is a system and method for increasing the accuracy of physiological signals obtained from video of a subject being monitored for a desired physiological function. In one embodiment, image frames of a video are received. Successive batches of image frames are processed. For each batch, pixels associated with an exposed body region of the subject are isolated and processed to obtain a time-series signal. If movement that occurred during capture of these image frames is below a pre-defined threshold level, then parameters of a predictive model are updated using this batch's time-series signal. Otherwise, the last updated predictive model is used to generate a predicted time-series signal for this batch, and the observed time-series signal is fused with the predicted time-series signal to obtain a fused time-series signal. The resulting time-series signal for each batch is processed to obtain a physiological signal for the subject corresponding to the physiological function.
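The batch logic above can be sketched as a small Python class; here the "predictive model" is simplified to a stored copy of the last clean batch, and the fusion is an assumed convex combination. Both simplifications are illustrative assumptions, not the patent's model.

```python
import numpy as np

class BatchProcessor:
    """Sketch: update a predictor on low-motion batches; on
    high-motion batches, fuse the observation with its prediction."""

    def __init__(self, motion_thresh, alpha=0.5):
        self.motion_thresh = motion_thresh
        self.alpha = alpha      # weight given to the observed signal when fusing
        self.template = None    # last clean batch, standing in for model parameters

    def process(self, batch_signal, motion_level):
        s = np.asarray(batch_signal, dtype=float)
        if motion_level < self.motion_thresh or self.template is None:
            self.template = s                  # "update the predictive model"
            return s
        predicted = self.template              # "predicted time-series signal"
        return self.alpha * s + (1 - self.alpha) * predicted  # fused signal
```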
Abstract:
What is disclosed is a novel video processing system and method wherein a plurality of image frames of a video is received, the video having been captured using a video camera with a spatial resolution of (M×N) in the (x, y) directions, respectively, and a temporal resolution (T) in frames per unit of time. First and second magnification factors f1, f2 are selected for spatial enhancement in the (x, y) directions. A third magnification factor f3 is selected for a desired temporal enhancement in (T). The video data is processed using a dictionary comprising high- and low-resolution patch cubes which are used to induce spatial and temporal components in the video where no data exists. A coarse high-resolution video X0 is generated which has an enhanced spatial resolution of (f1*M)×(f2*N) and an enhanced temporal resolution of (f3*T) frames. The coarse high-resolution video is then smoothed, if required, to generate a smoothed high-resolution video.
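The output geometry of the enhancement is easy to make concrete; the sketch below computes the enhanced shape, and substitutes plain linear interpolation along the time axis as a naive stand-in for the dictionary-based induction of missing frames (the dictionary method itself is not reproduced here).

```python
import numpy as np

def enhanced_shape(M, N, T, f1, f2, f3):
    """Shape (frames, rows, cols) of the coarse high-resolution video X0."""
    return (f3 * T, f1 * M, f2 * N)

def temporal_interpolate(trace, f3):
    """Naive stand-in for dictionary-based temporal induction: linearly
    interpolate one pixel's intensity trace to f3 times as many frames."""
    T = len(trace)
    t_new = np.linspace(0, T - 1, f3 * T)
    return np.interp(t_new, np.arange(T), np.asarray(trace, dtype=float))
```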
Abstract:
A color management system includes an input device, an input processor, and a plurality of print engines. The input processor is configured to transform, using an input transformation stored on the input device, a digital image in an input source color space into a digital image in a standardized multi-color color space. A print engine processor of each print engine is configured to receive the digital image in the standardized multi-color color space from the input processor and transform, using a print engine transformation stored on the print engine, the digital image in the standardized multi-color color space into a digital image in a print engine multi-color color space. The input transformation includes a color gamut coverage at least equal to the color gamut coverage of all the print engines in the color management system.
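The two-stage transform chain can be sketched in Python with per-pixel linear color transforms; real input and print engine transformations are typically lookup tables or profiles, so the matrix form here is an illustrative assumption.

```python
import numpy as np

def to_standard(image, input_transform):
    """Input source color space -> standardized multi-color space."""
    return image @ input_transform.T

def to_engine(image_std, engine_transform):
    """Standardized space -> a specific print engine's multi-color space."""
    return image_std @ engine_transform.T

def pipeline(image, input_transform, engine_transform):
    """Full chain: input device transform, then the engine's transform."""
    return to_engine(to_standard(image, input_transform), engine_transform)
```

Because the engine-specific step is isolated, the same standardized image can be routed to any of the print engines with only the final transform differing.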
Abstract:
What is disclosed is a system and method for processing a video acquired using a 2D monocular video camera system to assess respiratory function of a subject of interest. In various embodiments hereof, respiration-related video signals are obtained from a temporal sequence of 3D surface maps that have been reconstructed based on an amount of distortion detected in a pattern placed over the subject's thoracic region (chest area) during video acquisition, relative to known spatial characteristics of an undistorted reference pattern. Volume data and frequency information are obtained from the processed video signals to estimate chest volume and respiration rate. Other respiratory function estimations of the subject in the video can also be derived. The obtained estimations are communicated to a medical professional for assessment. The teachings hereof find their use in settings where it is desirable to assess patient respiratory function in a non-contact, remote sensing environment.
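Once a chest-volume time series has been recovered from the surface maps, the respiration rate can be read off its dominant frequency; a minimal Python sketch of that last step follows (the surface-map reconstruction itself is not reproduced, and the FFT-peak approach is an assumed, common choice).

```python
import numpy as np

def respiration_rate(volume_series, fps):
    """Respiration rate in breaths per minute, taken as the dominant
    frequency of the (demeaned) chest-volume signal."""
    v = np.asarray(volume_series, dtype=float)
    v = v - v.mean()
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```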
Abstract:
A video is received of a region of a subject where a signal corresponding to respiratory function can be registered by a video device. Pixels in the region in each of the image frames are processed to identify a respiratory pattern with peak/valley pairs. A peak/valley pair of interest is selected. An array of optical flow vectors is determined between a window of groups of pixel locations in a reference image frame, corresponding to the peak of the peak/valley pair, and a window in each of a number of image frames corresponding to the respiratory signal beginning at that peak and ending at a valley point. Each optical flow vector has a direction and a magnitude. A ratio is determined between upwardly pointing and downwardly pointing optical flow vectors. Based on the ratio, a determination is made whether the respiration phase for that peak/valley pair is inspiration or expiration.
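The ratio test can be sketched directly; note that which flow direction corresponds to inspiration depends on camera geometry, so the mapping below (mostly upward motion ⇒ inspiration) and the threshold of 1.0 are assumptions for illustration.

```python
import numpy as np

def respiration_phase(flow_vectors, ratio_thresh=1.0):
    """Classify a peak-to-valley segment from the ratio of upward- to
    downward-pointing optical flow vectors. In image coordinates dy < 0
    points up; mapping 'mostly upward' to inspiration is an assumption."""
    dy = np.asarray(flow_vectors, dtype=float)[:, 1]
    up = np.count_nonzero(dy < 0)
    down = np.count_nonzero(dy > 0)
    ratio = up / max(down, 1)  # guard against division by zero
    return "inspiration" if ratio > ratio_thresh else "expiration"
```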
Abstract:
What is disclosed is a system and method for determining respiration rate from a video of a subject. In one embodiment, a video is received comprising a plurality of time-sequential image frames of a region of a subject's body. Features of pixels in that region are extracted from each image frame, and vectors are formed from these features. Each image frame has an associated feature vector. An N×M video matrix of the vectors of length N is constructed such that the total number of columns M in the video matrix corresponds to a time duration over which the subject's respiration rate is to be determined. The video matrix is processed to obtain a matrix of eigenvectors in which the principal axes of variation due to motion associated with respiration are contained in the first few eigenvectors. One eigenvector is selected from the first few eigenvectors. A respiration rate is obtained from the selected eigenvector.
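This eigen-decomposition step is essentially principal component analysis over the frame axis; the sketch below selects the leading eigenvector of the feature covariance and reads the rate off the dominant frequency of its time course. How the patent selects among the first few eigenvectors and converts one to a rate is not specified here, so those steps are assumed.

```python
import numpy as np

def respiration_from_eigenvector(video_matrix, fps):
    """video_matrix: N x M (feature length x frames). Project frames
    onto the leading eigenvector of the feature covariance, then take
    the dominant frequency of the projection as breaths/min."""
    X = video_matrix - video_matrix.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    _, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    v = eigvecs[:, -1]                # leading eigenvector
    proj = v @ X                      # time course along that axis
    spectrum = np.abs(np.fft.rfft(proj))
    freqs = np.fft.rfftfreq(len(proj), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```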
Abstract:
What is disclosed is a system and method for the detection of cancerous tissue by analyzing blocks of pixels in a thermal image of a region of exposed skin tissue. In one embodiment, matrices are received which have been derived from vectors of temperature values associated with pixels in blocks of pixels isolated from a plurality of thermal images of both cancerous and non-cancerous tissue; these training vectors are rearranged to form the matrices. A thermal image of a subject is received. Blocks of pixels which reside within a region of exposed skin tissue are identified and isolated. For each identified pixel block, an image vector comprising temperature values associated with these pixels is formed. The vector is provided to a classifier which uses the matrices to classify the tissue associated with this block of pixels as being either cancerous or non-cancerous.
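One plausible shape for a matrix-driven classifier of this kind is a subspace-residual test: each class's matrix spans a subspace, and a block is assigned to the class whose subspace reconstructs its temperature vector best. The patent's actual classifier is not specified here, so this Python sketch is an illustrative stand-in; it assumes each class basis matrix has orthonormal columns.

```python
import numpy as np

def classify_block(block, class_bases):
    """Classify a pixel block's temperature vector by which class basis
    matrix (orthonormal columns assumed) reconstructs it with the
    smallest residual."""
    v = np.asarray(block, dtype=float).ravel()
    best_label, best_err = None, np.inf
    for label, B in class_bases.items():
        recon = B @ (B.T @ v)          # projection onto the class subspace
        err = np.linalg.norm(v - recon)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```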