Abstract:
Provided are systems and methods for determining timestamp information for High Dynamic Range and/or Wide Dynamic Range composite images. For example, an apparatus is provided that comprises an image sensor configured to capture a plurality of sub-frames of a scene, wherein each sub-frame comprises an image of the scene captured using an exposure time that is different from at least one other exposure time of at least one other sub-frame of the plurality of sub-frames. The apparatus is configured to receive, for each of the plurality of sub-frames, sub-pixel image data corresponding to a first portion of an image frame, determine composite image data corresponding to the first portion of the image frame based on values of the received sub-pixel image data for the plurality of sub-frames, identify an indicator based on the sub-frames corresponding to the received sub-pixel image data, and determine timestamp information based on the identified indicator.
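A minimal Python sketch of the idea, not the patent's implementation: it assumes each composite pixel is taken from the longest non-saturated exposure, that the index of the contributing sub-frame serves as the indicator, and that the timestamp comes from the sub-frame contributing the most pixels. All function names and both selection rules are illustrative assumptions.

    import numpy as np

    def composite_with_timestamp(sub_frames, exposure_times, capture_times, saturation=0.95):
        # For each pixel, keep the longest exposure that is not saturated,
        # normalized by its exposure time; record which sub-frame contributed.
        order = np.argsort(exposure_times)[::-1]            # longest exposure first
        h, w = sub_frames[0].shape
        composite = np.zeros((h, w))
        indicator = np.full((h, w), -1, dtype=np.int64)      # index of contributing sub-frame
        for idx in order:
            frame = sub_frames[idx].astype(np.float64)
            usable = (frame < saturation) & (indicator == -1)
            composite[usable] = frame[usable] / exposure_times[idx]
            indicator[usable] = idx
        # Assumed rule: the timestamp is the capture time of the sub-frame
        # that contributed the most pixels to this portion of the frame.
        counts = np.bincount(indicator[indicator >= 0], minlength=len(sub_frames))
        timestamp = capture_times[int(np.argmax(counts))]
        return composite, indicator, timestamp

    # Three sub-frames of the same scene captured with different exposure times.
    rng = np.random.default_rng(0)
    frames = [np.clip(rng.random((4, 4)) * t, 0, 1) for t in (1.0, 0.5, 0.25)]
    _, _, ts = composite_with_timestamp(frames, [1.0, 0.5, 0.25], [0.00, 0.01, 0.02])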
Abstract:
Systems, methods, and devices for enhancing an image are described herein. In some aspects, a device comprises a memory unit configured to store a left image and a right image. The left image and the right image each depict the same scene from a different viewpoint. The device further comprises a coder configured to retrieve the left image and the right image from the memory unit. The coder is configured to determine a depth map based on a difference in spatial orientation between the left and right images. The device further comprises a processor coupled to the coder. The processor is configured to identify a portion of the left or right image selected by a user. The processor is further configured to determine an enhancement region surrounding the portion selected by the user based on the depth map. The processor is further configured to enhance the enhancement region.
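A small Python sketch of the idea, not the patent's method: it assumes the depth map is a per-pixel disparity between the stereo pair and that the enhancement region is simply the set of pixels whose depth is close to the depth at the user-selected point. The matching scheme, tolerance, and enhancement (a brightness boost) are illustrative assumptions.

    import numpy as np

    def depth_from_disparity(left, right, max_disparity=16):
        # Rough per-pixel disparity: the horizontal shift of the right image
        # that best matches the left image (illustrative, no block matching).
        best_cost = np.full(left.shape, np.inf)
        disparity = np.zeros(left.shape, dtype=np.int64)
        for d in range(max_disparity):
            cost = np.abs(left - np.roll(right, d, axis=1))
            better = cost < best_cost
            best_cost[better] = cost[better]
            disparity[better] = d
        return disparity                      # larger disparity ~ closer to the camera

    def enhancement_region(depth_map, selected_xy, tolerance=1):
        # Pixels whose depth is within `tolerance` of the user-selected pixel's depth.
        x, y = selected_xy
        return np.abs(depth_map - depth_map[y, x]) <= tolerance

    rng = np.random.default_rng(1)
    left = rng.random((32, 32))
    right = np.roll(left, -3, axis=1)         # synthetic stereo pair (uniform shift)
    depth = depth_from_disparity(left, right)
    mask = enhancement_region(depth, selected_xy=(10, 10))
    enhanced = left.copy()
    enhanced[mask] = np.clip(enhanced[mask] * 1.2, 0, 1)   # simple brightness boost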
Abstract:
A method performed by an electronic device is described. The method includes obtaining sensor data corresponding to multiple occupants from an interior of a vehicle. The method also includes obtaining, by a processor, at least one occupant status for at least one of the occupants based on a first portion of the sensor data. The method further includes identifying, by the processor, at least one vehicle operation in response to the at least one occupant status. The method additionally includes determining, by the processor, based at least on a second portion of the sensor data, whether to perform the at least one vehicle operation. The method also includes performing the at least one vehicle operation in a case where it is determined to perform the at least one vehicle operation.
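A hedged Python sketch of the described decision flow; the occupant statuses, vehicle operation, and confirmation rule below are invented for illustration and do not come from the abstract.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SensorData:
        camera_sees_unattended_child: bool    # "first portion": e.g., interior camera
        seat_sensor_detects_weight: bool      # "second portion": e.g., seat weight sensor

    def occupant_status(data: SensorData) -> str:
        # Status obtained from the first portion of the sensor data.
        return "child_unattended" if data.camera_sees_unattended_child else "normal"

    def identify_operation(status: str) -> Optional[str]:
        # Candidate vehicle operation identified in response to the status.
        return "ventilate_and_alert" if status == "child_unattended" else None

    def should_perform(operation: str, data: SensorData) -> bool:
        # Decide, from a second portion of the sensor data, whether to act.
        return operation == "ventilate_and_alert" and data.seat_sensor_detects_weight

    data = SensorData(camera_sees_unattended_child=True, seat_sensor_detects_weight=True)
    operation = identify_operation(occupant_status(data))
    if operation is not None and should_perform(operation, data):
        print("performing vehicle operation:", operation)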
Abstract:
A method and apparatus for reducing random noise in digital video streams are described. In one innovative aspect, the device includes a noise estimator configured to determine a noise value for image data of the video stream. The device also includes a motion detector configured to determine a motion value indicative of motion between two frames of the video stream, the motion value based at least in part on the noise value. The device further includes a spatial noise reducer configured to filter the image data based at least in part on a blending factor and the noise value. The device also includes a temporal noise reducer configured to filter the video data based on the motion value and the noise value. The device also includes a blender configured to blend the spatially and temporally filtered values to provide a weighted composite filtered output image.
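A minimal Python sketch of such a blend, under assumptions not stated in the abstract: a 3x3 mean filter stands in for the spatial noise reducer, a two-frame average for the temporal noise reducer, and a per-pixel motion value (frame difference normalized by the noise value) drives the blend.

    import numpy as np

    def box_filter_3x3(img):
        # 3x3 mean filter via shifted copies (edges wrap; adequate for a sketch).
        acc = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return acc / 9.0

    def denoise_frame(current, previous, noise_value):
        # Motion value: per-pixel frame difference normalized by the noise estimate.
        motion = np.clip(np.abs(current - previous) / (noise_value + 1e-6), 0.0, 1.0)
        spatial = box_filter_3x3(current)                  # spatial noise reducer
        temporal = 0.5 * current + 0.5 * previous          # temporal noise reducer
        # Blender: high motion leans on the spatial result, low motion on the temporal one.
        return motion * spatial + (1.0 - motion) * temporal

    rng = np.random.default_rng(2)
    previous = rng.random((16, 16))
    current = previous + rng.normal(0.0, 0.05, (16, 16))   # same scene plus noise
    output = denoise_frame(current, previous, noise_value=0.05)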
Abstract:
Systems and methods are disclosed for detecting light sources and selectively adjusting exposure times of individual sensors in image sensors. In one aspect, a method includes capturing multiple images of a scene using a digital imager. The method includes generating a blended image by combining the multiple images, and executing an object detection algorithm on the blended image to locate and identify objects. The method includes determining a region of an identified object that contains a light source, and generating bounding box data around the light source region. The method includes communicating the bounding box data to the digital imager and updating the exposure time of the sensors in the bounding box region.
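An illustrative Python sketch under simplifying assumptions: exposure blending is plain averaging, a brightness threshold stands in for the object detection algorithm, and the exposure update is modeled as writing a shorter exposure time into a per-pixel exposure map over the bounding box.

    import numpy as np

    def blend_exposures(images):
        # Combine multiple captures of the scene by simple averaging (illustrative).
        return np.mean(np.stack(images), axis=0)

    def light_source_bounding_box(blended, threshold=0.9):
        # Stand-in for the object detection step: box around saturated pixels.
        ys, xs = np.nonzero(blended >= threshold)
        if len(xs) == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

    def update_exposure_map(exposure_map, box, short_exposure=0.25):
        # Apply a shorter exposure time to the sensors inside the bounding box.
        x0, y0, x1, y1 = box
        exposure_map[y0:y1 + 1, x0:x1 + 1] = short_exposure
        return exposure_map

    rng = np.random.default_rng(3)
    captures = [np.clip(rng.random((8, 8)) + 0.2 * i, 0, 1) for i in range(3)]
    blended = blend_exposures(captures)
    exposures = np.ones((8, 8))                   # nominal per-sensor exposure times
    box = light_source_bounding_box(blended)
    if box is not None:
        exposures = update_exposure_map(exposures, box)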
Abstract:
A method of processing data includes receiving, at a computing device, data representative of an image captured by an image sensor. The method also includes determining a first scene clarity score based on first data extracted from the data. The method further includes determining whether the first scene clarity score satisfies a threshold, and, if the first scene clarity score satisfies the threshold, determining a second scene clarity score based on second data extracted from the data.
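A small Python sketch of the two-stage check, with assumed scoring functions: a cheap contrast-based first score gates a more expensive gradient-based second score. The score definitions and threshold are illustrative, not the patent's.

    import numpy as np

    def coarse_clarity(image):
        # First score ("first data"): global contrast of a downsampled copy.
        return float(image[::4, ::4].std())

    def fine_clarity(image):
        # Second score ("second data"): mean gradient magnitude of the full image.
        gy, gx = np.gradient(image)
        return float(np.mean(np.hypot(gx, gy)))

    def scene_clarity(image, threshold=0.1):
        first = coarse_clarity(image)
        if first < threshold:            # threshold satisfied: scene may be unclear
            return first, fine_clarity(image)
        return first, None               # second, costlier score is skipped

    rng = np.random.default_rng(4)
    foggy = 0.5 + 0.05 * rng.random((64, 64))   # low-contrast, possibly hazy scene
    print(scene_clarity(foggy))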
Abstract:
Systems and methods for improving the contrast of image frames are disclosed. In one embodiment, a system for improving the contrast of image frames includes a control module configured to create an intensity histogram for an image frame, define a set of markers on an intensity range of the histogram, assign a blend factor to each marker, calculate a blend factor for each original pixel of the image, obtain a first equalized pixel output value, calculate a final equalized pixel output value using the blend factor, the first equalized pixel output value, and an original pixel value, and output new pixel values that constitute the output image.
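An illustrative Python sketch of blended equalization under assumptions: blend factors are assigned to a few intensity markers, interpolated per pixel over the intensity range, and used to mix the histogram-equalized value with the original pixel value. Marker positions and blend factors are invented for the example.

    import numpy as np

    def blended_equalization(image, markers=(0, 128, 255), marker_blend=(0.2, 1.0, 0.2)):
        # Intensity histogram and its cumulative distribution for equalization.
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        cdf = hist.cumsum() / hist.sum()
        equalized = cdf[image] * 255.0                      # first equalized pixel value
        blend = np.interp(image, markers, marker_blend)     # per-pixel blend factor from markers
        final = blend * equalized + (1.0 - blend) * image   # final equalized pixel value
        return final.astype(np.uint8)

    rng = np.random.default_rng(5)
    frame = rng.integers(60, 180, size=(32, 32), dtype=np.uint8)   # low-contrast frame
    enhanced = blended_equalization(frame)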