Abstract:
Systems and methods for reducing chrominance (chroma) noise in image data are provided. In one example of such a method, image data in YCC format may be received into logic of an image signal processor. Using the logic, noise may be filtered from a first chrominance component or a second chrominance component, or both, of the image data, using a sparse filter and a noise threshold. The noise threshold may be determined based at least in part on two of the components of the YCC image data.
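A minimal sketch of this idea follows, assuming a horizontal sparse filter with spread-out taps and a noise threshold derived from the luma value and the chroma value at the center pixel; the abstract does not specify the filter geometry or the threshold derivation, so `chroma_denoise`, its tap pattern, and the constant `k` are illustrative only.

```python
import numpy as np

def chroma_denoise(ycc, taps=(-4, -2, 0, 2, 4), k=0.05):
    """Filter noise from the chroma planes of an H x W x 3 YCC image.

    For each chroma pixel, only the sparse horizontal taps whose
    difference from the center value stays below a noise threshold are
    averaged. The threshold here is derived from the luma value and the
    chroma value (two components of the YCC data); this particular
    formula is an assumption for illustration.
    """
    ycc = np.asarray(ycc, dtype=np.float32)
    y = ycc[..., 0]
    out = ycc.copy()
    h, w = y.shape
    for ci in (1, 2):                                   # Cb and Cr planes
        chroma = ycc[..., ci]
        for r in range(h):
            for c in range(w):
                center = chroma[r, c]
                thresh = k * (y[r, c] + abs(center))    # noise threshold
                acc, n = 0.0, 0
                for t in taps:
                    cc = min(max(c + t, 0), w - 1)      # clamp at the borders
                    v = chroma[r, cc]
                    if abs(v - center) <= thresh:       # exclude edge-crossing taps
                        acc += v
                        n += 1
                out[r, c, ci] = acc / n if n else center
    return out
```

Because only taps within the threshold of the center pixel are averaged, boundaries between differently colored regions survive while low-amplitude chroma noise is smoothed away.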
Abstract:
Disclosed is a system for producing images that includes techniques for reducing the memory and processing power such operations require. The system provides techniques for programmatically representing a graphics problem, and further provides techniques for reducing and optimizing graphics problems for rendering in light of the available system resources, such as the presence of a compatible GPU.
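The following sketch illustrates one way such a programmatic representation and resource-aware rendering decision could look; `FilterNode`, `FilterGraph`, the fusion rule, and the GPU/CPU choice are hypothetical stand-ins, not the disclosed system's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class FilterNode:
    """One operation in a programmatically represented graphics problem."""
    name: str
    params: dict = field(default_factory=dict)

@dataclass
class FilterGraph:
    nodes: list

    def optimized(self):
        """Reduce the problem by fusing adjacent nodes that can run as a
        single pass (here, consecutive color-matrix operations), which
        avoids intermediate buffers and extra passes."""
        fused, i = [], 0
        while i < len(self.nodes):
            cur = self.nodes[i]
            if (i + 1 < len(self.nodes)
                    and cur.name == "color_matrix"
                    and self.nodes[i + 1].name == "color_matrix"):
                fused.append(FilterNode("color_matrix",
                                        {"stages": [cur.params, self.nodes[i + 1].params]}))
                i += 2
            else:
                fused.append(cur)
                i += 1
        return FilterGraph(fused)

def render(graph, gpu_available):
    """Choose a rendering path based on the available resources."""
    target = "GPU fragment programs" if gpu_available else "CPU fallback"
    return f"render {len(graph.nodes)} pass(es) via {target}"

g = FilterGraph([FilterNode("color_matrix", {"saturation": 1.2}),
                 FilterNode("color_matrix", {"contrast": 1.1}),
                 FilterNode("blur", {"radius": 2})])
print(render(g.optimized(), gpu_available=True))
```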
Abstract:
This disclosure pertains to novel devices, methods, and computer readable media for performing raw camera noise reduction using a novel, so-called “alignment mapping” technique to more effectively separate structure from noise in an image, in order to aid in the denoising process. Alignment mapping allows more structure to be extracted from the image and provides an understanding of that structure, yielding information about edge direction, edge length, and corner locations within the image. This information can be used to smooth long edges properly and to prevent fine image details, e.g., text, from being overly smoothed. The amount of noise in the image may be used to compute the thresholds and scaling parameters used in preparing the alignment map. According to some embodiments, a feature map may also be created for the image. Finally, the image may be smoothed using the created feature map as a mask.
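A simplified sketch of the overall flow, noise-dependent thresholding, a structure map, and masked smoothing, is shown below; the gradient-magnitude map and median-based noise estimate are stand-ins for the richer alignment map (edge direction, edge length, corners) described above.

```python
import numpy as np

def denoise_with_feature_map(img, k=3.0):
    """Illustrative sketch only: a gradient-magnitude 'feature map' with a
    noise-dependent threshold is used as a mask so that flat, noisy regions
    are smoothed while structure is preserved."""
    img = np.asarray(img, dtype=np.float32)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)

    # Noise estimate drives the threshold, echoing the abstract's idea of
    # noise-dependent thresholds/scaling (this MAD estimator is assumed).
    sigma = np.median(np.abs(mag - np.median(mag))) / 0.6745
    feature_map = np.clip(mag / (k * sigma + 1e-6), 0.0, 1.0)

    # Simple 3x3 box smoothing as a stand-in for the smoothing step.
    pad = np.pad(img, 1, mode="edge")
    smooth = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0

    # Use the feature map as a mask: keep structure, smooth the rest.
    return feature_map * img + (1.0 - feature_map) * smooth
```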
Abstract:
This disclosure pertains to novel devices, methods, and computer readable media for performing “blind” color defringing on images. In one embodiment, the blind defringing process begins with blind color edge alignment. This process largely cancels every kind of fringe, except for axial chromatic aberration. Next, the process looks at the edges and computes natural high and low colors on either side of the edge, attempting to obtain new pixel colors that are not contaminated by the fringe color. Next, the process resolves the pixel's estimated new color by interpolating between the low and high colors, based on the green variation across the edge and the amount of green in the pixel being repaired. Care is taken to prevent artifacts in areas that generally do not fringe, such as red-black boundaries and skin tones. Finally, the process computes the final repaired color by luminance-scaling the new pixel color estimate.
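The sketch below illustrates the repair of a single pixel on a scanline under strong simplifying assumptions: the indices of the uncontaminated "low" and "high" samples are taken as given, the edge-alignment step and the red-black/skin-tone safeguards are omitted, and Rec. 601 luma weights are assumed for the luminance scaling.

```python
import numpy as np

def defringe_pixel(row, x, lo_x, hi_x):
    """Repair pixel x on an RGB scanline (shape W x 3), given indices of the
    natural 'low' and 'high' colors on either side of the edge. A 1-D
    illustrative sketch of the interpolation and luminance-scaling steps."""
    row = np.asarray(row, dtype=np.float32)
    lo, hi = row[lo_x], row[hi_x]            # natural colors beside the edge
    p = row[x]                               # contaminated pixel being repaired

    # Interpolation weight from the green variation across the edge and the
    # amount of green in the pixel being repaired.
    g_lo, g_hi, g_p = lo[1], hi[1], p[1]
    t = 0.5 if g_hi == g_lo else float(np.clip((g_p - g_lo) / (g_hi - g_lo), 0.0, 1.0))
    estimate = (1.0 - t) * lo + t * hi       # estimated new, fringe-free color

    # Final repaired color: scale the estimate to preserve the pixel's luminance.
    def luma(c):
        return 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2]
    scale = luma(p) / max(luma(estimate), 1e-6)
    return estimate * scale
```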
Abstract:
The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
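The sketch below shows how a display-aligned frame might be derived from two of the named sensors (accelerometer and compass) and how a scene point could then be projected toward the user's eye onto the display plane; the sign conventions, the omission of gyrometer/GPS fusion and eye tracking, and the function names are assumptions made for illustration.

```python
import numpy as np

def device_frame(gravity, magnetic):
    """Build an orthonormal X/Y/Z frame for the device from accelerometer
    (gravity) and compass (magnetic field) readings. Assumes the reported
    gravity vector points toward the ground; gyrometer/GPS fusion and
    real-time tracking are omitted from this sketch."""
    z = -np.asarray(gravity, dtype=float)          # display normal (Z points out of the screen)
    z /= np.linalg.norm(z)
    east = np.cross(np.asarray(magnetic, dtype=float), z)
    x = east / np.linalg.norm(east)                # display X
    y = np.cross(z, x)                             # display Y completes the right-handed frame
    return x, y, z

def project_to_display(point, eye, x, y, z, display_origin):
    """Project a 3-D scene point onto the display plane along the ray toward
    the eye, giving the on-screen position used for the virtual-3D effect."""
    d = np.asarray(point, dtype=float) - eye
    # Intersect the ray eye -> point with the plane through display_origin
    # whose normal is the display Z-vector.
    t = np.dot(display_origin - eye, z) / np.dot(d, z)
    hit = eye + t * d
    rel = hit - display_origin
    return np.dot(rel, x), np.dot(rel, y)          # 2-D display coordinates
```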
Abstract:
The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
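As a complement to the frame-of-reference sketch above, the following shows one way the user's head position might be estimated from a detected face in a front-facing camera image using a pinhole model; the focal length, the average face width, and the existence of a separate face detector are all illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def head_position_from_face(face_box_px, frame_size_px, focal_px,
                            avg_face_width_m=0.16):
    """Estimate the head position in camera coordinates from a detected face
    bounding box (x, y, w, h in pixels), using a pinhole camera model and an
    assumed average face width. The detector itself (optical/IR) is outside
    this sketch."""
    x0, y0, w, h = face_box_px
    cx, cy = frame_size_px[0] / 2.0, frame_size_px[1] / 2.0
    # Depth from apparent size: w_px = focal_px * real_width / depth.
    depth = focal_px * avg_face_width_m / w
    # Back-project the face center into 3-D camera coordinates.
    u, v = x0 + w / 2.0 - cx, y0 + h / 2.0 - cy
    return np.array([u * depth / focal_px, v * depth / focal_px, depth])
```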
Abstract:
Disclosed are a system and method for computing a picture. Instead of loading an image file from memory, the present invention provides a system and method for opening and retaining a procedural recipe and a small set of instructions that can be executed to compute the picture. The picture can be computed very quickly using a GPU (graphics processing unit) and can be made to move on demand. When a part of the image is needed for compositing, that part is computed into a temporary VRAM buffer on the GPU, using the procedural recipe and a specially written fragment program. After it is computed and composited, the buffer containing the result of the fragment program may be discarded.
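A rough sketch of this compute-on-demand flow is shown below, with a Python function standing in for the compiled fragment program and a NumPy array standing in for the temporary VRAM buffer; the particular "recipe", the max-blend composite, and the region interface are illustrative assumptions only.

```python
import numpy as np

def recipe(xx, yy, t):
    """A tiny procedural 'recipe': a radial gradient whose center moves with
    time t. In the disclosed system this would drive a GPU fragment program;
    here it is evaluated directly as a stand-in."""
    cx, cy = 0.5 + 0.25 * np.cos(t), 0.5 + 0.25 * np.sin(t)
    return np.clip(1.0 - np.hypot(xx - cx, yy - cy), 0.0, 1.0)

def composite_region(dest, region, t):
    """Compute only the requested region into a temporary buffer, composite
    it into dest (a simple max blend here), then discard the buffer."""
    x0, y0, w, h = region
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w].astype(np.float32)
    xx, yy = xs / dest.shape[1], ys / dest.shape[0]   # normalized coordinates
    tmp = recipe(xx, yy, t)                           # temporary 'VRAM' buffer
    dest[y0:y0 + h, x0:x0 + w] = np.maximum(dest[y0:y0 + h, x0:x0 + w], tmp)
    del tmp                                           # result discarded after compositing
    return dest

canvas = np.zeros((256, 256), dtype=np.float32)
composite_region(canvas, (64, 64, 128, 128), t=1.0)
```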