Abstract:
Examples are described for overlaying primitives, arranged as concentric circles, in circular images onto respective mesh models to generate rectangular images representative of a 360-degree video or image. Portions of the rectangular images are blended to generate a stitched rectangular image, and image content for display is generated based on the stitched rectangular image.
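The blending step described above can be illustrated as a linear cross-fade over the overlapping portions of two rectangular images. This is a minimal sketch; the function name and the per-column weighting are assumptions, not the source's method:

```python
def blend_overlap(left_strip, right_strip):
    """Cross-fade two overlapping image strips (hypothetical helper).

    Each strip is a list of rows; each row is a list of pixel luma values.
    The blend weight ramps from favoring the left image to favoring the
    right one across the overlap width, hiding the seam between the two
    projected images.
    """
    width = len(left_strip[0])
    blended = []
    for l_row, r_row in zip(left_strip, right_strip):
        row = []
        for x, (l, r) in enumerate(zip(l_row, r_row)):
            w = x / (width - 1) if width > 1 else 0.5  # 0 at left edge, 1 at right
            row.append((1 - w) * l + w * r)
        blended.append(row)
    return blended

# Two 1x3 strips: the left image is uniformly 100, the right uniformly 200.
seam = blend_overlap([[100, 100, 100]], [[200, 200, 200]])
# seam[0] == [100.0, 150.0, 200.0]
```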
Abstract:
Techniques and systems are provided for encoding video data. For example, a method of encoding video data includes obtaining a background picture that is generated based on a plurality of pictures captured by an image sensor. The background picture is generated to include background portions identified in each of the captured pictures. The method further includes encoding, into a video bitstream, a group of pictures captured by the image sensor. The group of pictures includes at least one random access picture. Encoding the group of pictures includes encoding at least a portion of the at least one random access picture using inter-prediction based on the background picture.
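The bit-rate saving comes from predicting the random access picture from the long-term background picture and encoding only the residual. A toy sketch of that idea, with illustrative helper names (real codecs operate on blocks with motion compensation, which this omits):

```python
def inter_predict_residual(frame, background):
    """Residual of a random access picture predicted from a background
    picture (sketch). Encoding the small residual instead of a full
    intra-coded picture is where the bits are saved."""
    return [[p - b for p, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

def reconstruct(residual, background):
    """Decoder side: add the residual back onto the background picture."""
    return [[r + b for r, b in zip(r_row, b_row)]
            for r_row, b_row in zip(residual, background)]

bg = [[10, 10], [10, 10]]
frame = [[10, 12], [9, 10]]
res = inter_predict_residual(frame, bg)   # [[0, 2], [-1, 0]] — mostly zeros
assert reconstruct(res, bg) == frame      # lossless round trip in this sketch
```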
Abstract:
The disclosed technology relates to image-capturing methods. In one aspect, a method includes receiving an image frame comprising a plurality of pixels and subtracting foreground pixels from the image frame to obtain background pixels. The method additionally includes determining an exposure condition for a next image frame based on at least a subset of the background pixels. The method further includes adjusting the foreground pixels such that a difference between a background luma value and a foreground luma value of the next image frame is within a predetermined range. Aspects are also directed to apparatuses configured for the methods.
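The metering idea, exposing for the background rather than for a bright or dark moving foreground, can be sketched as follows. The target luma value and the multiplicative gain model are illustrative assumptions:

```python
def split_background(frame, foreground_mask):
    """Return luma values of the pixels outside the foreground mask."""
    return [p for p, fg in zip(frame, foreground_mask) if not fg]

def exposure_for_next_frame(frame, foreground_mask, target_luma=118):
    """Choose an exposure gain so the *background* sits at a target luma,
    i.e. meter on background pixels only (sketch; gain model assumed)."""
    bg = split_background(frame, foreground_mask)
    mean_bg = sum(bg) / len(bg)
    return target_luma / mean_bg  # multiplicative gain for the next frame

# 1-D toy frame: bright foreground pixels (200) would skew a plain mean.
frame = [60, 60, 200, 200, 60, 60]
mask  = [0, 0, 1, 1, 0, 0]
gain = exposure_for_next_frame(frame, mask, target_luma=120)
# mean background luma is 60, so gain == 2.0
```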
Abstract:
Methods and apparatus for capturing an image using automatic focus are disclosed herein. In one aspect, a method is disclosed that includes communicating, using a camera, with a wireless device via a wireless communication network. The method further includes determining a distance between the camera and the wireless device using the wireless communication network and adjusting a focus of the camera based on the determined distance. The method then includes capturing an image using the adjusted focus of the camera. In some aspects, the method may be performed on a smartphone or digital camera that includes Wi-Fi capability.
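One common way to estimate the camera-to-device distance from a wireless link is from received signal strength. The log-distance path-loss model below is a standard choice but is an assumption here, as are the calibration constants and the lens-position mapping:

```python
def distance_from_rssi(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (metres) from received signal strength using the
    log-distance path-loss model. tx_power_dbm (RSSI at 1 m) and the
    path-loss exponent are illustrative calibration values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def focus_position(distance_m, min_d=0.1, max_d=10.0):
    """Map the estimated distance to a normalized lens position in [0, 1]
    (0 = near focus, 1 = far limit). Purely a sketch of the adjustment."""
    d = min(max(distance_m, min_d), max_d)
    return (d - min_d) / (max_d - min_d)

d = distance_from_rssi(-79)   # 20 dB of extra path loss -> 10.0 m
pos = focus_position(d)       # at the far limit -> 1.0
```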
Abstract:
Apparatus and methods for facial detection are disclosed. A plurality of images of an observed face is received for identification. Based at least on two or more selected images of the plurality of images, a template of the observed face is generated. In some embodiments, the template is a subspace generated based on feature vectors of the plurality of received images. A database of identities and corresponding facial data of known persons is searched based at least on the template of the observed face and the facial data of the known persons. One or more identities of the known persons are selected based at least on the search.
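As an illustration, the multi-image template can be approximated by averaging the per-image feature vectors and searching the database by cosine similarity. The source describes a subspace template; the mean vector here is a deliberate simplification, and all names are hypothetical:

```python
def make_template(feature_vectors):
    """Collapse several per-image feature vectors of the observed face
    into one template (a simple mean, standing in for the subspace)."""
    n = len(feature_vectors)
    return [sum(v[i] for v in feature_vectors) / n
            for i in range(len(feature_vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def search(template, database):
    """Return the known identity whose facial data best matches."""
    return max(database, key=lambda name: cosine(template, database[name]))

tpl = make_template([[1.0, 0.0], [0.8, 0.2]])     # ~[0.9, 0.1]
db = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
best = search(tpl, db)                            # "alice"
```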
Abstract:
A method for three-dimensional face generation is described. An inverse depth map is calculated based on a depth map and an inverted first matrix. The inverted first matrix is generated from two images in which pixels are aligned vertically and differ horizontally. The inverse depth map is normalized to correct for distortions in the depth map caused by image rectification. A three-dimensional face model is generated based on the inverse depth map and one of the two images.
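The inversion-and-normalization step can be sketched as follows. Rescaling to [0, 1] is an assumed form of the normalization, shown on a 1-D depth map for brevity:

```python
def normalized_inverse_depth(depth_map, eps=1e-6):
    """Invert a depth map (inverse depth behaves like disparity) and
    rescale the result to [0, 1] — a sketch of the normalization that
    compensates for rectification-induced distortion."""
    inv = [1.0 / max(d, eps) for d in depth_map]
    lo, hi = min(inv), max(inv)
    if hi == lo:
        return [0.0 for _ in inv]          # flat scene: nothing to rescale
    return [(v - lo) / (hi - lo) for v in inv]

nd = normalized_inverse_depth([1.0, 2.0, 4.0])
# inverse depths 1.0, 0.5, 0.25 normalize to [1.0, 1/3, 0.0]
```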
Abstract:
A method for picture processing is described. A first tracking area is obtained. A second tracking area is also obtained. The method includes beginning to track the first tracking area and the second tracking area. Picture processing is performed once the portion of the first tracking area that overlaps the second tracking area exceeds a threshold.
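The overlap test can be sketched with axis-aligned rectangles: processing triggers once the fraction of the first area covered by the second exceeds the threshold. The rectangle representation and the default threshold are illustrative:

```python
def overlap_fraction(a, b):
    """Fraction of rectangle a = (x, y, w, h) covered by rectangle b."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ox = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    oy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    return (ox * oy) / (aw * ah)

def should_process(first, second, threshold=0.5):
    """Trigger picture processing once the tracked areas overlap enough."""
    return overlap_fraction(first, second) >= threshold

a = (0, 0, 10, 10)
b = (5, 0, 10, 10)       # covers the right half of a
should_process(a, b)     # overlap fraction 0.5 -> True at the default threshold
```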
Abstract:
Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations.
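The weighting idea can be sketched as an estimate that optionally restricts computation to the highest-weight pixels. The function name and the budget parameter are hypothetical:

```python
def weighted_estimate(pixels, weights, budget=None):
    """Weighted average that optionally spends the computation budget on
    the top-weighted pixels only — focusing effort where the weights say
    the pixels are most pertinent."""
    pairs = sorted(zip(weights, pixels), reverse=True)
    if budget is not None:
        pairs = pairs[:budget]            # keep only the top-weight pixels
    wsum = sum(w for w, _ in pairs)
    return sum(w * p for w, p in pairs) / wsum

pixels  = [10, 10, 200, 200]
weights = [0.1, 0.1, 1.0, 1.0]
weighted_estimate(pixels, weights)             # ~182.7: dominated by high weights
weighted_estimate(pixels, weights, budget=2)   # 200.0: only the top-2 pixels used
```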
Abstract:
Methods, systems, and apparatuses are provided to automatically determine whether an image is spoofed. For example, a computing device may obtain an image, and may execute a trained convolutional neural network to ingest elements of the image. Further, and based on the ingested elements of the image, the executed trained convolutional neural network generates an output map that includes a plurality of intensity values. In some examples, the trained convolutional neural network includes a plurality of down sampling layers, a plurality of up sampling layers, and a plurality of joint spatial and channel attention layers. Further, the computing device may determine whether the image is spoofed based on the plurality of intensity values. The computing device may also generate output data based on the determination of whether the image is spoofed, and may store the output data within a data repository.
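A minimal sketch of the final decision stage, assuming the network's output map is reduced to a scalar by a mean and then thresholded. The aggregation rule and the threshold are assumptions, and the convolutional network itself is omitted:

```python
def is_spoofed(output_map, threshold=0.5):
    """Decide live vs. spoof from an output map of per-pixel intensity
    values (sketch: a high mean intensity is read as 'spoof')."""
    values = [v for row in output_map for v in row]
    return sum(values) / len(values) > threshold

live_map  = [[0.1, 0.2], [0.1, 0.0]]   # low intensities throughout
spoof_map = [[0.9, 0.8], [0.7, 0.9]]   # high intensities throughout
is_spoofed(live_map)    # False
is_spoofed(spoof_map)   # True
```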