Abstract:
The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify instructions for a vertex shader based on a viewing angle. When executed, the modified vertex shader may generate vertex coordinates for a stereoscopic view based on the viewing angle.
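The kind of per-eye transform such a modified shader might apply can be sketched in a few lines. The formula below (a horizontal clip-space shift whose sign depends on the eye and whose magnitude scales with the viewing angle) is an illustrative assumption, not the disclosed shader code; the function name, `separation` constant, and angle scaling are all hypothetical.

```python
import math

def stereo_vertex(x, y, z, w, eye, viewing_angle_deg, separation=0.06):
    """Shift a mono clip-space vertex left or right per eye, with the
    shift scaled by the viewing angle. Constants and the exact formula
    are assumptions for illustration only."""
    direction = -1.0 if eye == "left" else 1.0
    shift = direction * 0.5 * separation * math.cos(math.radians(viewing_angle_deg))
    # Clip-space x is divided by w later, so scale the offset by w.
    return (x + shift * w, y, z, w)
```

Running the same mono vertex through this transform twice, once per eye, yields the two slightly offset views that make up the stereoscopic pair.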
Abstract:
A device includes a sensor configured to determine an angle of a longitudinal extent of the device with respect to a ground surface. The device also includes an estimator configured to estimate a first distance and to estimate a second distance based on the angle and the first distance. The first distance is associated with a first projection from a center of the device to the ground surface. The first projection is perpendicular to the longitudinal extent of the device. The second distance is associated with a second projection from the center of the device to the ground surface. The second projection is perpendicular to the ground surface.
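Because the first projection is perpendicular to the device's longitudinal axis and the second is perpendicular to the ground, the two rays differ by exactly the tilt angle, giving the planar relationship d2 = d1 · cos(angle). A minimal sketch of that estimation, with an illustrative function name and interface:

```python
import math

def estimate_height(d1: float, angle_deg: float) -> float:
    """Estimate the vertical distance d2 from the device center to the
    ground, given the distance d1 measured along a ray perpendicular to
    the device's longitudinal axis and the tilt angle of that axis with
    respect to the ground. Assumes simple planar geometry:
    d2 = d1 * cos(angle)."""
    return d1 * math.cos(math.radians(angle_deg))
```

For example, with the device tilted 60 degrees and a perpendicular reading of 1.0 m, the estimated height above the ground is 0.5 m.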
Abstract:
A method includes accessing, at a computing device, data descriptive of a graph representing a program. The graph includes multiple nodes representing execution steps of the program and includes multiple edges representing data transfer steps. The method also includes determining at least two heterogeneous hardware resources of the computing device that are available to execute code represented by one or more of the nodes, and determining one or more paths from a source node to a sink node based on a topology of the graph. The method further includes scheduling execution of code at the at least two heterogeneous hardware resources. The code is represented by at least one of the multiple nodes, and the execution of the code is scheduled based on the one or more paths.
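A toy version of the path-based scheduling step can be sketched as follows. The policy shown, placing the longest (critical) path on one resource and everything else on a second, is an assumption chosen for illustration; the abstract does not specify the scheduling policy, and the graph encoding and function names are hypothetical.

```python
def paths(graph, source, sink):
    """Enumerate all source-to-sink paths in a DAG given as an
    adjacency dict {node: [successors]}."""
    if source == sink:
        return [[sink]]
    result = []
    for nxt in graph.get(source, []):
        for tail in paths(graph, nxt, sink):
            result.append([source] + tail)
    return result

def schedule(graph, source, sink, resources):
    """Toy path-based scheduler: nodes on the longest (critical) path are
    assigned to the first resource, remaining nodes to the second."""
    critical = max(paths(graph, source, sink), key=len)
    all_nodes = set(graph) | {n for succ in graph.values() for n in succ}
    return {node: resources[0] if node in critical else resources[1]
            for node in all_nodes}
```

On a diamond-shaped graph with one long branch and one short branch, the long branch lands on the first resource and the short branch on the second, so the two heterogeneous resources can execute in parallel.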
Abstract:
Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations.
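One simple way to focus computation on higher-weight pixels is to gate and scale contributions by weight. The statistic below (a weighted mean over pixels whose weight passes a floor) and the `weight_floor` parameter are assumptions for illustration, not the claimed processing:

```python
import numpy as np

def weighted_mean_intensity(pixels, weights, weight_floor=0.5):
    """Context-adaptive sketch: pixels whose weight falls below the floor
    are skipped entirely, and each remaining pixel contributes in
    proportion to its weight."""
    pixels = np.asarray(pixels, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mask = weights >= weight_floor  # skip low-relevance pixels
    if not mask.any():
        return 0.0
    return float(np.average(pixels[mask], weights=weights[mask]))
```

Skipping the low-weight pixels saves the per-pixel computation entirely, which is where the resource savings described above come from.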
Abstract:
Methods, systems, computer-readable media, and apparatuses for image-based status determination are presented. In some embodiments, a method includes capturing at least one image of a moving path. At least one feature within the at least one image is analyzed, and based on the analysis of the at least one feature, a direction of movement of the moving path is determined. In some embodiments, a method includes capturing an image of an inclined path. At least one feature within the image is analyzed, and based on the analysis of the at least one feature, a determination is made whether the image was captured from a top position or a bottom position relative to the inclined path.
Abstract:
Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
Abstract:
Systems and techniques are described for image processing. An imaging system receives an identity image and an attribute image. The identity image depicts a first person having an identity. The attribute image depicts a second person having an attribute, such as a facial feature, an accessory worn by the second person, and/or an expression. The imaging system uses trained machine learning model(s) to generate a combined image based on the identity image and the attribute image. The combined image depicts a virtual person having both the identity of the first person and the attribute of the second person. The imaging system outputs the combined image, for instance by displaying the combined image or sending the combined image to a receiving device. In some examples, the imaging system updates the trained machine learning model(s) based on the combined image.
Abstract:
Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
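The dual-threshold decision described above can be sketched directly. The threshold values below are illustrative; the abstract only requires that the learning threshold be greater than the authentication threshold, and the function name and return shape are assumptions:

```python
def authenticate(similarity, auth_threshold=0.8, learning_threshold=0.95):
    """Dual-threshold matcher sketch.
    Returns (authenticated, save_new_template)."""
    authenticated = similarity > auth_threshold
    # Save a new template only for scores that pass authentication but
    # are still below the learning threshold: very high scores mean the
    # input closely matches an existing template, so a new one adds
    # little information.
    save_new_template = auth_threshold < similarity < learning_threshold
    return authenticated, save_new_template
```

A score of 0.9 authenticates the user and enrolls a new template; a score of 0.97 authenticates without enrolling; a score of 0.5 does neither.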
Abstract:
Systems and techniques are provided for facial expression recognition. In some examples, a system receives an image frame corresponding to a face of a person. The system also determines, based on a three-dimensional model of the face, landmark feature information associated with landmark features of the face. The system then inputs, to at least one layer of a neural network trained for facial expression recognition, the image frame and the landmark feature information. The system further determines, using the neural network, a facial expression associated with the face.
Abstract:
A method for picture processing is described. A first tracking area is obtained. A second tracking area is also obtained. The method includes beginning to track the first tracking area and the second tracking area. Picture processing is performed once the portion of the first tracking area that overlaps the second tracking area passes a threshold.
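The overlap test above can be sketched with axis-aligned rectangles, measuring the fraction of the first tracking area covered by the second. Rectangular areas, the `(x1, y1, x2, y2)` encoding, and the default threshold are assumptions for illustration:

```python
def overlap_fraction(a, b):
    """Fraction of box a covered by box b; boxes are (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (ix * iy) / area_a if area_a > 0 else 0.0

def should_trigger(a, b, threshold=0.5):
    """Trigger picture processing once the overlapping portion of the
    first tracking area passes the threshold."""
    return overlap_fraction(a, b) > threshold
```

For example, two 2x2 boxes offset by half their width overlap by exactly 0.5, so processing triggers at a 0.4 threshold but not at 0.5.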