Personalized eye openness estimation

    Publication Number: US11227156B2

    Publication Date: 2022-01-18

    Application Number: US16239352

    Filing Date: 2019-01-03

    Abstract: Methods, systems, and devices for personalized (e.g., user specific) eye openness estimation are described. A network model (e.g., a convolutional neural network) may be trained using a set of synthetic eye openness image data (e.g., synthetic face images with known degrees or percentages of eye openness) and a set of real eye openness image data (e.g., facial images of real persons that are annotated as either open eyed or closed eyed). A device may estimate, using the network model, a multi-stage eye openness level (e.g., a percentage or degree to which an eye is open) of a user based on captured real time eye openness image data. The degree of eye openness estimated by the network model may then be compared to an eye size of the user (e.g., a user specific maximum eye size), and a user specific eye openness level may be estimated based on the comparison.
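
    The final personalization step described above is essentially a normalization against a per-user maximum eye size. Below is a minimal Python sketch of that idea only (not the patented network or training procedure); the class name, the pixel-based eye-size measure, and the calibration loop are assumptions for illustration.

```python
# Minimal sketch (not the patented method): convert a model's raw eye-openness
# estimate into a user-specific level by normalizing against a running
# per-user maximum eye size. All names and values are illustrative.

class PersonalizedEyeOpenness:
    def __init__(self):
        self.user_max_eye_size = None  # learned per user over time

    def update_user_max(self, observed_eye_size_px: float) -> None:
        """Track the largest eye opening seen for this user (in pixels)."""
        if self.user_max_eye_size is None or observed_eye_size_px > self.user_max_eye_size:
            self.user_max_eye_size = observed_eye_size_px

    def personalize(self, model_openness_px: float) -> float:
        """Map the network's estimated opening (pixels) to a 0-100% level
        relative to this user's own maximum eye size."""
        if not self.user_max_eye_size:
            return 0.0
        return min(100.0, 100.0 * model_openness_px / self.user_max_eye_size)


if __name__ == "__main__":
    estimator = PersonalizedEyeOpenness()
    for size in (8.0, 11.5, 12.0):       # calibration frames
        estimator.update_user_max(size)
    print(estimator.personalize(6.0))    # ~50% open for this particular user
```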

    COORDINATED MULTI-VIEWPOINT IMAGE CAPTURE

    Publication Number: US20220006945A1

    Publication Date: 2022-01-06

    Application Number: US16920198

    Filing Date: 2020-07-02

    Abstract: Various embodiments may include methods and systems for configuring synchronous multi-viewpoint photography. Various embodiments may include displaying preview images on initiating and responding devices. Various embodiments may include determining an adjustment to the orientation of a responding device based on the preview images. Various embodiments may include transmitting an instruction configured to enable the responding device to display a notification for adjusting the position or the orientation of the responding device based at least on the adjustment. Various embodiments may include transmitting, to the responding device, a second instruction to enable the responding device to capture a second image at approximately the same time as the initiating device captures a first image. Embodiments further include capturing, via a camera, the first image, receiving, from the responding device, a second image, and generating an image file based on the first image and the second image.
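
    The coordination the abstract describes amounts to two messages from the initiating device: an orientation-adjustment notification derived from the exchanged previews, and a capture instruction that both devices act on at roughly the same time. The sketch below illustrates that flow under assumed message types (PreviewMetadata, AdjustInstruction, CaptureInstruction) and an assumed target yaw baseline; it is not the claimed protocol.

```python
# Minimal sketch (illustrative only): an initiating device compares preview
# metadata from a responding device, asks it to adjust its orientation, then
# schedules a near-simultaneous capture. Message names and fields are assumed.

import time
from dataclasses import dataclass


@dataclass
class PreviewMetadata:
    yaw_deg: float      # responding device's reported orientation
    pitch_deg: float


@dataclass
class AdjustInstruction:
    delta_yaw_deg: float
    delta_pitch_deg: float


@dataclass
class CaptureInstruction:
    capture_at_epoch_s: float  # shared wall-clock target for both shutters


def plan_adjustment(initiator: PreviewMetadata, responder: PreviewMetadata,
                    target_baseline_yaw_deg: float = 15.0) -> AdjustInstruction:
    """Ask the responder to rotate so the two viewpoints differ by a target yaw."""
    desired_yaw = initiator.yaw_deg + target_baseline_yaw_deg
    return AdjustInstruction(
        delta_yaw_deg=desired_yaw - responder.yaw_deg,
        delta_pitch_deg=initiator.pitch_deg - responder.pitch_deg,
    )


def schedule_capture(lead_time_s: float = 0.5) -> CaptureInstruction:
    """Both devices fire at roughly the same wall-clock instant."""
    return CaptureInstruction(capture_at_epoch_s=time.time() + lead_time_s)


if __name__ == "__main__":
    instr = plan_adjustment(PreviewMetadata(0.0, 2.0), PreviewMetadata(-5.0, 0.0))
    print(instr, schedule_capture())
```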

    User adaptation for biometric authentication

    Publication Number: US11216541B2

    Publication Date: 2022-01-04

    Application Number: US16125360

    Filing Date: 2018-09-07

    Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
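
    The key mechanism is the pair of thresholds: scores above the authentication threshold unlock the device, and scores that fall between the authentication threshold and the higher learning threshold additionally enroll a new template. A toy Python sketch follows; the cosine-similarity scoring and the numeric threshold values are assumptions, not the patented matcher.

```python
# Minimal sketch (assumptions only): a two-threshold policy in which a match
# score above the authentication threshold authenticates the person, and
# scores in the band between the authentication and learning thresholds also
# trigger enrollment of a new template. Scoring is a toy cosine similarity.

import math
from typing import List

AUTH_THRESHOLD = 0.80      # illustrative values
LEARNING_THRESHOLD = 0.95  # must be greater than AUTH_THRESHOLD


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def authenticate(input_features: List[float], templates: List[List[float]]) -> bool:
    score = max(cosine_similarity(input_features, t) for t in templates)
    if score <= AUTH_THRESHOLD:
        return False                      # reject
    if score < LEARNING_THRESHOLD:
        templates.append(input_features)  # adapt: save a new template
    return True                           # authenticated either way


if __name__ == "__main__":
    stored = [[0.9, 0.1, 0.3]]
    # Score lands in the adaptation band: authenticated and a new template saved.
    print(authenticate([0.5, 0.5, 0.3], stored), len(stored))
```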

    Systems and methods for facial liveness detection

    Publication Number: US11048953B2

    Publication Date: 2021-06-29

    Application Number: US16641469

    Filing Date: 2017-12-01

    Abstract: A method performed by an electronic device is described. The method includes receiving an image. The image depicts a face. The method also includes detecting at least one facial landmark of the face in the image. The method further includes receiving a depth image of the face and determining at least one landmark depth by mapping the at least one facial landmark to the depth image. The method also includes determining a plurality of scales of depth image pixels based on the at least one landmark depth and determining a scale smoothness measure for each of the plurality of scales of depth image pixels. The method additionally includes determining facial liveness based on at least two of the scale smoothness measures. Determining the facial liveness may be based on a depth-adaptive smoothness threshold and/or may be based on a natural face size criterion.
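
    A rough intuition for the smoothness test: a spoof presented on paper or a screen is nearly planar, so depth patches around a facial landmark look smooth at every scale, whereas a real face shows 3-D relief. The sketch below captures that intuition only; the patch-standard-deviation measure, the scale sizes, and the thresholds are illustrative assumptions rather than the claimed scale smoothness measure.

```python
# Minimal sketch (illustrative, not the claimed algorithm): compute a simple
# smoothness measure over depth-image patches at several scales centred on a
# facial landmark, then call the face "live" only if enough scales show real
# 3-D relief. Thresholds and scale sizes are assumptions.

import numpy as np


def patch_smoothness(depth: np.ndarray, cx: int, cy: int, half: int) -> float:
    """Lower values mean flatter (smoother) depth within the patch."""
    patch = depth[max(0, cy - half):cy + half + 1, max(0, cx - half):cx + half + 1]
    return float(np.std(patch))


def is_live(depth: np.ndarray, landmark_xy: tuple, scales=(4, 8, 16),
            relief_mm: float = 5.0, min_scales: int = 2) -> bool:
    cx, cy = landmark_xy
    measures = [patch_smoothness(depth, cx, cy, s) for s in scales]
    # Require at least `min_scales` scales to show depth variation above threshold.
    return sum(m > relief_mm for m in measures) >= min_scales


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.full((100, 100), 400.0)                # photo held at 40 cm
    bumpy = 400.0 + 20.0 * rng.random((100, 100))    # crude "real face" relief
    print(is_live(flat, (50, 50)), is_live(bumpy, (50, 50)))
```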

    Circular fisheye video in virtual reality

    Publication Number: US10979691B2

    Publication Date: 2021-04-13

    Application Number: US15495730

    Filing Date: 2017-04-24

    Abstract: Provided are systems, methods, and computer-readable medium for including parameters that describe fisheye images in a 360-degree video with the 360-degree video. The 360-degree video can then be stored and/or transmitted as captured by the omnidirectional camera, without transforming the fisheye images into some other format. The parameters can later be used to map the fisheye images to an intermediate format, such as an equirectangular format. The intermediate format can be used to store, transmit, and/or display the 360-degree video. The parameters can alternatively or additionally be used to map the fisheye images directly to a format that can be displayed in a 360-degree video presentation, such as a spherical format.
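
    Mapping a circular fisheye image to the equirectangular intermediate format needs only the kind of parameters the abstract says are carried with the video, such as the circle centre, its radius, and the lens field of view. The sketch below assumes an equidistant lens model and nearest-neighbour sampling; real 360-degree pipelines typically use richer lens models and interpolation.

```python
# Minimal sketch (assumptions only): map one circular fisheye image to an
# equirectangular panorama from its circle centre, radius, and field of view,
# under an assumed equidistant lens model.

import numpy as np


def fisheye_to_equirect(fisheye: np.ndarray, cx: float, cy: float,
                        radius: float, fov_rad: float,
                        out_w: int = 1024, out_h: int = 512) -> np.ndarray:
    # Longitude/latitude grid for every output pixel.
    lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit ray for each (lon, lat); the camera looks along +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))    # angle from the optical axis
    r = radius * theta / (fov_rad / 2.0)        # equidistant projection
    az = np.arctan2(y, x)

    u = np.clip(cx + r * np.cos(az), 0, fisheye.shape[1] - 1).astype(int)
    v = np.clip(cy + r * np.sin(az), 0, fisheye.shape[0] - 1).astype(int)

    out = fisheye[v, u]
    out[theta > fov_rad / 2.0] = 0              # rays outside the fisheye circle
    return out


if __name__ == "__main__":
    fake = np.random.randint(0, 255, (960, 960, 3), dtype=np.uint8)
    pano = fisheye_to_equirect(fake, cx=480, cy=480, radius=470, fov_rad=np.pi)
    print(pano.shape)
```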

    SYSTEMS AND METHODS FOR FACIAL LIVENESS DETECTION

    Publication Number: US20210049391A1

    Publication Date: 2021-02-18

    Application Number: US16641469

    Filing Date: 2017-12-01

    Abstract: A method performed by an electronic device is described. The method includes receiving an image. The image depicts a face. The method also includes detecting at least one facial landmark of the face in the image. The method further includes receiving a depth image of the face and determining at least one landmark depth by mapping the at least one facial landmark to the depth image. The method also includes determining a plurality of scales of depth image pixels based on the at least one landmark depth and determining a scale smoothness measure for each of the plurality of scales of depth image pixels. The method additionally includes determining facial liveness based on at least two of the scale smoothness measures. Determining the facial liveness may be based on a depth-adaptive smoothness threshold and/or may be based on a natural face size criterion.
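
    The abstract also mentions a natural face size criterion. One way to read that check, sketched below under assumed values, is to combine the landmark depth with a pinhole-camera focal length to estimate the face's physical width and reject sizes outside a plausible human range (for example, a small printed photo held close to the sensor); the focal length and size bounds here are illustrative.

```python
# Minimal sketch (assumptions only) of a natural-face-size check: use the
# landmark depth and an assumed focal length to convert the face's pixel width
# into an approximate physical width, then test it against a plausible range.

def physical_face_width_mm(pixel_width: float, depth_mm: float,
                           focal_length_px: float) -> float:
    """Pinhole-camera estimate of real-world width from image width and depth."""
    return pixel_width * depth_mm / focal_length_px


def passes_natural_size(pixel_width: float, depth_mm: float,
                        focal_length_px: float = 1000.0,
                        min_mm: float = 100.0, max_mm: float = 220.0) -> bool:
    width_mm = physical_face_width_mm(pixel_width, depth_mm, focal_length_px)
    return min_mm <= width_mm <= max_mm


if __name__ == "__main__":
    print(passes_natural_size(pixel_width=300, depth_mm=500))   # ~150 mm: plausible
    print(passes_natural_size(pixel_width=300, depth_mm=150))   # ~45 mm: too small
```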

    Methods and systems of determining object status for false positive removal in object tracking for video analytics

    Publication Number: US10402987B2

    Publication Date: 2019-09-03

    Application Number: US15973090

    Filing Date: 2018-05-07

    Abstract: Techniques and systems are provided for maintaining blob trackers for one or more video frames. For example, a blob tracker can be identified for a current video frame. The blob tracker is associated with a blob detected for the current video frame, and the blob includes pixels of at least a portion of one or more objects in the current video frame. One or more characteristics of the blob tracker are determined. The one or more characteristics are based on a bounding region history of the blob tracker. A confidence value is determined for the blob tracker based on the determined one or more characteristics, and a status of the blob tracker is determined based on the determined confidence value. The status of the blob tracker indicates whether to maintain the blob tracker for the one or more video frames. For example, the determined status can include a first type of blob tracker that is output as an identified blob tracker-blob pair, a second type of blob tracker that is maintained for further analysis, or a third type of blob tracker that is removed from a plurality of blob trackers maintained for the one or more video frames.
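
    The status decision reduces to computing a confidence value from the tracker's bounding-region history and bucketing it into the three outcomes the abstract lists. The sketch below uses two assumed characteristics (persistence and box-size stability) and assumed thresholds; the actual characteristics and confidence formula are defined by the claims, not by this example.

```python
# Minimal sketch (illustrative only): derive a confidence value for a blob
# tracker from simple characteristics of its bounding-box history, then map
# the confidence to one of three statuses: output, keep for analysis, remove.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height


class TrackerStatus(Enum):
    OUTPUT = "output as tracker-blob pair"
    PENDING = "keep for further analysis"
    REMOVE = "remove from maintained trackers"


@dataclass
class BlobTracker:
    history: List[Box] = field(default_factory=list)

    def confidence(self) -> float:
        if len(self.history) < 2:
            return 0.0
        # Characteristic 1: how long the tracker has persisted.
        persistence = min(1.0, len(self.history) / 30.0)
        # Characteristic 2: how stable the box area has been frame to frame.
        areas = [w * h for (_, _, w, h) in self.history]
        stability = min(areas) / max(1, max(areas))
        return 0.5 * persistence + 0.5 * stability

    def status(self) -> TrackerStatus:
        c = self.confidence()
        if c >= 0.6:
            return TrackerStatus.OUTPUT
        if c >= 0.2:
            return TrackerStatus.PENDING
        return TrackerStatus.REMOVE


if __name__ == "__main__":
    t = BlobTracker(history=[(10, 10, 40, 80)] * 30)
    print(t.confidence(), t.status())
```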

    Methods and systems of generating a background picture for video coding

    Publication Number: US10375399B2

    Publication Date: 2019-08-06

    Application Number: US15134183

    Filing Date: 2016-04-20

    Abstract: Techniques and systems are provided for generating a background picture. The background picture can be used for coding one or more pictures. For example, a method of generating a background picture includes generating a long-term background model for one or more pixels of a background picture. The long-term background model includes a statistical model for detecting long-term motion of the one or more pixels in a sequence of pictures. The method further includes generating a short-term background model for the one or more pixels of the background picture. The short-term background model detects short-term motion of the one or more pixels between two or more pictures. The method further includes determining a value for the one or more pixels of the background picture using the long-term background model and the short-term background model.
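
    The two models play different roles: the long-term statistical model captures slowly varying scene appearance, while the short-term model reacts to motion between consecutive frames, and the background picture is filled in only where both agree a pixel is static. The Python sketch below illustrates one such combination, using an exponentially weighted mean/variance long-term model and frame differencing as the short-term model; the learning rate and thresholds are assumptions, not the claimed method.

```python
# Minimal sketch (assumptions only): combine a long-term per-pixel statistical
# model (running mean/variance) with a short-term model (difference against
# the previous frame), and update the background picture only where both
# models agree the pixel is static.

import numpy as np


class BackgroundPictureBuilder:
    def __init__(self, shape, long_lr=0.01, short_thresh=12.0, var_thresh=200.0):
        self.mean = np.zeros(shape, dtype=np.float32)        # long-term model
        self.var = np.full(shape, 255.0, dtype=np.float32)
        self.prev = None                                      # short-term model
        self.background = np.zeros(shape, dtype=np.float32)   # output picture
        self.long_lr, self.short_thresh, self.var_thresh = long_lr, short_thresh, var_thresh

    def update(self, frame: np.ndarray) -> np.ndarray:
        f = frame.astype(np.float32)
        if self.prev is None:
            self.mean = f.copy()                              # seed with first frame
            short_static = np.ones(f.shape, dtype=bool)
        else:
            short_static = np.abs(f - self.prev) < self.short_thresh
        # Long-term: exponentially weighted mean and variance per pixel.
        diff = f - self.mean
        self.mean += self.long_lr * diff
        self.var = (1 - self.long_lr) * self.var + self.long_lr * diff ** 2
        self.prev = f
        # Fill in the background picture only where both models see no motion.
        static = short_static & (self.var < self.var_thresh)
        self.background[static] = self.mean[static]
        return self.background.astype(np.uint8)


if __name__ == "__main__":
    builder = BackgroundPictureBuilder((120, 160))
    for _ in range(50):
        bg = builder.update(np.full((120, 160), 128, dtype=np.uint8))
    print(bg.mean())
```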
