-
Publication No.: US10828549B2
Publication Date: 2020-11-10
Application No.: US15574109
Filing Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen , Lidan Zhang , Xiaofeng Tong , Fucen Zeng
Abstract: Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
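The timestamp-based synchronization step in this abstract can be illustrated with a minimal sketch: the sensor sample marking a key stage (here, simply the peak-magnitude sample) supplies the timestamp used to pick the matching video frame. All function and field names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of key-frame selection via sensor timestamps.

def find_key_stage(sensor_samples):
    """Pick the sensor sample with peak magnitude as the key stage."""
    return max(sensor_samples, key=lambda s: s["magnitude"])

def select_key_frame(frames, key_timestamp):
    """Select the video frame whose timestamp is closest to the key stage."""
    return min(frames, key=lambda f: abs(f["timestamp"] - key_timestamp))

sensor_samples = [
    {"timestamp": 0.0, "magnitude": 1.2},
    {"timestamp": 0.5, "magnitude": 9.8},  # e.g. the peak of a swing
    {"timestamp": 1.0, "magnitude": 2.1},
]
frames = [{"timestamp": t / 10} for t in range(11)]  # 10 fps video

key_stage = find_key_stage(sensor_samples)
key_frame = select_key_frame(frames, key_stage["timestamp"])
```

The selected key frame would then feed the skeletal-map and instructional-data steps the abstract describes.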
-
Publication No.: US10803157B2
Publication Date: 2020-10-13
Application No.: US14911390
Filing Date: 2015-03-28
Applicant: Intel Corporation
Inventor: Wenlong Li , Xiaolu Shen , Lidan Zhang , Jose E. Lorenzo , Qiang Li , Steven Holmes , Xiaofeng Tong , Yangzhou Du , Mary Smiley , Alok Mishra
IPC: G06F3/048 , G06F21/32 , H04L29/06 , H04L9/32 , H04W12/06 , G06F21/30 , G06F3/01 , G06F21/36 , G06K9/00 , G06T19/00
Abstract: A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
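The authentication flow in this abstract can be sketched in a few lines: a gesture is selected from a database, the user is prompted to perform it, and the performed gesture is compared against the selection. The avatar rendering and image capture are stubbed out, and all names here are illustrative assumptions rather than the patent's terminology.

```python
import random

# Minimal sketch of gesture-based authentication by gesture matching.

GESTURE_DB = ["wave", "thumbs_up", "circle", "swipe_left"]

def select_gesture(db, rng=random):
    """Select a gesture from the database for this authentication attempt."""
    return rng.choice(db)

def authenticate(selected_gesture, performed_gesture):
    """Return True when the captured gesture matches the prompt."""
    return performed_gesture == selected_gesture

selected = select_gesture(GESTURE_DB)
# ...render `selected` as an animated avatar, prompt the user,
# capture a real-time image, and classify the performed gesture...
assert authenticate(selected, selected) is True
```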
-
Publication No.: US11383144B2
Publication Date: 2022-07-12
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen , Lidan Zhang , Xiaofeng Tong , Fucen Zeng
Abstract: Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
-
Publication No.: US20220138555A1
Publication Date: 2022-05-05
Application No.: US17088328
Filing Date: 2020-11-03
Applicant: Intel Corporation
Inventor: Lidan Zhang , Lei Zhu , Qi She , Ping Guo
IPC: G06N3/08
Abstract: Example methods, apparatus, and articles of manufacture corresponding to a spectral nonlocal block have been disclosed. An example apparatus includes a first convolution filter to perform a first convolution using input features and first weighted kernels to generate first weighted input features, the input features corresponding to data of a neural network; an affinity matrix generator to: perform a second convolution using the input features and second weighted kernels to generate second weighted input features; perform a third convolution using the input features and third weighted kernels to generate third weighted input features; and generate an affinity matrix based on the second and third weighted input features; a second convolution filter to perform a fourth convolution using the first weighted input features and fourth weighted kernels to generate fourth weighted input features; and an accumulator to transmit output features corresponding to a spectral nonlocal operator.
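The data flow in this abstract can be sketched with plain matrix products standing in for the 1x1 convolutions: three projections of the input, an affinity matrix built from the second and third, a fourth projection of the aggregated features, and a residual accumulation. The shapes and the softmax normalization are assumptions for illustration, not details from the patent.

```python
import numpy as np

# Rough sketch of a spectral-nonlocal-style block's data flow.
rng = np.random.default_rng(0)
N, C = 16, 8                     # N spatial positions, C channels
X = rng.standard_normal((N, C))  # input features

W1, W2, W3, W4 = (rng.standard_normal((C, C)) for _ in range(4))

F1 = X @ W1                      # first weighted input features
F2 = X @ W2                      # second weighted input features
F3 = X @ W3                      # third weighted input features

# Affinity matrix from the second and third weighted features,
# row-normalized with a softmax so each row sums to one.
A = F2 @ F3.T
A = np.exp(A - A.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)

F4 = (A @ F1) @ W4               # fourth convolution on aggregated features
out = X + F4                     # accumulator: residual output features
```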
-
Publication No.: US20210069571A1
Publication Date: 2021-03-11
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen , Lidan Zhang , Xiaofeng Tong , Fucen Zeng
Abstract: Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
-
Publication No.: US20200051306A1
Publication Date: 2020-02-13
Application No.: US16655686
Filing Date: 2019-10-17
Applicant: INTEL CORPORATION
Inventor: Minje Park , Tae-Hoon Kim , Myung-Ho Ju , Jihyeon Yi , Xiaolu Shen , Lidan Zhang , Qiang Li
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.
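The multi-frame idea in this abstract can be illustrated with a toy sketch: expression features from two or more temporally sequential frames are concatenated and mapped to a single blend-shape vector. The linear map below stands in for the learned regressor, and every dimension and name is an illustrative assumption.

```python
import numpy as np

# Toy sketch of a multi-frame regressor: a window of per-frame
# expression features maps to one blend-shape vector.
rng = np.random.default_rng(1)
n_frames, feat_dim, blend_dim = 3, 4, 5

frame_features = rng.standard_normal((n_frames, feat_dim))
W = rng.standard_normal((n_frames * feat_dim, blend_dim))  # learned weights

def multi_frame_regress(features, weights):
    """Flatten a window of per-frame features and apply the regressor."""
    return features.reshape(-1) @ weights

blend_shape = multi_frame_regress(frame_features, W)
```

In a trained system the weights would come from the machine-learning component the abstract describes, fit on animator-authored blend shapes.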
-
Publication No.: US11841935B2
Publication Date: 2023-12-12
Application No.: US17947991
Filing Date: 2022-09-19
Applicant: Intel Corporation
Inventor: Wenlong Li , Xiaolu Shen , Lidan Zhang , Jose E. Lorenzo , Qiang Li , Steven Holmes , Xiaofeng Tong , Yangzhou Du , Mary Smiley , Alok Mishra
IPC: G06F3/048 , G06F21/32 , H04L9/40 , H04L9/32 , G06F21/30 , H04W12/06 , H04W12/065 , H04W12/68 , G06V40/20 , G06F3/01 , G06F21/36 , G06T19/00
CPC classification number: G06F21/32 , G06F3/017 , G06F21/30 , G06F21/36 , G06T19/00 , G06V40/28 , H04L9/32 , H04L63/08 , H04W12/06 , H04W12/065 , H04W12/68
Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
-
Publication No.: US10475225B2
Publication Date: 2019-11-12
Application No.: US15124811
Filing Date: 2015-12-18
Applicant: INTEL CORPORATION
Inventor: Minje Park , Tae-Hoon Kim , Myung-Ho Ju , Jihyeon Yi , Xiaolu Shen , Lidan Zhang , Qiang Li
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.
-
Publication No.: US11978217B2
Publication Date: 2024-05-07
Application No.: US17057084
Filing Date: 2019-01-03
Applicant: Intel Corporation
Inventor: Lidan Zhang , Ping Guo , Haibing Ren , Yimin Zhang
IPC: G06T7/246 , G06F18/22 , G06F18/2413 , G06N20/00 , G06V10/764 , G06V10/82 , G06V20/52
CPC classification number: G06T7/246 , G06F18/22 , G06F18/24133 , G06N20/00 , G06T7/248 , G06V10/764 , G06V10/82 , G06V20/52 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084 , G06V2201/07
Abstract: A long-term object tracker employs a continuous learning framework to overcome drift in the tracking position of a tracked object. The continuous learning framework consists of a continuous learning module that accumulates samples of the tracked object to improve the accuracy of object tracking over extended periods of time. The continuous learning module can include a sample pre-processor to refine a location of a candidate object found during object tracking, and a cropper to crop a portion of a frame containing a tracked object as a sample and to insert the sample into a continuous learning database to support future tracking.
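The continuous-learning loop in this abstract can be sketched briefly: a candidate location is refined, the object patch is cropped from the frame, and the sample is inserted into a growing database for future tracking. The refinement step (snapping box coordinates to whole pixels) and all names are illustrative stand-ins, not the patent's method.

```python
# Illustrative sketch of sample accumulation for continuous learning.

def refine_location(candidate_box):
    """Pre-processor stand-in: round box coordinates to whole pixels."""
    return tuple(round(v) for v in candidate_box)

def crop(frame, box):
    """Crop (x0, y0, x1, y1) from a frame given as a list of pixel rows."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

sample_db = []  # stand-in for the continuous learning database
frame = [[c + 10 * r for c in range(10)] for r in range(10)]
box = refine_location((2.4, 1.6, 5.2, 3.9))
sample_db.append(crop(frame, box))
```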
-
Publication No.: US20230410487A1
Publication Date: 2023-12-21
Application No.: US18250498
Filing Date: 2020-11-30
Applicant: Intel Corporation
Inventor: Lidan Zhang , Qi She , Ping Guo , Yimin Zhang
IPC: G06V10/778 , G06V20/40 , G06V10/82 , G06V40/20
Abstract: Performing online learning for a model to detect unseen actions in an action recognition system is disclosed. The method includes extracting semantic features in a semantic domain from semantic action labels, transforming the semantic features from the semantic domain into mixed features in a mixed domain, and storing the mixed features in a feature database. The method further includes extracting visual features in a visual domain from a video stream and determining whether the visual features indicate an unseen action in the video stream. If no unseen action is determined, the method applies an offline classification model to the visual features to identify seen actions, assigns identifiers to the identified seen actions, transforms the visual features from the visual domain into mixed features in the mixed domain, and stores the mixed features and seen action identifiers in the feature database. If an unseen action is determined, the method transforms the visual features from the visual domain into mixed features in the mixed domain, applies a continual learner model to mixed features from the feature database to identify unseen actions in the video stream, assigns identifiers to the identified unseen actions, and stores the unseen action identifiers in the feature database.
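The mixed-domain matching idea in this abstract can be sketched minimally: semantic label features and visual features are both projected into a shared "mixed" space, and an unseen action is identified by its nearest stored mixed feature. The projection, the 2-D vectors, and the nearest-neighbor match are all illustrative assumptions standing in for the learned transforms and the continual learner model.

```python
import math

# Minimal sketch of matching visual features to semantic action
# labels in a shared mixed feature space.

def to_mixed(vec, scale=1.0):
    """Stand-in for the domain transform into the mixed feature space."""
    return [scale * v for v in vec]

def nearest_label(query, feature_db):
    """Return the label of the closest mixed feature in the database."""
    return min(feature_db, key=lambda item: math.dist(item[1], query))[0]

feature_db = [
    ("jump", to_mixed([1.0, 0.0])),   # from a semantic action label
    ("wave", to_mixed([0.0, 1.0])),
]
visual_mixed = to_mixed([0.9, 0.1])   # an unseen action's visual features
label = nearest_label(visual_mixed, feature_db)
```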