-
Publication No.: US20240104744A1
Publication Date: 2024-03-28
Application No.: US18492502
Filing Date: 2023-10-23
Applicant: Intel Corporation
Inventor: Qiang Li , Xiaofeng Tong , Yikai Fang , Chen Ling , Wenlong Li
CPC classification number: G06T7/13 , G06F18/251 , G06V10/803 , G06V20/52 , G06V20/64 , G06V40/103 , H04N13/282
Abstract: A mechanism is described for facilitating real-time multi-view detection of objects in multi-camera environments, according to one embodiment. A method of embodiments, as described herein, includes mapping first lines associated with objects to a ground plane; and forming clusters of second lines corresponding to the first lines such that an intersection point in a cluster represents a position of an object on the ground plane.
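As an illustration of the idea in the abstract (not the claimed implementation), the ground-plane step can be sketched in a few lines: intersect the mapped lines pairwise and group nearby intersection points, taking each group's centroid as an object position. The function names and the greedy radius-based grouping below are assumptions.

```python
import itertools

def intersect(l1, l2):
    # Each line is ((px, py), (dx, dy)): a point and a direction on the ground plane.
    (p1, d1), (p2, d2) = l1, l2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        return None  # parallel lines never meet
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def cluster_positions(lines, radius=1.0):
    # Pairwise intersections, greedily grouped; each group's centroid is
    # taken as one object position on the ground plane.
    points = [p for a, b in itertools.combinations(lines, 2)
              if (p := intersect(a, b)) is not None]
    clusters = []
    for p in points:
        for c in clusters:
            cx = sum(x for x, _ in c) / len(c)
            cy = sum(y for _, y in c) / len(c)
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```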
-
Publication No.: US11869141B2
Publication Date: 2024-01-09
Application No.: US17436481
Filing Date: 2019-05-14
Applicant: Intel Corporation
Inventor: Xiaofeng Tong , Wenlong Li
CPC classification number: G06T17/00 , G06T7/70 , G06V10/255 , G06T2200/08
Abstract: Techniques related to validating an image based 3D model of a scene are discussed. Such techniques include detecting an object within a captured image used to generate the scene, projecting the 3D model to a view corresponding to the captured image to generate a reconstructed image, and comparing image regions of the captured and reconstructed images corresponding to the object to validate the 3D model.
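The validation idea reads as a straightforward region comparison between the captured image and the reprojected reconstruction. A minimal sketch, assuming grayscale images stored as row-major lists of lists and a mean-absolute-difference criterion (the abstract does not specify the metric):

```python
def region_difference(captured, reconstructed, box):
    # box = (x0, y0, x1, y1): the detected object's bounding box in both
    # the captured image and the reconstructed (reprojected) image.
    x0, y0, x1, y1 = box
    diffs = [abs(captured[y][x] - reconstructed[y][x])
             for y in range(y0, y1) for x in range(x0, x1)]
    return sum(diffs) / len(diffs)

def validate(captured, reconstructed, box, threshold=10.0):
    # The 3D model is accepted when the reprojected region matches the
    # captured region closely enough; the threshold is an assumption.
    return region_difference(captured, reconstructed, box) <= threshold
```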
-
Publication No.: US11383144B2
Publication Date: 2022-07-12
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen , Lidan Zhang , Xiaofeng Tong , Fucen Zeng
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
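The key-frame selection step, picking the video frame nearest the sensor timestamp that marked the key stage, can be sketched as follows (the function name and nearest-timestamp rule are assumptions):

```python
def select_key_frame(frame_timestamps, key_stage_ts):
    # Pick the index of the video frame whose capture time is closest to
    # the sensor timestamp that identified the key stage of the activity.
    return min(range(len(frame_timestamps)),
               key=lambda i: abs(frame_timestamps[i] - key_stage_ts))
```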
-
Publication No.: US11295502B2
Publication Date: 2022-04-05
Application No.: US16987707
Filing Date: 2020-08-07
Applicant: Intel Corporation
Inventor: Yikai Fang , Yangzhou Du , Qiang Eric Li , Xiaofeng Tong , Wenlong Li , Minje Park , Myung-Ho Ju , Jihyeon Kate Yi , Tae-Hoon Pete Kim
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
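The mapping from a sequence of facial expressions to an alternative avatar expression can be pictured as a lookup table over the most recent expressions; the table entries, window size, and fallback rule below are invented for illustration only:

```python
# Hypothetical mapping table: a recognized sequence of facial expressions
# triggers one alternative (exaggerated) avatar expression.
EXPRESSION_MAP = {
    ("neutral", "smile"): "grin",
    ("smile", "wink"): "laughing_tears",
}

def alternative_expression(sequence, window=2):
    # Look up the most recent `window` expressions; fall back to the last
    # raw expression when no alternative is mapped.
    key = tuple(sequence[-window:])
    return EXPRESSION_MAP.get(key, sequence[-1])
```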
-
Publication No.: US20210279896A1
Publication Date: 2021-09-09
Application No.: US17255837
Filing Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Xiaofeng Tong , Chen Ling , Ming Lu , Qiang Li , Wenlong Li , Yikai Fang , Yumeng Wang
Abstract: A multi-camera architecture for detecting and tracking a ball in real time. The multi-camera architecture includes network interface circuitry to receive a plurality of real-time videos taken from a plurality of high-resolution cameras. Each of the high-resolution cameras simultaneously captures a sports event, and each camera's viewpoint covers the entire playing field where the sports event is played. The multi-camera architecture further includes one or more processors coupled to the network interface circuitry and one or more memory devices coupled to the one or more processors. The one or more memory devices include instructions to determine the location of the ball for each frame of the plurality of real-time videos. When executed by the one or more processors, the instructions cause the multi-camera architecture to simultaneously perform either a detection scheme or a tracking scheme on a frame from each of the plurality of real-time videos to detect the ball used in the sports event, and to perform a multi-camera build that determines the 3D location of the ball for the frame from each of the plurality of real-time videos using the detection or tracking results for each of the cameras.
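The multi-camera build that lifts per-camera 2D detections into one 3D ball position can be illustrated with the classic closest-point-between-two-rays construction, a simplification of whatever multi-view geometry the patent actually uses (all names below are assumptions):

```python
def closest_point_two_rays(o1, d1, o2, d2):
    # Midpoint of the shortest segment between two camera rays: a minimal
    # stand-in for the multi-camera "build" step. Each ray is an origin
    # (camera center) plus a direction toward the detected ball.
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    p1 = [o + t * dd for o, dd in zip(o1, d1)]
    p2 = [o + s * dd for o, dd in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```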
-
Publication No.: US20210209372A1
Publication Date: 2021-07-08
Application No.: US17059788
Filing Date: 2018-09-27
Applicant: Intel Corporation
Inventor: Xiaofeng Tong , Wenlong Li , Doron Houminer , Chen Ling , Yumeng Wang
Abstract: Methods, systems, and apparatuses may provide for technology that extracts one or more motion features from filtered position data associated with a projectile in a game and identifies a turning point in the trajectory of the projectile based on the one or more motion features. The technology may also automatically designate the turning point as a highlight moment if the turning point or the trajectory satisfies a proximity condition with respect to a target area in the game.
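Turning-point detection and the proximity condition might be sketched like this, assuming the motion feature is the vertical component of the filtered position data (a sign change in vertical velocity, e.g. the apex of a shot) and a circular target area; none of this is stated in the abstract:

```python
def turning_points(heights):
    # A turning point is where the vertical velocity changes sign
    # (e.g. the apex of a basketball shot's trajectory).
    idx = []
    for i in range(1, len(heights) - 1):
        v_prev = heights[i] - heights[i - 1]
        v_next = heights[i + 1] - heights[i]
        if v_prev > 0 >= v_next:
            idx.append(i)
    return idx

def is_highlight(point, target, radius):
    # Proximity condition: the turning point lies within `radius`
    # of the target area's center.
    return sum((p - t) ** 2 for p, t in zip(point, target)) <= radius ** 2
```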
-
Publication No.: US20210069571A1
Publication Date: 2021-03-11
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen , Lidan Zhang , Xiaofeng Tong , Fucen Zeng
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
-
Publication No.: US20190304155A1
Publication Date: 2019-10-03
Application No.: US16172664
Filing Date: 2018-10-26
Applicant: Intel Corporation
Inventor: Yikai Fang , Yangzhou Du , Qiang Eric Li , Xiaofeng Tong , Wenlong Li , Minje Park , Myung-Ho Ju , Jihyeon Kate Yi , Tae-Hoon Pete Kim
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
-
Publication No.: US09633463B2
Publication Date: 2017-04-25
Application No.: US14763792
Filing Date: 2014-09-24
Applicant: Intel Corporation
Inventor: Qiang Li , Xiaofeng Tong , Yangzhou Du , Wenlong Li , Caleb J. Ozer , Jose Elmer S. Lorenzo
IPC: G06T13/40 , G06F3/0488 , G06T15/00 , G06T15/50 , G06F3/01 , A63F13/428 , A63F13/2145
CPC classification number: G06T13/40 , A63F13/2145 , A63F13/428 , A63F2300/6607 , G06F3/011 , G06F3/04883 , G06T15/005 , G06T15/503
Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may include a gesture tracker and an animation engine. The gesture tracker may be configured to detect and track a user gesture that corresponds to a canned facial expression, the user gesture including a duration component corresponding to a duration the canned facial expression is to be animated. Further, the gesture tracker may be configured to respond to detection and tracking of the user gesture by outputting one or more animation messages that describe the detected/tracked user gesture or identify the canned facial expression, and the duration. The animation engine may be configured to receive the one or more animation messages and drive an avatar model, in accordance with the one or more animation messages, to animate the avatar with the canned facial expression for the duration. Other embodiments may be described and/or claimed.
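The message flow between the gesture tracker and the animation engine can be sketched with a toy gesture table; the table contents, message layout, and frame-rate figure are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class AnimationMessage:
    # One message per detected gesture: which canned expression to play
    # and for how long (seconds), as described in the abstract.
    expression: str
    duration: float

# Hypothetical gesture -> canned-expression table.
GESTURE_TABLE = {"peace_sign": "wink", "thumbs_up": "big_smile"}

def track_gesture(gesture, hold_time):
    # The gesture tracker emits an animation message for known gestures,
    # carrying the duration the gesture was held.
    if gesture not in GESTURE_TABLE:
        return None
    return AnimationMessage(GESTURE_TABLE[gesture], hold_time)

def animate(message, fps=30):
    # The animation engine drives the avatar model for duration * fps frames.
    return [message.expression] * int(message.duration * fps)
```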
-
Publication No.: US20170111616A1
Publication Date: 2017-04-20
Application No.: US15395661
Filing Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Wenlong Li , Xiaofeng Tong , Yangzhou Du , Qiang Eric Li , Yimin Zhang , Wei Hu , John G. Tennant , Hui A. Li
CPC classification number: H04N7/157 , G06K9/00248 , G06K9/00255 , G06K9/00268 , G06K9/00281 , G06K9/00308 , G06T13/40 , H04N7/147 , H04N21/4223 , H04N21/44008 , H04N21/4788 , H04N21/8146
Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
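The feature-to-parameter conversion and transmission steps can be pictured as computing offsets of tracked landmarks from a neutral pose and packing them with the avatar selection, so only compact parameters (not live video) cross the wire; the landmark names and packet layout are invented for illustration:

```python
def facial_features_to_avatar_params(landmarks, neutral):
    # Convert tracked facial landmarks to avatar parameters as offsets
    # from a neutral pose; only these compact parameters are transmitted
    # to the remote peer, which animates its local copy of the avatar.
    return {name: (landmarks[name][0] - neutral[name][0],
                   landmarks[name][1] - neutral[name][1])
            for name in landmarks}

def make_packet(avatar_id, params):
    # A transmission unit carrying the avatar selection and its parameters.
    return {"avatar": avatar_id, "params": params}
```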