-
Publication No.: US10719692B2
Publication Date: 2020-07-21
Application No.: US15900905
Filing Date: 2018-02-21
Applicant: Apple Inc.
Inventor: Micah P. Kalscheur , Feng Tang
IPC: G06K9/00 , G06F16/583 , G06F16/56 , G06F21/32
Abstract: Subepidermal imaging of a face may be used to assess subepidermal features such as blood vessels (e.g., veins) when a device attempts to authenticate a user in a facial recognition authentication process. Assessment of the subepidermal features may be used to distinguish between users who have closely related facial features (e.g., siblings or twins) in situations where the facial recognition authentication process has less certainty in a decision about recognition of the user's face as an authorized user.
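The two-stage decision this abstract describes could be sketched as follows. All names, thresholds, and the use of cosine similarity are illustrative assumptions, not details taken from the patent:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(face_score, vein_probe, vein_template,
                 accept_thr=0.9, reject_thr=0.6, vein_thr=0.8):
    """Hypothetical two-stage check: consult the subepidermal (vein)
    template only when the facial match score is ambiguous."""
    if face_score >= accept_thr:
        return True    # facial features alone are conclusive
    if face_score < reject_thr:
        return False   # clearly not the enrolled user
    # Ambiguous band (e.g., siblings or twins): disambiguate using
    # the subepidermal feature vector captured by the imaging step.
    return cosine(vein_probe, vein_template) >= vein_thr
```

The key idea is that the costlier subepidermal comparison runs only inside the uncertainty band between the reject and accept thresholds.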
-
Publication No.: US20200042096A1
Publication Date: 2020-02-06
Application No.: US16600830
Filing Date: 2019-10-14
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
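One way the monitoring loop described here might infer intent from a sequence of depth images is sketched below. The function name, thresholds, and the use of minimum scene depth as a proxy for user distance are all assumptions for illustration:

```python
def infer_intent(depth_frames, near_mm=1200, approach_mm=150):
    """Toy sketch: classify user engagement from the nearest point
    in each depth frame over time (distances in millimetres)."""
    # The nearest pixel in each frame approximates user distance.
    dists = [min(min(row) for row in frame) for frame in depth_frames]
    if dists[-1] > near_mm:
        return "idle"       # nobody close to the system
    if dists[0] - dists[-1] >= approach_mm:
        return "engaging"   # user moved toward the system
    return "present"        # user nearby but static
```

A real system would of course use the full scene geometry (walls, furniture, people) rather than a single nearest-point statistic.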
-
Publication No.: US09811721B2
Publication Date: 2017-11-07
Application No.: US14706649
Filing Date: 2015-05-07
Applicant: Apple Inc.
Inventor: Feng Tang , Ang Li , Xiaojin Shi
IPC: G06K9/00 , H04N13/02 , G06K9/46 , G06F3/01 , G06F3/03 , G06F3/042 , G06T7/246 , G06T7/254 , H04N13/00
CPC classification number: G06K9/00355 , G06F3/017 , G06F3/0304 , G06F3/0425 , G06K9/4609 , G06T7/246 , G06T7/254 , G06T2200/04 , G06T2207/30196 , H04N13/207 , H04N13/271 , H04N2013/0085
Abstract: In the field of human-computer interaction (HCI), i.e., the study of the interfaces between people (i.e., users) and computers, understanding how the user intends and wishes to interact with the computer is a very important problem. The ability to understand human gestures, and, in particular, hand gestures, as they relate to HCI, is a key aspect of inferring the intentions and desires of the user in a wide variety of applications. In this disclosure, a novel system and method for three-dimensional hand tracking using depth sequences is described. The major contributions of the hand tracking system described herein include: 1.) a robust hand detector that is invariant to scene background changes; 2.) a bi-directional tracking algorithm that prevents detected hands from drifting ever closer to the front of the scene (i.e., forward along the z-axis of the scene); and 3.) various hand verification heuristics.
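The bi-directional consistency idea named in contribution 2.) could be sketched as a round-trip check. The callback names, the centroid representation, and the error threshold are hypothetical, not the patent's actual formulation:

```python
def drift_free_update(prev_pos, fwd_step, bwd_step, max_err=10.0):
    """Hedged sketch of bi-directional tracking: run the forward
    tracker t-1 -> t, then the backward tracker t -> t-1, and accept
    the forward result only if the round trip lands near prev_pos.

    prev_pos: (x, y, z) hand centroid at the previous frame.
    fwd_step / bwd_step: hypothetical tracker callbacks.
    """
    cur = fwd_step(prev_pos)    # forward track t-1 -> t
    back = bwd_step(cur)        # backward track t -> t-1
    err = sum((a - b) ** 2 for a, b in zip(back, prev_pos)) ** 0.5
    # A large round-trip error suggests the tracker drifted (e.g.,
    # pulled toward the front of the scene); keep the old estimate.
    return cur if err <= max_err else prev_pos
```

Rejecting updates whose backward track fails to return to the start is what counteracts the systematic forward (z-axis) drift the abstract mentions.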
-
Publication No.: US20170090584A1
Publication Date: 2017-03-30
Application No.: US14865850
Filing Date: 2015-09-25
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
CPC classification number: G06F3/017 , G06F3/012 , G06F3/0304 , G06F3/16 , G06K9/00375 , G06K9/00389 , G06K9/6269 , G06K9/6282 , G06K2209/40
Abstract: Varying embodiments of intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene, such as walls, furniture, and humans, may be evaluated and monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent or desire as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, for example, expressed through fine hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, for example, optical or non-optical type depth sensors. The depth information may be interpreted in "slices" (three-dimensional regions of space having a relatively small depth) until one or more candidate hand structures are detected. Once detected, each candidate hand structure may be confirmed or rejected based on its own unique physical properties (e.g., shape, size, and continuity to an arm structure). Each confirmed hand structure may be submitted to a depth-aware filtering process before its own unique three-dimensional features are quantified into a high-dimensional feature vector. A two-step classification scheme may be applied to the feature vectors to identify a candidate gesture (step 1), and to reject candidate gestures that do not meet a gesture-specific identification operation (step 2). The identified gesture may be used to initiate some action controlled by a computer system.
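The "slices" stage this abstract describes, i.e., partitioning depth pixels into thin depth bands before hand candidates are verified, can be sketched as below. Band width, range, and the returned structure are illustrative assumptions:

```python
def depth_slices(depth_map, slice_mm=100, max_mm=1000):
    """Sketch of slicing a depth map into thin 3-D bands. Each
    non-empty band yields (near, far, pixels): a candidate region
    that later hand-verification heuristics would accept or reject.

    depth_map: 2-D list of per-pixel depths in millimetres.
    """
    slices = []
    for near in range(0, max_mm, slice_mm):
        far = near + slice_mm
        # Collect (row, col) coordinates whose depth falls in the band.
        band = [(r, c)
                for r, row in enumerate(depth_map)
                for c, d in enumerate(row)
                if near <= d < far]
        if band:
            slices.append((near, far, band))
    return slices
```

Because a hand occupies a narrow depth range, restricting each pass to one thin band keeps background structure out of the candidate regions.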
-