-
Publication No.: US20190041197A1
Publication Date: 2019-02-07
Application No.: US15912917
Filing Date: 2018-03-06
Applicant: Apple Inc.
Inventor: Thorsten Gernoth , Ian R. Fasel , Haitao Guo , Atulit Kumar
CPC classification number: G06T7/514 , G01B11/14 , G06K9/00221 , G06K9/2027 , G06K9/2036 , G06T7/74 , G06T2207/10048 , G06T2207/30244 , H04N5/2251 , H04N5/2256 , H04N5/33
Abstract: An estimate of distance between a user and a camera on a device is used to determine an illumination pattern density used for speckle pattern illumination of the user in subsequent images. The distance may be estimated using an image captured when the user is illuminated with flood infrared illumination. Either a sparse speckle (dot) illumination pattern or a dense speckle illumination pattern is used, depending on the distance between the user's face and the camera.
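The abstract above describes choosing between two dot-pattern densities from an estimated face-to-camera distance. The sketch below only illustrates that selection step; the threshold value, the direction of the sparse/dense mapping, and the function names are assumptions, not taken from the patent.

```python
# Illustrative sketch of distance-based speckle-pattern selection (not the
# patent's implementation). Threshold and sparse/dense mapping are assumptions.

DISTANCE_THRESHOLD_CM = 35.0  # assumed crossover distance


def select_speckle_density(estimated_distance_cm: float) -> str:
    """Pick a dot-pattern density from a distance estimated on a flood-IR frame."""
    return "sparse" if estimated_distance_cm < DISTANCE_THRESHOLD_CM else "dense"


if __name__ == "__main__":
    for distance_cm in (25.0, 60.0):  # illustrative flood-IR distance estimates
        print(f"{distance_cm:.0f} cm -> {select_speckle_density(distance_cm)} dot pattern")
```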
-
Publication No.: US10048765B2
Publication Date: 2018-08-14
Application No.: US14865850
Filing Date: 2015-09-25
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
Abstract: Varying embodiments of intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene, such as walls, furniture, and humans, may be evaluated and monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent or desire as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, for example, expressed through fine hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, for example, optical or non-optical type depth sensors. The depth information may be interpreted in “slices” (three-dimensional regions of space having a relatively small depth) until one or more candidate hand structures are detected. Once detected, each candidate hand structure may be confirmed or rejected based on its own unique physical properties (e.g., shape, size, and continuity to an arm structure). Each confirmed hand structure may be submitted to a depth-aware filtering process before its own unique three-dimensional features are quantified into a high-dimensional feature vector. A two-step classification scheme may be applied to the feature vectors to identify a candidate gesture (step 1), and to reject candidate gestures that do not meet a gesture-specific identification operation (step 2). The identified gesture may be used to initiate some action controlled by a computer system.
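The two-step classification scheme in the abstract can be pictured as a proposal step followed by a gesture-specific verification step. The sketch below is a hedged illustration under that reading; the classifier interfaces, the feature vector, and the acceptance threshold are assumptions, not the patent's models.

```python
# Hedged sketch of a two-step gesture classification: step 1 proposes a candidate
# gesture from a feature vector, step 2 runs a gesture-specific check that may
# reject it. Interfaces and threshold are illustrative assumptions.

from typing import Callable, Optional, Sequence


def classify_gesture(
    features: Sequence[float],
    propose: Callable[[Sequence[float]], str],
    verifiers: dict[str, Callable[[Sequence[float]], float]],
    accept_threshold: float = 0.8,
) -> Optional[str]:
    """Return the verified gesture label, or None if the candidate is rejected."""
    candidate = propose(features)            # step 1: multi-class proposal
    score = verifiers[candidate](features)   # step 2: gesture-specific verification
    return candidate if score >= accept_threshold else None


if __name__ == "__main__":
    dummy_features = [0.1] * 16              # stand-in high-dimensional feature vector
    propose_fn = lambda f: "pinch"           # stand-in step-1 classifier
    verifiers = {"pinch": lambda f: 0.9}     # stand-in gesture-specific verifier
    print(classify_gesture(dummy_features, propose_fn, verifiers))  # -> pinch
```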
-
Publication No.: US20240169046A1
Publication Date: 2024-05-23
Application No.: US18521808
Filing Date: 2023-11-28
Applicant: Apple Inc.
Inventor: Deepti S. Prakash , Lucia E. Ballard , Jerrold V. Hauck , Feng Tang , Etai Littwin , Pavan Kumar Anasosalu Vasu , Gideon Littwin , Thorsten Gernoth , Lucie Kucerova , Petr Kostka , Steven P. Hotelling , Eitan Hirsh , Tal Kaitz , Jonathan Pokrass , Andrei Kolin , Moshe Laifenfeld , Matthew C. Waldon , Thomas P. Mensch , Lynn R. Youngs , Christopher G. Zeleznik , Michael R. Malone , Ziv Hendel , Ivan Krstic , Anup K. Sharma
CPC classification number: G06F21/32 , G06F21/83 , G06V40/166 , G06V40/172 , G06V40/40 , H04L9/0844 , H04L9/085 , H04L9/3228 , H04L9/3231 , H04L9/3234 , H04L9/3247 , H04L63/0861 , H04W12/06
Abstract: Techniques are disclosed relating to biometric authentication, e.g., facial recognition. In some embodiments, a device is configured to verify that image data from a camera unit exhibits a pseudo-random sequence of image capture modes and/or a probing pattern of illumination points (e.g., from lasers in a depth capture mode) before authenticating a user based on recognizing a face in the image data. In some embodiments, a secure circuit may control verification of the sequence and/or the probing pattern. In some embodiments, the secure circuit may verify frame numbers, signatures, and/or nonce values for captured image information. In some embodiments, a device may implement one or more lockout procedures in response to biometric authentication failures. The disclosed techniques may reduce or eliminate the effectiveness of spoofing and/or replay attacks, in some embodiments.
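One way to picture the sequence verification described above is a secure component deriving the expected pseudo-random order of capture modes from a shared secret and a nonce, then checking that the received frames follow it. The sketch below is illustrative only; the use of HMAC-SHA256, the two mode names, and the function interfaces are assumptions, not the patent's protocol.

```python
# Simplified sketch of verifying a pseudo-random sequence of capture modes.
# HMAC-SHA256 derivation and the mode names are illustrative assumptions.

import hashlib
import hmac
from typing import Sequence

MODES = ("flood_ir", "depth")  # assumed capture modes


def expected_mode_sequence(secret_key: bytes, nonce: bytes, length: int) -> list[str]:
    """Derive a deterministic pseudo-random mode sequence from a secret and a nonce."""
    stream = hmac.new(secret_key, nonce, hashlib.sha256).digest()
    while len(stream) < length:
        stream += hashlib.sha256(stream).digest()
    return [MODES[byte % len(MODES)] for byte in stream[:length]]


def frames_match_sequence(frame_modes: Sequence[str], secret_key: bytes, nonce: bytes) -> bool:
    """Accept the frames only if their capture modes exhibit the expected sequence."""
    return list(frame_modes) == expected_mode_sequence(secret_key, nonce, len(frame_modes))
```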
-
Publication No.: US20230096119A1
Publication Date: 2023-03-30
Application No.: US17945196
Filing Date: 2022-09-15
Applicant: Apple Inc.
Inventor: Thorsten Gernoth , Cheng Lu , Hao Tang , Michael P. Johnson
Abstract: Various implementations disclosed herein provide feedback to a user during object scanning based on how well sensor data of the scanned object has been captured. During object scanning, a user may move an electronic device with sensors (e.g., cameras, depth sensors, etc.) around an object to capture sensor data for use in generating a final 3D model of the object. Live feedback during the scanning process is enabled by assessing how well the captured sensor data represents different portions of the object. In some implementations, a 3D model is generated, updated, and assessed based on the sensor data live during the scanning process. This live 3D model may be coarser (i.e., having fewer details) than the final 3D model.
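The live feedback described above can be read as a coverage check: depth samples are bucketed by the object region they land on, and regions that are still sparsely sampled are surfaced to the user. The sketch below is a rough illustration; the region labels and the sample threshold are assumptions.

```python
# Rough sketch of live coverage assessment during object scanning.
# Region labels and the per-region sample threshold are illustrative assumptions.

from collections import Counter
from typing import Iterable

MIN_SAMPLES_PER_REGION = 200  # assumed "well covered" threshold


def undercovered_regions(sample_regions: Iterable[str], all_regions: Iterable[str]) -> list[str]:
    """Return object regions whose captured sensor data is still too sparse."""
    counts = Counter(sample_regions)
    return [region for region in all_regions if counts[region] < MIN_SAMPLES_PER_REGION]


if __name__ == "__main__":
    samples = ["front"] * 350 + ["left"] * 40            # illustrative per-sample region labels
    print(undercovered_regions(samples, ["front", "left", "back"]))  # -> ['left', 'back']
```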
-
Publication No.: US11561621B2
Publication Date: 2023-01-24
Application No.: US16600830
Filing Date: 2019-10-14
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
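As an illustration of how scene geometry could gate the interpretation of user activity, the sketch below treats a gesture as directed at the system only when the user is near it and facing it. The fields, the heuristic, and the distance value are assumptions, not the patent's method.

```python
# Hedged sketch: use scene geometry (here, user distance and orientation) to decide
# whether activity should be interpreted as directed at the system. All fields and
# values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SceneGeometry:
    user_distance_m: float   # distance of the user from the device
    facing_system: bool      # whether the user is oriented toward the device


def gesture_is_directed_at_system(scene: SceneGeometry, max_distance_m: float = 3.0) -> bool:
    """Assumed heuristic: nearby, system-facing users are treated as addressing it."""
    return scene.facing_system and scene.user_distance_m <= max_distance_m
```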
-
Publication No.: US11386355B2
Publication Date: 2022-07-12
Application No.: US16706578
Filing Date: 2019-12-06
Applicant: Apple Inc.
Inventor: Carlos E. Guestrin , Leon A. Gatys , Shreyas V. Joshi , Gustav M. Larsson , Kory R. Watson , Srikrishna Sridhar , Karla P. Vega , Shawn R. Scully , Thorsten Gernoth , Onur C. Hamsici
Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.
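The inference flow in the abstract pairs an infrared frame with a reference RGB image of the same subject and passes both to a trained model that outputs a predicted RGB image. The sketch below only illustrates that data flow; the model interface and the array shapes are assumptions.

```python
# Minimal sketch of the predicted-RGB inference flow. The model interface and
# expected array shapes are illustrative assumptions.

from typing import Callable

import numpy as np

PredictRGB = Callable[[np.ndarray, np.ndarray], np.ndarray]  # (IR, reference RGB) -> RGB


def predict_rgb(ir_image: np.ndarray, reference_rgb: np.ndarray, model: PredictRGB) -> np.ndarray:
    """Run a trained model on an IR frame plus a reference RGB image of the subject."""
    assert ir_image.ndim == 2 and reference_rgb.ndim == 3, "expected HxW IR and HxWx3 RGB"
    return model(ir_image, reference_rgb)
```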
-
Publication No.: US20200042096A1
Publication Date: 2020-02-06
Application No.: US16600830
Filing Date: 2019-10-14
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
-
Publication No.: US10303866B1
Publication Date: 2019-05-28
Application No.: US16141084
Filing Date: 2018-09-25
Applicant: Apple Inc.
Inventor: Marcel Van Os , Thorsten Gernoth , Kelsey Y. Ho
Abstract: An operation of a facial recognition authentication process may fail to authenticate a user even if the user is an authorized user of the device. In such cases, the facial recognition authentication process may automatically re-initiate to provide another attempt to authenticate the user using additional captured images. For the new attempt (e.g., the retry) to authenticate the user, one or more criteria for the images used in the facial recognition authentication process may be adjusted. For example, criteria for distance between the camera and the user's face and/or occlusion of the user's face in the images may be adjusted before the new attempt to authenticate the user. Adjustment of these criteria may increase the likelihood that the authorized user will be successfully authenticated in the new attempt.
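The retry behavior in the abstract can be sketched as a second authentication attempt under adjusted image-acceptance criteria, for example a wider allowed distance range and a higher occlusion tolerance. The concrete numbers and the criteria structure below are illustrative assumptions.

```python
# Hedged sketch of a facial-recognition retry with relaxed image criteria.
# The criteria fields and numeric values are illustrative assumptions.

from dataclasses import dataclass, replace
from typing import Callable


@dataclass(frozen=True)
class ImageCriteria:
    max_distance_cm: float
    max_occlusion_fraction: float


def authenticate_with_retry(
    attempt: Callable[[ImageCriteria], bool],
    criteria: ImageCriteria = ImageCriteria(max_distance_cm=60.0, max_occlusion_fraction=0.2),
) -> bool:
    """Try once with default criteria; on failure, retry once with relaxed criteria."""
    if attempt(criteria):
        return True
    relaxed = replace(criteria, max_distance_cm=80.0, max_occlusion_fraction=0.35)
    return attempt(relaxed)
```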
-
Publication No.: US20170090584A1
Publication Date: 2017-03-30
Application No.: US14865850
Filing Date: 2015-09-25
Applicant: Apple Inc.
Inventor: Feng Tang , Chong Chen , Haitao Guo , Xiaojin Shi , Thorsten Gernoth
CPC classification number: G06F3/017 , G06F3/012 , G06F3/0304 , G06F3/16 , G06K9/00375 , G06K9/00389 , G06K9/6269 , G06K9/6282 , G06K2209/40
Abstract: Varying embodiments of intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene, such as walls, furniture, and humans, may be evaluated and monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent or desire as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, for example, expressed through fine hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, for example, optical or non-optical type depth sensors. The depth information may be interpreted in “slices” (three-dimensional regions of space having a relatively small depth) until one or more candidate hand structures are detected. Once detected, each candidate hand structure may be confirmed or rejected based on its own unique physical properties (e.g., shape, size, and continuity to an arm structure). Each confirmed hand structure may be submitted to a depth-aware filtering process before its own unique three-dimensional features are quantified into a high-dimensional feature vector. A two-step classification scheme may be applied to the feature vectors to identify a candidate gesture (step 1), and to reject candidate gestures that do not meet a gesture-specific identification operation (step 2). The identified gesture may be used to initiate some action controlled by a computer system.
-
Publication No.: US12002227B1
Publication Date: 2024-06-04
Application No.: US17376313
Filing Date: 2021-07-15
Applicant: Apple Inc.
Inventor: Donghoon Lee , Thorsten Gernoth , Onur C. Hamsici , Shuo Feng
CPC classification number: G06T7/344 , G06N3/08 , G06T15/205 , G06T2207/20081 , G06T2207/20084
Abstract: Devices, systems, and methods are disclosed for partial point cloud registration. In some implementations, a method includes obtaining a first set of three-dimensional (3D) points corresponding to an object in a physical environment, the first set of 3D points having locations in a first coordinate system, obtaining a second set of 3D points corresponding to the object in the physical environment, the second set of 3D points having locations in a second coordinate system, predicting, via a machine learning model, locations of the first set of 3D points in the second coordinate system, and determining transform parameters relating the first set of 3D points and the second set of 3D points based on the predicted locations of the first set of 3D points in the second coordinate system.
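Once the model has predicted where the first point set should lie in the second coordinate system, transform parameters can be fit between the original and predicted points. The sketch below uses the standard Kabsch/Procrustes least-squares solution for a rigid transform as one way to do that; the patent abstract does not specify this particular solver.

```python
# Fit a rigid transform (R, t) between original points and their predicted locations
# in the second coordinate system, using the standard Kabsch/Procrustes solution.
# This is an illustrative solver choice, not necessarily the patent's method.

import numpy as np


def fit_rigid_transform(source: np.ndarray, predicted: np.ndarray):
    """Return (R, t) minimizing sum over i of ||R @ source[i] + t - predicted[i]||^2."""
    src_centroid = source.mean(axis=0)
    dst_centroid = predicted.mean(axis=0)
    H = (source - src_centroid).T @ (predicted - dst_centroid)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t
```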
-