-
Publication Number: US11798242B2
Publication Date: 2023-10-24
Application Number: US17866071
Application Date: 2022-07-15
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev , Golnaz Abdollahian , Devin William Chalmers , David H. Y. Huang , Banafsheh Jalali
CPC classification number: G06T19/006 , G02B27/017 , G06F3/011
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
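A minimal Python sketch of the selection step this abstract describes, assuming each contextual CGR digital assistant pairs a trigger detector with context-keyed visual representations; the `ContextualAssistant` type, the byte-string detector, and the representation names are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical registry entry for a contextual CGR digital assistant: a trigger
# detector that runs on captured image data plus context-dependent visuals.
@dataclass
class ContextualAssistant:
    name: str
    detect_trigger: Callable[[bytes], bool]   # runs on the captured image data
    representations: dict                     # context label -> visual asset id

def select_assistant(image_data: bytes, assistants: list, context: str) -> Optional[dict]:
    """Return the visual representation to display, or None if no trigger fires."""
    for assistant in assistants:
        if assistant.detect_trigger(image_data):
            visual = assistant.representations.get(
                context, assistant.representations["default"])
            return {"assistant": assistant.name, "visual": visual}
    return None

# Illustrative usage: a "restaurant" trigger selects a bird-shaped guide indoors
# and a drone-shaped guide outdoors (both purely made-up representations).
assistants = [
    ContextualAssistant(
        name="restaurant_guide",
        detect_trigger=lambda img: b"storefront" in img,  # stand-in for a real detector
        representations={"default": "speech_bubble", "indoor": "bird", "outdoor": "drone"},
    )
]
print(select_assistant(b"...storefront...", assistants, context="outdoor"))
```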
-
Publication Number: US20220322006A1
Publication Date: 2022-10-06
Application Number: US17841527
Application Date: 2022-06-15
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
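One common way to realize the transform this abstract describes is to convolve the recorded (dry) sound with an impulse response derived from the SR setting's surfaces; the patent does not specify the transform, so the convolution approach and the decaying-noise impulse response in the NumPy sketch below are assumptions.

```python
import numpy as np

def apply_room_reverb(dry_signal: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a recorded (dry) sound with the SR setting's impulse response
    and renormalize so the resulting virtual sound does not clip."""
    wet = np.convolve(dry_signal, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Illustrative: a 0.5 s decaying-noise impulse response standing in for the
# acoustic reverberation property derived from the SR setting's surfaces.
sample_rate = 16_000
t = np.arange(int(0.5 * sample_rate)) / sample_rate
impulse_response = np.random.default_rng(0).standard_normal(t.size) * np.exp(-6.0 * t)
recorded = np.sin(2 * np.pi * 440.0 * np.arange(sample_rate) / sample_rate)  # 1 s tone
virtual_sound = apply_room_reverb(recorded, impulse_response)
print(virtual_sound.shape)
```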
-
Publication Number: US11363378B2
Publication Date: 2022-06-14
Application Number: US17087878
Application Date: 2020-11-03
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
-
Publication Number: US11182964B2
Publication Date: 2021-11-23
Application Number: US16375595
Application Date: 2019-04-04
Applicant: Apple Inc.
Inventor: Alexis H. Palangie , Avi Bar-Zeev
Abstract: The present disclosure relates to techniques for providing tangibility visualization of virtual objects within a computer-generated reality (CGR) environment, such as a CGR environment based on virtual reality and/or a CGR environment based on mixed reality. A visual feedback indicating tangibility is provided for a virtual object within a CGR environment that does not correspond to a real, tangible object in the real environment. A visual feedback indicating tangibility is not provided for a virtual representation of a real object within a CGR environment that corresponds to a real, tangible object in the real environment.
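A minimal sketch of the decision rule the abstract describes: only purely virtual objects, those without a real, tangible counterpart, receive the tangibility visualization. The `SceneObject` type and the glow/outline feedback named in the comments are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    corresponds_to_real_object: bool  # True for a virtual stand-in of a real, tangible thing

def needs_tangibility_feedback(obj: SceneObject) -> bool:
    """Purely virtual objects (nothing real for the hand to meet) get the
    tangibility visualization; representations of real objects do not."""
    return not obj.corresponds_to_real_object

scene = [SceneObject("virtual chess piece", False),
         SceneObject("real coffee table", True)]
for obj in scene:
    flag = "show glow/outline" if needs_tangibility_feedback(obj) else "no feedback"
    print(f"{obj.name}: {flag}")
```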
-
Publication Number: US20210339143A1
Publication Date: 2021-11-04
Application Number: US17271551
Application Date: 2019-09-17
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev , Alexis Henri Palangie , Luis Rafael Deliz Centeno , Rahul Nair
Abstract: In various implementations, methods and devices for attenuation of co-user interactions in SR space are described. In one implementation, a method of attenuating avatars based on a breach of avatar social interaction criteria is performed at a device provided to deliver simulated reality (SR) content. In one implementation, a method of close collaboration in an SR setting is performed at a device provided to deliver SR content.
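A minimal sketch of avatar attenuation on a breach of social interaction criteria, assuming distance and loudness thresholds as the criteria and a multiplicative fade as the attenuation; the thresholds and the `Avatar` type are illustrative, not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    user_id: str
    distance_m: float   # distance from the viewing user in the SR space
    loudness_db: float  # voice level attributed to this avatar

# Hypothetical social-interaction criteria: keep at least 1 m of personal space
# and stay under 75 dB; breaching either attenuates the offending avatar.
MIN_DISTANCE_M = 1.0
MAX_LOUDNESS_DB = 75.0

def attenuation_factor(avatar: Avatar) -> float:
    """1.0 renders the avatar normally; smaller values fade it out."""
    factor = 1.0
    if avatar.distance_m < MIN_DISTANCE_M:
        factor *= avatar.distance_m / MIN_DISTANCE_M  # fade as it encroaches
    if avatar.loudness_db > MAX_LOUDNESS_DB:
        factor *= 0.5                                 # halve visibility when too loud
    return factor

print(attenuation_factor(Avatar("co_user_1", distance_m=0.4, loudness_db=80.0)))  # 0.2
```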
-
Publication Number: US11119573B2
Publication Date: 2021-09-14
Application Number: US16568782
Application Date: 2019-09-12
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev , Devin W. Chalmers , Fletcher R. Rothkopf , Grant H. Mulliken , Holly E. Gerhard , Lilli I. Jonsson
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
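A minimal sketch of the pattern-detection step, assuming the pattern of interest is a sustained rise in pupil diameter above a running baseline; the window sizes and dilation ratio are illustrative parameters, not values from the patent.

```python
import statistics

def detect_interest(pupil_diameters_mm: list, baseline_window: int = 10,
                    dilation_ratio: float = 1.15) -> bool:
    """Flag interest when recent pupil diameter rises a given ratio above the
    running baseline -- a stand-in for the patent's pattern detection."""
    if len(pupil_diameters_mm) <= baseline_window:
        return False
    baseline = statistics.mean(pupil_diameters_mm[:baseline_window])
    recent = statistics.mean(pupil_diameters_mm[-3:])
    return recent >= baseline * dilation_ratio

# Illustrative stream: diameter drifts around 3 mm, then dilates while the user
# looks at a piece of displayed content; once the detector fires, the device
# could initiate an interaction (e.g., surface more detail about that content).
samples = [3.0, 3.1, 2.9, 3.0, 3.05, 2.95, 3.0, 3.1, 3.0, 2.9, 3.4, 3.6, 3.7]
print(detect_interest(samples))  # True
```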
-
Publication Number: US10768698B1
Publication Date: 2020-09-08
Application Number: US16015858
Application Date: 2018-06-22
Applicant: Apple Inc.
Inventor: Jae Hwang Lee , Avi Bar-Zeev , Fletcher R. Rothkopf
Abstract: In one implementation, a method includes: synthesizing an AR/VR content stream by embedding a plurality of glints provided for eye tracking into one or more content frames of the AR/VR content stream; displaying, via the one or more AR/VR displays, the AR/VR content stream to a user of the HMD; obtaining, via the image sensor, light intensity data corresponding to the one or more content frames of the AR/VR content stream that include the plurality of glints, wherein the light intensity data includes a projection of an eye of the user of the HMD having projected thereon the plurality of glints; and determining an orientation of the eye of the user of the HMD based on the light intensity data.
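A rough sketch of the final step, determining eye orientation from the embedded glints, assuming a simple pupil-centre / corneal-reflection offset model; the normalized coordinates and the calibration gain are made-up assumptions, and the patent's actual estimation from light intensity data may differ.

```python
def estimate_gaze_angle(pupil_center, glint_centers):
    """Rough pupil-centre / corneal-reflection estimate: the offset between the
    pupil centre and the mean glint reflection maps to a gaze direction.
    Coordinates are in normalized image space (0..1); the gain is an
    illustrative calibration constant, not anything specified by the patent."""
    mean_glint_x = sum(g[0] for g in glint_centers) / len(glint_centers)
    mean_glint_y = sum(g[1] for g in glint_centers) / len(glint_centers)
    dx = pupil_center[0] - mean_glint_x
    dy = pupil_center[1] - mean_glint_y
    degrees_per_unit = 40.0  # illustrative calibration gain
    return (dx * degrees_per_unit, dy * degrees_per_unit)

# Four glints embedded in the displayed frame reflect off the cornea; the pupil
# sits slightly up and to the right of their centroid, so the eye looks up-right.
glints = [(0.45, 0.45), (0.55, 0.45), (0.45, 0.55), (0.55, 0.55)]
print(estimate_gaze_angle((0.53, 0.47), glints))  # (~1.2, ~-1.2) degrees
```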
-
Publication Number: US20200098188A1
Publication Date: 2020-03-26
Application Number: US16577310
Application Date: 2019-09-20
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev , Golnaz Abdollahian , Devin William Chalmers , David H. Y. Huang
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
-
Publication Number: US20250060821A1
Publication Date: 2025-02-20
Application Number: US18935873
Application Date: 2024-11-04
Applicant: Apple Inc.
Inventor: Grant H. Mulliken , Avi Bar-Zeev , Devin W. Chalmers , Fletcher R. Rothkopf , Holly Gerhard , Lilli I. Jonsson
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
-
Publication Number: US12120493B2
Publication Date: 2024-10-15
Application Number: US18220982
Application Date: 2023-07-12
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev
CPC classification number: H04R3/04 , G02B27/0172 , H04R1/028 , H04R1/10 , G06T19/006
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
-