Detecting specular surfaces
    Granted patent

    Publication number: US12073533B1

    Publication date: 2024-08-27

    Application number: US17820442

    Filing date: 2022-08-17

    Applicant: Apple Inc.

    CPC classification number: G06T3/60 G06V10/60 G06T2207/30244

    Abstract: Identifying a specular surface, such as a mirror, in a captured scene includes extracting, from one or more images of the scene, a set of natural features and generating, from the image(s), a set of synthesized “mirrored” features. One or more correspondences may be determined between the set of natural features in the image and the set of synthesized mirrored features. A first set of features is identified, based on the determined one or more correspondences, as representing a specular surface (e.g., a mirror) located in the scene, and then a geometry and/or location of the specular surface within the scene may be determined. For example, in some embodiments, whichever feature of a corresponding pair is determined to be farther away from the device that captured the image(s) of the scene may be determined to be the feature lying on the specular surface.
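
The last step of the abstract lends itself to a small sketch. The feature layout (image x coordinate, descriptor vector, per-feature depth), the flip about the image centre, and the brute-force descriptor matching below are illustrative assumptions, not details taken from the patent:

```python
def mirror_features(features, image_width):
    """Synthesize 'mirrored' features by flipping x about the image centre."""
    return [{**f, "x": image_width - 1 - f["x"]} for f in features]

def match(natural, mirrored, max_dist=1.0):
    """Pair each natural feature with the closest mirrored feature by
    squared-L2 descriptor distance (brute force)."""
    pairs = []
    for a in natural:
        best, best_d = None, max_dist
        for b in mirrored:
            if b["id"] == a["id"]:   # don't match a feature to its own flip
                continue
            d = sum((p - q) ** 2 for p, q in zip(a["desc"], b["desc"]))
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            pairs.append((a, best))
    return pairs

def on_mirror_feature(pair):
    """Of a corresponding pair, treat the feature farther from the camera
    as the one lying on the specular surface."""
    a, b = pair
    return a if a["depth"] >= b["depth"] else b
```

Under these assumptions, the feature with the larger depth in each matched pair is flagged as lying on the mirror.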

    Position estimation based on eye gaze

    Publication number: US11170521B1

    Publication date: 2021-11-09

    Application number: US16532083

    Filing date: 2019-08-05

    Applicant: Apple Inc.

    Abstract: In an exemplary process for determining a position of an object in a computer-generated reality environment using an eye gaze, a user uses their eyes to interact with user interface objects displayed on an electronic device. A first direction of gaze is determined for a first eye of the user detected via one or more cameras, and a second direction of gaze is determined for a second eye of the user detected via the one or more cameras. A convergence point of the first and second directions of gaze is determined, and a distance between a position of the user and a position of an object in the computer-generated reality environment is determined based on the convergence point. A task is performed based on the determined distance between the position of the user and the position of the object in the computer-generated reality environment.
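
The convergence computation described above is classical ray geometry. A minimal sketch, assuming each eye is reported as an origin and a gaze direction in a shared 3D coordinate frame (the vector helpers and the midpoint-of-shortest-segment rule are illustrative choices):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def convergence_point(p1, d1, p2, d2):
    """Closest point between the gaze rays p1 + s*d1 and p2 + t*d2,
    taken as the midpoint of the shortest connecting segment."""
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel gaze rays: no convergence
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, s))
    q2 = add(p2, scale(d2, t))
    return scale(add(q1, q2), 0.5)

def gaze_distance(p1, d1, p2, d2):
    """Distance from the midpoint between the eyes to the convergence point."""
    cp = convergence_point(p1, d1, p2, d2)
    if cp is None:
        return None
    mid = scale(add(p1, p2), 0.5)
    return dot(sub(cp, mid), sub(cp, mid)) ** 0.5
```

For example, two eyes 6 cm apart both fixating a point 1 m straight ahead yield a gaze distance of 1.0 m.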

    Method of Providing Image Feature Descriptors

    Publication number: US20190362179A1

    Publication date: 2019-11-28

    Application number: US16531678

    Filing date: 2019-08-05

    Applicant: Apple Inc.

    Abstract: A method of providing a set of feature descriptors configured to be used in matching an object in a camera image is provided. The method includes: a) providing at least two images of a first object; b) extracting, in at least two of the images, at least one feature from the respective image; c) providing at least one descriptor for each extracted feature, and storing the descriptors in a first set of descriptors; d) matching descriptors in the first set of descriptors; e) computing a score parameter based on the result of the matching process; f) selecting at least one descriptor based on its score parameter; g) adding the selected descriptor(s) to a second set of descriptors; and h) updating the score parameter of descriptors in the first set based on the selection process and on the result of the matching process.
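
Steps d) through h) suggest a greedy, score-driven selection loop. The sketch below assumes real-valued descriptors, a squared-distance match threshold, and a "most-matched first" selection rule; all of these are illustrative choices, not the patent's specifics:

```python
def matches(a, b, thresh=0.1):
    """Two descriptors match when their squared distance is below a threshold."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) < thresh

def select_descriptors(first_set, k):
    """Greedily move the k most-matched descriptors into a second set,
    penalizing scores so one cluster is not selected repeatedly."""
    # steps d-e: score = how many *other* descriptors each one matches
    scores = [sum(1 for j, b in enumerate(first_set)
                  if i != j and matches(a, b))
              for i, a in enumerate(first_set)]
    second_set, used = [], set()
    for _ in range(k):
        # steps f-g: select the best-scoring remaining descriptor
        i = max((i for i in range(len(first_set)) if i not in used),
                key=lambda i: scores[i])
        second_set.append(first_set[i])
        used.add(i)
        # step h: update scores of descriptors that matched the selection
        for j in range(len(first_set)):
            if j not in used and matches(first_set[i], first_set[j]):
                scores[j] -= 1
    return second_set
```

With three near-identical descriptors and one outlier, the loop selects from the dense cluster first and ignores the outlier.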

    Tracking and drift correction
    Granted patent

    Publication number: US12008151B2

    Publication date: 2024-06-11

    Application number: US17314130

    Filing date: 2021-05-07

    Applicant: Apple Inc.

    Abstract: Some implementations provide improved user interfaces for interacting with a virtual environment. The virtual environment is presented by a display of a first device having an image sensor. The first device uses the image sensor to determine a relative position and orientation of a second device based on a marker displayed on a display of the second device. The first device uses the determined relative position of the second device to display a representation of the second device including virtual content in place of the marker.
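
Under a pinhole camera model, a marker of known physical size displayed by the second device is enough for the first device to estimate relative position; the following is a rough sketch with assumed focal length and principal point (full orientation recovery would additionally use the layout of the marker's corners):

```python
def marker_distance(marker_size_m, pixel_width, focal_px):
    """Distance to the marker by similar triangles: z = f * W / w."""
    return focal_px * marker_size_m / pixel_width

def marker_offset(cx, cy, z, focal_px, principal=(320.0, 240.0)):
    """Back-project the marker centre to a 3D offset in the camera frame.
    The principal point default is an assumption for illustration."""
    x = (cx - principal[0]) * z / focal_px
    y = (cy - principal[1]) * z / focal_px
    return (x, y, z)
```

For example, a 10 cm marker imaged 100 px wide by a camera with a 500 px focal length is 0.5 m away; the virtual representation can then be rendered at that offset in place of the marker.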

    Object detection using multiple three dimensional scans

    Publication number: US11580652B2

    Publication date: 2023-02-14

    Application number: US17326559

    Filing date: 2021-05-21

    Applicant: Apple Inc.

    Abstract: One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans with one another and use the transform to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
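
Once the inter-scan transform is known, merging the second scan into the first scan's frame is a rigid-body mapping. A sketch assuming the rotation R and translation t have already been estimated by registering the two scans (the estimation step itself is not shown):

```python
def apply_transform(R, t, points):
    """Map scan-B points into scan A's coordinate system: p' = R @ p + t.
    R is a 3x3 rotation given as nested lists; t is a 3-vector."""
    out = []
    for p in points:
        out.append(tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                         for i in range(3)))
    return out
```

For instance, with R a 90-degree rotation about z and t = (1, 0, 0), the scan-B point (1, 0, 0) lands at (1, 1, 0) in scan A's coordinate system, and both scans' points can then be pooled into one 3D model.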

    Augmented devices
    Granted patent

    Publication number: US11379033B2

    Publication date: 2022-07-05

    Application number: US17019856

    Filing date: 2020-09-14

    Applicant: Apple Inc.

    Abstract: Implementations use a first device (e.g., an HMD) to provide a CGR environment that augments the input and output capabilities of a second device, e.g., a laptop, smart speaker, etc. In some implementations, the first device communicates with a second device in its proximate physical environment to exchange input or output data. For example, an HMD may capture an image of a physical environment that includes a laptop. The HMD may detect the laptop, send a request for the laptop's content, receive content from the laptop (e.g., the content that the laptop is currently displaying and additional content), identify the location of the laptop, and display a virtual object with the received content in the CGR environment on or near the laptop. The size, shape, orientation, or position of the virtual object (e.g., a virtual monitor or monitor extension) may also be configured to provide a better user experience.
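
The detect/request/receive flow can be pictured as a small request-response exchange between the two devices. The message names and fields below are invented for illustration; the patent does not specify a wire format:

```python
from dataclasses import dataclass

@dataclass
class ContentRequest:
    device_id: str          # which nearby device's content is wanted

@dataclass
class ContentResponse:
    device_id: str
    displayed: str          # what the device is currently showing
    additional: str         # extra content that did not fit on its screen

def handle_request(req, device_state):
    """Second device answers with its current and overflow content,
    which the HMD can render on or near the physical device."""
    displayed, additional = device_state[req.device_id]
    return ContentResponse(req.device_id, displayed, additional)
```

The "additional" field is what lets the HMD extend the laptop's display with a virtual monitor showing content the physical screen has no room for.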

    Systems and Methods for Providing Personalized Saliency Models

    Publication number: US20220092331A1

    Publication date: 2022-03-24

    Application number: US17448456

    Filing date: 2021-09-22

    Applicant: Apple Inc.

    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
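
The update-and-share loop can be sketched by treating a saliency model as a flat weight vector and the user's reaction as a per-weight signal. The linear blending rules below are illustrative assumptions, not the patent's method:

```python
def personalize(global_model, reaction, rate=0.1):
    """Device-side: move the global model toward the user's observed reaction
    to form the personalized second model."""
    return [g + rate * (r - g) for g, r in zip(global_model, reaction)]

def merge_into_global(global_model, personal_model, weight=0.05):
    """Server-side: fold a shared portion of a personal model back into
    the global model."""
    return [g + weight * (p - g) for g, p in zip(global_model, personal_model)]
```

A small rate keeps the personalized model close to the global prior, while a small server-side weight keeps any single user from dominating the shared model.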

    TRACKING AND DRIFT CORRECTION
    Patent application

    Publication number: US20210263584A1

    Publication date: 2021-08-26

    Application number: US17314130

    Filing date: 2021-05-07

    Applicant: Apple Inc.

    Abstract: Some implementations provide improved user interfaces for interacting with a virtual environment. The virtual environment is presented by a display of a first device having an image sensor. The first device uses the image sensor to determine a relative position and orientation of a second device based on a marker displayed on a display of the second device. The first device uses the determined relative position of the second device to display a representation of the second device including virtual content in place of the marker.
