LATE WARPING TO MINIMIZE LATENCY OF MOVING OBJECTS

    Publication Number: US20220375026A1

    Publication Date: 2022-11-24

    Application Number: US17518828

    Filing Date: 2021-11-04

    Applicant: Snap Inc.

    Abstract: A method for minimizing latency of moving objects in an augmented reality (AR) display device is described. In one aspect, the method includes determining an initial pose of a visual tracking device, identifying an initial location of an object in an image that is generated by an optical sensor of the visual tracking device, the image corresponding to the initial pose of the visual tracking device, rendering virtual content based on the initial pose and the initial location of the object, retrieving an updated pose of the visual tracking device, tracking an updated location of the object in an updated image that corresponds to the updated pose, and applying a time warp transformation to the rendered virtual content based on the updated pose and the updated location of the object to generate transformed virtual content.
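
    The abstract above describes rendering virtual content against an initial pose and object location, then correcting the already-rendered frame with a time warp once a newer pose and object location are available. Below is a minimal sketch of that correction step, assuming a rotation-only pose warp composed with a residual 2D offset for the tracked object; the intrinsics matrix K, the world-to-camera rotation convention, and the pixel coordinates are illustrative assumptions, not details taken from the patent.

        # Minimal late-warp sketch (not the patented implementation): correct an
        # already-rendered frame for the latest head pose and tracked-object motion.
        import numpy as np
        import cv2

        def late_warp(rendered, K, R_initial, R_updated, obj_initial_px, obj_updated_px):
            """Warp rendered virtual content toward the latest pose and object
            location just before display. R_* are world-to-camera rotations."""
            # Rotation-only reprojection homography: H = K * R_delta * K^-1.
            R_delta = R_updated @ R_initial.T
            H_pose = K @ R_delta @ np.linalg.inv(K)

            # Residual 2D shift of the tracked object after the pose correction.
            proj = H_pose @ np.array([obj_initial_px[0], obj_initial_px[1], 1.0])
            proj = proj[:2] / proj[2]
            dx, dy = np.asarray(obj_updated_px, dtype=float) - proj

            # Compose the object offset with the pose homography and warp the frame.
            T_obj = np.array([[1.0, 0.0, dx],
                              [0.0, 1.0, dy],
                              [0.0, 0.0, 1.0]])
            h, w = rendered.shape[:2]
            return cv2.warpPerspective(rendered, T_obj @ H_pose, (w, h))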

    Collaborative augmented reality eyewear with ego motion alignment

    Publication Number: US12299933B2

    Publication Date: 2025-05-13

    Application Number: US18442772

    Filing Date: 2024-02-15

    Applicant: Snap Inc.

    Abstract: Eyewear providing an interactive augmented reality experience between two eyewear devices by using alignment between respective 6DOF trajectories, also referred to herein as ego motion alignment. An eyewear device of user A and an eyewear device of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience. This enables sharing common three-dimensional content between multiple eyewear users without using or aligning the eyewear devices to common image content such as a marker, which is a more lightweight solution with reduced computational burden on a processor. An inertial measurement unit may also be used to align the eyewear devices.
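
    The abstract describes aligning the two devices' 6DOF trajectories so that shared content appears in a common frame. A minimal sketch of one way to recover such an alignment is shown below: a Kabsch/Umeyama-style rigid fit between device B's positions as tracked by device A and B's own reported positions. The choice of this particular solver, and the function and argument names, are assumptions for illustration; the abstract only states that the trajectories are aligned.

        # Rigid-alignment sketch (assumed approach, not taken from the patent):
        # estimate (R, t) mapping device B's coordinate frame into device A's.
        import numpy as np

        def align_frames(p_b_in_a, p_b_in_b):
            """p_b_in_a: Nx3 positions of device B as tracked by device A.
            p_b_in_b: Nx3 positions device B reports in its own frame."""
            p_b_in_a = np.asarray(p_b_in_a, dtype=float)
            p_b_in_b = np.asarray(p_b_in_b, dtype=float)

            mu_a = p_b_in_a.mean(axis=0)
            mu_b = p_b_in_b.mean(axis=0)

            # Cross-covariance of the centered point sets (Kabsch algorithm).
            H = (p_b_in_b - mu_b).T @ (p_b_in_a - mu_a)
            U, _, Vt = np.linalg.svd(H)

            # Closest proper rotation; the diagonal term guards against reflections.
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_a - R @ mu_b
            return R, t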

    VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)

    Publication Number: US20240223715A1

    Publication Date: 2024-07-04

    Application Number: US18606150

    Filing Date: 2024-03-15

    Applicant: Snap Inc.

    CPC classification number: H04N5/2226 H04N23/45 H04N23/672

    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
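
    The triangulation step in the abstract reduces, for a rectified stereo pair, to depth = focal_length × baseline / disparity. The sketch below illustrates that relationship; the focal length, baseline, and pixel coordinates in the example are made-up values, not parameters from the patent.

        # Stereo-triangulation sketch: depth from the disparity of matched features
        # across two horizontally displaced, rectified optical sensors.
        import numpy as np

        def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
            """Return depth in meters for matched features in a rectified stereo pair."""
            disparity = np.asarray(x_left_px, dtype=float) - np.asarray(x_right_px, dtype=float)
            disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)  # guard divide-by-zero
            return focal_px * baseline_m / disparity

        # Example: a feature at x=640 in the left image and x=600 in the right image,
        # with a 500 px focal length and a 6 cm baseline, is estimated at 0.75 m.
        print(depth_from_disparity(640, 600, 500.0, 0.06))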

    Varied depth determination using stereo vision and phase detection auto focus (PDAF)

    Publication Number: US11722630B2

    Publication Date: 2023-08-08

    Application Number: US17746292

    Filing Date: 2022-05-17

    Applicant: Snap Inc.

    CPC classification number: H04N5/2226 H04N23/45 H04N23/672

    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.

    VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)

    Publication Number: US20230239423A1

    Publication Date: 2023-07-27

    Application Number: US18129009

    Filing Date: 2023-03-30

    Applicant: Snap Inc.

    CPC classification number: H04N5/2226 H04N23/45 H04N23/672

    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.

    Late warping to minimize latency of moving objects

    Publication Number: US12067693B2

    Publication Date: 2024-08-20

    Application Number: US17518828

    Filing Date: 2021-11-04

    Applicant: Snap Inc.

    CPC classification number: G06T3/18 G06T13/40 G06T13/80 G06T15/205 G06T2210/44

    Abstract: A method for minimizing latency of moving objects in an augmented reality (AR) display device is described. In one aspect, the method includes determining an initial pose of a visual tracking device, identifying an initial location of an object in an image that is generated by an optical sensor of the visual tracking device, the image corresponding to the initial pose of the visual tracking device, rendering virtual content based on the initial pose and the initial location of the object, retrieving an updated pose of the visual tracking device, tracking an updated location of the object in an updated image that corresponds to the updated pose, and applying a time warp transformation to the rendered virtual content based on the updated pose and the updated location of the object to generate transformed virtual content.
