SYSTEM AND METHOD FOR TARGET PLANE DETECTION AND SPACE ESTIMATION

    Publication Number: US20220230398A1

    Publication Date: 2022-07-21

    Application Number: US17346105

    Application Date: 2021-06-11

    Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
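
    The plane-fitting step described here can be pictured as a RANSAC fit restricted to the detection area. The sketch below is a minimal illustration of that idea, not the patent's implementation; the function names, thresholds, and detection-area test are all assumptions.

```python
# Minimal RANSAC plane fit over points inside a detection area.
# Illustrative sketch only; names and thresholds are not from the patent.
import numpy as np

def fit_plane_ransac(points, iters=200, inlier_tol=0.01, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud via RANSAC."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate sample: the three points are collinear
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Restrict the fit to a spatially defined subset of the cloud, mirroring the
# abstract's "first detection area" (this box test is hypothetical).
cloud = np.random.rand(5000, 3)
in_area = (np.abs(cloud[:, 0] - 0.5) < 0.25) & (np.abs(cloud[:, 2] - 0.5) < 0.25)
model, inliers = fit_plane_ransac(cloud[in_area])
```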

    SYSTEM AND METHOD FOR DEPTH MAP GUIDED IMAGE HOLE FILLING

    Publication Number: US20220165041A1

    Publication Date: 2022-05-26

    Application Number: US17463037

    Application Date: 2021-08-31

    Abstract: An electronic device that reprojects two-dimensional (2D) images to three-dimensional (3D) images includes a memory configured to store instructions, and a processor configured to execute the instructions to: propagate an intensity for at least one pixel of an image based on a depth guide of neighboring pixels of the at least one pixel, wherein the at least one pixel is considered a hole during 2D to 3D image reprojection; propagate a color for the at least one pixel based on an intensity guide of the neighboring pixels of the at least one pixel; and compute at least one weight for the at least one pixel based on the intensity and color propagation.
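
    The intensity-propagation step might resemble the weighted fill below, where valid neighbors whose depths are closest to the hole pixel's depth contribute most. This is a loose sketch: the patent separates intensity and color propagation and computes per-pixel weights, while the code collapses that into a single illustrative pass with assumed names.

```python
# Illustrative depth-guided fill: propagate intensity into hole pixels from
# valid 4-neighbors, weighted by depth similarity. Assumes each hole pixel
# still carries a reprojected depth value (an assumption of this sketch).
import numpy as np

def fill_holes_depth_guided(intensity, depth, hole_mask, sigma_d=0.05, passes=8):
    img = intensity.astype(float).copy()
    filled = ~hole_mask
    h, w = img.shape
    for _ in range(passes):
        for y in range(h):
            for x in range(w):
                if filled[y, x]:
                    continue
                num = den = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and filled[ny, nx]:
                        # Depth guide: neighbors at similar depth dominate.
                        wgt = np.exp(-(depth[ny, nx] - depth[y, x]) ** 2
                                     / (2 * sigma_d ** 2))
                        num += wgt * img[ny, nx]
                        den += wgt
                if den > 0:
                    img[y, x] = num / den
                    filled[y, x] = True
    return img
```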

    FINAL VIEW GENERATION USING OFFSET AND/OR ANGLED SEE-THROUGH CAMERAS IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

    Publication Number: US20250150566A1

    Publication Date: 2025-05-08

    Application Number: US18664097

    Application Date: 2024-05-14

    Inventor: Yingen Xiong

    Abstract: A method includes identifying a passthrough transformation associated with a VST XR device. The VST XR device includes see-through cameras that are (i) offset from forward axes extending from expected locations of a user's eyes and/or (ii) rotated such that optical axes of the see-through cameras are angled relative to the forward axes. The method also includes obtaining images captured using the see-through cameras, applying the passthrough transformation to the images to generate transformed images, and displaying the transformed images on one or more display panels of the VST XR device. The passthrough transformation is based on (i) a first transformation between see-through camera viewpoints and viewpoint-matched virtual camera viewpoints and (ii) a second transformation that aligns principal points of the see-through cameras and principal points of the one or more display panels.
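
    The two-stage structure of the passthrough transformation can be sketched as a composition of transforms in pixel space. Below, H_view_match stands in for the camera-to-virtual-camera viewpoint transformation and the principal-point alignment is modeled as a pure translation; both forms are assumptions for illustration, not the patent's formulation.

```python
# Compose a passthrough transformation from two stages: (1) see-through
# camera -> viewpoint-matched virtual camera, (2) principal-point alignment
# with the display panel. The 3x3 homography forms are illustrative.
import numpy as np

def principal_point_shift(cx_cam, cy_cam, cx_disp, cy_disp):
    """Pixel-space translation aligning camera and display principal points."""
    T = np.eye(3)
    T[0, 2] = cx_disp - cx_cam
    T[1, 2] = cy_disp - cy_cam
    return T

def passthrough_transform(H_view_match, pp_cam, pp_disp):
    """Compose viewpoint matching with principal-point alignment."""
    return principal_point_shift(*pp_cam, *pp_disp) @ H_view_match

def warp_points(H, pts):
    """Apply a 3x3 homography to (N, 2) pixel coordinates."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    out = homo @ H.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical usage: identity viewpoint match, differing principal points.
H = passthrough_transform(np.eye(3), (320.0, 240.0), (330.0, 250.0))
corners = np.array([[0.0, 0.0], [639.0, 479.0]])
print(warp_points(H, corners))
```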

    DYNAMICALLY-ADAPTIVE PLANAR TRANSFORMATIONS FOR VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

    Publication Number: US20250076969A1

    Publication Date: 2025-03-06

    Application Number: US18670128

    Application Date: 2024-05-21

    Inventor: Yingen Xiong

    Abstract: A method includes obtaining multiple image frames, captured using one or more imaging sensors of a video see-through (VST) extended reality (XR) device while a user's head is at a first head pose, along with depth data associated with the image frames. The method also includes predicting a second head pose of the user's head when rendered images will be displayed. The method further includes projecting at least one of the image frames onto one or more first planes to generate at least one projected image frame. The method also includes transforming the at least one projected image frame from the one or more first planes to one or more second planes corresponding to the second head pose to generate at least one transformed image frame. The method further includes rendering the at least one transformed image frame for presentation on one or more displays of the VST XR device.
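
    Transforming a plane projection from one head pose to another is closely related to the classical plane-induced homography H = K2 (R - t n^T / d) K1^{-1}, shown below as a hedged sketch; the intrinsics, pose delta, and plane parameters are all placeholder values, not the patent's.

```python
# Plane-induced homography between two head poses:
#   H = K2 (R - t n^T / d) K1^{-1}
# for a plane with unit normal n at distance d in the first pose's frame.
# All values below are illustrative placeholders.
import numpy as np

def plane_homography(K1, K2, R, t, n, d):
    """Warp pixels from pose 1 to pose 2 via the plane (n, d)."""
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
theta = np.deg2rad(2.0)                      # small head rotation about y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.01, 0.0, 0.0])               # metres of head translation
n = np.array([0.0, 0.0, 1.0])                # fronto-parallel plane normal
H = plane_homography(K, K, R, t, n, d=2.0)   # plane 2 m in front of the user
```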

    EFFICIENT DEPTH-BASED VIEWPOINT MATCHING AND HEAD POSE CHANGE COMPENSATION FOR VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

    Publication Number: US20240378820A1

    Publication Date: 2024-11-14

    Application Number: US18639261

    Application Date: 2024-04-18

    Inventor: Yingen Xiong

    Abstract: A video see-through (VST) extended reality (XR) device includes a see-through camera configured to capture an image frame of a three-dimensional (3D) scene, a display panel, and at least one processing device. The at least one processing device is configured to obtain the image frame, identify a depth-based transformation in 3D space, transform the image frame into a transformed image frame based on the depth-based transformation, and initiate presentation of the transformed image frame on the display panel. The depth-based transformation provides (i) viewpoint matching between a head pose of the VST XR device when the image frame is captured and a head pose of the VST XR device when the transformed image frame is presented, (ii) parallax correction between the head poses, and (iii) compensation for a change between the head poses.
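
    A minimal form of depth-based viewpoint matching is per-pixel reprojection: unproject each pixel using its depth, apply the head-pose change between capture and display, and project again. The sketch assumes pinhole intrinsics K and a rigid pose delta (R, t); it is not the device's actual pipeline.

```python
# Per-pixel depth-based reprojection: unproject each pixel with its depth,
# apply the capture-to-display head-pose change (R, t), and reproject.
import numpy as np

def reproject(depth, K, R, t):
    """Map every pixel from the capture pose to the display pose."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T            # unproject pixels to rays
    pts = rays * depth.reshape(-1, 1)          # scale rays by measured depth
    pts = pts @ R.T + t                        # apply the head-pose change
    proj = pts @ K.T                           # project into the new view
    return (proj[:, :2] / proj[:, 2:3]).reshape(h, w, 2)

depth = np.full((480, 640), 2.0)               # toy constant-depth scene
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
uv = reproject(depth, K, np.eye(3), np.array([0.005, 0.0, 0.0]))
```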

    GENERATION AND RENDERING OF EXTENDED-VIEW GEOMETRIES IN VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR) SYSTEMS

    Publication Number: US20240257475A1

    Publication Date: 2024-08-01

    Application Number: US18327605

    Application Date: 2023-06-01

    Inventor: Yingen Xiong

    CPC classification number: G06T19/006 G06T3/40 G06T5/20 G06V10/44

    Abstract: A method includes obtaining multiple see-through image frames of an environment around an augmented reality (AR) device using multiple imaging sensors of the AR device. The method also includes generating a depth map based on the see-through image frames and generating a three-dimensional (3D) representation of the environment based on the depth map. The method further includes projecting the 3D representation onto a curved surface, mapping points of the projected 3D representation to multiple virtual view images, and presenting the virtual view images on one or more displays of the AR device. Generating the depth map may include generating an initial depth map using a trained machine learning model and modifying the initial depth map to provide both spatial consistency and temporal consistency in order to generate a refined depth map. The curved surface may include a portion of a cylindrical, spherical, or conical surface.
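
    Projecting the 3D representation onto a curved surface can be illustrated with a cylinder centered on the viewer: each point is pushed along its viewing ray to the cylinder and parameterized by angle and height. The parameterization below is one plausible choice, not necessarily the patent's.

```python
# Project camera-space points onto a viewer-centered cylinder: each point is
# pushed along its viewing ray to the cylinder of the given radius, then
# parameterized by arc length (angle) and height.
import numpy as np

def project_to_cylinder(points, radius=2.0):
    """Map (N, 3) camera-space points to (arc, height) cylinder coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(x, z)                   # angle around the vertical axis
    r = np.sqrt(x**2 + z**2)                   # radial distance from the axis
    y_cyl = radius * y / np.maximum(r, 1e-9)   # ray hits the surface here
    return np.stack([radius * theta, y_cyl], axis=-1)
```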

    MESH TRANSFORMATION WITH EFFICIENT DEPTH RECONSTRUCTION AND FILTERING IN PASSTHROUGH AUGMENTED REALITY (AR) SYSTEMS

    Publication Number: US20240169673A1

    Publication Date: 2024-05-23

    Application Number: US18296302

    Application Date: 2023-04-05

    Inventor: Yingen Xiong

    CPC classification number: G06T17/205 G06T7/50 G06T19/006

    Abstract: A method includes obtaining images of an environment captured by imaging sensors associated with a passthrough AR device and position data and depth data associated with the images. The method also includes generating a point cloud representative of the environment based on the images, position data, and depth data. The method further includes generating a mesh for a specified image. The mesh includes grid points at intersections of mesh lines. The method also includes determining one or more depths of one or more grid points of the mesh. The method further includes transforming the mesh from a viewpoint of a specified imaging sensor that captured the specified image to a user viewpoint of the passthrough AR device based on the depth(s) of the grid point(s). In addition, the method includes rendering a virtual view of the specified image for presentation by the passthrough AR device based on the transformed mesh.
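
    The grid-point transformation can be sketched by unprojecting each mesh grid point with its depth, moving it into the user's viewpoint, and reprojecting; dense pixels would then be warped by interpolating between transformed grid points. All names and the fixed grid step below are illustrative.

```python
# Transform a sparse image-space mesh by grid-point depth: unproject each
# grid point with its depth, move it into the user's viewpoint via (R, t),
# and reproject.
import numpy as np

def transform_mesh(K, R, t, depth, step=32):
    """Return transformed (u, v) positions for mesh grid points."""
    h, w = depth.shape
    gy, gx = np.mgrid[0:h:step, 0:w:step]      # mesh grid-point coordinates
    pix = np.stack([gx, gy, np.ones_like(gx)], -1).reshape(-1, 3).astype(float)
    pts = (pix @ np.linalg.inv(K).T) * depth[gy, gx].reshape(-1, 1)
    pts = pts @ R.T + t                        # camera -> user viewpoint
    proj = pts @ K.T
    return (proj[:, :2] / proj[:, 2:3]).reshape(gy.shape + (2,))
```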

SYSTEM AND METHOD FOR TARGET PLANE DETECTION AND SPACE ESTIMATION

    Publication Number: US11741676B2

    Publication Date: 2023-08-29

    Application Number: US17346105

    Application Date: 2021-06-11

    Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
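
    Complementing the plane-fitting sketch above for this same patent family, the usable-size comparison in this abstract can be illustrated as a bounding-extent test on the plane's inlier points against the object's footprint. The axis-aligned measure is a deliberate simplification and the names are hypothetical.

```python
# Usable-size check before placing a digital object on a detected plane:
# measure the extent of the plane's inlier points and compare it to the
# object's characteristic footprint.
import numpy as np

def usable_extent(plane_points):
    """Per-axis extent of the plane's inlier points (assumed plane-aligned)."""
    return plane_points.max(axis=0) - plane_points.min(axis=0)

def object_fits(plane_points, object_size_xy):
    """True if the object's 2D footprint fits within the usable extent."""
    extent = usable_extent(plane_points)[:2]
    return bool(np.all(extent >= np.asarray(object_size_xy)))
```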

    SYSTEM AND METHOD FOR GENERATING A THREE-DIMENSIONAL PHOTOGRAPHIC IMAGE

    Publication Number: US20230245373A1

    Publication Date: 2023-08-03

    Application Number: US17806029

    Application Date: 2022-06-08

    CPC classification number: G06T15/005 G06T7/50 H04N13/128 H04N13/225

    Abstract: A method includes receiving, from a camera, one or more frames of image data of a scene comprising a background and one or more three-dimensional objects, wherein each frame comprises a raster of pixels of image data; detecting layer information of the scene, wherein the layer information is associated with a depth-based distribution of the pixels in the one or more frames; and determining a multi-layer model for the scene, the multi-layer model comprising a plurality of discrete layers comprising first and second discrete layers, wherein each discrete layer is associated with a unique depth value relative to the camera. The method further includes mapping the pixels to the layers of the plurality of discrete layers; rendering the pixels as a first image of the scene as viewed from a first perspective; and rendering the pixels as a second image of the scene as viewed from a second perspective.
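
    One way to picture the multi-layer model is to quantize the scene's depth distribution into a few discrete layers and assign each pixel to its nearest layer before rendering the two perspectives. Quantile-based layer placement below is an illustrative choice, not the patent's stated method.

```python
# Discrete depth layers from the scene's depth distribution: pick layer
# depths at interior quantiles and map each pixel to its nearest layer.
import numpy as np

def build_layers(depth, n_layers=4):
    """Return the layer depths and a per-pixel layer-index map."""
    qs = np.linspace(0, 1, n_layers + 2)[1:-1]   # interior quantiles
    layer_depths = np.quantile(depth, qs)
    idx = np.abs(depth[..., None] - layer_depths).argmin(axis=-1)
    return layer_depths, idx
```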
