Coding scheme for depth data
    Granted patent

    Publication number: US12225178B2

    Publication date: 2025-02-11

    Application number: US17641843

    Application date: 2020-09-10

    Inventor: Bart Kroon

    Abstract: Methods of encoding and decoding depth data are disclosed. In an encoding method, depth values and occupancy data are both encoded into a depth map. The method adapts how the depth values and occupancy data are converted to map values in the depth map. For example, it may adaptively select a threshold, above or below which all values represent unoccupied pixels. By adapting how the depth and occupancy are encoded, based on analysis of the depth values, the method can enable more effective encoding and transmission of the depth data and occupancy data. The encoding method outputs metadata defining the adaptive encoding. This metadata can be used by a corresponding decoding method, to decode the map values. Also provided are an encoder and a decoder for depth data, and a corresponding bitstream, comprising a depth map and its associated metadata.
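
    The abstract describes the scheme only at a high level. As a rough illustration, the Python sketch below packs depth and occupancy into a single map using a hypothetical fixed low range for unoccupied pixels; the threshold, function names and value range are assumptions, not the patented mapping.

        import numpy as np

        def encode_depth_map(depth, occupied, threshold=16, max_val=1023):
            # Map values <= threshold mean "unoccupied"; occupied pixels get their
            # depth rescaled into (threshold, max_val]. The chosen threshold and
            # depth range are emitted as metadata for the decoder.
            depth = np.asarray(depth, dtype=np.float64)
            lo, hi = depth[occupied].min(), depth[occupied].max()
            scale = (max_val - threshold - 1) / max(hi - lo, 1e-9)
            depth_map = np.zeros(depth.shape, dtype=np.uint16)       # 0 = unoccupied
            depth_map[occupied] = (threshold + 1 +
                                   np.round((depth[occupied] - lo) * scale)).astype(np.uint16)
            metadata = {"threshold": threshold, "max_val": max_val,
                        "depth_min": float(lo), "depth_max": float(hi)}
            return depth_map, metadata

        def decode_depth_map(depth_map, metadata):
            # Inverse mapping, driven entirely by the transmitted metadata.
            t, m = metadata["threshold"], metadata["max_val"]
            occupied = depth_map > t
            scale = (metadata["depth_max"] - metadata["depth_min"]) / max(m - t - 1, 1)
            depth = metadata["depth_min"] + (depth_map.astype(np.float64) - t - 1) * scale
            return np.where(occupied, depth, 0.0), occupied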

    CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS

    Publication number: US20240340398A1

    Publication date: 2024-10-10

    Application number: US18290838

    Application date: 2022-08-01

    Abstract: A method for transmitting multi-view image frame data. The method comprises obtaining multi-view components representative of a scene generated from a plurality of sensors, wherein each multi-view component corresponds to a sensor and wherein at least one of the multi-view components includes a depth component and at least one of the multi-view components does not include a depth component. A virtual sensor pose is obtained for each sensor in a virtual scene, wherein the virtual scene is a virtual representation of the scene and wherein the virtual sensor pose is a virtual representation of the pose of the sensor in the scene when generating the corresponding multi-view component. Sensor parameter metadata is generated for the multi-view components, wherein the sensor parameter metadata contains extrinsic parameters for the multi-view components and the extrinsic parameters contain at least the virtual sensor pose of a sensor for each of the corresponding multi-view components. The extrinsic parameters enable the generation of additional depth components by warping the depth components based on their corresponding virtual sensor pose and a target position in the virtual scene. The multi-view components and the sensor parameter metadata are then transmitted.
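
    The role of the extrinsic parameters can be made concrete with a small sketch. The Python fragment below (hypothetical function and parameter names; 4x4 camera-to-world poses and a 3x3 intrinsic matrix are assumed) forward-warps a depth component from its virtual sensor pose to a target pose; splatting and occlusion handling are omitted.

        import numpy as np

        def warp_depth(depth, K, src_pose, dst_pose):
            # Back-project every pixel of the source depth map, transform the
            # resulting 3D points from the source pose to the target pose, and
            # re-project them, yielding target pixel coordinates and target depth.
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
            pts_src = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)           # source camera frame
            pts_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])        # homogeneous
            pts_dst = np.linalg.inv(dst_pose) @ (src_pose @ pts_h)              # target camera frame
            proj = K @ pts_dst[:3]
            uv = (proj[:2] / proj[2:3]).T.reshape(h, w, 2)                      # target pixel coords
            return uv, pts_dst[2].reshape(h, w)                                 # warped depth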

    Changing video tracks in immersive videos

    Publication number: US12069334B2

    Publication date: 2024-08-20

    Application number: US18038719

    Application date: 2021-12-03

    CPC classification number: H04N21/437 H04N21/234363 H04N21/431 H04N21/816

    Abstract: A method for transitioning from a first set of video tracks, VT1, to a second set of video tracks, VT2, when rendering a multi-track video, wherein each video track has a corresponding rendering priority. The method comprises receiving an instruction to transition from the first set of video tracks VT1 to the second set of video tracks VT2, obtaining the video tracks VT2 and, if the video tracks VT2 are different from the video tracks VT1, applying a lowering function to the rendering priority of one or more of the video tracks in the first set of video tracks VT1 and/or an increase function to the rendering priority of one or more video tracks in the second set of video tracks VT2. The lowering function and the increase function respectively decrease and increase the rendering priority over time. The rendering priority is used to determine the weighting of a video track and/or elements of a video track used to render the multi-track video.
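
    A minimal Python sketch of the lowering and increase functions is given below (a linear ramp over the transition duration is assumed; the actual functions, names and priority scale are not specified in the abstract).

        def transition_priorities(t, duration, vt1_priorities, vt2_priorities):
            # Linearly lower the rendering priority of the outgoing tracks (VT1)
            # and raise that of the incoming tracks (VT2) as the transition
            # progresses from t = 0 to t = duration.
            a = min(max(t / duration, 0.0), 1.0)                 # progress in [0, 1]
            lowered = {tid: p * (1.0 - a) for tid, p in vt1_priorities.items()}
            raised = {tid: p * a for tid, p in vt2_priorities.items()}
            return lowered, raised

        # Example: halfway through a 2-second transition the VT1 priorities are
        # halved and the VT2 priorities are at half of their target values.
        lowered, raised = transition_priorities(1.0, 2.0, {"track_a": 1.0}, {"track_b": 1.0})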

    Apparatus and method of generating an image signal

    Publication number: US11823323B2

    Publication date: 2023-11-21

    Application number: US17435066

    Application date: 2020-02-29

    Inventor: Bart Kroon

    CPC classification number: G06T15/205 G06T17/005 G06T2210/61

    Abstract: An image source (407) provides an image divided into segments of different sizes with only a subset of these comprising image data. A metadata generator (409) generates metadata structured in accordance with a tree data structure where each node is linked to a segment of the image. Each node is a branch node linking the parent node to child nodes linked to segments that are subdivisions of the parent node, or a leaf node which has no children. A leaf node is either an unused leaf node linked to a segment for which the first image comprises no image data or a used leaf node linked to a segment for which the first image comprises image data. The metadata indicates whether each node is a branch node, a used leaf node, or an unused leaf node. An image signal generator (405) generates an image signal comprising the image data of the first image and the metadata.
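
    The tree structure lends itself to a quadtree-style traversal. The Python sketch below (hypothetical; a square image whose side is a power of two and a boolean mask of pixels carrying image data are assumed) emits one node flag per segment in depth-first order.

        from enum import Enum

        class Node(Enum):
            BRANCH = 0        # segment is subdivided into child segments
            USED_LEAF = 1     # image comprises data for this segment
            UNUSED_LEAF = 2   # image comprises no data for this segment

        def build_metadata(mask, x, y, size, min_size, out):
            # Emit node flags depth-first; each BRANCH is followed by its four
            # children (half-size sub-segments).
            region = mask[y:y + size, x:x + size]
            if not region.any():
                out.append(Node.UNUSED_LEAF)
            elif region.all() or size <= min_size:
                out.append(Node.USED_LEAF)
            else:
                out.append(Node.BRANCH)
                half = size // 2
                for dy in (0, half):
                    for dx in (0, half):
                        build_metadata(mask, x + dx, y + dy, half, min_size, out)
            return out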

    CODING SCHEME FOR DEPTH DATA
    Patent application

    Publication number: US20220394229A1

    Publication date: 2022-12-08

    Application number: US17641843

    Application date: 2020-09-10

    Inventor: Bart Kroon

    Abstract: Methods of encoding and decoding depth data are disclosed. In an encoding method, depth values and occupancy data are both encoded into a depth map. The method adapts how the depth values and occupancy data are converted to map values in the depth map. For example, it may adaptively select a threshold, above or below which all values represent unoccupied pixels. By adapting how the depth and occupancy are encoded, based on analysis of the depth values, the method can enable more effective encoding and transmission of the depth data and occupancy data. The encoding method outputs metadata defining the adaptive encoding. This metadata can be used by a corresponding decoding method, to decode the map values. Also provided are an encoder and a decoder for depth data, and a corresponding bitstream, comprising a depth map and its associated metadata.

    Autostereoscopic display device
    Granted patent

    Publication number: US11314103B2

    Publication date: 2022-04-26

    Application number: US16898825

    Application date: 2020-06-11

    Abstract: An autostereoscopic display device uses an electroluminescent display. A set of pixels is provided beneath view forming elements (such as lenses), with a plurality of pixels across the view forming element width direction. The pixels are arranged with at least two different angular orientations with respect to the substrate. The out-coupling performance is improved by arranging for the light emission direction to be substantially perpendicular to the desired emitting surface of the view forming elements.

    Apparatus and method for generating an image

    Publication number: US11218690B2

    Publication date: 2022-01-04

    Application number: US16623838

    Application date: 2018-06-21

    Abstract: An apparatus for generating an image comprises a receiver (101) which receives 3D image data providing an incomplete representation of a scene. A view vector source (103) provides a rendering view vector indicative of a rendering viewpoint for the image. A renderer (105) renders a first region of an intermediate image for the rendering view vector based on image data from the 3D image data. An extrapolator (107) extrapolates the 3D image data into a second, adjoining region of the intermediate image, for which the 3D image data comprises no image data. A blur processor (109) generates the image by applying a spatially varying blur to the intermediate image, where the degree of blurring is higher in a transitional region between the first region and the second region than in an internal area of the first region.
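
    As an illustration of the spatially varying blur, the Python sketch below (hypothetical names; SciPy's distance transform and Gaussian filter are used as stand-ins) blends the intermediate image with a blurred copy, with the blur weight peaking in a band around the boundary between the rendered and extrapolated regions.

        import numpy as np
        from scipy.ndimage import distance_transform_edt, gaussian_filter

        def transition_blur(image, rendered_mask, band=16, sigma=4.0):
            # image: H x W x C float array; rendered_mask: H x W boolean array.
            # Distance (in pixels) of every pixel to the boundary between the
            # rendered (first) region and the extrapolated (second) region.
            dist = np.where(rendered_mask,
                            distance_transform_edt(rendered_mask),
                            distance_transform_edt(~rendered_mask))
            weight = np.clip(1.0 - dist / band, 0.0, 1.0)        # 1 at the boundary
            blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
            return weight[..., None] * blurred + (1.0 - weight[..., None]) * image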

    Image generation from video
    Granted patent

    Publication number: US10931928B2

    Publication date: 2021-02-23

    Application number: US16497498

    Application date: 2018-03-02

    Abstract: An apparatus comprising a store (101) for storing route data for a set of routes in an N-dimensional space, where each route of the set of routes is associated with a video item including frames comprising both image and depth information. An input (105) receives a viewer position indication and a selector (107) selects a first route of the set of routes in response to a selection criterion that depends on a distance metric between the viewer position indication and the positions of the routes of the set of routes. A retriever (103, 109) retrieves a first video item associated with the first route from a video source (203). An image generator (111) generates at least one view image for the viewer position indication from a first set of frames from the first video item. In the system, the selection criterion is biased towards the currently selected route relative to other routes of the set of routes.
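
    The biased selection can be sketched in a few lines of Python (hypothetical names; routes are assumed to be arrays of N-dimensional waypoints and the distance metric is the minimum Euclidean distance to the viewer position).

        import numpy as np

        def select_route(viewer_pos, routes, current_id=None, bias=0.8):
            # Pick the route nearest to the viewer, but scale the distance of the
            # currently selected route down by `bias` so that switching only
            # happens when another route is clearly closer (hysteresis).
            best_id, best_cost = None, float("inf")
            for route_id, waypoints in routes.items():
                d = np.min(np.linalg.norm(np.asarray(waypoints) - viewer_pos, axis=1))
                cost = d * (bias if route_id == current_id else 1.0)
                if cost < best_cost:
                    best_id, best_cost = route_id, cost
            return best_id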
