CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS

    Publication No.: US20240340398A1

    Publication Date: 2024-10-10

    Application No.: US18290838

    Filing Date: 2022-08-01

    Abstract: A method for transmitting multi-view image frame data. The method comprises obtaining multi-view components representative of a scene generated from a plurality of sensors, wherein each multi-view component corresponds to a sensor, at least one of the multi-view components includes a depth component, and at least one does not. A virtual sensor pose is obtained for each sensor in a virtual scene, wherein the virtual scene is a virtual representation of the scene and the virtual sensor pose is a virtual representation of the pose of the sensor in the scene when generating the corresponding multi-view component. Sensor parameter metadata is generated for the multi-view components, wherein the sensor parameter metadata contains extrinsic parameters for the multi-view components, and the extrinsic parameters contain at least the virtual sensor pose of the sensor for each corresponding multi-view component. The extrinsic parameters enable the generation of additional depth components by warping the depth components based on their corresponding virtual sensor poses and a target position in the virtual scene. The multi-view components and the sensor parameter metadata are then transmitted.
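
    The depth warping that the extrinsic parameters enable can be illustrated with a minimal sketch: unproject each depth pixel to 3D using the source pose, transform it into the target frame, and re-project. The pinhole model, shared intrinsics, nearest-pixel splatting, and all names here are illustrative assumptions, not the patented method.

```python
import numpy as np

def warp_depth(depth, K, src_pose, dst_pose):
    """Warp a depth map from a source sensor pose to a target pose.

    depth:    (H, W) metric depth map
    K:        (3, 3) pinhole intrinsics (assumed shared by both views)
    src_pose: (4, 4) source camera-to-world matrix
    dst_pose: (4, 4) target camera-to-world matrix
    """
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates, flattened row-major.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    # Unproject to 3D points in the source camera frame.
    rays = np.linalg.inv(K) @ pix
    pts_src = rays * depth.reshape(1, -1)
    pts_src_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    # Source camera -> world -> target camera.
    pts_dst = np.linalg.inv(dst_pose) @ src_pose @ pts_src_h
    z = pts_dst[2]
    proj = K @ pts_dst[:3]
    uu = np.round(proj[0] / z).astype(int)
    vv = np.round(proj[1] / z).astype(int)
    ok = (z > 0) & (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    warped = np.full_like(depth, np.nan)
    # Nearest-pixel splat; a real renderer would z-test to resolve occlusions.
    warped[vv[ok], uu[ok]] = z[ok]
    return warped
```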

    FILE FORMAT WITH VARIABLE DATA
    Invention Publication

    Publication No.: US20230410466A1

    Publication Date: 2023-12-21

    Application No.: US18038035

    Filing Date: 2021-12-08

    CPC classification number: G06V10/72 G06V20/64

    Abstract: A method for storing data representative of virtual objects on a computer storage system. The method comprises storing constant data corresponding to physical properties of the virtual objects which will remain constant when the data is read. The constant data comprises one or more constant elements representative of physical properties of one or more of the virtual objects. The method also comprises storing variable data corresponding to physical properties of the virtual objects which are uncertain at the time of storing the data. The variable data comprises one or more variable elements representative of uncertain physical properties of one or more of the virtual objects, wherein each variable element comprises a range of values and a probability function for the range of values.
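
    A variable element as described, a range of values plus a probability function over that range, can be modelled with a small data structure. This is a hypothetical sketch; the names and the mode-scanning helper are not from the application.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class ConstantElement:
    """A physical property that stays fixed whenever the data is read."""
    name: str
    value: float

@dataclass(frozen=True)
class VariableElement:
    """A property uncertain at storage time: a value range plus a
    probability function defined over that range."""
    name: str
    value_range: Tuple[float, float]
    probability: Callable[[float], float]

    def most_probable(self, steps: int = 101) -> float:
        """Pick the most probable value by scanning the range."""
        lo, hi = self.value_range
        xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
        return max(xs, key=self.probability)
```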

    AN IMAGE SYNTHESIS SYSTEM AND METHOD THEREFOR

    Publication No.: US20220383596A1

    Publication Date: 2022-12-01

    Application No.: US17771900

    Filing Date: 2020-10-23

    Abstract: An image synthesis system comprises receivers (201, 203, 205) receiving: scene data describing at least part of a scene; object data describing a 3D object viewable from a viewing zone having a relative pose with respect to the object; and a view pose in the scene. A pose determiner circuit (207) determines an object pose for the object in the scene in response to the scene data and the view pose, and a view synthesis circuit (209) generates a view image of the object from the object data, the object pose, and the view pose. A circuit (211) determines a viewing region in the scene which corresponds to the viewing zone for the object being at the object pose. The pose determiner circuit (207) determines a distance measure for the view pose relative to the viewing region and changes the object pose if the distance measure meets a criterion, the criterion including a requirement that a distance between the view pose and a pose of the viewing region exceeds a threshold.
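
    The pose-changing criterion can be sketched as follows, reducing poses to 3D positions and modelling the pose change as re-centring the viewing region on the viewer. All names and the specific update rule are illustrative assumptions.

```python
import numpy as np

def maybe_move_object(view_pos, region_center, object_pos, threshold):
    """If the viewer has drifted out of the viewing region (distance above
    the threshold), translate the object by the same offset so the region
    re-centres on the viewer; otherwise leave the object pose unchanged.
    Returns (new_object_pos, changed)."""
    view_pos = np.asarray(view_pos, dtype=float)
    region_center = np.asarray(region_center, dtype=float)
    object_pos = np.asarray(object_pos, dtype=float)
    offset = view_pos - region_center
    if np.linalg.norm(offset) > threshold:
        return object_pos + offset, True
    return object_pos, False
```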

    Image generating apparatus and method therefor

    Publication No.: US11368663B2

    Publication Date: 2022-06-21

    Application No.: US17287540

    Filing Date: 2019-10-23

    Abstract: An apparatus comprises a determiner (305) which determines a first-eye view pose and a second-eye view pose. A receiver (301) receives a reference first-eye image with associated depth values and a reference second-eye image with associated depth values, the reference first-eye image being for a first-eye reference pose and the reference second-eye image being for a second-eye reference pose. A depth processor (311) determines a reference depth value, and modifiers (307) generate modified depth values by reducing a difference between the received depth values and the reference depth value by an amount that depends on the difference between the first- or second-eye view pose and the corresponding first- or second-eye reference pose. A synthesizer (303) synthesizes an output first-eye image for the first-eye view pose by view shifting the reference first-eye image, and an output second-eye image for the second-eye view pose by view shifting the reference second-eye image, based on the modified depth values. The terms first and second may be replaced by left and right, respectively, or vice versa; e.g. the terms first-eye view pose, second-eye view pose, reference first-eye image, and reference second-eye image may be replaced by left-eye view pose, right-eye view pose, reference left-eye image, and reference right-eye image, respectively.
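
    The depth modification can be sketched as scaling each depth value's offset from the reference depth, with the scale shrinking as the view pose moves away from the reference pose; this trades parallax for fewer de-occlusion artefacts. The exponential falloff is an assumption; the abstract only requires the reduction to grow with the pose difference.

```python
import math

def modify_depth(depths, ref_depth, pose_diff, falloff=1.0):
    """Pull depth values toward ref_depth as pose_diff grows.

    scale = 1 at zero pose difference (full parallax preserved) and
    decays toward 0 (scene flattens to the reference depth).
    """
    scale = math.exp(-falloff * pose_diff)
    return [ref_depth + (d - ref_depth) * scale for d in depths]
```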

    Apparatus and method for generating an image data stream

    Publication No.: US11317124B2

    Publication Date: 2022-04-26

    Application No.: US17279252

    Filing Date: 2019-09-16

    Abstract: An apparatus comprises a processor (301) providing a plurality of reference video streams for a plurality of reference viewpoints for a scene. A receiver (305) receives a viewpoint request from a remote client, where the viewpoint request is indicative of a requested viewpoint. A generator (303) generates an output video stream comprising a first video stream with frames from a first reference video stream and a second video stream with frames from a second reference video stream. The frames of the second video stream are differentially encoded relative to the frames of the first video stream. A controller (307) selects the reference video streams for the first and second video streams in response to the viewpoint request, and may be arranged to swap which reference video stream is non-differentially encoded and which is differentially encoded when the viewpoint request meets a criterion.
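
    The swap behaviour can be sketched for two reference streams with a hysteresis margin, so the roles only swap when the requested viewpoint is clearly closer to the differentially encoded stream. Names, the 1D viewpoints, and the margin criterion are illustrative assumptions.

```python
class StreamController:
    """Track which of two reference streams is intra-coded (the anchor)
    and which is differentially coded against it."""

    def __init__(self, vp_a, vp_b, swap_margin=0.1):
        self.vps = (vp_a, vp_b)       # reference viewpoint positions
        self.swap_margin = swap_margin
        self.anchor = 0               # index of the non-differential stream

    def select(self, requested):
        """Return (anchor_index, differential_index) for a request."""
        other = 1 - self.anchor
        # Swap only when the other stream is closer by more than the margin.
        gain = (abs(self.vps[self.anchor] - requested)
                - abs(self.vps[other] - requested))
        if gain > self.swap_margin:
            self.anchor = other
        return self.anchor, 1 - self.anchor
```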

    APPARATUS AND METHOD FOR GENERATING IMAGES OF A SCENE

    Publication No.: US20210264658A1

    Publication Date: 2021-08-26

    Application No.: US17254903

    Filing Date: 2019-06-20

    Abstract: An apparatus comprises a store (209) storing a set of anchor poses for a scene, as well as typically 3D image data for the scene. A receiver (201) receives viewer poses for a viewer, and a render pose processor (203) determines a render pose in the scene for a current viewer pose of the viewer poses, where the render pose is determined relative to a reference anchor pose. A retriever (207) retrieves 3D image data for the reference anchor pose, and a synthesizer (205) synthesizes images for the render pose in response to the 3D image data. A selector selects the reference anchor pose from the set of anchor poses and is arranged to switch the reference anchor pose from a first anchor pose of the set to a second anchor pose of the set in response to the viewer poses.
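
    The selector's switching behaviour resembles nearest-anchor selection with stickiness: the current anchor's distance is discounted so that switching requires a clear gain. A hypothetical sketch; the bias rule and all names are assumptions.

```python
import numpy as np

class AnchorSelector:
    """Select the reference anchor pose nearest the viewer position,
    discounting the current anchor's distance by `bias` so the selection
    does not flicker near the midpoint between two anchors."""

    def __init__(self, anchors, bias=0.3):
        self.anchors = [np.asarray(a, dtype=float) for a in anchors]
        self.bias = bias
        self.current = 0

    def update(self, viewer_pos):
        viewer_pos = np.asarray(viewer_pos, dtype=float)

        def cost(i):
            d = np.linalg.norm(self.anchors[i] - viewer_pos)
            return d - self.bias if i == self.current else d

        self.current = min(range(len(self.anchors)), key=cost)
        return self.current
```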

    Method and apparatus for processing an image property map

    Publication No.: US10944952B2

    Publication Date: 2021-03-09

    Application No.: US16479972

    Filing Date: 2018-01-31

    Abstract: An apparatus comprises a receiver (101) receiving a light intensity image, a confidence map, and an image property map. A filter unit (103) is arranged to filter the image property map in response to the light intensity image and the confidence map. Specifically, for a first position, the filter unit (103) determines a combined neighborhood image property value in response to a weighted combination of neighborhood image property values in a neighborhood around the first position, where the weight for a first neighborhood image property value at a second position depends on a confidence value for the first neighborhood image property value and on a difference between the light intensity values for the first position and for the second position. It then determines a first filtered image property value for the first position as a combination of a first image property value at the first position in the image property map and the combined neighborhood image property value.
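
    The weighting scheme resembles a confidence-weighted cross (joint) bilateral filter. The Gaussian intensity kernel and the fixed 50/50 final combination below are assumptions; the abstract leaves both the weight function and the combination rule general.

```python
import numpy as np

def filter_property_map(prop, confidence, intensity, radius=2, sigma_i=0.1):
    """Filter `prop` guided by per-pixel confidence and intensity similarity.

    Each neighbour is weighted by its confidence times a Gaussian of the
    intensity difference to the centre pixel; the weighted average is then
    blended 50/50 with the original value (illustrative combination rule).
    """
    h, w = prop.shape
    out = np.empty_like(prop)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = confidence[ny, nx] * np.exp(
                            -((intensity[ny, nx] - intensity[y, x]) ** 2)
                            / (2 * sigma_i ** 2))
                        acc += wgt * prop[ny, nx]
                        wsum += wgt
            combined = acc / wsum if wsum > 0 else prop[y, x]
            out[y, x] = 0.5 * prop[y, x] + 0.5 * combined
    return out
```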

    Image generation from video
    Invention Grant

    Publication No.: US10931928B2

    Publication Date: 2021-02-23

    Application No.: US16497498

    Filing Date: 2018-03-02

    Abstract: An apparatus comprising a store (101) for storing route data for a set of routes in an N-dimensional space where each route of the set of routes is associated with a video item including frames comprising both image and depth information. An input (105) receives a viewer position indication and a selector (107) selects a first route of the set of routes in response to a selection criterion dependent on a distance metric dependent on the viewer position indication and positions of the routes of the set of routes. A retriever (103, 109) retrieves a first video item associated with the first route from a video source (203). An image generator (111) generates at least one view image for the viewer position indication from a first set of frames from the first video item. In the system, the selection criterion is biased towards a currently selected route relative to other routes of the set of routes.
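
    The distance metric between a viewer position and a route can be sketched as the minimum point-to-segment distance along the route's waypoints, with the bias toward the currently selected route modelled as a fixed discount on its distance. All names and the discount rule are illustrative assumptions.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def select_route(routes, viewer_pos, current=None, bias=0.5):
    """Pick the route (a polyline of waypoints) nearest the viewer,
    discounting the currently selected route's distance by `bias` so the
    selection is sticky rather than flickering between nearby routes."""
    best, best_d = None, float("inf")
    for i, route in enumerate(routes):
        d = min(point_segment_dist(viewer_pos, a, b)
                for a, b in zip(route[:-1], route[1:]))
        if i == current:
            d -= bias
        if d < best_d:
            best, best_d = i, d
    return best
```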

    Method and apparatus for determining a depth map for an image

    Publication No.: US10580154B2

    Publication Date: 2020-03-03

    Application No.: US15569184

    Filing Date: 2016-05-06

    Abstract: An apparatus for determining a depth map for an image comprises an image unit (105) which provides an image with an associated depth map comprising depth values for at least some pixels of the image. A probability unit (107) determines a probability map for the image comprising probability values indicative of a probability that pixels belong to a text image object. A depth unit (109) generates a modified depth map in which the modified depth values are determined as weighted combinations of the input depth values and a text image object depth value corresponding to a preferred depth for text, the weighting being dependent on the probability value for the pixels. The approach provides a softer depth modification for text objects, resulting in reduced artefacts and degradations, e.g. when performing view shifting using depth maps.
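
    The weighted combination can be sketched per pixel as a linear blend between the original depth and the preferred text depth, weighted by the text probability. A minimal 1D illustration with hypothetical names; the claimed weighting is more general.

```python
def soften_text_depth(depths, text_probs, text_depth):
    """Blend each depth toward the preferred text depth in proportion to
    the probability that the pixel belongs to a text image object.
    p = 0 keeps the original depth; p = 1 snaps to text_depth."""
    return [(1.0 - p) * d + p * text_depth
            for d, p in zip(depths, text_probs)]
```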
