Block-based advanced residual prediction for 3D video coding

    Publication No.: US09967592B2

    Publication Date: 2018-05-08

    Application No.: US14592633

    Filing Date: 2015-01-08

    Inventors: Li Zhang; Ying Chen

    CPC classification numbers: H04N19/577; H04N19/52; H04N19/597

    Abstract: Techniques for advanced residual prediction (ARP) in video coding may include receiving a first encoded block of video data in a first access unit, wherein the first encoded block of video data was encoded using advanced residual prediction and bi-directional prediction, determining temporal motion information for a first prediction direction of the first encoded block of video data, and identifying reference blocks for a second prediction direction, different than the first prediction direction, using the temporal motion information determined for the first prediction direction, wherein the reference blocks are in a second access unit.
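
    As a rough sketch of the reuse described above (not the claimed decoder; the structure, field names, and the block_at accessor are assumptions for illustration), the temporal motion information found for the first prediction direction can be applied to fetch the second direction's reference block from the other access unit:

```python
from dataclasses import dataclass


@dataclass
class TemporalMotion:
    """Temporal motion information reused across prediction directions
    (hypothetical structure; field names are illustrative)."""
    mv: tuple        # (dx, dy) motion vector
    ref_idx: int     # reference picture index


def locate_second_direction_reference(block_x, block_y, motion: TemporalMotion,
                                      second_access_unit):
    """Reuse the temporal motion derived for the first prediction direction to
    locate the reference block for the second direction in a different access
    unit. `second_access_unit.block_at` is an assumed accessor, not a real API."""
    dx, dy = motion.mv
    return second_access_unit.block_at(block_x + dx, block_y + dy,
                                       motion.ref_idx)
```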

    Target output layers in video coding

    Publication No.: US09936196B2

    Publication Date: 2018-04-03

    Application No.: US14066209

    Filing Date: 2013-10-29

    CPC classification numbers: H04N19/30; H04N19/597; H04N19/70

    Abstract: In one example, a device includes a video coder configured to code a multilayer bitstream comprising a plurality of layers of video data, where the plurality of layers of video data are associated with a plurality of layer sets, and where each layer set contains one or more layers of video data of the plurality of layers, and to code one or more syntax elements of the bitstream indicating one or more output operation points, where each output operation point is associated with a layer set of the plurality of layer sets and one or more target output layers of the plurality of layers.
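
    As an illustration of the relationship the abstract describes (hypothetical Python structures, not the actual bitstream syntax), each output operation point pairs a layer set with the subset of its layers targeted for output:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class OutputOperationPoint:
    """Hypothetical model: a layer set plus its target output layers."""
    layer_set: List[int]             # layer IDs contained in the layer set
    target_output_layers: List[int]  # layers from layer_set to be output


# Example: a three-layer set where only the highest layer is output for display.
op_point = OutputOperationPoint(layer_set=[0, 1, 2], target_output_layers=[2])
```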

    MULTI-TO-MULTI TRACKING IN VIDEO ANALYTICS

    公开(公告)号:US20180046865A1

    公开(公告)日:2018-02-15

    申请号:US15384911

    申请日:2016-12-20

    Abstract: Techniques and systems are provided for processing video data, and in particular for matching a plurality of bounding boxes to a plurality of trackers. In some examples, a first association is performed, in which one or more of the bounding boxes are associated with one or more of the trackers by minimizing the distances between them. After the first association, a set of unmatched trackers is identified: trackers that were not associated with any bounding box during the first association. A second association is then performed, in which each unmatched tracker is associated with a bounding box that is within a first pre-determined distance. After the second association, a set of unmatched bounding boxes is identified: bounding boxes that were not associated with any tracker during the second association. A third association is then performed, in which each unmatched bounding box is associated with a tracker that is within a second pre-determined distance.
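
    A compact sketch of the three-pass matching described above, assuming boxes and trackers are represented by their center points and compared with Euclidean distance (the distance metric, greedy first pass, and thresholds are illustrative choices, not the patented method):

```python
import math


def center_distance(box, tracker):
    """Euclidean distance between a bounding-box center and a tracker center."""
    (bx, by), (tx, ty) = box, tracker
    return math.hypot(bx - tx, by - ty)


def associate(boxes, trackers, dist1, dist2):
    """Three-pass association sketch returning (tracker_idx, box_idx) pairs."""
    pairs = []

    # Pass 1: greedily match by smallest center distance (approximates the
    # distance-minimizing first association in the abstract).
    candidates = sorted((center_distance(b, t), i, j)
                        for i, b in enumerate(boxes)
                        for j, t in enumerate(trackers))
    matched_boxes, matched_trackers = set(), set()
    for _, i, j in candidates:
        if i not in matched_boxes and j not in matched_trackers:
            pairs.append((j, i))
            matched_boxes.add(i)
            matched_trackers.add(j)

    # Pass 2: each still-unmatched tracker takes a box within dist1, if any.
    for j, t in enumerate(trackers):
        if j in matched_trackers:
            continue
        for i, b in enumerate(boxes):
            if center_distance(b, t) <= dist1:
                pairs.append((j, i))
                matched_trackers.add(j)
                matched_boxes.add(i)
                break

    # Pass 3: each still-unmatched box takes a tracker within dist2, if any.
    for i, b in enumerate(boxes):
        if i in matched_boxes:
            continue
        for j, t in enumerate(trackers):
            if center_distance(b, t) <= dist2:
                pairs.append((j, i))
                matched_boxes.add(i)
                break

    return pairs
```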

    METHODS AND SYSTEMS FOR AUTO-ZOOM BASED ADAPTIVE VIDEO STREAMING

    Publication No.: US20170302719A1

    Publication Date: 2017-10-19

    Application No.: US15433763

    Filing Date: 2017-02-15

    Abstract: Systems, methods, and computer-readable media are described for providing automatic zoom based adaptive video streaming. In some examples, a tracking video stream and a target video stream are obtained and are processed. The tracking video stream has a first resolution, and the target video stream has a second resolution that is higher than the first resolution. The tracking video stream is processed to define regions of interest for frames of the tracking video stream. The target video stream is processed to generate zoomed-in regions of frames of the target video stream. A zoomed-in region of the target video stream corresponds to a region of interest defined using the tracking video stream. The zoomed-in regions of the frames of the target video stream are then provided for display on a client device.
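
    A minimal sketch of the coordinate mapping implied above (the ROI format, helper name, and resolutions are assumptions, not the disclosed system): a region of interest found in the low-resolution tracking stream is rescaled into the high-resolution target stream before the zoomed-in crop is produced.

```python
def scale_roi(roi, tracking_size, target_size):
    """Map an (x, y, w, h) region of interest from tracking-stream coordinates
    to target-stream coordinates by the ratio of the two resolutions."""
    x, y, w, h = roi
    sx = target_size[0] / tracking_size[0]
    sy = target_size[1] / tracking_size[1]
    return (x * sx, y * sy, w * sx, h * sy)


# Example: ROI found in a 640x360 tracking stream, mapped to a 3840x2160 target.
roi_hi = scale_roi((100, 50, 200, 120), (640, 360), (3840, 2160))
# roi_hi == (600.0, 300.0, 1200.0, 720.0); this region of the target frame is
# cropped and presented as the zoomed-in view on the client device.
```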
