Abstract:
A method and apparatus for a 3D video coding system are disclosed. Embodiments according to the present invention apply a sample adaptive offset (SAO) process to at least one dependent-view image of the processed multi-view images when processed multi-view images are received. Embodiments according to the present invention also apply the SAO process to at least one dependent-view image of the processed multi-view images, or to at least one depth map of the processed multi-view depth maps, when both the processed multi-view images and the processed multi-view depth maps are received. The SAO process can be applied to each color component of the processed multi-view images or the processed multi-view depth maps. The SAO parameters associated with a target region in one dependent-view image, or in one depth map corresponding to one view, may be shared with or predicted from second SAO parameters associated with a source region corresponding to another view.
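The following Python sketch illustrates the sharing or prediction of SAO parameters across views described above; the SAOParams record, the merge_flag, and the offset_deltas are illustrative assumptions rather than elements recited in the abstract.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SAOParams:
    sao_type: int                      # e.g. 0 = off, 1 = band offset, 2 = edge offset
    offsets: List[int] = field(default_factory=lambda: [0, 0, 0, 0])

def derive_sao_params(source: SAOParams, merge_flag: bool,
                      offset_deltas: Optional[List[int]] = None) -> SAOParams:
    """Derive the SAO parameters of a target region in one view from a source
    region corresponding to another view: either share them directly (merge)
    or use them as predictors and add signalled deltas."""
    if merge_flag:
        # Sharing: the target region re-uses the source region's parameters.
        return SAOParams(source.sao_type, list(source.offsets))
    # Prediction: the source offsets act as predictors; only deltas are coded.
    deltas = offset_deltas or [0, 0, 0, 0]
    return SAOParams(source.sao_type,
                     [p + d for p, d in zip(source.offsets, deltas)])

# Example: a depth-map region predicted from the corresponding base-view region.
base_view = SAOParams(sao_type=2, offsets=[1, 0, -1, 2])
print(derive_sao_params(base_view, merge_flag=False, offset_deltas=[0, 1, 0, -1]))
```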
Abstract:
A method for deriving a motion vector predictor (MVP) receives motion vectors (MVs) associated with neighboring reference blocks of the current block. The method determines at least one first spatial search MV associated with a first MV searching order and at least one second spatial search MV associated with a second MV searching order for each neighboring reference block. The method then determines whether a first available first spatial search MV exists for said at least one neighboring reference block according to the first MV searching order, and provides the first available first spatial search MV as a spatial MVP for the current block. Finally, the method determines whether a first available second spatial search MV exists for said at least one neighboring reference block according to the second MV searching order only if none of the first spatial search MVs for said at least one neighboring reference block is available.
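The two-stage search can be sketched as follows; the NeighborBlock record with pre-computed first/second search MVs is an illustrative simplification of whatever the two MV searching orders actually examine.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

MV = Tuple[int, int]

@dataclass
class NeighborBlock:
    first_search_mv: Optional[MV]    # MV found by the first MV searching order, if any
    second_search_mv: Optional[MV]   # MV found by the second MV searching order, if any

def derive_spatial_mvp(neighbors: Sequence[NeighborBlock]) -> Optional[MV]:
    # Stage 1: provide the first available first spatial search MV as the spatial MVP.
    for block in neighbors:
        if block.first_search_mv is not None:
            return block.first_search_mv
    # Stage 2: consulted only when no first spatial search MV is available
    # for any neighboring reference block.
    for block in neighbors:
        if block.second_search_mv is not None:
            return block.second_search_mv
    return None  # no spatial MVP found

# Example: the second searching order is used only after the first one fails everywhere.
print(derive_spatial_mvp([NeighborBlock(None, (1, 2)), NeighborBlock(None, None)]))
```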
Abstract:
Embodiments of the present invention re-use at least a portion of the motion information of the corresponding block as the motion information of the current block if a corresponding reference picture, corresponding to the reference picture pointed to by the corresponding block, is in the current reference picture list of the current block. If the corresponding reference picture is not in the current reference picture list of the current block, the motion information of the current block is determined using an alternative process, where at least the portion of the motion information that was re-used in the previous case is not re-used for the current block according to the alternative process.
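A minimal sketch of this re-use rule, assuming reference pictures are identified by POC values and that an alternative_process callback stands in for the unspecified fallback derivation:

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class MotionInfo:
    mv: Tuple[int, int]
    ref_poc: int          # POC of the reference picture pointed to by the MV

def derive_current_motion(corresponding: MotionInfo,
                          current_ref_list: Sequence[int],
                          alternative_process: Callable[[], MotionInfo]) -> MotionInfo:
    if corresponding.ref_poc in current_ref_list:
        # The corresponding reference picture is in the current reference
        # picture list: re-use (at least part of) the corresponding block's
        # motion information.
        return MotionInfo(corresponding.mv, corresponding.ref_poc)
    # Otherwise use the alternative process, which does not re-use that
    # portion of the motion information.
    return alternative_process()

# Example: fall back to a zero MV on the first reference picture in the list.
print(derive_current_motion(MotionInfo((4, -2), ref_poc=8), [0, 16],
                            lambda: MotionInfo((0, 0), ref_poc=0)))
```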
Abstract:
A method and apparatus for coding video data using Inter prediction mode or Merge mode in a video coding system are disclosed, where the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL), and the EL has higher spatial resolution or better video quality than the BL. In one embodiment, at least one information piece of the motion information associated with one or more BL blocks in the BL is identified. A motion vector prediction (MVP) candidate list or a Merge candidate list for a selected block in the EL is then determined, where said at least one information piece associated with said one or more BL blocks in the BL is included in the MVP candidate list or the Merge candidate list. The input data associated with the selected block is coded or decoded using the MVP candidate list or the Merge candidate list.
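A minimal sketch of including BL motion information in the EL candidate list; the insertion position, the 2x up-scaling of BL MVs, and the list size are assumptions for illustration only.

```python
from typing import List, Sequence, Tuple

MV = Tuple[int, int]

def build_el_candidate_list(bl_mvs: Sequence[MV], el_candidates: Sequence[MV],
                            spatial_ratio: int = 2, max_cands: int = 5) -> List[MV]:
    candidates: List[MV] = []
    # Motion information from the co-located BL block(s), scaled to the EL
    # resolution, is included as candidate(s).
    for mvx, mvy in bl_mvs:
        scaled = (mvx * spatial_ratio, mvy * spatial_ratio)
        if scaled not in candidates:
            candidates.append(scaled)
    # The usual EL spatial/temporal candidates fill the remaining positions.
    for mv in el_candidates:
        if mv not in candidates:
            candidates.append(mv)
    return candidates[:max_cands]

# Example: one BL MV plus two EL spatial candidates; the duplicate is dropped.
print(build_el_candidate_list([(2, -1)], [(4, -2), (0, 3)]))
```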
Abstract:
A method and apparatus for three-dimensional video coding and multi-view video coding are disclosed. Embodiments according to the present invention derive a unified disparity vector (DV) based on neighboring blocks of the current block or depth information associated with the current block, and locate a single corresponding block in a reference view according to the unified DV. An inter-view motion vector prediction (MVP) candidate is then derived for both list0 and list1 from the single corresponding block, i.e., the list0 and list1 MVs of the inter-view MVP candidate are both derived from the single corresponding block located according to the unified DV.
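A minimal sketch of the single-block derivation; locate_block is a hypothetical lookup into the reference view supplied by the caller.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

MV = Tuple[int, int]

@dataclass
class CorrespondingBlock:
    mv_list0: Optional[MV]
    mv_list1: Optional[MV]

def derive_inter_view_mvp(unified_dv: MV,
                          locate_block: Callable[[MV], CorrespondingBlock]
                          ) -> Tuple[Optional[MV], Optional[MV]]:
    # One unified DV, one corresponding block: both the list0 and list1 MVs of
    # the inter-view MVP candidate come from that same block.
    corr = locate_block(unified_dv)
    return corr.mv_list0, corr.mv_list1

# Example with a stub lookup that ignores the DV value.
print(derive_inter_view_mvp((7, 0), lambda dv: CorrespondingBlock((1, 0), (-2, 1))))
```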
Abstract:
A method for three-dimensional video encoding or decoding is disclosed. In one embodiment, the method constrains the disparity vector (DV) to generate a constrained DV, wherein the horizontal component, the vertical component, or both components of the constrained DV are constrained to be zero or within a range from M to N units of DV precision, where M and N are integers. In another embodiment, a derived DV for DV-based motion-compensated prediction is determined from a constrained neighboring block set of the current block. In yet another embodiment, a derived disparity vector is derived to replace an inter-view Merge candidate if the inter-view Merge candidate of the current block is not available or not valid. In yet another embodiment, a DV difference (DVD) or a motion vector difference (MVD) for the current block is determined according to a DV, and the DVD/MVD is constrained to be zero or within a range.
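The DV constraint of the first embodiment can be sketched as a simple clamp; the parameter names and the default range are illustrative.

```python
from typing import Tuple

def constrain_dv(dv: Tuple[int, int], constrain_horizontal: bool = False,
                 constrain_vertical: bool = True, force_zero: bool = True,
                 m: int = -2, n: int = 2) -> Tuple[int, int]:
    """Constrain the selected DV component(s) to zero, or clip them to the
    range [m, n] in units of DV precision."""
    def apply(component: int) -> int:
        return 0 if force_zero else max(m, min(n, component))
    dvx, dvy = dv
    if constrain_horizontal:
        dvx = apply(dvx)
    if constrain_vertical:
        dvy = apply(dvy)
    return dvx, dvy

# Example: force the vertical DV component to zero (a common configuration for
# rectified multi-view content); the horizontal component is left untouched.
print(constrain_dv((13, 3)))   # -> (13, 0)
```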
Abstract:
A method and apparatus for three-dimensional video coding are disclosed. Embodiments according to the present invention apply a pruning process to one or more spatial candidates and at least one of the inter-view candidate and the temporal candidate to generate a retained candidate set. The pruning process removes any redundant candidate among the one or more spatial candidates and said at least one of the inter-view candidate and the temporal candidate. A Merge/Skip candidate list that includes the retained candidate set is then generated. In one embodiment, the temporal candidate is exempted from the pruning process. In another embodiment, the inter-view candidate is exempted from the pruning process. In other embodiments, the pruning process is applied to the inter-view candidate and two or more spatial candidates, and compares the spatial candidates with the inter-view candidate.
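A minimal sketch of the pruning step, with an exempt set modeling the embodiments in which the temporal or inter-view candidate skips pruning; the list size and candidate ordering are illustrative.

```python
from typing import Iterable, List, Optional, Tuple

MV = Tuple[int, int]

def build_merge_list(inter_view: Optional[MV], spatial: Iterable[MV],
                     temporal: Optional[MV], exempt: Tuple[str, ...] = ("temporal",),
                     max_cands: int = 5) -> List[MV]:
    retained: List[MV] = []

    def add(name: str, cand: Optional[MV]) -> None:
        if cand is None:
            return
        # Pruning: drop a candidate that duplicates one already retained,
        # unless this candidate type is exempted from the pruning process.
        if name in exempt or cand not in retained:
            retained.append(cand)

    add("inter_view", inter_view)
    for s in spatial:
        add("spatial", s)
    add("temporal", temporal)
    return retained[:max_cands]

# Example: the spatial duplicate of the inter-view candidate is pruned, while
# the temporal candidate is kept even though it is identical (it is exempted).
print(build_merge_list((1, 0), [(1, 0), (2, -1)], (1, 0)))
```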
Abstract:
A method and apparatus for deriving a scaled MV (motion vector) for a current block based on a candidate MV associated with a candidate block are disclosed. Embodiments according to the present invention increase the effective scaling factor of motion vector scaling. In one embodiment, a distance ratio is computed between a first picture distance, from the current picture to a target reference picture pointed to by a current motion vector of the current block, and a second picture distance, from a candidate picture corresponding to the candidate block to a candidate reference picture pointed to by the candidate MV. The scaled MV is then generated from the candidate MV according to the distance ratio, where the scaled MV has an effective scaling ratio between −m and n, and where m and n are positive integers greater than 4. The values of m and n can be 8, 16 or 32.
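A minimal sketch of the scaling with an extended clipping range, loosely following HEVC-style fixed-point MV scaling (where 256 represents a ratio of 1.0); the constants and rounding are illustrative rather than taken from the abstract.

```python
from typing import Tuple

MV = Tuple[int, int]

def scale_mv(candidate_mv: MV, tb: int, td: int, m: int = 16, n: int = 16) -> MV:
    """tb: picture distance from the current picture to the target reference
    picture; td: picture distance from the candidate picture to the candidate
    reference picture (td must be non-zero)."""
    abs_td = abs(td)
    tx = (16384 + abs_td // 2) // abs_td
    if td < 0:
        tx = -tx
    scale = (tb * tx + 32) >> 6                      # fixed point, 256 == 1.0
    scale = max(-m * 256, min(n * 256 - 1, scale))   # effective ratio within [-m, n)

    def apply(v: int) -> int:
        p = scale * v
        return (p + 127 + (1 if p < 0 else 0)) >> 8

    mvx, mvy = candidate_mv
    return apply(mvx), apply(mvy)

# Example: distance ratio 6/2 = 3, so the candidate MV is roughly tripled.
print(scale_mv((8, -4), tb=6, td=2))   # -> (24, -12)
```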
Abstract:
An apparatus and method of deriving a motion vector predictor (MVP) for a current MV of a current block in Inter, Merge or Skip mode based on motion vector (MV) attribute search are disclosed. The system determines a first MV attribute search comprising checking whether a given MV points to the target reference picture in the given reference list, or whether the given MV points to the target reference picture in the other reference list, and determines a second MV attribute search comprising checking whether the given MV points to other reference pictures in the given reference list, or whether the given MV points to the other reference pictures in the other reference list. The MVP for the current block is then determined from the neighboring blocks according to a search order.
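A minimal sketch of the four attribute checks; the nesting of the attribute order and the block search order, and the omission of MV scaling for the second attribute search, are illustrative choices.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

MV = Tuple[int, int]

@dataclass
class MVInfo:
    mv: MV
    ref_pic: int    # identifier (e.g. POC) of the picture the MV points to
    ref_list: int   # reference list the MV belongs to: 0 or 1

def find_mvp(neighbor_mvs: Sequence[Sequence[MVInfo]],
             target_ref_pic: int, given_list: int) -> Optional[MV]:
    # First MV attribute search: the MV points to the target reference picture,
    # checked first in the given reference list and then in the other list.
    # Second MV attribute search: the MV points to other reference pictures,
    # again given list before other list (any MV scaling is omitted here).
    for points_to_target in (True, False):
        for in_given_list in (True, False):
            for block in neighbor_mvs:        # neighboring blocks in search order
                for info in block:
                    if ((info.ref_pic == target_ref_pic) == points_to_target
                            and (info.ref_list == given_list) == in_given_list):
                        return info.mv
    return None

# Example: no neighbor points to the target picture (POC 4) in list 0, so the
# search falls through to the later attribute checks.
print(find_mvp([[MVInfo((2, 1), ref_pic=8, ref_list=1)]], target_ref_pic=4, given_list=0))
```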
Abstract:
A method and apparatus of priority-based MVP (motion vector predictor) derivation for motion compensation in a video encoder or decoder are disclosed. According to this method, one or more final motion vector predictors (MVPs) are derived using a priority-based MVP derivation process. The one or more final MVPs are derived by selecting the first available MV(s) from a priority-based MVP list for Inter prediction mode, Skip mode or Merge mode, based on reference data of one or two target reference pictures that are reconstructed prior to the current block, according to a priority order. Therefore, there is no need to transmit information at the encoder side, nor to derive information at the decoder side, related to one or more MVP indices for identifying the one or more final MVPs in the video bitstream.
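A minimal sketch of the priority-based selection, which shows why no MVP index needs to be signalled; the candidate names and the priority order are illustrative.

```python
from typing import List, Optional, Sequence, Tuple

MV = Tuple[int, int]

def derive_final_mvps(candidates_in_priority_order: Sequence[Tuple[str, Optional[MV]]],
                      num_needed: int = 1) -> List[Tuple[str, MV]]:
    """Pick the first available MV(s) by visiting the candidates in priority
    order; both encoder and decoder can repeat this derivation, so no MVP
    index needs to be carried in the bitstream."""
    final: List[Tuple[str, MV]] = []
    for name, mv in candidates_in_priority_order:
        if mv is not None:                 # first available MV by priority wins
            final.append((name, mv))
            if len(final) == num_needed:   # e.g. one final MVP per target reference picture
                break
    return final

# Example priority order: left neighbor, above neighbor, temporal co-located, zero MV.
print(derive_final_mvps([("left", None), ("above", (3, -1)),
                         ("col", (0, 2)), ("zero", (0, 0))]))
```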