Abstract:
An electronic apparatus includes at least two camera devices and a processing device. In response to the two camera devices detecting an object positioned at a first time, the processing device determines a first distance and a second distance to a surface formed by the two camera devices, and determines a third distance from the object positioned at a second time to the surface, wherein the second time is later than the first time, and the third distance is longer than the first distance and shorter than the second distance. Also, the processing device determines a depth in a virtual space corresponding to the object positioned at the second time according to the first distance, the second distance, and the third distance.
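A minimal sketch of one way the depth mapping could work, assuming the first and second distances act as near/far bounds and the virtual depth is obtained by simple linear interpolation; the abstract does not specify the exact mapping function, so the function name and parameters below are illustrative only.

```python
def virtual_depth(first_distance: float,
                  second_distance: float,
                  third_distance: float,
                  near_depth: float = 0.0,
                  far_depth: float = 1.0) -> float:
    """Map the object's distance at the second time (third_distance), which lies
    between first_distance and second_distance, to a depth value in the virtual
    space spanning [near_depth, far_depth]."""
    if not (first_distance < third_distance < second_distance):
        raise ValueError("third_distance must lie between the first and second distances")
    ratio = (third_distance - first_distance) / (second_distance - first_distance)
    return near_depth + ratio * (far_depth - near_depth)

# Example: bounds determined at the first time are 30 cm and 90 cm;
# at the second time the object sits at 45 cm -> 25% of the virtual depth range.
print(virtual_depth(30.0, 90.0, 45.0))  # 0.25
```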
Abstract:
A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the derived depth information. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention is related to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention is related to motion vector or motion vector predictor processing based on the depth information.
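A hypothetical sketch of depth-based texture partitioning: a texture block is split further when the co-located depth block is not homogeneous. The quadtree-style split and the variance threshold are assumptions for illustration; the abstract only states that partitioning is based on depth information of the depth map.

```python
import numpy as np

def partition_by_depth(depth_block: np.ndarray, threshold: float, min_size: int = 8):
    """Return a list of (row, col, size) leaf blocks for the co-located texture block."""
    def split(r, c, size):
        sub = depth_block[r:r + size, c:c + size]
        # Stop splitting when the block is small or its depth is nearly uniform.
        if size <= min_size or sub.var() <= threshold:
            return [(r, c, size)]
        half = size // 2
        leaves = []
        for dr in (0, half):
            for dc in (0, half):
                leaves += split(r + dr, c + dc, half)
        return leaves
    return split(0, 0, depth_block.shape[0])

depth = np.zeros((32, 32))
depth[:16, :16] = 50  # a foreground object in one quadrant
print(partition_by_depth(depth, threshold=1.0))
```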
Abstract:
A dual-camera device is provided. The dual-camera device includes wide and telephoto imaging sections with respective lens/sensor combinations, and a processor. The wide and telephoto imaging sections provide wide image data and telephoto image data, respectively. At least one misalignment error exists between the wide and telephoto imaging sections. The processor generates an output image provided with a smooth transition when switching between a lower zooming factor and a higher zooming factor. The processor warps the wide image data using a portion of the misalignment error to generate base wide image data, and warps the telephoto image data using the remaining portion of the misalignment error to generate base telephoto image data. The processor generates the output image using the base wide image data at the lower zooming factor, and generates the output image using the base telephoto image data at the higher zooming factor.
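A minimal sketch of the warp split described above, assuming the misalignment error can be modelled as a single 2D translation and that a fixed fraction `alpha` of it is assigned to the wide image; real misalignment would also include rotation and scale, and the split need not be a constant fraction.

```python
import numpy as np

def split_misalignment(misalignment: np.ndarray, alpha: float = 0.5):
    """Return (wide_correction, tele_correction) whose combined effect cancels the error,
    so the base wide and base telephoto images land on a common frame of reference."""
    wide_correction = alpha * misalignment            # warp applied to the wide image
    tele_correction = -(1.0 - alpha) * misalignment   # warp applied to the telephoto image
    return wide_correction, tele_correction

def output_image(base_wide, base_tele, zoom_factor: float, switch_zoom: float = 2.0):
    """Use the base wide image at lower zoom factors and the base telephoto image at higher ones."""
    return base_wide if zoom_factor < switch_zoom else base_tele

offsets = split_misalignment(np.array([4.0, -2.0]), alpha=0.5)
print(offsets)  # each imaging section absorbs part of the 4px/-2px misalignment
```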
Abstract:
A method for deriving a motion vector predictor (MVP) receives motion vectors (MVs) associated with at least one neighboring reference block of the current block. The method determines at least one first spatial search MV associated with a first MV searching order and at least one second spatial search MV associated with a second MV searching order for each neighboring reference block. Then, the method determines whether a first available-first spatial search MV exists for said at least one neighboring reference block according to the first MV searching order, and provides the first available-first spatial search MV as a spatial MVP for the current block. Finally, the method determines whether a first available-second spatial search MV exists for said at least one neighboring reference block according to the second MV searching order only if none of the first spatial search MVs for said at least one neighboring reference block is available.
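A minimal sketch of the two-stage spatial search, assuming each neighboring block exposes hypothetical `first_search_mv` and `second_search_mv` attributes that already reflect the two searching orders; the actual availability conditions are defined in the specification, not here.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

MV = Tuple[int, int]

@dataclass
class NeighborBlock:
    first_search_mv: Optional[MV]   # MV found under the first MV searching order
    second_search_mv: Optional[MV]  # MV found under the second MV searching order

def derive_spatial_mvp(neighbors) -> Optional[MV]:
    # Stage 1: provide the first available MV under the first searching order.
    for block in neighbors:
        if block.first_search_mv is not None:
            return block.first_search_mv
    # Stage 2: entered only when no first spatial search MV was available at all.
    for block in neighbors:
        if block.second_search_mv is not None:
            return block.second_search_mv
    return None

neighbors = [NeighborBlock(None, (1, -2)), NeighborBlock(None, (3, 0))]
print(derive_spatial_mvp(neighbors))  # (1, -2): falls back to the second searching order
```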
Abstract:
A 3D displaying method, comprising: acquiring a distance information map from at least one image; receiving control information from a user input device; modifying the distance information map according to the control information to generate a modified distance information map; generating an interactive 3D image according to the modified distance information map; and displaying the interactive 3D image.
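A minimal sketch of this flow, assuming the control information reduces to a gain/offset pair applied to the distance information map and the interactive 3D image is approximated by a depth-dependent pixel shift (one synthesized view); the abstract does not fix the rendering method, so these choices are illustrative.

```python
import numpy as np

def modify_distance_map(distance_map: np.ndarray, gain: float, offset: float) -> np.ndarray:
    """Apply the user control information (gain/offset) to the acquired distance map."""
    return np.clip(gain * distance_map + offset, 0.0, 1.0)

def render_interactive_3d(image: np.ndarray, distance_map: np.ndarray,
                          max_disparity: int = 8) -> np.ndarray:
    """Build a second view by shifting each pixel according to its modified distance."""
    h, w = image.shape[:2]
    shifted = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(distance_map[y, x] * max_disparity)
            shifted[y, max(x - d, 0)] = image[y, x]
        # Holes left by the shift would need inpainting in a real system.
    return shifted

img = np.random.rand(4, 8)
depth = modify_distance_map(np.random.rand(4, 8), gain=1.5, offset=0.0)
print(render_interactive_3d(img, depth).shape)  # (4, 8)
```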
Abstract:
An apparatus and method of deriving a motion vector predictor (MVP) for a current MV of a current block in Inter, Merge or Skip mode are disclosed based on motion vector (MV) attribute search. The system determines a first MV attribute search comprising whether a given MV points to the target reference picture in the given reference list, or whether the given MV points to the target reference picture in the other reference list, and determines a second MV attribute search comprising whether the given MV points to other reference pictures in the given reference list, or whether the given MV points to the other reference pictures in the other reference list. The MVP for the current block is then determined from the neighboring blocks according to a search order.
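A minimal sketch of the attribute-based search, assuming each candidate MV records which reference list and reference picture it points to (hypothetical field names), and that the attribute conditions are checked before iterating over neighbors; the MV scaling that a real codec applies to cross-list or cross-picture candidates is omitted.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CandidateMV:
    mv: Tuple[int, int]
    ref_list: int   # 0 or 1
    ref_pic: int    # reference picture index (a POC in a real codec)

def derive_mvp(neighbors: List[List[CandidateMV]],
               target_list: int, target_pic: int) -> Optional[Tuple[int, int]]:
    # First MV attribute search: target reference picture, given list then other list.
    first_search = [
        lambda c: c.ref_list == target_list and c.ref_pic == target_pic,
        lambda c: c.ref_list != target_list and c.ref_pic == target_pic,
    ]
    # Second MV attribute search: other reference pictures, given list then other list
    # (the selected MV would be scaled in a real codec).
    second_search = [
        lambda c: c.ref_list == target_list and c.ref_pic != target_pic,
        lambda c: c.ref_list != target_list and c.ref_pic != target_pic,
    ]
    for condition in first_search + second_search:
        for block in neighbors:          # the search order over neighboring blocks
            for cand in block:
                if condition(cand):
                    return cand.mv
    return None

neighbors = [[CandidateMV((2, 1), ref_list=1, ref_pic=5)]]
print(derive_mvp(neighbors, target_list=0, target_pic=3))  # (2, 1) via the second search
```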
Abstract:
An electronic device controlling method and a user registration method are provided. In the electronic device controlling method, when a target device receives a first and a second control command which are identical but are performed by different users, simultaneously or separately, the target device performs a first predetermined operation based on an identity of the user performing the first control command, and performs a second predetermined operation based on an identity of the user performing the second control command. In the user registration method, a registered identity model corresponding to a user to be registered is established according to identity information of the user, and is mapped to a user profile comprising a relationship between the control commands and the predetermined operations. By acquiring the registered information, the target device is able to perform the user-dependent operations.
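A minimal sketch of the registration and dispatch flow, assuming the identity model is reduced to a plain user ID and the user profile to a command-to-operation dictionary; a real system would match voice or face features rather than an explicit ID.

```python
class TargetDevice:
    def __init__(self):
        self.profiles = {}  # user_id -> {control command: predetermined operation}

    def register_user(self, user_id: str, profile: dict):
        """Map a registered identity model to a user profile."""
        self.profiles[user_id] = profile

    def handle_command(self, user_id: str, command: str) -> str:
        """Perform the user-dependent operation for an identical control command."""
        profile = self.profiles.get(user_id, {})
        return profile.get(command, "default operation")

device = TargetDevice()
device.register_user("alice", {"turn on": "switch to Alice's favourite channel"})
device.register_user("bob", {"turn on": "switch to Bob's favourite channel"})
# The same command yields different operations depending on who issued it.
print(device.handle_command("alice", "turn on"))
print(device.handle_command("bob", "turn on"))
```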
Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from one or more spatial and temporal MVP candidates. The method determines a value of a flag in a video bitstream, where the flag is utilized for selectively disabling use of one or more temporal MVP candidates for motion vector prediction. The method selects, based on an index derived from the video bitstream, the MVP from one or more non-temporal MVP candidates responsive to the flag indicating that said one or more temporal MVP candidates are not to be utilized for motion vector prediction. Further, the method provides the MVP for the current block.
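A minimal sketch of the selection logic, assuming the flag and the index have already been parsed from the bitstream into plain values; actual bitstream parsing and the exact rules for constructing the candidate lists are outside the abstract's scope.

```python
from typing import List, Tuple

def select_mvp(spatial_candidates: List[Tuple[int, int]],
               temporal_candidates: List[Tuple[int, int]],
               temporal_mvp_disabled_flag: bool,
               mvp_index: int) -> Tuple[int, int]:
    if temporal_mvp_disabled_flag:
        # Temporal MVP candidates are not used; the index addresses only the
        # non-temporal (spatial) candidates.
        candidates = spatial_candidates
    else:
        candidates = spatial_candidates + temporal_candidates
    return candidates[mvp_index]

print(select_mvp([(1, 0), (0, 2)], [(5, 5)],
                 temporal_mvp_disabled_flag=True, mvp_index=1))  # (0, 2)
```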
Abstract:
A method and apparatus for performing hybrid multihypothesis prediction during video coding of a coding unit include: processing a plurality of sub-coding units in the coding unit; and performing disparity vector (DV) derivation when the coding unit is processed by a 3D or multi-view coding tool, or performing block vector (BV) derivation when the coding unit is processed by intra picture block copy (IntraBC) mode. The step of performing DV or BV derivation includes deriving a plurality of vectors for multihypothesis motion-compensated prediction of a specific sub-coding unit from at least one other sub-coding/coding unit. The at least one other sub-coding/coding unit is coded before the corresponding DV or BV is derived for multihypothesis motion-compensated prediction of the specific sub-coding unit. A linear combination of a plurality of pixel values derived from the plurality of vectors is used as a predicted pixel value of the specific sub-coding unit.
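A minimal sketch of the linear combination step, assuming two hypotheses with equal weights and a single shared reference array; how the DVs/BVs are derived and how many hypotheses are combined are codec-specific details the abstract does not fix.

```python
import numpy as np

def fetch_prediction(reference: np.ndarray, top_left: tuple, vector: tuple, size: int):
    """Read a size x size block from `reference`, displaced by `vector` (x, y)."""
    y = top_left[0] + vector[1]
    x = top_left[1] + vector[0]
    return reference[y:y + size, x:x + size]

def multihypothesis_prediction(reference, top_left, vectors, size, weights=None):
    """Linear combination of the pixel values derived from each vector."""
    if weights is None:
        weights = [1.0 / len(vectors)] * len(vectors)  # equal-weight combination
    blocks = [fetch_prediction(reference, top_left, v, size) for v in vectors]
    return sum(w * b for w, b in zip(weights, blocks))

ref = np.arange(64, dtype=float).reshape(8, 8)
pred = multihypothesis_prediction(ref, (2, 2), [(1, 0), (0, 1)], size=2)
print(pred)  # average of the two displaced 2x2 blocks
```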
Abstract:
A method and apparatus for deriving MVP (motion vector predictor) for Skip or Merge mode in 3D video coding are disclosed. In one embodiment, the method comprises determining an MVP candidate set for a selected block and selecting one MVP from an MVP list for motion vector coding of the block. The MVP candidate set may comprise multiple spatial MVP candidates associated with neighboring blocks and one inter-view candidate, and the MVP list is selected from the MVP candidate set. The MVP list may consist of only one MVP candidate or multiple MVP candidates. If only one MVP candidate is used, there is no need to incorporate an MVP index associated with the MVP candidate in the video bitstream corresponding to the three-dimensional video coding. Also, the MVP candidate can be the first available MVP candidate from the MVP candidate set according to a pre-defined order.
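A minimal sketch of the single-candidate case, assuming the inter-view candidate is checked first in the pre-defined order; the actual ordering and availability rules come from the 3D video coding specification and are not stated in the abstract.

```python
from typing import List, Optional, Tuple

MV = Tuple[int, int]

def build_mvp_list(inter_view: Optional[MV],
                   spatial: List[Optional[MV]],
                   single_candidate: bool = True) -> List[MV]:
    """Build the MVP list from the candidate set according to a pre-defined order."""
    ordered = [inter_view] + spatial        # assumed order: inter-view first, then spatial
    available = [c for c in ordered if c is not None]
    if single_candidate:
        # Only the first available candidate is kept, so no MVP index needs to be
        # signalled in the video bitstream.
        return available[:1]
    return available

print(build_mvp_list(None, [None, (2, -1), (0, 3)]))  # [(2, -1)]
```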