Abstract:
A method and apparatus for processing reconstructed video using an in-loop filter in a video coding system are disclosed. The method uses a chroma in-loop filter indication to indicate whether the chroma components are processed by the in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether in-loop filter processing is applied to an entire picture using the same in-loop filter information, or to each block of the picture using individual in-loop filter information. Various embodiments according to the present invention that increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding, such as the property of quadtree-based partitioning, boundary conditions of a block, sharing of in-loop filter information between luma and chroma components, indexing into a set of in-loop filter information, and prediction of in-loop filter information.
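The conditional flag structure described above can be sketched as follows. This is an illustrative model of the signaling order, not the actual bitstream syntax; the flag names are hypothetical.

```python
def signal_in_loop_filter_flags(luma_filter_on, chroma_filter_on, picture_level):
    """Return the flags a bitstream writer would emit, as (name, value) pairs.

    The chroma indication is signaled only when the luma indication shows
    that the in-loop filter is applied to the luma component; the same
    holds for the picture-level vs. per-block filter-information flag.
    Flag names are illustrative, not the codec's actual syntax elements.
    """
    flags = [("luma_in_loop_filter_flag", luma_filter_on)]
    if luma_filter_on:
        # Chroma indication depends on the luma indication being set.
        flags.append(("chroma_in_loop_filter_flag", chroma_filter_on))
        # Additional flag: same filter info for the entire picture vs.
        # individual filter info for each block.
        flags.append(("picture_level_filter_info_flag", picture_level))
    return flags
```

When the luma flag is off, neither dependent flag is transmitted, which is where the bit savings come from.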
Abstract:
An audio device is provided. The audio device includes processing circuitry which is connected to a loudspeaker and a microphone. The processing circuitry is configured to play an echo reference signal from a far end on the loudspeaker, and to perform an acoustic echo cancellation (AEC) process with an AEC adaptive filter, using the echo reference signal and an acoustic signal received by the microphone. The processing circuitry repeatedly determines a first status of the loudspeaker according to a relation between the played echo reference signal and the received acoustic signal, and transmits a first status signal indicating the first status of the loudspeaker to the far end through a cloud network.
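A common choice for the AEC adaptive filter mentioned above is a normalized LMS (NLMS) filter that estimates the echo path from the reference signal and subtracts the estimated echo from the microphone signal. The sketch below is a minimal single-channel version under that assumption; the abstract does not specify the filter type.

```python
import numpy as np

def nlms_aec(reference, mic, taps=32, mu=0.5, eps=1e-8):
    """Minimal NLMS echo canceller (an illustrative AEC adaptive filter).

    `reference` is the far-end echo reference played on the loudspeaker,
    `mic` is the signal received by the microphone. Returns the residual
    (echo-cancelled) signal.
    """
    w = np.zeros(taps)        # adaptive filter coefficients (echo-path estimate)
    x_buf = np.zeros(taps)    # sliding window of recent reference samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        y_hat = w @ x_buf     # estimated echo at the microphone
        e = mic[n] - y_hat    # residual after cancellation
        # NLMS update, normalized by the reference energy in the window.
        w += mu * e * x_buf / (x_buf @ x_buf + eps)
        out[n] = e
    return out
```

The relation between the played reference and the received signal (e.g. the residual echo energy after adaptation) is the kind of quantity from which the loudspeaker status could be judged.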
Abstract:
A method and apparatus for processing transform blocks of video data performed by a video encoder or a video decoder are disclosed. A plurality of quantization matrix sets are determined, where each quantization matrix set includes one or more quantization matrices corresponding to different block types. For a transform block corresponding to a current block in a current picture, a selected quantization matrix set is determined from the plurality of quantization matrix sets for the transform block. A quantization or de-quantization process is then applied to the transform block using a corresponding quantization matrix from the selected quantization matrix set.
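The two-level lookup described above (select a matrix set, then pick the matrix for the block type) can be sketched as follows. The data layout and names here are illustrative assumptions, not the codec's actual structures.

```python
def select_and_dequantize(coeffs, qm_sets, set_index, block_type, scale):
    """De-quantize a transform block with a matrix from the selected set.

    `qm_sets` maps a set index to {block_type: quantization matrix};
    `scale` stands in for the QP-derived scaling. All names are
    illustrative, not actual codec syntax.
    """
    qm = qm_sets[set_index][block_type]   # matrix for this block type
    # Element-wise de-quantization: coefficient * matrix entry * scale.
    return [[c * q * scale for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qm)]
```

The quantization direction would divide by the matrix entries instead of multiplying; only the de-quantization side is shown.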
Abstract:
The invention provides a motion prediction method. First, a plurality of candidate units corresponding to a current unit of a current frame is determined. A plurality of motion vectors of the candidate units is then obtained. A plurality of scaling factors of the candidate units is then calculated according to respective temporal distances, which depend on the reference frames of the motion vectors. The motion vectors of the candidate units are then scaled according to the scaling factors to obtain a plurality of scaled motion vectors. The scaled motion vectors are ranked, and a subset of the highest-ranking motion vectors is identified to be included in a candidate set. Finally, a motion vector predictor for motion prediction of the current unit is selected from the candidate set.
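The scaling step above can be sketched concretely: each candidate motion vector is multiplied by the ratio of the current unit's temporal distance to the candidate's own temporal distance. The tuple layout below is an illustrative assumption.

```python
def scale_motion_vectors(candidates, current_td):
    """Scale candidate MVs by the ratio of temporal distances.

    Each candidate is (mv_x, mv_y, td), where td is the temporal distance
    between the candidate's frame and its reference frame; `current_td`
    is the distance between the current frame and its reference frame.
    """
    scaled = []
    for mv_x, mv_y, td in candidates:
        s = current_td / td          # scaling factor from temporal distances
        scaled.append((mv_x * s, mv_y * s))
    return scaled
```

After scaling, the vectors would be ranked and the highest-ranking subset kept as the candidate set from which the predictor is chosen.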
Abstract:
In one implementation, a method codes video pictures, in which each of the video pictures is partitioned into LCUs (largest coding units). The method operates by receiving a current LCU, partitioning the current LCU adaptively into multiple leaf CUs, determining whether a current leaf CU has at least one nonzero quantized transform coefficient according to both Prediction Mode (PredMode) and Coded Block Flag (CBF), and incorporating quantization parameter information for the current leaf CU into a video bitstream if the current leaf CU has at least one nonzero quantized transform coefficient. If the current leaf CU has no nonzero quantized transform coefficient, the method excludes the quantization parameter information for the current leaf CU from the video bitstream.
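The signaling decision above can be sketched as a simple predicate over PredMode and the CBFs. This is a simplified illustration; the actual condition in a codec involves the full CBF hierarchy.

```python
def needs_qp_signaling(pred_mode, cbf_luma, cbf_cb, cbf_cr):
    """Decide whether QP information for a leaf CU enters the bitstream.

    QP is signaled only when the CU has at least one nonzero quantized
    transform coefficient, judged here from PredMode and per-component
    coded block flags. In skip mode no coefficients are coded at all.
    Names and arguments are illustrative.
    """
    if pred_mode == "SKIP":
        return False                      # skip CUs carry no coefficients
    return bool(cbf_luma or cbf_cb or cbf_cr)
```

Excluding QP information for all-zero CUs saves bits without any loss, because the decoder never needs a QP where nothing is de-quantized.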
Abstract:
A method and apparatus for processing of coded video using in-loop processing are disclosed. Input data to the in-loop processing is received, where the input data corresponds to reconstructed or reconstructed-and-deblocked coding units of the picture. The input data is divided into multiple filter units, and each filter unit includes one or more boundary-aligned reconstructed or reconstructed-and-deblocked coding units. A candidate filter is then selected from a candidate filter set for the in-loop processing. The candidate filter set comprises at least two candidate filters, and said in-loop processing corresponds to an adaptive loop filter (ALF), adaptive offset (AO), or adaptive clipping (AC). The in-loop processing is then applied to one of the filter units to generate a processed filter unit by applying the selected candidate filter to all boundary-aligned reconstructed or reconstructed-and-deblocked coding units in said one of the filter units.
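Selection from a candidate filter set is typically done by picking the candidate that best restores the filter unit toward the original samples. The sketch below assumes a simple sum-of-squared-error criterion and represents filters as plain per-sample functions; both are illustrative assumptions.

```python
def select_filter(filter_unit, original, candidates):
    """Pick the candidate filter minimizing distortion on one filter unit.

    `filter_unit` holds the reconstructed (or deblocked) samples,
    `original` the corresponding source samples, and `candidates` a set
    of per-sample filter functions (stand-ins for ALF/AO/AC candidates).
    """
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Apply each candidate to the whole filter unit and keep the best.
    return min(candidates, key=lambda f: sse([f(x) for x in filter_unit], original))
```

The chosen candidate would then be applied uniformly to all boundary-aligned coding units inside that filter unit, as the abstract describes.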
Abstract:
A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data, so that the adaptive filter processing can be applied to the in-loop processed video data without waiting for the in-loop filter processing of a picture or an image unit to complete. In another embodiment, two adaptive filters derive their respective adaptive filter parameters from the same pre-in-loop video data. In yet another embodiment, a moving window is used for an image-unit-based coding system incorporating an in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from one or more corresponding image units.
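The key scheduling idea above is that parameter derivation and filter application use different inputs: parameters come from pre-in-loop data, application happens on in-loop output. The toy sketch below uses a single least-squares gain as a stand-in for real adaptive filter parameters, purely to show the two-phase split; the real filters are far richer.

```python
def derive_gain(pre_in_loop, source):
    """Derive an adaptive parameter (here a single least-squares gain)
    from PRE-in-loop data, so derivation can run in parallel with the
    in-loop filter instead of waiting for its output."""
    num = sum(p * s for p, s in zip(pre_in_loop, source))
    den = sum(p * p for p in pre_in_loop) or 1
    return num / den

def apply_gain(in_loop_output, gain):
    """Apply the pre-derived parameter to the IN-loop-processed data."""
    return [gain * x for x in in_loop_output]
```

Because `derive_gain` never reads the in-loop output, the encoder does not have to serialize the two stages, which is the latency benefit the abstract claims.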
Abstract:
A portable electronic device including an ultrasound transmitter, an ultrasound receiver, and a processing unit is provided. The ultrasound transmitter sends ultrasonic signals, while the ultrasound receiver receives reflected ultrasonic signals from an object. The ultrasound transmitter and the ultrasound receiver are disposed to form a reference axis. The processing unit processes the reflected ultrasonic signals to obtain a time-frequency distribution thereof, and determines a 1D gesture corresponding to projection loci of movements of the object on the reference axis according to the time-frequency distribution.
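One simple way to realize the time-frequency analysis above is via the Doppler shift of the echo: movement of the object along the reference axis shifts the reflected frequency relative to the transmitted ultrasound frequency. The sketch below classifies a single window from its dominant spectral peak; a real device would use a full time-frequency distribution over many windows, and all parameters here are illustrative.

```python
import numpy as np

def classify_1d_gesture(received, fs, f_tx):
    """Classify motion along the reference axis from the Doppler shift.

    `received` is one window of the reflected ultrasonic signal sampled
    at `fs` Hz, and `f_tx` is the transmitted ultrasound frequency. A
    dominant echo frequency above f_tx means the object approaches the
    device; below f_tx, it recedes. Toy single-window version.
    """
    windowed = received * np.hanning(len(received))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(received), 1 / fs)
    f_peak = freqs[np.argmax(spectrum)]               # dominant echo frequency
    if f_peak > f_tx:
        return "toward"
    if f_peak < f_tx:
        return "away"
    return "still"
```

Tracking the sign and magnitude of this shift over successive windows yields the projection locus of the movement on the reference axis, from which the 1D gesture is determined.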