Abstract:
A wearable device is provided. The wearable device includes a photon sensor, a processor, and an output unit. The photon sensor senses light reflected from a specific region of an object and transforms the sensed light into a plurality of electric-signal components. The processor receives the electric-signal components sensed within a period to form a dimensional sensing signal. The processor extracts a feature of a waveform of the dimensional sensing signal and determines, according to that feature, whether a predetermined heart condition of the object is present, generating a determination signal. The output unit is coupled to the processor; it receives the determination signal and generates an alarm signal according to the determination signal.
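The waveform-feature step above can be illustrated with a small sketch. Assuming the dimensional sensing signal is a one-dimensional sample array, the hypothetical `interbeat_intervals` helper below extracts one plausible feature, the sample intervals between rising threshold crossings; the crossing rule and all names are assumptions for illustration, not the patented detection logic.

```python
def interbeat_intervals(signal, threshold):
    """Hypothetical feature extraction for a 1-D sensing signal.

    Finds rising threshold crossings (a crude peak proxy) and
    returns the sample intervals between successive crossings.
    Irregular intervals could then feed a rule for detecting a
    predetermined heart condition.
    """
    peaks = [i for i in range(1, len(signal))
             if signal[i - 1] < threshold <= signal[i]]
    # Pairwise differences between successive crossing positions.
    return [b - a for a, b in zip(peaks, peaks[1:])]
```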
Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from one or more spatial and temporal MVP candidates. The method determines the value of a flag in a video bitstream, where the flag is utilized for selectively disabling use of one or more temporal MVP candidates for motion vector prediction. The method selects, based on an index derived from the video bitstream, the MVP from one or more non-temporal MVP candidates responsive to the flag indicating that said one or more temporal MVP candidates are not to be utilized for motion vector prediction. Further, the method provides the MVP for the current block.
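A minimal sketch of the selection logic described above, assuming the candidate list, flag, and index have already been parsed from the bitstream (the `select_mvp` name and the `(mv, is_temporal)` tuple layout are illustrative assumptions, not the normative derivation process):

```python
def select_mvp(candidates, temporal_enabled, mvp_index):
    """Pick an MVP from a candidate list.

    candidates:       list of (motion_vector, is_temporal) pairs
    temporal_enabled: mirrors the bitstream flag; when False,
                      temporal candidates are excluded
    mvp_index:        index parsed from the bitstream
    """
    if not temporal_enabled:
        # Temporal MVP candidates are disabled: keep only
        # non-temporal (spatial) candidates.
        usable = [mv for mv, is_temporal in candidates if not is_temporal]
    else:
        usable = [mv for mv, _ in candidates]
    return usable[mvp_index]
```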
Abstract:
A system and method of content adaptive pixel intensity processing are described. The method includes receiving a predefined set of processed video data, where the predefined set of processed video data is derived from a predefined set of original video data; deriving range information associated with an original maximum value and an original minimum value for the predefined set of original video data; and adaptively clipping the pixel intensity of the predefined set of processed video data to a range derived from the range information. The range information is incorporated in a bitstream and represented in the form of the original maximum value and the original minimum value, prediction values associated with a reference maximum value and a reference minimum value, or a range index associated with a predefined range set.
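The adaptive clipping step can be sketched as follows, assuming the range information has already been decoded into an explicit minimum and maximum (the `adaptive_clip` helper is a hypothetical illustration; the abstract also allows the range to be signaled as prediction values or a range index):

```python
def adaptive_clip(pixels, range_info):
    """Clip every pixel intensity to [lo, hi].

    range_info is assumed here to be the decoded
    (original_min, original_max) pair from the bitstream.
    """
    lo, hi = range_info
    return [min(max(p, lo), hi) for p in pixels]
```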
Abstract:
An apparatus and method for sample adaptive offset (SAO) to restore the intensity shift of processed video data are disclosed. At the encoder side, the processed video data, corresponding to reconstructed video data, deblocked-reconstructed video data, or adaptive-loop-filtered and deblocked-reconstructed video data, are partitioned into regions smaller than a picture. The region partition information is signaled in the video bitstream at a position before the intensity offset values syntax. At the decoder side, the processed video data are partitioned into regions according to the partition information parsed from the bitstream at a position before the intensity offset values syntax. Region-based SAO is applied to each region based on the intensity offset for the category of the selected region-based SAO type.
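As a rough illustration of region-based SAO at the decoder, the sketch below adds a per-category offset to the pixels of one region, using a band-offset-style classifier as a stand-in for the selected SAO type (the function names and the `p >> 3` banding rule are assumptions for illustration, not the claimed syntax):

```python
def apply_region_sao(region_pixels, classify, offsets, bit_depth=8):
    """Add the offset for each pixel's category, then clip
    to the valid sample range for the given bit depth."""
    max_val = (1 << bit_depth) - 1
    out = []
    for p in region_pixels:
        cat = classify(p)
        q = p + offsets.get(cat, 0)   # 0 offset for unsignaled categories
        out.append(min(max(q, 0), max_val))
    return out

# Example classifier: 32 equal bands for 8-bit samples.
band_of = lambda p: p >> 3
```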
Abstract:
A method and apparatus for sharing context among different SAO syntax elements for a video coder are disclosed. Embodiments of the present invention apply CABAC coding to multiple SAO syntax elements according to a joint context model, wherein the multiple SAO syntax elements share the joint context. The multiple SAO syntax elements may correspond to SAO merge left flag and SAO merge up flag. The multiple SAO syntax elements may correspond to SAO merge left flags or merge up flags associated with different color components. The joint context model can be derived based on joint statistics of the multiple SAO syntax elements. Embodiments of the present invention code the SAO type index using truncated unary binarization, using CABAC with only one context, or using CABAC with context mode for the first bin associated with the SAO type index and with bypass mode for any remaining bin.
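Truncated unary binarization, mentioned above for the SAO type index, can be sketched directly (a generic definition of the scheme, not the exact HEVC binarization tables):

```python
def truncated_unary(value, max_value):
    """Binarize `value` as `value` ones followed by a terminating
    zero; the terminating zero is omitted when value == max_value,
    since the decoder can infer the end of the bin string."""
    bins = [1] * value
    if value < max_value:
        bins.append(0)
    return bins
```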
Abstract:
A physiological monitoring system is provided. The physiological monitoring system includes a feature extraction device, an identifier, a processor, a physiological sensing device, and a vital-sign detector. The feature extraction device extracts biological information of an object to generate an extraction signal. The identifier receives the extraction signal and verifies an identity of the object according to the extraction signal. The processor receives the extraction signal and obtains at least one biological feature of the object according to the extraction signal. The physiological sensing device senses a physiological feature to generate a bio-signal. The vital-sign detector estimates vital-sign data of the object according to the bio-signal and the at least one biological feature.
Abstract:
A method and apparatus for three-dimensional and scalable video coding are disclosed. Embodiments according to the present invention determine a motion information set associated with the video data, wherein at least part of the motion information set is made available or unavailable conditionally depending on the video data type. The video data type may correspond to depth data, texture data, a view associated with the video data in three-dimensional video coding, or a layer associated with the video data in scalable video coding. The motion information set is then provided for coding or decoding of the video data, other video data, or both. At least one flag may be used to indicate whether part of the motion information set is available or unavailable. Alternatively, a coding profile for the video data may be used to determine whether the motion information set is available based on the video data type.
Abstract:
A method of SAO (sample-adaptive offset) processing is disclosed, where EO classification is based on a composite EO type group. The composite EO type group comprises at least one first EO type from a first EO type group and at least one second EO type from a second EO type group. The first EO type group determines the EO classification based on the current reconstructed pixel and two neighboring reconstructed pixels, and the second EO type group determines the EO classification based on weighted outputs of the current reconstructed pixel and a number of neighboring reconstructed pixels. A method of inter-layer SAO processing is also disclosed. An inter-layer reference picture for an enhancement layer is generated from the base layer (BL) reconstructed picture, and the inter-layer SAO information is determined, where at least a portion of the inter-layer SAO information is predicted or re-used from the BL SAO information.
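The first EO type group's classification, based on the current reconstructed pixel and two neighboring pixels, matches the classic sign-based edge-offset rule, which can be sketched as follows (the category numbering is one common convention, used here as an assumption):

```python
def eo_category(left, cur, right):
    """Classify the current reconstructed pixel against its two
    neighbors along one EO direction using sign comparisons."""
    def sign(x):
        return (x > 0) - (x < 0)
    s = sign(cur - left) + sign(cur - right)
    # s == -2: local minimum (valley) -> category 1
    # s == -1: concave corner         -> category 2
    # s == +1: convex corner          -> category 3
    # s == +2: local maximum (peak)   -> category 4
    # otherwise: no edge offset       -> category 0
    return {-2: 1, -1: 2, 1: 3, 2: 4}.get(s, 0)
```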
Abstract:
A method and apparatus for a 3D video coding system are disclosed. Embodiments according to the present invention apply a SAO (sample adaptive offset) process to at least one dependent-view image of the processed multi-view images when processed multi-view images are received. Embodiments also apply the SAO process to at least one dependent-view image of the processed multi-view images or to at least one depth map of the processed multi-view depth maps when both processed multi-view images and processed multi-view depth maps are received. SAO can be applied to each color component of the processed multi-view images or the processed multi-view depth maps. The SAO parameters associated with a target region in one dependent-view image or in one depth map corresponding to one view may be shared with or predicted from second SAO parameters associated with a source region corresponding to another view.