11. CAMERA PARAMETER-ASSISTED VIDEO FRAME RATE UP CONVERSION
    Invention Application (Granted)

    Publication No.: US20150365675A1

    Publication Date: 2015-12-17

    Application No.: US14832944

    Filing Date: 2015-08-21

    Abstract: This disclosure describes methods and apparatus for decoding data. In one aspect, the method comprises decoding encoded video data to obtain decoded video frame data, the encoded video data comprising encoded video frame data encoded at a first frame rate and embedded data. The method further comprises determining a camera parameter from the embedded data and up-converting the decoded video frame data to a second frame rate based on the camera parameter. The determined camera parameter may be, for example, a parameter associated with one or more of a zoom factor, an auto focus status, lens position information, frame luma information, an auto exposure (AE) convergence status, an automatic white balance (AWB) convergence status, global motion information, and frame blurriness information, and the like. An encoding device may embed the camera parameter(s) in an encoded video bit stream for a decoder to utilize during frame rate up-conversion.
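A decoder-side sketch of how such embedded camera parameters might steer the up-conversion strategy; the function name, parameter keys, and thresholds below are illustrative assumptions, not taken from the patent:

```python
def choose_upconversion_mode(params):
    """Pick an interpolation strategy for frame rate up-conversion.

    `params` is a dict of camera parameters recovered from the embedded
    data in the bit stream, e.g.
    {"zoom_factor": 1.0, "blurriness": 0.1, "ae_converged": True}.
    """
    # During a zoom, or while auto exposure is still converging, motion
    # estimation tends to be unreliable, so fall back to frame repetition.
    if params.get("zoom_factor", 1.0) != 1.0 or not params.get("ae_converged", True):
        return "frame_repeat"
    # Very blurry frames also yield poor motion vectors; a plain frame
    # average avoids motion artifacts at low cost.
    if params.get("blurriness", 0.0) > 0.5:
        return "frame_average"
    # Otherwise full motion-compensated interpolation is worthwhile.
    return "motion_compensated"
```

The thresholds would in practice be tuned against the encoder's parameter encoding.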


12. ENCODER-ASSISTED ADAPTIVE VIDEO FRAME INTERPOLATION
    Invention Application (Granted)

    Publication No.: US20140376637A1

    Publication Date: 2014-12-25

    Application No.: US14478835

    Filing Date: 2014-09-05

    Abstract: The disclosure is directed to techniques for encoder-assisted adaptive interpolation of video frames. According to the disclosed techniques, an encoder generates information to assist a decoder in interpolation of a skipped video frame, i.e., an S frame. The information permits the decoder to reduce visual artifacts in the interpolated frame and thereby achieve improved visual quality. The information may include interpolation equation labels that identify selected interpolation equations to be used by the decoder for individual video blocks. As an option, to conserve bandwidth, the equation labels may be transmitted for only selected video blocks that meet a criterion for encoder-assisted interpolation. Other video blocks without equation labels may be interpolated according to a default interpolation technique.
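A minimal sketch of per-block equation selection at the decoder; the label names and the default pixel-wise average are illustrative, as the patent does not specify these particular equations:

```python
def interpolate_block(prev_blk, next_blk, label=None):
    """Interpolate one block of a skipped (S) frame.

    `prev_blk` and `next_blk` are flat lists of pixel values from the
    frames surrounding the skipped frame. `label` is an interpolation
    equation label transmitted by the encoder for selected blocks;
    blocks without a label use the default interpolation.
    """
    if label == "copy_prev":
        return list(prev_blk)       # reuse the co-located previous block
    if label == "copy_next":
        return list(next_blk)       # reuse the co-located next block
    # Default technique: pixel-wise average of the neighboring frames.
    return [(p + n) // 2 for p, n in zip(prev_blk, next_blk)]
```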


13. MULTIVIEW SYNTHESIS AND PROCESSING SYSTEMS AND METHODS
    Invention Application (Pending, Published)

    Publication No.: US20140098100A1

    Publication Date: 2014-04-10

    Application No.: US14046858

    Filing Date: 2013-10-04

    Abstract: Certain embodiments relate to systems and methods for presenting an autostereoscopic, 3-dimensional image to a user. The system may comprise a view rendering module to generate multi-view autostereoscopic images from a limited number of reference views, enabling users to view the content from different angles without the need of glasses. Some embodiments may employ two or more reference views to generate virtual reference views and provide high quality stereoscopic images. Certain embodiments may use a combination of disparity-based depth map processing, view interpolation and smart blending of virtual views, artifact reduction, depth cluster guided hole filling, and post-processing of synthesized views.
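One piece of the pipeline, view interpolation between two reference views, can be sketched as a pixel-wise linear blend weighted by the virtual viewpoint position; the real system blends disparity-compensated views, which this simplified sketch omits:

```python
def blend_views(left, right, alpha):
    """Linearly blend two reference views into a virtual view.

    `alpha` is the virtual viewpoint position: 0.0 reproduces the left
    view, 1.0 the right view, and intermediate values mix the two.
    Views are flat lists of pixel intensities of equal length.
    """
    return [(1.0 - alpha) * l + alpha * r for l, r in zip(left, right)]
```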


    Apparatus and methods for object detection using machine learning processes

    Publication No.: US12073611B2

    Publication Date: 2024-08-27

    Application No.: US17561299

    Filing Date: 2021-12-23

    CPC classification number: G06V10/82 G06V10/25 G06V10/72 G06V10/764 G06V40/107

    Abstract: Methods, systems, and apparatuses are provided to automatically detect objects within images. For example, an image capture device may capture an image, and may apply a trained neural network to the image to generate an object value and a class value for each of a plurality of portions of the image. Further, the image capture device may determine, for each of the plurality of image portions, a confidence value based on the object value and the class value corresponding to each image portion. The image capture device may also detect an object within at least one image portion based on the confidence values. Further, the image capture device may output a bounding box corresponding to the at least one image portion. The bounding box defines an area of the image that includes one or more objects.
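A sketch of the confidence-combination and thresholding step; the field names, the multiplicative combination of object and class values, and the threshold are assumptions for illustration:

```python
def detect_objects(portions, threshold=0.5):
    """Detect objects among image portions scored by a neural network.

    Each portion is a dict with an objectness score 'object', a class
    probability 'class', and a bounding box 'box'. The per-portion
    confidence is the product of the two scores; portions at or above
    `threshold` are reported with their bounding boxes.
    """
    detections = []
    for p in portions:
        confidence = p["object"] * p["class"]
        if confidence >= threshold:
            detections.append({"box": p["box"], "confidence": confidence})
    return detections
```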

    Three-dimensional scan registration with deformable models

    Publication No.: US11769263B2

    Publication Date: 2023-09-26

    Application No.: US17144102

    Filing Date: 2021-01-07

    Abstract: Systems and techniques are provided for registering three-dimensional (3D) images to deformable models. An example method can include determining, based on an image of a target and associated depth information, a 3D mesh of the target; determining different sets of rotation and translation parameters based on modifications to rotation and translation parameters of the 3D mesh; generating, based on the different sets of rotation and translation parameters, different 3D meshes having different orientations, different poses, and/or different alignments relative to the target; determining different sets of model parameters associated with the different 3D meshes, based on modifications to the different sets of rotation and translation parameters; generating, based on the different sets of model parameters, different additional 3D meshes having different orientations, different poses, and/or different alignments relative to the target; and selecting a final 3D mesh of the target from the different additional 3D meshes.
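The final selection step can be sketched as picking, from the candidate meshes, the one with the lowest alignment error against the target; the error metric below (mean squared vertex-to-point distance over corresponding points) is an illustrative assumption:

```python
def select_final_mesh(candidates, target_points):
    """Choose the candidate 3D mesh whose vertices best fit the target.

    Each candidate is a list of (x, y, z) vertices paired one-to-one
    with `target_points`; the score is the mean squared distance
    between corresponding vertices and target points.
    """
    def alignment_error(mesh):
        return sum(
            (vx - tx) ** 2 + (vy - ty) ** 2 + (vz - tz) ** 2
            for (vx, vy, vz), (tx, ty, tz) in zip(mesh, target_points)
        ) / len(mesh)

    return min(candidates, key=alignment_error)
```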

    Systems and methods for non-obstacle area detection

    Publication No.: US10395377B2

    Publication Date: 2019-08-27

    Application No.: US15872672

    Filing Date: 2018-01-16

    Abstract: A method performed by an electronic device is described. The method includes generating a depth map of a scene external to a vehicle. The method also includes performing first processing in a first direction of the depth map to determine a first non-obstacle estimation of the scene. The method also includes performing second processing in a second direction of the depth map to determine a second non-obstacle estimation of the scene. The method further includes combining the first non-obstacle estimation and the second non-obstacle estimation to determine a non-obstacle map of the scene. The combining comprises selectively using a first reliability map of the first processing and/or a second reliability map of the second processing. The method additionally includes navigating the vehicle using the non-obstacle map.
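A per-cell sketch of the combining step, using the reliability maps to choose between the two estimations; the flat grid layout and the pick-the-more-reliable rule are illustrative assumptions:

```python
def combine_non_obstacle(first_est, second_est, first_rel, second_rel):
    """Combine two non-obstacle estimations into one non-obstacle map.

    All arguments are flat lists over the same grid of cells: boolean
    non-obstacle estimations and their per-cell reliability scores.
    Each output cell takes the estimation whose reliability is higher.
    """
    return [
        f if fr >= sr else s
        for f, s, fr, sr in zip(first_est, second_est, first_rel, second_rel)
    ]
```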

    SYSTEMS AND METHODS FOR DETERMINING FEATURE POINT MOTION

    Publication No.: US20180040133A1

    Publication Date: 2018-02-08

    Application No.: US15231370

    Filing Date: 2016-08-08

    Abstract: A method performed by an electronic device is described. The method includes obtaining a motion vector map based on at least two images. The motion vector map has fewer motion vectors than a number of pixels in each of the at least two images. The method also includes obtaining a feature point from one of the at least two images. The method further includes performing a matching operation between a template associated with the feature point and at least one search space based on the motion vector map. The method additionally includes determining a motion vector corresponding to the feature point based on the matching operation.
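The matching operation can be sketched in one dimension as a sum-of-absolute-differences (SAD) search of the template over a search space positioned by the motion vector map; the 1-D signals and the SAD metric are simplifying assumptions:

```python
def match_template(template, search_space):
    """Return the offset in `search_space` where `template` matches best.

    Both arguments are 1-D lists of pixel intensities; the best offset
    minimizes the sum of absolute differences between the template and
    the window of the search space starting at that offset.
    """
    best_offset, best_sad = 0, float("inf")
    for offset in range(len(search_space) - len(template) + 1):
        window = search_space[offset:offset + len(template)]
        sad = sum(abs(t - w) for t, w in zip(template, window))
        if sad < best_sad:
            best_offset, best_sad = offset, sad
    return best_offset
```

The winning offset, relative to the feature point's location, gives the feature point's motion vector.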
