CONTEXTUAL VIDEO CONTENT ADAPTATION BASED ON TARGET DEVICE
    21.
    Invention patent application (under examination, published)

    Publication number: US20160359937A1

    Publication date: 2016-12-08

    Application number: US15169641

    Filing date: 2016-05-31

    Applicant: Apple Inc.

    Abstract: Methods and apparatus for contextual video content adaptation are disclosed. Video content is adapted based on any number of criteria such as a target device type, viewing conditions, network conditions or various use cases, for example. A target adaptation of content may be defined for a specified video source. For example, based on receiving a request from a portable device for a live sports feed, a shortened, reduced-resolution version of the live sports feed video may be defined for the portable device. The source content may be accessed and adapted (e.g., temporally, spatially, etc.) and an adapted version of the content generated. For example, the source content may be cropped to a particular spatial region of interest and/or reduced in length to a particular scene. The generated adaptation may be transmitted to a device in response to the request, or stored to a storage device.
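
    As a rough illustration of the adaptation decision described above, the Python sketch below (class names, fields, and thresholds are hypothetical, not taken from the patent) maps a requesting device's context to target adaptation parameters:

    # Minimal sketch: derive a target adaptation from device type, screen size,
    # and network conditions. All names and thresholds are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RequestContext:
        device_type: str        # e.g. "portable", "tv"
        screen_width: int       # pixels
        network_kbps: int       # estimated downlink bandwidth

    @dataclass
    class TargetAdaptation:
        max_width: int                  # spatial constraint
        max_duration_s: Optional[int]   # temporal constraint (None = full length)
        bitrate_kbps: int

    def define_target_adaptation(ctx: RequestContext) -> TargetAdaptation:
        """Define an adaptation of a specified video source for the requesting device."""
        if ctx.device_type == "portable" or ctx.screen_width <= 720:
            # Shortened, reduced-resolution version, e.g. for a live sports feed.
            return TargetAdaptation(max_width=640,
                                    max_duration_s=120,
                                    bitrate_kbps=min(ctx.network_kbps, 1500))
        return TargetAdaptation(max_width=1920,
                                max_duration_s=None,
                                bitrate_kbps=min(ctx.network_kbps, 8000))

    print(define_target_adaptation(RequestContext("portable", 640, 2000)))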


    PREENCODER ASSISTED VIDEO ENCODING
    22.
    Invention patent application (under examination, published)

    Publication number: US20150350686A1

    Publication date: 2015-12-03

    Application number: US14290304

    Filing date: 2014-05-29

    Applicant: Apple Inc.

    CPC classification number: H04N19/42 H04N19/103

    Abstract: A method and system of using a pre-encoder to improve encoder efficiency. The encoder may conform to ITU-T H.265 and the pre-encoder may conform to ITU-T H.264. The pre-encoder may receive source video data and provide information regarding various coding modes, candidate modes, and a selected mode for coding the source video data. In an embodiment, the encoder may directly use the mode selected by the pre-encoder. In another embodiment, the encoder may receive both the source video data and information regarding the various coding modes (e.g., motion information, macroblock size, intra prediction direction, rate-distortion cost, and block pixel statistics) to simplify and/or refine its mode decision process. For example, the information provided by the pre-encoder may indicate unlikely modes, which need not be tested by the encoder, thus saving power and time.
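
    A minimal sketch of how pre-encoder hints might prune the encoder's mode-decision search, assuming per-mode rate-distortion costs as the hint; the data structures and pruning margin are illustrative assumptions, not the patent's method:

    # Sketch: an H.264-style pre-encoder pass supplies per-mode costs that prune
    # the candidate set before the H.265 encoder runs its own mode decision.
    from typing import Dict, List

    def prune_candidate_modes(candidates: List[str],
                              preencoder_rd_cost: Dict[str, float],
                              margin: float = 2.0) -> List[str]:
        """Keep modes whose pre-encoder rate-distortion cost is within `margin`
        times the best pre-encoder cost; unlikely modes are dropped untested."""
        best = min(preencoder_rd_cost[m] for m in candidates)
        return [m for m in candidates if preencoder_rd_cost[m] <= margin * best]

    def select_mode(candidates: List[str], encoder_rd_cost: Dict[str, float]) -> str:
        """The full encoder evaluates only the surviving candidates."""
        return min(candidates, key=lambda m: encoder_rd_cost[m])

    pre_cost = {"intra_dc": 120.0, "intra_angular": 95.0, "inter_16x16": 40.0, "skip": 300.0}
    enc_cost = {"intra_dc": 110.0, "intra_angular": 90.0, "inter_16x16": 35.0, "skip": 280.0}
    survivors = prune_candidate_modes(list(pre_cost), pre_cost)  # only "inter_16x16" survives
    print(select_mode(survivors, enc_cost))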


    Applications for decoder-side modeling of objects identified in decoded video data
    23.
    Granted invention patent

    Publication number: US11553200B2

    Publication date: 2023-01-10

    Application number: US16871378

    Filing date: 2020-05-11

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed for coding and decoding video data using object recognition and object modeling as a basis of coding and error recovery. A video decoder may decode coded video data received from a channel. The video decoder may perform object recognition on decoded video data obtained therefrom, and, when an object is recognized in the decoded video data, the video decoder may generate a model representing the recognized object. It may store data representing the model locally. The video decoder may communicate the model data to an encoder, which may form a basis of error mitigation and recovery. The video decoder also may monitor deviation patterns in the object model and associated patterns in audio content; if/when video decoding is suspended due to operational errors, the video decoder may generate simulated video data by analyzing audio data received during the suspension period and developing video data from the data model and deviation(s) associated with patterns detected from the audio data.
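
    The decoder-side flow might be organized roughly as below; object recognition and the audio-derived deviation are stubbed out, and every name is a hypothetical placeholder rather than an API from the patent:

    # Sketch: build an object model from decoded frames and, while decoding is
    # suspended, synthesize output from the model plus an audio-derived deviation.
    from typing import List, Optional

    class ObjectModel:
        def __init__(self, label: str):
            self.label = label
            self.observations: List[dict] = []   # e.g. object position per frame

        def update(self, observation: dict) -> None:
            self.observations.append(observation)

        def predict(self, audio_deviation: float) -> dict:
            """Extrapolate the last observation, nudged by an audio-derived deviation."""
            last = self.observations[-1] if self.observations else {"x": 0.0, "y": 0.0}
            return {"x": last["x"] + audio_deviation, "y": last["y"]}

    def recognize_object(frame: int) -> Optional[dict]:
        # Placeholder for a real detector; returns an observation or None.
        return {"x": float(frame), "y": 0.0}

    def decode_with_concealment(coded_frames: List[Optional[int]], audio_cues: List[float]):
        model = ObjectModel("ball")
        output = []
        for frame, cue in zip(coded_frames, audio_cues):
            if frame is not None:                    # normal decode path
                obs = recognize_object(frame)
                if obs is not None:
                    model.update(obs)
                output.append(("decoded", obs))
            else:                                    # decoding suspended: simulate
                output.append(("simulated", model.predict(cue)))
        return output

    print(decode_with_concealment([1, 2, None, None, 5], [0.0, 0.0, 0.4, 0.8, 0.0]))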

    In loop chroma deblocking filter
    24.
    Granted invention patent

    Publication number: US11102515B2

    Publication date: 2021-08-24

    Application number: US16890245

    Filing date: 2020-06-02

    Applicant: Apple Inc.

    Abstract: Chroma deblock filtering of reconstructed video samples may be performed to remove blockiness artifacts and reduce color artifacts without over-smoothing. In a first method, chroma deblocking may be performed for boundary samples of a smallest transform size, regardless of partitions and coding modes. In a second method, chroma deblocking may be performed when a boundary strength is greater than 0. In a third method, chroma deblocking may be performed regardless of boundary strengths. In a fourth method, the type of chroma deblocking to be performed may be signaled in a slice header by a flag. Furthermore, luma deblock filtering techniques may be applied to chroma deblock filtering.
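
    A simplified, HEVC-like sketch of the second method: a weak one-dimensional chroma filter applied across a block edge only when the boundary strength is greater than 0 (the delta formula and clipping range are illustrative, not the normative filter):

    # Sketch: filter the two samples adjacent to a block edge when boundary strength > 0.
    from typing import List

    def chroma_deblock_edge(p: List[int], q: List[int], bs: int, tc: int = 4) -> None:
        """p[-1] and q[0] are the chroma samples adjacent to the edge; both rows are
        modified in place with a clipped smoothing offset."""
        if bs <= 0:
            return                                    # skip filtering for zero boundary strength
        p0, q0 = p[-1], q[0]
        p1 = p[-2] if len(p) > 1 else p0
        q1 = q[1] if len(q) > 1 else q0
        delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3  # HEVC-like chroma offset
        delta = max(-tc, min(tc, delta))              # clip to +/- tc to avoid over-smoothing
        p[-1] = max(0, min(255, p0 + delta))
        q[0] = max(0, min(255, q0 - delta))

    row_p, row_q = [100, 102, 104], [130, 131, 132]
    chroma_deblock_edge(row_p, row_q, bs=2)
    print(row_p, row_q)   # edge samples move toward each other, reducing blockiness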

    Dynamic video configurations
    25.
    Granted invention patent

    Publication number: US11025933B2

    Publication date: 2021-06-01

    Application number: US15585581

    Filing date: 2017-05-03

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed for managing memory allocations when coding video data according to multiple codec configurations. According to these techniques, devices may negotiate parameters of a coding session that include parameters of a plurality of different codec configurations that may be used during the coding session. A device may estimate sizes of decoded picture buffers for each of the negotiated codec configurations and allocate a portion of its memory sized according to the largest of the estimated decoded picture buffers. Thereafter, the devices may exchange coded video data. The exchange may involve decoding coded data of reference pictures and storing the decoded reference pictures in the allocated memory. During the coding session, the devices may toggle among the different negotiated codec configurations. As they do, reallocations of memory may be avoided.
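
    A small sketch of the allocation strategy, assuming the decoded picture buffer (DPB) size scales with resolution, chroma format, and reference-frame count; all figures are illustrative:

    # Sketch: size the DPB once for the largest negotiated configuration so that
    # toggling between configurations mid-session requires no reallocation.
    from dataclasses import dataclass

    @dataclass
    class CodecConfig:
        name: str
        width: int
        height: int
        bytes_per_pixel: float    # e.g. 1.5 for 8-bit 4:2:0
        max_ref_frames: int

    def dpb_bytes(cfg: CodecConfig) -> int:
        return int(cfg.width * cfg.height * cfg.bytes_per_pixel * cfg.max_ref_frames)

    negotiated = [
        CodecConfig("h264_720p", 1280, 720, 1.5, 4),
        CodecConfig("hevc_1080p", 1920, 1080, 1.5, 6),
    ]
    dpb_pool = bytearray(max(dpb_bytes(c) for c in negotiated))
    print(len(dpb_pool), "bytes reserved covers every negotiated configuration")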

    Video coding techniques for multi-view video
    26.
    Granted invention patent

    Publication number: US10924747B2

    Publication date: 2021-02-16

    Application number: US15443342

    Filing date: 2017-02-27

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed for coding and decoding video captured as cube map images. According to these techniques, padded reference images are generated for use in predicting input data. A reference image is stored in a cube map format. A padded reference image is generated from the reference image in which image data of a first view contained in the reference image is replicated and placed adjacent to a second view contained in the cube map image. When coding a pixel block of an input image, a prediction search may be performed between the input pixel block and content of the padded reference image. When the prediction search identifies a match, the pixel block may be coded with respect to matching data from the padded reference image. Presence of replicated data in the padded reference image is expected to increase the likelihood that adequate prediction matches will be identified for input pixel block data, which will increase overall efficiency of the video coding.
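
    A toy NumPy sketch of building a padded reference for one cube face by replicating columns from a neighboring face; the face layout and neighbor choice are assumptions, and a real cube map may also require rotating or flipping the neighbor's edge:

    # Sketch: extend a face of the reference image with pixels replicated from the
    # adjacent face so a prediction search can match content across the face boundary.
    import numpy as np

    def pad_face_right(face: np.ndarray, right_neighbor: np.ndarray, pad: int) -> np.ndarray:
        """Append the first `pad` columns of the neighboring face to the right of `face`."""
        assert face.shape[0] == right_neighbor.shape[0]
        return np.concatenate([face, right_neighbor[:, :pad]], axis=1)

    front = np.full((4, 4), 10, dtype=np.uint8)    # toy "front" face
    right = np.full((4, 4), 200, dtype=np.uint8)   # toy "right" face
    padded_reference = pad_face_right(front, right, pad=2)
    print(padded_reference.shape)                  # (4, 6): face plus a replicated border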

    Efficient still image coding with video compression techniques
    27.
    Granted invention patent

    Publication number: US10812832B2

    Publication date: 2020-10-20

    Application number: US14732393

    Filing date: 2015-06-05

    Applicant: Apple Inc.

    Abstract: Coding techniques for image data may cause a still image to be converted to a “phantom” video sequence, which is coded by motion compensated prediction techniques. Thus, coded video data obtained from the coding operation may include temporal prediction references between frames of the video sequence. Metadata may be generated that identifies allocations of content from the still image to the frames of the video sequence. The coded data and the metadata may be transmitted to another device, whereupon they may be decoded by motion compensated prediction techniques and converted back to still image data. Other techniques may involve coding an image in both a base layer representation and at least one coded enhancement layer representation. The enhancement layer representation may be coded predictively with reference to the base layer representation. The coded base layer representation may be partitioned into a plurality of individually-transmittable segments and stored. Prediction references of elements of the enhancement layer representation may be confined to segments of the base layer representation that correspond to a location of those elements. That is, when a pixel block of an enhancement layer maps to a given segment of the base layer representation, prediction references are confined to that segment and do not reference portions of the base layer representation that may be found in other segments.
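
    A toy sketch of the "phantom" video idea, assuming a simple raster tiling of the still image; the metadata records which tile each frame carries so the receiving side can reassemble the image after decoding:

    # Sketch: allocate tiles of a still image to frames of a phantom video sequence,
    # with metadata mapping each frame back to its tile position for reassembly.
    import numpy as np

    def image_to_phantom_sequence(image: np.ndarray, tile: int):
        frames, metadata = [], []
        for y in range(0, image.shape[0], tile):
            for x in range(0, image.shape[1], tile):
                frames.append(image[y:y + tile, x:x + tile])
                metadata.append({"frame": len(frames) - 1, "y": y, "x": x})
        return frames, metadata   # frames would then be coded with motion-compensated prediction

    def phantom_sequence_to_image(frames, metadata, shape) -> np.ndarray:
        image = np.zeros(shape, dtype=frames[0].dtype)
        for m in metadata:
            f = frames[m["frame"]]
            image[m["y"]:m["y"] + f.shape[0], m["x"]:m["x"] + f.shape[1]] = f
        return image

    still = np.arange(64, dtype=np.uint8).reshape(8, 8)
    frames, meta = image_to_phantom_sequence(still, tile=4)
    assert np.array_equal(phantom_sequence_to_image(frames, meta, still.shape), still)
    print(len(frames), "phantom frames carry the 8x8 still image")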

    Contextual video content adaptation based on target device
    28.
    Granted invention patent

    Publication number: US10749923B2

    Publication date: 2020-08-18

    Application number: US15169641

    Filing date: 2016-05-31

    Applicant: Apple Inc.

    Abstract: Methods and apparatus for contextual video content adaptation are disclosed. Video content is adapted based on any number of criteria such as a target device type, viewing conditions, network conditions or various use cases, for example. A target adaptation of content may be defined for a specified video source. For example, based on receiving a request from a portable device for a live sports feed, a shortened, reduced-resolution version of the live sports feed video may be defined for the portable device. The source content may be accessed and adapted (e.g., temporally, spatially, etc.) and an adapted version of the content generated. For example, the source content may be cropped to a particular spatial region of interest and/or reduced in length to a particular scene. The generated adaptation may be transmitted to a device in response to the request, or stored to a storage device.
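
    Complementing the parameter-selection sketch under the related application above, a short sketch of the cropping and trimming step that produces the adapted version, assuming source frames are NumPy arrays; the region-of-interest and scene bounds are illustrative:

    # Sketch: crop each frame to a spatial region of interest and trim the sequence
    # to the frames belonging to a particular scene.
    from typing import List, Tuple
    import numpy as np

    def adapt_content(frames: List[np.ndarray],
                      roi: Tuple[int, int, int, int],   # (top, left, height, width)
                      scene: Tuple[int, int]) -> List[np.ndarray]:
        top, left, h, w = roi
        first, last = scene
        return [f[top:top + h, left:left + w] for f in frames[first:last]]

    source = [np.zeros((1080, 1920), dtype=np.uint8) for _ in range(300)]
    adapted = adapt_content(source, roi=(0, 480, 1080, 960), scene=(100, 200))
    print(len(adapted), adapted[0].shape)   # 100 cropped frames of 1080x960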
