-
Publication No.: US10944938B2
Publication Date: 2021-03-09
Application No.: US15515984
Filing Date: 2015-09-29
Applicant: DOLBY LABORATORIES LICENSING CORPORATION
Inventor: Ning Xu , James E. Crenshaw , Scott Daly , Samir N. Hulyalkar , Raymond Yeung
IPC: H04N7/01 , H04N19/46 , H04N19/157 , H04N5/14 , H04N5/21
Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feel normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes a base frame rate, a judder control rate, and display parameters, and can be used to control judder for different applications.
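The abstract names the fields carried by the judder metadata (base frame rate, judder control rate, display parameters) but not their representation. Below is a minimal Python sketch of how such metadata might be structured and applied as a blend between a smooth frame-rate-converted frame and a base-rate frame; the class, field names, and blending formula are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class JudderMetadata:
    """Hypothetical container for the metadata fields named in the abstract."""
    base_frame_rate: float      # e.g. 24.0 fps, the cadence whose judder is to be restored
    judder_control_rate: float  # 0.0 = fully smooth result, 1.0 = full base-rate judder
    display_peak_nits: float    # one possible display parameter influencing the effect

def blend_judder(smooth_frame, base_rate_frame, meta: JudderMetadata):
    """Blend a frame-rate-converted (smooth) frame with a base-rate (juddered) frame.

    The judder control rate dials the effect: 0 keeps the smooth interpolation,
    1 restores the film-like cadence of the base frame rate. Frames are assumed
    to be NumPy arrays or scalars of matching shape.
    """
    r = meta.judder_control_rate
    return (1.0 - r) * smooth_frame + r * base_rate_frame
```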
-
Publication No.: US11010860B2
Publication Date: 2021-05-18
Application No.: US16595772
Filing Date: 2019-10-08
Applicant: Dolby Laboratories Licensing Corporation
Inventor: Raymond Yeung , Patrick Griffis , Thaddeus Beier , Robin Atkins
IPC: G06T11/00 , G06T1/20 , G06T3/40 , G06T5/00 , G06T5/40 , H04N1/60 , H04N5/20 , H04N9/68 , H04N5/202 , H04N9/64 , H04N1/32
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
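The mapping from a sigmoidal tone curve to a Bézier tone curve is described only as "a best fit curve, or a reasonable approximation." The sketch below shows one way such a fit could be done in Python, least-squares fitting the interior control points of a one-dimensional cubic Bézier to samples of a sigmoidal curve; the specific sigmoid, the function names, and the use of scipy.optimize.minimize are assumptions for illustration, not the transformation models or metadata defined by the patent.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid_tone_curve(x, contrast=8.0, mid=0.5):
    """Example source tone map: a sigmoidal curve over normalized luminance."""
    return 1.0 / (1.0 + np.exp(-contrast * (x - mid)))

def cubic_bezier(t, p0, p1, p2, p3):
    """Evaluate a 1-D cubic Bézier curve at parameter t in [0, 1]."""
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def fit_bezier_to_curve(samples_x, samples_y):
    """Best-fit the two interior control points to the sampled source curve."""
    p0, p3 = samples_y[0], samples_y[-1]   # endpoints anchored to the source curve

    def loss(params):
        p1, p2 = params
        return np.sum((cubic_bezier(samples_x, p0, p1, p2, p3) - samples_y) ** 2)

    result = minimize(loss, x0=[0.33, 0.66])
    return p0, *result.x, p3               # metadata for the second model

x = np.linspace(0.0, 1.0, 256)
controls = fit_bezier_to_curve(x, sigmoid_tone_curve(x))
```

A receiving device configured only for the Bézier model could then render the content from these fitted control points rather than from the original sigmoidal parameters.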
-
Publication No.: US10510134B2
Publication Date: 2019-12-17
Application No.: US15880438
Filing Date: 2018-01-25
Applicant: Dolby Laboratories Licensing Corporation
Inventor: Raymond Yeung , Patrick Griffis , Thaddeus Beier , Robin Atkins
IPC: G06T11/00 , G06T1/20 , G06T3/40 , G06T5/00 , G06T5/40 , H04N1/60 , H04N9/64 , H04N9/68 , G09G5/02 , G09G5/06 , H04N5/202 , H04N5/20 , H04N1/32
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
-
Publication No.: US10553255B2
Publication Date: 2020-02-04
Application No.: US15408262
Filing Date: 2017-01-17
Applicant: DOLBY LABORATORIES LICENSING CORPORATION
Inventor: Robin Atkins , Raymond Yeung , Sheng Qu
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata computed as a desired function of the video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
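As a concrete illustration of replacing per-frame metadata with a scene-stable value, the following Python sketch computes a per-frame maximum luma and then holds the scene-wide maximum for every frame of the scene; the luma weights, function names, and choice of feature are assumptions, and the actual metadata and stabilization rule in the patent may differ.

```python
import numpy as np

def frame_max_luminance(frame_rgb):
    """Per-frame metadata: maximum luma of an (H, W, 3) RGB frame (Rec. 709 weights)."""
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
    return float(luma.max())

def scene_stable_max_luminance(scene_frames):
    """Hold one value across the scene instead of letting it vary frame by frame.

    Using the scene-wide maximum keeps the monitored feature inside a single
    range for every frame of the scene, so a downstream tone mapper driven by
    this metadata does not flicker between frames.
    """
    return max(frame_max_luminance(f) for f in scene_frames)
```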
-
Publication No.: US09916638B2
Publication Date: 2018-03-13
Application No.: US15584368
Filing Date: 2017-05-02
Applicant: Dolby Laboratories Licensing Corporation
Inventor: Raymond Yeung , Patrick Griffis , Thaddeus Beier , Robin Atkins
IPC: G06T11/00 , G06T1/20 , G06T3/40 , G06T5/00 , G06T5/40 , H04N1/60 , H04N9/64 , H04N9/68 , G09G5/02 , G09G5/06 , H04N5/20 , H04N1/32
CPC classification number: G06T1/20 , G06T5/007 , G06T5/009 , H04N1/32101 , H04N5/20 , H04N5/202 , H04N9/64 , H04N9/68
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
-
Publication No.: US09607658B2
Publication Date: 2017-03-28
Application No.: US14906306
Filing Date: 2014-07-28
Applicant: DOLBY LABORATORIES LICENSING CORPORATION
Inventor: Robin Atkins , Raymond Yeung , Sheng Qu
CPC classification number: G11B27/3027 , G09G5/10 , G09G2360/16 , G09G2370/04 , G11B27/28 , G11B27/34 , H04N5/147 , H04N5/268 , H04N5/91 , H04N9/8205
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata computed as a desired function of the video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.