-
Publication No.: US20230134212A1
Publication Date: 2023-05-04
Application No.: US18088615
Application Date: 2022-12-26
Inventor: Mun Churl KIM
IPC: H04N19/117 , H04N19/82 , G06F17/15 , H04N19/176 , G06N3/04 , H04N19/124
Abstract: Disclosed according to one exemplary embodiment is an apparatus including, but not limited to: a filtering unit configured to generate filtering information by filtering a residual image corresponding to a difference between an original image and a prediction image; an inverse filtering unit configured to generate inverse filtering information by inversely filtering the filtering information; an estimator configured to generate the prediction image based on the original image and reconstruction information; a CNN-based in-loop filter configured to receive the inverse filtering information and the prediction image and to output the reconstruction information; and an encoder configured to perform encoding based on the filtering information and information of the prediction image, wherein the CNN-based in-loop filter is trained for each of a plurality of artefact sections according to an artefact value or for each of a plurality of quantization parameter sections according to a quantization parameter.
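As a rough illustration of the CNN-based in-loop filter described above, the sketch below assumes a small residual network that takes the inverse filtering information and the prediction image as two input planes and keeps one trained instance per quantization parameter section; the class name `InLoopFilterCNN`, the layer sizes, and the QP section boundaries are assumptions, not the patented design.

```python
# Minimal sketch (not the patented design): a small CNN in-loop filter that takes
# the inverse-filtered residual image plus the prediction image and outputs
# reconstruction information. One network instance is kept per QP section.
import torch
import torch.nn as nn

class InLoopFilterCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Two input planes: inverse filtering information and prediction image.
        self.body = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, inverse_filtered, prediction):
        x = torch.cat([inverse_filtered, prediction], dim=1)
        # Residual connection: refine the naive reconstruction (prediction + residual).
        return inverse_filtered + prediction + self.body(x)

# One filter per quantization-parameter section (illustrative section boundaries).
QP_SECTIONS = [(22, 27), (28, 33), (34, 39)]
filters = {sec: InLoopFilterCNN() for sec in QP_SECTIONS}

def select_filter(qp):
    for lo, hi in QP_SECTIONS:
        if lo <= qp <= hi:
            return filters[(lo, hi)]
    return filters[QP_SECTIONS[-1]]  # fall back to the last section
```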
-
Publication No.: US20220366538A1
Publication Date: 2022-11-17
Application No.: US17623270
Application Date: 2020-07-03
Inventor: Mun Churl KIM , Se Hwan KI
Abstract: Disclosed are a video processing method and a device therefor. The video processing method may include receiving a video comprising a plurality of temporal portions, receiving a first model parameter corresponding to a first neural network to process the video entirely, receiving residues between the first model parameter and a plurality of second model parameters corresponding to a plurality of second neural networks to process the plurality of temporal portions, and performing at least one of super-resolution, reverse or inverse tone mapping, tone mapping, frame interpolation, motion deblurring, denoising, and compression artifact removal on the video based on the residues.
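The parameter-residue mechanism in this abstract can be pictured with the small sketch below: a single global model covers the whole video, and a per-temporal-portion model is rebuilt by adding transmitted residues to its parameters. The stand-in network and helper names (`make_sr_model`, `compute_residues`, `apply_residues`) are assumptions for illustration only.

```python
# Illustrative sketch of the parameter-residue idea: the receiver keeps one
# "global" model for the whole video and reconstructs a per-portion model by
# adding a residue to each global parameter.
import copy
import torch
import torch.nn as nn

def make_sr_model():
    # Stand-in per-video enhancement network (not the patented architecture).
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

global_model = make_sr_model()   # corresponds to the "first model parameter"

def compute_residues(global_model, portion_model):
    """Residue = per-portion parameters minus global parameters (same names/shapes)."""
    global_params = dict(global_model.named_parameters())
    return {name: p.detach() - global_params[name].detach()
            for name, p in portion_model.named_parameters()}

def apply_residues(global_model, residues):
    """Rebuild the per-portion model from the global model plus its residues."""
    portion_model = copy.deepcopy(global_model)
    with torch.no_grad():
        for name, p in portion_model.named_parameters():
            p.add_(residues[name])
    return portion_model
```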
-
Publication No.: US20220012855A1
Publication Date: 2022-01-13
Application No.: US17482433
Application Date: 2021-09-23
Inventor: Mun Churl KIM , Soo Ye KIM , Dae Eun KIM
Abstract: In this invention, we propose a convolutional neural network (CNN) based architecture, called ITM-CNN, designed for inverse tone mapping (ITM) to HDR consumer displays, together with a training strategy that enhances its performance based on image decomposition using the guided filter. We demonstrate the benefits of decomposing the image by experimenting with various architectures, and we also compare the performance of different training strategies. To the best of our knowledge, this invention is the first to address the ITM problem using CNNs for HDR consumer displays, where the network is trained to restore lost details and local contrast. Our ITM-CNN can readily up-convert LDR images for direct viewing on an HDR consumer medium, and is a very powerful means of alleviating the lack of HDR video content by reusing legacy LDR videos.
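A minimal sketch of the guided-filter decomposition step mentioned above, assuming a single-channel LDR frame normalized to [0, 1]; the box-filter guided filter, the radius, and the eps value are illustrative choices rather than the patented configuration.

```python
# Minimal guided-filter decomposition sketch (He et al.-style box filtering),
# used only to illustrate splitting an LDR frame into a base layer and a detail
# layer before feeding a network.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of src guided by guide (both float in [0, 1])."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(luma):
    """Base layer = guided-filtered image; detail layer = what the filter removed."""
    base = guided_filter(luma, luma)
    detail = luma - base
    return base, detail

ldr = np.random.rand(256, 256).astype(np.float32)   # stand-in LDR luma frame
base, detail = decompose(ldr)
```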
-
Publication No.: US20210082087A1
Publication Date: 2021-03-18
Application No.: US16960907
Application Date: 2018-12-12
Inventor: Mun Churl KIM , Yong Woo KIM , Jae Seok CHOI
IPC: G06T3/40 , H04N7/01 , G06T1/20 , H04N19/124
Abstract: Disclosed are an image processing method and device using a line-wise operation. The image processing device, according to one embodiment, comprises: a receiver for receiving an image; at least one first line buffer for outputting the image as line-wise image lines; a first convolution operator for generating a feature map by performing a convolution operation on the output of the first line buffer; and a feature map processor for storing the output of the first convolution operator in units of at least one line and outputting the stored feature map in two-dimensional form, wherein at least one convolution operation operates in the form of a pipeline.
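To make the line-wise pipeline concrete, the sketch below keeps only three image lines in a rolling buffer and emits one feature-map line per incoming line, which is the general idea behind operating on line buffers instead of a full frame; the 3x3 kernel and the function name `linewise_conv3x3` are assumptions.

```python
# Rough sketch of line-wise (row-by-row) 3x3 convolution: a ring of three line
# buffers holds just enough rows to emit one output row at a time, mimicking a
# hardware pipeline rather than buffering the whole frame.
from collections import deque
import numpy as np

def linewise_conv3x3(rows, kernel):
    """rows: iterable of 1-D arrays (image lines); yields feature-map lines."""
    assert kernel.shape == (3, 3)
    buf = deque(maxlen=3)                 # the "line buffers"
    for row in rows:
        buf.append(np.pad(row, 1))        # pad left/right for the 3-wide window
        if len(buf) == 3:
            stack = np.stack(buf)         # 3 x (W + 2) window of lines
            width = row.shape[0]
            out = np.empty(width, dtype=np.float32)
            for x in range(width):
                out[x] = np.sum(stack[:, x:x + 3] * kernel)
            yield out                     # one feature-map line per input line

image = np.random.rand(8, 16).astype(np.float32)
k = np.ones((3, 3), dtype=np.float32) / 9.0
feature_lines = list(linewise_conv3x3(iter(image), k))  # 6 lines (no top/bottom pad)
```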
-
Publication No.: US20170372461A1
Publication Date: 2017-12-28
Application No.: US15633763
Application Date: 2017-06-27
Inventor: Yong Woo KIM , Sang Yeon KIM , Woo Suk HA , Mun Churl KIM , Dae Eun KIM
Abstract: The present invention provides a technology that separates a low-contrast-ratio image into sublayer images, classifies each sublayer image into one of several categories according to its characteristics, and learns, for each category, a transformation matrix representing the relationship between the low-contrast-ratio image and a high-contrast-ratio image. In addition, the present invention provides a technology that separates an input low-contrast-ratio image into sublayer images, selects the category corresponding to each sublayer image, and applies the learned transformation matrix to generate a high-contrast-ratio image.
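The sketch below is a toy version of the per-category mapping idea, assuming 4x4 patches, a gradient-based three-way categorization, and least-squares-fitted transformation matrices; all thresholds, sizes, and names are illustrative, not the invention's actual classification or learning rule.

```python
# Toy sketch: patches of a sublayer image are binned into categories (here, by
# mean gradient) and one linear transformation matrix is fit per category by
# least squares, then applied to enhance a new low-contrast-ratio image.
import numpy as np

PATCH = 4

def categorize(patch):
    g = np.mean(np.abs(np.diff(patch, axis=0))) + np.mean(np.abs(np.diff(patch, axis=1)))
    return 0 if g < 0.05 else (1 if g < 0.15 else 2)   # flat / textured / edge-like

def fit_matrices(low_imgs, high_imgs):
    """Learn one PATCH^2 x PATCH^2 matrix per category from paired training images."""
    buckets = {c: ([], []) for c in range(3)}
    for lo, hi in zip(low_imgs, high_imgs):
        for y in range(0, lo.shape[0] - PATCH + 1, PATCH):
            for x in range(0, lo.shape[1] - PATCH + 1, PATCH):
                p, q = lo[y:y+PATCH, x:x+PATCH], hi[y:y+PATCH, x:x+PATCH]
                c = categorize(p)
                buckets[c][0].append(p.ravel())
                buckets[c][1].append(q.ravel())
    mats = {}
    for c, (X, Y) in buckets.items():
        if X:                                    # M minimizes ||X M - Y||^2
            mats[c] = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0]
    return mats

def enhance(lo, mats):
    out = lo.copy()
    for y in range(0, lo.shape[0] - PATCH + 1, PATCH):
        for x in range(0, lo.shape[1] - PATCH + 1, PATCH):
            p = lo[y:y+PATCH, x:x+PATCH]
            m = mats.get(categorize(p))
            if m is not None:
                out[y:y+PATCH, x:x+PATCH] = (p.ravel() @ m).reshape(PATCH, PATCH)
    return out

rng = np.random.default_rng(0)
lows = [rng.random((16, 16)) for _ in range(4)]
highs = [np.clip(l * 1.4, 0, 1) for l in lows]         # synthetic "high-contrast" pairs
restored = enhance(lows[0], fit_matrices(lows, highs))
```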
-
Publication No.: US20160173909A1
Publication Date: 2016-06-16
Application No.: US15049854
Application Date: 2016-02-22
Applicant: Electronics and Telecommunications Research Institute , Kwangwoon University Industry-Academic Collaboration Foundation , University-Industry Cooperation Group of Kyung Hee University , Korea Advanced Institute of Science and Technology
Inventor: Sung-Chang LIM , Ha Hyun LEE , Hui Yong KIM , Se Yoon Jeong , Suk Hee CHO , Hae Chul CHOI , Jong Ho KIM , Jin Ho LEE , Jin Soo CHOI , Jin Woo HONG , Dong Gyu SIM , Seoung Jun OH , Gwang Hoon PARK , Mun Churl KIM , Neung Joo HWANG , Sea Nae PARK
IPC: H04N19/625 , H04N19/55 , H04N19/615 , H04N19/14 , H04N19/176 , H04N19/124 , H04N19/13 , H04N19/122 , H04N19/11 , H04N19/44 , H04N19/107 , H04N19/139 , H04N19/159 , H04N19/52 , H04N19/593 , H04N19/91 , H04N19/18 , H04N19/119
CPC classification number: H04N19/122 , H04N19/107 , H04N19/11 , H04N19/119 , H04N19/12 , H04N19/124 , H04N19/13 , H04N19/139 , H04N19/14 , H04N19/159 , H04N19/176 , H04N19/18 , H04N19/182 , H04N19/44 , H04N19/50 , H04N19/52 , H04N19/55 , H04N19/593 , H04N19/60 , H04N19/615 , H04N19/625 , H04N19/91
Abstract: Provided is a video encoding apparatus including: a signal separator to separate a differential image block into a first domain and a second domain based on a boundary line included in the differential image block, the differential image block indicating a difference between an original image and a prediction image of the original image; a transform encoder to perform transform encoding of the first domain using a discrete cosine transform (DCT); a quantization unit to quantize an output of the transform encoder in a frequency domain; a space domain quantization unit to quantize the second domain in a space domain; and an entropy encoder to perform entropy encoding using outputs of the quantization unit and the space domain quantization unit.
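The two-domain split can be illustrated as follows, assuming a boolean boundary mask over an 8x8 residual block: pixels in the first domain are DCT-transformed and quantized in the frequency domain, while pixels in the second domain are quantized directly in the spatial domain. The step sizes and the toy mask are assumptions.

```python
# Simplified sketch of the two-domain idea: one side of the boundary mask is
# transform-coded (DCT + frequency-domain quantization); the other side is
# quantized directly in the spatial domain. Both streams would go to entropy coding.
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(residual, mask, q_freq=16.0, q_spatial=8.0):
    """mask is True for the first (transform-coded) domain."""
    first = np.where(mask, residual, 0.0)
    second = np.where(mask, 0.0, residual)
    q_coeffs = np.round(dctn(first, norm='ortho') / q_freq)   # frequency-domain quantization
    q_pixels = np.round(second / q_spatial)                   # space-domain quantization
    return q_coeffs, q_pixels

def decode_block(q_coeffs, q_pixels, mask, q_freq=16.0, q_spatial=8.0):
    first = idctn(q_coeffs * q_freq, norm='ortho')
    second = q_pixels * q_spatial
    return np.where(mask, first, second)

block = np.random.randn(8, 8).astype(np.float32)   # stand-in differential image block
mask = np.tri(8, 8, dtype=bool)                    # toy diagonal boundary line
reconstructed = decode_block(*encode_block(block, mask), mask)
```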
-
Publication No.: US20240348839A1
Publication Date: 2024-10-17
Application No.: US18757240
Application Date: 2024-06-27
Inventor: Mun Churl KIM , Bum Shik LEE , Jae Il KIM , Chang Seob PARK , Sang Jin HAHM , In Joon CHO , Keun Sik LEE , Byung Sun KIM
IPC: H04N19/86 , H04N19/103 , H04N19/117 , H04N19/122 , H04N19/124 , H04N19/13 , H04N19/14 , H04N19/159 , H04N19/176 , H04N19/61 , H04N19/80 , H04N19/172 , H04N19/174 , H04N19/182
CPC classification number: H04N19/86 , H04N19/103 , H04N19/117 , H04N19/122 , H04N19/124 , H04N19/13 , H04N19/14 , H04N19/159 , H04N19/176 , H04N19/61 , H04N19/80 , H04N19/172 , H04N19/174 , H04N19/182
Abstract: Disclosed are a method of encoding a division block in video encoding and a method of decoding a division block in video decoding. An input picture is divided into encoding unit blocks, and the encoding unit blocks are divided into sub-blocks. The sub-blocks are encoded by selectively using at least one of intra prediction encoding and inter prediction encoding. The decoding process is performed as the reverse of the encoding method. When the pixel values of an encoding unit block are encoded in this way, both the flexibility in selecting an encoding mode and the efficiency of encoding are increased.
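A high-level sketch of the block division and per-sub-block mode selection follows, assuming 16x16 encoding unit blocks, 8x8 sub-blocks, a DC-style intra predictor, and a zero-motion inter predictor scored by SAD; these choices are illustrative stand-ins for the actual encoder decision.

```python
# Sketch of dividing a picture into encoding-unit blocks and sub-blocks, then
# choosing intra or inter prediction per sub-block by comparing a simple SAD cost.
import numpy as np

CU, SUB = 16, 8

def sad(a, b):
    return float(np.sum(np.abs(a.astype(np.int32) - b.astype(np.int32))))

def intra_predict(sub):
    return np.full_like(sub, int(sub.mean()))        # DC-style intra stand-in

def choose_modes(picture, reference):
    """Yield (y, x, mode) for each sub-block of every encoding-unit block."""
    for cy in range(0, picture.shape[0] - CU + 1, CU):
        for cx in range(0, picture.shape[1] - CU + 1, CU):
            for sy in range(cy, cy + CU, SUB):
                for sx in range(cx, cx + CU, SUB):
                    cur = picture[sy:sy+SUB, sx:sx+SUB]
                    intra_cost = sad(cur, intra_predict(cur))
                    inter_cost = sad(cur, reference[sy:sy+SUB, sx:sx+SUB])  # zero motion
                    yield sy, sx, ('intra' if intra_cost <= inter_cost else 'inter')

pic = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
modes = list(choose_modes(pic, ref))
```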
-
Publication No.: US20230308673A1
Publication Date: 2023-09-28
Application No.: US18327160
Application Date: 2023-06-01
Applicant: Electronics and Telecommunications Research Institute , KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventor: Jong Ho KIM , Hui Yong KIM , Se Yoon JEONG , Sung Chang LIM , Ha Hyun LEE , Jin Ho LEE , Suk Hee CHO , Jin Soo CHOI , Jin Woong KIM , Chie Teuk AHN , Mun Churl KIM , Bum Shik LEE
CPC classification number: H04N19/44 , H04N19/70 , H04N19/46 , H04N19/593 , H04N19/60 , H04N19/18 , H04N19/64
Abstract: The disclosed method of decoding an intra prediction mode comprises the steps of: determining, on the basis of 1-bit information, whether the intra prediction mode of a present prediction unit is the same as a first candidate intra prediction mode or a second candidate intra prediction mode; determining, on the basis of additional 1-bit information, which of the first candidate intra prediction mode and the second candidate intra prediction mode is the same as the intra prediction mode of the present prediction unit, if the intra prediction mode of the present prediction unit is the same as at least one of them; and decoding the intra prediction mode of the present prediction unit.
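The two-candidate signalling described in this abstract can be sketched as below, where one bit indicates whether the current prediction unit's mode matches either candidate and, if so, a second bit selects which one; the bit-reader callbacks and the remaining-mode decoder are simplified assumptions rather than the exact bitstream syntax.

```python
# Sketch of two-candidate intra-mode decoding: first bit = "matches a candidate",
# second bit = which candidate; otherwise decode the remaining mode directly.
def decode_intra_mode(read_bit, read_remaining_mode, cand0, cand1):
    if read_bit() == 1:                      # mode equals one of the candidates
        return cand0 if read_bit() == 0 else cand1
    return read_remaining_mode()             # otherwise decode the full mode index

# Toy usage with a canned bitstream: flag=1, index=1 -> second candidate.
bits = iter([1, 1])
mode = decode_intra_mode(lambda: next(bits), lambda: 20, cand0=0, cand1=26)
assert mode == 26
```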
-
Publication No.: US20210344916A1
Publication Date: 2021-11-04
Application No.: US17376162
Application Date: 2021-07-15
Inventor: Mun Churl KIM
IPC: H04N19/117 , H04N19/82 , G06F17/15 , H04N19/176 , G06N3/04 , H04N19/124
Abstract: Disclosed according to one exemplary embodiment is an apparatus including, but not limited to: a filtering unit configured to generate filtering information by filtering a residual image corresponding to a difference between an original image and a prediction image; an inverse filtering unit configured to generate inverse filtering information by inversely filtering the filtering information; an estimator configured to generate the prediction image based on the original image and reconstruction information; a CNN-based in-loop filter configured to receive the inverse filtering information and the prediction image and to output the reconstruction information; and an encoder configured to perform encoding based on the filtering information and information of the prediction image, wherein the CNN-based in-loop filter is trained for each of a plurality of artefact sections according to an artefact value or for each of a plurality of quantization parameter sections according to a quantization parameter.
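As a companion to the abstract above, the sketch below illustrates the per-section training aspect: one small filter is trained per quantization parameter section, with (degraded, original) patch pairs binned by QP. The tiny network, the section boundaries, and the training loop are assumptions for illustration.

```python
# Illustrative per-QP-section training loop for small in-loop filter networks.
import torch
import torch.nn as nn

def tiny_filter():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

QP_SECTIONS = [(22, 27), (28, 33), (34, 39)]

def section_of(qp):
    return next((s for s in QP_SECTIONS if s[0] <= qp <= s[1]), QP_SECTIONS[-1])

def train_per_section(samples, epochs=1):
    """samples: list of (degraded, original, qp); tensors shaped (1, 1, H, W)."""
    models = {s: tiny_filter() for s in QP_SECTIONS}
    opts = {s: torch.optim.Adam(models[s].parameters(), lr=1e-4) for s in QP_SECTIONS}
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for degraded, original, qp in samples:
            s = section_of(qp)
            opts[s].zero_grad()
            loss = loss_fn(degraded + models[s](degraded), original)  # residual learning
            loss.backward()
            opts[s].step()
    return models

patches = [(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32), qp) for qp in (24, 30, 37)]
models = train_per_section(patches)
```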
-
Publication No.: US20210166360A1
Publication Date: 2021-06-03
Application No.: US16769576
Application Date: 2017-12-06
Inventor: Mun Churl KIM , Soo Ye KIM , Dae Eun KIM
Abstract: Inverse tone mapping (ITM) aims at generating a single high dynamic range (HDR) image from a low dynamic range (LDR) image. While ITM has frequently been used for graphics rendering in the HDR space, the advent of HDR consumer displays (e.g., HDR TV) and the consequent need for HDR multimedia contents open up new horizons for the consumption of ultra-high quality video contents. However, due to the lack of HDR-filmed contents, legacy LDR videos must be up-converted for viewing on these HDR displays. Unfortunately, previous ITM methods are not appropriate for HDR consumer displays, and their inverse-tone-mapped results are not visually pleasing, suffering from noise amplification or a lack of details. In this invention, we propose a convolutional neural network (CNN) based architecture designed for ITM to HDR consumer displays, called ITM-CNN, and its training strategy for enhancing the performance based on image decomposition using the guided filter. We demonstrate the benefits of decomposing the image by experimenting with various architectures, and we also compare the performance of different training strategies. To the best of our knowledge, this invention is the first to address the ITM problem using CNNs for HDR consumer displays, where the network is trained to restore lost details and local contrast. Our ITM-CNN can readily up-convert LDR images for direct viewing on an HDR consumer medium, and is a very powerful means of alleviating the lack of HDR video content by reusing legacy LDR videos.
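A loose sketch of a two-branch network in the spirit of the architecture described above: one branch processes the base (guided-filtered) layer, the other the detail layer, and their features are merged to predict an HDR frame. The layer counts and channel widths are assumptions, not the patented ITM-CNN configuration.

```python
# Two-branch sketch: separate feature extraction for base and detail layers,
# followed by a merge stage that predicts the up-converted HDR frame.
import torch
import torch.nn as nn

class TwoBranchITM(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.base_branch = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.detail_branch = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.merge = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, base, detail):
        feats = torch.cat([self.base_branch(base), self.detail_branch(detail)], dim=1)
        return self.merge(feats)          # predicted HDR frame

ldr_base = torch.rand(1, 3, 64, 64)       # stand-ins for decomposed LDR layers
ldr_detail = torch.rand(1, 3, 64, 64)
hdr = TwoBranchITM()(ldr_base, ldr_detail)
```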
-