METHOD OF LOCAL IMPLICIT NORMALIZING FLOW FOR ARBITRARY-SCALE IMAGE SUPER-RESOLUTION, AND ASSOCIATED APPARATUS

    Publication No.: US20240177269A1

    Publication Date: 2024-05-30

    Application No.: US18518614

    Filing Date: 2023-11-24

    Applicant: MEDIATEK INC.

    IPC Classes: G06T3/40

    CPC Classes: G06T3/4053 G06T3/4046

    Abstract: A method of local implicit normalizing flow for arbitrary-scale image super-resolution, an associated apparatus, and an associated computer-readable medium are provided. The method, applicable to a processing circuit, may include: utilizing the processing circuit to run a local implicit normalizing flow framework to perform arbitrary-scale image super-resolution with a trained model of the local implicit normalizing flow framework according to at least one input image, in order to generate at least one output image, where a selected scale of the output image with respect to the input image is an arbitrary scale; and, while performing the arbitrary-scale image super-resolution with the trained model, performing prediction processing to obtain multiple super-resolution predictions for different locations of a predetermined space given a same non-super-resolution input image among the at least one input image, in order to generate the at least one output image.
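The idea of querying an implicit function at arbitrary continuous coordinates, with a latent sample yielding multiple distinct predictions for the same input, can be illustrated with a toy sketch. This is not the patented method: the tiny fixed-random-weight "decoder," the local feature construction, and the single-dimensional latent `z` are all placeholder assumptions standing in for a trained local implicit normalizing flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "decoder" standing in for the trained model: an MLP with
# fixed random weights mapping (local feature, relative coord, latent z) -> pixel.
W1 = rng.normal(size=(6, 16))
W2 = rng.normal(size=(16, 1))

def query_pixel(feat, rel_xy, z):
    """Predict one HR pixel from a local LR feature, the query point's
    relative coordinate within its LR cell, and a latent sample z."""
    x = np.concatenate([feat, rel_xy, z])  # (3 + 2 + 1,)
    h = np.tanh(x @ W1)
    return (h @ W2)[0]

def super_resolve(lr, scale, z):
    """Arbitrary-scale upsampling: query the implicit function at every
    coordinate of a (scale*H, scale*W) grid over the same LR input."""
    H, W = lr.shape
    out_h, out_w = int(H * scale), int(W * scale)
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Continuous query coordinate mapped back into LR space.
            y, x = (i + 0.5) / scale, (j + 0.5) / scale
            yi, xi = min(int(y), H - 1), min(int(x), W - 1)
            feat = np.array([lr[yi, xi], yi / H, xi / W])  # toy local feature
            rel = np.array([y - yi - 0.5, x - xi - 0.5])
            out[i, j] = query_pixel(feat, rel, z)
    return out

lr = rng.random((4, 4))
hr_a = super_resolve(lr, 2.5, z=np.array([0.0]))  # non-integer scale works
hr_b = super_resolve(lr, 2.5, z=np.array([1.0]))  # same input, different latent
```

Because the decoder is conditioned on a latent sample, the same low-resolution input produces different plausible high-resolution predictions for different draws of `z`, mirroring the "multiple super-resolution predictions for a same input image" aspect of the abstract.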

    IMAGE ANALYSIS METHOD AND IMAGE ANALYSIS SYSTEM

    Publication No.: US20240177319A1

    Publication Date: 2024-05-30

    Application No.: US18518609

    Filing Date: 2023-11-24

    Applicant: MEDIATEK INC.

    IPC Classes: G06T7/12 G06T7/13 G06T9/00

    CPC Classes: G06T7/12 G06T7/13 G06T9/00

    Abstract: Many unsupervised domain adaptation (UDA) methods have been proposed to bridge the domain gap by utilizing domain-invariant information. Most approaches have chosen depth as such information and achieved remarkable success. Despite their effectiveness, using depth as domain-invariant information in UDA tasks may lead to multiple issues, such as excessively high extraction costs and difficulty in achieving reliable prediction quality. As a result, we introduce Edge Learning based Domain Adaptation (ELDA), a framework which incorporates edge information into its training process to serve as a type of domain-invariant information. Our experiments quantitatively and qualitatively demonstrate that the incorporation of edge information is indeed beneficial and effective, and enables ELDA to outperform contemporary state-of-the-art methods on two commonly adopted benchmarks for semantic-segmentation-based UDA tasks. In addition, we show that ELDA is able to better separate the feature distributions of different classes.