Abstract:
The present disclosure provides a method for generating a face image, an electronic device, and a non-transitory computer-readable storage medium. The method includes: receiving a first face image and target facial expression information, and determining first facial expression information corresponding to the first face image; selecting a first reference face image matched with the first facial expression information and a second reference face image matched with the target facial expression information from a face image library; respectively extracting feature points in the first reference face image and the second reference face image, and determining face deformation information between the first reference face image and the second reference face image based on the feature points; and extracting feature points in the first face image, and generating a second face image based on the face deformation information and the feature points in the first face image.
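The abstract does not disclose how the face deformation information is represented. As a hedged illustration only, the sketch below assumes it is a single global affine transform estimated by least squares from the matched feature points of the two reference images and then applied to the feature points of the first face image; all landmark coordinates are made-up placeholders.

import numpy as np

def estimate_deformation(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Estimate a 2-D affine deformation (3x3 homogeneous matrix) mapping
    src_pts onto dst_pts via least squares."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])           # (n, 3)
    # Solve A @ M ~= dst_pts for the (3, 2) affine parameters M.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return np.vstack([M.T, [0.0, 0.0, 1.0]])            # 3x3 homogeneous form

def apply_deformation(pts: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Move feature points of the first face image by the estimated deformation."""
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (homo @ T.T)[:, :2]

# ref1_pts / ref2_pts: landmarks of the two reference faces (same ordering).
# face_pts: landmarks detected in the first face image.
ref1_pts = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 70.0], [50.0, 90.0]])
ref2_pts = np.array([[30.0, 42.0], [70.0, 42.0], [50.0, 68.0], [50.0, 95.0]])
face_pts = np.array([[32.0, 41.0], [69.0, 41.0], [51.0, 71.0], [51.0, 91.0]])

T = estimate_deformation(ref1_pts, ref2_pts)            # face deformation information
target_pts = apply_deformation(face_pts, T)             # drives generation of the second face image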
Abstract:
A computer-implemented method of recognizing a facial expression of a subject in an input image is provided. The method includes filtering the input image to generate a plurality of filter response images; inputting the input image into a first neural network; processing the input image using the first neural network to generate a first prediction value; inputting the plurality of filter response images into a second neural network; processing the plurality of filter response images using the second neural network to generate a second prediction value; weighted averaging the first prediction value and the second prediction value to generate a weighted average prediction value; and generating an image classification result based on the weighted average prediction value.
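For illustration only: the abstract does not specify the fusion weight or the number of expression classes. The sketch below assumes seven hypothetical classes and a fixed weight w for the raw-image branch, and shows how the two branches' per-class prediction values can be weighted-averaged into a classification result.

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(p_raw: np.ndarray, p_filtered: np.ndarray, w: float = 0.5) -> int:
    """Weighted-average the per-class prediction values of the two branches and
    return the index of the winning expression class."""
    fused = w * p_raw + (1.0 - w) * p_filtered
    return int(np.argmax(fused))

# Hypothetical per-class scores for 7 expression classes from the two networks.
p1 = softmax(np.array([2.1, 0.3, 0.1, 1.8, 0.0, 0.2, 0.4]))   # branch fed with the input image
p2 = softmax(np.array([1.5, 0.2, 0.3, 2.4, 0.1, 0.1, 0.2]))   # branch fed with filter response images
label = fuse_predictions(p1, p2, w=0.6)                       # image classification result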
Abstract:
An image processing method is disclosed. The image processing method may include: inputting a first image and a third image to a pre-trained style transfer network model, the third image being a composited image formed by the first image and a second image; extracting content features of the third image and style features of the second image; normalizing the content features of the third image based on the style features of the second image to obtain target image features; and generating a target image based on the target image features and outputting the target image, by using the pre-trained style transfer network model.
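The normalization described here (content features re-statisticized with style-feature statistics) resembles adaptive instance normalization; the following is a minimal NumPy sketch under that assumption, with random arrays standing in for the extracted features.

import numpy as np

def adain(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize content features channel-wise, then re-scale and re-shift them
    with the style features' statistics."""
    # Features are (C, H, W); statistics are computed per channel.
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + s_mean                   # target image features

content = np.random.rand(64, 32, 32)   # content features of the third (composited) image
style = np.random.rand(64, 32, 32)     # style features of the second image
target_features = adain(content, style)  # would be decoded into the target image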
Abstract:
An image processing method implemented by a computing device is described herein. The method includes: acquiring an image to be processed and a target image style, the image to be processed being an image of a second resolution level; and inputting the image to be processed and the target style into a trained image processing neural network for image processing, to obtain a target image of the target style, the target image being an image of a first resolution level. The resolution of the image of the first resolution level is higher than that of the image of the second resolution level.
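The network architecture is not disclosed in the abstract. As a hedged sketch, the toy PyTorch module below conditions on a style index through an embedding (a hypothetical choice) and upsamples by a factor of two with PixelShuffle, mapping a second-resolution-level input to a first-resolution-level, styled output.

import torch
import torch.nn as nn

class StyledUpscaler(nn.Module):
    """Toy stand-in for a trained image processing network: conditions on a style
    embedding and outputs an image at twice the input resolution."""
    def __init__(self, num_styles: int = 8, channels: int = 32):
        super().__init__()
        self.style_embed = nn.Embedding(num_styles, channels)
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.body = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),                          # 2x spatial upscaling
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
        feat = torch.relu(self.head(x))
        # Inject the target style as a per-channel bias.
        style = self.style_embed(style_id).unsqueeze(-1).unsqueeze(-1)
        feat = torch.relu(self.body(feat) + style)
        return self.up(feat)

model = StyledUpscaler()
low_res = torch.rand(1, 3, 64, 64)                 # second-resolution-level input
high_res = model(low_res, torch.tensor([2]))       # first-resolution-level output
print(high_res.shape)                              # torch.Size([1, 3, 128, 128])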
Abstract:
The present invention provides a method and a system for overlaying an image in a video stream. The method comprises the steps of: acquiring an image element signature including at least one image element from the video stream; determining whether the image element signature matches an image to be overlaid; and overlaying the image when the image element signature is determined to match the image to be overlaid.
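The abstract leaves the matching criterion open. A minimal sketch, assuming the image element signature is a template patch matched against each frame with normalized cross-correlation (OpenCV's matchTemplate); all file paths and the threshold are placeholders.

import cv2
import numpy as np

def overlay_if_matched(frame: np.ndarray, signature: np.ndarray,
                       overlay: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Search one frame for the image-element signature; if the match score
    exceeds the threshold, paste the overlay image at the matched location."""
    result = cv2.matchTemplate(frame, signature, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val >= threshold:                             # signature matches -> overlay
        x, y = max_loc
        h, w = overlay.shape[:2]
        if y + h <= frame.shape[0] and x + w <= frame.shape[1]:
            frame = frame.copy()
            frame[y:y + h, x:x + w] = overlay
    return frame

# Usage with a capture loop (paths are placeholders):
cap = cv2.VideoCapture("input.mp4")
signature = cv2.imread("signature.png")
overlay = cv2.imread("overlay.png")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out = overlay_if_matched(frame, signature, overlay)  # out would be written to the output stream
cap.release()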
Abstract:
The present application discloses an image interpolation method for interpolating a pixel and enhancing an edge in an image, comprising: detecting an edge position in an image; obtaining edge characteristics associated with the edge position; determining whether an interpolation point is located within an edge region based on the edge characteristics of an array of p×q pixels surrounding the interpolation point, wherein p and q are integers larger than 1; determining an edge direction of an interpolation point located within the edge region, wherein the edge direction is normal to the gradient direction; classifying the edge direction into m angle subclasses and n angle classes, wherein each angle class comprises one or more subclasses, m and n are integers, and n≤m; selecting a one-dimensional horizontal interpolation kernel based on the angle class; performing a horizontal interpolation using the selected one-dimensional horizontal interpolation kernel; and performing a vertical interpolation using a one-dimensional vertical interpolation kernel.
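The actual kernel-selection rule, the values of m and n, and the kernel coefficients are not given in the abstract. The sketch below is a hypothetical instance with n = 4 angle classes, a 4-tap cubic kernel for two of the classes and a 2-tap linear kernel otherwise, followed by the horizontal-then-vertical interpolation passes.

import numpy as np

def linear_kernel(t: float) -> np.ndarray:
    """2-tap linear weights for a fractional offset t in [0, 1)."""
    return np.array([1.0 - t, t])

def cubic_kernel(t: float, a: float = -0.5) -> np.ndarray:
    """4-tap Keys cubic weights for a fractional offset t in [0, 1)."""
    x = np.array([1.0 + t, t, 1.0 - t, 2.0 - t])         # distances to the 4 neighbors
    return np.where(x <= 1.0,
                    (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0,
                    a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a)

def classify_angle(edge_dir_deg: float, n_classes: int = 4) -> int:
    """Quantize the edge direction (normal to the gradient) into n angle classes."""
    return int(((edge_dir_deg % 180.0) / 180.0) * n_classes) % n_classes

def interpolate(img: np.ndarray, y: float, x: float, angle_class: int) -> float:
    """Horizontal interpolation with a class-dependent kernel, then a vertical pass."""
    h_kernel = cubic_kernel(x % 1.0) if angle_class in (1, 3) else linear_kernel(x % 1.0)
    v_kernel = linear_kernel(y % 1.0)
    x0, y0 = int(x) - (len(h_kernel) // 2 - 1), int(y)
    rows = []
    for dy in range(len(v_kernel)):
        row = img[np.clip(y0 + dy, 0, img.shape[0] - 1),
                  np.clip(np.arange(x0, x0 + len(h_kernel)), 0, img.shape[1] - 1)]
        rows.append(float(np.dot(h_kernel, row)))        # horizontal interpolation
    return float(np.dot(v_kernel, np.array(rows)))       # vertical interpolation

img = np.arange(100, dtype=float).reshape(10, 10)
cls = classify_angle(edge_dir_deg=45.0)                  # hypothetical edge direction
value = interpolate(img, y=4.3, x=5.6, angle_class=cls)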
Abstract:
The present application discloses an apparatus for protecting a user's eye from blue light radiation. The apparatus includes: a radiation detector configured to convert blue light from a light source into a photovoltage having a voltage value; a processor coupled to the radiation detector and configured to calculate a cumulative radiation intensity based on the voltage value cumulated over a time interval, and to compare the cumulative radiation intensity with a threshold value; and a controller coupled to the processor and configured to adjustably control blocking of at least a portion of the blue light from the user's eye when the cumulative radiation intensity exceeds the threshold value.
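A minimal software sketch of the processor/controller decision logic, assuming the cumulative radiation intensity is approximated by integrating the detector photovoltage over time; the threshold and sample values are placeholders.

class BlueLightGuard:
    """Accumulate detector photovoltage samples and decide whether blue-light
    blocking should engage (a software sketch of the processor/controller logic)."""
    def __init__(self, threshold: float):
        self.threshold = threshold          # cumulative-radiation-intensity threshold
        self.cumulative = 0.0

    def add_sample(self, voltage: float, dt_s: float) -> None:
        # Approximate the cumulative radiation intensity as the voltage-time integral.
        self.cumulative += voltage * dt_s

    def should_block(self) -> bool:
        return self.cumulative > self.threshold

guard = BlueLightGuard(threshold=5.0)
for _ in range(10):                          # ten one-second detector readings
    guard.add_sample(voltage=0.8, dt_s=1.0)
print(guard.should_block())                  # True: 8.0 volt-seconds > 5.0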
Abstract:
Embodiments of the disclosure provide an image amplifying method, an image amplifying device, and a display apparatus, and relate to the field of image processing technology. The method comprises: obtaining, by an image amplifying device, high-frequency and low-frequency components of a source image; performing, by the image amplifying device, pixel interpolation on the low-frequency components of the source image through a first interpolation algorithm, to obtain a low-frequency sub-image; performing, by the image amplifying device, pixel interpolation on the high-frequency components of the source image through a second interpolation algorithm, to obtain a high-frequency sub-image; and merging, by the image amplifying device, the low-frequency and high-frequency sub-images, to obtain a merged image. The first interpolation algorithm and the second interpolation algorithm are different algorithms, so that the image quality of the amplified image can be ensured while the amount of computation is reduced. Embodiments of the disclosure are applicable to image amplification.
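As a hedged illustration, the sketch below decomposes the source image with a Gaussian blur (low-frequency component) and its residual (high-frequency component), upsamples the low-frequency part with the cheaper bilinear interpolation and the high-frequency part with bicubic interpolation, and merges the results. The specific filter and interpolation pairing are assumptions, not the patent's disclosed choice.

import cv2
import numpy as np

def amplify(src: np.ndarray, scale: int = 2) -> np.ndarray:
    """Split the source into low- and high-frequency components, interpolate each
    with a different algorithm, and merge the two sub-images."""
    src_f = src.astype(np.float32)
    low = cv2.GaussianBlur(src_f, (5, 5), 0)              # low-frequency component
    high = src_f - low                                    # high-frequency component
    size = (src.shape[1] * scale, src.shape[0] * scale)
    # First interpolation algorithm (cheaper) for the smooth low-frequency part.
    low_up = cv2.resize(low, size, interpolation=cv2.INTER_LINEAR)
    # Second interpolation algorithm for the detail-carrying high-frequency part.
    high_up = cv2.resize(high, size, interpolation=cv2.INTER_CUBIC)
    merged = np.clip(low_up + high_up, 0, 255)            # merged amplified image
    return merged.astype(src.dtype)

src = cv2.imread("source.png")                            # placeholder path
amplified = amplify(src, scale=2)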
Abstract:
A frequency signal generating system comprises a digital phase-locked loop for receiving a source frequency signal; a loop filter for filtering out high frequency components of a signal output from the digital phase-locked loop; and a voltage controlled oscillator for outputting a target frequency signal according to a signal from the loop filter, wherein an output terminal of the voltage controlled oscillator is connected to a first output terminal of the digital phase-locked loop so that the target frequency signal output from the voltage controlled oscillator is fed back to the digital phase-locked loop, the digital phase-locked loop performs frequency-dividing and phase-detecting on the source frequency signal and the fed back target frequency signal so that the target frequency signal output from the voltage controlled oscillator and the source frequency signal satisfy a definite mathematical relationship therebetween.
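A toy discrete-time simulation of the described loop, assuming a sinusoidal phase detector, a proportional-plus-integral stand-in for the loop filter, and a divide-by-N feedback path, so that the VCO settles near N times the source frequency; all gains, frequencies, and step counts are illustrative, not values from the disclosure.

import numpy as np

def simulate_pll(f_source: float, divide_n: int, f_vco0: float,
                 kp: float = 0.2, ki: float = 0.01, steps: int = 5000,
                 dt: float = 1e-6) -> float:
    """Discrete-time toy loop: the divided VCO phase is compared with the source
    phase, the error is smoothed (PI loop-filter stand-in), and the filtered value
    steers the VCO toward divide_n times the source frequency."""
    phase_src = phase_vco = 0.0
    integ = 0.0
    f_vco = f_vco0
    for _ in range(steps):
        phase_src += 2.0 * np.pi * f_source * dt
        phase_vco += 2.0 * np.pi * f_vco * dt
        err = np.sin(phase_src - phase_vco / divide_n)   # phase detector on divided feedback
        integ += ki * err                                # integral path (suppresses ripple)
        control = kp * err + integ
        f_vco = f_vco0 * (1.0 + control)                 # VCO frequency follows the control signal
    return f_vco

print(simulate_pll(f_source=1.0e4, divide_n=8, f_vco0=7.5e4))   # settles toward 8 x 10 kHz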
Abstract:
The present disclosure provides an image processing method and an image processing apparatus. The image processing method includes: obtaining a to-be-converted SDR image; using a first convolutional neural network to perform feature analysis on the SDR image to obtain N weights of the SDR image, where the N weights are respectively configured to characterize proportions of the color information of the SDR image relative to the color information characterized in N preset 3D lookup tables, and the N 3D lookup tables are configured to characterize different types of color information; obtaining a first 3D lookup table for the SDR image according to the N weights and the N 3D lookup tables; using the first 3D lookup table to adjust the color information of the SDR image to obtain an HDR image; and using a second convolutional neural network to perform refinement correction on the HDR image to obtain an output image.
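How the first 3D lookup table is assembled from the N weights and how it is applied are sketched below under the assumption of a simple weighted blend followed by trilinear lookup (SciPy's RegularGridInterpolator); the LUT size, N, the weights, and all pixel values are placeholders.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def blend_luts(luts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine N preset 3D LUTs into a single image-specific LUT using the
    weights predicted by the first network."""
    # luts: (N, S, S, S, 3); weights: (N,)
    return np.tensordot(weights, luts, axes=1)            # (S, S, S, 3)

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Trilinearly interpolate the blended LUT at every pixel's RGB value."""
    size = lut.shape[0]
    grid = np.linspace(0.0, 1.0, size)
    interp = RegularGridInterpolator((grid, grid, grid), lut)
    flat = image.reshape(-1, 3)                           # pixels normalized to [0, 1]
    return interp(flat).reshape(image.shape)

n, size = 4, 17                                           # 4 preset LUTs with 17^3 entries (assumption)
luts = np.random.rand(n, size, size, size, 3)
weights = np.array([0.1, 0.4, 0.3, 0.2])                  # output of the first convolutional neural network
sdr = np.random.rand(64, 64, 3)                           # normalized SDR image
hdr = apply_lut(sdr, blend_luts(luts, weights))           # color-adjusted image before refinement correction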