-
Publication No.: US20210241498A1
Publication Date: 2021-08-05
Application No.: US17301070
Application Date: 2021-03-24
Inventor: Hao SUN , Fu LI , Xin LI , Tianwei LIN
Abstract: The disclosure provides a method for processing an image, an electronic device, and a storage medium, relating to the fields of computer vision and deep learning. An image including a face figure is acquired. Facial feature information matching the face figure is extracted. The style of the image is converted to a preset drawing style based on the facial feature information to obtain a style-transferred image.
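A minimal Python sketch of the pipeline described above, assuming hypothetical `FaceFeatureExtractor` and `StyleTransferModel` placeholders rather than the patented models:

```python
# Sketch only: the two classes below are illustrative stand-ins.
import numpy as np


class FaceFeatureExtractor:
    """Hypothetical extractor returning facial feature information."""

    def extract(self, image: np.ndarray) -> np.ndarray:
        # Placeholder: a real extractor would locate the face figure
        # and encode its facial features.
        return np.zeros(128, dtype=np.float32)


class StyleTransferModel:
    """Hypothetical model converting an image to a preset drawing style."""

    def convert(self, image: np.ndarray, face_features: np.ndarray) -> np.ndarray:
        # Placeholder: a real model would condition the style conversion
        # on the extracted facial features.
        return image.copy()


def stylize(image: np.ndarray) -> np.ndarray:
    features = FaceFeatureExtractor().extract(image)       # extract facial features
    return StyleTransferModel().convert(image, features)   # convert to drawing style


if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.uint8)           # acquired image
    print(stylize(img).shape)
```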
-
Publication No.: US20210343065A1
Publication Date: 2021-11-04
Application No.: US17373420
Application Date: 2021-07-12
Inventor: Tianwei LIN , Fu LI , Xin LI , Henan ZHANG , Hao SUN
Abstract: The disclosure discloses a cartoonization processing method for an image, and relates to the fields of computer vision, image processing, face recognition, and deep learning technologies. The method includes: performing skin color recognition on a facial image to be processed to determine a target skin color of the face in the facial image; when a cartoonization model set does not contain a cartoonization model corresponding to the target skin color, processing the facial image with any cartoonization model in the set to obtain a reference cartoonized image corresponding to the facial image; determining a pixel adjustment parameter based on the target skin color and a reference skin color corresponding to the selected cartoonization model; and adjusting the pixel value of each pixel in the reference cartoonized image based on the pixel adjustment parameter to obtain a target cartoonized image corresponding to the facial image.
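The pixel-adjustment step can be illustrated with a short sketch; the per-channel ratio between target and reference skin color used here is an assumed form of the pixel adjustment parameter, not the patent's exact formulation:

```python
import numpy as np


def adjust_cartoonized_image(
    reference_cartoon: np.ndarray,   # HxWx3 uint8, output of a cartoonization model
    reference_skin_rgb: np.ndarray,  # skin color the model corresponds to
    target_skin_rgb: np.ndarray,     # skin color recognized in the input face
) -> np.ndarray:
    # Assumed adjustment parameter: per-channel ratio of target to reference skin color.
    ratio = target_skin_rgb.astype(np.float32) / np.maximum(
        reference_skin_rgb.astype(np.float32), 1e-6
    )
    adjusted = reference_cartoon.astype(np.float32) * ratio  # scale every pixel value
    return np.clip(adjusted, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    cartoon = np.full((64, 64, 3), 200, dtype=np.uint8)
    out = adjust_cartoonized_image(
        cartoon,
        reference_skin_rgb=np.array([230, 200, 180]),
        target_skin_rgb=np.array([180, 140, 110]),
    )
    print(out.shape, out.dtype)
```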
-
Publication No.: US20210334579A1
Publication Date: 2021-10-28
Application No.: US17184379
Application Date: 2021-02-24
Inventor: Tianwei LIN , Xin LI , Fu LI , Dongliang HE , Hao SUN , Henan ZHANG
Abstract: A method and apparatus for processing a video frame are provided. The method may include: converting a feature map of the previous frame, using an optical flow generated from the previous frame and the next frame of adjacent frames in a video, to obtain a converted feature map; determining a weight of the converted feature map based on an error of the optical flow, and obtaining a fused feature map from a weighted combination of features of the converted feature map and features of the feature map of the next frame; and updating the feature map of the next frame to the fused feature map.
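A sketch of the warp-and-fuse step in PyTorch; warping via `grid_sample` and the exponential mapping from flow error to weight are assumptions for illustration:

```python
import torch
import torch.nn.functional as F


def fuse_features(prev_feat, next_feat, flow, flow_error, alpha=10.0):
    """prev_feat, next_feat: (N, C, H, W); flow: (N, 2, H, W) in pixels;
    flow_error: (N, 1, H, W), larger values mean less reliable flow."""
    n, c, h, w = prev_feat.shape

    # Build a sampling grid that follows the optical flow.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    # Normalize coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)          # (N, H, W, 2)

    warped = F.grid_sample(prev_feat, grid, align_corners=True)  # converted feature map

    # Assumed weighting: confidence decays with the flow error.
    weight = torch.exp(-alpha * flow_error)
    fused = weight * warped + (1.0 - weight) * next_feat
    return fused  # becomes the updated feature map of the next frame


if __name__ == "__main__":
    prev_f, next_f = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
    flow, err = torch.zeros(1, 2, 16, 16), torch.zeros(1, 1, 16, 16)
    print(fuse_features(prev_f, next_f, flow, err).shape)
```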
-
Publication No.: US20210227152A1
Publication Date: 2021-07-22
Application No.: US17025255
Application Date: 2020-09-18
Inventor: Henan ZHANG , Xin LI , Fu LI , Tianwei LIN , Hao SUN , Shilei WEN , Hongwu ZHANG , Errui DING
Abstract: Embodiments of the present disclosure provide a method and apparatus for generating an image. The method may include: receiving a first image including a face, input by a user in an interactive scene; presenting the first image to the user; inputting the first image into a pre-trained generative adversarial network in a backend to obtain a second image output by the generative adversarial network, where the generative adversarial network uses face attribute information generated based on the input image as a constraint; and presenting the second image to the user in response to obtaining the second image output by the generative adversarial network in the backend.
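A high-level sketch of the described interaction flow; `extract_face_attributes` and `ConditionalGenerator` are hypothetical placeholders for the pre-trained generative adversarial network and its attribute constraint:

```python
import numpy as np


def extract_face_attributes(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real module would predict face attribute information
    # that the generator uses as a constraint.
    return np.zeros(40, dtype=np.float32)


class ConditionalGenerator:
    """Hypothetical stand-in for the pre-trained GAN generator."""

    def generate(self, image: np.ndarray, attributes: np.ndarray) -> np.ndarray:
        # Placeholder for the attribute-constrained generation step.
        return image.copy()


def handle_user_image(first_image: np.ndarray) -> np.ndarray:
    # Frontend: the first image is presented to the user (omitted here).
    attributes = extract_face_attributes(first_image)              # constraint
    second_image = ConditionalGenerator().generate(first_image, attributes)
    # Frontend: the second image is presented once the backend returns it.
    return second_image


if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.uint8)
    print(handle_user_image(img).shape)
```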
-
Publication No.: US20210216783A1
Publication Date: 2021-07-15
Application No.: US17144523
Application Date: 2021-01-08
Inventor: Xiang LONG , Dongliang HE , Fu LI , Xiang ZHAO , Tianwei LIN , Hao SUN , Shilei WEN , Errui DING
Abstract: A method includes screening, by a video-clip screening module in a video description model, a plurality of video proposal clips acquired from a video to be analyzed, to obtain a plurality of video clips suitable for description. Each screened video clip is then described by a video-clip describing module. This avoids describing all of the video proposal clips: only the screened clips, which are strongly correlated with the video and suitable for description, are described, removing the interference that descriptions of unsuitable clips would otherwise introduce, guaranteeing the accuracy of the final clip descriptions, and improving their quality.
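A sketch of the screen-then-describe flow; the scoring function, threshold, and `describe_clip` captioner are hypothetical stand-ins for the video-clip screening and describing modules:

```python
from typing import List, Tuple

Clip = Tuple[float, float]  # (start_time, end_time) in seconds


def score_clip(clip: Clip) -> float:
    # Placeholder for the video-clip screening module: a real model would
    # estimate how strongly the clip correlates with the video content.
    start, end = clip
    return min(1.0, (end - start) / 10.0)


def describe_clip(clip: Clip) -> str:
    # Placeholder for the video-clip describing module.
    return f"clip {clip[0]:.1f}-{clip[1]:.1f}s: <generated caption>"


def describe_video(proposals: List[Clip], threshold: float = 0.5) -> List[str]:
    # Only clips deemed suitable for description are captioned,
    # avoiding descriptions of weakly related proposals.
    selected = [c for c in proposals if score_clip(c) >= threshold]
    return [describe_clip(c) for c in selected]


if __name__ == "__main__":
    print(describe_video([(0.0, 3.0), (5.0, 14.0), (20.0, 29.0)]))
```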
-
Publication No.: US20210304413A1
Publication Date: 2021-09-30
Application No.: US17344917
Application Date: 2021-06-10
Inventor: Hao SUN , Fu LI , Tianwei LIN , Dongliang HE
Abstract: An image processing method, an image processing device, and an electronic device, all relating to computer vision and deep learning, are provided. The image processing method includes: acquiring a first image and a second image; performing semantic region segmentation on the first image and the second image to acquire a first segmentation image and a second segmentation image respectively; determining an association matrix between the first segmentation image and the second segmentation image; and processing the first image in accordance with the association matrix to acquire a target image.
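A hedged sketch of building an association matrix between two segmentation images; matching semantic regions by their mean color is an illustrative assumption, not the patent's definition of the matrix:

```python
import numpy as np


def region_means(image: np.ndarray, seg: np.ndarray, num_classes: int) -> np.ndarray:
    """Mean color of each semantic class; image is HxWx3, seg is HxW class ids."""
    means = np.zeros((num_classes, 3), dtype=np.float32)
    for c in range(num_classes):
        mask = seg == c
        if mask.any():
            means[c] = image[mask].mean(axis=0)
    return means


def association_matrix(img1, seg1, img2, seg2, num_classes):
    f1 = region_means(img1, seg1, num_classes)
    f2 = region_means(img2, seg2, num_classes)
    # Similarity between every region of image 1 and every region of image 2.
    dist = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=-1)
    sim = np.exp(-dist / 255.0)
    return sim / sim.sum(axis=1, keepdims=True)  # row-normalized associations


if __name__ == "__main__":
    img1 = np.random.randint(0, 255, (32, 32, 3)).astype(np.float32)
    img2 = np.random.randint(0, 255, (32, 32, 3)).astype(np.float32)
    seg1 = np.random.randint(0, 4, (32, 32))
    seg2 = np.random.randint(0, 4, (32, 32))
    print(association_matrix(img1, seg1, img2, seg2, num_classes=4).shape)
```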
-
Publication No.: US20210216782A1
Publication Date: 2021-07-15
Application No.: US17144205
Application Date: 2021-01-08
Inventor: Tianwei LIN , Xin LI , Dongliang HE , Fu LI , Hao SUN , Shilei WEN , Errui DING
Abstract: A method and apparatus for detecting a temporal action in a video, an electronic device, and a storage medium are disclosed, relating to the field of video processing technologies. An implementation includes: acquiring an initial temporal feature sequence of a video to be detected; acquiring, by a pre-trained video-temporal-action detecting module, implicit features and explicit features of a plurality of configured temporal anchor boxes based on the initial temporal feature sequence; and acquiring, by the video-temporal-action detecting module and according to the explicit and implicit features of the temporal anchor boxes, the starting position and the ending position of a video clip containing a specified action, the category of the specified action, and the probability that the specified action belongs to that category.
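A minimal PyTorch sketch of anchor-based temporal action detection; the single pooled linear head and the anchor layout are illustrative assumptions and do not reproduce the explicit/implicit feature design described above:

```python
import torch
import torch.nn as nn


class TemporalAnchorHead(nn.Module):
    def __init__(self, feat_dim: int, num_anchors: int, num_classes: int):
        super().__init__()
        self.num_anchors = num_anchors
        self.num_classes = num_classes
        # For each anchor: start offset, end offset, and class scores.
        self.head = nn.Linear(feat_dim, num_anchors * (2 + num_classes))

    def forward(self, feats: torch.Tensor):
        """feats: (N, T, feat_dim) initial temporal feature sequence."""
        out = self.head(feats.mean(dim=1))               # pool over time
        out = out.view(-1, self.num_anchors, 2 + self.num_classes)
        boundaries = out[..., :2]                        # start / end per anchor
        class_probs = out[..., 2:].softmax(dim=-1)       # category probabilities
        return boundaries, class_probs


if __name__ == "__main__":
    head = TemporalAnchorHead(feat_dim=256, num_anchors=8, num_classes=20)
    b, p = head(torch.randn(2, 100, 256))
    print(b.shape, p.shape)  # (2, 8, 2), (2, 8, 20)
```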