Abstract:
A method for optimizing a photographing pose of a user, where the method is applied to an electronic device, and the method includes: displaying a photographing interface of a camera of the electronic device; obtaining a to-be-taken image in the photographing interface; determining, based on the to-be-taken image, that the photographing interface includes a portrait; entering a pose recommendation mode; and presenting a recommended human pose picture to a user in a predetermined preview manner, where the human pose picture is at least one picture that is selected from a picture library through metric learning and that has a top-ranked similarity to the to-be-taken image, and where the similarity is an overall similarity obtained by fusing a background similarity and a foreground similarity.
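The similarity fusion and ranking described above can be illustrated with a minimal sketch. The linear fusion weight `alpha`, the function name `recommend_poses`, and the toy similarity scores are assumptions for illustration only; the abstract does not specify how the background and foreground similarities are fused or how the metric-learning scores are produced.

```python
import numpy as np

def recommend_poses(bg_sim, fg_sim, alpha=0.5, top_k=3):
    """Rank library pictures by an overall similarity that fuses the
    background and foreground similarities (hypothetical linear fusion)."""
    overall = alpha * np.asarray(bg_sim) + (1 - alpha) * np.asarray(fg_sim)
    # indices of the top_k pictures, most similar first
    return np.argsort(overall)[::-1][:top_k]

# toy per-picture similarities for a 5-picture library
bg = [0.9, 0.3, 0.7, 0.4, 0.1]
fg = [0.1, 0.8, 0.6, 0.5, 0.3]
print(recommend_poses(bg, fg, alpha=0.5, top_k=2))  # → [2 1]
```

With equal weights, picture 2 scores 0.65 and picture 1 scores 0.55, so those two are presented first.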
Abstract:
This application relates to the field of artificial intelligence and the field of computer vision. The method includes performing feature extraction on an image to obtain a basic feature map of the image, and determining a proposal of a region possibly including a pedestrian in the image. The basic feature map of the image is then processed to obtain an object visibility map in which the response to a visible pedestrian part is greater than the response to a blocked pedestrian part or to the background. The method further performs weighted summation processing on the object visibility map and the basic feature map to obtain an enhanced feature map of the image, and determines, based on the proposal of the image and the enhanced feature map of the image, a bounding box including a pedestrian in the image and a confidence level of the bounding box including the pedestrian.
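The weighted-summation step can be sketched as follows, assuming the visibility map acts as a per-location weight on the basic feature map. The fusion weight `w` and the exact form of the summation are assumptions; in the described method the fusion would be learned rather than fixed.

```python
import numpy as np

def enhance_features(basic_fmap, visibility_map, w=1.0):
    """One plausible weighted summation of the object visibility map and
    the basic feature map (a sketch; real fusion weights are learned).

    basic_fmap:     (C, H, W) basic feature map
    visibility_map: (H, W), higher where a pedestrian part is visible
    """
    # amplify channels at locations with high visibility response
    return basic_fmap + w * visibility_map[None, :, :] * basic_fmap

fmap = np.ones((8, 4, 4))                    # toy basic feature map
vis = np.zeros((4, 4))
vis[1:3, 1:3] = 1.0                          # visible pedestrian region
enhanced = enhance_features(fmap, vis)       # doubled inside the region
```

Locations covered by the visible part end up with a stronger response than blocked or background locations, which is what makes the subsequent bounding-box scoring more robust to occlusion.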
Abstract:
An image characteristic estimation method and device are presented. The method includes extracting at least two eigenvalues of input image data and executing the following operations for each extracted eigenvalue until all extracted eigenvalues have been processed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter in order to obtain a first matrix vector corresponding to the eigenvalue. When a first matrix vector corresponding to each extracted eigenvalue has been obtained, second matrix vectors for the at least two extracted eigenvalues are obtained using a convolutional network calculation method according to the first matrix vector corresponding to each eigenvalue, and a status of an image characteristic in the image data is estimated according to the second matrix vectors. In this way, estimation accuracy is effectively improved.
Abstract:
A method, an apparatus, and a terminal for reconstructing a three-dimensional object, where the method includes acquiring two-dimensional line drawing information; segmenting the two-dimensional line drawing, according to the line drawing information and a degree of freedom, to obtain at least one line sub-drawing, where the degree of freedom is the smallest quantity of vertices that need to be known to determine the spatial location of a three-dimensional object that consists of planes; reconstructing a three-dimensional sub-object according to each line sub-drawing; and combining all three-dimensional sub-objects to obtain the three-dimensional object. Hence, the three-dimensional object can be automatically reconstructed from the two-dimensional line drawing information.
Abstract:
An image processing method in which an electronic device obtains N images, where the N images have a same quantity of pixels and a same pixel location arrangement, and N is an integer greater than 1. The electronic device obtains, based on feature values of the pixels located at a same location in the N images, a reference value for the corresponding location; determines a target pixel for each location based on the reference value of the location; and generates a target image based on the target pixel of each location.
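As a concrete but hypothetical instance of this pipeline, the sketch below uses the per-location median across the N images as the reference value and picks, as the target pixel, the pixel closest to that reference. The abstract does not specify how the reference value or the target pixel is actually computed.

```python
import numpy as np

def fuse_images(images):
    """Fuse N same-sized images into one target image.

    Reference value per location: the median across the N images
    (an assumed choice). Target pixel: the input pixel closest to
    that reference.
    """
    stack = np.stack(images).astype(float)    # (N, H, W)
    ref = np.median(stack, axis=0)            # reference value per location
    # index of the image whose pixel is closest to the reference
    idx = np.abs(stack - ref).argmin(axis=0)  # (H, W)
    target = np.take_along_axis(stack, idx[None], axis=0)[0]
    return target

imgs = [np.array([[1, 9]]), np.array([[2, 2]]), np.array([[3, 1]])]
print(fuse_images(imgs))  # → [[2. 2.]]
```

Choosing an actual input pixel (rather than the median itself) keeps the output free of averaged, blurred values, which is one plausible motivation for the reference-then-select design.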
Abstract:
In one embodiment, a target tracking method includes: receiving a current picture frame that includes a target object; determining, based on a drift determining model, whether a tracker drifts while tracking the target object in the current frame, where the drift determining model is obtained through modeling based on the largest response values of training samples used to train the model, where a training sample is collected from a training picture that includes the target object, and where the response value of a sample indicates the probability that the sample is the target object in the training picture; and outputting a tracking drift result, which indicates either that drift is generated for the tracking of the target object or that no drift is generated.
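For illustration, the drift determining model can be reduced to a threshold on the largest response value in the current frame; the fixed `threshold` below stands in for the model that would be learned from the largest response values of the training samples.

```python
import numpy as np

def detect_drift(response_map, threshold):
    """Decide whether the tracker has drifted in the current frame.

    A low peak response suggests the tracker is no longer centered on
    the target object. The threshold is a stand-in for the learned
    drift determining model (an assumption for this sketch).
    """
    peak = float(np.max(response_map))
    return peak < threshold  # True → drift is generated

resp = np.array([[0.10, 0.30],
                 [0.25, 0.20]])       # toy tracker response map
print(detect_drift(resp, threshold=0.5))  # → True (drift)
```

A real model would be fitted to the distribution of peak responses observed on training pictures, rather than using one hand-set cutoff.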
Abstract:
In a method for selecting pictures from a sequence of pictures of an object in motion, a computerized user device determines, for each picture in the sequence of pictures, a value of a motion feature of the object. Based on analyzing the values of the motion feature of the pictures in the sequence, the device identifies a first subset of pictures from the pictures in the sequence. The device then selects, based on a second selection criterion, a second subset of pictures from the first subset of pictures. The pictures in the second subset are displayed to a user for further selection.
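The two-stage selection can be sketched as follows. Taking the pictures with the largest motion-feature values as the first subset and an even temporal spread as the second criterion are both assumptions, since the abstract names neither criterion.

```python
def select_pictures(motion_values, k1=4, k2=2):
    """Two-stage selection sketch over a sequence of pictures.

    First subset: the k1 pictures with the largest motion-feature
    values (assumed first criterion). Second subset: k2 of those,
    spread evenly over the sequence (assumed second criterion).
    """
    order = sorted(range(len(motion_values)),
                   key=lambda i: motion_values[i], reverse=True)
    first = sorted(order[:k1])          # first subset, in sequence order
    step = max(1, len(first) // k2)
    second = first[::step][:k2]         # second subset for display
    return first, second

vals = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7]   # toy motion-feature values
print(select_pictures(vals))            # → ([1, 2, 3, 5], [1, 3])
```

The second subset is what would be displayed to the user for the final manual pick.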
Abstract:
A body relationship estimation method and apparatus are disclosed. The method includes: obtaining a target picture; calculating a first body relationship feature of two persons according to at least one of first location information of a body part of each of the two persons in the target picture or second location information of body parts of the two persons, where the first location information is obtained by performing single-person gesture estimation on each person, and the second location information is obtained by performing two-person joint gesture estimation on the two persons when the first location information indicates that the body parts of the two persons overlap; and determining a body relationship between the two persons according to the first body relationship feature.