METHOD FOR MATCHING IMAGE FEATURE POINT, ELECTRONIC DEVICE AND STORAGE MEDIUM

    Publication No.: US20220351495A1

    Publication Date: 2022-11-03

    Application No.: US17865261

    Application Date: 2022-07-14

    Abstract: A method for matching an image feature point, an electronic device, and a storage medium are provided. The method may include: for images in an acquired image sequence, performing operations including: obtaining a mapping image of a current image based on mapping transformation information between adjacent images before the current image; determining in the mapping image a target area for matching with a feature point in a last image frame prior to the current image; matching the feature point in the last image frame with a feature point in the target area corresponding to the feature point, to determine matching information between the feature point of the current image and the feature point of the last image frame; and determining mapping transformation information between the current image and the last image frame, based on the matching information.
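
    The steps above amount to restricting descriptor matching to a predicted target area. The following Python sketch illustrates that idea with OpenCV and NumPy; it is not the patented implementation, and the ORB detector, the search radius, the function name match_with_prior_homography, and the assumption that prev_H carries the mapping transformation estimated from the adjacent frames before the current image are all illustrative choices.

        import cv2
        import numpy as np

        def match_with_prior_homography(prev_img, curr_img, prev_H, radius=40.0):
            # Detect ORB feature points in the last frame and the current frame.
            orb = cv2.ORB_create(1000)
            kp_prev, des_prev = orb.detectAndCompute(prev_img, None)
            kp_curr, des_curr = orb.detectAndCompute(curr_img, None)

            # Warp current-frame keypoint locations with the prior homography; the warped
            # coordinates play the role of the "mapping image" prediction in the abstract.
            pts_curr = np.float32([kp.pt for kp in kp_curr]).reshape(-1, 1, 2)
            warped = cv2.perspectiveTransform(pts_curr, prev_H).reshape(-1, 2)

            # For each feature of the last frame, match only against candidates whose
            # predicted location falls inside a small target area around it.
            bf = cv2.BFMatcher(cv2.NORM_HAMMING)
            matches = []
            for i, kp in enumerate(kp_prev):
                near = np.where(np.linalg.norm(warped - kp.pt, axis=1) < radius)[0]
                if near.size == 0:
                    continue
                cand = bf.match(des_prev[i:i + 1], des_curr[near])
                if cand:
                    best = min(cand, key=lambda m: m.distance)
                    matches.append((i, int(near[best.trainIdx])))

            # Estimate the mapping transformation between the current and the last frame.
            if len(matches) >= 4:
                src = np.float32([kp_prev[a].pt for a, _ in matches])
                dst = np.float32([kp_curr[b].pt for _, b in matches])
                H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
                return matches, H
            return matches, None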

    METHOD FOR GENERATING DEPTH MAP, ELECTRONIC DEVICE AND STORAGE MEDIUM

    Publication No.: US20220215565A1

    Publication Date: 2022-07-07

    Application No.: US17703731

    Application Date: 2022-03-24

    Abstract: A method for generating a depth map, an electronic device and a storage medium are provided. The method includes: obtaining a point cloud map and a visual image of a scene; generating a first depth value of each pixel in the visual image based on the point cloud map and the visual image; determining a three-dimensional coordinate location of each pixel in a world coordinate system based on a coordinate location and the first depth value of each pixel in the visual image; generating a second depth value of each pixel by inputting the three-dimensional coordinate location and pixel information of each pixel into a depth correction model; and generating the depth map of the scene based on the second depth value of each pixel.
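
    As a worked illustration of the back-projection step (coordinate location plus first depth value lifted to a three-dimensional location in the world coordinate system), here is a minimal NumPy sketch. The pinhole intrinsics fx, fy, cx, cy, the 4x4 cam_to_world extrinsic, and the correction_model callable standing in for the learned depth correction model are assumptions, not details taken from the patent.

        import numpy as np

        def pixels_to_world(first_depth, fx, fy, cx, cy, cam_to_world):
            # Lift every pixel (u, v) with its first depth value to a 3D world point.
            h, w = first_depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) / fx * first_depth            # camera-space X
            y = (v - cy) / fy * first_depth            # camera-space Y
            pts_cam = np.stack([x, y, first_depth, np.ones_like(first_depth)], axis=-1)
            pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T
            return pts_world[:, :3].reshape(h, w, 3)

        def second_depth(first_depth, rgb, intrinsics, cam_to_world, correction_model):
            # Feed each pixel's 3D location plus its pixel information to the correction model.
            fx, fy, cx, cy = intrinsics
            xyz = pixels_to_world(first_depth, fx, fy, cx, cy, cam_to_world)
            features = np.concatenate([xyz, rgb.astype(np.float32) / 255.0], axis=-1)
            return correction_model(features)          # learned model, supplied by the caller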

    IMAGE STITCHING

    Publication No.: US20220215507A1

    Publication Date: 2022-07-07

    Application No.: US17552182

    Application Date: 2021-12-15

    Abstract: An image stitching method and apparatus, a device, and a medium are provided. An implementation solution is: obtaining a first image and a second image, where the first image and the second image have an overlapping area; determining a first stitching line segment of the first image and a second stitching line segment of the second image, where the second stitching line segment has a first matching line segment in the first image; determining a first stitching area of the first image based on the first stitching line segment and the first matching line segment; configuring a first target canvas at least based on the first stitching area; determining, for each pixel of a plurality of pixels included in the first stitching area, a corresponding mapping pixel in the first target canvas; and mapping pixel values of the plurality of pixels included in the first stitching area to the corresponding mapping pixels in the first target canvas, respectively, to obtain an image to be stitched of the first image.
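
    The pixel-mapping step described above can be sketched as follows; this is an illustration only. It assumes the first stitching area is given as a boolean mask, that a single 3x3 homography H maps first-image pixels onto the canvas, and that the canvas size is supplied by the caller; determining the stitching line segments themselves is not shown.

        import cv2
        import numpy as np

        def map_area_to_canvas(image, area_mask, H, canvas_hw):
            # area_mask selects the first stitching area; H maps image pixels onto the canvas.
            ys, xs = np.nonzero(area_mask)
            pts = np.float32(np.stack([xs, ys], axis=1)).reshape(-1, 1, 2)
            mapped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

            # Copy each pixel value to its corresponding mapping pixel in the target canvas.
            canvas = np.zeros((canvas_hw[0], canvas_hw[1], 3), dtype=image.dtype)
            mx = np.clip(np.round(mapped[:, 0]).astype(int), 0, canvas_hw[1] - 1)
            my = np.clip(np.round(mapped[:, 1]).astype(int), 0, canvas_hw[0] - 1)
            canvas[my, mx] = image[ys, xs]
            return canvas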

    MODEL TRAINING METHOD AND APPARATUS, PEDESTRIAN RE-IDENTIFICATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE

    Publication No.: US20240221346A1

    Publication Date: 2024-07-04

    Application No.: US17800880

    Application Date: 2022-01-29

    CPC classification number: G06V10/44 G06T9/00 G06V10/761 G06V10/762 G06V10/806

    Abstract: The present disclosure provides a model training method and apparatus, a pedestrian re-identification method and apparatus, and an electronic device, and relates to the field of artificial intelligence, and specifically to computer vision and deep learning technologies, which can be applied to smart city scenarios. A specific implementation solution is: performing, by using a first encoder, feature extraction on a first pedestrian image and a second pedestrian image in a sample dataset, to obtain an image feature of the first pedestrian image and an image feature of the second pedestrian image; fusing the image feature of the first pedestrian image and the image feature of the second pedestrian image, to obtain a fused feature; performing, by using a first decoder, feature decoding on the fused feature, to obtain a third pedestrian image; and determining the third pedestrian image as a negative sample image of the first pedestrian image, and using the first pedestrian image and the negative sample image to train a first preset model to convergence, to obtain a pedestrian re-identification model. The embodiments of the present disclosure can improve the effect of the model in distinguishing between pedestrians with similar appearances but different identities.
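
    A minimal PyTorch sketch of the negative-sample synthesis and training step described above. The weighted-sum feature fusion, the hinge loss on cosine similarity, and the helper names are assumptions; the abstract does not specify the fusion operator, the loss, or the network architectures, so encoder, decoder, and model are placeholders supplied by the caller.

        import torch
        import torch.nn.functional as F

        def make_negative(encoder, decoder, img_a, img_b, alpha=0.5):
            # Fuse the image features of two pedestrians and decode a third pedestrian image.
            feat_a = encoder(img_a)                          # feature of the first pedestrian image
            feat_b = encoder(img_b)                          # feature of the second pedestrian image
            fused = alpha * feat_a + (1.0 - alpha) * feat_b  # one possible fusion scheme
            return decoder(fused)                            # synthesized negative sample image

        def training_step(model, encoder, decoder, img_a, img_b, optimizer, margin=0.3):
            # Push the first image away from its look-alike negative sample so the model
            # learns to separate pedestrians with similar appearances but different identities.
            negative = make_negative(encoder, decoder, img_a, img_b).detach()
            emb_a = F.normalize(model(img_a), dim=1)
            emb_n = F.normalize(model(negative), dim=1)
            loss = F.relu((emb_a * emb_n).sum(dim=1) - margin).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()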

    Method for Processing Video, Electronic Device, and Storage Medium

    Publication No.: US20230245364A1

    Publication Date: 2023-08-03

    Application No.: US17884231

    Application Date: 2022-08-09

    CPC classification number: G06T13/20 G06T7/20 G06T7/80 G06T2207/30241

    Abstract: The present disclosure provides a method for processing a video, an electronic device, and a storage medium. A specific implementation solution includes: generating a first three-dimensional movement trajectory of a virtual three-dimensional model in world space based on attribute information of a target contact surface of the virtual three-dimensional model in the world space; converting the first three-dimensional movement trajectory into a second three-dimensional movement trajectory in camera space, where the camera space is three-dimensional space for shooting an initial video; determining a movement sequence of the virtual three-dimensional model in the camera space according to the second three-dimensional movement trajectory; and compositing the virtual three-dimensional model and the initial video by means of texture information of the virtual three-dimensional model and the movement sequence, to obtain a to-be-played target video.
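
    The conversion from the first (world-space) trajectory to the second (camera-space) trajectory and then to a per-frame movement sequence can be illustrated with a small NumPy sketch; the 4x4 world_to_cam extrinsic and the linear resampling of the trajectory to one pose per frame are assumptions, and the texture-based compositing step is not shown.

        import numpy as np

        def world_to_camera_trajectory(traj_world, world_to_cam):
            # traj_world: (N, 3) points of the first three-dimensional movement trajectory.
            homog = np.concatenate([traj_world, np.ones((len(traj_world), 1))], axis=1)
            return (homog @ world_to_cam.T)[:, :3]          # second trajectory, in camera space

        def movement_sequence(traj_cam, n_frames):
            # Resample the camera-space trajectory into one model position per video frame.
            t_src = np.linspace(0.0, 1.0, len(traj_cam))
            t_dst = np.linspace(0.0, 1.0, n_frames)
            return np.stack([np.interp(t_dst, t_src, traj_cam[:, k]) for k in range(3)], axis=1)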

    HUMAN-OBJECT INTERACTION DETECTION

    Publication No.: US20230052389A1

    Publication Date: 2023-02-16

    Application No.: US17976662

    Application Date: 2022-10-28

    Abstract: A human-object interaction detection method, a neural network and a training method therefor are provided. The human-object interaction detection method includes: extracting a plurality of first target features and one or more first motion features from an image feature of an image to be detected; fusing each first target feature and some of the first motion features to obtain enhanced first target features; fusing each first motion feature and some of the first target features to obtain enhanced first motion features; processing the enhanced first target features to obtain target information of a plurality of targets including human targets and object targets; processing the enhanced first motion features to obtain motion information of one or more motions, where each motion is associated with one human target and one object target; and matching the plurality of targets with the one or more motions to obtain a human-object interaction detection result.
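
    A minimal PyTorch sketch of the mutual feature-enhancement and matching steps. Using scaled dot-product attention as the fusion operator and nearest-center assignment as the matching rule are assumptions made for illustration; the abstract only states that target and motion features are fused with each other and that the targets are matched with the motions.

        import torch
        import torch.nn.functional as F

        def cross_enhance(target_feats, motion_feats):
            # target_feats: (T, d) first target features; motion_feats: (M, d) first motion features.
            scale = target_feats.shape[1] ** 0.5
            attn_t = F.softmax(target_feats @ motion_feats.T / scale, dim=1)
            enhanced_targets = target_feats + attn_t @ motion_feats   # targets absorb motion context
            attn_m = F.softmax(motion_feats @ target_feats.T / scale, dim=1)
            enhanced_motions = motion_feats + attn_m @ target_feats   # motions absorb target context
            return enhanced_targets, enhanced_motions

        def match_motions_to_targets(human_centers, object_centers, motion_human_pts, motion_object_pts):
            # Assign each predicted motion to the nearest human target and the nearest object target.
            d_h = torch.cdist(motion_human_pts, human_centers)
            d_o = torch.cdist(motion_object_pts, object_centers)
            return d_h.argmin(dim=1), d_o.argmin(dim=1)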
