COMPUTER VISION-BASED THIN OBJECT DETECTION
    1.
    Invention Application

    Publication No.: US20200226392A1

    Publication Date: 2020-07-16

    Application No.: US16631935

    Application Date: 2018-05-23

    Abstract: Implementations of the subject matter described herein provide a solution for thin object detection based on computer vision technology. In the solution, a plurality of images containing at least one thin object to be detected are obtained. A plurality of edges are extracted from the plurality of images, and respective depths of the plurality of edges are determined. The at least one thin object contained in the plurality of images is then identified based on the respective depths of the plurality of edges, the identified at least one thin object being represented by at least one of the plurality of edges. A thin object is an object with a very small ratio of cross-sectional area to length. Such objects are usually difficult to detect with conventional detection solutions, but the implementations of the present disclosure solve this problem effectively.
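
    As a rough illustration only, the sketch below follows the pipeline the abstract describes: extract edges from each image, sample a depth map at the edge pixels, and keep edges whose depths are plausible as candidate thin objects. The Canny edge detector, the external depth maps, and all thresholds are assumptions for illustration, not the patented method.

    import cv2
    import numpy as np

    def extract_edges(image: np.ndarray) -> np.ndarray:
        """Return a binary edge map (Canny is used here purely for illustration)."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return cv2.Canny(gray, 50, 150)

    def edge_depths(edge_map: np.ndarray, depth_map: np.ndarray) -> np.ndarray:
        """Sample the depth map at the pixels belonging to edges."""
        ys, xs = np.nonzero(edge_map)
        return depth_map[ys, xs]

    def detect_thin_objects(images, depth_maps, near=0.3, far=5.0):
        """Keep edges whose median depth falls in a plausible range and treat
        them as candidate thin objects, each represented by its edge map."""
        candidates = []
        for image, depth_map in zip(images, depth_maps):
            edges = extract_edges(image)
            depths = edge_depths(edges, depth_map)
            if depths.size and near < float(np.median(depths)) < far:
                candidates.append((edges, float(np.median(depths))))
        return candidates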

    IMMERSIVE VIDEO CONFERENCE SYSTEM

    Publication No.: US20250030816A1

    Publication Date: 2025-01-23

    Application No.: US18710438

    Application Date: 2022-11-10

    Abstract: According to implementations of the subject matter described herein, there is provided a solution for an immersive video conference. In the solution, a conference mode for the video conference is first determined, the conference mode indicating a layout of a virtual conference space for the video conference, and viewpoint information associated with a second participant in the video conference is determined based on the layout. Furthermore, a first view of a first participant is determined based on the viewpoint information and then sent to a conference device associated with the second participant to display a conference image to the second participant. In this way, video conference participants can obtain a more authentic and immersive conference experience, and a desired virtual conference space layout can be obtained more flexibly according to their needs.
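
    A minimal sketch of that flow, assuming a small fixed set of named conference modes and placeholder callables for rendering and transport; every name, layout, and coordinate convention here is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Viewpoint:
        position: tuple   # the second participant's seat in the virtual conference space
        direction: tuple  # vector from that seat toward the first participant

    # Hypothetical conference modes, each defining a seat layout.
    LAYOUTS = {
        "side_by_side": [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        "face_to_face": [(0.0, 0.0, -1.0), (0.0, 0.0, 1.0)],
    }

    def viewpoint_for(mode: str, first_idx: int, second_idx: int) -> Viewpoint:
        """Derive the second participant's viewpoint from the layout of the chosen mode."""
        seats = LAYOUTS[mode]
        sx, sy, sz = seats[second_idx]
        fx, fy, fz = seats[first_idx]
        return Viewpoint((sx, sy, sz), (fx - sx, fy - sy, fz - sz))

    def conference_step(mode, render_view, send_to_device):
        """render_view and send_to_device stand in for the real renderer and transport."""
        vp = viewpoint_for(mode, first_idx=0, second_idx=1)
        first_view = render_view(participant=0, viewpoint=vp)  # the first participant as seen from vp
        send_to_device(participant=1, image=first_view)        # displayed to the second participant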

    FRAME AGGREGATION NETWORK FOR SCALABLE VIDEO FACE RECOGNITION

    Publication No.: US20180060698A1

    Publication Date: 2018-03-01

    Application No.: US15254410

    Application Date: 2016-09-01

    CPC classification number: G06K9/6257 G06K9/00268 G06K9/00744 G06K9/623

    Abstract: In a video frame processing system, a feature extractor generates, based on a plurality of data sets corresponding to a plurality of frames of a video, a plurality of feature sets, respective ones of the feature sets including features extracted from respective ones of the data sets. A first stage of a feature aggregator generates a kernel for a second stage of the feature aggregator. The kernel is adapted to the content of the feature sets so as to emphasize desirable ones of the feature sets and deemphasize undesirable ones. In the second stage of the feature aggregator, the kernel generated by the first stage is applied to the plurality of feature sets to generate a plurality of significances corresponding to the plurality of feature sets. The feature sets are weighted based on the corresponding significances, and the weighted feature sets are aggregated to generate an aggregated feature set.
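
    The two-stage aggregation can be pictured with the small numpy sketch below: the first stage adapts a kernel to the content of the feature sets, the second stage applies the adapted kernel to score every feature set, and the softmax-normalized scores weight the aggregation. The array shapes and the particular adaptation rule are illustrative assumptions rather than the claimed formulation.

    import numpy as np

    def softmax(x: np.ndarray) -> np.ndarray:
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def aggregate(feature_sets: np.ndarray, kernel0: np.ndarray) -> np.ndarray:
        """feature_sets: (num_frames, feat_dim); kernel0: (feat_dim,) initial kernel."""
        # Stage 1: adapt the kernel to the content of the feature sets.
        weights0 = softmax(feature_sets @ kernel0)
        kernel1 = weights0 @ feature_sets              # content-adapted kernel

        # Stage 2: apply the adapted kernel to obtain per-frame significances,
        # weight the feature sets accordingly, and aggregate them.
        significances = feature_sets @ kernel1
        weights1 = softmax(significances)
        return weights1 @ feature_sets                 # aggregated feature set

    # Example: aggregate 30 frame-level face features of dimension 128.
    # aggregated = aggregate(np.random.randn(30, 128), kernel0=np.zeros(128))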

    TEXTURE COMPLETION
    4.
    Invention Publication
    Under Examination - Published

    Publication No.: US20240161382A1

    Publication Date: 2024-05-16

    Application No.: US18279717

    Application Date: 2021-04-26

    Abstract: According to implementations of the present disclosure, there is provided a solution for completing textures of an object. In this solution, a complete texture map of an object is generated from a partial texture map of the object according to a texture generation model. A first prediction on whether a texture of at least one block in the complete texture map is an inferred texture is determined according to a texture discrimination model. A second image of the object is generated based on the complete texture map. A second prediction on whether a first image of the object (from which the partial texture map is obtained) and the second image are generated images is determined according to an image discrimination model. The texture generation model and the texture and image discrimination models are trained based on the first and second predictions.
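
    The abstract outlines adversarial training over three models. The sketch below shows one possible generator update in PyTorch-style code; the model classes, the renderer, the label convention, and the unweighted loss sum are assumptions made only to illustrate how the two predictions could drive training, not the claimed procedure.

    import torch
    import torch.nn.functional as F

    def generator_step(gen, tex_disc, img_disc, renderer, partial_tex, first_image, opt):
        """One illustrative update of the texture generation model."""
        complete_tex = gen(partial_tex)                 # partial -> complete texture map

        # First prediction: per-block logits on whether each block of the
        # complete texture map is an inferred (generated) texture.
        block_pred = tex_disc(complete_tex)

        # Render a second image of the object from the completed texture map.
        second_image = renderer(complete_tex)

        # Second prediction: are the first and second images generated images?
        img_pred = img_disc(torch.cat([first_image, second_image], dim=0))

        # Push both discriminators toward answering "not generated" (label 0 here).
        loss = F.binary_cross_entropy_with_logits(block_pred, torch.zeros_like(block_pred)) \
             + F.binary_cross_entropy_with_logits(img_pred, torch.zeros_like(img_pred))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()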
