Video editing apparatus, method and program for the same

    Publication number: US11508412B2

    Publication date: 2022-11-22

    Application number: US17421364

    Application date: 2019-12-25

    Abstract: Provided is a video image editing device or the like capable of editing a wraparound video image, generated by using a plurality of video images captured by multi-viewpoint cameras, so that it is comfortably viewable by viewers. Based on information about the positions and the sizes of N subjects, a polynomial expression regarding the position of the subject and a polynomial expression regarding the size of the subject are generated. Correction or interpolation of the positions of the N subjects is performed by a polynomial approximation curve using the polynomial expression regarding the position of the subject, correction or interpolation of the sizes of the N subjects is performed by a polynomial approximation curve using the polynomial expression regarding the size of the subject, and the sizes of the N subjects are expanded or contracted with an expansion/contraction parameter p. An intermediate image is generated from two images of the same size corresponding to photographing devices adjacent to each other, and a wraparound video image is generated, with a parameter Tp indicating the length of the wraparound video image.
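The abstract's core step, fitting a polynomial approximation curve to per-viewpoint subject measurements and evaluating it to both correct noisy values and interpolate missing viewpoints, can be sketched as follows. This is an illustrative least-squares fit with NumPy, not the patented implementation; the function name, the polynomial degree, and the sample data are assumptions.

```python
import numpy as np

def smooth_track(viewpoints, values, degree=3, num_out=None):
    """Fit a polynomial approximation curve to per-camera subject
    measurements (positions or sizes) and evaluate it on a dense,
    evenly spaced grid, correcting noisy values and interpolating
    viewpoints where the measurement is missing."""
    viewpoints = np.asarray(viewpoints, dtype=float)
    values = np.asarray(values, dtype=float)
    coeffs = np.polyfit(viewpoints, values, degree)  # least-squares fit
    poly = np.poly1d(coeffs)
    if num_out is None:
        num_out = len(viewpoints)
    xs = np.linspace(viewpoints.min(), viewpoints.max(), num_out)
    return xs, poly(xs)

# Hypothetical example: noisy x-positions of one subject seen from
# 8 cameras; camera 4 failed to detect the subject and is absent.
cams = [0, 1, 2, 3, 5, 6, 7, 8]
xpos = [10.0, 12.1, 13.9, 16.2, 20.1, 21.8, 24.2, 26.0]
xs, corrected = smooth_track(cams, xpos, degree=2, num_out=9)
# corrected[4] is the interpolated position for the missing camera 4.
```

The same routine would be applied separately to subject sizes, after which the abstract's expansion/contraction parameter p would scale the corrected sizes.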

    Information completion apparatus, method and program

    Publication number: US12003762B2

    Publication date: 2024-06-04

    Application number: US17613464

    Application date: 2019-05-28

    CPC classification number: H04N19/577 G06V20/40 G06V2201/07

    Abstract: A technique for interpolating positional information of a target in an image of a frame in which positional information of the target has not been acquired. An information interpolation device includes: a target information acquisition unit 4 acquiring target information that is information related to a target in an image of each frame composing an input video; an indicator determination unit 5 determining, based on the target information, an indicator indicating validity as a starting frame for each frame, the starting frame starting predetermined image processing on a target; a starting frame determination unit 6 determining a starting frame based on the determined indicator; and a target information interpolation unit 7 interpolating, when a frame in which positional information of a target is not included in target information exists among frames within a predetermined number from the determined starting frame, positional information of the target of the frame that does not include the positional information of the target by using positional information of the target included in target information of a frame other than the frame that does not include the positional information of the target.
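The interpolation step described above, filling in the target's position for a frame that lacks it from frames that do include it, can be illustrated with a minimal linear-interpolation sketch. The dictionary representation and the choice of linear interpolation are assumptions for illustration; the patent describes only that neighbouring frames' positional information is used.

```python
def interpolate_positions(track):
    """Fill frames whose target position is missing (None) by linear
    interpolation between the nearest earlier and later frames that
    do have a position. `track` maps frame index -> (x, y) or None."""
    frames = sorted(track)
    known = [f for f in frames if track[f] is not None]
    filled = dict(track)
    for f in frames:
        if track[f] is not None:
            continue
        prev = max((k for k in known if k < f), default=None)
        nxt = min((k for k in known if k > f), default=None)
        if prev is None or nxt is None:
            continue  # cannot interpolate past the sequence boundary
        t = (f - prev) / (nxt - prev)
        (x0, y0), (x1, y1) = filled[prev], filled[nxt]
        filled[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return filled

# Frame 1 has no detection; its position is recovered from frames 0 and 2.
filled = interpolate_positions({0: (0.0, 0.0), 1: None, 2: (4.0, 2.0)})
# filled[1] == (2.0, 1.0)
```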

    Region extraction model learning apparatus, region extraction model learning method, and program

    Publication number: US11816839B2

    Publication date: 2023-11-14

    Application number: US15734443

    Application date: 2019-05-20

    Abstract: Provided is a technique for extracting a person region from an image that can reduce the cost of preparing learning data. Included are a composited learning data generating unit that generates, from already-existing learning data that is a set of an image including a person region and a mask indicating the person region, and a background image to serve as a background of a composited image, composited learning data that is a set of a composited image and a compositing mask indicating a person region in the composited image, and a learning unit that learns model parameters using the composited learning data. The composited learning data generating unit includes a compositing parameter generating unit that generates compositing parameters that are a set of an enlargement factor, a degree of translation, and a degree of rotation, using the mask of the learning data, and a composited image and compositing mask generating unit that extracts a compositing person region from an image in the learning data using a mask of the learning data, generates the composited image from the background image and the compositing person region, using the compositing parameters, generates the compositing mask from a mask generating image and the compositing person region that are the same size as the composited image, using the compositing parameters, and generates the composited learning data.
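The compositing described above, pasting a masked person region onto a new background under an enlargement factor and a translation while producing the matching compositing mask, can be sketched as below. This is a simplified NumPy illustration (rotation, which the abstract also covers, is omitted); the function signature and nearest-neighbour scaling are assumptions.

```python
import numpy as np

def composite(person_img, mask, background, scale=1.0, dx=0, dy=0):
    """Paste the masked person region onto a background image,
    applying a nearest-neighbour enlargement factor and a translation.
    Returns the composited image and the compositing mask that marks
    the person pixels in the composited image."""
    h, w = person_img.shape[:2]
    sh, sw = int(h * scale), int(w * scale)
    ys = (np.arange(sh) / scale).astype(int)  # nearest-neighbour indices
    xs = (np.arange(sw) / scale).astype(int)
    scaled_img = person_img[ys][:, xs]
    scaled_mask = mask[ys][:, xs]
    out = background.copy()
    out_mask = np.zeros(background.shape[:2], dtype=bool)
    bh, bw = background.shape[:2]
    for y in range(sh):
        for x in range(sw):
            ty, tx = y + dy, x + dx
            if 0 <= ty < bh and 0 <= tx < bw and scaled_mask[y, x]:
                out[ty, tx] = scaled_img[y, x]   # person pixel wins
                out_mask[ty, tx] = True          # record in the mask
    return out, out_mask
```

Looping over scale, translation, and background choices yields many composited (image, mask) training pairs from a single annotated source image, which is what lets this approach reduce annotation cost.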

    Height estimation method, height estimation apparatus, and program

    Publication number: US12215964B2

    Publication date: 2025-02-04

    Application number: US17800588

    Application date: 2020-02-20

    Abstract: A height estimation method performed by a height estimation apparatus includes a first feature point extraction step of extracting a feature point coordinate, a first coordinate estimation step of estimating a coordinate of a first subject frame, a pre-generation step of deriving a height of the first subject frame and generating a distance addition pattern and a correction coefficient for an individual missing pattern, a second feature point extraction step of extracting a feature point coordinate from a second input image, a second coordinate estimation step of estimating a coordinate of a second subject frame and estimating a coordinate of an object frame, a subject data selection step of selecting the individual missing pattern and the correction coefficient in accordance with the feature point coordinate, an object data selection step of selecting an object height, and a height estimation step of adding up a distance between a feature point coordinate and another feature point coordinate extracted in accordance with the missing pattern and deriving an estimated value of a height of the subject in accordance with a result of adding up the distance, the correction coefficient, the object height, and the coordinates of the object frame.
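The final step of the abstract, adding up distances between feature points and converting the total into an estimated height using the correction coefficient and a reference object of known height, can be sketched as follows. The body chain, the linear pixel-to-height conversion, and all names are illustrative assumptions; the patent's actual missing-pattern handling is more elaborate.

```python
import math

def estimate_height(feature_points, chain, correction,
                    object_height, object_top_y, object_bottom_y):
    """Sum the distances between consecutive feature points along a
    body chain (e.g. ankle -> knee -> hip -> head), apply a correction
    coefficient for the selected missing pattern, then convert the
    pixel total into a real-world height using an object of known
    height whose frame is visible in the image."""
    total = 0.0
    for a, b in zip(chain, chain[1:]):
        (xa, ya), (xb, yb) = feature_points[a], feature_points[b]
        total += math.hypot(xb - xa, yb - ya)    # pixel distance
    corrected = total * correction               # corrected pixel height
    pixels_per_unit = abs(object_bottom_y - object_top_y) / object_height
    return corrected / pixels_per_unit           # real-world height

# Hypothetical example: a subject whose chain spans 100 pixels, with a
# 1.0 m reference object spanning 50 pixels, gives a 2.0 m estimate.
points = {"ankle": (0, 100), "knee": (0, 60), "hip": (0, 20), "head": (0, 0)}
h = estimate_height(points, ["ankle", "knee", "hip", "head"],
                    correction=1.0, object_height=1.0,
                    object_top_y=0, object_bottom_y=50)
```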

    Virtual environment construction apparatus, method, and computer readable medium

    Publication number: US10850177B2

    Publication date: 2020-12-01

    Application number: US16070382

    Application date: 2017-01-26

    Abstract: Preliminary experience of a match from a player's perspective is enabled. A virtual environment material storage 13 has stored therein virtual environment materials for reproducing a dynamic object and a static object on a virtual environment. A dynamic object sensing unit 11 chronologically measures a position and a posture of the dynamic object in a real environment and generates position and posture information composed of one movement action. A presentation sequence acquisition unit 17 obtains a presentation sequence including position and posture information of a plurality of different kinds of dynamic objects. A virtual environment construction unit 14 synthesizes a virtual environment material of the dynamic object and a virtual environment material of the static object based on the presentation sequence to construct the virtual environment.
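The data flow above, time-stamped position-and-posture sequences per dynamic object, combined with statically placed objects into one queryable environment, can be sketched with simple containers. All class and field names here are illustrative assumptions, not the patent's structures.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    """One time-stamped position-and-posture sample of a dynamic object."""
    t: float
    position: tuple   # (x, y, z)
    rotation: tuple   # e.g. Euler angles (roll, pitch, yaw)

@dataclass
class VirtualEnvironment:
    static_objects: list = field(default_factory=list)  # placed once
    dynamic_tracks: dict = field(default_factory=dict)  # name -> [Pose]

    def add_sequence(self, name, poses):
        """Register a presentation sequence for one dynamic object."""
        self.dynamic_tracks[name] = sorted(poses, key=lambda p: p.t)

    def state_at(self, t):
        """Return the latest pose of every dynamic object at time t,
        i.e. the snapshot a renderer would draw for that instant."""
        return {name: max((p for p in poses if p.t <= t),
                          key=lambda p: p.t, default=None)
                for name, poses in self.dynamic_tracks.items()}

env = VirtualEnvironment(static_objects=["court"])
env.add_sequence("ball", [Pose(0.0, (0, 0, 0), (0, 0, 0)),
                          Pose(1.0, (1, 0, 0), (0, 0, 0))])
snapshot = env.state_at(0.5)   # ball still at its t=0.0 pose
```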
