-
Publication Number: US11508412B2
Publication Date: 2022-11-22
Application Number: US17421364
Application Date: 2019-12-25
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Toshiaki Takeda , Dan Mikami , Yoshinori Kusachi
IPC: G11B27/00 , G11B27/031 , G11B27/34 , H04N13/282 , G06V40/10 , H04N5/262
Abstract: A video image editing device or the like is provided that can edit a wraparound video image, generated using a plurality of video images captured by multi-viewpoint cameras, so that it is comfortably viewable by viewers. Based on information about the positions and the sizes of N subjects, a polynomial expression regarding the position of the subject and a polynomial expression regarding the size of the subject are generated. Correction or interpolation of the positions of the N subjects is performed by a polynomial approximation curve using the polynomial expression regarding the position of the subject, correction or interpolation of the sizes of the N subjects is performed by a polynomial approximation curve using the polynomial expression regarding the size of the subject, and the sizes of the N subjects are expanded or contracted with an expansion/contraction parameter p. An intermediate image is generated from two images of the same size corresponding to photographing devices adjacent to each other, and a wraparound video image is generated, with a parameter Tp indicating the length of the wraparound video image.
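The abstract describes smoothing per-camera subject positions and sizes with polynomial approximation curves and scaling the sizes by an expansion/contraction parameter p. The sketch below is a minimal illustration of that idea using NumPy polynomial fitting; the function names, the polynomial degree, and the blending comment are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def smooth_track(values, degree=2):
    """Fit a polynomial approximation curve to a per-camera sequence of
    subject positions or sizes (NaN entries are treated as missing and
    filled in from the fitted curve)."""
    t = np.arange(len(values), dtype=float)
    values = np.asarray(values, dtype=float)
    valid = ~np.isnan(values)
    coeffs = np.polyfit(t[valid], values[valid], deg=degree)
    return np.polyval(coeffs, t)  # corrected/interpolated values for every camera

def scale_sizes(sizes, p=1.1):
    """Expand or contract the smoothed subject sizes with parameter p."""
    return np.asarray(sizes) * p

# Toy example: subject x-positions and sizes observed by 8 adjacent cameras,
# with one missing observation each.
x_positions = [100, 112, 125, np.nan, 150, 161, 175, 188]
sizes = [40, 42, np.nan, 47, 49, 52, 55, 57]

smooth_x = smooth_track(x_positions)
smooth_size = scale_sizes(smooth_track(sizes), p=1.1)

# An intermediate image between two adjacent cameras could then be generated
# by blending the two same-size crops, e.g. 0.5 * img_a + 0.5 * img_b.
print(np.round(smooth_x, 1))
print(np.round(smooth_size, 1))
```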
-
Publication Number: US12003762B2
Publication Date: 2024-06-04
Application Number: US17613464
Application Date: 2019-05-28
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Mariko Isogawa , Dan Mikami , Kosuke Takahashi , Yoshinori Kusachi
IPC: H04N19/00 , G06V20/40 , H04N19/577
CPC classification number: H04N19/577 , G06V20/40 , G06V2201/07
Abstract: A technique is provided for interpolating positional information of a target in a frame for which that positional information has not been acquired. An information interpolation device includes: a target information acquisition unit 4 that acquires target information, that is, information related to a target in the image of each frame composing an input video; an indicator determination unit 5 that determines, for each frame and based on the target information, an indicator indicating the frame's validity as a starting frame, i.e., the frame at which predetermined image processing on a target starts; a starting frame determination unit 6 that determines the starting frame based on the determined indicator; and a target information interpolation unit 7 that, when a frame whose target information does not include positional information of the target exists among the frames within a predetermined number of the determined starting frame, interpolates the positional information of the target in that frame using positional information of the target included in the target information of other frames.
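As a rough illustration of the interpolation step described above, the sketch below linearly interpolates missing target positions within a window of frames following a chosen starting frame, using the frames whose target information does contain a position. The window size, the choice of linear interpolation, and all names are assumptions made for this sketch only.

```python
from typing import List, Optional, Tuple

Position = Tuple[float, float]

def interpolate_positions(positions: List[Optional[Position]],
                          start: int,
                          window: int) -> List[Optional[Position]]:
    """For frames within `window` frames of the starting frame `start`,
    fill in missing target positions (None) by linearly interpolating
    between the nearest frames that do contain a position."""
    out = list(positions)
    known = [i for i, p in enumerate(positions) if p is not None]
    for i in range(start, min(start + window, len(positions))):
        if out[i] is not None:
            continue
        before = [k for k in known if k < i]
        after = [k for k in known if k > i]
        if before and after:
            a, b = before[-1], after[0]
            w = (i - a) / (b - a)
            out[i] = tuple((1 - w) * pa + w * pb
                           for pa, pb in zip(positions[a], positions[b]))
        elif before:
            out[i] = positions[before[-1]]  # hold the last known position
        elif after:
            out[i] = positions[after[0]]
    return out

# Frame 2 has no positional information for the target; interpolate it.
track = [(10.0, 5.0), (12.0, 6.0), None, (16.0, 8.0), (18.0, 9.0)]
print(interpolate_positions(track, start=0, window=5))
```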
-
Publication Number: US11998819B2
Publication Date: 2024-06-04
Application Number: US17845010
Application Date: 2022-06-21
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Kosuke Takahashi , Dan Mikami , Mariko Isogawa , Akira Kojima , Hideaki Kimata , Ayumi Matsumoto
CPC classification number: A63B69/0002 , G06F17/18 , G06T19/00 , G06T19/006 , A63B2069/0006 , A63B2069/0008
Abstract: A virtual reality system is provided that includes a video presentation apparatus, the virtual reality system comprising: the video presentation apparatus which includes circuitry configured to: obtain as input a video sequence composed of a plurality of frames and mask information specifying a complementation target region in the video sequence; separate a frame into a foreground region and a background region based on binary images representing differences between the plurality of frames included in the video sequence; determine either one of patch-search-based completion and paste synthesis as a complementation method for the complementation target region based on a number of pixels belonging to the foreground region and located within a given distance from a periphery of the complementation target region; and complement the complementation target region in accordance with the complementation method; and a virtual reality head mounted display which presents the complemented video sequence to the user.
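The choice between patch-search-based completion and paste synthesis hinges on counting foreground pixels within a given distance of the complementation target region's periphery. The sketch below isolates that decision with NumPy and SciPy; the threshold value, the dilation-based distance test, and all names are illustrative assumptions rather than the patented method.

```python
import numpy as np
from scipy import ndimage

def choose_method(foreground_mask: np.ndarray,
                  target_mask: np.ndarray,
                  distance: int = 5,
                  threshold: int = 50) -> str:
    """Count foreground pixels lying within `distance` pixels of the
    periphery of the complementation target region and pick a method."""
    # Band around the target region: dilation minus the region itself.
    dilated = ndimage.binary_dilation(target_mask, iterations=distance)
    band = dilated & ~target_mask
    n_fg_near_boundary = int(np.count_nonzero(foreground_mask & band))
    # Many moving-foreground pixels near the hole -> patch search copes better;
    # otherwise pasting background content is enough.
    return "patch_search" if n_fg_near_boundary >= threshold else "paste_synthesis"

# Toy 64x64 frame: a foreground blob overlapping the edge of the target hole.
fg = np.zeros((64, 64), dtype=bool)
fg[20:40, 18:26] = True
hole = np.zeros((64, 64), dtype=bool)
hole[25:35, 28:40] = True
print(choose_method(fg, hole))
```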
-
Publication Number: US11816839B2
Publication Date: 2023-11-14
Application Number: US15734443
Application Date: 2019-05-20
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Ayumi Matsumoto , Dan Mikami , Hideaki Kimata
CPC classification number: G06T7/11 , G06F18/214 , G06F18/217 , G06N20/00 , G06T7/30 , G06T7/70 , G06V10/267 , G06V40/103 , G06T2207/20081 , G06T2207/20212 , G06T2207/30196
Abstract: Provided is technology for extracting a person region from an image that can suppress the cost of preparing learning data. Included are a composited learning data generating unit that generates composited learning data, a set of a composited image and a compositing mask indicating a person region in the composited image, from already-existing learning data (a set of an image including a person region and a mask indicating the person region) and a background image to serve as the background of the composited image, and a learning unit that learns model parameters using the composited learning data. The composited learning data generating unit includes a compositing parameter generating unit that generates compositing parameters, a set of an enlargement factor, a degree of translation, and a degree of rotation, using the mask of the learning data, and a composited image and compositing mask generating unit that extracts a compositing person region from an image in the learning data using the mask of the learning data, generates the composited image from the background image and the compositing person region using the compositing parameters, generates the compositing mask from the compositing person region and a mask generating image of the same size as the composited image using the compositing parameters, and thereby generates the composited learning data.
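One way to picture the compositing step is sketched below: a person region is cut out with its mask, transformed by an enlargement factor, translation, and rotation, and pasted onto a background, with the same transform used to produce the compositing mask. The Pillow calls, parameter ranges, and the assumption that the mask is a grayscale (L-mode) image are choices made for this sketch, not the patented pipeline.

```python
import random
from PIL import Image

def generate_composited_sample(person_img: Image.Image,
                               person_mask: Image.Image,
                               background: Image.Image):
    """Generate one (composited image, compositing mask) pair from
    existing learning data and a background image."""
    # Compositing parameters: enlargement factor, translation, rotation.
    scale = random.uniform(0.7, 1.3)
    dx, dy = random.randint(-40, 40), random.randint(-40, 40)
    angle = random.uniform(-15, 15)

    # Apply the same geometric transform to the person region and its mask.
    w, h = person_img.size
    size = (int(w * scale), int(h * scale))
    person = person_img.resize(size).rotate(angle, expand=True)
    mask = person_mask.resize(size).rotate(angle, expand=True)

    # Paste the person region onto the background using the mask; the
    # compositing mask starts from an all-black image of the background's size.
    composited = background.copy()
    comp_mask = Image.new("L", background.size, 0)
    pos = (background.size[0] // 2 + dx, background.size[1] // 2 + dy)
    composited.paste(person, pos, mask)
    comp_mask.paste(mask, pos, mask)
    return composited, comp_mask
```

Calling this once per background image yields one composited learning sample; repeating it with freshly drawn compositing parameters expands a small labeled set into a larger one.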
-
Publication Number: US20220314092A1
Publication Date: 2022-10-06
Application Number: US17845010
Application Date: 2022-06-21
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Kosuke TAKAHASHI , Dan Mikami , Mariko Isogawa , Akira Kojima , Hideaki Kimata , Ayumi Matsumoto
Abstract: A virtual reality system is provided that includes a video presentation apparatus, the virtual reality system comprising: the video presentation apparatus which includes circuitry configured to: obtain as input a video sequence composed of a plurality of frames and mask information specifying a complementation target region in the video sequence; separate a frame into a foreground region and a background region based on binary images representing differences between the plurality of frames included in the video sequence; determine either one of patch-search-based completion and paste synthesis as a complementation method for the complementation target region based on a number of pixels belonging to the foreground region and located within a given distance from a periphery of the complementation target region; and complement the complementation target region in accordance with the complementation method; and a virtual reality head mounted display which presents the complemented video sequence to the user.
-
Publication Number: US12215964B2
Publication Date: 2025-02-04
Application Number: US17800588
Application Date: 2020-02-20
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Toshiaki Takeda , Dan Mikami , Yoshinori Kusachi
Abstract: A height estimation method performed by a height estimation apparatus includes: a first feature point extraction step of extracting a feature point coordinate; a first coordinate estimation step of estimating a coordinate of a first subject frame; a pre-generation step of deriving a height of the first subject frame and generating a distance addition pattern and a correction coefficient for each individual missing pattern; a second feature point extraction step of extracting a feature point coordinate from a second input image; a second coordinate estimation step of estimating a coordinate of a second subject frame and estimating a coordinate of an object frame; a subject data selection step of selecting the individual missing pattern and the correction coefficient in accordance with the feature point coordinate; an object data selection step of selecting an object height; and a height estimation step of adding up the distance between a feature point coordinate and another feature point coordinate extracted in accordance with the missing pattern and deriving an estimated value of a height of the subject in accordance with the result of adding up the distance, the correction coefficient, the object height, and the coordinates of the object frame.
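The core of the height estimation step is summing the distances between feature point pairs selected by a missing pattern, applying a correction coefficient, and converting pixels to real-world units with the known height of a reference object. The sketch below is a minimal reading of that arithmetic; the specific missing pattern, the pixel-to-meter scaling, and all names are illustrative assumptions.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def estimate_height(feature_points: Dict[str, Point],
                    missing_pattern: List[Tuple[str, str]],
                    correction_coefficient: float,
                    object_height_m: float,
                    object_frame_px: float) -> float:
    """Add up the distances between the feature point pairs listed in the
    selected missing pattern, correct the sum, and convert pixels to meters
    using a reference object of known height."""
    total_px = sum(math.dist(feature_points[a], feature_points[b])
                   for a, b in missing_pattern)
    meters_per_pixel = object_height_m / object_frame_px
    return total_px * correction_coefficient * meters_per_pixel

# Example: an ankle-knee-hip-shoulder-head chain, chosen because the direct
# head-to-ankle line is partly occluded (missing) in this frame.
points = {"ankle": (310.0, 900.0), "knee": (305.0, 700.0),
          "hip": (300.0, 520.0), "shoulder": (298.0, 330.0),
          "head": (297.0, 210.0)}
pattern = [("ankle", "knee"), ("knee", "hip"),
           ("hip", "shoulder"), ("shoulder", "head")]
print(round(estimate_height(points, pattern,
                            correction_coefficient=1.05,
                            object_height_m=0.9,      # known reference object height
                            object_frame_px=350.0), 2))
```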
-
Publication Number: US11810306B2
Publication Date: 2023-11-07
Application Number: US17059121
Application Date: 2019-04-23
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Mariko Isogawa , Dan Mikami , Kosuke Takahashi , Hideaki Kimata , Ayumi Matsumoto
IPC: G06T7/20 , G06N3/08 , G06V10/764 , G06V10/774 , G06V10/82 , G06V20/40 , G06V40/20
CPC classification number: G06T7/20 , G06N3/08 , G06V10/764 , G06V10/774 , G06V10/82 , G06V20/46 , G06V40/23 , G06T2207/10016 , G06T2207/20081
Abstract: A motion classification model learning apparatus that learns a model for early recognition of a motion is provided. Included are: a training data acquisition part acquiring training data configured with pairs of video information about a motion that can be classified into any of a plurality of categories according to characteristics of the motion, and category information that is a correct label corresponding to the video information; a motion history image generation part generating a motion history image from the video information; and a model learning part learning a model that outputs a label, that is, the category information, with the motion history image as an input.
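A motion history image (MHI) stacks recent frame-to-frame changes so that newer motion appears brighter, which is what the generation part above produces before the model sees it. The sketch below builds an MHI from a grayscale frame sequence with plain NumPy; the decay duration and threshold values are illustrative assumptions, and the downstream classifier is only hinted at.

```python
import numpy as np

def motion_history_image(frames: np.ndarray,
                         duration: float = 15.0,
                         threshold: float = 25.0) -> np.ndarray:
    """Build a motion history image from a (T, H, W) grayscale sequence:
    pixels that changed in the latest frame get the maximum value `duration`,
    and older motion decays by 1 per frame."""
    mhi = np.zeros(frames.shape[1:], dtype=float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        moved = np.abs(curr.astype(float) - prev.astype(float)) > threshold
        mhi = np.where(moved, duration, np.maximum(mhi - 1.0, 0.0))
    return mhi / duration  # normalized to [0, 1] for use as model input

# Toy sequence: a bright block sliding to the right across 16 frames.
T, H, W = 16, 32, 32
frames = np.zeros((T, H, W), dtype=np.uint8)
for t in range(T):
    frames[t, 12:20, t:t + 6] = 255
mhi = motion_history_image(frames)
# `mhi` would then be fed to a classifier (e.g. a small CNN) that outputs the
# category label, enabling early recognition from a partially observed motion.
print(mhi.shape, float(mhi.max()))
```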
-
Publication Number: US11715240B2
Publication Date: 2023-08-01
Application Number: US17430257
Application Date: 2020-02-03
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Mariko Isogawa , Dan Mikami , Kosuke Takahashi , Yoshinori Kusachi
IPC: G06T11/00 , A63B69/00 , A63B71/06 , G06F3/01 , G06F3/16 , A63B102/02 , A63B102/18
CPC classification number: G06T11/00 , A63B69/0002 , A63B71/0622 , G06F3/011 , G06F3/165 , A63B2069/0008 , A63B2071/0625 , A63B2071/0638 , A63B2102/02 , A63B2102/18 , A63B2243/0025
Abstract: A video and audio presentation device configured to present video and audio for sports training with less delay in physical reaction is provided. The device includes: an offset determination unit configured to determine a time-series offset t_offset obtained by correcting a time difference t_diff on the basis of a correction coefficient α, the time difference t_diff representing a time difference between A_real and A_mix, or a time difference between A_VR and A_mix, where a physical reaction A_real represents time-series data of physical reaction to an incoming object in a real environment, a physical reaction A_VR represents time-series data of physical reaction to an incoming object presented in a virtual reality environment, and a physical reaction A_mix represents time-series data of physical reaction to an incoming object presented in a semi-virtual reality environment; a video presentation unit configured to present video in the semi-virtual reality environment; and an audio presentation unit configured to shift audio corresponding to the video on the basis of the time-series offset t_offset so that the audio precedes arrival of the video, and to present the shifted audio in the semi-virtual reality environment.
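The offset determination described above amounts to scaling a measured reaction-time difference t_diff by a correction coefficient α and then playing the audio that many seconds early. A minimal sketch of that arithmetic follows, assuming the audio is a NumPy sample buffer; the coefficient value and the shift-by-samples mechanism are illustrative assumptions.

```python
import numpy as np

def determine_offset(t_diff_sec: float, alpha: float = 0.8) -> float:
    """Time-series offset t_offset obtained by correcting the measured
    reaction-time difference t_diff with correction coefficient alpha."""
    return alpha * t_diff_sec

def shift_audio_ahead(audio: np.ndarray, sample_rate: int,
                      t_offset_sec: float) -> np.ndarray:
    """Shift the audio earlier by t_offset so it precedes the video:
    drop the first t_offset seconds and pad silence at the end."""
    n = int(round(t_offset_sec * sample_rate))
    return np.concatenate([audio[n:], np.zeros(n, dtype=audio.dtype)])

# Example: reactions in the semi-virtual environment lag by 120 ms,
# so the audio is advanced by alpha * 0.120 s relative to the video.
sr = 48000
audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)  # 1 s tone
t_offset = determine_offset(0.120)
print(t_offset, shift_audio_ahead(audio, sr, t_offset).shape)
```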
-
Publication Number: US10850177B2
Publication Date: 2020-12-01
Application Number: US16070382
Application Date: 2017-01-26
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Kosuke Takahashi , Dan Mikami , Mariko Isogawa , Akira Kojima , Hideaki Kimata , Ayumi Matsumoto
Abstract: Preliminary experience of a match from a player's perspective is enabled. A virtual environment material storage 13 stores virtual environment materials for reproducing a dynamic object and a static object in a virtual environment. A dynamic object sensing unit 11 chronologically measures a position and a posture of the dynamic object in a real environment and generates position and posture information composing one movement action. A presentation sequence acquisition unit 17 obtains a presentation sequence including position and posture information of a plurality of different kinds of dynamic objects. A virtual environment construction unit 14 synthesizes a virtual environment material of the dynamic object and a virtual environment material of the static object based on the presentation sequence to construct the virtual environment.
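The flow in this abstract is essentially a data pipeline: sensed position/posture time series for dynamic objects are combined with stored materials and a presentation sequence to build the virtual scene. The sketch below captures that flow as plain data classes; every name and field here is an assumption made to illustrate the structure, not NTT's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Pose = Tuple[float, float, float]  # simplified position (x, y, z); posture omitted

@dataclass
class MovementAction:
    """Chronological position/posture series for one movement of a dynamic object."""
    object_id: str
    poses: List[Pose]

@dataclass
class PresentationSequence:
    """Ordered set of movement actions for several kinds of dynamic objects."""
    actions: List[MovementAction]

@dataclass
class VirtualEnvironment:
    static_materials: Dict[str, str]                 # e.g. {"court": "court.obj"}
    dynamic_tracks: Dict[str, List[Pose]] = field(default_factory=dict)

    def construct(self, materials: Dict[str, str], sequence: PresentationSequence):
        """Synthesize dynamic-object materials with the static scene
        according to the presentation sequence."""
        for action in sequence.actions:
            if action.object_id in materials:
                self.dynamic_tracks[action.object_id] = action.poses
        return self

# Example: reproduce a served ball on the stored static court model.
env = VirtualEnvironment(static_materials={"court": "court.obj"})
serve = MovementAction("ball", [(0.0, 1.0, 2.5), (3.0, 1.2, 1.8), (6.0, 0.3, 0.9)])
env.construct({"ball": "ball.obj"}, PresentationSequence([serve]))
print(env.dynamic_tracks["ball"][0])
```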
-