-
Publication No.: US20180361240A1
Publication Date: 2018-12-20
Application No.: US15781764
Filing Date: 2017-01-13
Applicant: SONY INTERACTIVE ENTERTAINMENT INC.
Inventor: Shoichi IKENOUE , Tatsuo TSUCHIE , Tetsugo INADA , Masaki UCHIDA , Hirofumi OKAMOTO
Abstract: As a user guide, an overhead view of the real space including the imaging device and the user is presented as one of two images: an image referenced to the position of the imaging device, or an image referenced to the user's direction. In both images, a rectangle representing the imaging device and a circle representing the user are drawn so as to reflect their actual positions and orientations. Also presented are an area denoting the play area in the horizontal direction, determined for example by the camera's angle of view, and an arrow indicating the direction in which the user should move.
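The guidance geometry described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a 2D horizontal play area bounded by the camera's field of view and range, and a hypothetical helper `guide_arrow` that returns the direction arrow to show the user when they stand outside that area.

```python
import math

def guide_arrow(camera_pos, camera_yaw_deg, fov_deg, max_range, user_pos):
    """Return a unit vector pointing the user back toward the play area,
    or None if the user is already inside it. (Hypothetical helper;
    names and parameters are assumptions, not from the patent.)"""
    dx = user_pos[0] - camera_pos[0]
    dy = user_pos[1] - camera_pos[1]
    dist = math.hypot(dx, dy)
    # Angle of the user relative to the camera's facing direction,
    # wrapped into [-pi, pi].
    rel = math.atan2(dy, dx) - math.radians(camera_yaw_deg)
    rel = (rel + math.pi) % (2 * math.pi) - math.pi
    half_fov = math.radians(fov_deg / 2)
    if dist <= max_range and abs(rel) <= half_fov:
        return None  # user is inside the play area; no arrow needed
    # Nearest point inside the play area: clamp the angle into the
    # field of view and the distance into the camera's range.
    clamped = max(-half_fov, min(half_fov, rel))
    ang = math.radians(camera_yaw_deg) + clamped
    r = min(dist, max_range)
    target = (camera_pos[0] + r * math.cos(ang),
              camera_pos[1] + r * math.sin(ang))
    vx, vy = target[0] - user_pos[0], target[1] - user_pos[1]
    norm = math.hypot(vx, vy) or 1.0
    return (vx / norm, vy / norm)
```

For example, a user standing directly in front of the camera within range gets no arrow, while a user too far away gets an arrow pointing back toward the camera.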
-
Publication No.: US20250157192A1
Publication Date: 2025-05-15
Application No.: US18832913
Filing Date: 2022-12-23
Applicant: SONY INTERACTIVE ENTERTAINMENT INC.
Inventor: Daisuke TSURU , Masaki UCHIDA , Yuto HAYAKAWA , Mitsuru NISHIBE
IPC: G06V10/764 , G06T7/30 , G06T7/73 , G06V10/44
Abstract: An image forming device tracks the state of a head-mounted display by Visual SLAM on the basis of images captured of the space around the user wearing the display. During (a) setting of a play area, the device determines the keyframes collated with the current frame (step S50) and classifies some of them as keyframes prohibiting discard (step S52). During (b) execution of an application, the device reads the stored keyframe data and uses it for tracking (step S60), then adds keyframes and discards the data of discard-allowed keyframes according to the user's movement (step S64).
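The keyframe bookkeeping described in steps S50–S64 can be sketched as a small store with a protection flag. This is an assumed, simplified API (class and method names are invented for illustration): setup-phase keyframes may be marked discard-prohibited, and only unprotected keyframes are evicted as new ones arrive during application execution.

```python
class KeyframeStore:
    """Minimal sketch of keyframe management for Visual SLAM tracking.
    Keyframes matched during play-area setup can be marked protected
    (discard prohibited); during app execution, only discard-allowed
    keyframes are evicted when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = {}  # kf_id -> {"protected": bool}; insertion-ordered

    def add_setup_keyframe(self, kf_id, protect):
        # (a) play-area setup: some collated keyframes prohibit discard.
        self.frames[kf_id] = {"protected": protect}

    def add_runtime_keyframe(self, kf_id):
        # (b) application execution: new keyframes follow user movement.
        self.frames[kf_id] = {"protected": False}
        self._evict_if_needed()

    def _evict_if_needed(self):
        # Discard the oldest discard-allowed keyframes over capacity;
        # protected setup keyframes are never evicted.
        while len(self.frames) > self.capacity:
            victim = next((k for k, v in self.frames.items()
                           if not v["protected"]), None)
            if victim is None:
                break  # everything remaining is protected
            del self.frames[victim]
```

Keeping the setup-phase keyframes pinned means the map around the play area stays usable for relocalization even after the user has wandered and runtime keyframes have been cycled out.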
-
Publication No.: US20210056720A1
Publication Date: 2021-02-25
Application No.: US16623217
Filing Date: 2018-07-13
Applicant: Sony Interactive Entertainment Inc.
Inventor: Masaki UCHIDA
IPC: G06T7/593 , G06T7/73 , H04N13/128
Abstract: An information processing device extracts the image of a marker from a photographed image and obtains the position of the marker's representative point in three-dimensional space. Meanwhile, the position and attitude at the time the image was captured are estimated from the output of a sensor included in the target object. On the basis of this estimate, a weight is assigned to the positional information of each marker by using a target object model, and the positional information of the target object is calculated. The calculated position is then synthesized with the estimated position at a predetermined ratio to obtain the final positional information, which is output and fed back for the next estimation.
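The final synthesis step can be written out as a short formula: a weighted average of the per-marker positions, blended with the sensor-based estimate at a fixed ratio. The sketch below is only an illustration of that blend; the function name, the 3-tuple position representation, and the scalar `ratio` parameter are assumptions, not taken from the patent.

```python
def fuse_position(marker_positions, marker_weights, estimated_pos, ratio):
    """Blend the marker-derived position (weighted average of per-marker
    representative points) with the sensor-estimated position.
    ratio = 1.0 trusts markers fully; ratio = 0.0 trusts the sensor
    estimate fully. Positions are (x, y, z) tuples."""
    total_w = sum(marker_weights)
    # Weighted average of the marker representative points.
    marker_pos = tuple(
        sum(w * p[i] for w, p in zip(marker_weights, marker_positions)) / total_w
        for i in range(3)
    )
    # Final position = ratio * marker observation + (1 - ratio) * estimate;
    # the result would be fed back as the prior for the next estimation.
    return tuple(ratio * m + (1.0 - ratio) * e
                 for m, e in zip(marker_pos, estimated_pos))
```

Down-weighting markers that the target object model predicts are occluded or viewed at a grazing angle is what keeps the weighted average from being skewed by unreliable detections.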
-