User input device camera
    Granted Patent

    Publication No.: US10321126B2

    Publication Date: 2019-06-11

    Application No.: US14792844

    Filing Date: 2015-07-07

    Applicant: zSpace, Inc.

    Abstract: Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.
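The capture flow in this abstract amounts to projecting scene geometry through a second, device-driven point of view. A minimal sketch of that idea follows; the `POV` class, `project_point` function, and all coordinates are hypothetical illustrations, not taken from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class POV:
    """A point of view: a position plus a viewing direction (yaw/pitch)."""
    x: float
    y: float
    z: float
    yaw: float    # radians, rotation about the vertical axis
    pitch: float  # radians, rotation about the lateral axis

def project_point(pov: POV, px: float, py: float, pz: float):
    """Project a 3D scene point into the 2D frame of the given POV.

    A minimal pinhole projection: transform the point into the POV's
    camera space, then divide by depth. Returns None for points behind
    the camera.
    """
    # Translate into camera-relative coordinates.
    dx, dy, dz = px - pov.x, py - pov.y, pz - pov.z
    # Rotate by -yaw about the vertical axis.
    cy, sy = math.cos(-pov.yaw), math.sin(-pov.yaw)
    dx, dz = cy * dx + sy * dz, -sy * dx + cy * dz
    # Rotate by -pitch about the lateral axis.
    cp, sp = math.cos(-pov.pitch), math.sin(-pov.pitch)
    dy, dz = cp * dy - sp * dz, sp * dy + cp * dz
    if dz <= 0:
        return None  # behind the camera plane
    return (dx / dz, dy / dz)

# The user's POV drives the main stereo render; the camera POV comes
# from the tracked input device and is independent of it.
user_pov = POV(0.0, 0.0, -2.0, 0.0, 0.0)
camera_pov = POV(1.0, 0.5, -1.0, -0.3, 0.1)  # e.g. a tracked stylus

# Capturing the 2D image = projecting scene geometry through camera_pov
# rather than user_pov.
frame_point = project_point(camera_pov, 0.0, 0.0, 1.0)
```

The key point the sketch illustrates is that the two POVs are interchangeable inputs to the same projection: the captured 2D frame simply substitutes the device pose for the head pose.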

    3D DESIGN AND COLLABORATION OVER A NETWORK
    Patent Application (In force)

    Publication No.: US20150363964A1

    Publication Date: 2015-12-17

    Application No.: US14837669

    Filing Date: 2015-08-27

    Applicant: zSpace, Inc.

    IPC Classification: G06T15/08 H04L29/08

    Abstract: In some embodiments, a system and/or method may include accessing three-dimensional (3D) imaging software on a remote server. The method may include accessing over a network a 3D imaging software package on a remote server using a first system. The method may include assessing, using the remote server, a capability of the first system to execute the 3D imaging software package. The method may include displaying an output of the 3D imaging software using the first system based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a first portion of the 3D imaging software using the remote server based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a second portion of the 3D imaging software using the first system based upon the assessed capabilities of the first system.
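The split-execution idea above (server runs one portion, the first system runs another, based on assessed capability) can be sketched as a small partitioning policy. The `SystemCapability` fields, the thresholds, and the workload names are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SystemCapability:
    """Assessed capability of the client ("first") system (hypothetical fields)."""
    gpu_memory_mb: int
    supports_stereo_3d: bool
    bandwidth_mbps: float

def partition_workload(cap: SystemCapability) -> dict:
    """Decide which portions of the 3D imaging package run where.

    A toy policy illustrating the abstract's idea: heavy rendering falls
    back to the remote server when the client lacks GPU memory, while
    display always happens on the first system. Thresholds are invented.
    """
    plan = {"display": "first_system"}
    plan["rendering"] = ("first_system" if cap.gpu_memory_mb >= 2048
                         else "remote_server")
    return plan

weak_client = SystemCapability(gpu_memory_mb=512, supports_stereo_3d=True,
                               bandwidth_mbps=50.0)
plan = partition_workload(weak_client)
# rendering executes on the remote server; display stays on the client
```

In practice the assessment would run on the remote server, as the abstract states; the sketch only shows the shape of the resulting execution plan.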

    HEAD TRACKING EYEWEAR SYSTEM
    Patent Application (In force)

    Publication No.: US20150350635A1

    Publication Date: 2015-12-03

    Application No.: US14822384

    Filing Date: 2015-08-10

    Applicant: zSpace, Inc.

    IPC Classification: H04N13/04

    Abstract: In some embodiments, a system for tracking with reference to a three-dimensional display system may include a display device, an image processor, a surface including at least three emitters, at least two sensors, and a processor. The display device may image, during use, a first stereo three-dimensional image. The surface may be positionable, during use, with reference to the display device. At least two of the sensors may detect, during use, light received from at least three of the emitters as light blobs. The processor may correlate, during use, the assessed referenced positions of the detected light blobs such that a first position/orientation of the surface is assessed. The image processor may generate, during use, the first stereo three-dimensional image using the assessed first position/orientation of the surface with reference to the display. The image processor may generate, during use, a second stereo three-dimensional image using an assessed second position/orientation of the surface with reference to the display.
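Once the three emitter points have been located in 3D (by correlating the light blobs seen from at least two sensors), the surface pose follows from elementary geometry: the centroid gives a position and the plane normal an orientation. A sketch of that final step, with hypothetical function names and the triangulation itself omitted:

```python
def cross(a, b):
    """Cross product of two 3-vectors (given as tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def surface_pose(p0, p1, p2):
    """Assess a surface position/orientation from three emitter points.

    The three emitters define a plane: the centroid serves as the
    surface position, and the unit normal of the plane as its
    orientation. The emitter points would come from correlating light
    blobs seen by at least two sensors; that triangulation is omitted.
    """
    centroid = tuple((p0[i] + p1[i] + p2[i]) / 3.0 for i in range(3))
    e1 = tuple(p1[i] - p0[i] for i in range(3))
    e2 = tuple(p2[i] - p0[i] for i in range(3))
    normal = cross(e1, e2)
    length = sum(c * c for c in normal) ** 0.5
    normal = tuple(c / length for c in normal)
    return centroid, normal

# Eyewear held flat in the x/y plane: the normal points along +z.
pos, orient = surface_pose((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

Re-running `surface_pose` on each new set of blob detections yields the "second position/orientation" the abstract mentions, which drives the next stereo render.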

    User Input Device Camera
    Patent Application

    Publication No.: US20190253699A1

    Publication Date: 2019-08-15

    Application No.: US16394406

    Filing Date: 2019-04-25

    Applicant: zSpace, Inc.

    Abstract: Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.

    Integrating real world conditions into virtual imagery

    Publication No.: US10019831B2

    Publication Date: 2018-07-10

    Application No.: US15298956

    Filing Date: 2016-10-20

    Applicant: zSpace, Inc.

    Abstract: Systems and methods for incorporating real world conditions into a three-dimensional (3D) graphics object are described herein. In some embodiments, images of a physical location of a user of a three-dimensional (3D) display system may be received from at least one camera and a data imagery map of the physical location may be determined based at least in part on the received images. The data imagery map may capture real world conditions associated with the physical location of the user. Instructions to render a 3D graphics object may be generated and the data imagery map may be incorporated into a virtual 3D scene comprising the 3D graphics object, thereby incorporating the real world conditions into virtual world imagery. In some embodiments, the data imagery may include a light map, a sparse light field, and/or a depth map of the physical location.
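One concrete way a light-map style data imagery map can fold real-world conditions into a virtual object is to sample it when shading. The function name, the nearest-neighbour sampling, and the example values below are all illustrative assumptions, not the patent's method:

```python
def shade_with_light_map(base_color, light_map, u, v):
    """Modulate a 3D object's base color by a light map sample.

    `light_map` is a 2D grid of intensities built from camera images of
    the user's physical location; sampling it at the object's (u, v)
    texture position folds real-world lighting into the virtual scene.
    Nearest-neighbour sampling keeps the sketch short.
    """
    rows, cols = len(light_map), len(light_map[0])
    i = min(int(v * rows), rows - 1)
    j = min(int(u * cols), cols - 1)
    intensity = light_map[i][j]
    return tuple(min(255, int(c * intensity)) for c in base_color)

# A 2x2 light map: the physical room is bright on the right side.
light_map = [[0.2, 1.0],
             [0.2, 1.0]]

# The same object appears darker on the room's dim side.
lit_right = shade_with_light_map((200, 100, 50), light_map, 0.9, 0.5)
lit_left = shade_with_light_map((200, 100, 50), light_map, 0.1, 0.5)
```

A sparse light field or depth map from the abstract would slot into the same place: each is a per-position lookup derived from the camera images that the renderer consults while generating the 3D graphics object.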