Abstract:
A system and a method are provided for analyzing an image of an aortic valve structure to enable assessment of aortic valve calcifications. The system comprises an image interface for obtaining an image of an aortic valve structure, the aortic valve structure comprising aortic valve leaflets and an aortic bulbus. The system further comprises a segmentation subsystem for segmenting the aortic valve structure in the image to obtain a segmentation of the aortic valve structure. The system further comprises an identification subsystem for identifying a calcification on the aortic valve leaflets by analyzing the image of the aortic valve structure. The system further comprises an analysis subsystem configured for determining a centerline of the aortic bulbus by analyzing the segmentation of the aortic valve structure, and for projecting the calcification from the centerline of the aortic bulbus onto the aortic bulbus, thereby obtaining a projection indicating a location of the calcification as projected onto the aortic bulbus. The system further comprises an output unit for generating data representing the projection. The provided information on the accurate location of calcifications may be used, for example, to analyze the risk of paravalvular leakage after transcatheter aortic valve implantation (TAVI) and thereby to assess the suitability of a patient for a TAVI procedure.
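The radial projection from the centerline onto the bulbus wall can be sketched as follows. This is a minimal illustration, not the patented method: it assumes the bulbus wall is locally modeled as a cylinder of known radius around the centerline axis, and the function name and parameters are hypothetical.

```python
import numpy as np

def project_onto_bulbus(calcification, centerline_point, axis, radius):
    """Project a 3-D calcification point radially from the centerline onto
    a bulbus wall modeled (for this sketch) as a cylinder of given radius."""
    axis = axis / np.linalg.norm(axis)
    v = calcification - centerline_point
    # Split v into a component along the centerline and a radial component
    axial_len = np.dot(v, axis)
    radial = v - axial_len * axis
    radial_dir = radial / np.linalg.norm(radial)
    # Keep the axial position, push the radial position out to the wall
    return centerline_point + axial_len * axis + radius * radial_dir
```

For a calcification at (1, 0, 2) near a vertical centerline through the origin and a wall radius of 3, the projection lands at (3, 0, 2): same height, moved out to the wall along the radial direction.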
Abstract:
Methods, systems, and apparatus, including computer program products, feature providing a rendering of a three-dimensional assembly of components. An explosion sequence for separating first components of the assembly is determined. The explosion sequence comprises stages in which each stage represents a different spatial relationship between two or more of the first components. A first input is received from an interactive control. A first stage in the explosion sequence is selected based on the first input. The rendering of the assembly is updated, responsive to the first input, to show the first stage of the explosion sequence. A second input is received from the interactive control. A different second stage in the explosion sequence is selected based on the second input. The rendering of the assembly is updated, responsive to the second input, to show the second stage of the explosion sequence.
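The stage mechanism can be sketched as a precomputed list of per-component offsets plus a mapping from an interactive control (here, a slider in [0, 1]) to a stage index. All names and the linear separation scheme are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    # component name -> (dx, dy, dz) displacement for this stage
    offsets: dict

def build_explosion_sequence(components, step=1.0, n_stages=4):
    """Sketch: each stage moves each component progressively farther
    along y, proportional to its position in the assembly order."""
    stages = []
    for s in range(n_stages):
        offsets = {name: (0.0, s * step * i, 0.0)
                   for i, name in enumerate(components)}
        stages.append(Stage(offsets))
    return stages

def select_stage(stages, slider_value):
    """Map a slider position in [0, 1] to a stage of the sequence."""
    idx = min(int(slider_value * len(stages)), len(stages) - 1)
    return stages[idx]
```

Dragging the control then just re-selects a stage and re-renders with that stage's offsets, which matches the receive-input / select-stage / update-rendering loop described above.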
Abstract:
Among other things, one or more techniques and/or systems are provided for mitigating redundant pixel texture contribution for texturing a geometry. That is, the geometry may represent a multidimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images (e.g., an image comprising color values and/or depth values) depicting the scene from various view directions (e.g., a top-down view, an oblique view, etc.). Because more than one texture image may contribute to texturing a pixel of the geometry (e.g., due to overlapping views of the scene), redundant pixel texture contribution may arise. Accordingly, a redundant textured pixel within a texture image may be knocked out (e.g., in-painted) from the texture image to generate a modified texture image that may be relatively efficient to store and/or stream to a client due to enhanced compression of the modified texture image.
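The knock-out step can be sketched with boolean coverage masks: pixels of a secondary texture image whose geometry pixels are already covered by a primary view are overwritten with a flat fill value, which compresses well. The masks, function name, and fill strategy are assumptions for illustration (the abstract mentions in-painting as one option).

```python
import numpy as np

def knock_out_redundant(primary_cov, secondary_img, secondary_cov, fill=0):
    """Return a modified copy of the secondary texture image in which
    pixels redundantly covered by the primary view are knocked out."""
    # A pixel is redundant where both views texture the same geometry
    redundant = primary_cov & secondary_cov
    out = secondary_img.copy()
    out[redundant] = fill
    return out
```

Only the secondary image is modified; the primary view keeps its full contribution, so every geometry pixel is still textured by exactly one of the two images.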
Abstract:
Among other things, one or more techniques and/or systems are provided for defining a view direction for a texture image used to texture a geometry. That is, a geometry may represent a multi-dimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images depicting the scene from various view directions. Because more than one texture image may contribute to texturing portions of the geometry, a view direction for a texture image may be selectively defined based upon a coverage metric associated with an amount of non-textured geometry pixels that are textured by the texture image along the view direction. In an example, a texture image may be defined according to a customized configuration, such as a spherical configuration, a cylindrical configuration, etc. In this way, redundant texturing of the geometry may be mitigated based upon the selectively identified view direction(s).
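The coverage-metric selection can be sketched as a greedy loop: each candidate view direction is kept only if its texture image would cover enough geometry pixels that no already-kept view has textured. The dict-of-pixel-sets representation and the greedy ordering are illustrative assumptions.

```python
def select_view_directions(candidate_coverage, threshold=1):
    """Greedily keep view directions whose texture image covers at least
    `threshold` not-yet-textured geometry pixels.
    candidate_coverage: dict of view name -> set of geometry pixel ids."""
    textured = set()
    chosen = []
    # Consider the views with the largest raw coverage first
    for view, pixels in sorted(candidate_coverage.items(),
                               key=lambda kv: -len(kv[1])):
        newly_covered = pixels - textured
        if len(newly_covered) >= threshold:
            chosen.append(view)
            textured |= newly_covered
    return chosen, textured
```

A view whose pixels are all already textured contributes nothing new and is dropped, which is how redundant texturing is mitigated.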
Abstract:
An exfoliated picture projection method and device are provided which are capable of outputting distortion information of three-dimensional picture data. A distortion amount is calculated in accordance with the difference between the position of a reference virtual ray and the position of a virtual ray projected during creation of the exfoliated picture, coloring is added to the virtual rays in accordance with the distortion amount, and the colored virtual rays are projected to generate exfoliated picture data. Then, the same virtual rays are projected to generate perspective projection picture data, and the exfoliated picture data and perspective projection picture data are subjected to post processing. Finally, the obtained exfoliated picture and perspective projection picture are output to a monitor.
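The distortion-to-color step can be sketched as a simple mapping from the distance between the reference and projected ray positions to a tint. The white-to-red ramp and the normalization constant are assumptions for illustration; the abstract does not specify the color scheme.

```python
def distortion_color(projected, reference, max_dist=10.0):
    """Map the distortion amount (distance between the projected and the
    reference ray positions) to an RGB tint: no distortion -> white,
    max_dist or more -> pure red."""
    d = sum((p - r) ** 2 for p, r in zip(projected, reference)) ** 0.5
    t = min(d, max_dist) / max_dist
    return (255, int(255 * (1 - t)), int(255 * (1 - t)))
```

Rays projected with no positional difference stay white, so regions of the exfoliated picture that are strongly stretched stand out in red on the monitor.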
Abstract:
The present invention relates to a system and method for capturing video of a real-world scene over a field of view that may exceed the field of view of a user, manipulating the captured video, and then stereoscopically displaying the manipulated image to the user in a head mounted display to create a virtual environment having length, width, and depth in the image. By capturing and manipulating video for a field of view that exceeds the field of view of the user, the system and method can quickly respond to movement by the user to update the display allowing the user to look and pan around, i.e., navigate, inside the three-dimensional virtual environment.
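Why a capture field of view wider than the display's allows fast response to head movement can be sketched with a crop computation: panning only re-selects a window of already-captured pixels rather than re-capturing. The function and its parameters are hypothetical, assuming a simple linear pixels-per-degree mapping.

```python
def crop_view(frame_width, capture_fov_deg, display_fov_deg, pan_deg):
    """Return the pixel column range of the user's current view inside a
    captured frame whose field of view exceeds the display's."""
    px_per_deg = frame_width / capture_fov_deg
    # Center of the display window shifts with the user's pan angle
    center = frame_width / 2 + pan_deg * px_per_deg
    half = display_fov_deg / 2 * px_per_deg
    left = max(0, int(center - half))
    right = min(frame_width, int(center + half))
    return left, right
```

With a 180-degree capture shown on a 90-degree display, a 20-degree pan merely slides the crop window; no new capture or full-frame reprocessing is needed, which is what lets the display update quickly as the user looks around.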
Abstract:
An image projection method for generating a panoramic image, the method including the steps of: accessing images that were captured by a camera located at a source location, each of the images being captured from a different angle of view, the source location being variable as a function of time; calibrating the images collectively to create a camera model that encodes orientation, optical distortion, and variable defects of the camera; matching overlapping areas of the images to generate calibrated image data; accessing a three-dimensional map; first projecting pixel coordinates of the calibrated image data into a three-dimensional space using the three-dimensional map to generate three-dimensional pixel data; and second projecting the three-dimensional pixel data to an azimuth-elevation coordinate system that is referenced from a fixed virtual viewpoint to generate the panoramic image.
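The second projection stage can be sketched as converting a 3-D point to azimuth and elevation angles relative to a fixed viewpoint. This is standard spherical-coordinate geometry, not the patented pipeline; the function name and coordinate conventions (x/y horizontal, z up) are assumptions.

```python
import math

def to_azimuth_elevation(point, viewpoint):
    """Express a 3-D point as (azimuth, elevation) in degrees relative
    to a fixed virtual viewpoint, with z as the up axis."""
    dx, dy, dz = (p - v for p, v in zip(point, viewpoint))
    azimuth = math.degrees(math.atan2(dy, dx))
    # Elevation is the angle above the horizontal plane
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation
```

Running every 3-D pixel through such a mapping and binning the results by (azimuth, elevation) yields a panoramic image centered on the fixed viewpoint.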