Abstract:
A CG image combining device generates a CG image by mapping an image to an object. A memory unit stores shape data and a pair of left-view and right-view image data. A determination unit refers to the shape data to evaluate a curvature of the object's surface from normal vectors of polygons constituting the object. Furthermore, the determination unit determines whether the object is suitable for stereoscopic image mapping by comparing the curvature with a threshold. A mapping unit (i) generates left-view CG data by combining the left-view image data with the shape data and generates right-view CG data by combining the right-view image data with the shape data, when the object is suitable for the mapping, and (ii) generates left-view CG data and right-view CG data by combining one of the left-view and right-view image data with the shape data, when the object is not suitable for the mapping.
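As an illustrative sketch only (not taken from the abstract), the suitability check could look as follows; the curvature measure (spread of polygon normals), the threshold value, and the render() helper are assumptions standing in for the actual combining step.

    import numpy as np

    def surface_curvature(normals):
        # Estimate curvature as the spread of polygon normals around their
        # mean direction: 0 for a flat surface, larger for a curved one.
        n = np.asarray(normals, dtype=float)
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        mean = n.mean(axis=0)
        mean /= np.linalg.norm(mean)
        return float(np.mean(1.0 - n @ mean))

    def render(shape_data, image_data):
        # Placeholder for the actual texture-mapping/combining step.
        return {"shape": shape_data["name"], "texture": image_data}

    def combine_cg(shape_data, left_image, right_image, threshold=0.1):
        # Map both views when the object is flat enough for stereoscopic
        # image mapping; otherwise reuse a single view for both eyes.
        if surface_curvature(shape_data["normals"]) <= threshold:
            return render(shape_data, left_image), render(shape_data, right_image)
        return render(shape_data, left_image), render(shape_data, left_image)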
Abstract:
Three-dimensional shape measurement can be performed with simple processing, regardless of whether the object is moving. An image capturing unit (103) captures an image (I) that includes both a real image (I2) of the object (113R) and a mirror (101). A light amount changing unit (63a) changes the light amount of a virtual image (I1). An image separating unit (captured image separating unit 104) compares a captured image (Ia) in which the light amount is changed with a captured image (Ib) in which it is not, specifies the image in the region where the light amount differs (R1) as a virtual image (Ib1), and specifies the image in the region where the light amount is unchanged (R2) as a real image (Ib2). A three-dimensional shape is reconstructed from the specified real image and the like.
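A minimal sketch of the separation step, assuming two grayscale captures and a pixel-difference tolerance chosen purely for illustration.

    import numpy as np

    def separate_real_and_virtual(img_a, img_b, tol=8):
        # img_a: capture (Ia) with the virtual image's light amount changed.
        # img_b: capture (Ib) with the light amount unchanged.
        a = img_a.astype(np.int32)
        b = img_b.astype(np.int32)
        changed = np.abs(a - b) > tol            # region R1: light amount differs -> virtual image
        virtual_ib1 = np.where(changed, img_b, 0)
        real_ib2 = np.where(~changed, img_b, 0)  # region R2: light amount equal -> real image
        return virtual_ib1, real_ib2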
Abstract:
Provided is a graphics rendering apparatus comprising: a scaling coefficient determination unit operable to determine, based on polygon data representing a polygon onto which a texture is to be mapped, a scaling coefficient that serves as the basis for scaling first vector data from which the texture is to be generated; a vector data conversion unit operable to generate second vector data by scaling the first vector data based on the scaling coefficient; a texture generation unit operable to generate the texture based on the second vector data; and a texture mapping unit operable to map the texture generated by the texture generation unit onto the polygon.
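One way the scaling coefficient might be derived and applied, sketched under the assumptions that the first vector data is a 2D outline and that the coefficient is chosen from the polygon's on-screen footprint; neither assumption is a statement of the claimed method.

    import numpy as np

    def scaling_coefficient(polygon_pixel_area, outline_unit_area=1.0):
        # Illustrative rule: scale so the generated texture roughly matches
        # the on-screen area of the polygon it will be mapped onto.
        return (polygon_pixel_area / outline_unit_area) ** 0.5

    def generate_texture(first_vector_data, coeff, size=64):
        # Second vector data: every outline point scaled by the coefficient;
        # the "rasterization" here only plots the points, for brevity.
        second_vector_data = [(x * coeff, y * coeff) for x, y in first_vector_data]
        tex = np.zeros((size, size), dtype=np.uint8)
        for x, y in second_vector_data:
            tex[int(y) % size, int(x) % size] = 255
        return tex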
Abstract:
An animation control unit specifies shape data, hierarchical structure data, a group table, and state information. A character state calculating unit obtains the specified shape data from a shape data storing unit, the hierarchical structure data from a hierarchical structure storing unit, and the group table from a table storing unit. The character state calculating unit also obtains the motion data indicated by the specified state information from a motion data storing unit and specifies, from the obtained motion data, the motion data identified by each group number. In accordance with the obtained hierarchical structure data, the character state calculating unit corrects the shape data using the specified motion data. A three-dimensional rendering unit renders the corrected shape data to generate an image, and a display unit displays the generated image.
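The following sketch shows one plausible shape of the data involved (a group table keyed by joint name, motion data keyed by state and group number); the additive correction is a placeholder for the actual posing computation and is not taken from the abstract.

    def pose_character(shape_data, hierarchy, group_table, motion_store, state):
        # Walk the joint hierarchy; for each joint, look up its group number,
        # take that group's motion data for the given state, and correct the
        # joint's position with it.
        motion_set = motion_store[state]
        posed = dict(shape_data)

        def visit(joint):
            dx, dy, dz = motion_set[group_table[joint]]
            x, y, z = posed[joint]
            posed[joint] = (x + dx, y + dy, z + dz)
            for child in hierarchy.get(joint, ()):
                visit(child)

        visit("root")
        return posed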
Abstract:
A subdivision level determination unit (13) in a curved surface subdivision apparatus (10) accepts an input of information about control points that define a shape of a curved surface and determines the subdivision level for the surface. Next, it sets, for a subdivision processing operation control unit (16), a control table corresponding to the determined subdivision level. The subdivision processing operation control unit (16) executes the subdivision processing while controlling a work memory unit (14) and a subdivision processing operation unit (15) based on the set control table.
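A hedged example of how a subdivision level might be determined from the control points; the flatness tolerance and the log-4 depth estimate are common heuristics, not details taken from the abstract.

    import math

    def point_line_distance(p, a, b):
        # Distance from point p to the line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(bx - ax, by - ay)

    def subdivision_level(p0, p1, p2, p3, tol=1.0, max_level=5):
        # One cubic Bezier segment in 2D for brevity. Each subdivision roughly
        # quarters the deviation of the control polygon from its chord, so the
        # required level is about log4(deviation / tolerance).
        dev = max(point_line_distance(p1, p0, p3), point_line_distance(p2, p0, p3))
        if dev <= tol:
            return 0
        return min(max_level, math.ceil(math.log(dev / tol, 4)))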
Abstract:
A mobile terminal device 100 comprises an object unit 100a operable to generate and store various kinds of objects composed of three-dimensional objects, a database unit 100b operable to store information displayed for the three-dimensional objects, a key input unit 100c operable to perform input processing via input keys such as cursor keys, a rendering unit 100d operable to render the various kinds of objects passed from the object unit 100a based on position information, and a display unit 100e operable to generate and display images on the display screen.
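A minimal composition sketch of the listed units; the method bodies and data layouts below are placeholders, not the device's actual interfaces.

    class MobileTerminal:
        def __init__(self):
            self.objects = {}          # object unit 100a: generated three-dimensional objects
            self.database = {}         # database unit 100b: information displayed for the objects
            self.position = (0, 0)     # updated via the key input unit 100c (cursor keys)

        def on_key(self, key):
            dx, dy = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}[key]
            self.position = (self.position[0] + dx, self.position[1] + dy)

        def render_frame(self):
            # rendering unit 100d draws the objects at the current position;
            # display unit 100e would hand the generated image to the screen.
            return {"objects": list(self.objects), "position": self.position}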
Abstract:
A three-dimensional (3D) graphic generation apparatus and its generation method according to the present invention generate high-quality 3D graphics from two-dimensional (2D) graphics, such as characters, without requiring difficult operations. Triangulation is performed using outline data corresponding to the sequence of all points that form the outline of a 2D graphic, such as a 2D character, and the configurations of the triangles are then adjusted using the outline segments. It is judged whether or not each generated triangle is a component of the character, and the triangles judged to be components are spatially moved, thereby generating the top surface. Further, side surfaces are generated by connecting corresponding points, thereby generating a 3D graphic.
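An approximate sketch of the pipeline, using Delaunay triangulation and an even-odd inside test as stand-ins for the abstract's triangulation and component-judgment steps, and assuming a single closed outline with no holes.

    import numpy as np
    from scipy.spatial import Delaunay
    from matplotlib.path import Path

    def extrude_outline(outline, depth=10.0):
        pts = np.asarray(outline, dtype=float)
        tris = Delaunay(pts).simplices                             # triangulate over the outline points
        keep = Path(pts).contains_points(pts[tris].mean(axis=1))   # keep triangles inside the glyph
        top = [[(x, y, depth) for x, y in pts[t]] for t in tris[keep]]  # top surface, moved by depth
        sides = []                                                 # side surfaces: one quad per outline edge
        for i in range(len(pts)):
            (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % len(pts)]
            sides.append([(x0, y0, 0.0), (x1, y1, 0.0), (x1, y1, depth), (x0, y0, depth)])
        return top, sides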