Abstract:
Provided are an apparatus, method, and medium for processing an image. In the apparatus, at least one image selected from images to be stitched is shifted so as to make the colors of the overlapped sections of the images as identical with each other as possible, and the shifting is performed again in response to a re-alignment signal. An image correction unit generates the re-alignment signal after correcting a color of at least a part of the overlapped sections, based on the result of the shifting of the at least one image and in consideration of the colors of the overlapped sections; the shifting of the image is then repeated. Therefore, even when the overlapped sections of the images are not identical in color, they can be precisely aligned when the images are stitched to form a panoramic image, thereby making the panoramic image more accurate.
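The shift-and-correct loop described in this abstract can be illustrated with a minimal Python sketch. It assumes grayscale NumPy images, a one-dimensional horizontal shift, and a fixed overlap width; the function names and the simple gain-based color correction are illustrative assumptions rather than the patented apparatus.

```python
import numpy as np

def overlap_error(left, right, shift, overlap):
    """Mean absolute color difference in the overlapped columns for a given shift."""
    a = left[:, left.shape[1] - overlap:].astype(float)   # right edge of the left image
    b = right[:, shift:shift + overlap].astype(float)     # shifted left edge of the right image
    return np.abs(a - b).mean()

def align_and_correct(left, right, overlap=32, max_shift=8, passes=2):
    """Alternate between shifting the right image and correcting overlap color."""
    right = right.astype(float)
    shift = 0
    for _ in range(passes):
        # 1) Shift: pick the offset that makes the overlap colors most similar.
        shift = min(range(max_shift + 1),
                    key=lambda s: overlap_error(left, right, s, overlap))
        # 2) Correct: equalize the mean brightness of the overlapped sections;
        #    the next pass then realigns using the corrected colors.
        a = left[:, left.shape[1] - overlap:].astype(float)
        b = right[:, shift:shift + overlap]
        right *= a.mean() / max(b.mean(), 1e-6)
    return shift, np.clip(right, 0, 255).astype(np.uint8)
```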
Abstract:
A method and an apparatus for encoding and decoding a position interpolator including key data and key value data are provided. The method for encoding a position interpolator includes (b) generating key data and key value data to be encoded by extracting, from a first animation path constituted by the position interpolator, a minimum number of break points that keep the error between the first animation path and a second animation path generated from the extracted break points no greater than a predetermined allowable error limit, (d) encoding the key data generated in step (b), and (e) encoding the key value data generated in step (b).
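Step (b), the break-point extraction, can be sketched per value component in Python. The greedy add-worst-key strategy below is only an illustration of keeping the rebuilt path within the allowable error; the actual extraction procedure and the names used here are not taken from the patent.

```python
import numpy as np

def extract_break_points(keys, values, max_error):
    """Keep a small subset of (key, value) samples so that the path rebuilt by
    linear interpolation through the kept samples deviates from the original
    animation path by at most max_error (greedy illustration only)."""
    keys = np.asarray(keys, dtype=float)
    values = np.asarray(values, dtype=float)
    kept = [0, len(keys) - 1]                      # always keep both end points
    while True:
        rebuilt = np.interp(keys, keys[kept], values[kept])
        errors = np.abs(rebuilt - values)
        worst = int(errors.argmax())
        if errors[worst] <= max_error:
            return kept                            # the second path is close enough
        kept = sorted(set(kept + [worst]))         # add the worst offender as a break point
```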
Abstract:
A method and apparatus for rendering 3D graphic data are provided. The 3D graphic data is projected onto a 2D screen, and points are interpolated and rendered, allowing the 3D graphic data to be processed quickly.
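As a rough illustration of the projection-then-interpolation idea, the Python sketch below projects 3D points onto a 2D screen with an assumed pinhole model and generates intermediate screen-space points between projected neighbors; the focal length and screen size are placeholder values.

```python
import numpy as np

def project_points(points_3d, focal=256.0, width=512, height=512):
    """Project 3D points onto a 2D screen with a simple pinhole model (assumed intrinsics)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + width / 2
    v = focal * y / z + height / 2
    return np.stack([u, v], axis=1)

def interpolate_between(p0, p1, steps=10):
    """Generate intermediate screen-space points between two projected points."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * p0 + t * p1
```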
Abstract:
An image generating method and apparatus are provided. The image generating method irradiates light of a predetermined wavelength onto a target object at predetermined intervals, passes the light having the wavelengths required to generate a color image from among the light reflected from the target object, detects color values from the passed light, generates a depth image of the target object using the color values detected during the periods in which the light of the predetermined wavelength is irradiated, and generates the color image of the target object using the color values detected during the other periods. Accordingly, the image generating method can generate a depth image with high resolution while maintaining the resolution of the color image.
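The temporal separation this abstract relies on can be shown in a few lines of Python: readouts captured while the depth-measurement light is emitted feed the depth image, and the rest feed the color image. The flag-based interface is an assumption made for the sketch.

```python
def split_frames(frames, light_on_flags):
    """Route sensor readouts to depth or color processing depending on whether
    the predetermined-wavelength light was being emitted during that readout."""
    depth_frames = [f for f, on in zip(frames, light_on_flags) if on]
    color_frames = [f for f, on in zip(frames, light_on_flags) if not on]
    return depth_frames, color_frames
```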
Abstract:
Provided is an efficient point-based three-dimensional (3D) information representation built from a color image that is obtained from a general Charge-Coupled Device (CCD)/Complementary Metal Oxide Semiconductor (CMOS) camera and a depth image that is obtained from a depth camera. A 3D image processing method includes storing a depth image associated with an object as first data in a 3D data format, and storing a color image associated with the object as color image data in a 2D image format, independently of the first data.
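A minimal Python sketch of the storage layout described here keeps depth-derived data in a point-based 3D structure and the color image in its ordinary 2D layout, independently of each other; the class, field names, and pinhole back-projection are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PointBasedScene:
    """Depth data stored in a 3D (point) format; color stored separately as a 2D image."""
    points_3d: np.ndarray    # (N, 3) positions reconstructed from the depth image
    color_image: np.ndarray  # (H, W, 3) color image kept in its native 2D layout

def depth_to_points(depth, focal=256.0):
    """Back-project a depth image into 3D points with an assumed pinhole model."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    x = (u - w / 2) * depth / focal
    y = (v - h / 2) * depth / focal
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```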
Abstract:
A method and apparatus for obtaining depth information are provided. The method includes calculating a relative depth value between a first color pixel and a second color pixel based on the values of the color pixels of a color image, and calculating a depth value of a second depth pixel, which belongs to a depth image corresponding to the color image, matches the second color pixel, and does not have a depth value, based on the calculated relative depth value and the depth value of a first depth pixel, which belongs to the depth image, matches the first color pixel, and has a depth value.
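The idea of filling a missing depth value from a known one plus a color-derived relative depth can be sketched as below; the color-difference heuristic and the scale factor are assumptions for illustration, not the relation defined in the patent.

```python
def propagate_depth(known_depth, color_a, color_b, scale=0.1):
    """Estimate the missing depth at pixel B from the known depth at pixel A plus a
    relative depth derived from the color difference of the matching color pixels."""
    relative = scale * sum(abs(ca - cb) for ca, cb in zip(color_a, color_b))
    return known_depth + relative
```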
Abstract:
A modeling method, medium, and system are provided. The modeling method identifies an object within an image by detecting edges within the image, determines a complexity of the identified object based upon detected surface orientations of the identified object relative to a defined surface of the image, selectively, based upon the determined complexity, generates one or more surfaces for a 3D model by identifying one or more vanishing points for one or more corresponding surfaces of the identified object and respectively analyzing the identified vanishing points relative to determined points of the corresponding surfaces of the image, and generates the 3D model by combining the one or more surfaces in a 3D space.
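The pipeline above (edge detection, complexity estimate, complexity-dependent surface generation, combination into a 3D model) can be outlined in Python; every helper below is a deliberately crude placeholder, and the threshold and branch contents are assumptions, not the patented method.

```python
import numpy as np

def detect_edges(image, threshold=30.0):
    """Rough gradient-magnitude edge map standing in for a real edge detector."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > threshold

def model_from_image(image):
    """Outline of the modeling pipeline: identify the object from its edges,
    estimate its complexity, then pick a surface-generation strategy accordingly."""
    edges = detect_edges(image)
    complexity = edges.mean()          # crude stand-in for surface-orientation analysis
    if complexity > 0.1:
        # complex object: surfaces would be built from vanishing-point analysis
        surfaces = ["surface from vanishing points"]
    else:
        # simple object: a surface generated relative to the defined image surface
        surfaces = ["surface from the defined image surface"]
    return {"edges": edges, "surfaces": surfaces}
```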
Abstract:
A 3D graphics processing method, medium, and apparatus performing perspective correction are described. The 3D graphics processing method includes receiving a homogeneous coordinate and an attribute value of both end points of one scan line of a polygon composed of a plurality of perspective-projected vertices, calculating a reference value indicating the amount of perspective distortion in the scan line using the received homogeneous coordinates, and interpolating an attribute value of each of the pixels of the scan line using at least some of the received homogeneous coordinates and attribute values, with the attribute value interpolated by selectively applying perspective correction to each pixel based on the reference value. Accordingly, processing time and power consumption may be reduced.
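A minimal Python sketch of the selective perspective correction on one scan line: a reference value derived from the end-point homogeneous w coordinates decides whether cheap linear interpolation suffices or per-pixel perspective-correct interpolation is needed. The particular reference formula and threshold are assumptions for illustration.

```python
def shade_scan_line(w0, w1, attr0, attr1, num_pixels, threshold=0.05):
    """Interpolate one attribute across a scan line, applying perspective
    correction only when the reference value indicates noticeable distortion."""
    # Reference value: the relative difference of the end-point w coordinates
    # is used here as a proxy for the amount of perspective distortion.
    reference = abs(w0 - w1) / max(abs(w0), abs(w1), 1e-9)
    result = []
    for i in range(num_pixels):
        t = i / max(num_pixels - 1, 1)
        if reference < threshold:
            # Little distortion: plain linear interpolation is good enough.
            result.append((1 - t) * attr0 + t * attr1)
        else:
            # Strong distortion: interpolate attr/w and 1/w, then divide per pixel.
            inv_w = (1 - t) / w0 + t / w1
            attr_over_w = (1 - t) * (attr0 / w0) + t * (attr1 / w1)
            result.append(attr_over_w / inv_w)
    return result
```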
Abstract:
A system, method, and medium for processing objects including 3D graphic data are provided, wherein the processing time for converting the 3D graphic data into a 2D image can be minimized by aligning and converting the objects of the 3D graphic data into the 2D image based on the appearance information corresponding to the effects information or shader code.
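A small Python sketch of aligning objects by appearance before conversion: objects are sorted by an appearance key (an effect or shader code identifier) so that each appearance is set up once per group. The dictionary layout and the print placeholders are assumptions made for the illustration.

```python
from itertools import groupby

def render_by_appearance(objects):
    """Sort objects by their appearance key so each shader/effect is bound once per group."""
    ordered = sorted(objects, key=lambda obj: obj["appearance"])
    for appearance, group in groupby(ordered, key=lambda obj: obj["appearance"]):
        print("bind appearance:", appearance)   # placeholder for shader/effect setup
        for obj in group:
            print("  draw:", obj["name"])       # placeholder for converting the object to 2D
```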