Abstract:
There is provided an image processing apparatus including a plurality of imaging units included in a stereo camera, the plurality of imaging units being configured to image a first chart pattern that includes a pattern serving as a plurality of feature points and a mirror surface, and a correction parameter calculation unit configured to calculate a correction parameter that corrects misalignment between the plurality of imaging units, based on the pattern included in the first chart pattern imaged by the plurality of imaging units and the pattern reflected in the mirror surface.
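The abstract does not specify how the correction parameter is computed. As a minimal sketch, one common form of such a parameter is a vertical offset estimated from corresponding chart feature points detected by the two imaging units; the function name and the averaging approach here are assumptions, not the patented method.

```python
# Hypothetical sketch: estimate a vertical-offset correction parameter
# from corresponding chart feature points seen by two imaging units.
def vertical_offset_correction(left_pts, right_pts):
    """Mean vertical gap between corresponding feature points.

    left_pts, right_pts: lists of (x, y) feature-point coordinates
    detected in the images from the two imaging units.
    """
    gaps = [ly - ry for (_, ly), (_, ry) in zip(left_pts, right_pts)]
    return sum(gaps) / len(gaps)

left = [(10, 20.5), (50, 21.0), (90, 20.8)]
right = [(8, 18.5), (48, 19.0), (88, 18.8)]
dy = vertical_offset_correction(left, right)
# Applying the correction shifts one image so that corresponding
# feature points share the same scanline, as stereo matching requires.
corrected = [(x, y + dy) for x, y in right]
```

In a real calibration pipeline, a full rotation/translation model would be fitted rather than a single offset; the single parameter keeps the idea visible.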
Abstract:
In an image processing apparatus, an image pickup unit takes images of an object including the face of a person wearing glasses used to observe a stereoscopic image that contains a first parallax image and a second parallax image obtained when the object in a three-dimensional (3D) space is viewed from different viewpoints. A glasses identifying unit identifies the glasses included in the image of the object taken by the image pickup unit. A face detector detects a facial region of the face of the person included in the image of the object taken by the image pickup unit, based on the glasses identified by the glasses identifying unit. An augmented-reality special rendering unit adds a virtual feature to the facial region of the face of the person detected by the face detector.
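The abstract describes deriving a facial region from the detected glasses, but not the geometry. A minimal sketch, assuming the glasses are given as a bounding box and the face region is a fixed expansion of it (the ratios below are hypothetical):

```python
def face_region_from_glasses(glasses_box):
    """Estimate a facial region (x, y, w, h) from a detected glasses
    bounding box (x, y, w, h), using assumed expansion ratios:
    the face is taken as ~1.4x wider and ~4x taller than the glasses,
    with the glasses sitting about one glasses-height below the top.
    """
    x, y, w, h = glasses_box
    face_w = int(w * 1.4)
    face_h = int(h * 4.0)
    face_x = x - (face_w - w) // 2   # center the face box horizontally
    face_y = y - int(h * 1.0)        # extend upward to cover the forehead
    return (face_x, face_y, face_w, face_h)

# Usage: a glasses box detected at (100, 80) sized 100x30 pixels
region = face_region_from_glasses((100, 80, 100, 30))
```

Anchoring on the glasses is what lets the detector work even though the glasses themselves occlude the eye region that generic face detectors rely on.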
Abstract:
An image acquisition section of an information processor acquires stereo images from an imaging device. An input information acquisition section acquires an instruction input from a user. A depth image acquisition portion of a position information generation section generates, using the stereo images, a depth image representing a position distribution of subjects existing in the field of view of the imaging device in the depth direction. A matching portion first adjusts the size of a reference template image in accordance with the position of each of the subjects in the depth direction represented by the depth image, then performs template matching on the depth image, thus identifying the position of a target having a given shape and size in the three-dimensional space. An output information generation section generates output information by performing necessary processes based on the target position.
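The key step is resizing the reference template before matching: a subject farther away appears smaller, so the template is scaled in inverse proportion to its depth. A minimal sketch of that scaling rule (the function name and the choice of a pinhole-style inverse-depth scale are assumptions):

```python
def scaled_template_size(ref_size, ref_depth, subject_depth):
    """Scale a reference template in inverse proportion to subject depth.

    ref_size:      (width, height) of the template at ref_depth
    ref_depth:     depth at which the template has its reference size
    subject_depth: depth of the subject read from the depth image
    """
    scale = ref_depth / subject_depth
    return (round(ref_size[0] * scale), round(ref_size[1] * scale))

# Usage: a 40x40 template defined at depth 1.0 m, subject at 2.0 m
size = scaled_template_size((40, 40), 1.0, 2.0)  # half as large
```

Matching the resized template against the depth image, rather than the color image, is what lets a target of a given physical shape and size be located in 3D space.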
Abstract:
Data of a moving image has a hierarchical structure comprising a 0-th layer, a first layer, a second layer, and a third layer in a z axis direction. Each layer is composed of moving image data of a single moving image expressed at a different resolution. When a moving image is displayed, both the coordinates of a viewpoint and a corresponding display area are determined in a virtual three-dimensional space formed by an x axis representing the horizontal direction of the image, a y axis representing the vertical direction of the image, and a z axis representing the resolution. By providing switching boundaries for the layers with respect to the z axis, the layer of the moving image data used for frame rendering is switched in accordance with the z value of the frame coordinates.
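The layer-switching rule can be sketched directly: each boundary on the z axis separates one layer from the next, and the z value of the frame coordinates selects the layer. The boundary values below are hypothetical placeholders, not values from the patent.

```python
# Hypothetical z-axis switching boundaries between the 0th..3rd layers.
LAYER_BOUNDARIES = [1.0, 2.0, 4.0]

def select_layer(z):
    """Pick the moving-image layer used for frame rendering from the
    z value (resolution axis) of the frame coordinates."""
    for layer, boundary in enumerate(LAYER_BOUNDARIES):
        if z < boundary:
            return layer
    return len(LAYER_BOUNDARIES)  # beyond the last boundary: 3rd layer

# Usage: zooming out (increasing z) crosses boundaries into
# lower-resolution layers without interrupting playback.
layers = [select_layer(z) for z in (0.5, 1.5, 3.0, 5.0)]
```

Keeping the boundaries on the z axis means a zoom gesture maps monotonically onto layer choice, so each frame is always rendered from the single most appropriate resolution.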
Abstract:
An imaging device includes a first camera and a second camera and shoots the same object under different shooting conditions. A shot-image data acquirer of an image analyzer acquires data of the two images simultaneously shot by the imaging device. A correcting section aligns the luminance value distributions of the two images by correcting one of them. A correction table storage stores a correction table showing the correspondence relationship between the luminance values before and after correction. A correction table managing section switches and generates the correction table to be used according to the function implemented by the information processing device. A depth image generator performs stereo matching using the two images and generates a depth image.
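A correction table of this kind is just a lookup from pre-correction to post-correction luminance. As a minimal sketch, assuming 8-bit luminance and a simple gain/offset model for building the table (both assumptions, since the abstract does not state how the table is generated):

```python
def build_correction_table(gain, offset):
    """Hypothetical correction table mapping each luminance value 0..255
    before correction to its corrected value, clamped to the valid range."""
    return [min(255, max(0, round(v * gain + offset))) for v in range(256)]

def correct_image(pixels, table):
    """Apply the stored table to one image so its luminance distribution
    lines up with the other camera's image before stereo matching."""
    return [table[v] for v in pixels]

# Usage: brighten the darker camera's image by a gain/offset table
table = build_correction_table(1.2, -10)
aligned = correct_image([0, 100, 200], table)
```

Switching between pre-generated tables per function (as the managing section does) is cheap at runtime because the correction itself is a single table lookup per pixel.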
Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally in a layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the plurality of LEDs, respectively. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
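The abstract leaves open how the controller position is derived from the LED positions. A minimal sketch, assuming the simplest estimator, the centroid of the detected LED image coordinates (the function name and the centroid choice are assumptions):

```python
def controller_position(led_positions):
    """Estimate the game controller's position in a captured image as
    the centroid of the detected LED positions (x, y) in pixels."""
    n = len(led_positions)
    return (sum(x for x, _ in led_positions) / n,
            sum(y for _, y in led_positions) / n)

# Usage: four LEDs detected at the corners of a 2x2-pixel square
pos = controller_position([(0, 0), (2, 0), (0, 2), (2, 2)])
```

Because the LEDs are laid out two-dimensionally, the same detections could also yield orientation; the centroid alone already gives a robust position even if one LED is briefly occluded, provided the remaining detections stay roughly symmetric.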
Abstract:
A tile image sequence obtained by dividing a frame into regions of a predetermined size is further divided into regions of another predetermined size on an image plane to generate voxels. If redundancy exists in a space direction or a time direction, data is reduced in that direction, and sequences in the time direction are deployed on a two-dimensional plane. Voxel images are placed on an image plane of a predetermined size to generate one integrated image. Using the grouping pattern that exhibits the minimum quantization error, the pixels of each group are collectively placed in the region of each voxel image of the integrated image. The integrated image after this re-placement is compressed in accordance with a predetermined compression method to generate a compressed image and reference information for determining the position of a needed pixel.
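One step can be sketched concretely: reducing a voxel in the time direction when its frames are redundant. The all-frames-identical test below is the simplest possible redundancy criterion and is an assumption; the abstract only says data is reduced in a direction where redundancy exists.

```python
def reduce_time_redundancy(voxel):
    """Reduce a voxel in the time direction when it is redundant.

    voxel: list of frames, each frame a 2D list of pixel values.
    If every frame equals the first (temporal redundancy), keep only
    one frame; reference information would record this reduction so
    the decoder can locate the needed pixel.
    """
    if all(frame == voxel[0] for frame in voxel[1:]):
        return voxel[:1]
    return voxel

# Usage: a static 1x2-pixel region over three frames collapses to one
static = reduce_time_redundancy([[[1, 2]], [[1, 2]], [[1, 2]]])
```

The same idea applies in the space direction (rows or columns that repeat), which is why the reduction is described per-direction rather than per-frame.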
Abstract:
An image pickup apparatus includes: an image data production unit configured to produce data of a plurality of kinds of images, for each of the pixel strings constituting a row, from an image frame obtained by picking up an image of an object as a moving picture; and an image sending unit configured to extract, from the data of each of the plurality of kinds of images, a pixel string in a region requested by a host terminal, connect the pixel strings to each other in units of a connection pixel count determined on the basis of a given rule to produce a stream, and transmit the stream to the host terminal. The image sending unit switchably determines whether the connection pixel count is to be a fixed value or a variable value in accordance with the kind of each image.
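The stream production step can be sketched for the fixed-value case: pixel strings from several image kinds are interleaved, taking a fixed connection unit of pixels from each in turn. The function below is a hypothetical illustration of that interleaving, not the patented rule for choosing the unit.

```python
def pack_stream(strings, unit):
    """Interleave pixel strings from several image kinds into one stream,
    taking `unit` pixels from each string in turn (fixed connection unit).

    strings: list of pixel strings, one per image kind.
    Shorter strings simply contribute nothing once exhausted.
    """
    stream, offsets = [], [0] * len(strings)
    while any(off < len(s) for off, s in zip(offsets, strings)):
        for i, s in enumerate(strings):
            stream.extend(s[offsets[i]:offsets[i] + unit])
            offsets[i] += unit
    return stream

# Usage: a full-resolution string and a half-resolution string,
# connected two pixels at a time
stream = pack_stream([[1, 1, 1, 1], [2, 2]], 2)
```

A variable connection unit would instead let each image kind contribute chunks sized to its own resolution, keeping the kinds time-aligned within the stream; switching between the two modes per image kind is the point of the final sentence.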
Abstract:
A frame sequence of moving picture data is divided into tile image sequences, and the color space of each tile image sequence is converted to generate a YCbCr image sequence. Each frame is reduced to 1/2 in the vertical and horizontal directions, and a compression process is carried out to generate compression data of a reference image. The compression data of the reference image is decoded and decompressed in the same manner as during image display to restore a YCbCr image as the reference image, and a difference image sequence is generated from the reference image and the original YCbCr image. Then, compression data of a difference image is generated, and compression data obtained by connecting the compression data of the reference image and the compression data of the difference image is generated for every four frames of a tile image.
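The reference/difference split can be sketched end to end: downsample to 1/2 in each direction, restore to full size as the decoder would, and subtract to get the difference image. The 2x2 averaging and nearest-neighbour restoration below are assumed stand-ins for the unspecified compression and decompression steps.

```python
def downsample_half(img):
    """Average 2x2 blocks, reducing an image to 1/2 in each direction
    (stand-in for the reference-image compression path)."""
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def upsample_double(img):
    """Nearest-neighbour restoration to full size, mimicking how the
    reference image is decoded and decompressed at display time."""
    return [[img[y // 2][x // 2] for x in range(len(img[0]) * 2)]
            for y in range(len(img) * 2)]

def difference_image(original, reference):
    """Per-pixel difference between the original YCbCr image and the
    restored reference image."""
    return [[o - r for o, r in zip(orow, rrow)]
            for orow, rrow in zip(original, reference)]

# Usage on a 2x2 single-channel tile
tile = [[1, 3], [5, 7]]
restored = upsample_double(downsample_half(tile))
diff = difference_image(tile, restored)
```

Computing the difference against the *restored* reference, rather than the pristine downsampled one, is what keeps encoder and decoder in agreement: adding the difference back at display time reproduces the original exactly.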