Abstract:
An image processing device includes an image generation device, a first display device, a second display device, and a display control device. The image generation device generates a first display image and a second display image such that the two images differ in at least one of decimation ratio, enlargement ratio, or reduction ratio. Between the first display device and the second display device, the image generation device also varies at least one of: the pixels of the first pixel group and the second pixel group that are used in the generation of the second display image, the enlargement ratio or reduction ratio of the second display image, and the pixel region in which the second display image is displayed.
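As a minimal sketch of two display images that differ in decimation ratio, the following derives two views of one captured frame; the function name and the specific ratios are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def make_display_images(raw, decim1=1, decim2=2):
    """Sketch: derive two display images from one captured frame that
    differ in decimation ratio (the first/second display image of the
    abstract).  Names and ratios are illustrative assumptions."""
    first = raw[::decim1, ::decim1]    # e.g. full-resolution display image
    second = raw[::decim2, ::decim2]   # e.g. decimated display image
    return first, second
```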
Abstract:
According to the present invention, the first and second phase difference pixels are arranged on pixel lines in the first direction in the color image that is thinned during capture of a moving image, including one for live view display. Therefore, phase difference AF can be performed accurately even during moving image capture. Furthermore, the pixel values at the positions of the first and second phase difference pixels in the thinned color image can be acquired accurately on the basis of the pixel values of the surrounding pixels. Accordingly, degradation of the image quality of the captured image (still image or moving image) due to the phase difference pixels can be prevented or alleviated. The pixel values can likewise be acquired accurately from the surrounding pixels during synchronization (demosaicing) processing.
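The surrounding-pixel interpolation described above can be sketched as follows; the function name, the window radius, and the boolean-mask representation of the phase difference pixels are assumptions for illustration, not the patent's formulation:

```python
import numpy as np

def interpolate_phase_difference_pixels(img, pd_mask, radius=1):
    """Sketch: replace each phase difference pixel value with the mean of
    the ordinary (non-PD) pixels inside a (2*radius+1)^2 window around it.
    `img` is a 2-D array; `pd_mask` is True at phase difference pixels."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(pd_mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = img[y0:y1, x0:x1].astype(float)
        ok = ~pd_mask[y0:y1, x0:x1]        # use only ordinary neighbors
        if ok.any():
            out[y, x] = window[ok].mean()
    return out
```

In a real sensor the averaging would be restricted to same-color neighbors of the color filter array; the single-channel version above only illustrates the surrounding-pixel idea.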
Abstract:
An image processing device comprising: an image acquisition device; a parallax acquisition device; a first data transform device; an operation processing device; and a second data transform device that transforms the third frequency component data and the fourth frequency component data, which are calculated by the operation processing device and correspond to the third image and the fourth image respectively, into data on a real space, and that selects the pixels at the positions corresponding to the target pixels as one pixel of the third image and one pixel of the fourth image.
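A hedged sketch of such a frequency-domain pipeline, using NumPy FFTs as stand-ins for the two data transform devices; the `op` callable is only a placeholder for the operation processing device, whose actual computation the abstract does not specify:

```python
import numpy as np

def frequency_domain_pipeline(img1, img2, op):
    """Sketch: forward-transform two images (first data transform),
    apply an operation in the frequency domain (operation processing),
    then inverse-transform back to real space (second data transform)."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    F3, F4 = op(F1, F2)             # third/fourth frequency component data
    img3 = np.fft.ifft2(F3).real    # third image on the real space
    img4 = np.fft.ifft2(F4).real    # fourth image on the real space
    return img3, img4
```

Per-pixel selection at the target positions would then read one pixel from `img3` and one from `img4`.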
Abstract:
An imaging element in which pixels, which are photoelectric conversion elements, are placed at respective square lattice positions. In a predetermined region where the pixels of the imaging element are placed, a plurality of pairs are arranged in a first line, which is any one of the lines, and in a second line parallel to the first line, each pair consisting of pair pixels, that is, first and second phase difference detection pixels placed adjacent to each other to detect a phase difference. The pairs in the first line are spaced apart from each other by at least two pixels, and the pairs in the second line are placed at positions corresponding to the positions where the pair pixels in the first line are spaced apart from each other.
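One way to read the described geometry, sketched with assumed dimensions (2-pixel-wide pairs separated by 2-pixel gaps; the function name and the offset into the gap are hypothetical):

```python
def layout_pd_pairs(width, pair_w=2, gap=2):
    """Sketch: start columns of phase-difference pixel pairs on two lines.
    First line: pairs of width `pair_w` separated by `gap` (>= 2) pixels.
    Second line: pairs shifted so they fall into the first line's gaps.
    All widths and the offset are illustrative assumptions."""
    first = list(range(0, width - pair_w + 1, pair_w + gap))
    offset = pair_w                     # lands each pair in a first-line gap
    second = [c + offset for c in first if c + offset + pair_w <= width]
    return first, second
```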
Abstract:
After the AE/AF/AWB operation, a subject distance is calculated for each pixel, and a histogram showing the distance distribution is created from these values. The histogram is searched for the class with the highest frequency, that is, the peak on the side nearer than the focus distance, and a rectangular area Ln is set that includes the pixels whose subject distance falls within the searched range. The average parallax amount Pn within the rectangular area Ln is calculated, and it is checked whether Pn lies between the parallax amounts a and a−t1. If Pn is not within this preset range, the aperture value is adjusted so that Pn falls between a and a−t1.
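A minimal sketch of this flow, assuming flat per-pixel arrays and simple histogram binning; `a`, `t1`, and `Pn` follow the abstract, while `n_bins`, the data layout, and the return convention are assumptions:

```python
import numpy as np

def check_parallax(subject_distance, parallax, focus_distance, a, t1,
                   n_bins=32):
    """Sketch: histogram the per-pixel subject distances, find the modal
    class nearer than the focus distance, average the parallax of the
    pixels in that class, and report whether it is already in range."""
    near = subject_distance < focus_distance
    hist, edges = np.histogram(subject_distance[near], bins=n_bins)
    k = hist.argmax()                   # peak class on the near side
    in_peak = (subject_distance >= edges[k]) & (subject_distance < edges[k + 1])
    Pn = parallax[in_peak].mean()       # average parallax amount Pn
    return Pn, (a - t1) <= Pn <= a      # True -> no aperture change needed
```

In the abstract the pixels of the peak class define the rectangular area Ln; here the class membership mask plays that role directly.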
Abstract:
The interpolation precision of phase difference detection pixels is improved. An image sensor (14) is provided with a color filter (30) on which a basic sequence pattern, formed by disposing a first sequence pattern and a second sequence pattern in point symmetry, is repeatedly disposed. In the first sequence pattern, first filters are disposed on the pixels at the four corners and in the center of a square array of 3×3 pixels, second filters are disposed on the horizontal line through the center of the square array, and third filters are disposed on the vertical line through the center of the square array. In the second sequence pattern, the positions of the first filters are the same as in the first sequence pattern, while the positions of the second filters and the third filters are swapped.
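The described 3×3 patterns and one plausible 6×6 basic pattern can be written down as follows; filters are labeled 1/2/3 for first/second/third, and the diagonal block placement is an assumption about how the point-symmetric arrangement is realized:

```python
import numpy as np

def first_pattern():
    """The first 3x3 sequence pattern as described in the abstract."""
    p = np.zeros((3, 3), dtype=int)
    p[[0, 0, 2, 2, 1], [0, 2, 0, 2, 1]] = 1  # four corners and center
    p[1, 0] = p[1, 2] = 2                    # horizontal line through center
    p[0, 1] = p[2, 1] = 3                    # vertical line through center
    return p

def basic_pattern():
    """A 6x6 basic pattern built from the first pattern and the second
    pattern (second/third filters swapped), placed in diagonal blocks --
    an assumed realization of the point-symmetric arrangement."""
    a = first_pattern()
    b = a.copy()
    b[a == 2], b[a == 3] = 3, 2              # swap second and third filters
    return np.block([[a, b], [b, a]])
```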
Abstract:
An image capturing element is provided with a color filter in which a basic arrangement pattern, having first and second arrangement patterns arranged to be symmetrical about a point, is repeated. The first arrangement pattern comprises first filters arranged on the 2×2 pixels located at the upper-left portion and on the pixel located at the lower-right of a 3×3 pixel square arrangement, second filters arranged on the center and lower lines of the square arrangement in the vertical direction, and third filters arranged on the center and right lines of the square arrangement in the horizontal direction. The second arrangement pattern comprises the first filters in the same arrangement as in the first arrangement pattern, and the second and third filters with their arrangements interchanged compared to the first arrangement pattern.
Abstract:
An information processing apparatus includes a detection unit, a derivation unit, and an acquisition unit. The detection unit detects a three-dimensional position and a posture of an object in an instruction three-dimensional region that has an enlarged or reduced relationship with an observation three-dimensional region in which a virtual viewpoint and a virtual visual line are defined. The derivation unit derives the viewpoint and the visual line corresponding to the detection results of the detection unit, depending on positional relationship information indicating the relative positional relationship between the observation three-dimensional region and the instruction three-dimensional region. The acquisition unit acquires a virtual viewpoint image showing a subject as observed with the viewpoint and the visual line derived by the derivation unit, the virtual viewpoint image being based on a plurality of images obtained by imaging, with a plurality of imaging apparatuses, an imaging region included in the observation three-dimensional region.
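The mapping from the instruction region to the observation region can be sketched as a scaled coordinate transform; reducing the positional relationship information to a uniform scale and two origins is an assumption for illustration, not the patent's exact formulation:

```python
import numpy as np

def derive_viewpoint(detected_pos, instr_origin, obs_origin, scale):
    """Sketch: map a 3-D position detected in the instruction region to a
    virtual viewpoint in the observation region.  The relative positional
    relationship is modeled here as (instr_origin, obs_origin, scale)."""
    return obs_origin + scale * (np.asarray(detected_pos, dtype=float)
                                 - instr_origin)
```

The same transform would be applied to the detected posture to obtain the virtual visual line direction.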
Abstract:
Provided is an information processing apparatus including a processor and a memory connected to or incorporated in the processor. The processor acquires imaging device position information regarding the positions of a plurality of imaging devices and imaging direction information regarding the imaging direction of each of the plurality of imaging devices. On the basis of the imaging device position information and the imaging direction information, the processor derives virtual viewpoint range specification information capable of specifying the virtual viewpoint range in which a virtual viewpoint image can be generated from a plurality of captured images captured by the plurality of imaging devices, and outputs information regarding the virtual viewpoint range specification information.
Abstract:
An information processing apparatus acquires a plurality of pieces of sound information, sound collection device position information, and target subject position information. In addition, the information processing apparatus specifies, from the plurality of pieces of sound information, a target sound of the region corresponding to the position of the target subject, based on the acquired sound collection device position information and target subject position information. Further, in a case in which a virtual viewpoint video is generated, the information processing apparatus generates target subject emphasis sound information indicating a sound in which the specified target sound is emphasized more than sounds emitted from regions other than the region corresponding to the position of the target subject.