Abstract:
A region extraction device and method, used in a stage prior to detection of a target object, that can appropriately extract, as a determination region in an imaging region, a region in which a target object emitting light with a light emission spectrum in a specific narrowband may be present, and an object detection apparatus and method that can efficiently detect the target object using the region extraction result, are provided. In the region extraction method, a plurality of images, including an image of a second narrowband corresponding to a first narrowband of light emitted by the target object and an image of a third narrowband different from the second narrowband, are acquired from a multispectral camera. Next, based on the plurality of acquired images, a region that emits light having a light emission spectrum other than that of the first narrowband is determined to be a non-determination region in the imaging region, and one or more regions obtained by excluding the non-determination region from the imaging region are extracted as the determination region.
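The exclusion logic described above can be illustrated with a minimal sketch. The per-pixel ratio test, the threshold value, and the array layout below are our own assumptions for illustration, not details given in the abstract:

```python
import numpy as np

def extract_determination_region(band2, band3, ratio_threshold=2.0):
    """Return a boolean mask of candidate (determination) pixels.

    band2 : image of the second narrowband (in-band response)
    band3 : image of the third narrowband (off-band response)
    Pixels whose off-band response is comparable to the in-band
    response are assumed to emit a spectrum other than the first
    narrowband and are excluded as the non-determination region.
    """
    band2 = np.asarray(band2, dtype=float)
    band3 = np.asarray(band3, dtype=float)
    non_determination = band3 * ratio_threshold >= band2
    return ~non_determination

# Toy 2x2 image: only the top-left pixel is strongly in-band.
band2 = np.array([[100.0, 10.0], [5.0, 8.0]])
band3 = np.array([[2.0, 30.0], [20.0, 9.0]])
mask = extract_determination_region(band2, band3)
```

In this toy input only the top-left pixel survives as a determination region; the remaining pixels are excluded because their third-narrowband response is too strong relative to the second-narrowband response.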
Abstract:
Provided are an imaging device and an imaging method that can generate images between which a difference in appearance caused by a difference in the polarization directions of received light is suppressed in a case in which different images are generated on the basis of light having different polarization directions. An imaging device (1) includes: an imaging optical system (10); a first polarizer that aligns the polarization direction of light transmitted through a first pupil region and a second pupil region with a first polarization direction; a second polarizer that transmits light in a second polarization direction different from the first polarization direction; an imaging element (100) that receives the light transmitted through the first pupil region and the second pupil region; and an image generation unit that performs a crosstalk removal process on pixel signals of a first pixel and a second pixel and generates a first image corresponding to the light transmitted through the first pupil region and a second image corresponding to the light transmitted through the second pupil region on the basis of the pixel signals subjected to the crosstalk removal process.
Abstract:
An imaging apparatus capable of capturing an in-focus image while moving, and an image composition apparatus capable of generating a highly detailed composite image, are provided. A camera 100 is mounted on an unmanned aerial vehicle 10, and imaging is performed while moving. During imaging, a focusing mechanism included in the camera 100 is controlled so that the focus position is periodically scanned. In addition, during imaging, movement of the unmanned aerial vehicle 10 is controlled such that at least one scan of the focus position is completed while the vehicle moves to a position shifted by one imaging range.
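The movement control implies a simple timing bound: at least one full focus scan must finish while the vehicle covers one imaging range, which caps the movement speed. A minimal sketch under that reading (the linear relation and all names are our assumptions, not the patent's):

```python
def max_speed_for_scan(imaging_range_m: float, scan_period_s: float) -> float:
    """Upper bound on movement speed such that at least one full
    focus-position scan completes while the vehicle travels a distance
    equal to one imaging range."""
    return imaging_range_m / scan_period_s

# e.g. a 10 m imaging range and a 0.5 s focus-scan period
v_max = max_speed_for_scan(10.0, 0.5)
```

Under these illustrative numbers the vehicle must move no faster than 20 m/s for every imaging-range-sized step to contain a complete scan.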
Abstract:
In an imaging device that includes an imaging lens having n optical systems with different imaging characteristics and an image sensor having, in each pixel, m light receiving sensors with different combinations of crosstalk ratio and light sensitivity, m primary image data items are generated by obtaining the image signals output from the light receiving sensors of each pixel of the image sensor, and n secondary image data items corresponding to the optical systems are generated by performing crosstalk removal processing on the m generated primary image data items for each pixel. In a case where a pixel to be processed includes primary image data whose pixel value is saturated, the secondary image data items are generated by excluding that primary image data before performing the crosstalk removal processing.
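Crosstalk removal with saturation handling can be sketched as a linear unmixing in which the clipped measurements are dropped before solving. This is a minimal sketch; the mixing matrix, the clip level, and the least-squares formulation are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def remove_crosstalk(primary, mixing, saturation=255.0):
    """Recover n secondary values from m primary values for one pixel.

    primary : (m,) measured sensor values for the pixel
    mixing  : (m, n) matrix; row i gives the contribution of each of
              the n optical systems to light receiving sensor i
    Saturated primary values are excluded before solving, so the
    clipped measurements do not bias the result.
    """
    primary = np.asarray(primary, dtype=float)
    valid = primary < saturation
    # Least-squares solve of the remaining (unsaturated) equations.
    secondary, *_ = np.linalg.lstsq(mixing[valid], primary[valid], rcond=None)
    return secondary

# Hypothetical 3-sensor, 2-optical-system pixel; sensor 0 saturates.
mixing = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.5, 0.5]])
true_secondary = np.array([300.0, 100.0])
measured = np.clip(mixing @ true_secondary, 0.0, 255.0)  # sensor 0 clips
recovered = remove_crosstalk(measured, mixing)
```

Because the saturated first equation is discarded, the two remaining unsaturated measurements still determine both secondary values exactly in this toy case.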
Abstract:
Provided are an imaging device and an image data generation method capable of reducing noise generated in an image from which crosstalk is removed. In an imaging device (1) that captures images corresponding to optical systems at one time by using an imaging lens (10) including a plurality of optical systems with different imaging characteristics and an image sensor (100) including, in each pixel, a plurality of light receiving sensors with different crosstalk ratios, the number (m) of light receiving sensors included in each pixel of the image sensor (100) is larger than the number (n) of optical systems included in the imaging lens (10) (m > n). Accordingly, it is possible to reduce noise generated in an image from which crosstalk is removed.
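Why m > n reduces noise can be seen in a small simulation: with more unsaturated measurements than unknowns, the least-squares unmixing averages out sensor noise instead of amplifying it through an exact inverse. The mixing matrices, noise model, and solver below are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_values = np.array([120.0, 60.0])       # light from the n = 2 optical systems
mix_exact = np.array([[0.6, 0.4],
                      [0.4, 0.6]])          # m = n: two sensors per pixel
mix_over = np.vstack([mix_exact,
                      [[0.9, 0.1]]])        # m > n: a third sensor with a
                                            # different crosstalk ratio

def rms_recovery_error(mixing, trials=2000, sigma=2.0):
    """RMS error of least-squares crosstalk removal under sensor noise."""
    sq_errs = []
    for _ in range(trials):
        noisy = mixing @ true_values + rng.normal(0.0, sigma, mixing.shape[0])
        est, *_ = np.linalg.lstsq(mixing, noisy, rcond=None)
        sq_errs.append(np.sum((est - true_values) ** 2))
    return float(np.sqrt(np.mean(sq_errs)))

err_m_equals_n = rms_recovery_error(mix_exact)  # m = n
err_m_over_n = rms_recovery_error(mix_over)     # m > n
```

With these illustrative crosstalk ratios the overdetermined case recovers the per-system values with a markedly smaller RMS error than the square case, matching the abstract's claim that m > n reduces noise in the crosstalk-removed image.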
Abstract:
The imaging device includes a multiple-property lens that includes a first area having a first property and a second area having a second property different from the first property, an image sensor in which a first light receiving element 25A having a first microlens and a second light receiving element 25B having a second microlens with an image forming magnification different from that of the first microlens are two-dimensionally arranged, and a crosstalk removal processing unit that removes a crosstalk component from each of a first crosstalk image acquired from the first light receiving element 25A of the image sensor and a second crosstalk image acquired from the second light receiving element 25B to generate a first image and a second image having the first property and the second property of the multiple-property lens, respectively.
Abstract:
An image-processing device includes an image acquiring device, an encoded aperture pattern setting device configured to set encoded aperture patterns for multiple pupil images of the main lens, respectively, a calculation device configured to perform a weighted product-sum calculation between the pupil image for each lens of the lens array in the image acquired from the image sensor and the encoded aperture pattern set by the encoded aperture pattern setting device, and an image generating device configured to generate an image based on the calculation result of the calculation device.
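The weighted product-sum between each per-lens pupil image and the encoded aperture pattern can be sketched as follows. The tile layout (one square pupil image under each lens of the array), the example pattern, and the function name are illustrative assumptions:

```python
import numpy as np

def coded_aperture_sum(sensor_image, pattern, lens_px):
    """Weighted product-sum of each per-lens pupil image with the pattern.

    sensor_image : (H, W) raw image; each lens_px x lens_px tile is
                   assumed to be the pupil image under one array lens
    pattern      : (lens_px, lens_px) encoded aperture weights
    Returns one output value per array lens.
    """
    h, w = sensor_image.shape
    tiles = sensor_image.reshape(h // lens_px, lens_px,
                                 w // lens_px, lens_px)
    # Contract each tile (indices i, j) against the pattern.
    return np.einsum('aibj,ij->ab', tiles, pattern)

# Toy 4x4 sensor: 2x2 pupil images under each of 2x2 array lenses.
sensor = np.arange(16, dtype=float).reshape(4, 4)
pattern = np.array([[1.0, 0.0],
                    [0.0, 1.0]])   # hypothetical two-opening aperture
out = coded_aperture_sum(sensor, pattern, lens_px=2)
```

Each output pixel is the sum of the two pupil samples selected by the pattern, i.e. the image that a physical aperture with those openings would have produced.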
Abstract:
A device for measuring distances to multiple subjects includes an imaging optical system having multiple regions, a pupil orientation sensor having multiple pixels including two-dimensionally arranged photoelectric conversion elements, the pupil orientation sensor selectively receiving a light flux that has passed through any of the multiple regions, an image acquisition device configured to simultaneously acquire each of multiple images corresponding to the multiple regions from the pupil orientation sensor, a focusing control device configured to independently drive the multiple physically separated lenses of the imaging optical system, on the basis of the multiple images acquired by the image acquisition device, to control the lenses to be focused on multiple subjects each at a different focusing distance, and a first calculation device configured to calculate each of the focusing distances to the multiple subjects respectively subjected to focusing control by the focusing control device.
Abstract:
An aspect of the present invention is an imaging method using a multifocal lens having a plurality of regions with different focal lengths. The imaging method includes a focusing state control step of controlling a focusing state of the multifocal lens, and an imaging step of obtaining an image of a subject in the controlled focusing state. In the focusing state control step, in response to an imaging instruction, the focusing state is controlled so that a main subject is brought into focus via the region with the shortest required focusing time among the plurality of regions.
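Selecting the region with the shortest required focusing time can be sketched as a simple minimization. The linear travel-time model and all names below are illustrative assumptions, not details from the abstract:

```python
def pick_fastest_region(current_focus, required_focus_by_region, drive_speed):
    """Choose the multifocal-lens region that brings the main subject
    into focus in the shortest time, modeling focusing time as
    (focus travel distance) / (drive speed)."""
    times = {region: abs(target - current_focus) / drive_speed
             for region, target in required_focus_by_region.items()}
    return min(times, key=times.get)

# Two regions with different focal lengths need different focus travel:
region = pick_fastest_region(
    current_focus=2.0,
    required_focus_by_region={'near': 1.5, 'far': 4.0},
    drive_speed=1.0,
)
```

Here the 'near' region needs only 0.5 units of focus travel versus 2.0 for the 'far' region, so it is selected as the fastest path to an in-focus image.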
Abstract:
This parallax image display device is provided with an image acquiring unit that acquires a right-eye image and a left-eye image used for generating a parallax image enabling a stereoscopic view, an information volume distribution calculator that calculates an information volume distribution of the right-eye image and an information volume distribution of the left-eye image, and a parallax image generator that generates the parallax image from the right-eye image and the left-eye image on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image.