Abstract:
An image processing device includes a reference camera determiner that determines one of the multiple cameras as a reference camera; an imaging condition acquisition unit that acquires imaging conditions relating to exposure and white balance which are set in the reference camera; an imaging condition setter that sends a command to the cameras other than the reference camera to set imaging conditions relating to exposure and white balance, based on the imaging conditions in the reference camera; and an image composition unit that performs image composition processing on the multiple captured images acquired from the multiple cameras and outputs a panoramic composite image. Thus, the boundaries between the composed images in the resulting panoramic composite image are kept from becoming unnaturally conspicuous, and the captured image serving as the reference is displayed suitably.
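A minimal sketch of the setting-propagation step described above: the reference camera's exposure and white-balance conditions are read and pushed to every other camera before the captured images are composed. The Camera class and its get/set methods are hypothetical stand-ins for whatever camera-control API is actually used.

```python
from dataclasses import dataclass

@dataclass
class ImagingConditions:
    exposure_time_ms: float
    gain_db: float
    wb_red_gain: float
    wb_blue_gain: float

class Camera:
    """Hypothetical camera handle exposing imaging-condition get/set."""
    def __init__(self, cam_id: int):
        self.cam_id = cam_id
        self.conditions = ImagingConditions(10.0, 0.0, 1.0, 1.0)

    def get_conditions(self) -> ImagingConditions:
        return self.conditions

    def set_conditions(self, cond: ImagingConditions) -> None:
        self.conditions = cond  # in practice: send a command to the device

def synchronize_to_reference(cameras: list[Camera], reference_index: int) -> None:
    """Copy the reference camera's exposure/WB settings to all other cameras."""
    ref = cameras[reference_index].get_conditions()
    for i, cam in enumerate(cameras):
        if i != reference_index:
            cam.set_conditions(ref)

cams = [Camera(i) for i in range(4)]
synchronize_to_reference(cams, reference_index=0)
```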
Abstract:
An image processing device includes a stitching processing unit that composites composition source images generated from captured images under a preset processing condition to generate a composite image, a screen generation unit that generates a screen and outputs the composite image to a display input device, a touch operation determination unit that determines a camera image region to be adjusted based on a detection result of a touch operation on the screen and determines an adjustment item according to an operation mode of the touch operation, and a processing condition setting unit that sets a temporary processing condition related to the camera image region to be adjusted. The stitching processing unit generates the composition source image from the captured image of the camera corresponding to the camera image region to be adjusted under the temporary processing condition and updates the composite image by compositing the composition source images.
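A rough sketch of the touch-handling part of this flow: the touch position on the composite preview selects the camera image region to adjust, and the gesture type selects the adjustment item. The region boundaries, gesture names, and adjustment items below are illustrative assumptions, not values taken from the source.

```python
# Horizontal extents of each camera's region in the composite (x_start, x_end).
CAMERA_REGIONS = {0: (0, 640), 1: (640, 1280), 2: (1280, 1920)}

GESTURE_TO_ADJUSTMENT = {
    "drag_vertical": "brightness",
    "drag_horizontal": "seam_position",
    "pinch": "zoom",
}

def region_from_touch(x: int) -> int | None:
    """Return the index of the camera region containing the touch x-coordinate."""
    for cam_id, (x0, x1) in CAMERA_REGIONS.items():
        if x0 <= x < x1:
            return cam_id
    return None

def adjustment_from_gesture(gesture: str) -> str | None:
    """Map the operation mode of the touch to an adjustment item."""
    return GESTURE_TO_ADJUSTMENT.get(gesture)

cam = region_from_touch(700)             # -> 1
item = adjustment_from_gesture("pinch")  # -> "zoom"
print(cam, item)
```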
Abstract:
A video processing apparatus analyzes an input video that is input from a video input unit, detects a plurality of moving bodies included in the input video, determines a main moving body and a sub moving body, and determines a sub picture position for superimposing and displaying a sub video on a main video in picture-in-picture form. This video processing apparatus performs cut-out processing on the main video and the sub video from the input video, and synthesizes the cut-out main video and sub video to generate a picture-in-picture synthesized video in which the sub video is superimposed at the sub picture position of the main video in picture-in-picture form. The picture-in-picture synthesized video is output as a single stream.
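A simplified sketch of the cut-out and synthesis step only: the main and sub regions are cropped from the input frame, the sub crop is downscaled, and it is pasted at the sub picture position. Moving-body detection and the main/sub selection logic are omitted, and the crop boxes are arbitrary example values.

```python
import numpy as np

def crop(frame: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def downscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Nearest-neighbour downscale by an integer factor (real code would filter).
    return img[::factor, ::factor]

def compose_pip(frame: np.ndarray, main_box, sub_box, sub_pos=(10, 10)) -> np.ndarray:
    main = crop(frame, main_box).copy()
    sub = downscale(crop(frame, sub_box), 4)
    y, x = sub_pos
    main[y:y + sub.shape[0], x:x + sub.shape[1]] = sub
    return main  # a single output stream carries the synthesized video

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
pip = compose_pip(frame, main_box=(0, 0, 1280, 720), sub_box=(1300, 200, 320, 180))
```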
Abstract:
A video image processing device uses a plurality of video data captured by a plurality of cameras to generate wide-range video data of the entire celestial sphere covering a 360-degree range around the area where the plurality of cameras is installed, and transmits the generated wide-range video data of the entire celestial sphere to a video display device. The video display device detects the direction of the video display device as the direction of the user's line of sight, receives the transmitted wide-range video data of the entire celestial sphere, segments a predetermined area of video data corresponding to the detection result of a sensor from the wide-range video data of the entire celestial sphere, adjusts the luminosity of the segmented predetermined area of video data to fall within a certain range of luminosity, and displays the adjusted predetermined area of video data.
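A minimal sketch of the display-side steps, assuming the whole-sphere video is laid out as an equirectangular frame: a viewport is segmented according to the detected viewing direction, and its luminosity is pulled into a target range. The field of view, target range, and the equirectangular layout are assumptions for illustration.

```python
import numpy as np

def viewport_from_equirect(frame: np.ndarray, yaw_deg: float, pitch_deg: float,
                           fov_deg: float = 90.0) -> np.ndarray:
    """Segment the area of the sphere the display is pointing at."""
    h, w = frame.shape[:2]
    cx = int((yaw_deg % 360.0) / 360.0 * w)        # viewport centre, x
    cy = int((90.0 - pitch_deg) / 180.0 * h)       # viewport centre, y
    vw = int(fov_deg / 360.0 * w)
    vh = int(fov_deg / 180.0 * h)
    xs = np.arange(cx - vw // 2, cx + vw // 2) % w               # wrap horizontally
    ys = np.clip(np.arange(cy - vh // 2, cy + vh // 2), 0, h - 1)
    return frame[np.ix_(ys, xs)]

def normalize_luminosity(view: np.ndarray, low: float = 60.0, high: float = 200.0) -> np.ndarray:
    """Linearly rescale so the mean luminosity lands inside [low, high]."""
    mean = view.mean()
    target = np.clip(mean, low, high)
    scale = target / max(mean, 1e-6)
    return np.clip(view * scale, 0, 255).astype(np.uint8)

sphere = np.random.randint(0, 256, (1024, 2048, 3), dtype=np.uint8)
view = normalize_luminosity(viewport_from_equirect(sphere, yaw_deg=30.0, pitch_deg=0.0))
```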
Abstract:
A method of processing an image includes receiving an omnidirectional image. An instruction to process the omnidirectional image to generate a rectangular image is received with a processor, in response to a user input made via a display on the omnidirectional image. At least one intermediate image is generated showing a transition between the omnidirectional image and the rectangular image.
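A rough sketch of one way such intermediate images can be produced: the per-pixel sampling coordinates of the identity mapping (omnidirectional image as-is) and of the rectangular unwrapping are interpolated, so t = 0 gives the original image and t = 1 gives the rectangular result. The equidistant, centred fisheye model and the output sizes are assumptions.

```python
import numpy as np

def unwrap_map(size: int, out_h: int, out_w: int) -> tuple[np.ndarray, np.ndarray]:
    """Source coordinates that unwrap a centred fisheye circle into a panorama."""
    c = size / 2.0
    ang = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    rad = np.linspace(0, c, out_h)
    a, r = np.meshgrid(ang, rad)
    return c + r * np.cos(a), c + r * np.sin(a)        # (map_x, map_y)

def identity_map(size: int, out_h: int, out_w: int) -> tuple[np.ndarray, np.ndarray]:
    """Source coordinates that simply resample the fisheye image unchanged."""
    xs = np.linspace(0, size - 1, out_w)
    ys = np.linspace(0, size - 1, out_h)
    return np.meshgrid(xs, ys)

def transition_frame(fisheye: np.ndarray, t: float, out_h=360, out_w=1440) -> np.ndarray:
    """t=0 -> original omnidirectional image, t=1 -> rectangular panorama."""
    size = fisheye.shape[0]
    ix, iy = identity_map(size, out_h, out_w)
    ux, uy = unwrap_map(size, out_h, out_w)
    mx = ((1 - t) * ix + t * ux).astype(int).clip(0, size - 1)
    my = ((1 - t) * iy + t * uy).astype(int).clip(0, size - 1)
    return fisheye[my, mx]                              # nearest-neighbour remap

fisheye = np.random.randint(0, 256, (1000, 1000, 3), dtype=np.uint8)
frames = [transition_frame(fisheye, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
```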
Abstract:
A method for processing an image is provided in which an omnidirectional image is received. The omnidirectional image is displayed on a display. Two panoramic images are generated based on the omnidirectional image by correcting distortion of the omnidirectional image. The two panoramic images are displayed on the display in response to a user input. Both of the two panoramic images are scrolled in response to a user input conducted on one of the two panoramic images displayed on the display.
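A minimal sketch of the synchronized-scroll behaviour: the two panoramas share one horizontal offset, so a scroll gesture received on either one shifts both. The wrap-around roll and the panorama sizes are assumptions about how the views are rendered.

```python
import numpy as np

class DualPanoramaView:
    def __init__(self, pano_top: np.ndarray, pano_bottom: np.ndarray):
        self.panos = [pano_top, pano_bottom]
        self.offset = 0  # shared horizontal scroll offset in pixels

    def scroll(self, dx: int) -> None:
        """Apply a scroll gesture received on either panorama to both."""
        self.offset = (self.offset + dx) % self.panos[0].shape[1]

    def rendered(self) -> list[np.ndarray]:
        return [np.roll(p, -self.offset, axis=1) for p in self.panos]

top = np.zeros((200, 1600, 3), dtype=np.uint8)
bottom = np.zeros((200, 1600, 3), dtype=np.uint8)
view = DualPanoramaView(top, bottom)
view.scroll(120)          # user drags one panorama; both scroll together
shown = view.rendered()
```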
Abstract:
A method for processing an image includes receiving an input image having distortion, receiving position information input by a user on the received input image, generating a first image by correcting the input image having distortion, and translating a position on the input image having distortion indicated by the received position information into a position on the generated first image. The method further includes setting, on the generated first image, a first mask area having a predetermined shape and including the position on the generated first image translated from the position on the input image having distortion, and performing mask processing on the set first mask area on the generated first image.
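A condensed sketch of the mask-setting flow: a point given on the distorted (fisheye) input is translated to the corresponding point on the corrected panorama, a fixed-size mask area is placed around that point, and the area is filled. The equidistant, centred fisheye model and the rectangular mask shape are assumptions for illustration.

```python
import numpy as np

def to_panorama_coords(x: int, y: int, size: int, out_h: int, out_w: int) -> tuple[int, int]:
    """Translate a point on the centred fisheye image to panorama coordinates."""
    c = size / 2.0
    dx, dy = x - c, y - c
    radius = min(np.hypot(dx, dy), c)
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    px = int(angle / (2 * np.pi) * out_w) % out_w
    py = int(radius / c * (out_h - 1))
    return px, py

def apply_mask(panorama: np.ndarray, center: tuple[int, int], half: int = 20) -> np.ndarray:
    """Black out a rectangular mask area around the translated position."""
    out = panorama.copy()
    x, y = center
    out[max(0, y - half):y + half, max(0, x - half):x + half] = 0
    return out

panorama = np.random.randint(0, 256, (360, 1440, 3), dtype=np.uint8)
masked = apply_mask(panorama, to_panorama_coords(700, 300, size=1000, out_h=360, out_w=1440))
```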
Abstract:
A method for processing an image is provided, in which a circular omnidirectional image is received. The circular omnidirectional image is displayed on a display. A first rectangular image representing a first part of the circular omnidirectional image is generated in response to a first user input received via the display on the circular omnidirectional image. The first rectangular image is superimposed on the circular omnidirectional image.
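A brief sketch of the superimposition step: a rectangular view is produced for the touched part of the circular omnidirectional image and pasted back on top of that image as an inset. The local "dewarp" below is a crude window crop standing in for a proper fisheye-to-rectilinear remap; the inset size and position are illustrative assumptions.

```python
import numpy as np

def dewarp_around(fisheye: np.ndarray, x: int, y: int, out_h=180, out_w=320) -> np.ndarray:
    """Sample a window around the touched point (placeholder for a real remap)."""
    h, w = fisheye.shape[:2]
    ys = np.clip(np.arange(y - out_h // 2, y + out_h // 2), 0, h - 1)
    xs = np.clip(np.arange(x - out_w // 2, x + out_w // 2), 0, w - 1)
    return fisheye[np.ix_(ys, xs)]

def superimpose(fisheye: np.ndarray, rect: np.ndarray, pos=(10, 10)) -> np.ndarray:
    """Overlay the rectangular image on the circular omnidirectional image."""
    out = fisheye.copy()
    y, x = pos
    out[y:y + rect.shape[0], x:x + rect.shape[1]] = rect
    return out

omni = np.random.randint(0, 256, (1000, 1000, 3), dtype=np.uint8)
inset = dewarp_around(omni, x=620, y=410)   # from a touch at (620, 410)
shown = superimpose(omni, inset)
```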