Abstract:
A procedure includes: calculating the position of a user's line of sight on the display screen of a display device, based on information about an eyeball portion of the user included in an input image; setting, on the display screen, a processing target region, which is the target region of processing corresponding to an input operation by the line of sight, and an operation region, which is adjacent to the processing target region and receives the input operation by the line of sight, based on the position of the line of sight and information about the user's field of view; and creating screen data, to be displayed on the display device, in which image information within the processing target region is included in the image information within the operation region.
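A minimal sketch of the region setup and screen composition steps is given below, assuming Python with NumPy and OpenCV. The function names, the rectangular shape of the regions, and the choice to magnify the target region's pixels into the operation region are illustrative assumptions; the abstract does not fix these details.

```python
import numpy as np
import cv2  # assumption: OpenCV is used here only to resize the copied patch


def set_regions(gaze_xy, fov_radius, screen_w, screen_h):
    """Hypothetical helper: derive a processing target region centered on the
    estimated gaze position and a larger, adjacent operation region whose size
    follows from the field-of-view information."""
    gx, gy = gaze_xy
    half = fov_radius // 2
    target = (max(gx - half, 0), max(gy - half, 0),
              min(gx + half, screen_w), min(gy + half, screen_h))
    op = (max(gx - 2 * half, 0), max(gy - 2 * half, 0),
          min(gx + 2 * half, screen_w), min(gy + 2 * half, screen_h))
    return target, op


def compose_screen(frame, target, op):
    """Create screen data in which the image information of the processing
    target region is embedded (here, magnified) inside the operation region."""
    x0, y0, x1, y1 = target
    ox0, oy0, ox1, oy1 = op
    patch = frame[y0:y1, x0:x1]
    out = frame.copy()
    out[oy0:oy1, ox0:ox1] = cv2.resize(patch, (ox1 - ox0, oy1 - oy0))
    return out
```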
Abstract:
An object detection method includes: extracting an object candidate by grouping a plurality of characteristic points from two images acquired at different times by a camera; calculating, for each characteristic point included in the object candidate, an observation flow amount indicating the amount of positional shift between the two images and a virtual road surface flow amount indicating the amount of positional shift between the two images under the assumption that the characteristic point is located on a road surface; generating a flow difference distribution for the object candidate by calculating, for each characteristic point, the difference between the observation flow amount and the virtual road surface flow amount; and determining whether the object candidate is an object existing on and protruding from the road surface by comparing the flow difference distribution with a plurality of flow difference distribution models generated for predicted inclined states of the road surface.
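The flow comparison and distribution matching steps can be illustrated with the following Python/NumPy sketch. The use of a road-plane homography for the virtual road surface flow, the histogram binning, and the L1 distance to the model distributions are assumptions chosen for illustration, since the abstract does not specify them.

```python
import numpy as np


def flow_difference_histogram(pts_prev, pts_curr, road_homography, bins=16, rng=(0.0, 20.0)):
    """Build the flow difference distribution of one object candidate.

    pts_prev, pts_curr: (N, 2) matched feature positions in the two images.
    road_homography:    assumed 3x3 homography mapping previous-image road points
                        to the current image for a given (flat or inclined) road plane.
    """
    observed_flow = np.linalg.norm(pts_curr - pts_prev, axis=1)

    # Virtual road surface flow: where each point would move if it lay on the road plane.
    ones = np.ones((len(pts_prev), 1))
    homog = np.hstack([pts_prev, ones]) @ road_homography.T
    virtual_curr = homog[:, :2] / homog[:, 2:3]
    virtual_flow = np.linalg.norm(virtual_curr - pts_prev, axis=1)

    diff = np.abs(observed_flow - virtual_flow)
    hist, _ = np.histogram(diff, bins=bins, range=rng, density=True)
    return hist


def is_protruding_object(candidate_hist, road_model_hists, threshold=0.5):
    """Flag the candidate as an object protruding from the road surface if its
    flow difference distribution is far from every model generated for the
    predicted inclined road states (L1 distance is an illustrative choice)."""
    distances = [np.abs(candidate_hist - model).sum() for model in road_model_hists]
    return min(distances) > threshold
```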
Abstract:
A traffic lane boundary line extraction apparatus includes: an image data acquiring unit configured to acquire vehicle-outside image data captured from a travelling vehicle; a candidate area detection unit configured to detect candidate areas of a traffic lane boundary line from a road surface part of the vehicle-outside image data; a road surface area setting unit configured to set, for each candidate area, a corresponding road surface area in the road surface part of the vehicle-outside image data; a luminance calculation unit configured to calculate a representative luminance of the candidate area and a representative luminance of the road surface area; and a candidate area evaluation unit configured to evaluate the candidate area as suitable for the traffic lane boundary line if the difference between the representative luminance of the candidate area and the representative luminance of the road surface area is greater than a predetermined threshold value.
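The luminance-based evaluation step can be sketched as follows in Python/NumPy. The use of the median as the "representative luminance" and the numeric threshold are assumptions for illustration; the abstract only requires that the difference exceed a predetermined threshold.

```python
import numpy as np


def evaluate_candidate(image_gray, candidate_mask, road_mask, threshold=30.0):
    """Return True if the candidate area is suitable as a lane boundary line.

    image_gray:     2D array of luminance values.
    candidate_mask: boolean mask of the candidate lane-marking area.
    road_mask:      boolean mask of the road surface area set for this candidate.
    threshold:      hypothetical luminance-difference threshold.
    """
    candidate_lum = np.median(image_gray[candidate_mask])  # representative luminance (assumed: median)
    road_lum = np.median(image_gray[road_mask])
    return abs(candidate_lum - road_lum) > threshold
```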
Abstract:
An image processing device includes: a memory; and a processor coupled to the memory and configured to: obtain image data of an image captured by an image capture device that moves along with a moving body and whose image capture direction is a certain traveling direction of the moving body; and detect, from among features in the image data, a feature caused by a reflection, based on information regarding the positions of the features at a time when the moving body has moved in a direction different from the image capture direction.
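One way to read the detection step is that, when the moving body moves in a direction different from the image capture direction, real scene features shift in the image while features caused by reflections (for example, of the vehicle interior on the windshield) stay almost fixed. The sketch below, in Python/NumPy, illustrates that reading; the function name, the expected-shift parameter, and the 0.5 factor are assumptions, not details taken from the abstract.

```python
import numpy as np


def detect_reflection_features(pos_before, pos_after, expected_shift_px):
    """Flag features whose image positions barely change despite the movement.

    pos_before, pos_after: (N, 2) feature positions before and after the moving
                           body has moved in a direction different from the
                           image capture direction.
    expected_shift_px:     assumed image shift of a real feature, derived
                           elsewhere from the body's measured motion.
    Returns a boolean mask marking likely reflection-caused features.
    """
    shift = np.linalg.norm(pos_after - pos_before, axis=1)
    return shift < 0.5 * expected_shift_px
```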