Abstract:
A non-transitory computer-readable recording medium has recorded thereon a computer program for face direction estimation that causes a computer to execute a process including: generating, for each presumed face direction, a face direction converted image by converting the direction of the face represented on an input image into a prescribed direction; generating, for each presumed face direction, a reversed face image by reversing the face represented on the face direction converted image; converting the direction of the face represented on the reversed face image to be the presumed face direction; calculating, for each presumed face direction, an evaluation value that represents the degree of difference between the face represented on the reversed face image and the face represented on the input image, based on the conversion result; and specifying, based on the evaluation value, the direction of the face represented on the input image.
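The mirror-consistency loop above can be sketched with NumPy, using a horizontal pixel shift (`np.roll`) as a crude stand-in for the true 3D face-direction conversion; `to_frontal` and `estimate_direction` are illustrative names, not the patented implementation.

```python
import numpy as np

def to_frontal(img, presumed_dir):
    # Stand-in for pose normalization: shift columns by the presumed
    # direction (in pixels). A real system would warp via a 3D face model.
    return np.roll(img, -presumed_dir, axis=1)

def estimate_direction(img, candidates):
    """Return the presumed direction whose mirror-consistency error is smallest."""
    best_dir, best_err = None, np.inf
    for d in candidates:
        frontal = to_frontal(img, d)            # face direction converted image
        mirrored = frontal[:, ::-1]             # reversed face image
        back = np.roll(mirrored, d, axis=1)     # convert back to presumed direction
        err = np.mean((back.astype(float) - img) ** 2)  # evaluation value
        if err < best_err:
            best_dir, best_err = d, err
    return best_dir
```

A correctly frontalized, symmetric face matches its own mirror image, so the evaluation value is minimized at the true direction.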
Abstract:
A non-transitory computer-readable medium storing a program for tracking a feature point in an image that causes a computer to execute a process, the process including: calculating first values indicating a degree of corner for respective pixels in another image, based on changes of brightness values in the horizontal direction and the vertical direction; calculating second values indicating a degree of similarity between respective areas in the other image and a reference area around the feature point in the image, based on a comparison between the respective areas and the reference area; calculating third values indicating an overall degree of corner and similarity, based on the first values and the second values; and tracking the feature point by identifying a point in the other image corresponding to the feature point in the image, based on the third values.
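A minimal sketch of the combined score: a Harris-style response stands in for the first values (degree of corner), negative SSD against the reference patch for the second values (similarity), and their weighted sum for the third values. The weighting, window size, and function names are assumptions, not the patented method.

```python
import numpy as np

def box3(a):
    # 3x3 box sum with zero padding (window for the gradient products).
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def corner_response(img):
    # First values: Harris-style degree of corner from the change of
    # brightness values in the horizontal and vertical directions.
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return sxx * syy - sxy ** 2 - 0.04 * (sxx + syy) ** 2

def track_point(prev_img, cur_img, pt, r=1):
    """Locate the feature at `pt` (row, col) of prev_img inside cur_img."""
    y0, x0 = pt
    ref = prev_img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    corner = corner_response(cur_img)
    corner = corner / (corner.max() + 1e-12)        # normalized first values
    h, w = cur_img.shape
    best, best_score = None, -np.inf
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = cur_img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            ssd = np.sum((patch - ref) ** 2)        # second values via SSD
            score = corner[y, x] + 2.0 * np.exp(-ssd)  # third values
            if score > best_score:
                best, best_score = (y, x), score
    return best
```

Combining both terms keeps the tracker from locking onto a corner that does not resemble the reference area, or onto a good template match on a featureless edge.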
Abstract:
A processor generates a first road surface image from an image at a first time captured by an imaging device mounted on a moving body, and generates a second road surface image from an image at a second time after the first time. Next, the processor determines direction information depending on a direction of travel of the moving body between the first time and the second time from an amount of turn of the moving body between the first time and the second time. Then, the processor determines a relative positional relationship between the first road surface image and the second road surface image by using the amount of turn and the direction information, and determines an amount of travel of the moving body between the first time and the second time on the basis of the relative positional relationship.
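The geometry of the second and third steps can be sketched as follows, under the assumption that the body moves along a circular arc between the two times (so the mean travel direction is half the amount of turn); the image-matching step that produces `match_shift`, and the function name itself, are illustrative.

```python
import math

def relative_motion(turn, match_shift):
    """Relative position of the two road-surface images and amount of travel.

    turn        : amount of turn (rad) between the first and second times
    match_shift : (forward, lateral) offset, in metres, that aligns the
                  second road-surface image with the first one
    """
    direction = turn / 2.0                       # direction information
    fwd, lat = match_shift
    # Rotate the matched offset by the mean travel direction to obtain the
    # relative positional relationship between the two images.
    dx = math.cos(direction) * fwd - math.sin(direction) * lat
    dy = math.sin(direction) * fwd + math.cos(direction) * lat
    travel = math.hypot(dx, dy)                  # amount of travel
    return (dx, dy), travel
```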
Abstract:
An image processing apparatus includes an acquiring unit for acquiring an image from a camera in a moving body; an extracting unit for extracting feature points from the image; a matching unit for performing a matching process on feature points extracted from images taken at different time points; a position calculating unit for calculating a three-dimensional position based on the matched feature points and the movement of the moving body; a calculating unit for calculating a precision of the three-dimensional position; a distribution determining unit for detecting an object from the image and setting a threshold for each object based on a precision distribution of the feature points of each object; a selecting unit for selecting, for each object, the feature points having a higher precision than the threshold; and a generating unit for generating an object shape by using the feature points that have been selected.
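The per-object thresholding of the distribution determining and selecting units can be sketched briefly; taking the median of each object's precision distribution as its threshold is an assumed choice for illustration.

```python
import statistics

def select_points(objects):
    """objects: {name: [(point, precision), ...]} -> selected points per object.

    The threshold is set per object from the distribution of precisions
    (here: the median, an assumed choice), so a densely measured object keeps
    only its best points while a sparsely measured one is not over-pruned.
    """
    selected = {}
    for name, pts in objects.items():
        thresh = statistics.median(p for _, p in pts)
        selected[name] = [pt for pt, p in pts if p > thresh]
    return selected
```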
Abstract:
An apparatus is configured to execute: a first process for estimating a first component at a first time point by using a waveform and the first component calculated from the waveform before the first time point, the waveform being based on a running trace of a vehicle, the first component being a component below a first frequency; a second process for estimating the first component at the first time point by using the waveform, the first component calculated from the waveform before the first time point, and a second component at the first time point, the second component being a component above the first frequency and predicted from the second component calculated from the waveform before the first time point; and a calculation process for calculating the second component at the first time point from the waveform, based on the first components estimated by the first and second processes.
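A toy sketch of the decomposition: an exponential moving average stands in for the low-pass (first-component) estimation, the second process removes a predicted high-frequency component before smoothing, and the calculation process takes the residual as the second component. The smoothing method and parameter `alpha` are assumptions.

```python
def low_component(waveform, alpha=0.2, predicted_high=None):
    """Estimate the below-cutoff (first) component of a running-trace waveform.

    If a prediction of the above-cutoff (second) component is available, it
    is removed from each sample before smoothing (the second process of the
    abstract); otherwise only past samples are used (the first process).
    """
    low, prev = [], waveform[0]
    for i, x in enumerate(waveform):
        if predicted_high is not None:
            x = x - predicted_high[i]
        prev = prev + alpha * (x - prev)   # exponential moving average
        low.append(prev)
    return low

def high_component(waveform, low):
    # Calculation process: second component = waveform minus first component.
    return [x - l for x, l in zip(waveform, low)]
```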
Abstract:
A non-transitory computer-readable recording medium having stored therein a line of sight detection program for causing a computer to execute a process, the process including finding an index indicating a variation of a line of sight of an observer who observes an object, based on a difference between line-of-sight data of a left eye of the observer and line-of-sight data of a right eye of the observer, determining a stay of the line of sight of the observer based on the index, and resolving a line-of-sight position of the observer based on a result of the determination of the stay.
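A minimal sketch of the three steps: the index is taken here as the standard deviation of the left-right gaze difference (an assumed choice of variation measure), a small index is judged as a stay, and the stay position is resolved as the binocular mean. Names and the threshold value are illustrative.

```python
import statistics

def gaze_stay(left, right, index_thresh=0.5):
    """Decide whether the line of sight stays, from binocular gaze data.

    left, right : sequences of horizontal gaze positions for each eye.
    Returns (staying, position); position is None when no stay is found.
    """
    diffs = [l - r for l, r in zip(left, right)]
    index = statistics.pstdev(diffs)             # variation index
    staying = index < index_thresh               # stay determination
    position = None
    if staying:
        # Resolve the line-of-sight position as the mean of both eyes.
        position = statistics.fmean(l + r for l, r in zip(left, right)) / 2.0
    return staying, position
```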
Abstract:
A processor of a locus estimation device accepts a measured value of a wheel speed of the right and left front wheels of a moving object, and a measured value of a steering angle at which the traveling direction is changed. Based on the measured values of the wheel speeds of the right and left front wheels, the measured value of the steering angle, a distance in the body direction of the moving object, a distance in the axle direction of the moving object, and a constant, the processor estimates an amount of rotation of the midpoint between the rotation centers of the right and left rear wheels, on a circle whose center is a point on a straight line passing through those rotation centers, and an amount of translation of the midpoint.
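The estimation can be sketched with a kinematic bicycle model, which reduces the geometry to the wheelbase (the body-direction distance) and treats the constant as a steering-ratio correction; the axle-direction distance drops out in this simplification, so the sketch is not the full patented model.

```python
import math

def rear_midpoint_motion(v_left, v_right, steer, wheelbase, dt, k=1.0):
    """Amounts of rotation and translation of the rear-axle midpoint.

    v_left, v_right : measured wheel speeds of the left/right front wheels
    steer           : measured steering angle (rad)
    wheelbase       : distance in the body direction (front to rear axle)
    k               : assumed steering-ratio correction constant
    The rear midpoint moves on a circle whose centre lies on the straight
    line through the rotation centres of the rear wheels.
    """
    v_front = (v_left + v_right) / 2.0
    delta = k * steer
    v_rear = v_front * math.cos(delta)                 # rear midpoint speed
    rot = v_rear * math.tan(delta) / wheelbase * dt    # amount of rotation
    trans = v_rear * dt                                # amount of translation
    return rot, trans
```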
Abstract:
An image processing device includes: a memory; and a processor coupled to the memory, configured to: extract edges whose positions overlap with each other, by comparing a first edge image extracted from an image captured at a first time and a second edge image extracted from an image captured at a second time after the first time, the image for the first time and the image for the second time being captured from a movable body, remove the extracted edges from at least one of the first edge image and the second edge image, perform matching processing on the first and second edge images in both or one of which the extracted edges have been removed, and estimate a movement amount of the movable body by using a displacement amount between the first edge image and the second edge image which are subjected to the matching processing.
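A compact sketch with binary edge images: edges that sit at identical positions in both images (e.g. distant background or artefacts fixed to the camera) are removed, and the remaining edges are aligned by an exhaustive shift search. The ±3-pixel search range and function name are assumptions.

```python
import numpy as np

def estimate_shift(e1, e2):
    """Estimate the displacement between two binary edge images."""
    overlap = e1 & e2          # edges whose positions overlap in both images
    a = e1 & ~overlap          # remove the extracted edges from both
    b = e2 & ~overlap
    best, best_score = (0, 0), -1
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            score = np.sum(a & shifted)     # matching processing
            if score > best_score:
                best, best_score = (dy, dx), score
    return best                # displacement amount -> movement estimate
```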
Abstract:
From plural time-series observation features acquired by observing movements of a person, plural candidate segments of a target action series, containing plural respective actions expressing plural movements, are determined. Each of the plural candidate segments is divided into action segments, each being a time segment of one action; a likelihood computed for each of the plural actions in each action segment is normalized per action segment; and, as an evaluation value, a representative value is computed of the normalized likelihoods corresponding to the action segments, selected from among all of the action segments in the candidate segment based on the order of actions in the target action series. A candidate segment is determined to be the target action series when the evaluation value exceeds a common threshold.
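The evaluation step can be sketched for one candidate segment as follows; taking the minimum of the normalized likelihoods as the representative value, and the threshold value, are assumed choices.

```python
def evaluate_candidate(seg_likelihoods, action_order, threshold=0.5):
    """Decide whether a candidate segment is the target action series.

    seg_likelihoods : one dict per action segment mapping action name to
                      a raw (unnormalized) likelihood
    action_order    : expected order of actions in the target action series
    """
    if len(seg_likelihoods) != len(action_order):
        return False
    values = []
    for seg, action in zip(seg_likelihoods, action_order):
        total = sum(seg.values())          # normalize per action segment
        values.append(seg[action] / total)
    # Representative value (here: the minimum) vs. the common threshold.
    return min(values) > threshold
```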
Abstract:
A camera parameter estimating method includes: obtaining a plurality of image frames in time series, the image frames being photographed by a camera installed in a mobile body; detecting at least one straight line from a central portion of a first image frame group including one or more first image frames among the plurality of image frames; detecting, based on a feature quantity of the detected straight line, a plurality of curves corresponding to the straight line from a second image frame group including one or more second image frames at a later time than the image frame from which the straight line is detected; and estimating a parameter of the camera based on the plurality of curves.
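Assuming the camera parameter in question is a one-parameter radial lens distortion coefficient (the abstract does not say which parameter), the final step can be sketched as a grid search for the coefficient that best straightens the observed curves. The model `x_d = x_u * (1 + k * r_u**2)` and its fixed-point inversion are assumptions.

```python
import numpy as np

def straightness(points):
    # Total-least-squares line-fit residual: smallest singular value of the
    # centred point matrix (0 for perfectly collinear points).
    pts = np.asarray(points, float)
    return np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)[-1]

def undistort(points, k, iters=20):
    # Invert x_d = x_u * (1 + k * r_u**2) by fixed-point iteration.
    pts = np.asarray(points, float)
    und = pts.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1)
        und = pts / (1.0 + k * r2)[:, None]
    return und

def estimate_distortion(curves, candidates):
    """Pick the distortion coefficient that best straightens the curves
    (coordinates normalized, principal point at the origin)."""
    errs = [sum(straightness(undistort(c, k)) for c in curves)
            for k in candidates]
    return candidates[int(np.argmin(errs))]
```

The central portion of the image is nearly distortion-free, which is why the straight line can be detected there and used as the reference for the curves seen elsewhere.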