Abstract:
A display control device includes: an acquirer that receives inclination information on an occupant's head in a mobile body from a detector that detects the inclination; and a controller that controls a displayer to generate a predetermined image representing a presentation image superimposed on an object as viewed from the occupant when the presentation image is displayed on a display medium, based on recognition results of the object ahead of the mobile body and the inclination information. When the object is recognized and the head is not inclined, the controller causes the displayer to generate a first predetermined image representing a first presentation image indicating a horizontal direction; when the object is recognized and the head is inclined, the controller causes the displayer to generate a second predetermined image representing a second presentation image obtained by rotating part of the first presentation image by an angle determined according to the inclination.
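As a rough, non-authoritative sketch (the abstract does not disclose an implementation), the tilt-compensating rotation of the relevant part of the overlay could look like the following; the function names, the polyline representation, and the direct use of the reported roll angle are assumptions:

    import math

    def rotate_point(x, y, cx, cy, angle_rad):
        """Rotate (x, y) around (cx, cy) by angle_rad."""
        dx, dy = x - cx, y - cy
        cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
        return (cx + dx * cos_a - dy * sin_a,
                cy + dx * sin_a + dy * cos_a)

    def second_presentation_part(first_part, head_roll_deg):
        """Rotate the part of the first presentation image that tracks head inclination.

        first_part: list of (x, y) display-medium coordinates of the horizontal indicator.
        head_roll_deg: head inclination reported by the detector (hypothetical sign convention).
        """
        if abs(head_roll_deg) < 1e-3:
            return first_part                      # head not inclined: keep the first image
        cx = sum(p[0] for p in first_part) / len(first_part)
        cy = sum(p[1] for p in first_part) / len(first_part)
        angle = math.radians(head_roll_deg)        # angle determined according to the inclination
        return [rotate_point(x, y, cx, cy, angle) for (x, y) in first_part]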
Abstract:
There is provided a calibration apparatus for calculating a camera installation parameter with respect to a flat surface without preparing two sets of parallel lines on the flat surface, with respect to which the camera installation parameter is to be obtained. An acquirer acquires a photographed image of two linearly-extending lines substantially perpendicular to a flat surface. An extractor extracts the two linearly-extending lines from the acquired image through image processing. A calculator calculates a vanishing point from the extracted two linearly-extending lines and calculates a camera installation parameter with respect to the flat surface on the basis of coordinates of the vanishing point and given coordinates different from the coordinates of the vanishing point.
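A minimal sketch of the geometric core, assuming a pinhole camera model and a simple sign convention; the two-point line representation, the focal length in pixels, and the pitch formula are illustrative assumptions, not the apparatus's actual computation:

    import numpy as np

    def vanishing_point(line1, line2):
        """Intersect two image lines, each given as two points ((x1, y1), (x2, y2))."""
        def to_line(p, q):
            # Homogeneous line through two points is their cross product.
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        v = np.cross(to_line(*line1), to_line(*line2))
        return v[:2] / v[2]                        # degenerate (parallel) lines not handled here

    def camera_pitch_deg(vp_y, cy, focal_px):
        """Rough pitch-from-horizontal estimate from the image row of the vertical vanishing point."""
        return np.degrees(np.arctan2(focal_px, cy - vp_y))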
Abstract:
A driving assistance device can be installed in a vehicle in which either an automatic driving mode or a manual driving mode is selected for traveling. A receiving unit receives a switching request for a driving mode. A generating unit generates switching notification information for a switching notice in accordance with the switching request received by the receiving unit. An output unit outputs the switching notification information to a presentation unit that presents, outward from the vehicle, the switching notice indicated by the switching notification information generated by the generating unit.
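A minimal, illustrative flow of the receive/generate/output stages (all class and method names are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class SwitchingNotification:
        target_mode: str    # e.g. "automatic" or "manual"
        message: str

    class DrivingAssistanceDevice:
        def __init__(self, presentation_unit):
            # presentation_unit: whatever presents the notice outward from the vehicle
            self.presentation_unit = presentation_unit

        def receive_switching_request(self, target_mode):
            notification = self.generate_notification(target_mode)
            self.output(notification)

        def generate_notification(self, target_mode):
            return SwitchingNotification(target_mode, f"Switching to {target_mode} driving")

        def output(self, notification):
            self.presentation_unit.show(notification.message)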
Abstract:
The driving assistance device acquires, from an autonomous driving controller that determines an action of a vehicle during autonomous driving of the vehicle, action information indicating a first action that the vehicle is caused to execute. The driving assistance device acquires, from a detector that detects a surrounding situation and a travel state of the vehicle, detection information indicating a detection result. The driving assistance device determines a second action which is executable in place of the first action, based on the detection information. The driving assistance device generates a first image representing the first action and a second image representing the second action. The driving assistance device outputs the first image and the second image to a notification device such that the first image and the second image are displayed within a fixed field of view of a driver of the vehicle.
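One way to picture the selection of a second, executable alternative; the candidate list and the executability predicates below are invented for illustration:

    def determine_second_action(first_action, detection_info, candidate_actions):
        """Return an action executable in place of the first action, or None.

        candidate_actions: mapping from action name to a predicate over detection_info
        that returns True when the surrounding situation and travel state permit it.
        """
        for action, is_executable in candidate_actions.items():
            if action != first_action and is_executable(detection_info):
                return action
        return None

    # Toy example (hypothetical detection keys):
    candidates = {
        "keep_lane":   lambda d: True,
        "change_lane": lambda d: d.get("adjacent_lane_clear", False),
        "decelerate":  lambda d: d.get("lead_vehicle_close", False),
    }
    second = determine_second_action("keep_lane", {"adjacent_lane_clear": True}, candidates)
    # second == "change_lane"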
Abstract:
An equipment control device includes a receiver that receives sensing result information including a position, a shape, and a movement of a predetermined object and including a position of an eye point of a person, and a controller that, when the sensing result information indicates that the eye point, equipment placed at a predetermined position, and the object are aligned and that the object has a predetermined shape associated in advance with the equipment, determines command information for operating the equipment in accordance with the movement of the object in the predetermined shape and outputs the command information to an equipment operating device.
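A sketch of the alignment and shape test under assumed 3-D point inputs; the angular tolerance, data layout, and lookup tables are hypothetical:

    import numpy as np

    def points_aligned(eye, equipment, obj, tol_deg=5.0):
        """True when the eye point, the equipment, and the object lie roughly on one line."""
        v1 = np.asarray(equipment, float) - np.asarray(eye, float)
        v2 = np.asarray(obj, float) - np.asarray(eye, float)
        cos_angle = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) < tol_deg

    def determine_command(sensing, equipment, shape_for_equipment, command_for_movement):
        """Return command info when the gaze line passes through the object and the equipment
        and the object has the shape registered in advance for that equipment."""
        if (points_aligned(sensing["eye_point"], equipment["position"], sensing["object_position"])
                and sensing["object_shape"] == shape_for_equipment[equipment["id"]]):
            return command_for_movement.get(sensing["object_movement"])
        return None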
Abstract:
An in-vehicle display device includes an image acquiring unit, a position predicting unit, and an image converting unit. The image acquiring unit acquires an image obtained by capturing an object existing around a host vehicle by a camera mounted on the host vehicle. The position predicting unit calculates a first positional relationship between the object and the host vehicle at timing T1 and calculates a second positional relationship between the host vehicle and the object at timing T2 after timing T1 based on the image acquired by the image acquiring unit and a capture timing. Furthermore, the position predicting unit predicts a third positional relationship between the host vehicle and the object at timing T3 after timing T2 based on the first positional relationship and the second positional relationship. The image converting unit converts the image acquired by the image acquiring unit such that the positional relationship between the host vehicle and the object becomes the third positional relationship predicted by the position predicting unit.
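Purely as an illustration, the prediction step could be a constant-velocity extrapolation of the relative position from the two observed timings; the abstract does not specify the prediction model:

    def predict_third_position(p1, p2, t1, t2, t3):
        """Extrapolate the host-vehicle/object positional relationship to timing t3.

        p1, p2: (x, y) relative positions of the object at timings t1 and t2 (t2 > t1).
        Returns the predicted relative position at t3 (> t2), assuming constant velocity.
        """
        dt12 = t2 - t1
        dt23 = t3 - t2
        vx = (p2[0] - p1[0]) / dt12
        vy = (p2[1] - p1[1]) / dt12
        return (p2[0] + vx * dt23, p2[1] + vy * dt23)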
Abstract:
A monitoring target management unit specifies a monitoring target based on vehicle peripheral information acquired from a vehicle exterior image sensor mounted on a vehicle. A display controller highlights the monitoring target specified by the monitoring target management unit. An operation signal input unit receives a user input for updating the monitoring target specified by the monitoring target management unit. The monitoring target management unit updates the monitoring target when the operation signal input unit receives a user input.
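A toy sketch of the specify/highlight/update cycle; the nearest-object selection rule and all names are assumptions:

    class MonitoringTargetManager:
        def __init__(self, display_controller):
            self.display_controller = display_controller
            self.target = None

        def specify(self, peripheral_objects):
            # Toy rule: pick the nearest detected object as the monitoring target.
            if peripheral_objects:
                self.target = min(peripheral_objects, key=lambda o: o["distance"])
                self.display_controller.highlight(self.target)

        def on_user_input(self, selected_object):
            # Update the monitoring target when the user selects a different object.
            self.target = selected_object
            self.display_controller.highlight(self.target)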
Abstract:
During autonomous driving of a vehicle, useful information is presented to an occupant. An image generator in a driving assistance device generates a first image representing an action the vehicle is capable of executing depending on a travel environment during autonomous driving and a second image representing a basis of the action. An image output unit outputs the first image and the second image generated by the image generator to a notification device in the vehicle in association with each other. The action the vehicle is capable of executing may be selected by a driver from among action candidates.
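An illustrative pairing of the action image with its basis image so the two are output in association with each other; the payload structure is an assumption:

    def build_notification_payload(action, basis):
        """Associate the action image with the image showing its basis."""
        return {
            "first_image":  {"kind": "action", "label": action},   # e.g. "change lane"
            "second_image": {"kind": "basis",  "label": basis},    # e.g. "slow vehicle ahead"
        }

    payload = build_notification_payload("change lane", "slow vehicle ahead")
    # notification_device.show(payload["first_image"], payload["second_image"])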
Abstract:
Provided is a technology for improving accuracy in determining the next action. A travel history generator generates, for each driver, a travel history associating an environmental parameter indicating a travel environment through which a vehicle has previously traveled with an action selected by the driver in response to the environmental parameter. An acquisition unit acquires a travel history similar to the travel history of the current driver from among the travel histories generated by the travel history generator. A driver model generator generates a driver model based on the travel history acquired by the acquisition unit. A determination unit determines the next action based on the driver model generated by the driver model generator and an environmental parameter indicating a current travel environment of the vehicle.
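A crude nearest-neighbour stand-in for the similarity search and the action determination; representing environmental parameters as numeric vectors and using Euclidean distance are assumptions made only for this sketch:

    import numpy as np

    def most_similar_history(current_history, all_histories):
        """Return the stored travel history whose mean environmental parameters are
        closest to the current driver's."""
        cur = np.mean([h["env"] for h in current_history], axis=0)
        return min(all_histories,
                   key=lambda hist: np.linalg.norm(
                       np.mean([h["env"] for h in hist], axis=0) - cur))

    def determine_next_action(driver_model, current_env):
        """Pick the action whose recorded environmental parameter best matches
        the current travel environment."""
        best = min(driver_model,
                   key=lambda h: np.linalg.norm(np.asarray(h["env"]) - np.asarray(current_env)))
        return best["action"]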
Abstract:
An electronic mirror device includes a display, a controller that controls the display, a switching portion, and a preliminary starter. The display includes a main case having an opening, a video display portion, and a semi-transparent mirror. The video display portion includes a display screen and displays camera images captured by a camera. The semi-transparent mirror is provided between the display screen of the video display portion and the opening of the main case and displays a mirror reflection. The switching portion causes the display to switch between displaying the camera images and the mirror reflection. The preliminary starter preliminarily starts up the controller and the display before the switching portion causes the display to display the camera images.
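The sequencing implied by the preliminary starter might be illustrated as follows (all methods are hypothetical placeholders):

    class ElectronicMirror:
        def __init__(self, controller, display):
            self.controller = controller
            self.display = display
            self.started = False

        def preliminary_start(self):
            # Power up the controller and display ahead of time so that switching
            # to camera images later incurs no start-up delay.
            self.controller.power_on()
            self.display.power_on()
            self.started = True

        def switch_to_camera(self, camera):
            if not self.started:
                self.preliminary_start()
            self.display.show(camera.latest_frame())

        def switch_to_mirror(self):
            self.display.blank()   # the semi-transparent mirror then shows the reflection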