Abstract:
A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or, e.g., by a server. Other mobile platforms may then determine their pose with respect to the planar object using the reference image, allowing a multi-user augmented reality application to be performed.
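The following is a minimal sketch of the front-view warp described above, assuming the pose (R, t) of the camera with respect to the plane z = 0 and the camera intrinsics K are already known (for example, recovered from the two initial images with cv2.findHomography and cv2.decomposeHomographyMat). The reference-image size and the plane-to-pixel scale are illustrative assumptions, not values from the source.

    import cv2
    import numpy as np

    def warp_to_front_view(image, K, R, t, ref_size=(640, 480), scale=100.0):
        # Homography mapping points on the plane z = 0 (in plane units)
        # into the captured image: H = K [r1 r2 t].
        H_plane_to_img = K @ np.column_stack((R[:, 0], R[:, 1], t))

        # Assumed mapping from plane units to reference-image pixels, chosen
        # so the warped object lands near the center of the reference image.
        S = np.array([[scale, 0.0, ref_size[0] / 2.0],
                      [0.0, scale, ref_size[1] / 2.0],
                      [0.0, 0.0, 1.0]])

        # Warp the captured image into a fronto-parallel ("front view")
        # reference image of the planar object.
        H_img_to_ref = S @ np.linalg.inv(H_plane_to_img)
        return cv2.warpPerspective(image, H_img_to_ref, ref_size)

Other mobile platforms can then estimate their own pose against this reference image with a standard planar tracker, e.g., by matching features between the reference image and their live frames.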
Abstract:
A mobile platform efficiently processes image data using distributed processing in which latency-sensitive operations are performed on the mobile platform, while latency-insensitive but computationally intensive operations are performed on a remote server. The mobile platform acquires image data and determines whether there is a trigger event to transmit the image data to the server. The trigger event may be a change in the image data relative to previously acquired image data, e.g., a scene change in an image. When a change is present, the image data may be transmitted to the server for processing. The server processes the image data and returns information related to the image data, such as identification of an object in an image or a reference image or model. The mobile platform may then perform reference-based tracking using the identified object, reference image, or model.
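A minimal sketch of the client-side trigger logic described above, assuming the scene change is detected with a simple histogram comparison and that a hypothetical server endpoint (SERVER_URL) returns object or reference information as JSON; the threshold and endpoint are illustrative assumptions, not part of the source.

    import cv2
    import requests

    SERVER_URL = "http://example.com/identify"   # hypothetical endpoint
    CHANGE_THRESHOLD = 0.5                       # illustrative value

    def scene_changed(prev_gray, curr_gray, threshold=CHANGE_THRESHOLD):
        # Compare normalized grayscale histograms; low correlation suggests
        # the scene has changed since the last transmission.
        h_prev = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
        h_curr = cv2.calcHist([curr_gray], [0], None, [64], [0, 256])
        h_prev = cv2.normalize(h_prev, h_prev)
        h_curr = cv2.normalize(h_curr, h_curr)
        correlation = cv2.compareHist(h_prev, h_curr, cv2.HISTCMP_CORREL)
        return correlation < threshold

    def maybe_send_to_server(frame, prev_gray):
        curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None or scene_changed(prev_gray, curr_gray):
            # Trigger event: transmit the image data for server-side
            # identification and reference retrieval.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                resp = requests.post(SERVER_URL, files={"image": jpeg.tobytes()})
                return curr_gray, resp.json()   # e.g., object id / reference
        return curr_gray, None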
Abstract:
An apparatus includes a first sensor configured to generate first sensor data. The first sensor data is related to an occupant of a vehicle. The apparatus further includes a depth sensor and a processor. The depth sensor is configured to generate data corresponding to a volume associated with at least a portion of the occupant. The processor is configured to receive the first sensor data and to activate the depth sensor based on the first sensor data.
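A minimal sketch of the gating behavior described above, assuming the first sensor yields a scalar occupancy score and that the depth sensor exposes simple activate/capture operations; the class names, methods, and threshold are hypothetical stand-ins, not from the source.

    class FirstSensor:
        def read(self):
            # e.g., a seat-pressure value or a camera-based detection score
            return 0.0

    class DepthSensor:
        def __init__(self):
            self.active = False

        def activate(self):
            self.active = True

        def capture_volume(self):
            # Returns data corresponding to a volume associated with
            # at least a portion of the occupant.
            return None

    def process(first_sensor, depth_sensor, occupancy_threshold=0.8):
        # The processor receives the first sensor data and activates the
        # depth sensor only when that data indicates an occupant.
        data = first_sensor.read()
        if data >= occupancy_threshold:
            depth_sensor.activate()
            return depth_sensor.capture_volume()
        return None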