Abstract:
A first map comprising local features and 3D locations of the local features is generated, the local features comprising visible features in a current image and a corresponding set of covisible features. A second map comprising prior features and 3D locations of the prior features may be determined, where each prior feature was first imaged at a time prior to the first imaging of any of the local features and lies within a threshold distance of at least one local feature. A first subset comprising previously imaged local features in the first map and a corresponding second subset of the prior features in the second map are determined by comparing the first and second maps, where each local feature in the first subset corresponds to a distinct prior feature in the second subset. A transformation mapping the first subset of local features to the second subset of prior features is determined.
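As an illustration of the final step, here is a minimal sketch estimating a rigid transformation between the matched subsets, assuming they are given as arrays of corresponding 3D points; the Kabsch/SVD method, the function name, and the numpy dependency are illustrative choices, not taken from the abstract:

```python
import numpy as np

def estimate_rigid_transform(local_pts, prior_pts):
    """Estimate rotation R and translation t mapping local features to prior features.

    local_pts, prior_pts: (N, 3) arrays of matched 3D feature locations
    (the first and second subsets). Uses the Kabsch/SVD method.
    """
    local_c = local_pts.mean(axis=0)
    prior_c = prior_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (local_pts - local_c).T @ (prior_pts - prior_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = prior_c - R @ local_c
    return R, t
```

Applying the returned R and t to every local feature aligns the first map with the second map.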
Abstract:
Embodiments disclosed obtain a plurality of measurement sets from a plurality of sensors in conjunction with the capture of a sequence of exterior and interior images of a structure while traversing locations in and around the structure. Each measurement set may be associated with at least one image. An external structural envelope of the structure is determined from exterior images of the structure and the corresponding outdoor trajectory of a user equipment (UE). The position and orientation of the structure and the structural envelope are determined in absolute coordinates. Further, an indoor map of the structure in absolute coordinates may be obtained based on interior images of the structure, the structural envelope in absolute coordinates, and measurements associated with the indoor trajectory of the UE during traversal of the indoor area to capture the interior images.
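A rough sketch of how measurement sets might be associated with images and split between the outdoor and indoor legs of the traversal is given below; the MeasurementSet fields and the GNSS-based heuristic are assumptions for illustration, not the disclosed method:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MeasurementSet:
    """One bundle of sensor measurements tied to at least one captured image."""
    timestamp: float
    image_indices: List[int]                 # images captured with this set
    gnss_fix: Optional[Tuple[float, float]]  # absolute (lat, lon), if available
    imu: Tuple[float, float, float]          # e.g. an accelerometer sample

def split_trajectory(measurements: List[MeasurementSet]):
    """Partition measurement sets into outdoor and indoor legs.

    Outdoor sets (those with an absolute GNSS fix) support the external
    structural envelope in absolute coordinates; the remaining indoor sets,
    together with that envelope, anchor the indoor map.
    """
    outdoor = [m for m in measurements if m.gnss_fix is not None]
    indoor = [m for m in measurements if m.gnss_fix is None]
    return outdoor, indoor
```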
Abstract:
A mobile device determines a vision-based pose using images captured by a camera and determines a sensor-based pose using data from inertial sensors, such as accelerometers and gyroscopes. The vision-based pose and sensor-based pose are used separately in a visualization application, which displays separate graphics for the different poses. For example, the visualization application may be used to calibrate the inertial sensors, where the visualization application displays a graphic based on the vision-based pose and a graphic based on the sensor-based pose and, using the displayed graphics, prompts a user to move the mobile device in a specific direction to accelerate convergence of the calibration of the inertial sensors. Alternatively, the visualization application may be a motion-based game or a photography application that displays separate graphics using the vision-based pose and the sensor-based pose.
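One way such a motion prompt could be derived is sketched below: a unit direction taken from the divergence between the two pose estimates, along which the user is asked to move the device. The divergence heuristic and all names are assumptions for illustration, not the abstract's specific method:

```python
import numpy as np

def calibration_prompt(vision_pos, sensor_pos):
    """Return a unit direction along which to prompt the user to move.

    vision_pos, sensor_pos: 3-vector positions from the vision-based and
    sensor-based pose estimates. Moving along the axis of largest
    disagreement tends to excite the states the calibration needs to observe.
    """
    delta = np.asarray(sensor_pos, dtype=float) - np.asarray(vision_pos, dtype=float)
    norm = np.linalg.norm(delta)
    return delta / norm if norm > 1e-9 else np.array([1.0, 0.0, 0.0])

# The app would draw one graphic at each pose and an arrow along this direction.
print(calibration_prompt([0.0, 0.0, 0.0], [0.02, -0.01, 0.0]))
```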
Abstract:
An accelerometer in a mobile device is calibrated by taking multiple measurements of acceleration vectors when the mobile device is held stationary at different orientations with respect to a plane normal. A circle is calculated that fits respective tips of measured acceleration vectors in the accelerometer coordinate system. The radius of the circle and the lengths of the measured acceleration vectors are used to calculate a rotation angle for aligning the accelerometer coordinate system with the mobile device surface. A gyroscope in the mobile device is calibrated by taking multiple measurements of a rotation axis when the mobile device is rotated at different rates with respect to the rotation axis. A line is calculated that fits the measurements. The angle between the line and an axis of the gyroscope coordinate system is used to align the gyroscope coordinate system with the mobile device surface.
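A compact sketch of both calibrations follows, assuming low-noise samples: the accelerometer tilt is recovered from the fitted circle's radius and the measured gravity magnitude, and the gyroscope axis angle from the dominant direction of the stacked axis measurements. The crude circle fit via the sample mean and the default gyroscope axis are simplifying assumptions:

```python
import numpy as np

def accel_misalignment_angle(accel_samples):
    """Tilt of the accelerometer axes relative to the device surface.

    accel_samples: (N, 3) static gravity readings at several orientations
    about the surface normal. Their tips lie on a circle of radius r in the
    accelerometer frame, so the misalignment angle is arcsin(r / |a|).
    """
    a = np.asarray(accel_samples, dtype=float)
    centre = a.mean(axis=0)                        # approximate circle centre
    radius = np.linalg.norm(a - centre, axis=1).mean()
    length = np.linalg.norm(a, axis=1).mean()      # |measured gravity|
    return np.arcsin(np.clip(radius / length, 0.0, 1.0))

def gyro_axis_angle(axis_samples, gyro_axis=(0.0, 0.0, 1.0)):
    """Angle between the best-fit rotation axis and a chosen gyroscope axis.

    axis_samples: (N, 3) rotation-axis measurements at different rates; the
    best-fit line through them is the first right singular vector.
    """
    x = np.asarray(axis_samples, dtype=float)
    _, _, vt = np.linalg.svd(x)
    line = vt[0] / np.linalg.norm(vt[0])
    g = np.asarray(gyro_axis, dtype=float)
    cosang = abs(line @ g) / np.linalg.norm(g)
    return np.arccos(np.clip(cosang, 0.0, 1.0))
```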
Abstract:
An electronic device is described. The electronic device includes a memory and a processor in communication with the memory. The memory is configured to store precalibration data for a camera mounted on a vehicle, the precalibration data including a camera height determined relative to a road plane the vehicle is configured to contact during operation. The processor is configured to receive a plurality of images. The processor is also configured to classify one or more features in the plurality of images as road features based on the precalibration data.
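One way a precalibrated camera height can gate road-feature classification is sketched below: back-project each feature pixel and keep it only if its viewing ray meets the road plane a plausible distance ahead. The simplified camera geometry, function names, and range threshold are assumptions, not the device's actual procedure:

```python
import numpy as np

def ground_point_from_pixel(u, v, K, camera_height):
    """Intersect a pixel's viewing ray with the road plane.

    Assumes a camera frame with x right, y down, z forward, the optical axis
    roughly parallel to the road, and the road plane camera_height metres
    below the camera centre. Returns the 3D point, or None if the ray never
    reaches the ground in front of the camera.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 1e-6:                  # ray does not point toward the ground
        return None
    scale = camera_height / ray[1]
    return scale * ray

def is_road_feature(u, v, K, camera_height, max_range=60.0):
    """Classify a feature as a road feature if it lands on the plane nearby."""
    p = ground_point_from_pixel(u, v, K, camera_height)
    return p is not None and 0.0 < p[2] < max_range
```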
Abstract:
An accelerometer located within a mobile device is used to estimate a gravity vector on a target plane in a world coordinate system. The accelerometer makes multiple measurements, each measurement being taken when the mobile device is held stationary on the target plane and a surface of the mobile device faces and is in contact with a planar portion of the target plane. An average of the measurements is calculated. A rotational transformation between an accelerometer coordinate system and a mobile device's coordinate system is retrieved from a memory in the mobile device, where the mobile device's coordinate system is aligned with the surface of the mobile device. The rotational transformation is applied to the averaged measurement to obtain an estimated gravity vector in the world coordinate system defined by the target plane.
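A minimal sketch of that procedure, assuming the stored rotational transformation is a 3x3 matrix from the accelerometer frame to the device-surface frame; the names are illustrative:

```python
import numpy as np

def estimate_gravity_on_plane(static_samples, R_device_from_accel):
    """Estimate the gravity vector defined by the target plane.

    static_samples: (N, 3) accelerometer readings taken while the device lies
    flat and stationary on the plane. R_device_from_accel: precalibrated 3x3
    rotation from the accelerometer frame to the device-surface frame,
    retrieved from device memory. The averaged measurement, rotated into the
    device frame, gives the estimated gravity vector in the world frame
    defined by the plane.
    """
    mean_a = np.asarray(static_samples, dtype=float).mean(axis=0)
    return R_device_from_accel @ mean_a
```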
Abstract:
Techniques provided herein are directed toward using a camera, such as a forward-facing camera, to identify non-line-of-sight (NLoS) satellites in a satellite positioning system. In particular, successive images captured by the camera of the vehicle can be used to create a three-dimensional (3-D) skyline model of one or more objects that may be obstructing the view of a satellite (from the perspective of the vehicle). This allows NLoS satellites to be identified and their data excluded when determining the location of the vehicle. Techniques may further include providing the determined location of the vehicle.
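A sketch of the exclusion test, assuming the 3-D skyline model has been reduced to a per-azimuth elevation mask; the dict-based mask and one-degree bin width are illustrative assumptions:

```python
def is_nlos(sat_azimuth_deg, sat_elevation_deg, skyline_mask):
    """Flag a satellite as non-line-of-sight behind the skyline model.

    skyline_mask: dict mapping 1-degree azimuth bins (vehicle frame) to the
    elevation angle of the highest obstructing point in the 3-D skyline model
    at that bearing. A satellite below the mask at its azimuth is NLoS and
    its measurements are excluded from the position fix.
    """
    azimuth_bin = int(round(sat_azimuth_deg)) % 360
    return sat_elevation_deg < skyline_mask.get(azimuth_bin, 0.0)

# Example: satellite at azimuth 90 deg, elevation 15 deg, behind a 25 deg building mask.
print(is_nlos(90.0, 15.0, {90: 25.0}))   # True -> exclude from the fix
```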