Abstract:
Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
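The gaze-to-segment step described above (gaze point → area of interest → image segment for recognition) can be sketched as a simple crop around the gaze coordinates. This is an illustrative sketch only; the box size, clamping behavior, and function names are assumptions, not details from the abstract.

```python
def gaze_segment(image_w, image_h, gaze_x, gaze_y, box=100):
    """Return a crop rectangle (x0, y0, x1, y1) of side `box` centered on
    the gaze point, clamped so it stays inside the image bounds.

    The fixed square box is a simplifying assumption; a real system might
    size the area of interest from gaze dwell time or fixation spread.
    """
    half = box // 2
    # Clamp the top-left corner so the full box fits inside the image.
    x0 = max(0, min(gaze_x - half, image_w - box))
    y0 = max(0, min(gaze_y - half, image_h - box))
    return (x0, y0, x0 + box, y0 + box)
```

The returned rectangle would then be used to crop the image segment that is handed to the object recognition process.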
Abstract:
A detector in a mobile device receives input from a modem, determines whether the mobile device is indoor or outdoor based on the modem-supplied input, and stores in memory a binary value to indicate an indoor-outdoor state of the mobile device. In certain embodiments, the mobile device includes a modem of a cell phone and the modem-supplied input includes an estimate of a power delay profile. In some embodiments, the detector extracts a feature from the modem-supplied input and uses the extracted feature with a classifier to output a state and a probability of the state. In these embodiments, logic in the detector compares an empirically determined threshold against the probability output by the classifier, and when the threshold is exceeded, the state determined by the classifier is output as the state of the mobile device. In other embodiments, the classifier outputs the state of the mobile device directly (without a probability).
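The feature-extraction and threshold logic described above can be sketched as follows. The choice of RMS delay spread as the feature, the stub classifier, and all names are illustrative assumptions; the abstract does not specify which feature or classifier is used.

```python
INDOOR, OUTDOOR = 0, 1  # binary state stored in memory

def delay_spread(power_delay_profile):
    """Hypothetical feature: RMS delay spread of a power delay profile,
    i.e. the power-weighted standard deviation of the tap index.

    Indoor channels often show shorter multipath delay spread than
    outdoor ones, which makes this a plausible classifier input.
    """
    total = sum(power_delay_profile)
    mean = sum(t * p for t, p in enumerate(power_delay_profile)) / total
    var = sum(((t - mean) ** 2) * p
              for t, p in enumerate(power_delay_profile)) / total
    return var ** 0.5

def detect(power_delay_profile, classify, threshold, prev_state):
    """Accept the classifier's state only when its probability exceeds
    the empirically determined threshold; otherwise keep the prior state."""
    feature = delay_spread(power_delay_profile)
    state, prob = classify(feature)
    return state if prob > threshold else prev_state
```

A usage example with a stub classifier that labels short delay spreads as indoor:

```python
classify = lambda f: (INDOOR, 0.9) if f < 2.0 else (OUTDOOR, 0.9)
state = detect([1.0, 0.5, 0.25], classify, threshold=0.8, prev_state=OUTDOOR)
```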
Abstract:
A mobile platform efficiently processes image data, using distributed processing in which latency sensitive operations are performed on the mobile platform, while latency insensitive, but computationally intensive operations are performed on a remote server. The mobile platform acquires image data, and determines whether there is a trigger event to transmit the image data to the server. The trigger event may be a change in the image data relative to previously acquired image data, e.g., a scene change in an image. When a change is present, the image data may be transmitted to the server for processing. The server processes the image data and returns information related to the image data, such as identification of an object in an image or a reference image or model. The mobile platform may then perform reference based tracking using the identified object or reference image or model.
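The scene-change trigger described above can be sketched with a simple histogram comparison: transmit to the server only when the current frame differs enough from the previously sent one. The grayscale histogram, the normalized L1 distance, and the 0.3 threshold are illustrative assumptions, not details taken from the abstract.

```python
def gray_histogram(pixels, bins=16):
    """Coarse grayscale histogram of 8-bit pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def scene_changed(prev_hist, curr_hist, threshold=0.3):
    """Trigger event: normalized histogram distance in [0, 1] exceeds
    a threshold, indicating the scene has changed enough to be worth
    sending to the server for recognition."""
    total = sum(curr_hist) or 1
    diff = sum(abs(a - b) for a, b in zip(prev_hist, curr_hist)) / (2 * total)
    return diff > threshold
```

On each new frame the mobile platform would compute the histogram, test `scene_changed` against the histogram of the last transmitted frame, and upload the image data only when it returns `True`.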
Abstract:
Systems, apparatus and methods in a mobile device to enable and disable a depth sensor for tracking pose of the mobile device are presented. A mobile device relying on a camera without a depth sensor may provide inadequate pose estimates, for example, in low light situations. A mobile device with a depth sensor uses substantial power when the depth sensor is enabled. Embodiments described herein enable a depth sensor only when images are expected to be inadequate, for example, when the device is accelerating or moving too fast, when inertial sensor measurements are too noisy, when light levels are too low or too high, when an image is too blurry, or when a rate of images is too slow. By only using a depth sensor when images are expected to be inadequate, battery power in the mobile device may be conserved and pose estimation may still be maintained.
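The gating conditions listed above can be sketched as a single predicate that checks each image-quality indicator against a threshold. All threshold values and parameter names here are illustrative assumptions; the abstract does not give concrete numbers.

```python
def should_enable_depth_sensor(light_lux, blur_score, imu_noise, frame_rate,
                               light_range=(10.0, 10000.0),
                               max_blur=0.5,
                               max_imu_noise=0.2,
                               min_frame_rate=15.0):
    """Enable the depth sensor only when camera images are expected to be
    inadequate for pose tracking. Thresholds are placeholders; in practice
    they would be tuned empirically per device.
    """
    too_dark_or_bright = not (light_range[0] <= light_lux <= light_range[1])
    return (too_dark_or_bright
            or blur_score > max_blur        # image too blurry
            or imu_noise > max_imu_noise    # inertial measurements too noisy
            or frame_rate < min_frame_rate) # image rate too slow
```

Under nominal conditions the predicate stays `False` and the depth sensor remains off, conserving battery; any single degraded indicator flips it on.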
Abstract:
Systems and methods for monitoring the number of neighboring wireless devices in a wireless network are described herein. In one aspect, the method includes receiving, from one of the neighboring wireless devices, a message that includes an identifier associated with that device, and adding the identifier into a Bloom filter. The method may further include estimating the number of distinct strings that have been added into the Bloom filter based on the number of zeros in the Bloom filter, the number of distinct strings representing an estimate of the number of neighboring wireless devices in the wireless network.
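The zero-count estimator described above follows from the standard Bloom filter analysis: after inserting n distinct items into an m-bit filter with k hash functions, the expected fraction of zero bits is approximately e^(-kn/m), so n ≈ -(m/k)·ln(zeros/m). A minimal sketch, assuming SHA-256-derived hash positions (the abstract does not specify the hash scheme or filter parameters):

```python
import hashlib
import math

class BloomCounter:
    """Bloom filter that also estimates how many distinct identifiers
    were added, from the count of zero bits."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = [0] * m_bits

    def _positions(self, item):
        # Derive k bit positions from slices of one SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def estimate_distinct(self):
        # Expected zero fraction after n distinct inserts is ~exp(-k*n/m),
        # so n is approximately -(m/k) * ln(zeros/m).
        zeros = self.bits.count(0)
        if zeros == 0:
            return float("inf")  # filter saturated; estimate unreliable
        return -(self.m / self.k) * math.log(zeros / self.m)
```

Re-adding the same identifier sets no new bits, so repeated messages from one device do not inflate the neighbor count; that is the property that makes the Bloom filter attractive here.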
Abstract:
Exemplary methods, apparatuses, and systems infer a context of a user or device. A computer vision parameter is configured according to the inferred context, and a computer vision task is performed in accordance with the configured computer vision parameter. The computer vision task may be at least one of: a visual mapping of an environment of the device, a visual localization of the device or an object within the environment of the device, or a visual tracking of the device within the environment of the device.
Abstract:
A method of auto-calibrating light sensor data of a mobile device includes obtaining, by the mobile device, one or more reference parameters representative of light sensor data collected by a reference device. The method also includes collecting, by the mobile device, light sensor data from a light sensor included in the mobile device itself. One or more sample parameters of the light sensor data obtained from the light sensor included in the mobile device are then calculated. A calibration model is then determined for auto-calibrating the light sensor data of the light sensor included in the mobile device based on the one or more reference parameters and the one or more sample parameters.
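One plausible form of the calibration model is a linear gain/offset fit between the device's sample parameters and the reference parameters. This is an assumption for illustration; the abstract does not specify the model form or which statistics the parameters represent.

```python
def fit_linear_calibration(sample, reference):
    """Least-squares gain and offset so that reference ≈ gain*sample + offset.

    `sample` and `reference` are paired parameter lists (e.g. readings of
    the device sensor and the reference device under the same conditions).
    """
    n = len(sample)
    mean_x = sum(sample) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sample, reference))
    var = sum((x - mean_x) ** 2 for x in sample)
    gain = cov / var
    offset = mean_y - gain * mean_x
    return gain, offset

def calibrate(reading, gain, offset):
    """Apply the fitted model to a raw light sensor reading."""
    return gain * reading + offset
```

For example, if the device reads [1, 2, 3] where the reference device reads [3, 5, 7], the fit yields gain 2 and offset 1, and every subsequent raw reading is mapped through `calibrate`.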
Abstract:
Systems and methods share context information on a neighbor aware network. In one aspect, a context providing device receives a plurality of responses to a discovery query from a context consuming device, and tailors services it offers to the context consuming device based on the responses. In another aspect, a context providing device indicates in its response to a discovery query which services or local context information it can provide to the context consuming device, and also a cost associated with providing the service or the local context information. In some aspects, the cost is in units of monetary currency. In other aspects, the cost is in units of user interface display made available to an entity associated with the context providing device in exchange for the services or local context information offered to the context consuming device.
Abstract:
A system for storing target images for object recognition predicts a querying performance for a target image if the target image were included in a search tree of a database. The search tree has a universal search tree structure that is fixed so that it does not change with the addition of new target images. The target image is selected for inclusion in or exclusion from the search tree based on the predicted querying performance, and the fixed tree structure of the search tree does not change if the target image is selected for inclusion.