Abstract:
A method, performed by an electronic device, for verifying a user to allow access to the electronic device is disclosed. In this method, sensor data may be received from a plurality of sensors including at least an image sensor and a sound sensor. Context information of the electronic device may be determined based on the sensor data and at least one verification unit may be selected from a plurality of verification units based on the context information. Based on the sensor data from at least one of the image sensor or the sound sensor, the at least one selected verification unit may calculate at least one verification value. The method may determine whether to allow the user to access the electronic device based on the at least one verification value and the context information.
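The selection-and-decision flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the context flags, thresholds, and function names (`determine_context`, `select_verifiers`, `allow_access`) are all assumptions for the sake of the example.

```python
def determine_context(light_level, noise_level):
    """Derive simple context flags from image- and sound-sensor data.
    The 0..1 sensor scale and cutoffs are illustrative assumptions."""
    return {
        "dark": light_level < 0.2,   # too dark for reliable face capture
        "noisy": noise_level > 0.7,  # too loud for reliable voice capture
    }

def select_verifiers(context):
    """Select verification units suited to the current context."""
    units = []
    if not context["dark"]:
        units.append("face")   # image-based verification unit
    if not context["noisy"]:
        units.append("voice")  # sound-based verification unit
    return units or ["face", "voice"]  # fall back to all units

def allow_access(verification_values, context, threshold=0.8):
    """Grant access when the mean verification value clears a threshold
    that is raised slightly in degraded sensing conditions."""
    if context["dark"] or context["noisy"]:
        threshold += 0.05
    return sum(verification_values) / len(verification_values) >= threshold
```

In a dark room, for example, `select_verifiers` would drop the image-based unit and rely on the sound-based one, and the final access decision would consult both the verification value and the context, as the abstract describes.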
Abstract:
In several aspects of described embodiments, an electronic device and method use a camera to capture an image or a frame of video of an environment outside the electronic device, followed by identification of blocks of regions in the image. Each block that contains a region is checked as to whether a test for the presence of a line of pixels is met. When the test is met for a block, that block is identified as pixel-line-present. Pixel-line-present blocks are used to identify blocks that are adjacent. One or more adjacent blocks may be merged with a pixel-line-present block when one or more rules are found to be satisfied, resulting in a merged block. The merged block is then subjected to the above-described test, to verify the presence of a line of pixels therein, and when the test is satisfied the merged block is processed normally, e.g., classified as text or non-text.
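The pixel-line test and the merge step can be sketched on binary blocks (lists of rows of 0/1 pixels). The row-fill ratio, the horizontal merge rule, and the function names are assumptions chosen to illustrate the idea, not the claimed method.

```python
def has_pixel_line(block, ratio=0.8):
    """Test whether some row of the block is a line of pixels, i.e. a
    row whose fraction of set pixels meets the given ratio (assumed rule)."""
    width = len(block[0])
    return any(sum(row) / width >= ratio for row in block)

def merge_horizontal(a, b):
    """Merge two equally tall, horizontally adjacent blocks row-wise,
    as one possible merge rule for adjacent blocks."""
    return [row_a + row_b for row_a, row_b in zip(a, b)]
```

With this sketch, a block whose line is broken near the block boundary can fail the test on its own, yet pass once merged with an adjacent pixel-line-present block, after which the merged block would be classified normally (e.g., as text or non-text).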
Abstract:
A method, which is performed by an electronic device, for adjusting at least one image capturing parameter in a preview mode is disclosed. The method may include capturing a preview image of a scene including at least one text object based on a set of image capturing parameters. The method may also identify a plurality of text regions in the preview image. From the plurality of text regions, a target focus region may be selected. Based on the target focus region, the at least one image capturing parameter may be adjusted.
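One plausible way to select a target focus region and adjust a capture parameter is sketched below. The region format `(x, y, width, height)`, the center-proximity selection rule, and the `focus_window` parameter are all assumptions; the abstract leaves the selection criterion open.

```python
def select_target_focus_region(regions, image_size):
    """Pick the text region whose center is closest to the preview-image
    center -- one plausible selection rule among many."""
    cx, cy = image_size[0] / 2, image_size[1] / 2

    def distance_sq(region):
        x, y, w, h = region
        return ((x + w / 2) - cx) ** 2 + ((y + h / 2) - cy) ** 2

    return min(regions, key=distance_sq)

def adjust_capture_params(params, target_region):
    """Adjust the (assumed) focus-window capture parameter so the camera
    focuses on the selected text region."""
    adjusted = dict(params)
    adjusted["focus_window"] = target_region
    return adjusted
```

Other selection rules (largest region, highest text-confidence score) would fit the same flow: identify text regions, pick a target, then retune the capture parameters around it.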
Abstract:
The various aspects are directed to automatic device-to-device connection control. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives a second sound signature from a peer device, compares the first sound signature to the second sound signature, and pairs with the peer device. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends the first sound signature to a peer device, and pairs with the peer device. An aspect detects a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts a code embedded in the beacon sound signal, and pairs with a peer device.
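The signature-comparison step of the first aspect can be sketched as follows. The signature format (per-band mean amplitude), the cosine-similarity comparison, and the match threshold are illustrative assumptions; a real implementation would use a more robust acoustic fingerprint.

```python
import math

def extract_signature(samples, bands=4):
    """Crude 'sound signature': mean absolute amplitude in equal
    time bands of the captured signal (an assumed feature)."""
    n = len(samples) // bands
    return [sum(abs(s) for s in samples[i * n:(i + 1) * n]) / n
            for i in range(bands)]

def signatures_match(sig_a, sig_b, threshold=0.95):
    """Compare two signatures by cosine similarity against a threshold."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return dot / (norm_a * norm_b) >= threshold

def maybe_pair(local_samples, peer_signature):
    """Pair with the peer only when both devices heard the same sound."""
    return signatures_match(extract_signature(local_samples), peer_signature)
```

The second and third aspects reuse the same building blocks: one device sends its extracted signature (or a code embedded in a beacon sound) instead of comparing locally.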
Abstract:
A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal.
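Context-driven parameter adjustment might look like the sketch below. The context labels ("speech", "music", "silence") and the specific sample rates and codecs are assumptions made for illustration; the abstract does not fix the parameter set.

```python
def logging_params(audio_context):
    """Choose recording parameters from an (assumed) audio-context label:
    record speech at a voice-oriented rate, music at full quality, and
    stop recording for silence or unclassified noise."""
    if audio_context == "speech":
        return {"record": True, "sample_rate": 16000, "codec": "aac"}
    if audio_context == "music":
        return {"record": True, "sample_rate": 44100, "codec": "aac"}
    return {"record": False, "sample_rate": 8000, "codec": "amr"}
```

Starting and ending the recording then reduces to polling the context classifier and flipping the `record` flag when the context changes.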
Abstract:
A portable computing device reads information embossed on a form factor utilizing a built-in digital camera and determines dissimilarity between each pair of embossed characters to confirm consistency. Techniques comprise capturing an image of a form factor having information embossed thereupon, and detecting embossed characters. The detecting utilizes a gradient image and one or more edge images with a mask corresponding to the regions for which specific information is expected to be found on the form factor. The embossed form factor may be a credit card, and the captured image may comprise an account number and an expiration date embossed upon the credit card. Detecting embossed characters may comprise detecting the account number and the expiration date of the credit card, and/or the detecting may utilize a gradient image and one or more edge images with a mask corresponding to the regions for the account number and expiration date.
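The gradient-plus-mask detection and the pairwise dissimilarity check can be sketched on small 2-D grids. The forward-difference gradient, the binary mask format, and the mean-absolute-difference dissimilarity are assumptions standing in for whatever operators the actual device uses.

```python
def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2-D grid of intensities,
    a simple stand-in for the gradient image the technique relies on."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def apply_mask(grad, mask):
    """Keep gradient responses only inside the regions where embossed
    information (e.g., account number, expiration date) is expected."""
    return [[g if m else 0.0 for g, m in zip(grad_row, mask_row)]
            for grad_row, mask_row in zip(grad, mask)]

def dissimilarity(patch_a, patch_b):
    """Mean absolute difference between two equally sized character
    patches, used to check consistency between embossed characters."""
    n = sum(len(row) for row in patch_a)
    return sum(abs(a - b)
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b)) / n
```

A low dissimilarity between every pair of detected characters of the same glyph would indicate a consistent read, matching the consistency confirmation the abstract describes.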