Abstract:
A method, performed by an electronic device, for verifying a user to allow access to the electronic device is disclosed. In this method, sensor data may be received from a plurality of sensors including at least an image sensor and a sound sensor. Context information of the electronic device may be determined based on the sensor data and at least one verification unit may be selected from a plurality of verification units based on the context information. Based on the sensor data from at least one of the image sensor or the sound sensor, the at least one selected verification unit may calculate at least one verification value. The method may determine whether to allow the user to access the electronic device based on the at least one verification value and the context information.
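Below is a minimal Python sketch of the flow this abstract describes, under stated assumptions: the names (SensorData, FaceVerifier, VoiceVerifier, the context labels, and the thresholds) are illustrative placeholders, not the claimed implementation. Context is inferred from the sensor data, a verification unit is selected for that context, and the access decision combines the verification value with the context.

from dataclasses import dataclass

@dataclass
class SensorData:
    image: bytes        # frame from the image sensor
    sound: bytes        # clip from the sound sensor
    light_level: float  # auxiliary reading used to infer context

def infer_context(data: SensorData) -> str:
    # Infer a coarse environmental context from the sensor data.
    return "dark" if data.light_level < 10.0 else "bright"

class FaceVerifier:
    def score(self, data: SensorData) -> float:
        # Placeholder: a real unit would match facial features in data.image
        # against the enrolled user and return a similarity score.
        return 0.9 if data.image else 0.0

class VoiceVerifier:
    def score(self, data: SensorData) -> float:
        # Placeholder: a real unit would match voice features in data.sound.
        return 0.8 if data.sound else 0.0

# Context-dependent selection of verification units and decision thresholds.
UNITS_BY_CONTEXT = {
    "bright": [FaceVerifier()],   # the image sensor is reliable in good light
    "dark":   [VoiceVerifier()],  # fall back to the sound sensor in low light
}
THRESHOLD_BY_CONTEXT = {"bright": 0.85, "dark": 0.75}

def allow_access(data: SensorData) -> bool:
    context = infer_context(data)
    values = [unit.score(data) for unit in UNITS_BY_CONTEXT[context]]
    # The decision uses both the verification values and the context.
    return max(values, default=0.0) >= THRESHOLD_BY_CONTEXT[context]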
Abstract:
A method, which is performed in a first electronic device, for authorizing access to a second electronic device is disclosed. The method may include establishing communication between the first electronic device and the second electronic device. The method may also include obtaining data indicative of a motion of at least one of the first and second electronic devices in response to a movement of the at least one of the first and second electronic devices. Based on the data, a control signal authorizing access to the second electronic device is generated and transmitted to the second electronic device.
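A minimal sketch, under assumptions, of the motion-triggered authorization described above: the first device evaluates motion samples against a hypothetical gesture criterion and, if it matches, sends a control signal over an established channel. The TCP transport and the JSON signal format are illustrative stand-ins; the abstract does not specify either.

import json
import math
import socket

def motion_magnitude(samples: list[tuple[float, float, float]]) -> float:
    # Average acceleration magnitude over the captured motion samples.
    return sum(math.sqrt(x*x + y*y + z*z) for x, y, z in samples) / len(samples)

def matches_unlock_gesture(samples, threshold: float = 12.0) -> bool:
    # Hypothetical criterion: a deliberate shake yields a high average magnitude.
    return motion_magnitude(samples) >= threshold

def authorize_second_device(samples, host: str, port: int) -> bool:
    if not matches_unlock_gesture(samples):
        return False
    # Control signal authorizing access; the format is an assumption.
    control_signal = json.dumps({"command": "unlock", "source": "first_device"})
    # Previously established communication channel, modeled here as TCP.
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(control_signal.encode("utf-8"))
    return True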
Abstract:
A method for controlling an application in a mobile device is disclosed. The method includes receiving environmental information, inferring an environmental context from the environmental information, and controlling activation of the application based on a set of reference models associated with the inferred environmental context. In addition, the method may include receiving a sound input, extracting a sound feature from the sound input, transmitting the sound feature to a server configured to group a plurality of mobile devices into at least one similar context group, and receiving, from the server, information on a leader device or a non-leader device and the at least one similar context group.
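A rough Python sketch of the client-side portion, under assumptions: a sound feature is extracted from an input sound, an environmental context is inferred from a set of reference models, and application activation is gated on that context. The band-energy feature and the reference models are illustrative stand-ins, and the server-side grouping into similar context groups is omitted.

import math

def extract_sound_feature(samples: list[float], bands: int = 4) -> list[float]:
    # Crude feature: average energy in equal-length segments of the clip.
    n = max(1, len(samples) // bands)
    return [sum(s * s for s in samples[i*n:(i+1)*n]) / n for i in range(bands)]

# Hypothetical reference models: one representative feature vector per context.
REFERENCE_MODELS = {
    "office": [0.02, 0.03, 0.02, 0.02],
    "street": [0.40, 0.35, 0.45, 0.38],
}

def infer_context(feature: list[float]) -> str:
    # Nearest reference model by Euclidean distance.
    return min(REFERENCE_MODELS,
               key=lambda c: math.dist(feature, REFERENCE_MODELS[c]))

def should_activate(app_contexts: set[str], samples: list[float]) -> bool:
    # Activate the application only in the contexts it is associated with.
    return infer_context(extract_sound_feature(samples)) in app_contexts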
Abstract:
A method for displaying an image is disclosed. The method may be performed in an electronic device. Further, the method may detect at least one text region in the image and determine at least one text category associated with the at least one text region. Based on the at least one text region and the at least one text category, the method may generate at least one thumbnail from the image and display the at least one thumbnail.
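A minimal sketch, assuming hypothetical category priorities, of how detected text regions and their categories could drive thumbnail generation: regions are ranked by category, and a padded crop box is produced for the top-ranked regions. Text detection and categorization themselves are outside the sketch.

from dataclasses import dataclass

@dataclass
class TextRegion:
    box: tuple[int, int, int, int]  # (left, top, right, bottom) in pixels
    category: str                   # e.g. "title", "phone_number", "url"

# Hypothetical priorities: lower value = shown first.
CATEGORY_PRIORITY = {"title": 0, "phone_number": 1, "url": 2, "other": 9}

def thumbnail_boxes(regions: list[TextRegion],
                    image_size: tuple[int, int],
                    margin: int = 8,
                    limit: int = 3) -> list[tuple[int, int, int, int]]:
    width, height = image_size
    ranked = sorted(regions,
                    key=lambda r: CATEGORY_PRIORITY.get(r.category, 9))
    boxes = []
    for region in ranked[:limit]:
        left, top, right, bottom = region.box
        # Expand the text region slightly and clamp to the image bounds.
        boxes.append((max(0, left - margin), max(0, top - margin),
                      min(width, right + margin), min(height, bottom + margin)))
    return boxes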
Abstract:
A method, performed in an electronic device, for connecting to a target device is disclosed. The method includes capturing an image including a face of a target person associated with the target device and recognizing an indication of the target person. The indication of the target person may be a pointing object, a speech command, and/or any suitable input command. The face of the target person in the image is detected based on the indication, and at least one facial feature of the face is extracted. Based on the at least one facial feature, the electronic device is connected to the target device.
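A minimal sketch, under assumptions, of the matching step: a facial feature extracted from the indicated face is compared against a hypothetical registry of enrolled features, and the address of the nearest-matching device is returned for the connection attempt. Face detection, feature extraction, and the actual connection protocol are placeholders.

import math

# Hypothetical registry mapping enrolled facial features to device addresses.
DEVICE_REGISTRY = {
    "alice_phone": ([0.1, 0.9, 0.3], "192.168.1.21"),
    "bob_tablet":  ([0.8, 0.2, 0.5], "192.168.1.37"),
}

def extract_facial_feature(image: bytes, face_box) -> list[float]:
    # Placeholder: a real system would run a face-recognition model on the
    # region selected by the pointing gesture or speech command.
    return [0.12, 0.88, 0.31]

def find_target_device(feature: list[float], max_distance: float = 0.5):
    # Nearest enrolled feature wins, subject to a similarity threshold.
    enrolled, address = min(DEVICE_REGISTRY.values(),
                            key=lambda entry: math.dist(feature, entry[0]))
    return address if math.dist(feature, enrolled) <= max_distance else None

def connect_to_indicated_person(image: bytes, face_box):
    feature = extract_facial_feature(image, face_box)
    address = find_target_device(feature)
    # A real implementation would open e.g. a Wi-Fi Direct or Bluetooth
    # session to the returned address; this sketch only looks it up.
    return address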
Abstract:
A method for grouping data items in a mobile device is disclosed. In this method, a plurality of data items and a sound tag associated with each of the plurality of data items are stored, and the sound tag includes a sound feature extracted from an input sound indicative of an environmental context for the data item. Further, the method may include generating a new data item, receiving an environmental sound, generating a sound tag associated with the new data item by extracting a sound feature from the environmental sound, and grouping the new data item with at least one of the plurality of data items based on the sound tags associated with the new data item and the plurality of data items.
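A minimal Python sketch, under assumptions, of sound-tag grouping: each stored data item carries a sound feature extracted at creation time, and a new item is grouped with stored items whose tags fall within a distance threshold. The band-energy feature and the threshold are illustrative.

import math
from dataclasses import dataclass, field

def extract_sound_feature(samples: list[float], bands: int = 4) -> list[float]:
    # Crude band-energy feature; a stand-in for the real extractor.
    n = max(1, len(samples) // bands)
    return [sum(s * s for s in samples[i*n:(i+1)*n]) / n for i in range(bands)]

@dataclass
class DataItem:
    name: str
    sound_tag: list[float]
    group: list["DataItem"] = field(default_factory=list)

def group_new_item(name: str, environmental_sound: list[float],
                   stored: list[DataItem], threshold: float = 0.1) -> DataItem:
    # Generate the sound tag for the new item from the environmental sound.
    item = DataItem(name=name,
                    sound_tag=extract_sound_feature(environmental_sound))
    # Group the new item with every stored item whose tag is similar enough.
    item.group = [old for old in stored
                  if math.dist(item.sound_tag, old.sound_tag) <= threshold]
    stored.append(item)
    return item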