Abstract:
An electronic device for generating video data is disclosed. The electronic device may include a communication unit configured to wirelessly receive a video stream captured by a camera, wherein the camera is located in an unmanned aerial vehicle. The electronic device may also include at least one sound sensor configured to receive an input sound stream. In addition, the electronic device may include an audio control unit configured to generate an audio stream associated with the video stream based on the input sound stream. Further, the electronic device may include a synthesizer unit configured to generate the video data based on the video stream and the audio stream.
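The synthesizer step described above can be sketched as a timestamp-based pairing of received video frames with locally captured audio chunks. This is a minimal illustration only; the abstract does not specify a container format or alignment method, and all names and the tolerance value are assumptions.

```python
# Hedged sketch: pair wirelessly received video frames with locally
# captured audio chunks by timestamp to produce combined video data.
def synthesize(video_frames, audio_chunks, tolerance=0.02):
    """Pair each (timestamp, frame) with the closest (timestamp, chunk)
    within `tolerance` seconds; unmatched frames get no audio."""
    out = []
    for t_v, frame in video_frames:
        best = None
        for t_a, chunk in audio_chunks:
            if abs(t_a - t_v) <= tolerance and (
                best is None or abs(t_a - t_v) < abs(best[0] - t_v)
            ):
                best = (t_a, chunk)
        out.append((t_v, frame, best[1] if best else None))
    return out
```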
Abstract:
A method, performed in an electronic device, for connecting to a target device is disclosed. The method includes capturing an image including a face of a target person associated with the target device and recognizing an indication of the target person. The indication of the target person may be a pointing object, a speech command, and/or any suitable input command. The face of the target person in the image is detected based on the indication, and at least one facial feature of the face in the image is extracted. Based on the at least one facial feature, the electronic device is connected to the target device.
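The matching step above can be illustrated as a nearest-neighbor lookup: a facial feature vector extracted from the captured image is compared against a registry of (feature vector, device id) pairs, and the device connects to the closest registered match. The registry structure, distance metric, and threshold are assumptions for illustration, not taken from the abstract.

```python
import math

def find_target_device(extracted_features, registry, threshold=1.0):
    """Return the device id whose stored facial feature vector is
    closest to `extracted_features`, or None if none is within
    `threshold` (Euclidean distance)."""
    best_id, best_dist = None, threshold
    for features, device_id in registry:
        dist = math.dist(extracted_features, features)
        if dist < best_dist:
            best_id, best_dist = device_id, dist
    return best_id
```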
Abstract:
According to an aspect of the present disclosure, a method for controlling access to a plurality of electronic devices is disclosed. The method includes detecting whether a first device is in contact with a user, adjusting a security level of the first device to activate the first device when the first device is in contact with the user, detecting at least one second device within a communication range of the first device, and adjusting a security level of the at least one second device to control access to the at least one second device based on a distance between the first device and the at least one second device.
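The proximity-based rule above can be sketched as a simple policy function: the first (worn) device activates on user contact, and each second device's security level depends on its distance from the first device. The specific levels and distance cutoffs below are illustrative assumptions; the abstract does not define them.

```python
def security_level(in_contact, distance_m):
    """Return a security level for a second device: access requires the
    first device to be in contact with the user, and farther devices
    demand stronger verification."""
    if not in_contact:
        return "locked"
    if distance_m <= 1.0:
        return "unlocked"
    if distance_m <= 5.0:
        return "pin_required"
    return "locked"
```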
Abstract:
A method for grouping data items in a mobile device is disclosed. In this method, a plurality of data items and a sound tag associated with each of the plurality of data items are stored, and the sound tag includes a sound feature extracted from an input sound indicative of an environmental context for the data item. Further, the method may include generating a new data item, receiving an environmental sound, generating a sound tag associated with the new data item by extracting a sound feature from the environmental sound, and grouping the new data item with at least one of the plurality of data items based on the sound tags associated with the new data item and the plurality of data items.
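The grouping step above can be illustrated with a deliberately crude sound feature: here the "extracted" tag is just the mean absolute amplitude of the environmental sound, and a new item is grouped with stored items whose tags fall within a similarity threshold. Real systems would use richer features (e.g., spectral descriptors); this is an assumption made for brevity.

```python
def extract_sound_tag(samples):
    """Toy sound feature: mean absolute amplitude of the input sound."""
    return sum(abs(s) for s in samples) / len(samples)

def group_with(new_tag, stored_items, threshold=0.1):
    """Return ids of stored (item_id, tag) pairs whose sound tag is
    within `threshold` of the new item's tag."""
    return [item_id for item_id, tag in stored_items
            if abs(tag - new_tag) <= threshold]
```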
Abstract:
According to an aspect of the present disclosure, a method for controlling display of a region on a touch screen display of a mobile device is disclosed. The method includes receiving a command indicative of zooming by a first sensor, sensing at least one image including at least one eye by a camera, determining a direction of a gaze of the at least one eye based on the at least one image, determining a target region to be zoomed on the touch screen display based on the direction of the gaze, and zooming the target region on the touch screen display.
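The gaze-to-region mapping above can be sketched as follows: the gaze direction is modeled as normalized (x, y) coordinates in [0, 1], mapped to a fixed-size square target region centered on the gaze point and clamped to the screen bounds. The region size and coordinate model are illustrative assumptions.

```python
def target_region(gaze_xy, screen_w, screen_h, region=200):
    """Center a `region`-sized square on the gaze point (normalized
    coordinates), clamped so the region stays on screen; returns
    (x, y, width, height) of the region to be zoomed."""
    cx, cy = gaze_xy[0] * screen_w, gaze_xy[1] * screen_h
    x = min(max(cx - region / 2, 0), screen_w - region)
    y = min(max(cy - region / 2, 0), screen_h - region)
    return (x, y, region, region)
```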
Abstract:
A method for controlling access to a plurality of applications in an electronic device includes receiving a voice command from a speaker for accessing a target application among the plurality of applications, and verifying whether the voice command is indicative of a user authorized to access the applications based on a speaker model of the authorized user. In this method, each application is associated with a security level having a threshold value. The method further includes updating the speaker model with the voice command if the voice command is verified to be indicative of the user, and adjusting at least one of the threshold values based on the updated speaker model.
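The verify-update-adjust loop above can be sketched with a deliberately simplified speaker "model": a running list of accepted match scores whose mean drives the per-application thresholds. The scoring function, update rule, and adjustment rate are assumptions for illustration; the abstract does not specify them.

```python
def verify_and_update(score, model_scores, thresholds, app, rate=0.1):
    """Accept the voice command if `score` meets the target app's
    threshold; on acceptance, update the speaker model (running score
    list) and shift every app's threshold toward the model's mean."""
    if score < thresholds[app]:
        return False
    model_scores.append(score)          # update the speaker model
    mean = sum(model_scores) / len(model_scores)
    for a in thresholds:                # adjust the threshold values
        thresholds[a] += rate * (mean - thresholds[a])
    return True
```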