Abstract:
A method, performed in an electronic device, for assigning a target keyword to a function is disclosed. In this method, a list of a plurality of target keywords is received at the electronic device via a communication network, and a particular target keyword is selected from the list of target keywords. Further, the method may include receiving a keyword model for the particular target keyword via the communication network. In this method, the particular target keyword is assigned to a function of the electronic device such that the function is performed in response to detecting the particular target keyword based on the keyword model in an input sound received at the electronic device.
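The assignment flow above can be illustrated with a minimal sketch, assuming a dictionary-based registry; the names below (keyword_models, assign_keyword, on_input_sound) are illustrative placeholders and do not come from the disclosure.
```python
# Keyword models received via the communication network (keyword -> model data).
keyword_models = {"hey camera": b"<model-bytes>", "start recording": b"<model-bytes>"}

# Device functions that target keywords can be assigned to.
functions = {"open_camera": lambda: print("camera opened"),
             "start_voice_memo": lambda: print("voice memo started")}

# Assignments of particular target keywords to functions.
assignments = {}

def assign_keyword(keyword, function_name):
    """Assign a selected target keyword (with its received model) to a device function."""
    assignments[keyword] = (keyword_models[keyword], functions[function_name])

def on_input_sound(detected_keyword):
    """Run the assigned function when the keyword is detected in an input sound."""
    if detected_keyword in assignments:
        _model, func = assignments[detected_keyword]
        func()

assign_keyword("hey camera", "open_camera")
on_input_sound("hey camera")   # prints "camera opened"
```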
Abstract:
A method of detecting a target keyword for activating a function in an electronic device is disclosed. The method includes receiving an input sound starting from one of a plurality of portions of the target keyword. The input sound may be periodically received based on a duty cycle. The method extracts a plurality of sound features from the input sound, and obtains state information on a plurality of states associated with the portions of the target keyword. Based on the extracted sound features and the state information, the input sound may be detected as the target keyword. The plurality of states includes a predetermined number of entry states indicative of a predetermined number of the plurality of portions.
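A minimal sketch of the entry-state idea follows, assuming each portion of the target keyword is summarized by one reference value per state and using a toy distance score in place of the statistical models a real detector would use; all names and thresholds are illustrative.
```python
def detect_keyword(features, state_refs, num_entry_states, threshold=1.0):
    """Return True if the extracted sound features match the keyword states,
    allowing the input sound to start at any of the first `num_entry_states` states."""
    best = float("inf")
    for entry in range(num_entry_states):          # candidate entry state
        states = state_refs[entry:]
        if len(features) < len(states):
            continue
        # Compare each extracted sound feature with the corresponding state reference.
        score = sum(abs(f - s) for f, s in zip(features, states)) / len(states)
        best = min(best, score)
    return best <= threshold

# Toy example: keyword portions summarized as scalar reference "features", one per state.
state_refs = [0.2, 0.5, 0.9, 0.4]
print(detect_keyword([0.55, 0.85, 0.45], state_refs, num_entry_states=2))  # True
```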
Abstract:
A method, which is performed in an electronic device, for activating a target application is disclosed. The method may include receiving an input sound stream including an activation keyword for activating the target application and a speech command indicative of a function of the target application. The method may also detect the activation keyword from the input sound stream. If the activation keyword is detected, a portion of the input sound stream including at least a portion of the speech command may be buffered in a buffer memory. In addition, in response to detecting the activation keyword, the target application may be activated to perform the function of the target application.
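The detect-then-buffer behavior can be sketched as follows, assuming the input sound stream is simplified to a sequence of text tokens; the names ACTIVATION_KEYWORD, process_stream, and activate_target_application are hypothetical.
```python
ACTIVATION_KEYWORD = "hey assistant"

def process_stream(tokens):
    buffer_memory = []
    activated = False
    for token in tokens:
        if not activated:
            if token == ACTIVATION_KEYWORD:      # activation keyword detected
                activated = True
        else:
            buffer_memory.append(token)          # buffer the speech command portion
    if activated:
        activate_target_application(" ".join(buffer_memory))

def activate_target_application(speech_command):
    # The target application is activated to perform the function named in the command.
    print(f"launching target application for command: {speech_command!r}")

process_stream(["hey assistant", "take", "a", "selfie"])
```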
Abstract:
An electronic device for generating video data is disclosed. The electronic device may include a communication unit configured to wirelessly receive a video stream captured by a camera, wherein the camera is located in an unmanned aerial vehicle. The electronic device may also include at least one sound sensor configured to receive an input sound stream. In addition, the electronic device may include an audio control unit configured to generate an audio stream associated with the video stream based on the input sound stream. Further, the electronic device may include a synthesizer unit configured to generate the video data based on the video stream and the audio stream.
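A structural sketch of the described units is shown below, with plain byte strings standing in for real media streams; the class and method names are illustrative and not part of the disclosure.
```python
class ElectronicDevice:
    def receive_video_stream(self, packets):
        """Communication unit: video stream wirelessly received from the UAV camera."""
        return b"".join(packets)

    def receive_input_sound(self):
        """Sound sensor: input sound stream captured at the device."""
        return b"<local-audio>"

    def generate_audio_stream(self, input_sound, video_stream):
        """Audio control unit: derive an audio stream associated with the video stream."""
        return input_sound  # a real device would time-align this to the video

    def synthesize(self, video_stream, audio_stream):
        """Synthesizer unit: combine the video and audio streams into the video data."""
        return {"video": video_stream, "audio": audio_stream}

device = ElectronicDevice()
video = device.receive_video_stream([b"<frame1>", b"<frame2>"])
audio = device.generate_audio_stream(device.receive_input_sound(), video)
video_data = device.synthesize(video, audio)
```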
Abstract:
A method, which is performed by an electronic device, for obtaining a speaker-independent keyword model of a keyword designated by a user is disclosed. The method may include receiving at least one sample sound from the user indicative of the keyword. The method may also generate a speaker-dependent keyword model for the keyword based on the at least one sample sound, send a request for the speaker-independent keyword model of the keyword to a server in response to generating the speaker-dependent keyword model, and receive, from the server, the speaker-independent keyword model adapted for detecting the keyword spoken by a plurality of users.
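The request/response flow can be sketched as follows, with a dictionary standing in for the server and trivial stand-in models; none of the names below come from the disclosure.
```python
SERVER_MODELS = {"hello camera": "speaker-independent-model"}   # server-side store

def generate_speaker_dependent_model(sample_sounds):
    """Build a speaker-dependent model from the user's sample sounds (toy version)."""
    return {"keyword": sample_sounds[0], "samples": len(sample_sounds)}

def request_speaker_independent_model(keyword):
    """Ask the server for a model adapted to detect the keyword spoken by many users."""
    return SERVER_MODELS.get(keyword)

samples = ["hello camera", "hello camera", "hello camera"]
sd_model = generate_speaker_dependent_model(samples)
si_model = request_speaker_independent_model(sd_model["keyword"])
print(sd_model, si_model)
```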
Abstract:
A method for generating a notification by an electronic device to alert a user of the electronic device is disclosed. In this method, a speech phrase may be received. Then, the received speech phrase may be recognized, by a processor, as a command to generate the notification. In addition, one or more context data of the electronic device may be detected by at least one sensor. It may be determined, based at least on the context data, whether the notification is to be generated. The notification may be generated, by the processor, based on the context data and the command to generate the notification.
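A minimal sketch of the context-based decision step follows, assuming two illustrative pieces of context data (an ambient light level and a proximity flag); the command phrase and helper names are hypothetical.
```python
def should_generate_notification(context):
    """Decide from the context data whether alerting the user is likely to be useful."""
    # e.g. skip the alert if the device appears to be covered and away from the user.
    return not (context["light_level"] < 5 and not context["near_user"])

def handle_speech_phrase(phrase, context):
    if phrase.lower() == "find my phone":            # recognized as the command
        if should_generate_notification(context):
            print("generating notification (sound / vibration / flash)")
        else:
            print("suppressing notification based on context data")

handle_speech_phrase("find my phone", {"light_level": 80, "near_user": True})
```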
Abstract:
The various aspects are directed to automatic device-to-device connection control. An aspect extracts a first sound signature, wherein extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives a second sound signature from a peer device, compares the first sound signature to the second sound signature, and pairs with the peer device. Another aspect extracts a first sound signature, wherein extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends the first sound signature to a peer device, and pairs with the peer device. A further aspect detects a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts a code embedded in the beacon sound signal, and pairs with a peer device.
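The signature-comparison aspect can be sketched as follows, using a toy signature (a few coarse energy sums) and a similarity threshold; practical systems would use richer acoustic features.
```python
def extract_sound_signature(samples, bins=4):
    """Summarize a directional sound signal as a short energy signature."""
    step = max(1, len(samples) // bins)
    return [sum(abs(s) for s in samples[i:i + step]) for i in range(0, step * bins, step)]

def signatures_match(sig_a, sig_b, tolerance=0.2):
    """Compare the local signature with the one received from the peer device."""
    diffs = [abs(a - b) / max(a, b, 1e-9) for a, b in zip(sig_a, sig_b)]
    return max(diffs) <= tolerance

local_sig = extract_sound_signature([0.1, 0.3, -0.2, 0.4, 0.1, -0.3, 0.2, 0.5])
peer_sig = extract_sound_signature([0.1, 0.29, -0.21, 0.4, 0.1, -0.31, 0.19, 0.5])
if signatures_match(local_sig, peer_sig):
    print("pairing with peer device")
```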
Abstract:
A method of providing a channel program recommendation in real-time for a display device is disclosed. The method includes receiving a plurality of multimedia signal streams for a plurality of channel programs, and generating in real-time at least one video and audio content tag based on at least one of video and audio contents from the multimedia signal streams. The display device generates the channel program recommendation including at least one of the channel programs based on the at least one video and audio content tag and at least one of a plurality of viewing criteria. A notification of the channel program recommendation is then output.
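A minimal sketch of matching content tags against a viewing criterion follows; the tag and criteria structures are illustrative and not a format used by any real device.
```python
# Real-time content tags generated from the video and audio of each channel program.
channel_tags = {
    "channel 7":  {"soccer", "live", "crowd noise"},
    "channel 12": {"news", "weather"},
}
viewing_criteria = {"favorite_topics": {"soccer", "movies"}}

def recommend(channel_tags, criteria):
    """Recommend channel programs whose content tags match a viewing criterion."""
    return [ch for ch, tags in channel_tags.items()
            if tags & criteria["favorite_topics"]]

for channel in recommend(channel_tags, viewing_criteria):
    print(f"notification: {channel} matches your viewing criteria")
```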
Abstract:
A method for providing object information for a scene in a wearable computer is disclosed. In this method, an image of the scene is captured. Further, the method includes determining a current location of the wearable computer and a view direction of an image sensor of the wearable computer, and extracting at least one feature from the image indicative of at least one object. Based on the current location, the view direction, and the at least one feature, information on the at least one object is determined. Then, the determined information is output.
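The lookup step can be sketched as follows, assuming a small object database keyed by location, view direction, and an image feature label; all entries and field names are made up for illustration.
```python
object_db = [
    {"location": "plaza", "direction": "north", "feature": "clock tower",
     "info": "City Hall clock tower, built 1897"},
]

def lookup_object_info(location, direction, features):
    """Return information for any known object matching the capture context."""
    return [entry["info"] for entry in object_db
            if entry["location"] == location
            and entry["direction"] == direction
            and entry["feature"] in features]

for info in lookup_object_info("plaza", "north", ["clock tower", "bench"]):
    print(info)   # output the determined object information
```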
Abstract:
A method for controlling voice activation by a target keyword in a mobile device is disclosed. The method includes receiving an input sound stream. When the input sound stream indicates speech, a voice activation unit is activated to detect the target keyword, and at least one sound feature is extracted from the input sound stream. Further, the method includes deactivating the voice activation unit when the at least one sound feature indicates a non-target keyword.
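A minimal sketch of the activation and deactivation control loop follows, with speech detection and keyword scoring stubbed out as toy heuristics rather than real classifiers; all names and thresholds are illustrative.
```python
def frame_indicates_speech(frame):
    return max(abs(s) for s in frame) > 0.1          # toy energy-based speech check

def feature_indicates_non_target(feature, threshold=0.5):
    return feature < threshold                        # toy keyword-likelihood score

def control_voice_activation(frames, features):
    active = False
    for frame, feature in zip(frames, features):
        if not active and frame_indicates_speech(frame):
            active = True                             # activate the voice activation unit
        if active and feature_indicates_non_target(feature):
            active = False                            # deactivate on a non-target keyword
    return active

frames = [[0.0, 0.02], [0.3, -0.2], [0.4, 0.1]]
features = [0.0, 0.8, 0.2]                            # keyword-likelihood per frame
print(control_voice_activation(frames, features))     # False: deactivated at the end
```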