Abstract:
A method includes receiving, at a mobile device, an alarm sound including information related to an emergency event. The method also includes transmitting, to a server, identification information of the mobile device and the information related to the emergency event. The method further includes receiving, from the server, an instruction for responding to the emergency event. The method further includes outputting the instruction.
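A minimal sketch of the client-side flow this abstract describes: decode event information from a detected alarm sound, bundle it with a device identifier for the server, and output the returned instruction. All names (`AlarmInfo`, `build_report`, the field names) are illustrative, not from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AlarmInfo:
    event_type: str   # e.g. "fire", assumed to be decoded from the alarm sound
    location: str     # e.g. a floor code assumed to be embedded in the sound

def build_report(device_id: str, info: AlarmInfo) -> dict:
    """Payload the mobile device would transmit to the server."""
    return {"device_id": device_id,
            "event_type": info.event_type,
            "location": info.location}

def handle_instruction(instruction: str) -> str:
    """Output the server's instruction (here, simply as text)."""
    return f"Instruction: {instruction}"

report = build_report("device-42", AlarmInfo("fire", "floor-3"))
```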
Abstract:
According to an aspect of the present disclosure, a method for generating a keyword model of a user-defined keyword in an electronic device is disclosed. The method includes receiving at least one input indicative of the user-defined keyword, determining a sequence of subwords from the at least one input, generating the keyword model associated with the user-defined keyword based on the sequence of subwords and a subword model of the subwords, wherein the subword model is configured to model a plurality of acoustic features of the subwords based on a speech database, and providing the keyword model associated with the user-defined keyword to a voice activation unit configured with a keyword model associated with a predetermined keyword.
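The pipeline above can be sketched in miniature: a toy subword model in which each subword maps to a pre-trained acoustic template (here a short list of floats standing in for features learned from a speech database), a greedy subword segmenter standing in for subword recognition, and a keyword model built by concatenating the subword templates. The subword inventory and matching scheme are invented for illustration.

```python
# Assumed to be trained offline on a speech database; values are placeholders.
SUBWORD_MODEL = {
    "hel": [0.1, 0.2],
    "lo":  [0.3, 0.4],
}

def determine_subwords(spoken_input: str) -> list[str]:
    """Stand-in for subword recognition: greedily split a transcript
    into the longest matching known subwords."""
    seq, rest = [], spoken_input
    while rest:
        for sw in sorted(SUBWORD_MODEL, key=len, reverse=True):
            if rest.startswith(sw):
                seq.append(sw)
                rest = rest[len(sw):]
                break
        else:
            raise ValueError(f"no subword matches {rest!r}")
    return seq

def generate_keyword_model(subwords: list[str]) -> list[list[float]]:
    """Keyword model = the sequence of subword acoustic templates."""
    return [SUBWORD_MODEL[sw] for sw in subwords]

seq = determine_subwords("hello")
model = generate_keyword_model(seq)
```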
Abstract:
A method, performed in an electronic device, for tracking a piece of music in an audio stream is disclosed. The method may receive a first portion of the audio stream and extract a first sound feature based on the first portion of the audio stream. Also, the method may determine whether the first portion of the audio stream is indicative of music based on the first sound feature. In response to determining that the first portion of the audio stream is indicative of music, a first piece of music may be identified based on the first portion of the audio stream. Further, upon receiving a second portion of the audio stream, the method may extract a second sound feature based on the second portion of the audio stream and determine whether the second portion of the audio stream is indicative of the first piece of music.
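A hedged sketch of the two steps: classify a stream portion as music via a sound feature, then test whether a later portion continues the same piece. A real system would use spectral features; here a simple mean-amplitude feature and a crude continuity tolerance stand in, and both thresholds are invented.

```python
def extract_feature(portion: list[float]) -> float:
    """Toy 'sound feature': mean absolute amplitude of the portion."""
    return sum(abs(x) for x in portion) / len(portion)

def is_music(feature: float, threshold: float = 0.5) -> bool:
    """Classify a portion as music when its feature exceeds a threshold."""
    return feature > threshold

def same_piece(first_feature: float, second_feature: float,
               tol: float = 0.1) -> bool:
    """Crude continuity test: two portions belong to the same piece
    when their features are within a tolerance."""
    return abs(first_feature - second_feature) < tol
```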
Abstract:
A method for activating a voice assistant function in a mobile device is disclosed. The method includes receiving an input sound stream by a sound sensor and determining a context of the mobile device. The method may determine the context based on the input sound stream. For determining the context, the method may also obtain data indicative of the context of the mobile device from at least one of an acceleration sensor, a location sensor, an illumination sensor, a proximity sensor, a clock unit, and a calendar unit in the mobile device. In this method, a threshold for activating the voice assistant function is adjusted based on the context. The method detects a target keyword from the input sound stream based on the adjusted threshold. If the target keyword is detected, the method activates the voice assistant function.
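The threshold-adjustment step can be sketched as a lookup from a detected context to an offset on a base detection threshold. The contexts, offsets, and base value below are illustrative, not from the disclosure.

```python
BASE_THRESHOLD = 0.7  # illustrative base keyword-match threshold

# Hypothetical context adjustments: raise the threshold where false
# triggers are costly, lower it where voice use is likely.
CONTEXT_ADJUSTMENT = {
    "quiet_room": +0.1,
    "driving":    -0.2,
    "unknown":     0.0,
}

def adjusted_threshold(context: str) -> float:
    return BASE_THRESHOLD + CONTEXT_ADJUSTMENT.get(context, 0.0)

def should_activate(keyword_score: float, context: str) -> bool:
    """Activate the voice assistant when the keyword-match score for the
    input sound stream exceeds the context-adjusted threshold."""
    return keyword_score > adjusted_threshold(context)
```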
Abstract:
A method for controlling an electronic device in response to speech spoken by a user is disclosed. The method may include receiving an input sound by a sound sensor. The method may also detect the speech spoken by the user in the input sound, determine first characteristics of a first frequency range and second characteristics of a second frequency range of the speech in response to detecting the speech in the input sound, and determine whether a direction of departure of the speech spoken by the user is toward the electronic device based on the first and second characteristics.
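One plausible reading of the two-band comparison: high-frequency speech energy is more directional than low-frequency energy, so a high-to-low band energy ratio above a threshold can suggest the speaker is facing the device. The band split, spectrum representation, and threshold below are all illustrative assumptions.

```python
def band_energy(spectrum: dict[int, float], lo: int, hi: int) -> float:
    """Sum the energy of spectrum bins (Hz -> energy) within [lo, hi)."""
    return sum(e for f, e in spectrum.items() if lo <= f < hi)

def toward_device(spectrum: dict[int, float],
                  ratio_threshold: float = 0.5) -> bool:
    """Decide whether the direction of departure of the speech is toward
    the device, from the ratio of second-range to first-range energy."""
    low = band_energy(spectrum, 0, 2000)      # first frequency range
    high = band_energy(spectrum, 2000, 8000)  # second frequency range
    return low > 0 and (high / low) > ratio_threshold
```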
Abstract:
A method for providing object information for a scene in a wearable computer is disclosed. In this method, an image of the scene is captured. Further, the method includes determining a current location of the wearable computer and a view direction of an image sensor of the wearable computer and extracting at least one feature from the image indicative of at least one object. Based on the current location, the view direction, and the at least one feature, information on the at least one object is determined. Then, the determined information is output.
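The determination step can be sketched as a lookup keyed on the three inputs the abstract names: current location, view direction, and an extracted image feature. The database contents and key encoding are invented for illustration.

```python
from typing import Optional

# Hypothetical object database: (location cell, view direction,
# detected image feature) -> object information.
OBJECT_DB = {
    ("cell-7", "north", "red-octagon"): "Stop sign at Main St.",
}

def object_info(location: str, direction: str,
                feature: str) -> Optional[str]:
    """Determine object information from location, view direction,
    and an extracted feature; None when no object matches."""
    return OBJECT_DB.get((location, direction, feature))
```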
Abstract:
A method for communicating messages by a mobile device via a sound medium is disclosed. The mobile device receives input sounds from at least one other mobile device via the sound medium. From the input sounds, an input sound signal carrying a first message encoded with a first key is detected. The mobile device decodes the first message based on a matching key. An output sound signal carrying a second message encoded with a second key is generated. Further, the mobile device transmits an output sound corresponding to the output sound signal via the sound medium.
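A toy sketch of the key-based encoding and matching-key decoding: a message XOR-encoded with a key is recoverable only with a matching key. The modulation of the encoded bytes onto an audio signal, which the real method would include, is omitted, and XOR is a stand-in for whatever coding the disclosure uses.

```python
def encode(message: bytes, key: bytes) -> bytes:
    """XOR-encode a message with a repeating key (illustrative coding)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

# XOR is its own inverse, so decoding with the matching key reuses encode.
decode = encode

signal = encode(b"meet at 5", b"k1")       # first message, first key
recovered = decode(signal, b"k1")          # decoded with the matching key
garbled = decode(signal, b"k2")            # a non-matching key fails
```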
Abstract:
According to an aspect of the present disclosure, a method for controlling access to a plurality of electronic devices is disclosed. The method includes detecting whether a first device is in contact with a user, adjusting a security level of the first device to activate the first device when the first device is in contact with the user, detecting at least one second device within a communication range of the first device, and adjusting a security level of the at least one second device to control access to the at least one second device based on a distance between the first device and the at least one second device.
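The policy above can be sketched as two small functions: the first device activates on body contact, and each second device gets a security level scaled by its distance from the first. The level names and distance thresholds are illustrative assumptions.

```python
def first_device_level(in_contact: bool) -> str:
    """Activate (unlock) the first device only while it is in
    contact with the user."""
    return "unlocked" if in_contact else "locked"

def second_device_level(distance_m: float) -> str:
    """Adjust a second device's security level based on its distance
    from the first device (thresholds are invented)."""
    if distance_m < 1.0:
        return "unlocked"
    if distance_m < 5.0:
        return "pin_required"
    return "locked"
```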
Abstract:
A method for grouping data items in a mobile device is disclosed. In this method, a plurality of data items and a sound tag associated with each of the plurality of data items are stored, and the sound tag includes a sound feature extracted from an input sound indicative of an environmental context for the data item. Further, the method may include generating a new data item, receiving an environmental sound, generating a sound tag associated with the new data item by extracting a sound feature from the environmental sound, and grouping the new data item with at least one of the plurality of data items based on the sound tags associated with the new data item and the plurality of data items.