Abstract:
Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users.
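A minimal illustrative sketch of the idea described above, not the patented implementation: entities from a user's query log are mapped onto a knowledge graph, and a simple per-user unigram language model is trained from the queries that mention those entities. The data shapes (a list of query strings and a set of entity names) and the substring matching are assumptions made for demonstration.

```python
from collections import Counter

def personalize_knowledge_graph(query_log, knowledge_graph_entities):
    """Return the set of graph entities mentioned in the user's query log (usage history)."""
    personal = set()
    for query in query_log:
        for entity in knowledge_graph_entities:  # entity names assumed to be plain strings
            if entity.lower() in query.lower():
                personal.add(entity)
    return personal

def train_personal_lm(query_log, personal_entities):
    """Unigram model over queries that mention at least one personalized entity."""
    counts = Counter()
    for query in query_log:
        if any(e.lower() in query.lower() for e in personal_entities):
            counts.update(query.lower().split())
    total = sum(counts.values()) or 1
    return {word: c / total for word, c in counts.items()}
```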
Abstract:
Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
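A hedged sketch of the two-path intent assignment outlined above. Queries with a known intent label (e.g., derived from a knowledge graph) are labeled deterministically; the remaining queries are assigned the intent of the labeled query with which they share the most clicked URLs. The dictionary formats for known_labels and click_log are assumptions, not the patent's data format.

```python
def assign_intents(queries, known_labels, click_log):
    """known_labels: query -> intent label; click_log: query -> set of clicked URLs."""
    intents = {}
    for q in queries:
        if q in known_labels:                      # deterministic path: known intent label
            intents[q] = known_labels[q]
        else:                                      # inferred path: overlap in selection log data
            clicked = click_log.get(q, set())
            best, overlap = None, 0
            for labeled_q, intent in known_labels.items():
                shared = len(clicked & click_log.get(labeled_q, set()))
                if shared > overlap:
                    best, overlap = intent, shared
            intents[q] = best                      # None here flags a candidate new intent
    return intents
```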
Abstract:
Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user interaction data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, in which entity popularities are represented; the popularity information is obtained from data sources related to the knowledge graph. Language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
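A minimal sketch of the popularity-weighting step, assuming single-token entity names and an external popularity count per entity: graph entities receive normalized popularity weights, and candidate training sentences contribute n-gram counts weighted by the most popular entity they mention. These simplifications are for illustration only.

```python
from collections import Counter

def popularity_weights(entities, popularity_counts):
    """popularity_counts: entity -> raw count from an external data source (add-one smoothed)."""
    total = sum(popularity_counts.get(e, 0) + 1 for e in entities)
    return {e: (popularity_counts.get(e, 0) + 1) / total for e in entities}

def weighted_ngram_counts(sentences, entity_weights, n=2):
    """Accumulate n-gram counts, weighting each sentence by its best-matching entity."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        w = max((entity_weights.get(t, 0.0) for t in tokens), default=0.0)
        if w == 0.0:
            continue                               # skip sentences that mention no graph entity
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += w
    return counts
```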
Abstract:
This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
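An illustrative NL-versus-KL classifier sketch using scikit-learn. The feature set (query length and function-word ratio) and the binary labels are assumptions chosen for demonstration; the disclosure itself derives training labels from query click log data.

```python
from sklearn.linear_model import LogisticRegression

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "is", "what", "how", "where", "who"}

def features(query):
    tokens = query.lower().split()
    fw = sum(t in FUNCTION_WORDS for t in tokens)
    return [len(tokens), fw / max(len(tokens), 1)]

def train_nl_kl_classifier(queries, labels):
    """labels: 1 for natural-language (NL) items, 0 for keyword-language (KL) items."""
    X = [features(q) for q in queries]
    return LogisticRegression().fit(X, labels)

def filter_nl(clf, queries):
    """Keep only queries scored as NL, e.g. to assemble SLU training data."""
    return [q for q in queries if clf.predict([features(q)])[0] == 1]
```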
Abstract:
A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
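A minimal PyTorch sketch of the DSSM idea, under stated assumptions: a shared feed-forward tower maps letter-trigram input vectors into a semantic space, documents are ranked by cosine similarity to the query, and training maximizes a softmax-normalized likelihood of the clicked document against sampled non-clicked documents. The layer sizes, trigram dimensionality, and smoothing factor gamma are assumptions, not values from the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSSMTower(nn.Module):
    def __init__(self, trigram_dim=30000, hidden=300, semantic_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(trigram_dim, hidden), nn.Tanh(),   # dimensionality reduction
            nn.Linear(hidden, semantic_dim), nn.Tanh(),  # projection into the semantic space
        )

    def forward(self, x):
        return self.net(x)

def rank_documents(tower, query_vec, doc_vecs):
    """Cosine similarity in the learned semantic space; query_vec is (1, trigram_dim)."""
    q = tower(query_vec)                     # (1, semantic_dim)
    d = tower(doc_vecs)                      # (num_docs, semantic_dim)
    return F.cosine_similarity(q, d)         # (num_docs,)

def clickthrough_loss(tower, query_vec, clicked_vec, negative_vecs, gamma=10.0):
    """Softmax over scaled cosine similarities; class 0 is the clicked document."""
    docs = torch.cat([clicked_vec, negative_vecs], dim=0)
    sims = gamma * rank_documents(tower, query_vec, docs)
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```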
Abstract:
Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher-level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments, including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or “turns” from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user.
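A hedged sketch of the turn-level interpolation described above: intent-to-intent transition counts are estimated from session logs, and the per-intent language models for the likely next intents are mixed using those transition probabilities as interpolation weights. The session-log format and the per-intent unigram models are assumptions for illustration.

```python
from collections import Counter, defaultdict

def intent_transitions(session_logs):
    """session_logs: list of intent sequences, e.g. [["find_movie", "buy_ticket"], ...]."""
    trans = defaultdict(Counter)
    for session in session_logs:
        for prev, nxt in zip(session, session[1:]):
            trans[prev][nxt] += 1
    return trans

def interpolate_next_turn_lm(prev_intent, trans, intent_lms):
    """intent_lms: intent -> {word: probability}; returns the interpolated model for the next turn."""
    counts = trans.get(prev_intent, Counter())
    total = sum(counts.values()) or 1
    mixed = defaultdict(float)
    for intent, c in counts.items():
        weight = c / total                    # transition probability as interpolation weight
        for word, p in intent_lms.get(intent, {}).items():
            mixed[word] += weight * p
    return dict(mixed)
```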