Abstract:
A system and method for providing Web site navigation recommendations is provided. A Web page of interest is identified as a destination Web page. A domain of Web pages related to the destination Web page is determined. Information is extracted from each Web page in the domain and a recommendation comprising instructions for navigating to the destination Web page is generated based on the extracted information.
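The abstract does not name a specific algorithm, so the following is a minimal sketch of one plausible reading: the domain of related pages is modeled as an in-memory link graph (the DOMAIN dictionary, its page paths and anchor texts, and the navigation_recommendation helper are all hypothetical), the information extracted from each page is its outgoing links, and the generated recommendation is the sequence of clicks along a shortest path to the destination page.
```python
from collections import deque

# Toy "domain" of related pages: each page maps to the outgoing links it
# contains, where a link is (anchor text, target page).  In a real system
# these would be extracted by fetching and parsing each page's HTML.
DOMAIN = {
    "/":          [("Products", "/products"), ("Support", "/support")],
    "/products":  [("Printers", "/products/printers"), ("Home", "/")],
    "/products/printers": [("Model X-500 Spec Sheet", "/products/printers/x500")],
    "/support":   [("Contact us", "/support/contact"), ("Home", "/")],
    "/products/printers/x500": [],
    "/support/contact": [],
}

def navigation_recommendation(domain, start, destination):
    """Return step-by-step click instructions from `start` to `destination`,
    found by breadth-first search over the links extracted from each page."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        page, steps = queue.popleft()
        if page == destination:
            return steps
        for anchor, target in domain.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, steps + [f'On {page}, click "{anchor}".']))
    return None  # destination not reachable from the start page

if __name__ == "__main__":
    for step in navigation_recommendation(DOMAIN, "/", "/products/printers/x500"):
        print(step)
```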
Abstract:
Embodiments of the present invention provide a system for automatically extracting conversational structure from a voice record based on lexical and acoustic features. The system also aggregates business-relevant statistics and entities from a collection of spoken conversations. The system may infer a coarse-level conversational structure based on fine-level activities identified from extracted acoustic features. The system improves significantly over previous systems by extracting structure based on both lexical and acoustic features. This enables extracting conversational structure on a larger scale and at a finer level of detail than previous systems, and can feed an analytics and business intelligence platform, e.g., for customer service phone calls. During operation, the system obtains a voice record. The system then extracts a lexical feature using automatic speech recognition (ASR). The system extracts an acoustic feature. The system then determines, via machine learning and based on the extracted lexical and acoustic features, a coarse-level structure of the conversation.
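As a hedged illustration of the described flow rather than the patented implementation: the per-segment feature columns below stand in for lexical features (which would come from an ASR transcript) and acoustic features (which would come from the raw audio), the labels and the coarse_structure helper are invented for the example, and scikit-learn's logistic regression stands in for whatever machine learning the system actually uses.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-segment feature vectors for training.
# Columns: [greeting-word count, problem-word count, mean pitch (norm.), pause ratio]
train_X = np.array([
    [3, 0, 0.9, 0.1],   # opening
    [2, 0, 0.8, 0.2],   # opening
    [0, 4, 0.5, 0.3],   # issue description
    [0, 5, 0.4, 0.4],   # issue description
    [1, 1, 0.6, 0.6],   # wrap-up / resolution
    [2, 0, 0.7, 0.7],   # wrap-up / resolution
])
train_y = np.array(["opening", "opening", "issue", "issue", "wrapup", "wrapup"])

# Train a simple classifier that maps segment features to an activity label.
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

def coarse_structure(segment_features):
    """Label each segment, then collapse runs of identical labels into a
    coarse-level structure for the whole conversation."""
    labels = clf.predict(np.asarray(segment_features))
    structure = []
    for label in labels:
        if not structure or structure[-1] != label:
            structure.append(label)
    return structure

if __name__ == "__main__":
    call = [[2, 0, 0.85, 0.15], [0, 4, 0.45, 0.35], [0, 5, 0.4, 0.4], [1, 0, 0.7, 0.65]]
    print(coarse_structure(call))   # e.g. ['opening', 'issue', 'wrapup']
```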
Abstract:
Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is used in a processor to detect misalignments in subsequent digitized conversations.
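Again as a rough sketch rather than the claimed method: the cue names, numeric values, and the detect_misalignment helper below are assumptions, and a scikit-learn random forest stands in for the unspecified machine learning module; only the overall flow (cues as training data, a trained model applied to later conversations) follows the abstract.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical cue vectors extracted from recorded conversations.
# Columns: [interruption rate, speech-overlap ratio, topic-shift count, silence ratio]
cues = np.array([
    [0.05, 0.02, 1, 0.10],   # aligned conversation
    [0.08, 0.05, 2, 0.15],   # aligned conversation
    [0.40, 0.30, 6, 0.35],   # misaligned conversation
    [0.55, 0.25, 8, 0.40],   # misaligned conversation
])
labels = np.array([0, 0, 1, 1])   # 1 = misalignment observed

# The cues serve as training data for a machine learning module.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(cues, labels)

def detect_misalignment(new_cues):
    """Apply the trained model to cues from a subsequent conversation."""
    return bool(model.predict(np.asarray(new_cues).reshape(1, -1))[0])

if __name__ == "__main__":
    print(detect_misalignment([0.45, 0.28, 7, 0.38]))   # likely True
    print(detect_misalignment([0.06, 0.03, 1, 0.12]))   # likely False
```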
Abstract:
A system and method for enhancing visual representation to individuals participating in a conversation is provided. Visual data for a plurality of individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated. Each possible conversational configuration includes one or more pair-wise probabilities of at least two of the individuals. A probability weight is assigned to each of the pair-wise probabilities, representing the likelihood that the individuals of that pair-wise probability are participating in a conversation. A probability of each possible conversational configuration is determined by combining the probability weights for the pair-wise probabilities of that configuration. The possible conversational configuration with the highest probability is selected as the most probable configuration. The individuals participating in the conversations are then determined based on the pair-wise probabilities of the most probable configuration. Visual representation is enhanced for each individual participating in the determined conversations.
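The combination step lends itself to a small worked example. In the sketch below (the names, probability values, and helper functions are all hypothetical), a conversational configuration is modeled as a partition of the individuals into groups; each pair-wise probability contributes its weight when the pair shares a group and its complement otherwise, and the configuration with the highest combined probability is selected.
```python
def partitions(items):
    """Yield every partition of `items` into non-empty groups
    (each partition is one candidate conversational configuration)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        yield [[first]] + smaller                 # `first` starts a new group
        for i, group in enumerate(smaller):       # or joins an existing group
            yield smaller[:i] + [[first] + group] + smaller[i + 1:]

def configuration_probability(partition, pairwise):
    """Combine pair-wise weights: a pair in the same group contributes p,
    a pair split across groups contributes (1 - p)."""
    group_of = {person: gi for gi, group in enumerate(partition) for person in group}
    prob = 1.0
    for (a, b), p in pairwise.items():
        prob *= p if group_of[a] == group_of[b] else (1.0 - p)
    return prob

# Hypothetical pair-wise probabilities, e.g. derived from mutual gaze or
# proximity observed in the analyzed visual data.
pairwise = {
    ("Ann", "Bob"): 0.90,
    ("Ann", "Cat"): 0.20,
    ("Bob", "Cat"): 0.10,
    ("Ann", "Dee"): 0.10,
    ("Bob", "Dee"): 0.15,
    ("Cat", "Dee"): 0.80,
}

people = ["Ann", "Bob", "Cat", "Dee"]
best = max(partitions(people), key=lambda part: configuration_probability(part, pairwise))
print(best)   # highest-scoring grouping, e.g. [['Ann', 'Bob'], ['Cat', 'Dee']] (group order may vary)
```
Individuals placed in the same group of the selected configuration would then be treated as conversation participants whose visual representation is enhanced.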