Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for displaying information that includes a response to a query received by a device. The device receives a follow-on query for an electronic conversation about a certain topic and generates a transcription of the follow-on query. The device provides the transcription and data about the conversation to respective classifier modules of an assistant module. The assistant module uses a particular classifier module to identify the follow-on query as a query that corresponds to the topic, a query that deviates from the topic, or a query that is unrelated to the topic. The assistant module selects a template for displaying information that includes a response to the follow-on voice query after the device causes information to be displayed that includes a response to a preceding voice query.
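A minimal sketch of the classification-and-template flow this abstract describes. The three labels come from the abstract; the term-overlap heuristic, thresholds, and template names are illustrative assumptions, not the patent's method.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    topic: str
    topic_terms: set = field(default_factory=set)  # entities/terms tied to the topic

def classify_follow_on(transcription: str, conversation: Conversation) -> str:
    """Label a follow-on query transcription with one of the abstract's three
    categories, using a toy word-overlap heuristic (assumption)."""
    words = set(transcription.lower().split())
    overlap = len(words & conversation.topic_terms)
    if overlap >= 2:
        return "corresponds"   # stays on the conversation topic
    if overlap == 1:
        return "deviates"      # touches the topic but shifts focus
    return "unrelated"

# Hypothetical template names keyed by classification.
TEMPLATES = {
    "corresponds": "compact_follow_on_card",
    "deviates": "expanded_context_card",
    "unrelated": "fresh_topic_card",
}

def select_template(transcription: str, conversation: Conversation) -> str:
    """Pick a display template for the response based on the classification."""
    return TEMPLATES[classify_follow_on(transcription, conversation)]
```

For example, in a conversation about weather, a follow-on about the rain forecast would be classified as corresponding to the topic and rendered with the compact follow-on template.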
Abstract:
Methods, apparatus, systems, and computer-readable media are provided for: operating an instance of a personal assistant module to serve a user while the user operates the client computing device, wherein the instance of the personal assistant module has access to a persistent record of a message exchange thread between the user and instance(s) of the personal assistant module; detecting cue(s) emanating from the persistent message exchange thread; identifying candidate application(s) that are installed on a client computing device operated by the user, wherein the candidate application(s) are associated with content of the message exchange thread; and incorporating, into a transcript of the message exchange thread that is displayed in a graphical user interface rendered on the client computing device, selectable link(s) operable by the user to cause the client computing device to expose to the user an interface associated with a respective candidate application.
Abstract:
Methods, apparatus, systems, and computer-readable media are provided for incorporating application links into message exchange threads. One or more cues emanating from a message exchange thread involving two or more message exchange clients may be detected. The one or more cues may trigger incorporation, into the message exchange thread, of a selectable link to a distinct application. At least one candidate application that is installed on a given client computing device operated by a message exchange thread participant may be identified. The candidate application may be associated with content of the message exchange thread. A selectable link may be incorporated into a transcript of the message exchange thread displayed in a graphical user interface of a message exchange client operating on the given client computing device. The selectable link may be operable by the participant to expose to the participant an interface associated with a respective candidate application.
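A hedged sketch of the flow in this abstract: detect cues in thread messages, match them to applications installed on the participant's device, and append a selectable link to the transcript. The cue-to-application table and all names are illustrative assumptions.

```python
# Hypothetical cue words mapped to application identifiers (assumption).
CUES = {"directions": "maps_app", "reservation": "dining_app", "movie": "tickets_app"}

def detect_cues(messages):
    """Return application identifiers triggered by cue words in the thread."""
    found = []
    for msg in messages:
        for cue, app in CUES.items():
            if cue in msg.lower() and app not in found:
                found.append(app)
    return found

def incorporate_links(transcript, messages, installed_apps):
    """Append a selectable link for each cued app that is actually installed
    on the participant's device; the link would open that app's interface."""
    for app in detect_cues(messages):
        if app in installed_apps:
            transcript.append(f"[open {app}]")  # stand-in for a selectable link element
    return transcript
```

A thread in which one participant mentions needing a reservation would thus gain a link to the installed dining application, while devices without that application would see no link.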
Abstract:
Methods, apparatus, and computer-readable media (transitory and non-transitory) are provided herein for reducing latency caused by switching input modalities. In various implementations, a first input such as text input may be received at a first modality of a multimodal interface provided by an electronic device. In response to determination that the first input satisfies one or more criteria, the electronic device may preemptively establish a session between the electronic device and a query processor configured to process input received at a second modality (e.g., voice input) of the multimodal interface. In various implementations, the electronic device may receive a second input (e.g., voice input) at the second modality of the multimodal interface, initiate processing of at least a portion of the second input at the query processor within the session, and build a complete query based on output from the query processor.
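An illustrative sketch of the latency-reduction idea: when typed input at the first modality satisfies a criterion, the device preemptively opens a session with the voice-query processor so that later voice input is processed without session-setup delay. The `QueryProcessor` stub, the trailing-space criterion, and uppercase "recognition" are all assumptions for illustration.

```python
class QueryProcessor:
    """Stub for a remote processor that handles second-modality (voice) input."""
    def __init__(self):
        self.sessions = 0
    def open_session(self):
        self.sessions += 1
        return {"id": self.sessions, "open": True}
    def process(self, session, audio_chunk):
        assert session["open"]
        return audio_chunk.upper()  # stand-in for speech recognition

class MultimodalInterface:
    def __init__(self, processor):
        self.processor = processor
        self.session = None
    def on_text_input(self, text):
        # Assumed criterion: a trailing space hints the user may switch to
        # voice to finish the query, so establish the session preemptively.
        if text.endswith(" ") and self.session is None:
            self.session = self.processor.open_session()
    def on_voice_input(self, audio_chunks, prefix):
        if self.session is None:  # fallback: no preemptive session existed
            self.session = self.processor.open_session()
        parts = [self.processor.process(self.session, c) for c in audio_chunks]
        return prefix + " ".join(parts)  # build the complete query
```

The point of the design is that by the time voice input arrives, the session already exists, so no extra round trip is needed at the modality switch.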
Abstract:
Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, in an existing human-to-computer dialog session between a user and an automated assistant, it may be determined that the automated assistant has responded to all natural language input received from the user. Based on characteristic(s) of the user, information of potential interest to the user or action(s) of potential interest to the user may be identified. Unsolicited content indicative of the information of potential interest to the user or the action(s) may be generated and incorporated by the automated assistant into the existing human-to-computer dialog session. In various implementations, the incorporating may be performed in response to the determining that the automated assistant has responded to all natural language input received from the user during the human-to-computer dialog session.
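A minimal sketch of the proactive step this abstract describes: once the assistant has responded to all pending natural language input in the session, it may inject unsolicited content selected from the user's characteristics. The trait-to-content table and the echo response are made-up stand-ins.

```python
# Hypothetical mapping from a user characteristic to unsolicited content (assumption).
INTERESTS = {"sports_fan": "By the way, your team plays tonight at 7."}

class DialogSession:
    def __init__(self, user_traits):
        self.user_traits = user_traits
        self.pending_inputs = []
        self.transcript = []
    def user_says(self, text):
        self.pending_inputs.append(text)
        self.transcript.append(("user", text))
    def assistant_respond_all(self):
        """Respond to every pending input; only once all input is answered,
        incorporate unsolicited content of potential interest to the user."""
        while self.pending_inputs:
            text = self.pending_inputs.pop(0)
            self.transcript.append(("assistant", f"echo: {text}"))
        for trait in self.user_traits:
            if trait in INTERESTS:
                self.transcript.append(("assistant", INTERESTS[trait]))
```

The ordering constraint from the abstract is captured by placing the proactive injection after the loop that drains pending user input.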
Abstract:
Implementations include actions of obtaining a set of entities based on one or more terms of a query, obtaining one or more entities associated with each live event of a plurality of live events, identifying a live event that is responsive to the query based on comparing at least one entity in the set of entities to the one or more entities associated with each live event of the plurality of live events, determining that an event search result corresponding to the live event is to be displayed in search results, and in response, providing the event search result for display, the event search result including information associated with the live event, the information including an indicator of an occurrence of the live event.
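A sketch of the entity-comparison step the abstract describes: link query terms to entities, intersect that set with each live event's entities, and surface an event result with a live indicator when there is a match. The toy lexicon, overlap scoring, and result-card fields are assumptions.

```python
def query_entities(query, entity_lexicon):
    """Map query terms to known entities (toy entity linker)."""
    return {entity_lexicon[t] for t in query.lower().split() if t in entity_lexicon}

def find_live_event(query, live_events, entity_lexicon):
    """Return the live event whose entity set best overlaps the query's entities."""
    q_entities = query_entities(query, entity_lexicon)
    best, best_overlap = None, 0
    for event in live_events:
        overlap = len(q_entities & event["entities"])
        if overlap > best_overlap:
            best, best_overlap = event, overlap
    return best

def event_search_result(event):
    """Build a result including an indicator that the event is occurring now."""
    return {"title": event["title"], "live_indicator": event["status"] == "in_progress"}
```

A query naming two teams would therefore match an in-progress game between them and produce a search result flagged as live.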
Abstract:
Implementations of the present disclosure include actions of providing first text for display on a computing device of a user, the first text being provided from a first speech recognition engine based on first speech received from the computing device, and being displayed as a search query, receiving a speech correction indication from the computing device, the speech correction indication indicating a portion of the first text that is to be corrected, receiving second speech from the computing device, receiving second text from a second speech recognition engine based on the second speech, the second speech recognition engine being different from the first speech recognition engine, replacing the portion of the first text with the second text to provide a combined text, and providing the combined text for display on the computing device as a revised search query.
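A hedged sketch of the correction flow: the user indicates a span of the first transcription to correct, a second, different recognition engine transcribes the replacement speech, and that span is spliced out to form the revised query. Both recognizers here are stubs; the abstract does not specify their implementations, and the span representation is an assumption.

```python
def first_recognizer(audio):
    """Stub for the first, general-purpose speech recognition engine."""
    return audio

def second_recognizer(audio):
    """Stub for a second, distinct engine used only for the correction."""
    return audio.strip().lower()

def revise_query(first_text, span, second_audio):
    """Replace first_text[start:end] (the portion the user marked for
    correction) with the second engine's transcription of the new speech."""
    start, end = span
    second_text = second_recognizer(second_audio)
    return first_text[:start] + second_text + first_text[end:]
```

Using a separate engine for the correction is the notable design choice: the replacement recognizer can be biased toward short utterances or toward alternatives to the misrecognized span.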