Abstract:
This relates to intelligent automated assistants and, more specifically, to intelligent context sharing and task performance among a collection of devices with intelligent automated assistant capabilities. An example method includes, at a first electronic device participating in a context-sharing group associated with a first location: receiving a user voice input; receiving, from a context collector, an aggregate context of the context-sharing group; providing at least a portion of the aggregate context and data corresponding to the user voice input to a remote device; receiving, from the remote device, a command to perform one or more tasks and a device identifier corresponding to a second electronic device; and transmitting the command to the second electronic device based on the device identifier, wherein the command causes the second electronic device to perform the one or more tasks.
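Read as a data flow, the method pairs a voice input with the group's aggregate context, sends both to a remote device, and routes the returned command to whichever group member matches the returned device identifier. The Swift sketch below illustrates one possible shape of that flow; the ContextCollector, RemoteService, and PeerDevice types and their members are hypothetical illustrations, not names from the disclosure.

```swift
// Minimal sketch of the described routing flow; all type and member names are assumptions.
import Foundation

struct DeviceContext { let deviceID: String; let state: [String: String] }
struct AggregateContext { let entries: [DeviceContext] }
struct Command { let tasks: [String] }

protocol ContextCollector {
    // The context collector maintains the group's aggregate context.
    func aggregateContext() -> AggregateContext
}

protocol RemoteService {
    // Returns a command to perform plus the identifier of the device that should run it.
    func resolve(voiceInput: Data, context: AggregateContext) -> (command: Command, targetDeviceID: String)
}

protocol PeerDevice {
    var deviceID: String { get }
    func perform(_ command: Command)
}

final class FirstDevice {
    let collector: ContextCollector
    let remote: RemoteService
    let peers: [PeerDevice]   // other members of the context-sharing group

    init(collector: ContextCollector, remote: RemoteService, peers: [PeerDevice]) {
        self.collector = collector
        self.remote = remote
        self.peers = peers
    }

    // Receive a user voice input, pair it with at least a portion of the
    // aggregate context, and route the resulting command to the device
    // matching the returned device identifier.
    func handle(voiceInput: Data) {
        let context = collector.aggregateContext()
        let (command, targetID) = remote.resolve(voiceInput: voiceInput, context: context)
        peers.first { $0.deviceID == targetID }?.perform(command)
    }
}
```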
Abstract:
The present disclosure generally relates to using voice interaction to access call functionality of a companion device. In an example process, a user utterance is received. Based on the user utterance and contextual information, the process causes a server to determine a user intent corresponding to the user utterance. The contextual information is based on a signal received from the companion device. In accordance with the user intent corresponding to an actionable intent of answering an incoming call on the companion device, a command is received. Based on the command, instructions are provided to the companion device, which cause the companion device to answer the incoming call and provide audio data of the answered incoming call. Audio is outputted according to the audio data of the answered incoming call.
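The process can be pictured as a thin client that forwards the utterance and companion-derived context to a server and acts only when the returned intent is the answer-call intent. The following Swift sketch is a minimal illustration under that reading; the AssistantServer, CompanionDevice, and AudioOutput protocols and the intent and command names are assumptions, not the disclosed interfaces.

```swift
// Minimal sketch, assuming hypothetical server and companion-device interfaces.
import Foundation

enum UserIntent { case answerIncomingCall, other }
struct AnswerCallCommand {}

protocol AssistantServer {
    func determineIntent(utterance: Data, context: [String: String]) -> UserIntent
    func command(for intent: UserIntent) -> AnswerCallCommand?
}

protocol CompanionDevice {
    // Signal received from the companion device (e.g. an incoming-call notice) used as context.
    func currentSignal() -> [String: String]
    // Answers the call and returns audio data of the answered incoming call.
    func answerIncomingCall() -> Data
}

protocol AudioOutput {
    func play(_ audio: Data)
}

func handleUtterance(_ utterance: Data,
                     server: AssistantServer,
                     companion: CompanionDevice,
                     speaker: AudioOutput) {
    // Contextual information is derived from a signal received from the companion device.
    let context = companion.currentSignal()
    let intent = server.determineIntent(utterance: utterance, context: context)

    // Only when the intent corresponds to answering the incoming call does the
    // process receive a command and instruct the companion device.
    guard intent == .answerIncomingCall, server.command(for: intent) != nil else { return }

    let callAudio = companion.answerIncomingCall()
    speaker.play(callAudio)
}
```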
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. An example method includes receiving, from one or more external electronic devices, a plurality of speaker profiles for a plurality of users; receiving a natural language speech input; determining, based on comparing the natural language speech input to the plurality of speaker profiles: a first likelihood that the natural language speech input corresponds to a first user of the plurality of users; and a second likelihood that the natural language speech input corresponds to a second user of the plurality of users; determining whether the first likelihood and the second likelihood are within a first threshold; and in accordance with determining that the first likelihood and the second likelihood are not within the first threshold: providing a response to the natural language speech input, the response being personalized for the first user.
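The personalization decision reduces to comparing the top two speaker likelihoods against a threshold. Below is a minimal Swift sketch of that comparison; the SpeakerProfile type and the cosine-similarity likelihood are placeholders, since the abstract does not specify how the likelihoods are computed.

```swift
// Minimal sketch: pick a speaker to personalize for only when the top two
// likelihoods are clearly separated. Types and scoring are illustrative.
import Foundation

struct SpeakerProfile { let userID: String; let embedding: [Double] }

// Illustrative likelihood: cosine similarity between a speech embedding and a profile embedding.
func likelihood(of speech: [Double], given profile: SpeakerProfile) -> Double {
    let dot = zip(speech, profile.embedding).map { $0 * $1 }.reduce(0, +)
    let speechNorm = sqrt(speech.map { $0 * $0 }.reduce(0, +))
    let profileNorm = sqrt(profile.embedding.map { $0 * $0 }.reduce(0, +))
    return dot / max(speechNorm * profileNorm, .ulpOfOne)
}

// Returns the userID to personalize for, or nil when the two highest likelihoods
// are within the threshold and the speaker is therefore ambiguous.
func identifySpeaker(speech: [Double],
                     profiles: [SpeakerProfile],
                     threshold: Double) -> String? {
    let scored = profiles
        .map { (profile: $0, score: likelihood(of: speech, given: $0)) }
        .sorted { $0.score > $1.score }
    guard let best = scored.first else { return nil }
    let runnerUp = scored.dropFirst().first?.score ?? -Double.infinity
    // Personalize only when the first and second likelihoods are NOT within the threshold.
    return (best.score - runnerUp) > threshold ? best.profile.userID : nil
}
```

When the two likelihoods are within the threshold, the sketch returns nil, which corresponds to the case where a personalized response is not warranted.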
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a first instance of a digital assistant operating on a first electronic device receives a natural-language speech input indicative of a user request. The first electronic device obtains a set of data corresponding to a second instance of the digital assistant on a second electronic device, and updates one or more settings of the first instance of the digital assistant based on the received set of data. The first instance of the digital assistant performs one or more tasks based on the updated one or more settings and provides an output indicative of whether the one or more tasks are performed.
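In outline, the first instance merges settings received from the second instance before performing the requested tasks and reporting the outcome. The Swift sketch below shows one way such an update-then-perform step could look; the AssistantSettings fields, the merge rule, and the task representation are illustrative assumptions.

```swift
// Minimal sketch of syncing settings between assistant instances; field names and merge rule are assumed.
struct AssistantSettings {
    var preferredLanguage: String
    var favoriteContacts: [String]
}

final class DigitalAssistantInstance {
    private(set) var settings: AssistantSettings

    init(settings: AssistantSettings) { self.settings = settings }

    // Update this instance's settings from the set of data obtained from the
    // second instance of the digital assistant on the second device.
    func update(from other: AssistantSettings) {
        settings.preferredLanguage = other.preferredLanguage
        settings.favoriteContacts = Array(Set(settings.favoriteContacts).union(other.favoriteContacts)).sorted()
    }

    // Perform tasks using the updated settings and output whether they were performed.
    func perform(tasks: [(AssistantSettings) -> Bool]) -> Bool {
        let allPerformed = tasks.allSatisfy { $0(settings) }
        print(allPerformed ? "Tasks performed." : "Some tasks could not be performed.")
        return allPerformed
    }
}
```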
Abstract:
A virtual assistant uses context information to supplement natural language or gestural input from a user. Context helps to clarify the user's intent, reduces the number of candidate interpretations of the user's input, and reduces the need for the user to provide excessive clarification input. Context can include any available information that is usable by the assistant to supplement explicit user input to constrain an information-processing problem and/or to personalize results. Context can be used to constrain solutions during various phases of processing, including, for example, speech recognition, natural language processing, task flow processing, and dialog generation.
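One concrete way to read "constraining solutions" is as re-ranking candidate interpretations so that those consistent with the current context score higher. The Swift sketch below shows that idea for a single processing phase; the CandidateInterpretation and ContextInfo types and the scoring boost are hypothetical, and the disclosure applies context across speech recognition, natural language processing, task flow processing, and dialog generation rather than only re-ranking.

```swift
// Minimal sketch of context-constrained interpretation; all names and weights are assumptions.
import Foundation

struct CandidateInterpretation { let text: String; let score: Double }

struct ContextInfo {
    let recentContacts: [String]   // e.g. people the user has messaged recently
    let foregroundApp: String?     // e.g. the application currently in use
}

// Re-rank candidate interpretations so that those consistent with the current
// context score higher, reducing the need to ask the user for clarification.
func constrain(_ candidates: [CandidateInterpretation],
               with context: ContextInfo) -> [CandidateInterpretation] {
    return candidates
        .map { (candidate: CandidateInterpretation) -> CandidateInterpretation in
            var boost = 0.0
            // Favor interpretations that mention a recently used contact.
            if context.recentContacts.contains(where: { candidate.text.contains($0) }) {
                boost += 0.2
            }
            return CandidateInterpretation(text: candidate.text, score: candidate.score + boost)
        }
        .sorted { $0.score > $1.score }
}
```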