Abstract:
A collaboration system includes a stream processing engine and a Bot subsystem. The stream processing engine performs cognitive processing of multimodal input streams originated at one or more user devices in a communication session supported by a collaboration service to derive user-intent-based user requests and transmit the user requests over one or more networks. The Bot subsystem includes a stream receptor that directs the multimodal input streams from the user devices to the stream processing engine to enable the stream processing engine to derive the user requests. The Bot subsystem also includes a cognitive action interpreter to translate the user requests to action requests and issue the action requests to the collaboration service so as to initiate actions with respect to the communication session. The Bot subsystem also includes a cognitive responder to transmit, in response to the user requests, multimodal user responses to the one or more user devices.
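As a rough illustration, the following Python sketch shows how the three Bot-subsystem components described above might be wired together. All class names, method names, and request fields (e.g. "intent", "session_id") are hypothetical, since the abstract does not specify an implementation.

    # Hypothetical wiring of the Bot subsystem; names are illustrative only.
    class StreamReceptor:
        def __init__(self, engine):
            self.engine = engine

        def on_stream(self, stream):
            # Direct the multimodal input stream to the stream processing
            # engine, which derives a user-intent-based user request.
            return self.engine.derive_user_request(stream)

    class CognitiveActionInterpreter:
        def to_action_request(self, user_request):
            # Translate a user request into an action request for the service.
            return {"action": user_request["intent"],
                    "session": user_request["session_id"]}

    class BotSubsystem:
        def __init__(self, engine, collaboration_service, responder):
            self.receptor = StreamReceptor(engine)
            self.interpreter = CognitiveActionInterpreter()
            self.service = collaboration_service
            self.responder = responder

        def handle(self, stream):
            request = self.receptor.on_stream(stream)
            self.service.issue(self.interpreter.to_action_request(request))
            self.responder.respond(request)  # multimodal response to device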
Abstract:
The present technology is an automatically configuring virtual assistant. The virtual assistant is assigned to an existing conversation space and, based on an analysis of the conversation space to which it has been assigned, is associated with at least one contextual cue of that conversation space. The analysis includes natural language processing of a title, a topic, or a past conversation of the existing conversation space in order to determine the at least one contextual cue.
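A minimal sketch of how the contextual-cue analysis might look, with simple keyword extraction standing in for the natural language processing; the function name, stopword list, and example inputs are illustrative assumptions.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on"}

    def contextual_cues(title, topic, past_messages, top_n=3):
        """Derive contextual cues for a conversation space by extracting the
        most frequent non-stopword terms from its title, topic, and history."""
        text = " ".join([title, topic] + past_messages).lower()
        words = [w for w in re.findall(r"[a-z0-9']+", text) if w not in STOPWORDS]
        return [word for word, _ in Counter(words).most_common(top_n)]

    # Example: cues for a hypothetical space titled "Q3 Budget Planning".
    print(contextual_cues("Q3 Budget Planning", "finance",
                          ["Let's review the budget forecast for Q3."]))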
Abstract:
Presented herein are techniques and systems associated with generating a predicted utilization likelihood for a shared collaboration resource. Integrated resource data associated with a meeting scheduled for a shared collaboration resource is obtained and analyzed using a machine-learned predictive model. The analysis generates a predicted utilization likelihood of the shared resource. An indication of the predicted utilization likelihood is provided to an output system, such as a graphical user interface.
Abstract:
A mobile device or a server may be configured to automatically define a customized mute status. Data indicative of a physical movement of the mobile device is received. In response, the mobile device is monitored to determine whether one or more notifications are received at the mobile device and whether an action responsive to the one or more notifications is taken at the mobile device. When no responsive action is taken, a customized mute status for the mobile device is defined or stored.
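One way the monitoring logic might be sketched, assuming a hypothetical device interface exposing movement, notification, and responsive-action events; the ten-minute observation window is likewise an illustrative choice.

    import time

    def observe_mute_pattern(device, window_s=600):
        """After a movement event, watch whether notifications go unanswered.

        `device` is a hypothetical interface; if notifications arrive during
        the window and no responsive action is taken, return a customized
        mute status to be stored."""
        if not device.movement_detected():
            return None
        deadline = time.time() + window_s
        notified = responded = False
        while time.time() < deadline:
            notified = notified or device.notification_received()
            responded = responded or device.responsive_action_taken()
            time.sleep(1)
        if notified and not responded:
            return {"mute": True, "context": device.current_context()}
        return None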
Abstract:
A video conference endpoint includes a microphone array to detect ultrasound that is transmitted by a user device and encoded with a user identity. The endpoint determines a position of the user device relative to the microphone array based on the detected ultrasound and recovers the user identity from the detected ultrasound. The microphone array also detects audio in an audio frequency range perceptible to humans from an active talker, and the endpoint determines a position of the active talker relative to the microphone array based on the detected audio. The endpoint determines whether the position of the active talker and the position of the user device are within a predetermined positional range of each other. If the positions are within the predetermined positional range of each other, the endpoint assigns the user identity to the active talker.
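The position comparison could look roughly like the following, assuming each position estimate is an (x, y) coordinate in metres relative to the microphone array; the distance threshold and identity value are illustrative.

    import math

    def assign_identity(talker_pos, device_pos, identity, max_distance=0.5):
        """Assign the recovered user identity to the active talker when the
        talker and device position estimates fall within a predetermined
        range of each other. The 0.5 m threshold is an assumed value."""
        dx = talker_pos[0] - device_pos[0]
        dy = talker_pos[1] - device_pos[1]
        if math.hypot(dx, dy) <= max_distance:
            return identity
        return None  # positions too far apart; leave the talker unidentified

    print(assign_identity((1.0, 2.0), (1.2, 2.1), "alice@example.com"))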
Abstract:
In one embodiment, a collaboration node prioritizes each modality of communication accessible by at least a first user and a second user based on one or more communication characteristics in a collaboration profile, monitors communication characteristics of a communication session conducted in a first modality of communication between the first user and the second user, and determines a second modality of communication accessible to the first user and the second user having a higher priority than the first modality of communication based on the collaboration profile and the communication characteristics for the communication session. The collaboration node further notifies at least one of the first user or the second user when the second modality of communication has the higher priority than the first modality of communication.
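A toy sketch of the re-prioritization step, assuming a collaboration profile that maps modalities to base priorities and live session statistics such as packet loss; the scoring weights and characteristic names are assumptions.

    def prioritize(modalities, profile, session_stats):
        """Rank modalities by profile priority, penalized by monitored
        communication characteristics of the ongoing session."""
        def score(m):
            return (profile.get(m, 0)
                    - session_stats.get(m, {}).get("packet_loss", 0) * 10)
        return sorted(modalities, key=score, reverse=True)

    current = "voice"
    ranked = prioritize(["voice", "video", "chat"],
                        profile={"video": 5, "voice": 4, "chat": 2},
                        session_stats={"voice": {"packet_loss": 0.3}})
    if ranked[0] != current:
        # A higher-priority modality is available: notify the users.
        print(f"Notify users: consider switching from {current} to {ranked[0]}")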
Abstract:
In one embodiment, a method for visualizing a multi-modal conversation on a computing device includes: storing conversation elements of at least two modes of the multi-modal conversation in a conversation container object, where the at least two modes represent at least two different types of communication or content shared by participants of the multi-modal conversation, and displaying a conversation channel as a progression of conversation tiles aligned according to a timeline, where the conversation channel represents the multi-modal conversation, and each of the conversation tiles represents one of the conversation elements.
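The conversation container and its timeline of tiles might be modeled along these lines; the class names, field names, and mode labels are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ConversationElement:
        mode: str          # e.g. "chat", "file", "call", "whiteboard"
        timestamp: datetime
        payload: str

    class ConversationContainer:
        """Stores conversation elements of all modes for one multi-modal
        conversation."""
        def __init__(self):
            self.elements = []

        def add(self, element):
            self.elements.append(element)

        def tiles(self):
            # One tile per element, aligned on a shared timeline for display
            # as a conversation channel.
            return sorted(self.elements, key=lambda e: e.timestamp)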
Abstract:
A scheduling request to schedule an online meeting is received. The online meeting involves a plurality of participants, multiple of whom may become presenters to present content during the online meeting. A presentation queue is generated that includes an ordering of presenters for the online meeting and associated time slots for each of the presenters during the online meeting. Content to display the presentation queue is sent to each of the participants of the online meeting.
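A minimal sketch of generating such a presentation queue, assuming presenters are ordered as given and receive fixed-length slots; both choices, and the slot length, are illustrative.

    from datetime import datetime, timedelta

    def build_presentation_queue(presenters, start, slot_minutes=10):
        """Order presenters and assign each a time slot within the meeting."""
        queue = []
        t = start
        for name in presenters:
            queue.append({"presenter": name, "start": t,
                          "end": t + timedelta(minutes=slot_minutes)})
            t += timedelta(minutes=slot_minutes)
        return queue

    # Example: a two-presenter meeting starting at 09:00.
    for slot in build_presentation_queue(["Ana", "Ben"],
                                         datetime(2024, 1, 8, 9, 0)):
        print(slot["presenter"], slot["start"].strftime("%H:%M"))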
Abstract:
A networking environment accessible by a plurality of computing devices is established to facilitate communications between participants associated with the computing devices, where content is generated and shared by participants via the networking environment. An item of content is shared with a group of recipients associated with computing devices via the networking environment, where the shared item of content includes one or more tags associated with the content, and each tag includes an initial weight value associated with the tag. A relevance factor associated with the group is determined, where the relevance factor is based upon information obtained from profiles of recipients from the group, and the initial weight value of each tag associated with the shared item of content is adjusted based at least in part upon the relevance factor associated with the group.
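The weight adjustment could be sketched as follows, assuming the relevance factor is the fraction of recipient profiles whose interests mention a tag and that it scales each tag's initial weight linearly; both assumptions are illustrative.

    def relevance_factor(tag, recipient_profiles):
        """Fraction of the recipient group whose profile interests mention
        the tag."""
        matches = sum(1 for p in recipient_profiles
                      if tag in p.get("interests", []))
        return matches / len(recipient_profiles) if recipient_profiles else 0.0

    def adjust_weights(tags, recipient_profiles):
        # Scale each tag's initial weight by the group's relevance factor.
        return {t: w * (1 + relevance_factor(t, recipient_profiles))
                for t, w in tags.items()}

    profiles = [{"interests": ["security"]},
                {"interests": ["security", "video"]}]
    print(adjust_weights({"security": 1.0, "video": 1.0}, profiles))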
Abstract:
In one embodiment, a method includes obtaining content captured during a collaboration session, processing the content to identify a first cue included in the content, and interpreting the first cue, wherein interpreting the first cue includes generating a first insight associated with the first cue. The method also includes processing the first insight to generate an insight summary, and generating an output associated with the collaboration session, wherein the output includes the insight summary.
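A toy version of the cue-to-insight pipeline, using hard-coded cue phrases and string matching as stand-ins for the content processing and interpretation; all names and phrases are illustrative.

    # Hypothetical cue phrases mapped to the kind of insight they suggest.
    CUE_PHRASES = {"action item": "task", "decision": "decision",
                   "deadline": "date"}

    def identify_cues(transcript):
        # Process captured session content to find cues it contains.
        return [(phrase, kind) for phrase, kind in CUE_PHRASES.items()
                if phrase in transcript.lower()]

    def interpret(cue):
        # Interpret a cue by generating an insight associated with it.
        phrase, kind = cue
        return f"Detected a {kind} cue ('{phrase}') in the session content."

    def insight_summary(transcript):
        insights = [interpret(c) for c in identify_cues(transcript)]
        return "\n".join(insights) or "No insights identified."

    print(insight_summary("Action item: send the deck by Friday's deadline."))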