Abstract:
Exemplary embodiments relate to the creation of a media effect index for group video conversations. Media effect application (e.g., in the form of graphical overlays, filters, sounds, etc.) may be tracked in a timeline during a chat session. The resulting index may be used to create a highlights reel, which may serve as an index into a live show or may be used to determine the best time to insert materials into a recording of the conversation. The index may be used to automatically detect events in the video feed, to allow viewers to skip ahead to exciting moments (e.g., represented by clusters of applications of particular types of media effects), to determine where each participant spoke in a discussion, or to provide a common “watch together” experience while multiple users watch a common video. An analysis of the index may be used for research or consumer testing.
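As a rough illustration of the indexing described above, the sketch below assumes hypothetical EffectEvent records and a simple fixed-width time-bucket clustering; the record fields, bucket width, and threshold are illustrative assumptions rather than details taken from the embodiments.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class EffectEvent:
    timestamp: float       # seconds from the start of the chat session (hypothetical field)
    effect_type: str       # e.g. "overlay", "filter", "sound"
    participant_id: str

def build_effect_index(events):
    """Return a timeline index: effect applications sorted by when they occurred."""
    return sorted(events, key=lambda e: e.timestamp)

def find_highlights(index, bucket_seconds=10, min_events=3):
    """Group effect applications into fixed-width buckets and flag dense buckets
    as candidate highlight moments (e.g. for a highlights reel or skip-ahead markers)."""
    buckets = Counter(int(e.timestamp // bucket_seconds) for e in index)
    return [(b * bucket_seconds, (b + 1) * bucket_seconds)
            for b, count in sorted(buckets.items())
            if count >= min_events]
```

Clusters of a particular effect type could be found the same way by filtering the index on effect_type before bucketing.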
Abstract:
In one embodiment, a method includes launching, by a client system of a first user, a video-call session to enable a video stream for display in a small-overlay-window on a display of the client system of the first user; receiving, by the client system of the first user, a video stream comprising video from a client system of a second user; determining at least one property for the small-overlay-window based on information associated with the second user; and displaying the video stream in the small-overlay-window, wherein the small-overlay-window is customized based on the determined at least one property, and wherein the small-overlay-window is positioned directly over an interface of an active application running on the client system of the first user.
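A minimal sketch of the property-determination step, assuming a hypothetical profile dictionary for the second user; the specific fields and default values are illustrative assumptions, not properties recited in the embodiment.

```python
def small_overlay_window_properties(second_user_info):
    """Choose presentation properties for the small overlay window based on
    information associated with the second (remote) user."""
    props = {
        "width": 160,
        "height": 90,
        "corner": "bottom_right",
        "border_color": "#888888",
    }
    if second_user_info.get("is_close_contact"):         # hypothetical signal
        props["border_color"] = "#4267b2"                 # highlight close contacts
    if second_user_info.get("prefers_portrait_video"):    # hypothetical signal
        props["width"], props["height"] = 90, 160
    return props
```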
Abstract:
Exemplary embodiments relate to techniques for selecting which users should be shown in an interface during a group call, and for presenting the users on (potentially small) displays. According to some embodiments, a most-relevant speaker is selected for display on each call participant's screen. When deciding which user to display in the primary window of a video call, a dominant or relevant user is selected. A dominant user may be selected based on the audio energy represented by the audio packets from the user's device; alternatively, dominant-user selection may be implemented using artificial intelligence or machine learning, allowing for better differentiation between speech and noise. On the display of each user other than the relevant user, the current relevant user is shown. On the current relevant user's display, the previous relevant user is shown.
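The selection and display rules lend themselves to a short sketch; the energy-based heuristic and hysteresis margin below are illustrative assumptions rather than the embodiments' actual logic.

```python
def pick_dominant(audio_energy_by_user, current_dominant, margin=1.2):
    """Pick the user whose recent audio packets carry the most energy, requiring
    a margin over the current dominant speaker to avoid rapid switching."""
    candidate = max(audio_energy_by_user, key=audio_energy_by_user.get)
    if current_dominant is None or current_dominant not in audio_energy_by_user:
        return candidate
    if audio_energy_by_user[candidate] > margin * audio_energy_by_user[current_dominant]:
        return candidate
    return current_dominant

def user_to_display(viewer_id, current_dominant, previous_dominant):
    """Every participant sees the current relevant user, except the relevant
    user, who sees the previous relevant user instead."""
    return previous_dominant if viewer_id == current_dominant else current_dominant
```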
Abstract:
Exemplary embodiments relate to techniques for connecting two users when a caller places a call but a callee rejects the call or fails to answer in a predetermined period of time. The calling application may terminate the call attempt and request status updates regarding the called party to determine when the callee is available for a follow-up call. The system may gain insight into when a user is available based on the user's presence in a messaging or social networking app, activity in a third-party application unrelated to the call, or the power status of the user's device. When it is determined that the callee is available, a notification may be sent to the caller informing the caller that it is a good time to call back. The techniques may also be used in reverse, informing the callee of when the caller is available for a return call.
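A minimal sketch of the callback flow, assuming hypothetical presence_service and notifier interfaces; the availability signals checked here (messaging presence, third-party app activity, device power state) mirror the ones named above, but the API names and fields are assumptions.

```python
def on_call_not_answered(caller_id, callee_id, presence_service, notifier):
    """After a rejected or unanswered call, watch the callee's status and tell
    the caller when it looks like a good time to call back."""

    def handle_status(status):
        available = (status.get("messaging_presence") == "online"
                     or status.get("third_party_app_active")
                     or status.get("device_awake"))
        if available:
            notifier.send(caller_id,
                          "The person you tried to reach appears to be available now.")
            presence_service.unsubscribe(callee_id, handle_status)

    presence_service.subscribe(callee_id, handle_status)
```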
Abstract:
Exemplary embodiments relate to techniques for sharing live video while maintaining an asynchronous copy of the video. According to some embodiments, a user begins to record video and shares the video with selected other users. If one of the other users opts to join the original user, the shared video upgrades to a live video conversation. If no one (or only some of the invited participants) joins the original user, the recorded video becomes an asynchronous artifact in the users' messaging history. In some embodiments, the live video may be recorded and shared in response to a first user initiating a video call with at least a second user but receiving no answer. The first user begins to share a live video (which may become an asynchronous artifact). If the second user joins the call while the video is being recorded, the conversation may be upgraded to a live video conversation.
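The upgrade/fallback behavior can be read as a small state machine; the sketch below is an assumed structure, and the class and state names are illustrative.

```python
class SharedVideoSession:
    """Track whether a shared recording becomes a live conversation or an
    asynchronous artifact in the messaging history."""

    def __init__(self, originator, invitees):
        self.originator = originator
        self.invitees = set(invitees)
        self.state = "recording"            # recording -> live | asynchronous

    def on_participant_joined(self, user):
        if user in self.invitees and self.state == "recording":
            self.state = "live"             # upgrade to a live video conversation

    def on_recording_finished(self):
        if self.state == "recording":       # no one joined while recording
            self.state = "asynchronous"     # keep the clip as a message artifact
        return self.state
```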
Abstract:
Different online systems, such as an ad system or a social networking system, maintain different identifiers. An ad system identifies an association between an unsynced cookie maintained by the ad system and a user of the online system. The ad system identifies an overlap IP sequence comprising multiple occurrences of the user's user id and multiple occurrences of an unsynced cookie id in communications associated with an IP address over a given time period. The ad system determines an overlap score based on the identified overlap IP sequence. The overlap score indicates how closely the unsynced cookie is associated with the user of the online system. The ad system determines whether the unsynced cookie id and the user id are associated with one another based on the overlap score. The ad system stores an association between the unsynced cookie and the user of the online system, thereby generating a synced cookie.
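One plausible (assumed) realization of the overlap score is a per-IP co-occurrence count; the scoring function and threshold below are illustrative, not the claimed computation.

```python
def overlap_score(ip_events, user_id, cookie_id):
    """Score how often the user id and the unsynced cookie id co-occur in
    communications from the same IP address over the observation window.

    `ip_events` maps an IP address to the sequence of identifiers observed
    in communications associated with that address during the time period.
    """
    score = 0.0
    for identifiers in ip_events.values():
        user_hits = identifiers.count(user_id)
        cookie_hits = identifiers.count(cookie_id)
        score += min(user_hits, cookie_hits)    # reward overlapping occurrences
    return score

def maybe_sync(ip_events, user_id, cookie_id, threshold=5.0):
    """Associate (sync) the cookie with the user when the score clears a threshold."""
    return overlap_score(ip_events, user_id, cookie_id) >= threshold
```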
Abstract:
In one embodiment, a method includes determining, by a computer server machine, that a callee-user is available for a communication session based on location information associated with a client system of the callee-user; sending, by the computer server machine, in response to determining that the callee-user is available, a notification to a client system of a caller-user indicating that the callee-user is available; receiving, by the computer server machine, a request from the client system of the caller-user to initiate the communication session; establishing, by the computer server machine, the communication session to enable a media stream comprising media captured at the client system of the caller-user to be received at the client system of the callee-user; and sending, by the computer server machine, the media captured at the client system of the caller-user to the client system of the callee-user.
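A rough sketch of the location-based availability check, assuming the callee's device reports a latitude/longitude pair and that availability is inferred from proximity to known "can take calls" locations; the zone list and radius are illustrative assumptions.

```python
from math import cos, radians, sqrt

def distance_m(a, b):
    """Approximate distance in meters between two (lat, lon) points; a
    flat-earth approximation is adequate for small radii."""
    dlat = (a[0] - b[0]) * 111_320.0
    dlon = (a[1] - b[1]) * 111_320.0 * cos(radians(a[0]))
    return sqrt(dlat * dlat + dlon * dlon)

def callee_available(callee_location, availability_zones, radius_m=200.0):
    """Treat the callee as available when the reported device location falls
    within a zone where the callee typically accepts calls."""
    return any(distance_m(callee_location, zone) <= radius_m
               for zone in availability_zones)
```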
Abstract:
Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc., to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or canceling automatic effect application using a gesture. Effects may also be applied in group settings, and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, the three channels (e.g., video, audio, and effect data) may be sent to an intermediate server, which stitches them together into a single video stream; the single video stream may then be sent to a broadcast server for distribution to the broadcast recipients.
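The real-time synchronization step could look like the sketch below, which assumes a generic data_channel object with a send method and a renderer that schedules effects; the message format and lead time are illustrative assumptions.

```python
import json
import time

def announce_effect(data_channel, effect_id, target_user_ids):
    """Publish an effect application on the real-time data channel so every
    participant applies the same effect at roughly the same moment."""
    message = {
        "type": "apply_effect",
        "effect_id": effect_id,
        "targets": target_user_ids,
        "apply_at": time.time() + 0.5,   # small lead time so clients render in sync
    }
    data_channel.send(json.dumps(message))

def on_data_channel_message(raw_message, renderer):
    """Handle an incoming synchronization message on a participant's client."""
    message = json.loads(raw_message)
    if message["type"] == "apply_effect":
        renderer.schedule_effect(message["effect_id"],
                                 message["targets"],
                                 message["apply_at"])
```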
Abstract:
An online system determines one or more metrics describing consumption of content by various users by identifying users of the online system who can be identified based on information received from multiple client devices. For example, the online system identifies users associated with user identifiers that are also associated with other types of identifying information (e.g., cookies, device identifiers). From the identified users, the online system generates a set of users based on a distribution of characteristics. The distribution of characteristics may be determined by the online system from the characteristics of a group of its users, or it may be received from a third-party system and describe characteristics of that system's users. Based on interactions with content by users in the set, the online system determines one or more metrics describing consumption of content.
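A simple (assumed) way to generate such a set is stratified sampling against the target distribution; the bucket structure and quota rule below are illustrative.

```python
import random

def sample_matching_distribution(identified_users, target_distribution, sample_size):
    """Draw a set of identified users whose characteristics approximate a
    target distribution.

    `identified_users` maps user id -> characteristic bucket (e.g. an age range);
    `target_distribution` maps bucket -> desired fraction of the resulting set.
    """
    by_bucket = {}
    for user_id, bucket in identified_users.items():
        by_bucket.setdefault(bucket, []).append(user_id)

    selected = []
    for bucket, fraction in target_distribution.items():
        pool = by_bucket.get(bucket, [])
        quota = min(len(pool), round(fraction * sample_size))
        selected.extend(random.sample(pool, quota))
    return selected
```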
Abstract:
An online system customizes video conversations between users of the online system. During a video conversation, the online system presents a composite view to the participating users. The composite view may include visual representations of the users, a background graphic, or other types of graphics such as masks and props that the users can wear or interact with in the environment of the video conversation. The visual representations may be generated based on a live video feed of the users or include avatars of the users. The online system can determine the graphics based on information about the users. For instance, the online system determines a background graphic showing a location that the users have each visited. Upon viewing the background graphic, the users may be encouraged to interact with the background graphic or other graphics included in the composite view, which can promote an engaging video conversation experience.
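The background-selection example above could be realized roughly as follows; the profile fields and graphics lookup table are illustrative assumptions.

```python
def pick_background_graphic(participant_profiles, graphics_by_location, default_graphic):
    """Choose a background graphic from a location every participant has visited,
    falling back to a default graphic when no common location is found."""
    visited = [set(profile.get("visited_locations", [])) for profile in participant_profiles]
    common_locations = set.intersection(*visited) if visited else set()
    for location in common_locations:
        if location in graphics_by_location:
            return graphics_by_location[location]
    return default_graphic
```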