Abstract:
An example apparatus for generating synthesized images includes a receiver to receive a frame, a mask, and external images. The apparatus also includes a foreground augmenter to generate augmented foregrounds in the frame based on the mask. The apparatus includes a background augmenter to generate augmented backgrounds based on the frame, the mask, and the external images. The apparatus further includes an image synthesizer to generate a synthesized image based on the augmented foregrounds and the augmented backgrounds.
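As a minimal sketch of how such an apparatus might operate, the following Python snippet composites an augmented foreground over an augmented background using a binary mask. The function names, the brightness-jitter foreground augmentation, and the background-replacement strategy are illustrative assumptions, not details taken from the abstract.

    import numpy as np

    def augment_foreground(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # Illustrative foreground augmentation: random brightness jitter
        # applied only to the masked (foreground) pixels.
        out = frame.astype(np.float32)
        out[mask > 0] *= np.random.uniform(0.8, 1.2)
        return np.clip(out, 0, 255).astype(np.uint8)

    def augment_background(frame: np.ndarray, mask: np.ndarray,
                           external: np.ndarray) -> np.ndarray:
        # Illustrative background augmentation: replace unmasked (background)
        # pixels with pixels from an external image of the same shape.
        out = frame.copy()
        out[mask == 0] = external[mask == 0]
        return out

    def synthesize(fg: np.ndarray, bg: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # Composite the augmented foreground over the augmented background.
        out = bg.copy()
        out[mask > 0] = fg[mask > 0]
        return out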
Abstract:
Generally, this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters.
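A minimal sketch of the input-to-command path described by this method, assuming a hypothetical gesture vocabulary (the actual user inputs, animation commands, and parameter format are not specified in the disclosure):

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical mapping from identified user inputs to animation commands.
    ANIMATION_COMMANDS = {"swipe_up": "nod", "double_tap": "wink"}

    @dataclass
    class AvatarMessage:
        animation_command: Optional[str]
        avatar_parameters: Optional[dict]

    def handle_user_input(detected_input: str) -> AvatarMessage:
        # Identify an animation command from the user input, generate avatar
        # parameters for it, and package both for transmission.
        command = ANIMATION_COMMANDS.get(detected_input)
        params = {"command": command, "intensity": 1.0} if command else None
        return AvatarMessage(animation_command=command, avatar_parameters=params)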
Abstract:
Movement of a user in a user space is mapped to a first portion of a command action stream sent to a telerobot in a telerobot space. An immersive feedback stream is provided by the telerobot to the user. Upon movement of the user into or proximate to a margin of the user space, the first portion of the command action stream may be suspended. The user may re-orient in the user space and may then continue to move, with movement mapping re-engaged and transmission of a second portion of the command action stream resumed. In this way, the user may control the telerobot via movement mapping, even though the user space and the telerobot space may not be the same size.
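The margin handling might look like the following sketch, where user positions are normalized to [0, 1] in each axis of the user space; the margin width and the re-anchoring behavior on re-engagement are assumptions for illustration:

    import numpy as np

    MARGIN = 0.1  # assumed normalized margin width at the edges of the user space

    def map_movement(pos: np.ndarray, prev: np.ndarray, suspended: bool):
        # Returns (command_delta, suspended). While the user is inside the
        # margin, mapping is suspended and no command is sent; on re-entry,
        # the reference position is re-anchored so the telerobot does not jump.
        in_margin = np.any(pos < MARGIN) or np.any(pos > 1.0 - MARGIN)
        if in_margin:
            return None, True
        if suspended:
            prev = pos  # re-engage mapping after the user re-orients
        return pos - prev, False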
Abstract:
According to one embodiment of the invention, a method includes generating a person-name Information Gain (IG)-tree and a relation IG-tree from annotated data. The method also includes tagging and partially parsing an input document. Names of persons within the input document are extracted using the person-name IG-tree. Additionally, names of organizations within the input document are extracted. The method also includes extracting entity names that are not names of persons or organizations within the input document. Further, relations between the identified entity names are extracted using the relation IG-tree.
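A sketch of the overall pipeline in Python, with the learned IG-tree classifiers replaced by trivial stand-in rules (the real trees are induced from annotated data; the rules, titles, and patterns below are purely illustrative):

    import re
    from typing import List, Tuple

    def tag_and_parse(text: str) -> List[str]:
        # Stand-in for tagging and partial parsing: plain tokenization.
        return re.findall(r"\w+|[^\w\s]", text)

    def extract_persons(tokens: List[str]) -> List[str]:
        # Stand-in for the person-name IG-tree: a capitalized token after a title.
        titles = {"Mr", "Ms", "Dr"}
        return [t for p, t in zip(tokens, tokens[1:]) if p in titles and t[:1].isupper()]

    def extract_organizations(tokens: List[str]) -> List[str]:
        # Stand-in organization extractor: a capitalized token before "Inc".
        return [t for t, n in zip(tokens, tokens[1:]) if n == "Inc"]

    def extract_relations(persons: List[str], orgs: List[str],
                          text: str) -> List[Tuple[str, str, str]]:
        # Stand-in for the relation IG-tree: link entities joined by "of"/"at".
        return [(p, "affiliated_with", o) for p in persons for o in orgs
                if re.search(rf"{re.escape(p)}\s+(of|at)\s+{re.escape(o)}", text)]

    text = "Dr Smith of Acme Inc met Ms Jones ."
    tokens = tag_and_parse(text)
    persons = extract_persons(tokens)              # ['Smith', 'Jones']
    orgs = extract_organizations(tokens)           # ['Acme']
    print(extract_relations(persons, orgs, text))  # [('Smith', 'affiliated_with', 'Acme')]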
Abstract:
Generally, this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
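As a sketch of the feature-to-parameter step, assuming a 68-point facial landmark convention (the landmark indices and the upstream detector and extractor are assumptions, not details from the disclosure):

    import numpy as np

    def to_avatar_params(landmarks: np.ndarray) -> dict:
        # Convert (68, 2) facial landmarks into a few scalar avatar parameters.
        # Indices follow the common 68-point convention; purely illustrative.
        mouth_open = float(np.linalg.norm(landmarks[66] - landmarks[62]))
        left_eye_open = float(np.linalg.norm(landmarks[41] - landmarks[37]))
        return {"mouth_open": mouth_open, "left_eye_open": left_eye_open}

    def process_frame(image, avatar_id, detect_face, extract_landmarks, send):
        # One iteration of the method: detect a face, extract features,
        # convert them to avatar parameters, and transmit the result.
        face = detect_face(image)
        if face is None:
            return
        send({"avatar": avatar_id, "params": to_avatar_params(extract_landmarks(face))})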
Abstract:
A device, method, and system of video and audio sharing among communication devices may comprise a communication device for generating and sending a packet containing information related to the video and audio, and another communication device for receiving the packet and rendering the information related to the audio and video. In some embodiments, the communication device may comprise: an audio encoding module to encode a piece of audio into an audio bit stream; an avatar data extraction module to extract avatar data from a piece of video and generate an avatar data bit stream; and a synchronization module to generate synchronization information for synchronizing the audio bit stream with the avatar data bit stream. In some embodiments, the other communication device may comprise: an audio decoding module to decode an audio bit stream into decoded audio data; an avatar animation module to animate an avatar model based on an avatar data bit stream to generate an animated avatar model; and a synchronizing and rendering module to synchronize and render the decoded audio data and the animated avatar model by utilizing the synchronization information.
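A minimal sketch of the sender and receiver sides, with a per-packet timestamp standing in for the synchronization information (the packet layout, field names, and module callables are assumptions):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        audio_chunk: bytes   # segment of the encoded audio bit stream
        avatar_frame: dict   # extracted avatar data (e.g., expression weights)
        timestamp_ms: int    # synchronization information shared by both streams

    def send_side(pcm: bytes, video_frame, clock_ms: int,
                  encode_audio, extract_avatar) -> Packet:
        # Encode audio, extract avatar data from the video, and stamp both
        # with a common clock so the receiver can align them.
        return Packet(encode_audio(pcm), extract_avatar(video_frame), clock_ms)

    def receive_side(pkt: Packet, decode_audio, animate, render):
        # Decode the audio, animate the avatar model from the avatar data,
        # and render both at the packet's timestamp.
        render(decode_audio(pkt.audio_chunk), animate(pkt.avatar_frame),
               at_ms=pkt.timestamp_ms)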