Abstract:
A method of providing audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from a server toward the client device, according to the determined transmission mode, for display on the display.
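A minimal sketch of the transmission-mode decision described above, in Python. All identifiers (TransmissionMode, ClientCapabilities, choose_transmission_mode) and the three concrete modes are hypothetical illustrations of the four factors listed in the abstract, not terms from the source.

```python
from dataclasses import dataclass
from enum import Enum, auto


class TransmissionMode(Enum):
    PASSTHROUGH = auto()        # forward the item's native stream unchanged
    TRANSCODE = auto()          # re-encode to a format the client can decode
    SERVER_COMPOSITE = auto()   # composite UI graphics and video on the server


@dataclass
class ClientCapabilities:
    supported_codecs: set        # e.g. {"h264", "hevc"}
    can_overlay_image_data: bool # client can blend UI graphics over video


def choose_transmission_mode(client: ClientCapabilities,
                             item_codec: str,
                             full_screen: bool) -> TransmissionMode:
    """Pick a mode from the four factors named in the abstract (sketch only)."""
    if not full_screen and not client.can_overlay_image_data:
        # Partial-screen display with no client-side overlay support:
        # the server must composite UI and video into one stream.
        return TransmissionMode.SERVER_COMPOSITE
    if item_codec in client.supported_codecs:
        return TransmissionMode.PASSTHROUGH
    return TransmissionMode.TRANSCODE
```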
Abstract:
Embodiments of the invention relate to user interfaces, systems, and methods for generating a real-time “lean-back” user interface for use with a television or other display device, and for reuse of encoded elements in forming a video frame of the user interface. An interactive session is established, over a communication network such as a cable television network, between a client device associated with a user's television and the platform that creates the user interface. The user interface is automatically generated by the platform and is animated even without user interaction with an input device. The user interface includes a plurality of interactive animated assets. The animated assets are capable of changing over time (e.g., different images, full-motion video) and of being animated so as to change screen position, rotate, move, etc., over time. A hash is maintained of cached encoded assets and cached elements that may be reused within a user session and between user sessions.
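A minimal sketch of the hash-keyed cache of encoded elements mentioned above, reusable within one user session and across sessions. The class, method names, and the choice of key components are assumptions for illustration only; the abstract does not specify how the hash is formed.

```python
import hashlib


class EncodedFragmentCache:
    """Caches encoded elements under a content hash so they can be reused."""

    def __init__(self) -> None:
        self._store: dict = {}

    @staticmethod
    def key(asset_id: str, frame_index: int, screen_pos: tuple) -> str:
        # Hypothetical key: the hash identifies an encoded element by what
        # determines its bits -- which asset it renders, which frame of the
        # animation, and where on screen it sits.
        raw = f"{asset_id}:{frame_index}:{screen_pos[0]}x{screen_pos[1]}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def get(self, key: str):
        return self._store.get(key)

    def put(self, key: str, encoded_fragment: bytes) -> None:
        self._store[key] = encoded_fragment
```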
Abstract:
A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene state for the application is compared to a previous scene state, where each scene state includes a plurality of objects. A video construction engine determines whether properties of one or more objects have changed based upon a comparison of the scene states. If properties of one or more objects have changed, the delta between the objects' states is determined, and this information is used by a fragment encoding module if the fragment has not been encoded before. The information is used to define, for example, the motion vectors used by the fragment encoding module to construct the fragments, which a stitching module then uses to build the composited video frame sequence.
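A minimal sketch of the compare-then-encode flow described above: diff two scene states, derive the per-object delta, and only encode a fragment when that delta has not been encoded before. The names (SceneObject, diff_scene_states, FragmentEncoder) and the use of a simple translation as the delta are assumptions, not the source's implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SceneObject:
    object_id: str
    x: int
    y: int
    image_hash: str  # identifies the object's current bitmap


def diff_scene_states(previous: dict, current: dict) -> list:
    """Return (old, new) pairs for objects whose properties changed."""
    changed = []
    for object_id, new_obj in current.items():
        old_obj = previous.get(object_id)
        if old_obj is not None and old_obj != new_obj:
            changed.append((old_obj, new_obj))
    return changed


class FragmentEncoder:
    def __init__(self) -> None:
        self._encoded: dict = {}

    def encode(self, old: SceneObject, new: SceneObject) -> bytes:
        # The delta (here just a translation plus the bitmap identity) stands
        # in for the motion-vector information; previously encoded deltas are
        # reused rather than re-encoded.
        delta = (new.x - old.x, new.y - old.y, new.image_hash)
        if delta not in self._encoded:
            self._encoded[delta] = self._encode_fragment(delta)
        return self._encoded[delta]

    def _encode_fragment(self, delta: tuple) -> bytes:
        return repr(delta).encode()  # placeholder for real fragment encoding
```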
Abstract:
A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene graph state for the application is compared to a previous scene graph state, where each scene graph state includes a plurality of hierarchical nodes that represent one or more objects at each node. A video construction engine determines whether one or more objects have moved based upon a comparison of the scene graph states. If one or more objects have moved, motion information about those objects is determined and forwarded to a stitcher module. The motion information is used to define motion vectors for use by the stitcher module in constructing the composited video frame sequence.
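A minimal sketch of deriving motion vectors from a scene-graph comparison and handing them to a stitcher, as described above. The node layout, class names, and the print-based stitcher stub are assumptions for illustration; a real stitcher would write the vectors into the compressed frame while reusing pre-encoded blocks rather than re-encoding pixels.

```python
from dataclasses import dataclass, field


@dataclass
class SceneGraphNode:
    node_id: str
    x: int = 0
    y: int = 0
    children: list = field(default_factory=list)


def flatten(node: SceneGraphNode) -> dict:
    """Index every node in the (sub)tree by its id."""
    out = {node.node_id: node}
    for child in node.children:
        out.update(flatten(child))
    return out


def motion_vectors(previous_root: SceneGraphNode,
                   current_root: SceneGraphNode) -> dict:
    """Map node id -> (dx, dy) for every node that moved between states."""
    prev, curr = flatten(previous_root), flatten(current_root)
    vectors = {}
    for node_id, node in curr.items():
        old = prev.get(node_id)
        if old and (old.x, old.y) != (node.x, node.y):
            vectors[node_id] = (node.x - old.x, node.y - old.y)
    return vectors


class Stitcher:
    def build_frame(self, vectors: dict) -> None:
        # Stub: report the motion vectors that would be stitched into the
        # composited frame.
        for node_id, (dx, dy) in vectors.items():
            print(f"{node_id}: motion vector ({dx}, {dy})")
```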