Abstract:
Transport accelerator (TA) systems and methods for accelerating transmission of content from a user agent (UA) of a user device to a remote recipient are provided according to embodiments of the present disclosure. Embodiments comprise a TA architecture implementing a connection manager (CM) and a request manager (RM). The RM of embodiments subdivides fragments of content provided by the UA into a plurality of content chunks, wherein each fragment may be subdivided into multiple content chunks. The RM of embodiments provides the content chunks to the CM of the TA for transmission. The CM of embodiments transmits the content chunks via a plurality of connections established between the CM and the remote recipient.
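As a rough illustration of the RM/CM split described in this abstract, the following Python sketch subdivides a fragment into content chunks and spreads them over several connections. The class names, the chunk size, and the round-robin scheduling are assumptions for illustration, not details taken from the disclosure.

    # Illustrative sketch only: chunk size and round-robin scheduling are assumed.
    from itertools import cycle

    CHUNK_SIZE = 16 * 1024  # assumed chunk size in bytes

    class RequestManager:
        def subdivide(self, fragment: bytes):
            # Split one fragment provided by the UA into multiple content chunks.
            return [fragment[i:i + CHUNK_SIZE]
                    for i in range(0, len(fragment), CHUNK_SIZE)]

    class ConnectionManager:
        def __init__(self, connections):
            # 'connections' stands in for transport connections to the remote recipient.
            self.connections = connections

        def transmit(self, chunks):
            # Spread the chunks across the available connections (round-robin here).
            for conn, chunk in zip(cycle(self.connections), chunks):
                conn.send(chunk)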
Abstract:
Embodiments provide methodologies for reliably storing data within a storage system using liquid distributed storage control. Such liquid distributed storage control operates to compress repair bandwidth utilized within a storage system for data repair processing to the point of operating in a liquid regime. Liquid distributed storage control logic of embodiments may employ a lazy repair policy, repair bandwidth control, a large erasure code, and/or a repair queue. Embodiments of liquid distributed storage control logic may additionally or alternatively implement a data organization adapted to allow the repair policy to avoid handling large objects, instead streaming data into the storage nodes at a very fine granularity.
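One way to picture the lazy repair policy, repair bandwidth control, and repair queue mentioned above is the hedged Python sketch below; the queue-depth threshold, the throttling scheme, and the callback names are illustrative assumptions rather than the disclosed design.

    # Hedged sketch: threshold, throttle, and callbacks are assumptions.
    import time
    from collections import deque

    REPAIR_BANDWIDTH_BPS = 10 * 1024 * 1024  # assumed provisioned repair rate
    LAZY_THRESHOLD = 100                     # assumed queue depth before repair starts

    repair_queue = deque()  # objects with missing or erased fragments

    def repair_loop(bytes_to_read, regenerate):
        while True:
            if len(repair_queue) < LAZY_THRESHOLD:
                time.sleep(1.0)      # lazy repair: defer work until enough accumulates
                continue
            obj = repair_queue.popleft()
            regenerate(obj)          # re-encode and store the missing fragments
            # Throttle so repair traffic stays within the provisioned bandwidth.
            time.sleep(bytes_to_read(obj) / REPAIR_BANDWIDTH_BPS)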
Abstract:
An over-the-air (OTA) broadcast middleware unit is configured to receive aggregated session description data for a plurality of sessions, wherein each of the sessions transports media data related to common media content, and wherein each of the sessions is transmitted as part of an OTA broadcast, and extract at least some of the media data from the OTA broadcast based on the aggregated session description data. The OTA broadcast middleware unit may further deliver the extracted media data to a streaming client, such as a Dynamic Adaptive Streaming over HTTP (DASH) client.
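A simplified Python sketch of the filtering step this abstract describes is given below; the session-description fields, the packet record, and the content identifier are hypothetical placeholders, not the actual OTA session data model.

    # Illustrative sketch only: field names below are assumed placeholders.
    from dataclasses import dataclass

    @dataclass
    class SessionDescription:
        dest_ip: str
        dest_port: int
        content_id: str   # identifies the common media content the session carries

    def extract_media(aggregated_sessions, broadcast_packets, wanted_content_id):
        # Keep only the sessions that transport the requested media content.
        wanted = {(s.dest_ip, s.dest_port)
                  for s in aggregated_sessions
                  if s.content_id == wanted_content_id}
        # Filter the OTA broadcast down to packets belonging to those sessions,
        # so the extracted media data can be handed to the streaming (DASH) client.
        return [pkt.payload for pkt in broadcast_packets
                if (pkt.dest_ip, pkt.dest_port) in wanted]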
Abstract:
Systems and methods for encoding data for transmission over a communications channel using an improved LT staircase FEC code are provided. Embodiments may include mapping source symbols to repair symbols, wherein a number of edges of the mapping associated with a source symbol is determined randomly according to a first distribution. The repair symbols may be ordered, and at least a first repair symbol may be encoded based on the source symbols that map to the first repair symbol and/or another repair symbol that immediately precedes the first repair symbol in the ordering of the repair symbols.
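The staircase structure described above can be sketched as follows: each repair symbol is the XOR of its randomly mapped source symbols and the repair symbol immediately preceding it in the ordering. The degree distribution and symbol handling here are placeholders for illustration only.

    # Sketch of a staircase-style encoder; the degree distribution is a placeholder.
    import random

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(source_symbols, num_repair):
        symbol_len = len(source_symbols[0])
        repair = []
        prev = bytes(symbol_len)   # all-zero symbol before the first repair symbol
        for _ in range(num_repair):
            # Number of edges for this repair symbol, drawn from an assumed distribution.
            degree = random.randint(1, min(3, len(source_symbols)))
            neighbors = random.sample(source_symbols, degree)
            r = prev                # chain in the immediately preceding repair symbol
            for s in neighbors:
                r = xor(r, s)
            repair.append(r)
            prev = r
        return repair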
Abstract:
Methods, apparatuses, and computer-readable media for determining a source block size are presented. A sender may transmit received media as source blocks. The sender may receive a value N, a target number of packets from which a receiver can recover a source block with high fidelity; a value P′, a target packet payload size; a value O, a symbol reception overhead value; and a value R, a target upper bound on data reception overhead. The sender may determine a value K, a number of symbols to be used per source block, based on the values N, P′, O and R. The source symbols of the source blocks may be encoded into encoded symbols, wherein the encoded symbols may or may not include the source symbols. The encoded symbols may be packetized into at least N packets for transmission to a receiver.
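Because the abstract states only that K is determined from N, P′, O, and R, the sketch below shows one plausible way such a determination could look (one symbol per packet payload, with the relative reception overhead O/K kept under R); it is an assumption for illustration, not the claimed computation.

    # A plausible sketch only; the actual K determination is not reproduced here.
    def determine_K(N, P_prime, O, R, symbols_per_packet=1):
        T = P_prime // symbols_per_packet        # assumed symbol size
        # Receiving N packets yields about N * symbols_per_packet encoded symbols;
        # reserve O of them as decoding overhead, leaving K source symbols.
        K = N * symbols_per_packet - O
        # Keep the relative reception overhead within the target upper bound R.
        if K <= 0 or O / K > R:
            raise ValueError("targets not satisfiable; try more symbols per packet")
        return K, T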
Abstract:
Transport accelerator (TA) systems and methods for delivery of content to a user agent (UA) of a client device from a content server are provided according to embodiments of the present disclosure. Embodiments of a TA operate to subdivide, by a request manager (RM) of the TA, each fragment request provided by the UA into a plurality of chunk requests for requesting chunks of the content, and to provide, by the RM to a connection manager (CM) of the TA, chunk requests of the plurality of chunk requests. Requests may thus be made, by the CM, for the chunks of the content from the content server via a plurality of connections established between the CM and the content server.
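A hedged Python sketch of this request-side behavior is shown below: the RM turns one fragment request into byte-range chunk requests, and the CM issues them over a small pool of connections and reassembles the responses. The chunk size, pool size, and use of HTTP Range requests are assumptions for illustration.

    # Illustrative sketch only: chunk size, pool size, and Range usage are assumed.
    import concurrent.futures
    import urllib.request

    CHUNK_SIZE = 64 * 1024   # assumed chunk request size
    NUM_CONNECTIONS = 4      # assumed number of CM connections

    def chunk_requests(url, start, length):
        # RM: subdivide one fragment request into several chunk requests.
        return [(url, s, min(s + CHUNK_SIZE, start + length) - 1)
                for s in range(start, start + length, CHUNK_SIZE)]

    def fetch_chunk(args):
        url, first, last = args
        req = urllib.request.Request(url, headers={"Range": f"bytes={first}-{last}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    def fetch_fragment(url, start, length):
        # CM: issue the chunk requests over a pool of connections and reassemble.
        with concurrent.futures.ThreadPoolExecutor(NUM_CONNECTIONS) as pool:
            return b"".join(pool.map(fetch_chunk, chunk_requests(url, start, length)))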
Abstract:
Content (e.g., multimedia streams, audio-video streams, video files, text, etc.) may be delivered to receiver devices over a broadcast channel and/or a broadcast network by components (e.g., servers, receiver devices, software applications, modules, processes, etc.) configured to communicate the content in a manner that reduces the amount of information communicated over the broadcast network, reduces the amount of network bandwidth consumed by the communication, meets precise timing requirements for the individual objects that are communicated, and enables each receiver device to receive, decode, and render the content without consuming an excessive amount of that receiver device's battery or processing resources.
Abstract:
Systems and methods which implement one or more data organization techniques that facilitate efficient access to source data stored by a storage system are disclosed. Data organization techniques implemented according to embodiments are adapted to optimize input/output efficiency (e.g., maximize it) and/or storage overhead (e.g., minimize it), while maintaining mean time to data loss, repair efficiency, and/or traffic efficiency. Data organization techniques as may be implemented by embodiments include blob based organization techniques, grouped symbols organization techniques, data ordering organization techniques, and combinations thereof.
Abstract:
Embodiments providing co-derived data storage patterns for use in reliably storing data and/or facilitating access to data within a storage system using fragments of source objects are disclosed. A set of data storage patterns for use in storing the fragments distributed across a plurality of storage nodes may be generated such that the data storage patterns, considered collectively, meet one or more system performance goals. Such co-derived data storage pattern sets may be utilized when storing fragments of a source object to storage nodes of a storage system. Co-derived pattern set management logic may generate co-derived data storage pattern sets, select/assign data storage patterns of a co-derived data storage pattern set for use with respect to source objects, modify data storage patterns of a co-derived data storage pattern set, and generate additional data storage patterns for a co-derived data storage pattern set in accordance with the concepts herein.
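As a hedged illustration of co-deriving a pattern set, the sketch below rotates fragment placements so that, taken collectively, the patterns spread fragments evenly across the storage nodes; the rotation scheme and the load-balance goal are assumptions, not the disclosed pattern-generation logic.

    # Hedged sketch: rotation-based co-derivation with an assumed balance goal.
    def co_derive_patterns(num_nodes, fragments_per_object, num_patterns):
        patterns = []
        for p in range(num_patterns):
            # Shift the starting node per pattern so that, across the whole set,
            # every node holds roughly the same number of fragments.
            start = (p * fragments_per_object) % num_nodes
            patterns.append([(start + i) % num_nodes
                             for i in range(fragments_per_object)])
        return patterns

    # Example: co_derive_patterns(7, 4, 3) -> [[0, 1, 2, 3], [4, 5, 6, 0], [1, 2, 3, 4]]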