Abstract:
Systems and methods which implement forward checking of data integrity are disclosed. A storage system of embodiments may, for example, comprise data integrity forward checking logic which is operable to perform forward checking of data integrity in real-time or near real-time to check that a number of node failures can be tolerated without loss of data. Embodiments may be utilized to provide assurance that a number of fragments needed for source data recovery will be available for the source objects most susceptible to failure when a certain number of additional fragments are lost, such as due to storage node failures.
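For illustration only, the following is a minimal sketch of the forward-checking idea described above, assuming a simple erasure-coded layout in which each source object can be recovered from any k of its fragments; the function name, data layout, and worst-case rule (each additional node failure costs one fragment) are assumptions of this sketch, not the disclosed method.

def can_tolerate_failures(objects, f):
    """Forward check: return True if every source object would still have
    at least k available fragments after f additional node failures.

    `objects` maps an object id to (available_fragments, k), where
    available_fragments counts fragments currently on healthy nodes and
    k is the number of fragments needed to recover the object.  In the
    worst case, each additional failed node removes one more fragment
    from the object most susceptible to loss.
    """
    for obj_id, (available, k) in objects.items():
        if available - f < k:
            # This object could not be recovered if f more fragments were lost.
            return False
    return True

# Example: object "A" has 10 fragments available and needs any 8;
# object "B" has 9 available and needs 8.
objects = {"A": (10, 8), "B": (9, 8)}
print(can_tolerate_failures(objects, 1))  # True: both survive one more loss
print(can_tolerate_failures(objects, 2))  # False: "B" would drop below 8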
Abstract:
Systems and methods which implement one or more data organization techniques that facilitate efficient access to source data stored by a storage system are disclosed. Data organization techniques implemented according to embodiments are adapted to optimize input/output efficiency (e.g., maximize it) and/or storage overhead (e.g., minimize it), while maintaining mean time to data loss, repair efficiency, and/or traffic efficiency. Data organization techniques as may be implemented by embodiments include blob based organization techniques, grouped symbols organization techniques, data ordering organization techniques, and combinations thereof.
Abstract:
Methods, apparatuses, and computer-readable media for determining a source block size are presented. A sender transmits received media as source blocks. The sender receives a value N, a target number of packets from which a receiver can recover a source block with high fidelity; a value P′, a target packet payload size; a value O, a symbol reception overhead value; and a value R, a target upper bound on data reception overhead. The sender determines a value K, a number of symbols to be used per source block, based on the values N, P′, O and R. The source symbols of the source blocks are encoded into encoded symbols. In some cases, the encoded symbols include the source symbols, and in other cases the encoded symbols do not include the source symbols. The encoded symbols are packetized into at least N packets for transmission to the receiver.
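For illustration, here is a hedged sketch of one way K could be derived from N, P′, O and R. It assumes each packet carries G whole symbols of a chosen size T (with G·T ≤ P′) and that a source block of K symbols is recoverable from roughly K + O received symbols; the symbol size T and the specific bounds are assumptions of this sketch, not the claimed procedure.

import math

def choose_block_size(N, P_prime, O, R, T):
    """Sketch: choose the number of source symbols K per source block.

    N       -- target number of packets from which the block is recoverable
    P_prime -- target packet payload size in bytes
    O       -- symbol reception overhead (extra symbols beyond K needed to decode)
    R       -- target upper bound on data reception overhead, e.g. 0.01 for 1%
    T       -- symbol size in bytes (an extra assumption of this sketch)

    Each packet carries G = P_prime // T whole symbols, so N packets deliver
    N * G symbols.  Decoding needs about K + O symbols, so K can be at most
    N * G - O.  Keeping the relative overhead O / K below R also requires
    K >= O / R.
    """
    G = P_prime // T                 # symbols per packet
    K_max = N * G - O                # largest K still decodable from N packets
    K_min = math.ceil(O / R)         # smallest K meeting the overhead target
    if K_max < K_min:
        raise ValueError("targets cannot be met with these parameters")
    return K_max

# Example: 100 packets of ~1200 bytes, 64-byte symbols, overhead of 2 symbols,
# and at most 1% data reception overhead.
print(choose_block_size(N=100, P_prime=1200, O=2, R=0.01, T=64))  # 1798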
Abstract:
Systems, methods, and devices of the various embodiments enable rate shaping of content data delivered to a client application. A processor may determine an ingress rate of content data to a buffer. The processor may determine an amount of the content data stored in the buffer. The processor may determine an egress rate of the content data from the buffer to the client application based on the ingress rate and the amount of content data stored in the buffer. The processor may send the content data from the buffer to the client application at the egress rate.
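For illustration, a minimal sketch of buffer-driven rate shaping of this kind, assuming a simple proportional rule that raises the egress rate when the buffer holds more than a target amount and lowers it when the buffer drains; the rule and its gain are assumptions of this sketch, not the claimed method.

def egress_rate(ingress_rate, buffered_bytes, target_buffer_bytes, gain=0.5):
    """Sketch of a buffer-driven rate shaper.

    The egress rate tracks the ingress rate, nudged up when the buffer holds
    more than the target amount (to drain it) and nudged down when it holds
    less (to let it refill).  Rates are in bytes/second.
    """
    correction = gain * (buffered_bytes - target_buffer_bytes)
    return max(0.0, ingress_rate + correction)

# Example: content arrives at 500 kB/s; the buffer holds 400 kB against a
# 250 kB target, so data is released somewhat faster than it arrives.
print(egress_rate(500_000, 400_000, 250_000))  # 575000.0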
Abstract:
Transport accelerator (TA) systems and methods for delivery of content to a user agent (UA) of a client device are provided according to embodiments of the present disclosure. Embodiments receive, by a request manager (RM) of the TA, fragment requests provided by the UA for requesting content from a content server, and determine an amount of redundant encoded content data to request for a fragment request of the fragment requests for use by the RM in recovering the fragment.
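For illustration, one way an RM-like component might size such a request, assuming fountain-coded content where any K encoded symbols suffice to recover a fragment and a recent loss-rate estimate is available; the sizing rule and names are assumptions of this sketch.

import math

def symbols_to_request(K, loss_rate, safety_symbols=2):
    """Sketch: how many encoded symbols to request so that, after expected
    losses, at least K symbols (plus a small safety margin) still arrive.

    K              -- symbols needed to recover the fragment
    loss_rate      -- estimated fraction of requested symbols that will be lost
    safety_symbols -- extra symbols requested beyond the bare minimum
    """
    return math.ceil((K + safety_symbols) / (1.0 - loss_rate))

# Example: a fragment needing 1000 symbols with a 3% estimated loss rate.
print(symbols_to_request(1000, 0.03))  # 1033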
Abstract:
A client device includes one or more processors configured to receive, from a server device, forward-error corrected data via a plurality of parallel network paths, determine losses of the data over each of the network paths, and send data representing the losses of the data over each of the network paths to the server device. Additionally or alternatively, a client device includes one or more processors configured to receive a first set of encoding units for a first block, wherein the first set of encoding units includes fewer than a minimum number of encoding units needed to recover the first block, after receiving the first set of encoding units, receive a second set of encoding units for a second block, and after receiving the second set of encoding units, receive a third set of encoding units including one or more encoding units for the first block.
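For illustration, a small sketch of the receive-side bookkeeping this implies, assuming each block is recoverable once a known minimum number of encoding units has arrived and that units for different blocks may arrive interleaved (some units for a first block, then a second block, then more for the first); the class and its interface are illustrative only.

from collections import defaultdict

class BlockReceiver:
    """Sketch: track encoding units per block and report when a block has
    enough units to be recovered, even when units for different blocks
    arrive interleaved."""

    def __init__(self, min_units_per_block):
        self.min_units = min_units_per_block   # units needed to recover a block
        self.received = defaultdict(set)       # block id -> set of unit ids

    def on_unit(self, block_id, unit_id):
        self.received[block_id].add(unit_id)
        return self.is_recoverable(block_id)

    def is_recoverable(self, block_id):
        return len(self.received[block_id]) >= self.min_units

rx = BlockReceiver(min_units_per_block=3)
rx.on_unit(1, 0); rx.on_unit(1, 1)      # first set, for block 1: not yet enough
rx.on_unit(2, 0); rx.on_unit(2, 1)      # second set, for block 2
rx.on_unit(1, 2)                        # third set completes block 1
print(rx.is_recoverable(1), rx.is_recoverable(2))  # True False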
Abstract:
A block-request streaming system provides improvements in user experience and bandwidth efficiency, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like), wherein the ingestion system intakes content and prepares it as files or data elements to be served by the file server. A client device can be adapted to take advantage of the ingestion process. The client device might be configured to optimize use of resources, given the information available to it from the ingestion system. This may include configurations to determine the sequence, timing, and construction of block requests based on monitoring buffer size and rate of change of buffer size, use of variable sized requests, mapping of block requests to underlying transport connections, flexible pipelining of requests, and/or use of whole file requests based on statistical considerations.
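For illustration, a rough sketch of how a client could pick the next block request from the buffer level and its rate of change, as the abstract suggests; the thresholds and the mapping to representation bitrates are assumptions of this sketch, not the disclosed configuration.

def pick_bitrate(available_bitrates, buffer_seconds, buffer_trend):
    """Sketch: choose a representation for the next block request.

    available_bitrates -- sorted list of representation bitrates (bits/s)
    buffer_seconds     -- current playback buffer, in seconds of media
    buffer_trend       -- recent rate of change of the buffer (seconds gained
                          per second of wall-clock time; negative = draining)

    A large, growing buffer justifies a higher-bitrate request; a small or
    draining buffer pushes the choice toward lower bitrates.
    """
    if buffer_seconds < 5 or buffer_trend < -0.5:
        return available_bitrates[0]            # protect against stalls
    if buffer_seconds > 20 and buffer_trend >= 0:
        return available_bitrates[-1]           # plenty of margin
    return available_bitrates[len(available_bitrates) // 2]

bitrates = [400_000, 1_200_000, 3_000_000]
print(pick_bitrate(bitrates, buffer_seconds=8, buffer_trend=0.1))   # 1200000
print(pick_bitrate(bitrates, buffer_seconds=3, buffer_trend=-0.2))  # 400000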
Abstract:
Systems and methods which implement repair bandwidth control techniques, such as may provide a feedback control structure for regulating repair bandwidth in a storage system, are disclosed. Embodiments control a source object repair rate in a storage system by analyzing source objects represented in a repair queue to determine repair rate metrics for the source objects and determining a repair rate based on the repair rate metrics to provide a determined level of recovery of source data stored by the source objects and to provide a determined level of repair efficiency in the storage system. For example, embodiments may determine a per source object repair rate (e.g., a repair rate preference for each of a plurality of source objects) and select a particular repair rate (e.g., a maximum repair rate) for use by a repair policy. Thereafter, the repair policy of embodiments may implement repair of one or more source objects in accordance with the repair rate.
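For illustration, a minimal sketch of the per-object repair-rate idea, assuming each queued source object contributes a preferred rate derived from how few additional fragment losses it can tolerate and how much data must be repaired, with the policy running at the maximum of those preferences; the specific formula and time horizon are assumptions of this sketch.

def preferred_repair_rate(tolerable_losses, repair_bytes, time_horizon_s):
    """Sketch: repair-rate preference for one source object.

    The fewer additional fragment losses the object can tolerate, the less
    time is allowed for its repair, so the required rate (bytes/s) goes up.
    """
    urgency = 1.0 / max(tolerable_losses, 1)
    return repair_bytes * urgency / time_horizon_s

def select_repair_rate(repair_queue, time_horizon_s=3600.0):
    """Sketch: the policy runs at the maximum of the per-object preferences,
    so the object most at risk sets the pace."""
    return max(preferred_repair_rate(t, b, time_horizon_s)
               for (t, b) in repair_queue)

# Each entry: (additional fragment losses tolerable, bytes needing repair)
queue = [(5, 2_000_000_000), (2, 1_000_000_000)]
print(select_repair_rate(queue))  # ~138889 B/s: the more at-risk object dominates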
Abstract:
Content (e.g., multimedia streams, audio-video streams, video files, text, etc.) may be delivered to receiver devices over a broadcast channel and/or via a broadcast network via components (e.g., servers, receiver devices, software applications, modules, processes, etc.) configured to communicate the content in a manner that reduces the amount of information communicated over the broadcast network, reduces the amount of network bandwidth consumed by the communication, meets precise timing requirements for the individual objects that are communicated, and enables each receiver device to receive, decode, and render the content without consuming an excess amount of that receiver device's battery or processing resources.
Abstract:
Systems and methods which are adapted to provide transport accelerator operation through the use of user agent (UA) signaling are disclosed. In operation according to embodiments, a transport accelerator (TA) analyzes content requests to determine if the content request includes an indication that transport acceleration functionality is to be provided. If such an indication is present, the TA further analyzes the content request to determine if transport acceleration functionality will be provided.
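For illustration, a tiny sketch of the kind of two-stage check described, assuming the UA signals acceleration through a request header; the header name "X-Accelerate" and the eligibility rule used in the second stage are hypothetical, chosen only to show the shape of the decision.

def should_accelerate(request_headers, content_length, min_size=1_000_000):
    """Sketch of the two-stage decision: first look for the UA's signal that
    acceleration is wanted, then decide whether acceleration will actually
    be provided for this request (here, only for sufficiently large content).
    """
    # Stage 1: does the content request carry an acceleration indication?
    if request_headers.get("X-Accelerate", "").lower() != "true":  # hypothetical header
        return False
    # Stage 2: further analysis -- e.g., small objects gain little from acceleration.
    return content_length >= min_size

print(should_accelerate({"X-Accelerate": "true"}, 5_000_000))  # True
print(should_accelerate({}, 5_000_000))                        # False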