Abstract:
Processes and systems are disclosed for selecting a producer system from a number of producer systems to lease to a consumer system. A leasing agent, in response to a request from the consumer system for access to a service at a producer system, can identify a producer system to lease to the lease requestor based, at least in part, on a selection weight associated with each producer system assigned to the leasing agent. The selection weights can be modified based on status information associated with each of the producer systems. This status information may be obtained from the producer systems and/or from a consumer system that has previously accessed the producer system. The consumer system may provide the status information to the leasing agent as part of its lease request.
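A minimal Python sketch of the weight-based selection described above; the class, field names, and the load-based weighting rule are hypothetical and only illustrate how status information could adjust selection weights before a producer is chosen.

```python
import random

class LeasingAgent:
    """Illustrative leasing agent: weight-proportional producer selection."""

    def __init__(self, producer_weights):
        # Map of producer_id -> selection weight for producers assigned to this agent.
        self.weights = dict(producer_weights)

    def update_weight(self, producer_id, status):
        # Adjust the selection weight from status info reported by the producer
        # or piggybacked on a consumer's lease request; the scaling rule here
        # (load-based discount) is purely illustrative.
        load = status.get("load", 0.0)          # 0.0 = idle, 1.0 = saturated
        self.weights[producer_id] = max(0.1, 1.0 - load)

    def select_producer(self, lease_request):
        # Fold any status info carried in the lease request into the weights,
        # then pick a producer with probability proportional to its weight.
        for producer_id, status in lease_request.get("status_reports", {}).items():
            if producer_id in self.weights:
                self.update_weight(producer_id, status)
        producers = list(self.weights)
        return random.choices(producers, weights=[self.weights[p] for p in producers])[0]

# Example: a consumer reports that producer "p2" looked heavily loaded last time.
agent = LeasingAgent({"p1": 1.0, "p2": 1.0, "p3": 1.0})
request = {"status_reports": {"p2": {"load": 0.9}}}
print(agent.select_producer(request))
```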
Abstract:
Some embodiments facilitate high performance packet-processing by enabling one or more processors that perform packet-processing to determine whether to enter an idle state or similar state. As network packets usually arrive or are transmitted in batches, the processors of some embodiments determine that more packets may be coming down a multi-stage pipeline upon receiving a first packet for processing. As a result, the processors may stay awake for a duration of time in anticipation of an incoming packet. Some embodiments keep track of the last packet that entered the first stage of the pipeline and compare that with a packet that the processor just processed in a pipeline stage to determine whether there may be more packets coming that need processing. In some embodiments, a processor may also look at a queue length of a queue associated with an upstream stage to determine whether more packets may be coming.
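A short Python sketch of the "should the processor stay awake" check described above, assuming a hypothetical Pipeline class and per-stage queues; it compares the last packet that entered the first stage against the packet just processed, and falls back to inspecting the upstream stage's queue length.

```python
import collections

class Pipeline:
    """Sketch of the 'more packets may be coming' check (names hypothetical)."""

    def __init__(self, num_stages):
        self.last_ingress_seq = -1                        # last packet to enter stage 0
        self.queues = [collections.deque() for _ in range(num_stages)]

    def should_stay_awake(self, stage_index, last_processed_seq):
        # A later packet has already entered the first stage, so it will
        # eventually reach this stage: keep the processor awake.
        if self.last_ingress_seq > last_processed_seq:
            return True
        # Otherwise, check the queue length of the upstream stage; a non-empty
        # upstream queue also suggests more packets are on the way.
        if stage_index > 0 and len(self.queues[stage_index - 1]) > 0:
            return True
        return False

# Example: stage 2 just finished packet 41, but packet 42 has already entered
# stage 0, so the processor running stage 2 should not enter an idle state yet.
pipeline = Pipeline(num_stages=3)
pipeline.last_ingress_seq = 42
print(pipeline.should_stay_awake(stage_index=2, last_processed_seq=41))   # True
```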
Abstract:
High-speed processing of packets to, and from, a virtualization environment can be provided while utilizing hardware-based segmentation offload and other such functionality. A hardware vendor such as a network interface card (NIC) manufacturer can enable the hardware to support open and proprietary stateless tunneling in conjunction with a protocol such as single root I/O virtualization (SR-IOV) in order to implement a virtualized overlay network. The hardware can utilize various rules that the NIC applies to perform certain actions, such as encapsulating egress packets and decapsulating ingress packets.
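A rough Python sketch of the kind of rule table the NIC could apply for an overlay network; the rule fields (VTEP addresses, VNI) and packet representation are assumptions, not a description of any specific hardware format.

```python
class OverlayRuleTable:
    """Sketch of NIC-style rules for a virtualized overlay (fields hypothetical)."""

    def __init__(self):
        # Each rule maps a guest source/destination pair to tunnel endpoints.
        self.rules = {}

    def add_rule(self, vm_mac, remote_vm_mac, local_vtep, remote_vtep, vni):
        self.rules[(vm_mac, remote_vm_mac)] = {
            "local_vtep": local_vtep, "remote_vtep": remote_vtep, "vni": vni}

    def encapsulate_egress(self, packet):
        # Wrap a packet leaving a guest VM in an outer tunnel header so it can
        # cross the physical substrate network.
        rule = self.rules.get((packet["src_mac"], packet["dst_mac"]))
        if rule is None:
            return None                      # no matching rule: punt to software path
        return {"outer_src": rule["local_vtep"], "outer_dst": rule["remote_vtep"],
                "vni": rule["vni"], "inner": packet}

    def decapsulate_ingress(self, frame):
        # Strip the outer header and hand the inner packet to the right guest.
        return frame["inner"]

table = OverlayRuleTable()
table.add_rule("aa:aa", "bb:bb", local_vtep="10.0.0.1", remote_vtep="10.0.0.2", vni=5001)
egress = table.encapsulate_egress({"src_mac": "aa:aa", "dst_mac": "bb:bb", "payload": b"hi"})
print(table.decapsulate_ingress(egress)["payload"])
```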
Abstract:
A client request, formatted in accordance with a file system interface, is received at an access subsystem of a distributed multi-tenant storage service. After the request is authenticated at the access subsystem, an atomic metadata operation comprising a group of file system metadata modifications is initiated, including a first metadata modification at a first node of a metadata subsystem of the storage service and a second metadata modification at a second node of the metadata subsystem. A plurality of replicas of at least one data modification corresponding to the request are saved at respective storage nodes of the service.
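A small Python sketch of the atomic metadata operation spanning two metadata nodes; the prepare/commit scheme and class names are illustrative assumptions used to show how a group of metadata modifications could commit together or not at all (data replication to storage nodes is omitted).

```python
class MetadataNode:
    """Toy metadata node supporting prepare/commit of a single modification."""

    def __init__(self):
        self.committed = {}
        self.pending = {}

    def prepare(self, txn_id, key, value):
        self.pending[txn_id] = (key, value)
        return True

    def commit(self, txn_id):
        key, value = self.pending.pop(txn_id)
        self.committed[key] = value

    def abort(self, txn_id):
        self.pending.pop(txn_id, None)

def atomic_metadata_op(txn_id, modifications):
    # Apply a group of (node, key, value) metadata modifications atomically:
    # prepare everywhere first, then commit, aborting all if any prepare fails.
    prepared = []
    for node, key, value in modifications:
        if not node.prepare(txn_id, key, value):
            for p in prepared:
                p.abort(txn_id)
            return False
        prepared.append(node)
    for node in prepared:
        node.commit(txn_id)
    return True

# Example: a file create touches a directory entry on one metadata node and the
# new inode record on another; both changes become visible together.
dir_node, inode_node = MetadataNode(), MetadataNode()
ok = atomic_metadata_op("txn-1", [(dir_node, "/home/f.txt", "inode-9"),
                                  (inode_node, "inode-9", {"size": 0})])
print(ok, dir_node.committed, inode_node.committed)
```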
Abstract:
A request for a session identifier for a particular client is transmitted from an access subsystem of a storage service to a metadata subsystem of the service. A session identifier based on a persistent session storage location at which metadata of the client session are stored is received at the access subsystem. The session identifier is cached at the access subsystem prior to its transmission to the client. A lock state indicator generated by the metadata subsystem in response to a particular request from the client during the client session may also be cached at the access subsystem. Subsequent storage requests from the client during the session may be handled by the access subsystem using the cached session identifier and lock state indicator.
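A Python sketch of the caching behavior at the access subsystem; the class names, the "granted" lock value, and the fake metadata subsystem are assumptions used to show the session identifier and lock state being cached and then reused for later requests in the session.

```python
class AccessNode:
    """Sketch of session-ID and lock-state caching at the access subsystem."""

    def __init__(self, metadata_subsystem):
        self.metadata = metadata_subsystem
        self.session_cache = {}      # client_id -> session_id
        self.lock_state_cache = {}   # (session_id, object_id) -> lock state indicator

    def open_session(self, client_id):
        # Ask the metadata subsystem for a session ID; the ID is derived from the
        # persistent location where the session's metadata is stored.
        session_id = self.metadata.create_session(client_id)
        self.session_cache[client_id] = session_id      # cache before replying to client
        return session_id

    def acquire_lock(self, client_id, object_id):
        session_id = self.session_cache[client_id]
        lock_state = self.metadata.lock(session_id, object_id)
        self.lock_state_cache[(session_id, object_id)] = lock_state
        return lock_state

    def handle_write(self, client_id, object_id, data):
        # Later requests in the session are validated from the local caches,
        # avoiding a round trip to the metadata subsystem.
        session_id = self.session_cache.get(client_id)
        lock_state = self.lock_state_cache.get((session_id, object_id))
        if session_id is None or lock_state != "granted":
            raise PermissionError("no valid cached session or lock")
        return f"wrote {len(data)} bytes for {client_id}"

class FakeMetadataSubsystem:
    def create_session(self, client_id):
        # The session ID encodes the persistent storage location of the session metadata.
        return f"session@record/{client_id}"

    def lock(self, session_id, object_id):
        return "granted"

node = AccessNode(FakeMetadataSubsystem())
node.open_session("client-1")
node.acquire_lock("client-1", "file-7")
print(node.handle_write("client-1", "file-7", b"hello"))
```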
Abstract:
In response to receiving a write request directed to a particular logical block of a storage object, a page of free space (sufficient to accommodate the payload of the write request, but smaller in size than the logical block) of a particular extent that has been selected to store contents of the logical block is allocated. The current size of the extent is smaller than the combined sizes of logical blocks that are mapped to the extent. The page is modified in accordance with a payload indicated in the write request. In response to a subsequent write request directed to the particular extent, a determination is made that the particular extent would violate a free space threshold criterion if the payload of the write request were accommodated, and an extent expansion operation is initiated.
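A toy Python sketch of page allocation within an undersized extent and the free-space check that triggers expansion; the page size, threshold, and growth increment are illustrative assumptions.

```python
class Extent:
    """Toy extent that starts smaller than the logical blocks mapped to it."""

    def __init__(self, size_pages, free_threshold=0.2, growth_pages=64):
        self.size_pages = size_pages
        self.used_pages = 0
        self.free_threshold = free_threshold   # minimum acceptable fraction of free pages
        self.growth_pages = growth_pages

    def free_fraction_after(self, pages_needed):
        return (self.size_pages - self.used_pages - pages_needed) / self.size_pages

    def allocate_for_write(self, payload_bytes, page_size=4096):
        # Allocate only enough pages for the payload, not a whole logical block.
        pages_needed = -(-payload_bytes // page_size)   # ceiling division
        if self.free_fraction_after(pages_needed) < self.free_threshold:
            self.expand()                               # on-demand extent expansion
        self.used_pages += pages_needed
        return pages_needed

    def expand(self):
        # An expansion could instead migrate the extent to a larger device;
        # here it simply grows in place.
        self.size_pages += self.growth_pages

extent = Extent(size_pages=128)
print(extent.allocate_for_write(payload_bytes=8192))   # 2 pages, no expansion needed yet
```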
Abstract:
A write request directed to a storage object is received at a distributed file storage service. Based on a variable stripe size selection policy, a size of a particular stripe of storage space to be allocated for the storage object is determined, which differs from the size of another stripe allocated earlier for the same storage object. Allocation of storage for the particular stripe at a particular storage device is requested, and if the allocation succeeds, the contents of the storage device are modified in accordance with the write request.
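A brief Python sketch of a variable stripe size selection policy and the allocate-then-modify flow; the size tiers and the FakeDevice helper are assumptions chosen only to illustrate stripes growing as the object grows.

```python
def select_stripe_size(object_size_bytes, stripe_index):
    # Illustrative policy: small objects get small stripes to avoid wasting
    # space; later stripes of the same object are larger to reduce per-stripe
    # metadata and allocation overhead.
    KB, MB = 1024, 1024 * 1024
    if stripe_index == 0 or object_size_bytes < 1 * MB:
        return 64 * KB
    if object_size_bytes < 64 * MB:
        return 1 * MB
    return 4 * MB

def handle_write(storage_devices, obj, payload):
    # Determine the size of the next stripe, then ask a storage device to
    # allocate it; the device is modified only if the allocation succeeds.
    stripe_size = select_stripe_size(obj["size"], len(obj["stripes"]))
    for device in storage_devices:
        if device.allocate(stripe_size):
            obj["stripes"].append({"device": device, "size": stripe_size})
            device.write(payload)
            obj["size"] += len(payload)
            return True
    return False

class FakeDevice:
    def __init__(self, capacity):
        self.capacity = capacity
    def allocate(self, size):
        if self.capacity >= size:
            self.capacity -= size
            return True
        return False
    def write(self, payload):
        pass

obj = {"size": 0, "stripes": []}
print(handle_write([FakeDevice(capacity=10 * 1024 * 1024)], obj, b"x" * 1000))
```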
Abstract:
Systems and methods for the management of migrations of virtual machine instances are provided. A migration manager monitors the resource usage of a virtual machine instance over time in order to create a migration profile. When migration of a virtual machine instance is desired, the migration manager schedules the migration to occur such that it conforms to the migration profile.
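A Python sketch of profile-based migration scheduling; the hourly averaging and the low-usage threshold are hypothetical choices used to show how monitored usage could build a profile that picks a quiet time window for migration.

```python
class MigrationManager:
    """Sketch of profile-based migration scheduling (thresholds hypothetical)."""

    def __init__(self, low_usage_threshold=0.3):
        self.samples = {}                  # instance_id -> list of (hour, cpu_usage)
        self.low_usage_threshold = low_usage_threshold

    def record_usage(self, instance_id, hour, cpu_usage):
        # Monitoring over time builds a per-instance usage profile.
        self.samples.setdefault(instance_id, []).append((hour, cpu_usage))

    def migration_profile(self, instance_id):
        # Average usage per hour of day; migrations should land in quiet hours.
        by_hour = {}
        for hour, usage in self.samples.get(instance_id, []):
            by_hour.setdefault(hour, []).append(usage)
        return {h: sum(v) / len(v) for h, v in by_hour.items()}

    def schedule_migration(self, instance_id):
        profile = self.migration_profile(instance_id)
        quiet_hours = [h for h, u in sorted(profile.items()) if u < self.low_usage_threshold]
        return quiet_hours[0] if quiet_hours else None   # hour of day to migrate, or defer

mgr = MigrationManager()
for day in range(3):
    mgr.record_usage("i-123", hour=3, cpu_usage=0.1)    # consistently quiet at 03:00
    mgr.record_usage("i-123", hour=14, cpu_usage=0.8)   # busy mid-afternoon
print(mgr.schedule_migration("i-123"))                  # 3
```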
Abstract:
Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.
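A simplified Python sketch of the gateway's write-back caching behavior; the chunk size, class names, and fake remote service are assumptions, and for brevity a write caches the whole chunk rather than tracking which portions are valid, which a real appliance would need to do.

```python
class StorageGateway:
    """Sketch of a write-back cache in front of a remote storage service."""

    CHUNK = 4 * 1024 * 1024       # chunk size is illustrative

    def __init__(self, remote_service, local_cache):
        self.remote = remote_service       # object with download/upload methods
        self.cache = local_cache           # dict: (object_id, chunk_index) -> bytearray
        self.dirty = []                    # modifications pending asynchronous upload

    def write(self, object_id, offset, data):
        # Store the modification locally and acknowledge immediately; the upload
        # to the remote storage service happens asynchronously via flush().
        chunk_index, chunk_offset = divmod(offset, self.CHUNK)
        chunk = self.cache.setdefault((object_id, chunk_index), bytearray(self.CHUNK))
        chunk[chunk_offset:chunk_offset + len(data)] = data
        self.dirty.append((object_id, chunk_index))

    def read(self, object_id, offset, length):
        # If the requested portion of the chunk is not cached locally, fetch it
        # from the remote service, cache it, then serve the client from local storage.
        chunk_index, chunk_offset = divmod(offset, self.CHUNK)
        key = (object_id, chunk_index)
        if key not in self.cache:
            self.cache[key] = bytearray(self.remote.download(object_id, chunk_index))
        return bytes(self.cache[key][chunk_offset:chunk_offset + length])

    def flush(self):
        # Background task: push dirty chunks up to the storage service.
        while self.dirty:
            object_id, chunk_index = self.dirty.pop(0)
            self.remote.upload(object_id, chunk_index, bytes(self.cache[(object_id, chunk_index)]))

class FakeRemote:
    def download(self, object_id, chunk_index):
        return b"\x00" * StorageGateway.CHUNK
    def upload(self, object_id, chunk_index, data):
        pass

gw = StorageGateway(FakeRemote(), {})
gw.write("vol-1", offset=0, data=b"hello")
print(gw.read("vol-1", offset=0, length=5))
gw.flush()
```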