Abstract:
An example method for leveraging hardware accelerators for scalable distributed stream processing in a network environment is provided. The method includes allocating a plurality of hardware accelerators to a corresponding plurality of bolts of a distributed stream in a network, facilitating a handshake between the hardware accelerators and the corresponding bolts to allow the hardware accelerators to execute respective processing logic according to the corresponding bolts, and performing elastic allocation of hardware accelerators and load balancing of stream processing in the network. The distributed stream comprises a topology of at least one spout and the plurality of bolts. In specific embodiments, the allocating includes receiving capability information from the bolts and the hardware accelerators, and mapping the hardware accelerators to the bolts based on the capability information. In some embodiments, facilitating the handshake includes executing a shadow process to interface between the hardware accelerator and the distributed stream.
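The capability-based mapping step can be pictured with a short sketch. The Python below is purely illustrative and assumes a Storm-style spout/bolt topology; the class and function names (Bolt, Accelerator, map_accelerators_to_bolts) and the greedy matching policy are assumptions for illustration, not the patented mechanism.

```python
# Illustrative sketch: map accelerators to bolts based on advertised capabilities.
# All names and the greedy policy are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bolt:
    name: str
    required_capabilities: set        # e.g. {"regex", "crypto"}

@dataclass
class Accelerator:
    device_id: str
    capabilities: set                 # capabilities reported by the device
    assigned_bolt: Optional[str] = None

def map_accelerators_to_bolts(bolts, accelerators):
    """Greedy mapping: each bolt gets the first unassigned accelerator whose
    advertised capabilities cover the bolt's requirements."""
    mapping = {}
    for bolt in bolts:
        for acc in accelerators:
            if acc.assigned_bolt is None and bolt.required_capabilities <= acc.capabilities:
                acc.assigned_bolt = bolt.name
                mapping[bolt.name] = acc.device_id
                break
    return mapping

if __name__ == "__main__":
    bolts = [Bolt("parse", {"regex"}), Bolt("encrypt", {"crypto"})]
    accelerators = [Accelerator("fpga-0", {"regex", "compress"}),
                    Accelerator("fpga-1", {"crypto"})]
    print(map_accelerators_to_bolts(bolts, accelerators))
    # -> {'parse': 'fpga-0', 'encrypt': 'fpga-1'}
```

In such a sketch, the shadow process mentioned in the abstract would sit between each mapped accelerator and the stream runtime, but its protocol is not specified here and is therefore omitted.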
Abstract:
Techniques are provided to generate and store a network graph database comprising information that indicates a service node topology and the virtual or physical network services available at each node in a network. A service request is received for services to be performed on packets traversing the network between at least first and second endpoints. A subset of the network graph database is determined that can provide the services requested in the service request. A service chain and a service chain identifier are generated for the service based on the network graph database subset. A flow path is established through the service chain by flow programming network paths between the first and second endpoints using the service chain identifier.
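One way to picture the graph-subset and chain-generation steps is the toy sketch below. The node names, service labels, dictionary-based graph, and hash-derived chain identifier are all assumptions made for illustration; the abstract does not prescribe any of them.

```python
# Illustrative sketch: a toy service-node graph and a chain built from it.
import hashlib

network_graph = {
    "node-a": {"services": {"firewall"}, "links": {"node-b"}},
    "node-b": {"services": {"nat"},      "links": {"node-a", "node-c"}},
    "node-c": {"services": {"ids"},      "links": {"node-b"}},
}

def build_service_chain(requested_services):
    """Pick one node per requested service (in order) to form a chain,
    then derive a chain identifier from the chosen hops."""
    chain = []
    for svc in requested_services:
        node = next((n for n, info in network_graph.items()
                     if svc in info["services"]), None)
        if node is None:
            raise ValueError(f"no node offers service {svc!r}")
        chain.append(node)
    chain_id = hashlib.sha1("->".join(chain).encode()).hexdigest()[:8]
    return chain, chain_id

chain, chain_id = build_service_chain(["firewall", "nat", "ids"])
print(chain, chain_id)   # e.g. ['node-a', 'node-b', 'node-c'] plus an 8-hex id
```

The chain identifier would then be carried in the flow-programming step so that packets between the endpoints are steered through the selected hops in order.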
Abstract:
Systems and methods are described for allocating resources in a cloud computing environment. The method includes receiving a computing request that requests use of at least one virtual machine and a portion of memory. In response to the request, a plurality of hosts is identified and a cost function is formulated using at least a portion of those hosts. Based on the cost function, at least one host that is capable of hosting the virtual machine and memory is selected.
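The abstract does not specify the form of the cost function, so the following is only a minimal sketch assuming a simple weighted CPU/memory cost in which the tightest feasible fit wins; the weights, fields, and function names are illustrative.

```python
# Illustrative sketch: cost-based selection of a host for a VM request.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_mem_gb: int

def cost(host, req_vcpus, req_mem_gb, w_cpu=1.0, w_mem=1.0):
    """Less leftover capacity after placement means tighter packing and lower cost."""
    return (w_cpu * (host.free_vcpus - req_vcpus) +
            w_mem * (host.free_mem_gb - req_mem_gb))

def select_host(hosts, req_vcpus, req_mem_gb):
    """Keep only hosts that can fit the request, then pick the lowest-cost one."""
    feasible = [h for h in hosts
                if h.free_vcpus >= req_vcpus and h.free_mem_gb >= req_mem_gb]
    if not feasible:
        return None
    return min(feasible, key=lambda h: cost(h, req_vcpus, req_mem_gb))

hosts = [Host("h1", 8, 32), Host("h2", 4, 16), Host("h3", 16, 64)]
print(select_host(hosts, 4, 8).name)   # "h2": the tightest feasible fit
```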
Abstract:
In one embodiment, a processor can receive data representing a view reflected by a mirror of a plurality of mirrors. The plurality of mirrors may be configured in a space to reflect a plurality of views of structures in the space. The mirror of the plurality of mirrors may include a uniquely identifiable feature distinguishable from other objects in the space. The processor can identify the mirror of the plurality of mirrors according to the uniquely identifiable feature. The processor can also determine an attribute of the structures according to the identified mirror and the data representing the view reflected by the mirror.
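As a rough illustration only, the sketch below assumes each mirror carries a unique marker registered with the structures it faces; the registry fields, marker names, and measured values are hypothetical and stand in for whatever uniquely identifiable feature and view data the embodiment uses.

```python
# Illustrative sketch: identify a mirror by its unique marker, then attribute
# the reflected-view measurement to the structures that mirror faces.
mirror_registry = {
    "marker-17": {"faces": "rack-row-3"},
    "marker-42": {"faces": "rack-row-7"},
}

def identify_mirror(view):
    """Match the uniquely identifiable feature seen in the view to a known mirror."""
    return mirror_registry.get(view["detected_marker"])

def structure_attribute(view):
    """Derive an attribute of the reflected structures from the identified
    mirror and the data representing the reflected view."""
    mirror = identify_mirror(view)
    if mirror is None:
        return None
    return {"structure": mirror["faces"],
            "apparent_height_m": view["measured_height_m"]}

view = {"detected_marker": "marker-17", "measured_height_m": 1.8}
print(structure_attribute(view))
# -> {'structure': 'rack-row-3', 'apparent_height_m': 1.8}
```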