Abstract:
A light guiding arrangement adapted to guide light along a deflection light path, the light guiding arrangement comprising a housing, wherein the housing comprises a light entering area; a light escaping area adapted to transmit guided light to an image sensor; a first lens arrangement arranged in between the light entering area and the light escaping area; and one or more light deflection elements arranged in between the light entering area and the light escaping area. The one or more light deflection elements are adapted to form the deflection light path in between the light entering area and the light escaping area. The housing is formed by two or more operatively interconnected components configured to deform the housing.
Abstract:
A media data preparation device adapted to receive media data includes at least one processor, and at least one non-transitory memory having computer program code stored thereon for execution by the at least one processor. The computer program code includes instructions to receive a set of metadata that is based on at least one spatial coordinate, where the set of metadata is associated with the media data, and to determine a representation of the media data in a virtual reality space based on the set of metadata.
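The claimed preparation flow can be sketched in Python; the class and field names below (`MediaPreparationDevice`, `SpatialMetadata`, `receive`) are illustrative assumptions rather than names from the patent, and the "representation" is reduced to a simple position record.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class SpatialMetadata:
    """Hypothetical metadata record: one spatial coordinate per media item."""
    media_id: str
    coordinate: Tuple[float, float, float]


class MediaPreparationDevice:
    """Sketch of the claimed device: receives media data plus metadata and
    derives a representation (here, a placement) in a VR space."""

    def __init__(self) -> None:
        self.representations: Dict[str, dict] = {}

    def receive(self, media_id: str, metadata: SpatialMetadata) -> dict:
        # The representation is determined from the spatial coordinate
        # in the metadata associated with the media item.
        x, y, z = metadata.coordinate
        rep = {"media": media_id, "position": (x, y, z)}
        self.representations[media_id] = rep
        return rep
```

A real device would map the coordinate into scene geometry; the dictionary here only shows the metadata-to-representation dependency.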
Abstract:
A hardware acceleration method includes obtaining compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing a code segment in the source code according to the compilation policy information; determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; compiling the first code segment into first executable code; sending the first executable code to the first processor; compiling the second code segment into second executable code; and sending the second executable code to the second processor.
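A minimal sketch of the claimed analyze-compile-dispatch loop, assuming a toy policy that maps code types to processor names; `dispatch_segments` and the `exec(...)` stand-in for compiled code are hypothetical, not the patent's implementation.

```python
from typing import Dict, List, Tuple


def dispatch_segments(source_segments: List[Tuple[str, str]],
                      policy: Dict[str, str]) -> Dict[str, List[str]]:
    """Classify each (code_type, segment) pair according to the compilation
    policy and route the "compiled" result to the matching processor.

    policy maps a code type to a processor name,
    e.g. {"serial": "cpu", "parallel": "fpga"}.
    """
    compiled: Dict[str, List[str]] = {proc: [] for proc in policy.values()}
    for seg_type, seg in source_segments:
        proc = policy[seg_type]          # which processor this type matches
        executable = f"exec({seg})"      # stand-in for real compilation
        compiled[proc].append(executable)
    return compiled
```

The point of the sketch is the routing: each segment is compiled once and sent only to the processor its type matches.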
Abstract:
An application deployment method and a scheduler are disclosed. The method includes: receiving, by a scheduler, an application deployment request sent for a first application by a cloud controller of a first cloud; after receiving the application deployment request, sending, by the scheduler, a first query message to a cloud controller of a second cloud, and sending a second query message to a cloud controller of a third cloud; determining, by the scheduler, a target calculation unit from at least one calculation unit that is obtained by querying using the first query message and the second query message and that has a first calculation capability; and deploying, by the scheduler, the first application to the target calculation unit.
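Under the assumption that each queried cloud controller answers with its list of calculation units, the scheduler's selection step might look like the following sketch; the first-match selection policy and all names are illustrative, not from the patent.

```python
from typing import List, Optional


def schedule(app: str, clouds: List[dict], required_capability: str) -> Optional[str]:
    """Pick a target calculation unit that has the required capability from
    the units reported by the queried clouds, and deploy the app to it.

    Each cloud is assumed to look like:
        {"units": [{"name": "u1", "capabilities": ["gpu", ...]}, ...]}
    """
    candidates = []
    for cloud in clouds:
        for unit in cloud["units"]:
            if required_capability in unit["capabilities"]:
                candidates.append(unit)
    if not candidates:
        return None  # no calculation unit has the required capability
    target = candidates[0]  # assumed policy: first matching unit wins
    target.setdefault("apps", []).append(app)  # "deploy" the application
    return target["name"]
```

A production scheduler would rank candidates (load, locality, cost); first-match keeps the sketch focused on the query-then-deploy flow.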
Abstract:
Embodiments of the present invention disclose a method, a device, and a system for obtaining a cost between nodes. The method includes: receiving, by a first server, a first cost request message from a client, where the first cost request message includes a first source node list, a first candidate node list, and a cost type; and calculating a cost between each source node in the first source node list and each candidate node in the first candidate node list according to the cost type. Network traffic management and optimization are implemented by using the technical solutions provided in the embodiments of the present invention, in which a cost between nodes is acquired from servers hierarchically deployed by an Internet service provider (ISP) and used as a basis for node selection.
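The pairwise cost calculation can be sketched as a comprehension over the two node lists; `cost_map`, `hop_cost`, and the hop table are hypothetical stand-ins for the server's real metric and topology data.

```python
from typing import Callable, Dict, List, Tuple


def cost_map(sources: List[str], candidates: List[str], cost_type: str,
             cost_fn: Callable[[str, str, str], int]) -> Dict[Tuple[str, str], int]:
    """Compute a cost for every (source, candidate) pair per the cost type."""
    return {(s, c): cost_fn(s, c, cost_type)
            for s in sources for c in candidates}


# Assumed example metric: a static hop-count table (a real server would
# derive this from ISP topology data).
HOPS = {("s1", "c1"): 2, ("s1", "c2"): 5}


def hop_cost(src: str, dst: str, cost_type: str) -> int:
    assert cost_type == "hopcount"  # only metric this toy table supports
    return HOPS[(src, dst)]
```

The client can then pick the candidate with the lowest cost for each source, which is the node-selection basis the abstract describes.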
Abstract:
Example packet control methods and apparatus are described. One example method includes detecting a packet flow causing a congestion status change. A congestion isolation message is generated and is used to change a priority of a packet in the packet flow. The congestion isolation message includes description information of the packet flow. The congestion isolation message is sent to at least one node.
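A node receiving the congestion isolation message could lower the described flow's priority as in this sketch; the message layout and the fixed low-priority value are assumptions made for illustration.

```python
from typing import Dict


def build_isolation_message(flow_desc: str) -> dict:
    """Congestion isolation message carrying the flow's description info."""
    return {"type": "congestion_isolation", "flow": flow_desc}


def handle_isolation(queue_priorities: Dict[str, int], msg: dict,
                     low_priority: int = 7) -> Dict[str, int]:
    """On receipt, demote the described flow to a low-priority queue so it
    no longer competes with well-behaved traffic."""
    queue_priorities[msg["flow"]] = low_priority
    return queue_priorities
```

Sending the message to upstream nodes lets them demote the flow before it reaches the congested point, which is the isolation effect the abstract describes.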
Abstract:
A flow control method includes: when congestion is detected, determining, by a first switching device, a key flow from a plurality of data flows; generating a back pressure message including a flow attribute value of the key flow; sending the back pressure message to an upstream device of the key flow; and pausing, by the upstream device of the key flow, sending of the key flow, where the back pressure message has no impact on sending of another data flow other than the key flow by the upstream device of the key flow. The present disclosure further provides a switching device that can implement the flow control method.
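The selective pause at the upstream device can be sketched as follows, assuming the back pressure message carries the key flow's attribute value; `UpstreamDevice` and its methods are illustrative names, not from the disclosure.

```python
from typing import List


class UpstreamDevice:
    """Sketch of the upstream device: pauses only the key flow named in a
    back pressure message, leaving other data flows unaffected."""

    def __init__(self, flows: List[str]) -> None:
        self.flows = flows
        self.paused: set = set()

    def on_back_pressure(self, msg: dict) -> None:
        # Pause exactly the flow whose attribute value the message carries.
        self.paused.add(msg["flow_attr"])

    def sendable(self) -> List[str]:
        # Every flow except the paused key flow keeps sending.
        return [f for f in self.flows if f not in self.paused]
```

Contrast with port-level pause (e.g. classic PFC), which would stop every flow sharing the port; here only the key flow is held back.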
Abstract:
A hardware acceleration method, a compiler, and a device are disclosed to improve code execution efficiency and implement hardware acceleration. The method includes: obtaining, by a compiler, compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing, by the compiler, a code segment in the source code according to the compilation policy information, and determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; compiling, by the compiler, the first code segment into first executable code, and sending the first executable code to the first processor; and compiling the second code segment into second executable code, and sending the second executable code to the second processor.
Abstract:
Embodiments of this application relate to the field of communications technologies, and disclose a flow control method and apparatus, to resolve a prior-art problem such as packet loss, packet accumulation, or network congestion that occurs after a packet is switched between priority queues. A specific solution is as follows: A first device receives a first packet sent by a second device, where the first packet carries a first field and a second field, the first field carries a first priority, and the second field carries a second priority; the first device performs flow control based on the first priority in the first packet; and the first device performs queue scheduling on the first packet based on the second priority in the first packet.
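The split between the two priority fields can be sketched as a PFC-style gate keyed on the first priority plus queue selection keyed on the second; the packet layout and function names are assumptions for illustration.

```python
from typing import Dict, List, Set


def flow_control(packet: dict, pfc_paused: Set[int]) -> bool:
    """Flow control decision keyed on the FIRST priority field: the packet
    may be sent only if its first priority is not currently paused."""
    return packet["prio1"] not in pfc_paused


def enqueue(queues: Dict[int, List[dict]], packet: dict) -> None:
    """Queue scheduling keyed on the SECOND priority field: the packet is
    placed in the queue matching its second priority."""
    queues.setdefault(packet["prio2"], []).append(packet)
```

Because the two decisions read different fields, a packet switched between priority queues keeps a stable flow control priority, which is how the scheme avoids the loss/accumulation problem the abstract names.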
Abstract:
A path computation method and a related device are disclosed. The method includes: a second network device receives path requirement information and a recomputation condition that are sent by a first network device; the second network device first computes a path meeting the requirement according to the path requirement information, and sends description information of the path to the first network device; the second network device then continually determines whether the recomputation condition is met; and when the recomputation condition is met, the second network device performs path recomputation and sends description information of the recomputed path to the first network device. Therefore, the sensitivity of triggering path recomputation can be improved, and the quantity of communication messages between network devices can be reduced.
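The recomputation loop on the second network device can be sketched as follows, assuming the recomputation condition is evaluated against incoming network events; every name here is illustrative, not from the disclosure.

```python
from typing import Callable, List


def serve_path_requests(requirements: str,
                        recompute_condition: Callable[[str], bool],
                        compute_path: Callable[[str], List[str]],
                        network_events: List[str]) -> List[List[str]]:
    """Compute an initial path from the requirement information, then re-run
    the computation (and report the result) each time the condition that the
    first device registered holds for a network event."""
    paths = [compute_path(requirements)]        # initial path, sent once
    for event in network_events:
        if recompute_condition(event):          # condition met: recompute
            paths.append(compute_path(requirements))
    return paths                                 # one entry per message sent
```

Because the condition lives on the computing device, the requesting device does not have to poll; only actual recomputation results cross the network, matching the reduced message count the abstract claims.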