Abstract:
A device and method for controlling traffic transmission/reception in a network end terminal are provided. The method includes measuring a transmission/reception processing performance value of a first network stack and a transmission/reception processing performance value of a second network stack for each central processing unit (CPU) core, reserving the network performance required by an application on the basis of the transmission/reception processing performance values of the first and second network stacks measured for each CPU core, and allocating a CPU core corresponding to the reserved network performance to a networking thread of the application to control traffic transmission/reception.
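A minimal Python sketch of the core-allocation idea described above; the per-core throughput figures, the two stack names, and the reserve_core and run_networking_thread helpers are illustrative assumptions rather than the patented implementation.

# Hypothetical sketch: reserve per-core network throughput for an application
# and pin its networking thread to a CPU core that can satisfy the reservation.

import os
import threading

# Measured transmit/receive throughput (Mbit/s) per CPU core for two network
# stacks (e.g. a kernel stack and a user-level stack); values are illustrative.
perf_stack1 = {0: 4000, 1: 3800, 2: 9500, 3: 9400}
perf_stack2 = {0: 12000, 1: 11800, 2: 6000, 3: 5900}

reserved = {}  # core -> throughput (Mbit/s) already promised to applications


def reserve_core(required_mbps):
    """Pick a (core, stack) pair whose measured throughput, minus what is
    already reserved on that core, still covers the application's requirement."""
    for core in sorted(perf_stack1):
        for stack_name, perf in (("stack1", perf_stack1), ("stack2", perf_stack2)):
            available = perf[core] - reserved.get(core, 0)
            if available >= required_mbps:
                reserved[core] = reserved.get(core, 0) + required_mbps
                return core, stack_name
    raise RuntimeError("no CPU core can provide the requested network performance")


def run_networking_thread(required_mbps, worker):
    """Reserve performance, then run the application's networking thread
    pinned to the allocated core."""
    core, stack_name = reserve_core(required_mbps)

    def pinned():
        os.sched_setaffinity(0, {core})  # pin this thread to the core (Linux only)
        worker(stack_name)

    thread = threading.Thread(target=pinned)
    thread.start()
    return thread, core, stack_name

Here the reservation table simply tracks how much of each core's measured capacity has already been promised, and os.sched_setaffinity pins the application's networking thread to the core chosen for the reservation.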
Abstract:
To select a conference processing device to host a video conference between conference participation devices, a video conference system selects, as candidates, the conference processing devices positioned closest to each of the conference participation devices participating in the video conference. The video conference system forms a network topology from the candidate conference processing devices and the conference participation devices, and sorts the candidate conference processing devices based on preset alignment reference information. The video conference system then selects one of the sorted candidate conference processing devices as the optimal conference processing device to host the video conference.
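A rough Python sketch of this selection flow under assumed inputs; the device names, latency and load figures, and the particular sort key (worst-case participant latency, then current load) are stand-ins for whatever alignment reference information a real deployment would use.

# Hypothetical sketch: choose a conference processing device (media server)
# to host a conference among several participation devices.

# Measured latency (ms) from each participation device to each processing device.
latency = {
    "participant_A": {"server_1": 12, "server_2": 48, "server_3": 30},
    "participant_B": {"server_1": 55, "server_2": 9,  "server_3": 28},
    "participant_C": {"server_1": 40, "server_2": 35, "server_3": 11},
}

current_load = {"server_1": 0.7, "server_2": 0.2, "server_3": 0.4}  # fraction of capacity in use

# 1. For every participant, take the closest processing device as a candidate.
candidates = {min(servers, key=servers.get) for servers in latency.values()}

# 2. Form a simple topology: candidate -> latency to every participant.
topology = {server: [latency[p][server] for p in latency] for server in candidates}

# 3. Sort candidates by a preset criterion: worst-case participant latency,
#    with current load as a tie-breaker.
ranked = sorted(candidates, key=lambda s: (max(topology[s]), current_load[s]))

# 4. The first candidate after sorting hosts the conference.
optimal = ranked[0]
print("host conference on:", optimal)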
Abstract:
A processor partitions a deep neural network having a plurality of exit points and at least one partition point in the branch corresponding to each exit point, for distributed processing across an edge device and a cloud. The processor sets environment variables and training variables for the training, selects an action that moves at least one of the exit point and the partition point from the combination of exit point and partition point corresponding to the current state, performs the training by accumulating experience data using a reward obtained according to the selected action and then moving to a next state, and outputs the combination of the optimal exit point and partition point as the result of the training.
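A small Python sketch of one way such training could look, written here as tabular Q-learning over (exit point, partition point) states; the toy cost model, hyperparameters, and state-space sizes are assumptions for illustration, not the described training procedure itself.

# Hypothetical sketch: tabular Q-learning over (exit point, partition point)
# combinations of a multi-exit DNN split between an edge device and a cloud.

import random

NUM_EXITS = 3        # number of exit points (early-exit branches)
NUM_PARTITIONS = 5   # candidate partition points within a branch

# An action moves the exit point and/or the partition point by one step.
ACTIONS = [(de, dp) for de in (-1, 0, 1) for dp in (-1, 0, 1) if (de, dp) != (0, 0)]


def reward(state):
    """Toy reward: prefer combinations with low end-to-end latency. A real
    system would measure edge compute, cloud compute, and transfer time."""
    exit_pt, part_pt = state
    return -(10 * (NUM_EXITS - exit_pt) + 3 * abs(part_pt - 2))


def step(state, action):
    """Apply the action, clamping to valid exit/partition indices."""
    exit_pt = min(max(state[0] + action[0], 0), NUM_EXITS - 1)
    part_pt = min(max(state[1] + action[1], 0), NUM_PARTITIONS - 1)
    return exit_pt, part_pt


alpha, gamma, epsilon, episodes = 0.1, 0.9, 0.2, 2000  # training variables
Q = {}  # (state, action) -> estimated value, i.e. the accumulated experience

state = (0, 0)
for _ in range(episodes):
    # Epsilon-greedy action selection from the current combination.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    next_state = step(state, action)
    r = reward(next_state)
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    q_old = Q.get((state, action), 0.0)
    Q[(state, action)] = q_old + alpha * (r + gamma * best_next - q_old)
    state = next_state  # move to the next state

# Output the combination with the highest learned value.
best = max(
    ((e, p) for e in range(NUM_EXITS) for p in range(NUM_PARTITIONS)),
    key=lambda s: max(Q.get((s, a), float("-inf")) for a in ACTIONS),
)
print("optimal (exit point, partition point):", best)

After training, the state whose best action value is highest is reported as the optimal exit point and partition point combination.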