-
1.
Publication No.: US20220182933A1
Publication Date: 2022-06-09
Application No.: US17542301
Filing Date: 2022-02-24
Inventor: Jae Hyeon PARK , Young Deok PARK , Young Joo SUH
Abstract: An operation method of a first communication node in a communication system, according to an exemplary embodiment of the present disclosure for achieving the above-described objective, may comprise: transitioning to a down-clocking state; performing a monitoring operation in the down-clocking state; detecting reception of a first packet transmitted from a second communication node providing a service to the first communication node; identifying a first preamble included in the first packet; performing analysis on the first preamble; and based on a result of the analysis on the first preamble, determining whether to maintain the down-clocking state or transition to a full-clocking state.
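The clock-state decision described in the abstract above can be sketched as a minimal state machine. This is an illustrative sketch only: the names `ClockState` and `preamble_targets_me` are assumptions, and the patent's actual preamble analysis is far richer than a single boolean.

```python
from enum import Enum, auto

class ClockState(Enum):
    DOWN_CLOCKING = auto()   # low-power monitoring state
    FULL_CLOCKING = auto()   # full-rate reception state

def next_state(preamble_targets_me: bool, state: ClockState) -> ClockState:
    """Decide whether to keep the down-clocking state or transition to
    full clocking, based on analysis of the received packet's preamble.
    `preamble_targets_me` stands in for the patent's preamble analysis result."""
    if state is ClockState.DOWN_CLOCKING and preamble_targets_me:
        return ClockState.FULL_CLOCKING
    return state
```

A node in the down-clocking state stays there while monitored preambles are not addressed to it, and wakes to full clocking only when the analysis indicates the packet is relevant.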
-
2.
Publication No.: US20250068691A1
Publication Date: 2025-02-27
Application No.: US18799915
Filing Date: 2024-08-09
Inventor: Youngmi JIN , Dong Deok KIM , Young Joo SUH
IPC: G06F17/11
Abstract: The present disclosure relates to a multi-armed bandit (MAB) method and apparatus for selecting multiple items while ensuring fairness of exposure among the items and maximizing the average total reward. The MAB method includes: initializing the empirical mean reward and number of arm selections of each arm for the M arms, and the time step; incrementing the time step; calculating the UCB index of each arm for the M arms; selecting the K−1 arms with the highest UCB indices; calculating unfairness indices for the unchosen M−(K−1) arms; checking whether any of the unchosen M−(K−1) arms has a positive unfairness index; selecting the remaining single arm depending on whether such an arm exists; playing the selected K arms; and updating the empirical mean reward and the number of arm selections for the played arms.
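The selection loop in the abstract above can be sketched in a single step function. Two points are assumptions, not taken from the patent: the UCB index uses the standard UCB1 form, and the unfairness index is modeled as a target exposure rate times elapsed time minus the arm's pull count. Counts are assumed initialized to 1 (each arm played once) so the UCB term is well defined.

```python
import math

def fair_ucb_step(means, counts, t, K, target_rate, rewards_fn):
    """One time step of a fairness-aware top-K bandit (illustrative sketch)."""
    M = len(means)
    # UCB index per arm (standard UCB1 form; a modeling assumption)
    ucb = [means[m] + math.sqrt(2 * math.log(t) / counts[m]) for m in range(M)]
    # Select the K-1 arms with the highest UCB indices
    chosen = sorted(range(M), key=lambda m: ucb[m], reverse=True)[:K - 1]
    rest = [m for m in range(M) if m not in chosen]
    # Unfairness index: how far an arm lags its exposure target (assumed form)
    unfair = {m: target_rate * t - counts[m] for m in rest}
    positives = [m for m in rest if unfair[m] > 0]
    if positives:
        # An under-exposed arm exists: pick the most under-exposed one
        last = max(positives, key=lambda m: unfair[m])
    else:
        # No fairness deficit: fall back to the next-highest UCB index
        last = max(rest, key=lambda m: ucb[m])
    chosen.append(last)
    # Play the K selected arms and update their statistics incrementally
    for m in chosen:
        r = rewards_fn(m)
        counts[m] += 1
        means[m] += (r - means[m]) / counts[m]
    return chosen
```

Calling this once per time step with `t` incremented each call reproduces the loop the abstract describes: K−1 arms by UCB, the last slot reserved for fairness when some arm's unfairness index is positive.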
-
3.
Publication No.: US20240419131A1
Publication Date: 2024-12-19
Application No.: US18368506
Filing Date: 2023-09-14
Inventor: Young Joo SUH , Dong Deok KIM
IPC: G05B13/02
Abstract: A method of controlling a Claus process based on reinforcement learning is proposed. The method includes receiving, by a controller, a state value of the Claus process; inputting, by the controller, the state value of the Claus process into a reinforcement learning model; and controlling, by the controller, the Claus process based on an action value output from the reinforcement learning model.
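The three steps of the abstract above (read state, query the learned policy, apply the action) form one control cycle, sketched below. The callables `controller_model`, `read_state`, and `apply_action` are hypothetical stand-ins; the patent does not specify interfaces, and the trained policy could be any function mapping process state to an action value.

```python
def control_step(controller_model, read_state, apply_action):
    """One control cycle for a process controller driven by a learned policy.

    controller_model: maps a state value to an action value (assumed callable)
    read_state:       returns the current state value of the process
    apply_action:     actuates the process with the chosen action value
    """
    state = read_state()              # 1. receive the process state value
    action = controller_model(state)  # 2. feed the state into the RL model
    apply_action(action)              # 3. control the process with its output
    return action
```

In deployment this function would run on a fixed control interval, with `controller_model` being the frozen policy obtained from reinforcement-learning training.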
-