Adversarial Reinforcement Learning for Procedural Content Generation and Improved Generalization

    Publication number: US20240017175A1

    Publication date: 2024-01-18

    Application number: US18474863

    Filing date: 2023-09-26

    CPC classification number: A63F13/67 G06N3/08 A63F13/56 G06N3/045

    Abstract: Methods, apparatus and systems are provided for training a first reinforcement-learning (RL) agent and a second RL agent coupled to a computer game environment using RL techniques. The first RL agent iteratively generates a sub-goal sequence in relation to an overall goal within the computer game environment, generating a new sub-goal for the sequence after the second RL agent, interacting with the computer game environment, successfully achieves the current sub-goal in the sequence. The second RL agent iteratively interacts with the computer game environment to achieve the current sub-goal, where each iterative interaction is an attempt by the second RL agent to interact with the computer game environment to achieve that sub-goal. The first RL agent is updated using a first reward issued when the second RL agent successfully achieves the current sub-goal. The second RL agent is updated using a second reward issued by the computer game environment based on the second RL agent's performance in attempting to achieve the current sub-goal. Once validly trained, the first RL agent forms a final first RL agent for automatic procedural content generation (PCG) in the computer game environment, and the second RL agent forms a final second RL agent for automatically interacting with a PCG computer game environment.
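
    To make the training loop in the abstract concrete, the sketch below pairs a sub-goal-generating first agent with a sub-goal-solving second agent. It is a minimal sketch under stated assumptions: the GeneratorAgent and SolverAgent classes, their methods, the one-dimensional "environment", and all reward values are hypothetical illustrations, not the patent's implementation.

```python
import random


class GeneratorAgent:
    """First RL agent: proposes the next sub-goal toward the overall goal."""

    def propose_subgoal(self, state):
        # Hypothetical heuristic: place the next sub-goal a short, random
        # distance ahead of the Solver's current position.
        return state + random.randint(1, 3)

    def update(self, reward):
        # An RL update (e.g., a policy-gradient step) would go here.
        pass


class SolverAgent:
    """Second RL agent: interacts with the environment to reach each sub-goal."""

    def act(self, state, subgoal):
        # Hypothetical action: step one unit toward the sub-goal.
        return 1 if subgoal > state else -1

    def update(self, reward):
        # An RL update would go here.
        pass


def train(overall_goal=10, max_attempts=5):
    generator, solver = GeneratorAgent(), SolverAgent()
    state = 0
    while state < overall_goal:
        # First agent generates a new sub-goal for the sequence.
        subgoal = generator.propose_subgoal(state)
        for _ in range(max_attempts):
            state += solver.act(state, subgoal)
            # Second reward: issued by the environment based on the
            # Solver's performance on this attempt.
            env_reward = 1.0 if state >= subgoal else -0.1
            solver.update(env_reward)
            if state >= subgoal:
                # First reward: issued when the Solver achieves the sub-goal.
                generator.update(1.0)
                break
    return generator, solver


if __name__ == "__main__":
    train()
```

    In a real system, the update methods would apply an actual RL algorithm and the Generator's proposals would shape the game environment itself, which is what makes the trained first agent usable for PCG.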

    Adversarial Reinforcement Learning for Procedural Content Generation and Improved Generalization

    Publication number: US20220266145A1

    Publication date: 2022-08-25

    Application number: US17477732

    Filing date: 2021-09-17

    Abstract: Methods, apparatus and systems are provided for training a first reinforcement-learning (RL) agent and a second RL agent coupled to a computer game environment using RL techniques. The first RL agent iteratively generates a sub-goal sequence in relation to an overall goal within the computer game environment, generating a new sub-goal for the sequence after the second RL agent, interacting with the computer game environment, successfully achieves the current sub-goal in the sequence. The second RL agent iteratively interacts with the computer game environment to achieve the current sub-goal, where each iterative interaction is an attempt by the second RL agent to interact with the computer game environment to achieve that sub-goal. The first RL agent is updated using a first reward issued when the second RL agent successfully achieves the current sub-goal. The second RL agent is updated using a second reward issued by the computer game environment based on the second RL agent's performance in attempting to achieve the current sub-goal. Once validly trained, the first RL agent forms a final first RL agent for automatic procedural content generation (PCG) in the computer game environment, and the second RL agent forms a final second RL agent for automatically interacting with a PCG computer game environment.

    Adversarial reinforcement learning for procedural content generation and improved generalization

    Publication number: US12157063B2

    Publication date: 2024-12-03

    Application number: US18474863

    Filing date: 2023-09-26

    Abstract: Methods, apparatus and systems are provided for training a first reinforcement-learning (RL) agent and a second RL agent coupled to a computer game environment using RL techniques. The first RL agent iteratively generates a sub-goal sequence in relation to an overall goal within the computer game environment, generating a new sub-goal for the sequence after the second RL agent, interacting with the computer game environment, successfully achieves the current sub-goal in the sequence. The second RL agent iteratively interacts with the computer game environment to achieve the current sub-goal, where each iterative interaction is an attempt by the second RL agent to interact with the computer game environment to achieve that sub-goal. The first RL agent is updated using a first reward issued when the second RL agent successfully achieves the current sub-goal. The second RL agent is updated using a second reward issued by the computer game environment based on the second RL agent's performance in attempting to achieve the current sub-goal. Once validly trained, the first RL agent forms a final first RL agent for automatic procedural content generation (PCG) in the computer game environment, and the second RL agent forms a final second RL agent for automatically interacting with a PCG computer game environment.

    Adversarial reinforcement learning for procedural content generation and improved generalization

    Publication number: US11883746B2

    Publication date: 2024-01-30

    Application number: US17477732

    Filing date: 2021-09-17

    CPC classification number: A63F13/67 A63F13/56 G06N3/045 G06N3/08

    Abstract: Methods, apparatus and systems are provided for training a first reinforcement-learning (RL) agent and a second RL agent coupled to a computer game environment using RL techniques. The first RL agent iteratively generates a sub-goal sequence in relation to an overall goal within the computer game environment, generating a new sub-goal for the sequence after the second RL agent, interacting with the computer game environment, successfully achieves the current sub-goal in the sequence. The second RL agent iteratively interacts with the computer game environment to achieve the current sub-goal, where each iterative interaction is an attempt by the second RL agent to interact with the computer game environment to achieve that sub-goal. The first RL agent is updated using a first reward issued when the second RL agent successfully achieves the current sub-goal. The second RL agent is updated using a second reward issued by the computer game environment based on the second RL agent's performance in attempting to achieve the current sub-goal. Once validly trained, the first RL agent forms a final first RL agent for automatic procedural content generation (PCG) in the computer game environment, and the second RL agent forms a final second RL agent for automatically interacting with a PCG computer game environment.

    GLITCH DETECTION SYSTEM
    Invention application

    Publication number: US20210366183A1

    Publication date: 2021-11-25

    Application number: US17017585

    Filing date: 2020-09-10

    Abstract: The present disclosure provides a system for automating graphical testing during video game development. The system can use Deep Convolutional Neural Networks (DCNNs) to create a model that detects graphical glitches in video games. The system takes an image, such as a video game frame, as input and classifies it into one of a defined number of classifications. The classifications can include a normal image and a plurality of different kinds of glitches. In some embodiments, the glitches can include corrupted textures (including low-resolution and stretched textures), missing textures, and placeholder textures. The system can apply a confidence measure to the analysis to help reduce the number of false positives.
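
    As a rough illustration of the classification-plus-confidence idea, the sketch below assumes PyTorch and a toy network: a frame is classified as normal or as one of several glitch types, and any prediction below a confidence threshold is discarded. The architecture, class list, and 0.9 threshold are assumptions for illustration; the patent's actual model and values are not specified here.

```python
import torch
import torch.nn as nn

# Hypothetical class list loosely following the glitch types the abstract names.
CLASSES = ["normal", "corrupted_texture", "missing_texture", "placeholder_texture"]


class GlitchClassifier(nn.Module):
    """Toy DCNN: convolutional feature extractor plus a linear classification head."""

    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, frame):
        x = self.features(frame).flatten(1)
        return self.head(x)


def classify_frame(model, frame, threshold=0.9):
    """Return the predicted label, or None when confidence is below threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
        conf, idx = probs.max(dim=1)
    if conf.item() < threshold:
        # Too uncertain: leave the frame unflagged to reduce false positives.
        return None
    return CLASSES[idx.item()]


# Usage with a random stand-in for a captured game frame (1 x 3 x 224 x 224).
model = GlitchClassifier().eval()
print(classify_frame(model, torch.rand(1, 3, 224, 224)))
```

    The confidence gate trades recall for precision: uncertain frames are left unflagged rather than risking a spurious report, which matches the abstract's stated goal of reducing false positives.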
