Partner Approximating Learners (PAL): Simulation-Accelerated Learning with Explicit Partner Modeling in Multi-Agent Domains
- 2019
- Open Access
- DOI: https://doi.org/10.48550/arxiv.1909.03868
Mixed cooperative-competitive control scenarios, such as human-machine interaction where each interacting partner has individual goals, are very challenging for reinforcement learning agents. In order to contribute towards intuitive human-machine collaboration, we focus on problems in the continuous state and control domain where no explicit communication is considered and the agents do not know each other's goals or control laws, but only sense the other's control inputs retrospectively. Our proposed framework combines a partner model learned from online data with a reinforcement learning agent that is trained in a simulated environment which includes the partner model. Thus, we overcome drawbacks of independent learners and, in addition, benefit from a reduced amount of real-world data required for reinforcement learning, which is vital in the human-machine context. We finally analyze an example that demonstrates the merits of our proposed framework: it learns fast due to the simulated environment and adapts to the continuously changing partner due to the partner approximation.
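The loop sketched in the abstract can be illustrated in code. The following is a minimal, hypothetical sketch (all names, dynamics, and gains are illustrative assumptions, not taken from the paper): the ego agent retrospectively senses the partner's control inputs, fits a partner model from that online data, and then evaluates its own control law in a simulated environment that embeds the learned partner model, so that training rollouts need no real-world interaction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "true" partner control law u_partner = K_true @ x
# (assumed linear here purely for illustration).
K_true = np.array([[0.5, -0.2]])

# 1) Online data: observed states and retrospectively sensed
#    partner control inputs (with small sensing noise).
X = rng.normal(size=(200, 2))
U = X @ K_true.T + 0.01 * rng.normal(size=(200, 1))

# 2) Fit a linear partner model from the online data via least squares.
K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)
K_hat = K_hat.T

# 3) Simulated environment embedding the partner model: the ego agent
#    can generate cheap rollouts here instead of real-world episodes.
def simulate(x0, ego_gain, steps=50):
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed linear dynamics
    B_ego = np.array([[0.0], [0.1]])         # ego input matrix
    B_par = np.array([[0.0], [0.05]])        # partner input matrix
    x = x0.copy()
    cost = 0.0
    for _ in range(steps):
        u_ego = ego_gain @ x                 # ego control law
        u_par = K_hat @ x                    # learned partner model
        x = A @ x + B_ego @ u_ego + B_par @ u_par
        cost += float(x @ x)                 # quadratic state cost
    return cost

cost = simulate(np.array([1.0, 0.0]), ego_gain=np.array([[-0.4, -0.8]]))
```

In the paper's setting the partner model would be refitted continuously as new data arrives, which is what lets the ego agent adapt to a changing partner; here a single least-squares fit stands in for that online update.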
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/1909.03868
- PDF: https://arxiv.org/pdf/1909.03868
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4288112366