Multi-View Reinforcement Learning
Related Concepts
Reinforcement learning
Computer science
Variety (cybernetics)
Markov decision process
Sample complexity
Artificial intelligence
Partially observable Markov decision process
Sample (material)
Observable
Markov process
Transfer of learning
Machine learning
Markov chain
Markov model
Mathematics
Statistics
Chemistry
Quantum mechanics
Physics
Chromatography
Minne Li, Lisheng Wu, Jun Wang, Haitham Bou Ammar · 2019 · Open Access
DOI: https://doi.org/10.48550/arxiv.1910.08285 · OpenAlex: W2970162659
Abstract
This paper is concerned with multi-view reinforcement learning (MVRL), which enables decision making when agents share common dynamics but adhere to different observation models. We define the MVRL framework by extending partially observable Markov decision processes (POMDPs) to support more than one observation model, and we propose two solution methods based on observation augmentation and cross-view policy transfer. We empirically evaluate these methods and demonstrate their effectiveness in a variety of environments; in particular, we show reductions in sample complexity and computational time when acquiring policies that handle multi-view environments.
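To make the observation-augmentation idea concrete, here is a minimal sketch on a toy tabular problem: two views share the same latent dynamics and rewards but emit different observations, and a single value function indexed by the augmented observation (view, observation) is learned across both. The toy environment, the deterministic per-view relabeling, and the Q-learning setup are illustrative assumptions for this sketch, not the authors' implementation.

```python
# A minimal sketch of observation augmentation for multi-view RL on a toy
# tabular problem. Everything here (environment, view models, Q-learning)
# is an illustrative assumption, not the paper's actual method or code.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, N_VIEWS = 5, 2, 2

# Shared latent dynamics and rewards: identical across all views.
P = rng.integers(0, N_STATES, size=(N_STATES, N_ACTIONS))  # next-state table
R = rng.random((N_STATES, N_ACTIONS))                      # reward table

# Per-view observation models: each view relabels the same latent state
# with its own observation symbol (a deterministic permutation per view).
obs_model = [rng.permutation(N_STATES) for _ in range(N_VIEWS)]

def observe(state, view):
    return obs_model[view][state]

# Observation augmentation in the tabular case: index the value function by
# the pair (view, observation), so one learner covers all views instead of
# training a separate policy per view.
Q = np.zeros((N_VIEWS, N_STATES, N_ACTIONS))

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    view = rng.integers(N_VIEWS)   # one view is active for the whole episode
    s = rng.integers(N_STATES)
    for t in range(20):
        o = observe(s, view)
        # Epsilon-greedy action selection on the augmented observation.
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[view, o].argmax())
        s2 = P[s, a]
        o2 = observe(s2, view)
        # Standard Q-learning update; dynamics and rewards are shared, so
        # experience from either view improves the same table of values.
        target = R[s, a] + gamma * Q[view, o2].max()
        Q[view, o, a] += alpha * (target - Q[view, o, a])
        s = s2

print("Greedy action per (view, observation):")
print(Q.argmax(axis=-1))
```

Because the latent dynamics are shared, every episode, from whichever view, updates the same learner; this is the intuition behind the sample-complexity savings the abstract reports, though the paper's actual algorithms operate on richer POMDP models than this tabular toy.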