Generalized Mean Estimation in Monte-Carlo Tree Search
Tuan Dam, Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2397-2404.
https://doi.org/10.24963/ijcai.2020/332
We consider Monte-Carlo Tree Search (MCTS) applied to Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs), and the well-known Upper Confidence bound for Trees (UCT) algorithm. In UCT, a tree with nodes (states) and edges (actions) is incrementally built by the expansion of nodes, and the values of nodes are updated through a backup strategy based on the average value of child nodes. However, it has been shown that, with enough samples, the maximum operator yields more accurate node value estimates than averaging. Instead of settling for one of these two value estimates, we go a step further and propose a novel backup strategy based on the power mean operator, which computes a value between the average and the maximum. We call our new approach Power-UCT and argue how the power mean operator helps to speed up learning in MCTS. We theoretically analyze our method, providing guarantees of convergence to the optimum. Finally, we empirically demonstrate the effectiveness of our method on well-known MDP and POMDP benchmarks, showing significant improvements in performance and convergence speed with respect to state-of-the-art algorithms.
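For illustration, here is a minimal sketch (not the authors' implementation) of a weighted power mean used as a backup operator over child node values. The child values, visit counts, and exponent p below are hypothetical; p = 1 recovers the standard average backup, and larger p moves the estimate toward the maximum.

```python
import numpy as np

def power_mean(values, weights, p):
    """Weighted power mean of nonnegative values.

    Reduces to the weighted average for p = 1 and approaches the
    maximum as p grows toward infinity.
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize visit counts into weights
    return (w * x**p).sum() ** (1.0 / p)

# Hypothetical child value estimates (e.g., in [0, 1]) and visit counts.
child_values = [0.2, 0.5, 0.9]
child_visits = [10, 25, 5]

avg_backup = power_mean(child_values, child_visits, p=1.0)   # average (UCT-style) backup
power_backup = power_mean(child_values, child_visits, p=4.0) # between average and maximum
print(avg_backup, power_backup)
```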
Keywords:
Machine Learning: Reinforcement Learning
Uncertainty in AI: Markov Decision Processes
Uncertainty in AI: Sequential Decision Making