Publications

Conference (International)

Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms

Akifumi Wachi, Wataru Hashimoto (Osaka University), Xun Shen (Osaka University), Kazumune Hashimoto (Osaka University)

The 37th Conference on Neural Information Processing Systems (NeurIPS 2023)

2023.12.13

Safe exploration is essential for the practical use of reinforcement learning (RL) in many real-world scenarios. In this paper, we present a generalized safe exploration (GSE) problem as a unified formulation of common safe exploration problems. We then propose a solution to the GSE problem in the form of a meta-algorithm for safe exploration, MASE, which combines an unconstrained RL algorithm with an uncertainty quantifier to guarantee safety in the current episode while properly penalizing unsafe explorations before an actual safety violation occurs, so as to discourage them in future episodes. The advantage of MASE is that we can optimize a policy while guaranteeing, with high probability, that no safety constraint will be violated under proper assumptions. Specifically, we present two variants of MASE with different constructions of the uncertainty quantifier: one based on generalized linear models with theoretical guarantees of safety and near-optimality, and another that combines a Gaussian process to ensure safety with a deep RL algorithm to maximize the reward. Finally, we demonstrate that our proposed algorithm achieves better performance than state-of-the-art algorithms on grid-world and Safety Gym benchmarks without violating any safety constraints, even during training.
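To make the mechanism the abstract describes more concrete, below is a minimal Python sketch of a MASE-style episode loop: an unconstrained RL agent is wrapped with an uncertainty quantifier that halts the episode before a predicted constraint violation and applies a penalty so the agent learns to avoid that region in later episodes. All interface names here (`mase_episode`, `agent.act`, `quantifier.upper_bound`, and so on) are illustrative assumptions for this sketch, not the paper's actual API.

```python
def mase_episode(env, agent, quantifier, horizon, emergency_penalty=-100.0):
    """One episode of a MASE-style safe-exploration loop (illustrative sketch).

    Assumed interfaces (not the paper's actual API):
      agent.act(s) -> a;  agent.update(s, a, r, s_next, done)
      env.reset() -> s;   env.step(a) -> (s_next, reward, safety_cost, done)
      quantifier.upper_bound(s, a) -> high-probability bound on safety cost
      quantifier.observe(s, a, cost)  # refine the safety model online
    """
    state = env.reset()
    for _ in range(horizon):
        action = agent.act(state)

        # Pessimistic bound on the safety cost of (state, action), e.g. from a
        # generalized linear model or Gaussian process, as in the two variants
        # mentioned in the abstract.
        if quantifier.upper_bound(state, action) > quantifier.threshold:
            # Emergency stop: terminate *before* any actual violation, and
            # penalize the attempted action so the agent avoids this region
            # in future episodes.
            agent.update(state, action, emergency_penalty, state, True)
            return

        next_state, reward, cost, done = env.step(action)
        quantifier.observe(state, action, cost)
        agent.update(state, action, reward, next_state, done)
        state = next_state
        if done:
            return
```

The key design point, per the abstract, is that the penalty is applied before any actual violation: the quantifier's high-probability bound triggers the stop, so the policy can be optimized while no constraint is violated even during training.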

Paper: Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms (external site)