Speaker: Prof. Chengchun Shi (London School of Economics and Political Science)
Time: September 2, 2023, 10:00–11:00  Venue: N602
[Abstract] We consider offline reinforcement learning (RL) methods in possibly nonstationary environments. Many existing RL algorithms in the literature rely on the stationarity assumption that requires the system transition and the reward function to be constant over time. However, the stationarity assumption is restrictive in practice and is likely to be violated in a number of applications, including traffic signal control, robotics, and mobile health. In this paper, we develop a consistent procedure to test the nonstationarity of the optimal policy based on pre-collected historical data, without additional online data collection. Based on the proposed test, we further develop a sequential change point detection method that can be naturally coupled with existing state-of-the-art RL methods for policy optimization in nonstationary environments. The usefulness of our method is illustrated by theoretical results, simulation studies, and a real data example from the 2018 Intern Health Study.
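To give a flavor of sequential change point detection in this setting, the sketch below runs a CUSUM-style detector over a one-dimensional reward stream. This is only a generic illustration under simplifying assumptions: the talk's procedure tests nonstationarity of the optimal policy itself from offline trajectories, whereas here we merely flag a shift in mean reward; `threshold` and `drift` are hypothetical tuning parameters, not quantities from the paper.

```python
import numpy as np

def detect_change_point(rewards, threshold=5.0, drift=0.5):
    """CUSUM-style sequential detector for a shift in mean reward.

    A minimal generic sketch, not the method from the talk: the paper
    tests nonstationarity of the optimal policy; here we only monitor
    the reward stream's mean. `threshold` and `drift` are hypothetical
    tuning parameters chosen for illustration.
    """
    # Baseline mean estimated from an early "burn-in" slice of the data.
    mean0 = np.mean(rewards[: len(rewards) // 4])
    s_pos, s_neg = 0.0, 0.0  # cumulative evidence of upward / downward shift
    for t, r in enumerate(rewards):
        s_pos = max(0.0, s_pos + (r - mean0 - drift))
        s_neg = max(0.0, s_neg + (mean0 - r - drift))
        if s_pos > threshold or s_neg > threshold:
            return t  # first time the accumulated evidence crosses the bar
    return None  # no change detected

# Deterministic toy stream: mean jumps from 0 to 3 at index 100.
stream = np.concatenate([np.zeros(100), np.full(100, 3.0)])
cp = detect_change_point(stream)
print(cp)  # detection occurs shortly after the true change at t=100
```

Once a change point is flagged, only the post-change data would be passed to a downstream policy optimization routine, which is the coupling the abstract alludes to.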
[Biography] Chengchun Shi is an Associate Professor at the London School of Economics and Political Science. He serves as an Associate Editor of JRSS-B, JASA (Theory & Methods), and the Journal of Nonparametric Statistics. His research focuses on developing statistical learning methods in reinforcement learning, with applications to healthcare, ridesharing, video-sharing, and neuroimaging. He received the Royal Statistical Society Research Prize in 2021 and has received IMS Travel Awards three times.