HyperAgent - A Simple, Efficient and Scalable RL Framework for Complex Environments

Abstract

Under resource constraints, reinforcement learning (RL) agents must be simple, efficient, and scalable with respect to (1) large state spaces and (2) the growing volume of interaction data accumulated when deployed in complex environments. We propose HyperAgent, an RL framework built on a hypermodel, index sampling schemes, and an incremental update mechanism, which together enable computation-efficient sequential posterior approximation and data-efficient action selection under general value function approximation, beyond conjugacy. HyperAgent is simple to implement: it adds only one module and one line of code on top of DDQN. Practically, HyperAgent demonstrates robust performance on large-scale deep RL benchmarks with significant efficiency gains in both data and computation. Theoretically, among practically scalable algorithms, HyperAgent is the first to achieve provably scalable per-step computational complexity together with sublinear regret in tabular RL. The core of our theoretical analysis is a sequential posterior approximation argument, made possible by the first analytical tool for sequential random projection, a non-trivial martingale extension of the Johnson-Lindenstrauss lemma, which is of independent interest. This work bridges the theoretical and practical realms of RL, establishing a new benchmark for RL algorithm design.
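
The claim that HyperAgent needs only one extra module (the hypermodel) and one extra line of code (index sampling) on top of DDQN can be illustrated with a minimal PyTorch sketch. Everything below (class names, shapes, and the linear-in-index parameterization) is a hypothetical reconstruction for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class HyperModel(nn.Module):
    """Hypothetical hypermodel head: maps state features and a random
    index z to Q-values, so that sampling z approximates sampling a
    value function from the (approximate) posterior."""

    def __init__(self, feature_dim: int, num_actions: int, index_dim: int = 8):
        super().__init__()
        self.index_dim = index_dim
        # Linear-in-index head: one simple way to condition Q-values on z.
        self.head = nn.Linear(feature_dim * index_dim, num_actions)

    def forward(self, features: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Outer product of features (B, F) and index (B, I), flattened to (B, F*I).
        joint = torch.einsum("bf,bi->bfi", features, z).flatten(start_dim=1)
        return self.head(joint)

def select_action(torso: nn.Module, hyper: HyperModel, obs: torch.Tensor) -> torch.Tensor:
    """Index sampling: draw a random index, then act greedily under the
    sampled Q-function (a Thompson-sampling-style exploration rule)."""
    z = torch.randn(obs.shape[0], hyper.index_dim)  # the extra "one line": sample an index
    return hyper(torso(obs), z).argmax(dim=-1)

# Toy usage; all shapes here are illustrative assumptions.
torso = nn.Sequential(nn.Linear(4, 32), nn.ReLU())  # stand-in for a DDQN torso
hyper = HyperModel(feature_dim=32, num_actions=2)
obs = torch.randn(5, 4)
actions = select_action(torso, hyper, obs)  # shape: (5,)
```

For background on the analytical tool mentioned in the abstract, the classical (non-sequential) Johnson-Lindenstrauss lemma, which the paper extends to a martingale setting, can be stated as follows:

```latex
% Classical Johnson-Lindenstrauss lemma, stated for reference only;
% HyperAgent's analysis requires a sequential, martingale extension of it.
For any $0 < \varepsilon < 1$ and any set $X$ of $n$ points in $\mathbb{R}^d$,
there exists a linear map $f : \mathbb{R}^d \to \mathbb{R}^k$ with
$k = O(\varepsilon^{-2} \log n)$ such that for all $u, v \in X$,
\[
  (1-\varepsilon)\,\|u-v\|_2^2 \;\le\; \|f(u)-f(v)\|_2^2 \;\le\; (1+\varepsilon)\,\|u-v\|_2^2 .
\]
```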

Date
Jan 13, 2024 1:20 PM
Event
Contributed talk at the Third Doctoral and Postdoctoral Daoyuan Academic Forum. Best Paper Award.
Location
Daoyuan Building, Shenzhen, China
Yingru LI
Ph.D. Candidate

My interests include sequential decision-making, optimization, and applied probability, with applications in AI & OR.