Hindsight-Combined and Hindsight-Prioritized Experience Replay
Reinforcement learning has proven to be of great utility; training, however, can be costly due to sample inefficiency. An effective remedy is experience replay, which reuses past experiences. Several experience replay techniques, namely combined experience replay, hindsight experience replay, and prioritized experience replay, have been developed, yet their relative merits remain unclear. In this study, we propose hybrid algorithms, hindsight-combined and hindsight-prioritized experience replay, and evaluate their performance against published baselines. Experimental results demonstrate the superior performance of hindsight-combined experience replay on an OpenAI Gym benchmark. Further, insight into the nonconvergence of hindsight-prioritized experience replay is presented toward the improvement of the approach.
R. Tan, K. Ikeda, and J. Vergara (2020). Hindsight-Combined and Hindsight-Prioritized Experience Replay. In Lecture Notes in Computer Science: Neural Information Processing (pp. 429-439). Springer Nature.
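To illustrate the core idea behind the hindsight-combined hybrid, the following is a minimal toy sketch, not the paper's implementation: hindsight relabeling stores an extra copy of each transition with the goal replaced by the achieved state (a simple "final"-style strategy, assumed here), while combined replay guarantees that every sampled batch contains the most recently added transition. The class name, the sparse 0/-1 reward convention, and all parameters are illustrative assumptions.

```python
import random
from collections import deque


class HindsightCombinedBuffer:
    """Toy replay buffer sketching hindsight-combined experience replay.

    - Hindsight relabeling: each transition is also stored with its goal
      replaced by the achieved next state, turning a failed episode step
      into a successful one under the relabeled goal.
    - Combined replay: the newest transition is always included in the
      sampled batch.
    """

    def __init__(self, capacity=10000, seed=0):
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)
        self.latest = None  # most recently added (original) transition

    def add(self, state, action, next_state, goal):
        # Sparse reward convention assumed: 0 if the goal is reached, else -1.
        reward = 0.0 if next_state == goal else -1.0
        original = (state, action, reward, next_state, goal)
        # Hindsight relabel: pretend the achieved state was the goal,
        # so the transition carries an informative (successful) reward.
        relabeled = (state, action, 0.0, next_state, next_state)
        self.buffer.append(original)
        self.buffer.append(relabeled)
        self.latest = original

    def sample(self, batch_size):
        # Combined replay: reserve one slot for the latest transition.
        k = min(batch_size - 1, len(self.buffer))
        batch = self.rng.sample(list(self.buffer), k)
        batch.append(self.latest)
        return batch
```

A hindsight-prioritized variant would replace the uniform `rng.sample` draw with sampling proportional to a priority such as the TD error, which is where the nonconvergence analyzed in the paper arises.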