Survey of Deep Q-Network Variants in PyGame Learning Environment
ICDLT '18: 2018 2nd International Conference on Deep Learning Technologies
Ēvalds Urtāns, Agris Ņikitenko

Q-value function models based on variations of the Deep Q-Network (DQN) have shown good results in many virtual environments. In this paper, more than 30 sub-algorithms that influence the performance of DQN variants are surveyed. Important stability and repeatability aspects of state-of-the-art Deep Reinforcement Learning algorithms were identified. Multi Deep Q-Network (MDQN), a generalization of the popular Double Deep Q-Network (DDQN) algorithm, was developed. Visual representations of the learning process as Q-value maps were produced using the PyGame Learning Environment.
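To make the DDQN generalization concrete, the sketch below contrasts the standard DQN target with the Double DQN target on toy Q-value tables. This is a minimal NumPy illustration of the well-known target formulas, not the paper's MDQN implementation; the array names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-value tables for a batch of 4 next-states and 3 actions,
# produced by two separate networks (illustrative random values).
q_online = rng.normal(size=(4, 3))  # online (selection) network
q_target = rng.normal(size=(4, 3))  # target (evaluation) network
rewards = np.ones(4)
gamma = 0.99

# Standard DQN target: one network both selects and evaluates
# the greedy action, which tends to overestimate Q-values.
dqn_target = rewards + gamma * q_target.max(axis=1)

# Double DQN target: the online network selects the action,
# the target network evaluates it, decoupling the two roles.
sel = q_online.argmax(axis=1)
ddqn_target = rewards + gamma * q_target[np.arange(4), sel]
```

Because the evaluated action need not be the target network's own argmax, the Double DQN target is never larger than the DQN target for the same tables; MDQN (as described in the abstract) generalizes this decoupling beyond two networks.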


Keywords
Deep Reinforcement Learning; Deep Learning; DQN; DDQN; MDQN
DOI
10.1145/3234804.3234816
Hyperlink
https://dl.acm.org/citation.cfm?doid=3234804.3234816

Urtāns, Ē., Ņikitenko, A. Survey of Deep Q-Network Variants in PyGame Learning Environment. In: ICDLT '18: 2018 2nd International Conference on Deep Learning Technologies, Chongqing, China, 27-29 June 2018. New York: ACM, 2018, pp. 27-36. ISBN 978-1-4503-6473-7. Available from: doi:10.1145/3234804.3234816

Publication language
English (en)
The Scientific Library of the Riga Technical University.
E-mail: uzzinas@rtu.lv; Phone: +371 28399196