Scientific Activity Support System

Publication: Applying Q-Learning to Non-Markovian Environments

Publication type: Other publications in conference proceedings (including local conferences)
Funding linked to core activity: Not known
Publication language: English (en)
Title in original language: Applying Q-Learning to Non-Markovian Environments
Research field: 1. Natural sciences
Research subfield: 1.2. Computer science and informatics
Authors: Jurijs Čižovs, Arkādijs Borisovs
Keywords: Reinforcement learning, non-Markovian deterministic environments, intelligent agents, agent control
Abstract: This paper considers the problem of intelligent agent functioning in non-Markovian environments. We propose dividing the problem into two subproblems: detecting the non-Markovian states in the environment, and building the agent's internal representation of the original environment. The internal representation is free of non-Markovian states because a sufficient number of additional, dynamically created states and transitions is provided. The obtained environment can then be used with classical reinforcement learning algorithms (such as SARSA(λ)), whose convergence is guaranteed by the Bellman equation. A major difficulty is recognizing different “copies” of the same state. The paper contains a theoretical introduction, a description of the ideas and the problem, and, finally, an illustration of the results and conclusions.
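For context, the tabular Q-learning update referred to in the title is Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). The following is a minimal Python sketch of that generic update on a toy deterministic chain; the environment, hyperparameters, and names are illustrative assumptions and are not taken from the paper, which additionally augments the state space to remove non-Markovian states before applying such algorithms.

import random
from collections import defaultdict

random.seed(0)  # reproducible toy run

# Toy deterministic chain: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 yields reward 1 and ends the episode. This environment
# is an illustrative assumption, not the benchmark used in the paper.
GOAL = 4

def step(state, action):
    """Deterministic transition; returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Standard tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy policy with random tie-breaking."""
    if random.random() < EPSILON or Q[(state, 0)] == Q[(state, 1)]:
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Bellman backup toward the greedy value of the successor state.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# Learned values for moving right should approach gamma^(GOAL - s - 1).
print({s: round(Q[(s, 1)], 3) for s in range(GOAL)})

In the authors' approach, such a table would be learned over the augmented internal representation (with dynamically created state copies) rather than over the raw non-Markovian states.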
Reference: Čižovs, J., Borisovs, A. Applying Q-Learning to Non-Markovian Environments. In: Proceedings of the First International Conference on Agents and Artificial Intelligence (ICAART 2009), Porto, Portugal, 19-21 January 2009. Porto: Institute for Systems and Technologies of Information, Control and Communication, 2009, pp. 306-311. ISBN 9783642118197.
ID: 4691