Markov Decision Process in the Problem of Dynamic Pricing Policy
Automatic Control and Computer Sciences 2011
Jurijs Čižovs, Arkādijs Borisovs

Markov decision processes (MDPs) are widely used in problems whose solutions can be represented as a sequence of actions. Many papers demonstrate successful MDP use in model problems, robotic control, planning, and similar domains. Economic problems likewise have the property of multistep motion towards a goal. This paper is dedicated to the application of MDPs to the problem of pricing policy management. The problem of dynamic pricing is stated in terms of an MDP. Particular attention is paid to the method of constructing an MDP model based on data mining. Using sales data from an actual industrial plant, the construction of an MDP model, including the search for and generalization of regularities, is demonstrated.
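To make the idea of stating dynamic pricing in MDP terms concrete, the following is a minimal sketch, not the model from the paper: it assumes toy states (remaining inventory), a small set of candidate prices as actions, and an invented demand curve, and solves the resulting MDP with standard value iteration. All names and numbers are illustrative assumptions.

```python
# Hypothetical dynamic-pricing MDP solved by value iteration.
# States, prices, demand model, and constants are illustrative assumptions only.
import numpy as np

n_inventory = 11            # states: 0..10 units left in stock (assumed)
prices = [5.0, 8.0, 12.0]   # actions: candidate prices (assumed)
gamma = 0.95                # discount factor

def sale_prob(price):
    # Assumed demand curve: a higher price lowers the chance of selling one unit.
    return max(0.0, 1.0 - price / 15.0)

V = np.zeros(n_inventory)
for _ in range(500):                      # value iteration sweeps
    V_new = np.zeros_like(V)
    for s in range(1, n_inventory):       # s = units in stock; V[0] stays 0
        q_values = []
        for p in prices:
            ps = sale_prob(p)
            # With probability ps a unit is sold (reward p, move to s-1);
            # otherwise the state is unchanged and no reward is earned.
            q = ps * (p + gamma * V[s - 1]) + (1 - ps) * gamma * V[s]
            q_values.append(q)
        V_new[s] = max(q_values)
    V = V_new

# Greedy pricing policy induced by the converged value function.
policy = {}
for s in range(1, n_inventory):
    q = [sale_prob(p) * (p + gamma * V[s - 1]) + (1 - sale_prob(p)) * gamma * V[s]
         for p in prices]
    policy[s] = prices[int(np.argmax(q))]
print(policy)
```

In the paper itself the transition and reward structure is not hand-specified as above but is extracted from historical sales data by data mining; the sketch only illustrates the MDP formulation and solution step.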


Keywords
Markov decision process, dynamic pricing policy, MDP model construction
DOI
10.3103/S0146411611060058
Hyperlink
http://link.springer.com/article/10.3103%2FS0146411611060058

Čižovs, J., Borisovs, A. Markov Decision Process in the Problem of Dynamic Pricing Policy. Automatic Control and Computer Sciences, 2011, Vol. 45, Iss. 6, pp. 77-90. ISSN 0146-4116. e-ISSN 1558-108X. Available from: doi:10.3103/S0146411611060058

Publication language
English (en)