The goal of the doctoral thesis “Development and study of a controlled Markov decision model of a dynamic system by means of data mining techniques” is to develop a decision-making framework based on the Markov Decision Process (MDP) for dynamic systems whose data are represented as time series. The MDP framework has been used successfully to find optimal management strategies in discrete stochastic processes that evolve over time, and a number of modifications and enhancements address tasks with continuous parameters, partially observable environments, and so on. However, the questions involved in building an MDP model from data represented as time series remain open for research. Extending the framework to work with time series makes it possible to exploit the standard MDP machinery for making decisions on economic problems in online mode.

The central subject of the Ph.D. thesis is an advanced decision-making method based on the Markov Decision Process. The maximum-likelihood technique, a statistical method for estimating unknown parameters, is used to construct the probabilistic model within this framework. Data mining techniques, including tools for data normalization, clustering and classification, are employed, together with methods of computational intelligence: Reinforcement Learning and Artificial Neural Networks. An agent-oriented architecture is used for the software systems under development. The practical application of the intelligent agent system based on the Markov Decision Process is demonstrated on the task of Dynamic Pricing Policy. The testing data are actual sales records from the real manufacturing and trade management system 1C:Enterprise v7.
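As a minimal illustration of the maximum-likelihood step mentioned above, the sketch below estimates MDP transition probabilities from a discretized time series by counting observed transitions and normalizing row-wise. The function name, the three price-level states and the toy series are hypothetical, introduced here only for illustration; they are not taken from the thesis.

```python
import numpy as np

def estimate_transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a Markov chain's transition
    matrix from an observed state sequence: count each transition
    i -> j, then normalize the counts in every row."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Guard against division by zero for states never seen as a source.
    row_sums[row_sums == 0] = 1
    return counts / row_sums

# Hypothetical example: a price-level time series discretized into
# three states (0 = low, 1 = medium, 2 = high).
series = [0, 0, 1, 2, 1, 0, 1, 1, 2, 2, 1, 0]
P = estimate_transition_matrix(series, 3)
```

In a full MDP model one such matrix would be estimated per action; the same counting scheme applies to each action's subset of the observed transitions.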
In the course of experiments based on real sales data, numerical evaluations of the MDP model's closeness to the actual evolution of the processes under investigation, as well as evaluations of the system's performance on the testing data set, were obtained. The paper presents a series of experiments with several subsystems (Artificial Neural Networks, Markov Decision Process) on toy problems. In addition, a series of experiments on the Dynamic Pricing Policy task was carried out in order to evaluate numerically the effectiveness of the improved MDP framework.

The doctoral thesis is written in the Latvian language. It includes an introduction, 6 chapters, a conclusion, a list of references and 4 appendices. The doctoral thesis contains 138 pages and is illustrated with 70 figures and 15 tables. The list of references includes 83 titles.