Dynamic Programming
Courier Corporation, Jan 1, 2003 - 340 pages

An introduction to the mathematical theory of multistage decision processes, this text takes a "functional equation" approach to the discovery of optimum policies. Written by a leading developer of such policies, it presents a series of methods, uniqueness and existence theorems, and examples for solving the relevant equations. The text examines existence and uniqueness theorems, the optimal inventory equation, bottleneck problems in multistage production processes, a new formalism in the calculus of variations, strategies behind multistage games, and Markovian decision processes. Each chapter concludes with a problem set that Eric V. Denardo of Yale University, in his informative new introduction, calls "a rich lode of applications and research topics." 1957 edition. 37 figures.
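The "functional equation" approach the description refers to can be illustrated with a toy example (not taken from the book): a resource-allocation problem solved through the Bellman recurrence f_n(x) = max over 0 ≤ y ≤ x of [g(y) + f_{n-1}(x - y)], with f_0(x) = 0. The stage reward g and the problem sizes below are hypothetical choices for illustration.

```python
import math

def solve(g, total, stages):
    """Tabulate the functional equation and recover one optimal allocation.

    f[n][x] = best achievable reward with x units and n stages remaining,
    computed bottom-up from the Bellman recurrence
        f_n(x) = max_{0 <= y <= x} [ g(y) + f_{n-1}(x - y) ],  f_0(x) = 0.
    """
    f = [[0] * (total + 1)]
    for n in range(1, stages + 1):
        row = []
        for x in range(total + 1):
            row.append(max(g(y) + f[n - 1][x - y] for y in range(x + 1)))
        f.append(row)
    # Recover an optimal policy by retracing the argmax at each stage.
    policy, x = [], total
    for n in range(stages, 0, -1):
        y = max(range(x + 1), key=lambda y: g(y) + f[n - 1][x - y])
        policy.append(y)
        x -= y
    return f[stages][total], policy

# A concave stage reward (diminishing returns) favors spreading the
# resource evenly across stages.
value, policy = solve(math.sqrt, total=9, stages=3)
print(value, policy)
```

With the square-root reward, splitting 9 units over 3 stages evenly (3 + 3 + 3) is optimal, since the reward is concave.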