6 Optimal Risk-Sensitive Feedback Control of Quantum Systems                74
6.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  74
6.2 Risk-Neutral Optimal Control . . . . . . . . . . . . . . . . . . . . .  76
6.3 Risk-Sensitive Optimal Control . . . . . . . . . . . . . . . . . . . .  77
6.4 Control of a Two Level Atom . . . . . . . . . . . . . . . . . . . . .   80
6.4.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   80
6.4.2 Information State . . . . . . . . . . . . . . . . . . . . . . . . .   80
6.4.3 Dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . .   81
6.4.4 Risk-Neutral Control . . . . . . . . . . . . . . . . . . . . . . .    82
6.5 Control of a Trapped Atom . . . . . . . . . . . . . . . . . . . . . .   83
6.5.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   83
6.5.2 Information State . . . . . . . . . . . . . . . . . . . . . . . . .   84
6.5.3 Optimal LEQG Control . . . . . . . . . . . . . . . . . . . . . . .    85
6.5.4 Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . .    85

1 Introduction

The purpose of these notes is to provide an overview of some aspects of optimal and robust control theory considered relevant to quantum control. The notes begin with classical deterministic optimal control, move through classical stochastic and robust control, and conclude with quantum feedback control.

Optimal control theory is a systematic approach to controller design whereby the desired performance objectives are encoded in a cost function, which is subsequently optimized to determine the desired controller. Robust control theory aims to enhance the robustness (the ability to withstand, to some extent, uncertainty, errors, etc.) of controller designs by explicitly including uncertainty models in the design process.

Some of the material is in continuous time, while other material is written in discrete time. There are two underlying and universal themes in the notes: dynamic programming and filtering.

Dynamic programming is one of the two fundamental tools of optimal control, the other being Pontryagin's principle, [24]. Dynamic programming is a means by which candidate optimal controls can be verified to be optimal. The procedure is to find a suitable solution to a dynamic programming equation (DPE), which encodes the optimal performance, and to use it to compare the performance of a candidate optimal control. Candidate controls may be determined from Pontryagin's principle, or directly from the solution to the DPE.
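To make this concrete, here is the DPE in its simplest discrete-time form; the notation (dynamics f, running cost L, terminal cost \Phi, horizon M) is assumed here purely for illustration and is set up properly in the sections that follow. For the dynamics x_{k+1} = f(x_k, u_k) and cost \sum_{k=0}^{M-1} L(x_k, u_k) + \Phi(x_M), the value function satisfies the backward recursion

\[
V(x, k) = \min_{u} \bigl\{ L(x, u) + V\bigl(f(x, u),\, k + 1\bigr) \bigr\},
\qquad V(x, M) = \Phi(x),
\]

and a control u^*(x, k) attaining the minimum is a candidate optimal feedback, whose performance can be compared against V.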
In general it is difficult to solve DPEs. Explicit solutions exist in cases like the linear quadratic regulator, but in general approximations must usually be sought. In addition, there are some technical complications regarding the DPE. In continuous time, the DPE is a nonlinear PDE, commonly called the Hamilton-Jacobi-Bellman (HJB) equation. The complications concern differentiability, or lack thereof, and occur even in "simple" classical deterministic problems, section 2. This is one reason it can be helpful to work in discrete time, where such regularity issues are much simpler (another reason for working in discrete time is to facilitate digital implementation). A short numerical sketch of the discrete-time recursion follows.
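As an illustration, the following minimal sketch solves a scalar linear-quadratic problem by the backward recursion above, on state and control grids. All parameter values, grids, and variable names are assumed for illustration and are not taken from the notes; for this LQR case the same answer could be obtained exactly from the Riccati equation.

import numpy as np

# Backward dynamic programming for a scalar discrete-time problem
# (illustrative values only):
#   x_{k+1} = a x_k + b u_k,
#   cost    = sum_k (q x_k^2 + r u_k^2) + q_f x_M^2.
a, b = 1.0, 0.5
q, r, q_f = 1.0, 0.1, 1.0
M = 20

xs = np.linspace(-2.0, 2.0, 201)    # state grid
us = np.linspace(-2.0, 2.0, 101)    # control grid

V = q_f * xs**2                     # terminal condition V(x, M)
policy = np.zeros((M, xs.size))     # minimizing control at each stage

for k in range(M - 1, -1, -1):
    # next state for every (x, u) pair on the grids
    x_next = a * xs[:, None] + b * us[None, :]
    # running cost plus interpolated cost-to-go; np.interp clamps at the
    # grid edges, so the grid must cover the reachable states
    cost = q * xs[:, None]**2 + r * us[None, :]**2 + np.interp(x_next, xs, V)
    best = cost.argmin(axis=1)
    policy[k] = us[best]
    V = cost[np.arange(xs.size), best]   # V(x, k)

Verifying a candidate feedback then amounts to simulating it and comparing its accumulated cost with V at the initial state, in the spirit of the verification idea described above.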