Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition
(2014) pp. 309–325

Abstract
 We consider distributed model predictive control (DMPC) where a sparse centralized optimization problem without a terminal cost or a terminal constraint set is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are usually used to solve the dual problem resulting from dual decomposition. However, gradient methods are known for their slow convergence rate, especially for ill-conditioned problems. This is not desirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm, applied to solve optimization problems arising in DMPC, that has a significantly better convergence rate than the classical gradient method. This improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating Hessian information into the gradient iterations in a well-defined manner. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme, without using a stabilizing terminal cost or terminal constraint set.
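To illustrate the mechanism the abstract describes, the following is a minimal sketch of dual decomposition with a Nesterov-type accelerated step on the dual variable. It is a toy two-subsystem example, not the chapter's generalized (Hessian-preconditioned) algorithm; the problem data (`q1`, `q2`, `b`) and all function names are hypothetical.

```python
# Hedged sketch: dual decomposition + accelerated dual ascent on a toy
# coupled QP. NOT the authors' generalized method, which additionally
# incorporates Hessian information into the gradient iterations.
#
# Primal: minimize 0.5*q1*x1^2 + 0.5*q2*x2^2  s.t.  x1 + x2 = b
# For a given price lam, each subsystem solves its local problem
# independently; the dual gradient is the coupling-constraint residual.

def local_solve(q, lam):
    """Subsystem minimizes 0.5*q*x^2 + lam*x in closed form."""
    return -lam / q

def accelerated_dual_ascent(q1, q2, b, steps=200):
    L = 1.0 / q1 + 1.0 / q2            # Lipschitz constant of the dual gradient
    lam, lam_prev = 0.0, 0.0
    for k in range(steps):
        # Nesterov extrapolation on the dual (price) variable
        y = lam + (k - 1) / (k + 2) * (lam - lam_prev)
        x1 = local_solve(q1, y)        # solved locally by subsystem 1
        x2 = local_solve(q2, y)        # solved locally by subsystem 2
        grad = x1 + x2 - b             # dual gradient = constraint residual
        lam_prev, lam = lam, y + grad / L   # ascent step on the concave dual
    return local_solve(q1, lam), local_solve(q2, lam), lam

x1, x2, lam = accelerated_dual_ascent(q1=1.0, q2=2.0, b=1.0)
print(round(x1 + x2, 6))   # coupling constraint residual vanishes at optimum
```

Only the scalar price `lam` and the local residual contributions need to be communicated between subsystems per iteration, which is why the convergence rate directly determines the communication burden.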
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/4926692
 author
 Giselsson, Pontus (LU) and Rantzer, Anders (LU)
 organization
 publishing date
 2014
 type
 Chapter in Book/Report/Conference proceeding
 publication status
 published
 subject
 host publication
 Distributed Model Predictive Control Made Easy
 editor
 Maestre, José M. and Negenborn, Rudy R.
 pages
 309–325
 publisher
 Springer
 external identifiers

 scopus:84896528510
 ISBN
 978-94-007-7005-8
 DOI
 10.1007/978-94-007-7006-5_19
 project
 LCCC
 language
 English
 LU publication?
 yes
 id
 d01f4a03849347eb8ae1a25e6ef17e61 (old id 4926692)
 date added to LUP
 2016-04-04 12:02:07
 date last changed
 2021-10-06 05:38:06
@inbook{d01f4a03849347eb8ae1a25e6ef17e61,
  abstract  = {We consider distributed model predictive control (DMPC) where a sparse centralized optimization problem without a terminal cost or a terminal constraint set is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are usually used to solve the dual problem resulting from dual decomposition. However, gradient methods are known for their slow convergence rate, especially for ill-conditioned problems. This is not desirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm, applied to solve optimization problems arising in DMPC, that has a significantly better convergence rate than the classical gradient method. This improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating Hessian information into the gradient iterations in a well-defined manner. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme, without using a stabilizing terminal cost or terminal constraint set.},
  author    = {Giselsson, Pontus and Rantzer, Anders},
  booktitle = {Distributed Model Predictive Control Made Easy},
  editor    = {Maestre, José M. and Negenborn, Rudy R.},
  isbn      = {978-94-007-7005-8},
  language  = {eng},
  pages     = {309--325},
  publisher = {Springer},
  title     = {Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition},
  url       = {http://dx.doi.org/10.1007/978-94-007-7006-5_19},
  doi       = {10.1007/978-94-007-7006-5_19},
  year      = {2014},
}