Abstract
In this chapter, we study algorithms for solving convex optimization problems. We focus on algorithms that have been applied, or have the potential to be applied, to machine learning and other data analysis problems. More specifically, we discuss first-order methods, which have proven effective for large-scale optimization. These methods also form the basis for other computationally efficient methods, e.g., the stochastic and randomized methods discussed in later chapters.
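As a point of reference for the methods surveyed in this chapter, the sketch below illustrates the basic gradient method, the prototypical first-order scheme: each iteration uses only gradient (first-order) information, via the update x_{k+1} = x_k - gamma * grad f(x_k). The problem instance and all names here are illustrative, not taken from the chapter.

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    """Basic first-order method: x_{k+1} = x_k - step * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Smooth convex test problem: minimize f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad_f = lambda x: A.T @ (A @ x - b)

# Closed-form minimizer for comparison (normal equations).
x_star = np.linalg.solve(A.T @ A, A.T @ b)

# A step size below 2/L, with L the largest eigenvalue of A^T A,
# guarantees convergence for this smooth convex objective.
x = gradient_descent(grad_f, np.zeros(2), step=0.1, iters=500)
print(np.allclose(x, x_star, atol=1e-8))
```

For smooth convex objectives this simple scheme converges at rate O(1/k); the accelerated variants discussed in this chapter improve this to O(1/k^2).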
© 2020 Springer Nature Switzerland AG
Cite this chapter
Lan, G. (2020). Deterministic Convex Optimization. In: First-order and Stochastic Optimization Methods for Machine Learning. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-39568-1_3
Print ISBN: 978-3-030-39567-4
Online ISBN: 978-3-030-39568-1
eBook Packages: Mathematics and Statistics