Part of the book series: Springer Series in the Data Sciences (SSDS)

Abstract

In this chapter, we study algorithms for solving convex optimization problems. We will focus on algorithms that have been applied, or have the potential to be applied, to machine learning and other data analysis problems. More specifically, we will discuss first-order methods, which have proven effective for large-scale optimization. These methods also form the basis for other computationally efficient methods, e.g., the stochastic and randomized methods discussed in later chapters.
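
As a concrete illustration of the kind of first-order method studied in this chapter, the sketch below runs projected gradient descent on a small constrained least-squares problem. This is a minimal example for orientation only, not code from the book; the problem data, the feasible ball of radius r, and the classical 1/L step-size rule are illustrative assumptions.

```python
import numpy as np

def projected_gradient_descent(A, b, r, num_iters=500):
    """Minimize (1/2)||Ax - b||^2 over the Euclidean ball {x : ||x||_2 <= r}."""
    m, n = A.shape
    x = np.zeros(n)
    # For f(x) = (1/2)||Ax - b||^2 the gradient is A^T (Ax - b), and its
    # Lipschitz constant L is the largest eigenvalue of A^T A, i.e., the
    # squared spectral norm of A. The classical step size is 1/L.
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)
        x = x - grad / L                  # gradient step
        norm_x = np.linalg.norm(x)
        if norm_x > r:                    # Euclidean projection onto the ball
            x = x * (r / norm_x)
    return x

# Toy usage on a random instance (data is synthetic and illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
x_star = projected_gradient_descent(A, b, r=1.0)
print(0.5 * np.linalg.norm(A @ x_star - b) ** 2)
```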

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Lan, G. (2020). Deterministic Convex Optimization. In: First-order and Stochastic Optimization Methods for Machine Learning. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-39568-1_3
