
Levenberg–Marquardt method for unconstrained optimization


We propose and study a Levenberg–Marquardt method globalized by means of linesearch for unconstrained optimization problems with possibly nonisolated solutions. The Levenberg–Marquardt method is well recognized as an efficient tool for solving systems of nonlinear equations, especially in the presence of singular and even nonisolated solutions. Customary globalization strategies for this method rely on linesearch for the squared Euclidean residual of the equation being solved. In the case of an unconstrained optimization problem, this equation is obtained by setting the gradient of the objective function equal to zero, according to the Fermat principle. However, such globalization strategies are not well suited to optimization problems, as the resulting algorithms have no "preference" for convergence to minimizers over maximizers or any other stationary points. For this reason, in this work we consider a different technique for globalizing convergence of the Levenberg–Marquardt method, employing linesearch for the objective function of the original problem. We demonstrate that the proposed algorithm possesses reasonable global convergence properties and preserves the high convergence rate of the Levenberg–Marquardt method under weak assumptions.
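The idea described above can be illustrated in code: compute the Levenberg–Marquardt direction from the regularized Newton system (H + μI) d = −g, but perform Armijo backtracking on the objective f itself rather than on the squared gradient residual. This is only a minimal sketch under common assumptions (the regularization choice μ = μ₀‖∇f‖ and the steepest-descent fallback are typical devices, not necessarily the paper's exact rules):

```python
import numpy as np

def lm_minimize(f, grad, hess, x0, mu0=1e-3, sigma=1e-4, beta=0.5,
                tol=1e-8, max_iter=100):
    """Levenberg-Marquardt iteration with Armijo linesearch on f.

    Sketch only: mu_k = mu0 * ||grad(x_k)|| is a common regularization
    choice in LM analyses; the paper's exact rules may differ.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # LM direction: (H + mu I) d = -g; mu > 0 keeps the
        # system solvable even when H is singular
        H = hess(x)
        mu = mu0 * gnorm
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
        # Fall back to steepest descent if d is not a descent direction
        # (possible when H is indefinite)
        if g @ d >= 0:
            d = -g
        # Armijo backtracking on the objective f itself,
        # not on the residual ||grad f||^2
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta
        x = x + t * d
    return x
```

Because the acceptance test uses f rather than ‖∇f‖², a maximizer or saddle point does not look like progress to the linesearch, which is precisely the motivation stated in the abstract. As a usage example, minimizing f(x) = x₁² + x₂⁴ (whose Hessian is singular at the solution) from the starting point (1, 1) drives the gradient below the tolerance in a few dozen iterations.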


Keywords: unconstrained optimization problem; nonisolated solutions; Levenberg–Marquardt method; globalization of convergence











