Broyden's method

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.[1]

Newton's method for solving F(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian only at the first iteration, and to do a rank-one update at the other iterations.

In 1979 Gay proved that when Broyden's method is applied to a linear system of size n × n, it terminates in 2n steps,[2] although like all quasi-Newton methods, it may not converge for nonlinear systems.

Description of the method

Solving single variable equation

In the secant method, we replace the first derivative f'(x_n) with the finite-difference approximation:

f'(x_n) \simeq \frac{f(x_n) - f(x_{n - 1})}{x_n - x_{n - 1}} ,

and proceed similarly to Newton's method:

x_{n + 1} = x_n - \frac{1}{f'(x_n)} f(x_n)

where n is the iteration index.
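As an illustrative sketch (not part of the original article), the secant iteration above takes only a few lines of Python; the function name and interface are chosen here for the example:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Root of f by the secant method: approximate f'(x_n) with a
    finite difference, then take a Newton-like step."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            break
        # f'(x_n) ~ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})
        dfdx = (f1 - f0) / (x1 - x0)
        # x_{n+1} = x_n - f(x_n) / f'(x_n)
        x0, x1 = x1, x1 - f1 / dfdx
    return x1
```

For example, `secant(lambda x: x*x - 2, 1.0, 2.0)` converges to a value near the square root of 2.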

Solving a system of nonlinear equations

To solve a system of k nonlinear equations

\mathbf F(\mathbf x) = \mathbf 0 ,

where F is a vector-valued function of the vector x:

\mathbf x = (x_1, x_2, x_3, \dotsc, x_k)
\mathbf F(\mathbf x) = (F_1(x_1, x_2, \dotsc, x_k), F_2(x_1, x_2, \dotsc, x_k), \dotsc, F_k(x_1, x_2, \dotsc, x_k))

For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with the Jacobian matrix J. The Jacobian is determined iteratively, based on the secant equation arising from the finite-difference approximation:

\mathbf J_n (\mathbf x_n - \mathbf x_{n - 1}) \simeq \mathbf F(\mathbf x_n) - \mathbf F(\mathbf x_{n - 1}) ,

where n is the iteration index. For clarity, let us define:

\mathbf F_n = \mathbf F(\mathbf x_n) ,
\Delta \mathbf x_n = \mathbf x_n - \mathbf x_{n - 1} ,
\Delta \mathbf F_n = \mathbf F_n - \mathbf F_{n - 1} ,

so the above may be rewritten as:

\mathbf J_n \Delta \mathbf x_n \simeq \Delta \mathbf F_n .

Unfortunately, the above equation is underdetermined when k is greater than one: it supplies only k conditions for the k² entries of Jn. Broyden suggests using the current estimate of the Jacobian matrix Jn − 1 and improving upon it by taking the solution to the secant equation that is a minimal modification to Jn − 1:

\mathbf J_n = \mathbf J_{n - 1} + \frac{\Delta \mathbf F_n - \mathbf J_{n - 1} \Delta \mathbf x_n}{\|\Delta \mathbf x_n\|^2} \Delta \mathbf x_n^{\mathrm T}

This minimizes the following Frobenius norm:

\|\mathbf J_n - \mathbf J_{n - 1}\|_{\mathrm F} .

We may then proceed in the Newton direction:

\mathbf x_{n + 1} = \mathbf x_n - \mathbf J_n^{-1} \mathbf F(\mathbf x_n) .
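A minimal sketch of the resulting iteration in Python with NumPy (the function name and interface are illustrative, not from the article). The Jacobian is formed once at the start, here supplied by the caller, and thereafter updated only by the rank-one formula above:

```python
import numpy as np

def broyden_good(F, x0, J0, tol=1e-10, max_iter=100):
    """'Good' Broyden's method: solve F(x) = 0 using rank-one
    secant updates to an estimate J of the Jacobian."""
    x = np.asarray(x0, dtype=float)
    J = np.asarray(J0, dtype=float)   # Jacobian (estimate) at x0
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J, -Fx)  # Newton-like step: J dx = -F(x)
        x = x + dx
        F_new = F(x)
        dF = F_new - Fx
        # Secant update: J += (dF - J dx) dx^T / ||dx||^2
        J += np.outer(dF - J @ dx, dx) / (dx @ dx)
        Fx = F_new
    return x
```

For instance, solving the system x² + y² = 2, x = y from the starting point (2, 1), with the exact Jacobian evaluated there as J0, converges to a point near (1, 1).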

Broyden also suggested using the Sherman–Morrison formula to update the inverse of the Jacobian matrix directly:

\mathbf J_n^{-1} = \mathbf J_{n - 1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J^{-1}_{n - 1} \Delta \mathbf F_n}{\Delta \mathbf x_n^{\mathrm T} \mathbf J^{-1}_{n - 1} \Delta \mathbf F_n} \Delta \mathbf x_n^{\mathrm T} \mathbf J^{-1}_{n - 1}

This first method is commonly known as the "good Broyden's method".
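Under the same illustrative assumptions as the sketch above, the Sherman–Morrison form maintains J⁻¹ instead of J, replacing the linear solve at each step with a matrix-vector product:

```python
import numpy as np

def broyden_good_inv(F, x0, Jinv0, tol=1e-10, max_iter=100):
    """'Good' Broyden's method maintaining the inverse Jacobian
    directly via the Sherman-Morrison update."""
    x = np.asarray(x0, dtype=float)
    Jinv = np.asarray(Jinv0, dtype=float)  # inverse Jacobian (estimate) at x0
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = -Jinv @ Fx                    # step needs no linear solve
        x = x + dx
        F_new = F(x)
        dF = F_new - Fx
        # Sherman-Morrison: Jinv += (dx - Jinv dF)(dx^T Jinv) / (dx^T Jinv dF)
        JdF = Jinv @ dF
        Jinv += np.outer(dx - JdF, dx @ Jinv) / (dx @ JdF)
        Fx = F_new
    return x
```

The trade-off is O(k²) work per iteration for the update and the step, rather than the O(k³) solve required when J itself is stored.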

A similar technique can be derived by using a slightly different modification to Jn − 1. This yields a second method, the so-called "bad Broyden's method" (but see [3]):

\mathbf J_n^{-1} = \mathbf J_{n - 1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J^{-1}_{n - 1} \Delta \mathbf F_n}{\|\Delta \mathbf F_n\|^2} \Delta \mathbf F_n^{\mathrm T}

This minimizes a different Frobenius norm:

\|\mathbf J_n^{-1} - \mathbf J_{n - 1}^{-1}\|_{\mathrm F} .
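A sketch of the "bad" variant under the same illustrative assumptions; compared with the Sherman–Morrison form, only the update line differs, with ΔF taking the place of Δx in the correction term and its normalization:

```python
import numpy as np

def broyden_bad(F, x0, Jinv0, tol=1e-10, max_iter=100):
    """'Bad' Broyden's method: rank-one update applied to the
    inverse Jacobian, normalized by ||dF||^2."""
    x = np.asarray(x0, dtype=float)
    Jinv = np.asarray(Jinv0, dtype=float)  # inverse Jacobian (estimate) at x0
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = -Jinv @ Fx
        x = x + dx
        F_new = F(x)
        dF = F_new - Fx
        # Jinv += (dx - Jinv dF) dF^T / ||dF||^2
        Jinv += np.outer(dx - Jinv @ dF, dF) / (dF @ dF)
        Fx = F_new
    return x
```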

Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its update.

References

  1. ^ Broyden, C. G. (October 1965). "A Class of Methods for Solving Nonlinear Simultaneous Equations". Mathematics of Computation (American Mathematical Society) 19 (92): 577–593.
  2. ^ Gay, D. M. (August 1979). "Some convergence properties of Broyden's method". SIAM Journal on Numerical Analysis (SIAM) 16 (4): 623–630.
  3. ^ Kvaalen, Eric (November 1991). "A faster Broyden method". BIT Numerical Mathematics 31 (2): 369–372.

External links

  • Module for Broyden's Method by John H. Mathews