[Numerical Root-Finding] The convergence order of the secant method is 1.618


hbghlyj Posted 2022-4-28 02:57
math.okstate.edu/people/binegar/4513-F98/4513-l08.pdf page 2

Let $r$ be the actual root of $f(x)=0$, let $x_{n}$ be the approximate value for $r$ obtained by carrying out $n$ iterations of the secant method, and let $e_{n}$ be the corresponding error:
$$
e_{n}=x_{n}-r .
$$
We then have
$$
\begin{aligned}
e_{n+1} &=x_{n+1}-r \\
&=x_{n}-f\left(x_{n}\right)\left(\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right)-r \\
&=e_{n}-f\left(x_{n}\right)\left(\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right) \\
&=e_{n}-f\left(x_{n}\right)\left(\frac{x_{n}-r-x_{n-1}+r}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right) \\
&=e_{n}-f\left(x_{n}\right)\left(\frac{e_{n}-e_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right) \\
&=\frac{e_{n}\left(f\left(x_{n}\right)-f\left(x_{n-1}\right)\right)-f\left(x_{n}\right)\left(e_{n}-e_{n-1}\right)}{f\left(x_{n}\right)-f\left(x_{n-1}\right)} \\
&=\frac{e_{n-1} f\left(x_{n}\right)-e_{n} f\left(x_{n-1}\right)}{f\left(x_{n}\right)-f\left(x_{n-1}\right)} \\
&=\left[\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right]\left[\frac{e_{n-1} f\left(x_{n}\right)-e_{n} f\left(x_{n-1}\right)}{x_{n}-x_{n-1}}\right] \\
&=\left[\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right]\left[\frac{f\left(x_{n}\right) / e_{n}-f\left(x_{n-1}\right) / e_{n-1}}{x_{n}-x_{n-1}}\right] e_{n} e_{n-1}
\end{aligned}
$$
Now by Taylor's Theorem
$$
\begin{aligned}
f\left(x_{n}\right) &=f(r)+f^{\prime}(r) e_{n}+\frac{1}{2} f^{\prime \prime}(r) e_{n}^{2}+\mathcal{O}\left(e_{n}^{3}\right) \\
&=0+f^{\prime}(r) e_{n}+\frac{1}{2} f^{\prime \prime}(r) e_{n}^{2}+\mathcal{O}\left(e_{n}^{3}\right)
\end{aligned}
$$
So
$$
\frac{f\left(x_{n}\right)}{e_{n}}=f^{\prime}(r)+\frac{1}{2} f^{\prime \prime}(r) e_{n}+\mathcal{O}\left(e_{n}^{2}\right)
$$
and similarly
$$
\frac{f\left(x_{n-1}\right)}{e_{n-1}}=f^{\prime}(r)+\frac{1}{2} f^{\prime \prime}(r) e_{n-1}+\mathcal{O}\left(e_{n-1}^{2}\right)
$$

So we have
$$
\begin{aligned}
\frac{f\left(x_{n}\right)}{e_{n}}-\frac{f\left(x_{n-1}\right)}{e_{n-1}} &=\left(f^{\prime}(r)+\frac{1}{2} f^{\prime \prime}(r) e_{n}\right)-\left(f^{\prime}(r)+\frac{1}{2} f^{\prime \prime}(r) e_{n-1}\right)+\mathcal{O}\left(e_{n-1}^{2}\right) \\
&=\frac{1}{2} f^{\prime \prime}(r)\left(e_{n}-e_{n-1}\right)+\mathcal{O}\left(e_{n-1}^{2}\right)
\end{aligned}
$$
and
$$
e_{n+1} \approx\left[\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)}\right]\left[\frac{\frac{1}{2} f^{\prime \prime}(r)\left(e_{n}-e_{n-1}\right)}{x_{n}-x_{n-1}}\right] e_{n} e_{n-1}
$$
Now
$$
e_{n}-e_{n-1}=\left(x_{n}-r\right)-\left(x_{n-1}-r\right)=x_{n}-x_{n-1}
$$
and for $x_{n}$ and $x_{n-1}$ sufficiently close to $r$
$$
\frac{x_{n}-x_{n-1}}{f\left(x_{n}\right)-f\left(x_{n-1}\right)} \approx \frac{1}{f^{\prime}(r)}
$$
So
$$
e_{n+1} \approx\left[\frac{1}{f^{\prime}(r)}\right]\left[\frac{1}{2} f^{\prime \prime}(r)\right] e_{n} e_{n-1}=\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\, e_{n} e_{n-1}=C e_{n} e_{n-1}\tag1\label1
$$
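As a numerical sanity check (not part of the original derivation), the following Python sketch runs the secant iteration on $f(x)=x^2-2$ (an arbitrary test problem, with $r=\sqrt2$) and prints the ratios $e_{n+1}/(e_n e_{n-1})$, which should settle near $f''(r)/(2f'(r)) = 1/(2\sqrt2) \approx 0.3536$, the same constant that appears in Theorem 6.4 quoted below.

```python
from math import sqrt

# Numerical check of e_{n+1} ~ C * e_n * e_{n-1} for the secant method.
# Test problem (an arbitrary choice for illustration): f(x) = x^2 - 2,
# with root r = sqrt(2).  The predicted constant is
# C = f''(r) / (2 f'(r)) = 2 / (4*sqrt(2)) ~ 0.3536.
f = lambda x: x * x - 2.0
r = sqrt(2.0)

x_prev, x_curr = 1.0, 2.0
errors = [x_prev - r, x_curr - r]
for _ in range(5):
    # Secant update: x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    x_prev, x_curr = x_curr, x_next
    errors.append(x_curr - r)

# The ratios e_{n+1} / (e_n * e_{n-1}) should approach C.
for e_prev, e_mid, e_next in zip(errors, errors[1:], errors[2:]):
    print(e_next / (e_mid * e_prev))
```

The starting points $x_0=1$, $x_1=2$ are arbitrary; any bracket near the root gives the same limiting ratio.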
In order to determine the order of convergence, we now suppose an asymptotic relationship of the form
$$
\left|e_{n+1}\right| \approx A\left|e_{n}\right|^{\alpha}
$$
This relationship also requires
$$
\left|e_{n}\right| \approx A\left|e_{n-1}\right|^{\alpha} \Rightarrow\left|e_{n-1}\right|=\left(A^{-1}\left|e_{n}\right|\right)^{1 / \alpha}
$$
In order to be consistent with \eqref{1} we need
$$
\left|e_{n+1}\right|=A\left|e_{n}\right|^{\alpha}=C\left|e_{n} e_{n-1}\right|=C\left|e_{n}\right|\left(A^{-1}\left|e_{n}\right|\right)^{1 / \alpha}
$$
or
$$
A\left|e_{n}\right|^{\alpha}=C A^{-\frac{1}{\alpha}}\left|e_{n}\right|^{1+\frac{1}{\alpha}}
$$
or
$$
\frac{A^{1+\frac{1}{\alpha}}}{C}=\left|e_{n}\right|^{1-\alpha+\frac{1}{\alpha}}
$$
Now the left hand side is a product of constants. Therefore the right hand side must also be constant as $n \rightarrow \infty$. For this to happen we need
$$
0=1-\alpha+\frac{1}{\alpha} \Rightarrow \alpha^{2}-\alpha-1=0 \quad \Rightarrow \quad \alpha=\frac{1 \pm \sqrt{1+4}}{2}=\frac{1}{2} \pm \frac{\sqrt{5}}{2}
$$
Taking the positive root (otherwise the error terms asymptotically diverge), we find
$$
\alpha \approx 1.62<2
$$
Thus, the rate of convergence of the secant method is superlinear, but not quadratic.
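This superlinear order can also be observed numerically. The sketch below (test problem $f(x)=x^2-2$ and starting points are arbitrary choices) uses high-precision arithmetic so the iteration can run well past double precision, and estimates the order from three consecutive errors via $\alpha_n \approx \ln|e_{n+1}/e_n| / \ln|e_n/e_{n-1}|$.

```python
import math
from decimal import Decimal, getcontext

# Empirical estimate of the convergence order alpha from three
# consecutive errors:
#   alpha_n ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|.
# Decimal arithmetic with 60 digits keeps the tiny late errors meaningful.
getcontext().prec = 60
f = lambda x: x * x - 2
r = Decimal(2).sqrt()

xs = [Decimal(1), Decimal(2)]
for _ in range(8):
    a, b = xs[-2], xs[-1]
    xs.append(b - f(b) * (b - a) / (f(b) - f(a)))

errs = [abs(x - r) for x in xs]
orders = [
    math.log(float(errs[i + 1] / errs[i])) / math.log(float(errs[i] / errs[i - 1]))
    for i in range(2, len(errs) - 1)
]
print(orders)  # estimates tending toward (1 + sqrt(5)) / 2 ~ 1.618
```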

hbghlyj Posted 2022-4-28 03:23
www8.cs.umu.se/kurser/5DV005/HT11/TVB.pdf page 102
Theorem 6.4. If $f(r)=0$, $f^{\prime}(r) \neq 0$, and $f^{\prime \prime}(x)$ is continuous, then for $x_{0}$, $x_{1}$ close enough to $r$,
$$
\lim _{k \rightarrow \infty} \frac{\left|e_{k+1}\right|}{\left|e_{k}\right|^{\phi}}=\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{\phi-1}
$$
where $e_{k}=x_{k}-r$ is the error, and $\phi=(\sqrt{5}+1) / 2 \cong 1.618$ is the positive root of the equation $\phi^{2}=\phi+1$, called the Golden Ratio.
In practice,
$$
e_{k+1} \simeq \frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)} \cdot\left(e_{k-1} \cdot e_{k}\right)
$$
In contrast, for Newton iterations,
$$
e_{k+1} \simeq \frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)} e_{k}^{2} .
$$
Thus, Newton's method is more powerful than the secant method. A better comparison of the "effectiveness" of these two methods should also consider the "work" needed for convergence.
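To make the "work" comparison concrete, here is a small sketch (test problem, starting points, and tolerance are arbitrary choices) that counts function evaluations: Newton needs $f$ and $f^{\prime}$ at every step, while the secant method reuses the previous value of $f$ and pays for only one new evaluation per step.

```python
from math import sqrt

# Count function evaluations needed to reach |x - r| < 1e-12 on the
# arbitrary test problem f(x) = x^2 - 2, root r = sqrt(2).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
r = sqrt(2.0)
tol = 1e-12

# Newton's method: each step evaluates both f and f'.
x, newton_evals = 2.0, 0
while abs(x - r) >= tol:
    x = x - f(x) / df(x)
    newton_evals += 2

# Secant method: after the two starting values, each step needs
# only one new evaluation of f (the other value is reused).
a, b = 1.0, 2.0
fa, fb = f(a), f(b)
secant_evals = 2
while abs(b - r) >= tol:
    a, fa, b = b, fb, b - fb * (b - a) / (fb - fa)
    fb = f(b)
    secant_evals += 1

print("Newton evaluations:", newton_evals)
print("Secant evaluations:", secant_evals)
```

On this problem the secant method reaches the tolerance with no more evaluations than Newton, which is the point of the two-step comparison that follows.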
For the secant method,
$$
\begin{aligned}
\left|e_{k+2}\right| & \simeq\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{\phi-1}\left|e_{k+1}\right|^{\phi} \\
&=\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{\phi-1}\left[\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{\phi-1}\left|e_{k}\right|^{\phi}\right]^{\phi} \\
&=\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{(\phi-1)+\left(\phi^{2}-\phi\right)} \cdot\left|e_{k}\right|^{\phi^{2}} .
\end{aligned}
$$
But, $\phi^{2}=\phi+1$, therefore
$$
\left|e_{k+2}\right| \simeq\left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|^{\phi} \cdot\left|e_{k}\right|^{\phi+1}.
$$
Thus, two steps of the secant method have an order of convergence $\simeq 2.618$, greater than Newton's quadratic order; moreover, since each secant step needs only one new evaluation of $f$, two secant steps cost roughly the same as one Newton step (which evaluates both $f$ and $f^{\prime}$).

hbghlyj Posted 2022-4-28 03:45

hbghlyj Posted 2022-5-11 19:29
dl.acm.org/doi/pdf/10.1145/368892.368913
T. A. Jeeves, "Secant modification of Newton's method" (1958), Westinghouse Research Laboratories
Thus at each iteration the increase in the number of significant figures is 1.62 times the previous increase.
Footnote:
Not a factor of 2 as Wegstein asserts, hence the convergence is not quadratic.
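Jeeves's remark can be restated numerically: if $d_n = -\log_{10}|x_n - r|$ is the number of correct decimal digits, each gain $d_{n+1}-d_n$ is about $1.618$ times the previous gain. A sketch (test problem and starting points are arbitrary choices):

```python
import math
from decimal import Decimal, getcontext

# Track d_n = -log10|x_n - r|, the number of correct decimal digits,
# for the secant iteration on f(x) = x^2 - 2.  The ratio of successive
# digit gains should tend to the golden ratio ~1.618.
getcontext().prec = 60
f = lambda x: x * x - 2
r = Decimal(2).sqrt()

xs = [Decimal(1), Decimal(2)]
for _ in range(8):
    a, b = xs[-2], xs[-1]
    xs.append(b - f(b) * (b - a) / (f(b) - f(a)))

digits = [-math.log10(float(abs(x - r))) for x in xs[2:]]
gains = [d2 - d1 for d1, d2 in zip(digits, digits[1:])]
ratios = [g2 / g1 for g1, g2 in zip(gains, gains[1:])]
print(ratios)  # ratios of successive digit gains, tending toward ~1.618
```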

Discuz Math Forum