"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "mpmath/calculus/optimization.py" between
mpmath-0.18.tar.gz and mpmath-0.19.tar.gz

About: mpmath is a Python library for arbitrary-precision floating-point arithmetic.

optimization.py (mpmath-0.18) vs. optimization.py (mpmath-0.19)
(lines prefixed with '-' are from mpmath-0.18, lines prefixed with '+' are from mpmath-0.19; unmarked lines are unchanged context)
skipping to change at line 737

     *norm*
         used vector norm (used by multidimensional solvers)

     solver has to be callable with ``(f, x0, **kwargs)`` and return an generator
     yielding pairs of approximative solution and estimated error (which is
     expected to be positive).
     You can use the following string aliases:
     'secant', 'mnewton', 'halley', 'muller', 'illinois', 'pegasus', 'anderson',
     'ridder', 'anewton', 'bisect'
-    See mpmath.optimization for their documentation.
+    See mpmath.calculus.optimization for their documentation.

     **Examples**

     The function :func:`~mpmath.findroot` locates a root of a given function using the
     secant method by default. A simple example use of the secant method is to
     compute `\pi` as the root of `\sin x` closest to `x_0 = 3`::

         >>> from mpmath import *
         >>> mp.dps = 30; mp.pretty = True
         >>> findroot(sin, 3)
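As a brief, hedged illustration of the string aliases listed above (not taken from the diffed source; the 'bisect' alias and the bracketing interval are illustrative choices), one of them can be passed to findroot in place of the default secant solver::

    # Sketch: choose a solver by its string alias.  Bracketing solvers
    # such as 'bisect' take an interval (a, b) enclosing a sign change
    # of f as the starting value instead of a single point.
    from mpmath import mp, findroot, sin

    mp.dps = 30
    root = findroot(sin, (3, 3.5), solver='bisect')
    print(root)  # converges to pi, the only root of sin between 3 and 3.5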
skipping to change at line 1029

     You can use Steffensen's method to accelerate a fixpoint iteration of linear
     (or less) convergence.

     x* is a fixpoint of the iteration x_{k+1} = phi(x_k) if x* = phi(x*). For
     phi(x) = x**2 there are two fixpoints: 0 and 1.

     Let's try Steffensen's method:

     >>> f = lambda x: x**2
-    >>> from mpmath.optimization import steffensen
+    >>> from mpmath.calculus.optimization import steffensen
     >>> F = steffensen(f)
     >>> for x in [0.5, 0.9, 2.0]:
     ...     fx = Fx = x
-    ...     for i in xrange(10):
+    ...     for i in xrange(9):
     ...         try:
     ...             fx = f(fx)
     ...         except OverflowError:
     ...             pass
     ...         try:
     ...             Fx = F(Fx)
     ...         except ZeroDivisionError:
     ...             pass
-    ...         print '%20g %20g' % (fx, Fx)
+    ...         print('%20g %20g' % (fx, Fx))
     0.25 -0.5
     0.0625 0.1
     0.00390625 -0.0011236
-    1.52588e-005 1.41691e-009
-    2.32831e-010 -2.84465e-027
-    5.42101e-020 2.30189e-080
-    2.93874e-039 -1.2197e-239
-    8.63617e-078 0
+    1.52588e-05 1.41691e-09
+    2.32831e-10 -2.84465e-27
+    5.42101e-20 2.30189e-80
+    2.93874e-39 -1.2197e-239
+    8.63617e-78 0
     7.45834e-155 0
-    5.56268e-309 0
     0.81 1.02676
     0.6561 1.00134
     0.430467 1
     0.185302 1
     0.0343368 1
     0.00117902 1
-    1.39008e-006 1
-    1.93233e-012 1
-    3.73392e-024 1
-    1.39421e-047 1
+    1.39008e-06 1
+    1.93233e-12 1
+    3.73392e-24 1
     4 1.6
     16 1.2962
     256 1.10194
     65536 1.01659
-    4.29497e+009 1.00053
-    1.84467e+019 1
-    3.40282e+038 1
-    1.15792e+077 1
-    1.34078e+154 1
+    4.29497e+09 1.00053
+    1.84467e+19 1
+    3.40282e+38 1
+    1.15792e+77 1
     1.34078e+154 1

     Unmodified, the iteration converges only towards 0. Modified it converges
     not only much faster, it converges even to the repelling fixpoint 1.
     """
     def F(x):
         fx = f(x)
         ffx = f(fx)
         return (x*ffx - fx**2) / (ffx - 2*fx + x)
     return F
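The closed form returned by ``F`` above is the standard Aitken `\Delta^2` / Steffensen update `x - (\phi(x) - x)^2 / (\phi(\phi(x)) - 2\phi(x) + x)` written over a common denominator. A minimal sketch checking that equivalence numerically, assuming plain Python floats and the same phi(x) = x**2 as in the doctest (the sample points are arbitrary choices, not taken from the diffed source)::

    # Sketch: verify that the expression used in F(x) agrees with the
    # textbook Aitken delta-squared / Steffensen form for phi(x) = x**2.
    phi = lambda x: x**2

    def closed_form(x):
        fx, ffx = phi(x), phi(phi(x))
        return (x*ffx - fx**2) / (ffx - 2*fx + x)   # form used in F above

    def aitken_form(x):
        fx, ffx = phi(x), phi(phi(x))
        return x - (fx - x)**2 / (ffx - 2*fx + x)   # textbook form

    for x in (0.5, 0.9, 2.0):
        assert abs(closed_form(x) - aitken_form(x)) < 1e-12

Both forms give, for example, 1.6 at x = 2.0, matching the first accelerated value in the doctest output above.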
End of changes. 8 change blocks. 19 lines changed or deleted, 16 lines changed or added.
