"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "mpmath/matrices/linalg.py" between
mpmath-0.18.tar.gz and mpmath-0.19.tar.gz

About: mpmath is a Python library for arbitrary-precision floating-point arithmetic.

linalg.py (mpmath-0.18) vs. linalg.py (mpmath-0.19)
(Lines prefixed with "-" are from mpmath-0.18, lines prefixed with "+" are from mpmath-0.19; unprefixed lines are unchanged context.)

skipping to change at line 16
................
Basic linear algebra is implemented; you can for example solve the linear
equation system::
x + 2*y = -10
3*x + 4*y = 10
using ``lu_solve``::
+>>> from mpmath import *
+>>> mp.pretty = False
>>> A = matrix([[1, 2], [3, 4]])
>>> b = matrix([-10, 10])
>>> x = lu_solve(A, b)
>>> x
matrix(
[['30.0'],
['-20.0']])
If you don't trust the result, use ``residual`` to calculate the residual ||A*x-b||::
skipping to change at line 43 (0.18) / line 45 (0.19)
As you can see, the solution is quite accurate. The error is caused by the
inaccuracy of the internal floating point arithmetic. Though, it's even smaller
than the current machine epsilon, which basically means you can trust the
result.
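The ``residual`` doctest itself falls in a region the diff viewer skipped, so the call is not shown above. As a sketch only (using the public ``mp`` context rather than the skipped doctest; the matrix values repeat the earlier example), the check described here might look like:

    from mpmath import mp

    A = mp.matrix([[1, 2], [3, 4]])
    b = mp.matrix([-10, 10])
    x = mp.lu_solve(A, b)

    r = mp.residual(A, x, b)          # r = A*x - b, evaluated with extra precision
    print(mp.norm(r, 2))              # tiny: the error discussed above
    print(mp.norm(r, 2) < 10*mp.eps)  # True when the result is accurate to working precision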
If you need more speed, use NumPy. Or choose a faster data type using the
keyword ``force_type``::
>>> lu_solve(A, b, force_type=float)
matrix(
-[[29.999999999999996],
-[-19.999999999999996]])
+[['30.0'],
+['-20.0']])
``lu_solve`` accepts overdetermined systems. It is usually not possible to solve
such systems, so the residual is minimized instead. Internally this is done
using Cholesky decomposition to compute a least squares approximation. This means
that ``lu_solve`` will square the errors. If you can't afford this, use
``qr_solve`` instead. It is twice as slow but more accurate, and it calculates
the residual automatically.
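As a minimal sketch of the trade-off described above (an illustrative overdetermined system, not taken from the file; ``qr_solve`` returning the solution together with the residual norm follows from the docstring's remark that it "calculates the residual automatically"):

    from mpmath import mp

    # 3 equations, 2 unknowns: no exact solution, so the residual is minimized.
    A = mp.matrix([[1, 2], [3, 4], [5, 6]])
    b = mp.matrix([7, 8, 10])

    x_lu = mp.lu_solve(A, b)        # least squares via Cholesky (errors get squared)
    x_qr, res = mp.qr_solve(A, b)   # least squares via Householder QR, plus residual norm
    print(x_lu, x_qr, res)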
Matrix factorization
....................
The function ``lu`` computes an explicit LU factorization of a matrix::
>>> P, L, U = lu(matrix([[0,2,3],[4,5,6],[7,8,9]]))
->>> print P
+>>> print(P)
[0.0 0.0 1.0]
[1.0 0.0 0.0]
[0.0 1.0 0.0]
->>> print L
+>>> print(L)
[ 1.0 0.0 0.0]
[ 0.0 1.0 0.0]
[0.571428571428571 0.214285714285714 1.0]
->>> print U
+>>> print(U)
[7.0 8.0 9.0]
[0.0 2.0 3.0]
[0.0 0.0 0.214285714285714]
->>> print P.T*L*U
+>>> print(P.T*L*U)
[0.0 2.0 3.0]
[4.0 5.0 6.0]
[7.0 8.0 9.0]
Interval matrices
-----------------
Matrices may contain interval elements. This allows one to perform
basic linear algebra operations such as matrix multiplication
and equation solving with rigorous error bounds::
->>> a = matrix([['0.1','0.3','1.0'],
+>>> a = iv.matrix([['0.1','0.3','1.0'],
... ['7.1','5.5','4.8'],
... ['3.2','4.4','5.6']], force_type=mpi)
>>>
->>> b = matrix(['4','0.6','0.5'], force_type=mpi)
->>> c = lu_solve(a, b)
->>> c
-matrix(
-[[[5.2582327113062393041, 5.2582327113062749951]],
-[[-13.155049396267856583, -13.155049396267821167]],
-[[7.4206915477497212555, 7.4206915477497310922]]])
->>> print a*c
-[ [3.9999999999999866773, 4.0000000000000133227]]
-[[0.59999999999972430942, 0.60000000000027142733]]
-[[0.49999999999982236432, 0.50000000000018474111]]
+>>> b = iv.matrix(['4','0.6','0.5'], force_type=mpi)
+>>> c = iv.lu_solve(a, b)
+>>> print(c)
+[ [5.2582327113062568605927528666, 5.25823271130625686059275702219]]
+[[-13.1550493962678375411635581388, -13.1550493962678375411635540152]]
+[ [7.42069154774972557628979076189, 7.42069154774972557628979190734]]
+>>> print(a*c)
+[ [3.99999999999999999999999844904, 4.00000000000000000000000155096]]
+[[0.599999999999999999999968898009, 0.600000000000000000000031763736]]
+[[0.499999999999999999999979320485, 0.500000000000000000000020679515]]
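Since 0.19 the interval example goes through the ``iv`` context, as the changed doctest shows. A small sketch of inspecting the rigorous bounds (illustrative; it assumes the interval entries expose ``.a`` and ``.b`` lower/upper endpoints, as mpmath's interval type does):

    from mpmath import iv, mpi

    a = iv.matrix([['0.1', '0.3', '1.0'],
                   ['7.1', '5.5', '4.8'],
                   ['3.2', '4.4', '5.6']], force_type=mpi)
    b = iv.matrix(['4', '0.6', '0.5'], force_type=mpi)
    c = iv.lu_solve(a, b)

    # Each solution component is an interval guaranteed to contain the true value.
    for i in range(3):
        print(c[i].a, c[i].b)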
""" """
# TODO: # TODO:
# *implement high-level qr() # *implement high-level qr()
# *test unitvector # *test unitvector
# *iterative solving # *iterative solving
from copy import copy from copy import copy
from ..libmp.backend import xrange from ..libmp.backend import xrange
skipping to change at line 160 skipping to change at line 161
raise ZeroDivisionError('matrix is numerically singular') raise ZeroDivisionError('matrix is numerically singular')
# cache decomposition # cache decomposition
if not overwrite and isinstance(orig, ctx.matrix): if not overwrite and isinstance(orig, ctx.matrix):
orig._LU = (A, p) orig._LU = (A, p)
return A, p return A, p
def L_solve(ctx, L, b, p=None): def L_solve(ctx, L, b, p=None):
""" """
Solve the lower part of a LU factorized matrix for y. Solve the lower part of a LU factorized matrix for y.
""" """
-assert L.rows == L.cols, 'need n*n matrix'
+if L.rows != L.cols:
+    raise RuntimeError("need n*n matrix")
n = L.rows
-assert len(b) == n
+if len(b) != n:
+    raise ValueError("Value should be equal to n")
b = copy(b)
if p: # swap b according to p
for k in xrange(0, len(p)):
ctx.swap_row(b, k, p[k])
# solve
for i in xrange(1, n):
for j in xrange(i):
b[i] -= L[i,j] * b[j]
return b
def U_solve(ctx, U, y):
"""
Solve the upper part of a LU factorized matrix for x.
"""
-assert U.rows == U.cols, 'need n*n matrix'
+if U.rows != U.cols:
+    raise RuntimeError("need n*n matrix")
n = U.rows
-assert len(y) == n
+if len(y) != n:
+    raise ValueError("Value should be equal to n")
x = copy(y)
for i in xrange(n - 1, -1, -1):
for j in xrange(i + 1, n):
x[i] -= U[i,j] * x[j]
x[i] /= U[i,i]
return x
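For reference, ``L_solve`` and ``U_solve`` above are plain forward and back substitution on the triangular LU factors. A self-contained sketch of the same two loops with ordinary Python lists (illustrative only, not part of the module):

    def forward_substitution(L, b):
        # Solve L*y = b where L is lower triangular with unit diagonal
        # (the form produced by the LU decomposition, so no division is needed).
        n = len(b)
        y = list(b)
        for i in range(1, n):
            for j in range(i):
                y[i] -= L[i][j] * y[j]
        return y

    def back_substitution(U, y):
        # Solve U*x = y where U is upper triangular.
        n = len(y)
        x = list(y)
        for i in range(n - 1, -1, -1):
            for j in range(i + 1, n):
                x[i] -= U[i][j] * x[j]
            x[i] /= U[i][i]
        return x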
def lu_solve(ctx, A, b, **kwargs):
"""
Ax = b => x
skipping to change at line 235 (0.18) / line 240 (0.19)
ctx.prec = prec
return x
def improve_solution(ctx, A, x, b, maxsteps=1):
"""
Improve a solution to a linear equation system iteratively.
This re-uses the LU decomposition and is thus cheap.
Usually 3 up to 4 iterations are giving the maximal improvement.
"""
-assert A.rows == A.cols, 'need n*n matrix' # TODO: really?
+if A.rows != A.cols:
+    raise RuntimeError("need n*n matrix") # TODO: really?
for _ in xrange(maxsteps):
r = ctx.residual(A, x, b)
if ctx.norm(r, 2) < 10*ctx.eps:
break
# this uses cached LU decomposition and is thus cheap
dx = ctx.lu_solve(A, -r)
x += dx
return x
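The docstring above notes that refinement reuses the cached LU decomposition; a sketch of how this is meant to be driven from the public ``mp`` context (matrix values are illustrative):

    from mpmath import mp

    A = mp.matrix([[1, 2], [3, 4]])
    b = mp.matrix([-10, 10])

    x = mp.lu_solve(A, b)                         # caches the LU factorization on A
    x = mp.improve_solution(A, x, b, maxsteps=3)  # refinement steps reuse that cache
    print(mp.norm(mp.residual(A, x, b), 2))       # residual norm after refinement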
def lu(ctx, A):
skipping to change at line 326 (0.18) / line 332 (0.19)
def householder(ctx, A):
"""
(A|b) -> H, p, x, res
(A|b) is the coefficient matrix with left hand side of an optionally
overdetermined linear equation system.
H and p contain all information about the transformation matrices.
x is the solution, res the residual.
"""
-assert isinstance(A, ctx.matrix)
+if not isinstance(A, ctx.matrix):
+    raise TypeError("A should be a type of ctx.matrix")
m = A.rows
n = A.cols
-assert m >= n - 1
+if m < n - 1:
+    raise RuntimeError("Columns should not be less than rows")
# calculate Householder matrix
p = []
for j in xrange(0, n - 1):
s = ctx.fsum((A[i,j])**2 for i in xrange(j, m))
if not abs(s) > ctx.eps:
raise ValueError('matrix is numerically singular')
p.append(-ctx.sign(A[j,j]) * ctx.sqrt(s))
kappa = ctx.one / (s - p[j] * A[j,j])
A[j,j] -= p[j]
for k in xrange(j+1, n):
skipping to change at line 472 (0.18) / line 480 (0.19)
>>> L = cholesky(A)
Traceback (most recent call last):
...
ValueError: matrix is not positive-definite
**References**
1. [Wikipedia]_ http://en.wikipedia.org/wiki/Cholesky_decomposition
"""
-assert isinstance(A, ctx.matrix)
+if not isinstance(A, ctx.matrix):
+    raise RuntimeError("A should be a type of ctx.matrix")
if not A.rows == A.cols:
raise ValueError('need n*n matrix')
if tol is None:
tol = +ctx.eps
n = A.rows
L = ctx.matrix(n)
for j in xrange(n):
c = ctx.re(A[j,j])
if abs(c-A[j,j]) > tol:
raise ValueError('matrix is not Hermitian')
skipping to change at line 518 (0.18) / line 527 (0.19)
try:
ctx.prec += 10
# do not overwrite A nor b
A, b = ctx.matrix(A, **kwargs).copy(), ctx.matrix(b, **kwargs).copy()
if A.rows != A.cols:
raise ValueError('can only solve determined system')
# Cholesky factorization
L = ctx.cholesky(A)
# solve
n = L.rows
-assert len(b) == n
+if len(b) != n:
+    raise ValueError("Value should be equal to n")
for i in xrange(n):
b[i] -= ctx.fsum(L[i,j] * b[j] for j in xrange(i))
b[i] /= L[i,i]
x = ctx.U_solve(L.T, b)
return x
finally:
ctx.prec = prec
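``cholesky`` and ``cholesky_solve`` shown above are also available on the public ``mp`` context. A small sketch with a symmetric positive-definite matrix (the values are illustrative, not taken from the file):

    from mpmath import mp

    # Symmetric positive-definite matrix and a right-hand side.
    A = mp.matrix([[9, 3, -6], [3, 5, -4], [-6, -4, 13]])
    b = mp.matrix([1, 2, 3])

    L = mp.cholesky(A)            # lower-triangular factor with A == L * L.T
    x = mp.cholesky_solve(A, b)   # solves A*x = b via the Cholesky factorization
    print(mp.norm(mp.residual(A, x, b), 2))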
def det(ctx, A):
"""

End of changes: 17 change blocks; 27 lines changed or deleted, 37 lines changed or added.
