This fixes issue #907 from the issue tracker.
Signed-off-by: Jochen Voss <vo...@seehuhn.de>
---
sympy/matrices/matrices.py | 31 +++++++++++++++++++++++++++++--
sympy/matrices/tests/test_matrices.py | 25 +++++++++++++++++++++++++
2 files changed, 54 insertions(+), 2 deletions(-)
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index 1af4e36..e83eb33 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -949,7 +949,7 @@ class Matrix(object):
def QRdecomposition(self):
"""
- Return Q*R where Q is orthogonal and R is upper triangular.
+ Return Q,R where A=Q*R, Q is orthogonal and R is upper triangular.
Assumes full-rank square (for now).
"""
@@ -970,10 +970,37 @@ class Matrix(object):
R[i,j] = Q[:,i].dot(self[:,j])
return Q,R
- # TODO: QRsolve
+ def QRsolve(self, b):
+ """
+ Solve the linear system 'Ax = b'.
+
+ 'self' is the matrix 'A', the method argument is the vector
+ 'b'. The method returns the solution vector 'x'. If 'b' is a
+ matrix, the system is solved for each column of 'b' and the
+ return value is a matrix of the same shape as 'b'.
+
+ This method is slower (approximately by a factor of 2) but
+ more stable than the LUsolve method.
+ """
+
+ Q, R = self.QRdecomposition()
+ y = Q.T * b
+
+ # back substitution to solve R*x = y:
+ # We build up the result "backwards" in the vector 'x' and reverse it
+ # only in the end.
+ x = []
+ n = R.lines
+ for j in range(n-1, -1, -1):
+ tmp = y[j,:]
+ for k in range(j+1, n):
+ tmp -= R[j,k] * x[n-1-k]
+ x.append(tmp/R[j,j])
+ return Matrix([row.mat for row in reversed(x)])
# Utility functions
def simplify(self):
+ """Simplify the elements of a matrix in place."""
for i in xrange(len(self.mat)):
self.mat[i] = simplify(self.mat[i])
diff --git a/sympy/matrices/tests/test_matrices.py b/sympy/matrices/tests/test_matrices.py
index 6f71b52..4099fbc 100644
--- a/sympy/matrices/tests/test_matrices.py
+++ b/sympy/matrices/tests/test_matrices.py
@@ -283,6 +283,31 @@ def test_LUdecomp():
+def test_QRsolve():
+ A = Matrix([[2,3,5],
+ [3,6,2],
+ [8,3,6]])
+ x = Matrix(3,1,[3,7,5])
+ b = A*x
+ soln = A.QRsolve(b)
+ assert soln == x
+ x = Matrix([[1,2],[3,4],[5,6]])
+ b = A*x
+ soln = A.QRsolve(b)
+ assert soln == x
+
+ A = Matrix([[0,-1,2],
+ [5,10,7],
+ [8,3,4]])
+ x = Matrix(3,1,[-1,2,5])
+ b = A*x
+ soln = A.QRsolve(b)
+ assert soln == x
+ x = Matrix([[7,8],[9,10],[11,12]])
+ b = A*x
+ soln = A.QRsolve(b)
+ assert soln == x
+
def test_LUsolve():
A = Matrix([[2,3,5],
[3,6,2],
--
1.6.0
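The back-substitution loop the patch implements can be sketched standalone in plain Python (hypothetical helper name `back_substitute`; `Fraction` keeps the arithmetic exact, and a multi-column right-hand side is handled row by row, mirroring the loop structure of the patch without being the sympy code itself):

```python
from fractions import Fraction

def back_substitute(R, Y):
    # Solve R*X = Y for upper-triangular R.  Y is a list of rows, so a
    # multi-column right-hand side (a matrix) works exactly the same
    # way as a single column vector.
    n = len(R)
    rows = [None] * n
    for j in range(n - 1, -1, -1):
        tmp = list(Y[j])
        for k in range(j + 1, n):
            tmp = [t - R[j][k] * s for t, s in zip(tmp, rows[k])]
        rows[j] = [t / R[j][j] for t in tmp]
    return rows

R = [[Fraction(2), Fraction(1)],
     [Fraction(0), Fraction(3)]]
Y = [[Fraction(8)], [Fraction(9)]]   # expect x = (5/2, 3)
print(back_substitute(R, Y))
```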
The patch looks good to me. Could you please also add tests for
symbolic matrices? This only tests numeric ones, right?
Ondrej
2009/5/14 Riccardo Gori <goric...@gmail.com>:
>> + # back substitution to solve R*x = y:
>> + # We build up the result "backwards" in the vector 'x' and reverse it
>> + # only in the end.
>> + x = []
>> + n = R.lines
>> + for j in range(n-1, -1, -1):
>> + tmp = y[j,:]
>
> Why do you use y[j,:] instead of y[j]?
The right-hand side can be a matrix. This can be used, for example, to
invert A by using the identity matrix for b. In these cases, y = Q^T
b is also a matrix of the same dimension and we need the j-th row of
this matrix. Thus y[j,:]. Luckily the same notation works even if b
and y are just vectors, so we don't need to write two versions of the
code.
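For instance, with the QRsolve added by this patch, passing the identity as 'b' yields the inverse. A small sketch (assumes sympy is importable; the matrix is chosen upper triangular so the QR step introduces no square roots):

```python
from sympy import Matrix, eye

A = Matrix([[3, 4],
            [0, 5]])
# Matrix right-hand side: solving A*X = I column by column gives A**-1.
X = A.QRsolve(eye(2))
print(X)
```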
All the best,
Jochen
--
http://seehuhn.de/
2009/5/13 Ondrej Certik <ond...@certik.cz>:
> The patch looks good to me. Could you please also add tests for
> symbolic matrices? This only tests numeric ones, right?
Yup, I copied the tests from LUsolve, and they are only for numeric
matrices. I'll add tests with symbolic matrices too. This may
require removing this strange assert statement from
Matrix.QRdecomposition.
Actually I'm getting a bit confused about this: the issue tracker
entry mentioned that QRsolve would be good because it has better
stability properties. This argument only makes sense if one is
worried about the effect of rounding errors. But the current
implementation of the QR decomposition seems to use Gram-Schmidt
orthonormalisation, which is numerically unstable and shouldn't be
used in situations where rounding errors occur. This seems
inconsistent.
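The instability of classical Gram-Schmidt is easy to demonstrate in floating point. Here is a self-contained sketch (plain Python, no sympy; `cgs`, `mgs`, and `worst_dot` are hypothetical helper names) on the classic Laeuchli example of nearly dependent columns:

```python
import math

def cgs(cols):
    # Classical Gram-Schmidt: projection coefficients are all computed
    # from the UNMODIFIED input column a.
    qs = []
    for a in cols:
        v = list(a)
        coeffs = [sum(x * y for x, y in zip(q, a)) for q in qs]
        for c, q in zip(coeffs, qs):
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = math.sqrt(sum(x * x for x in v))
        qs.append([x / norm for x in v])
    return qs

def mgs(cols):
    # Modified Gram-Schmidt: each projection is subtracted from the
    # partially reduced vector v, not from the original column.
    qs = []
    for a in cols:
        v = list(a)
        for q in qs:
            c = sum(x * y for x, y in zip(q, v))
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = math.sqrt(sum(x * x for x in v))
        qs.append([x / norm for x in v])
    return qs

def worst_dot(qs):
    # Largest |q_i . q_j| over distinct pairs; 0 for perfect orthogonality.
    return max(abs(sum(x * y for x, y in zip(qs[i], qs[j])))
               for i in range(len(qs)) for j in range(i))

eps = 1e-8  # Laeuchli matrix: three nearly dependent columns
cols = [[1.0, eps, 0.0, 0.0],
        [1.0, 0.0, eps, 0.0],
        [1.0, 0.0, 0.0, eps]]

print(worst_dot(cgs(cols)))  # large: orthogonality is lost
print(worst_dot(mgs(cols)))  # small
```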
I see the following possibilities:
1) One could switch the QR decomposition to use Householder
reflections or similar, but this would restrict the algorithm to real
(or maybe complex) matrices, because the algorithm needs to know the
sign (or complex phase) of certain intermediate quantities. Result:
QRsolve would be stable but would only work for real matrices.
Symbolic matrices could use LUsolve.
2) Have two implementations of QR decomposition, one which works for
symbolic matrices, and one which is numerically stable (for real
matrices).
Neither of these seems to work well for matrices which have floats and
symbols in them. What do you think?
I'll be travelling for a few days and will be able to send a new patch
only sometime next week. What I suggest is:
1) Remove the "more stable" wording from the QRsolve docstring.
2) Add warnings to both the LUsolve and QRsolve docstrings,
stating that these functions should not be used for real matrices (and
point to mpmath instead).
3) Add one or more examples with symbolic matrices to the tests.
Does this sound ok for everybody?
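For point 3, a symbolic test could look roughly like this sketch (assumes the QRsolve method from the patch and that sympy is importable; the matrix is chosen upper triangular so no square roots appear and the assert statement in QRdecomposition is not an obstacle):

```python
from sympy import Matrix, Symbol, simplify

t = Symbol('t')
A = Matrix([[1, t],
            [0, 1]])
x = Matrix([2, 3])
b = A * x           # symbolic right-hand side
soln = A.QRsolve(b)
# Every entry of soln - x should simplify to zero.
assert all(simplify(e) == 0 for e in (soln - x))
```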
All the best,
Jochen (who will change continents tomorrow)
Hi,
Please ping me if you have something ready for review/merge.
Thanks,
Ondrej