dot multiplication with matrix of variables


Giuseppe G. A. Celano

May 30, 2020, 10:56:32 AM
to sympy

I am trying to perform a matrix (dot) multiplication between a numpy array of shape (64, 1000) and a sympy matrix of shape (1000, 100) containing only variables, but the computation never finishes. How can I do that?

David Bailey

May 30, 2020, 4:27:14 PM
to sy...@googlegroups.com
On 30/05/2020 15:02, Giuseppe G. A. Celano wrote:

I am trying to perform a matrix (dot) multiplication between a numpy array of shape (64, 1000) and a sympy matrix of shape (1000, 100) containing only variables, but the computation never finishes. How can I do that?

That calculation is going to create a matrix with 6,400 elements, each of which will be a summation of 1,000 terms (at least before any possible simplification, and assuming you mean variables without a numeric value). Bearing in mind that symbolic expressions take quite a lot of memory, that result will take up a fair bit of space, but I'd guess it would be OK on a 64-bit system (you don't say whether your Python installation is 64-bit as opposed to 32-bit).

However, calculating something that size is certainly going to be challenging, and it may need to run overnight. In addition, there is the problem that the program may be choking while trying to print the result.

My advice would be to start with a much scaled-down example, and then gradually scale it up to see what breaks.
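For instance, a scaled-down version of the same product (a sketch; the shapes here are made up purely for illustration) could look like:

```python
import numpy as np
from sympy import Matrix, MatrixSymbol

# A tiny stand-in for the (64, 1000) x (1000, 100) product:
# a 2x3 numeric array times a 3x2 matrix of symbols.
a = np.arange(6).reshape(2, 3)       # numeric matrix
w = Matrix(MatrixSymbol("w", 3, 2))  # 3x2 matrix of symbolic entries w[i, j]

product = Matrix(a) * w              # each entry is a sum of 3 symbolic terms
print(product.shape)                 # (2, 2)
```

Once that works, double the dimensions a few times and watch the memory and run time.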

If you want to do something immediately, using the matrix without printing the 64 x 100 matrix of algebraic expressions (!!) might be best.

Good luck!

David

Giuseppe G. A. Celano

May 30, 2020, 10:42:20 PM
to sympy
Thanks!

I am trying to use very small matrices. Is there any way to calculate the partial derivatives of "loss2" below?

import numpy as np
from sympy import *

n, d, n2, d2 = 5, 7, 4, 3

x = np.random.randn(n, d)
y = np.random.randn(n, d2)

w1 = MatrixSymbol("l", 7, 4)
w1 = Matrix(w1)

w2 = MatrixSymbol("p", 4, 3)
w2 = Matrix(w2)

h2 = x * w1
predicted = h2 * w2

loss2 = Matrix(np.square(predicted - y))



On Saturday, May 30, 2020 at 10:27:14 PM UTC+2, David Bailey wrote:
On 30/05/2020 15:02, Giuseppe G. A. Celano wrote:

I am trying to perform a matrix (dot) multiplication between a numpy array of shape (64, 1000) and a sympy matrix of shape (1000, 100) containing only variables, but the computation never finishes. How can I do that?

Oscar Benjamin

May 31, 2020, 7:12:11 AM
to sympy
On Sun, 31 May 2020 at 03:42, Giuseppe G. A. Celano
<giuseppe...@gmail.com> wrote:
>
> I am trying to use very small matrices. Is there any way to calculate the partial derivatives of "loss2" below?
>
> import numpy as np
> from sympy import *
>
> n, d, n2, d2 = 5, 7, 4, 3
>
> x = np.random.randn(n, d)
> y = np.random.randn(n, d2)
>
> w1 = MatrixSymbol("l", 7, 4)
> w1 = Matrix(w1)
>
> w2 = MatrixSymbol("p", 4, 3)
> w2 = Matrix(w2)
>
> h2 = x * w1
> predicted = h2 * w2
>
> loss2 = Matrix(np.square(predicted - y))

What do you want to differentiate with respect to?

You can use loss2.diff(w1[0, 0]) to differentiate with respect to the
upper left entry of w1.
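For instance (a sketch along the lines of your code, with everything converted to sympy matrices first so the arithmetic stays inside sympy):

```python
import numpy as np
from sympy import Matrix, MatrixSymbol

n, d, n2, d2 = 5, 7, 4, 3
x = Matrix(np.random.randn(n, d))
y = Matrix(np.random.randn(n, d2))

w1 = Matrix(MatrixSymbol("l", d, n2))
w2 = Matrix(MatrixSymbol("p", n2, d2))

residual = x * w1 * w2 - y
loss2 = residual.applyfunc(lambda e: e**2)  # elementwise square

# derivative of every entry of loss2 w.r.t. the upper left entry of w1
dloss = loss2.diff(w1[0, 0])
print(dloss.shape)  # (5, 3)
```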

--
Oscar

S.Y. Lee

May 31, 2020, 9:18:56 AM
to sympy
It's better to work on matrix expressions. I also don't think that x and y should be numeric matrices if they are random matrices.

Now, the problem is that the matrix derivative is computed incorrectly when it is differentiated with respect to its own elements. But when I tried with https://github.com/sympy/sympy/pull/17232 and made everything symbolic:

import numpy as np
from sympy import *

n, d, n2, d2 = 5, 7, 4, 3

x = MatrixSymbol('x', n, d)
y = MatrixSymbol('y', n, d2)

w1 = MatrixSymbol("l", 7, 4)
w2 = MatrixSymbol("p", 4, 3)

h2 = x * w1
predicted = h2 * w2
HadamardPower(predicted - y, 2).diff(x[0, 0])
I see that it gives a result consistent with the computation on explicit matrices, although I can't easily read the formula.
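One way to sanity-check that consistency (a sketch, with small made-up shapes so the output stays readable) is to expand the residual to an explicit matrix of scalar expressions and differentiate that:

```python
from sympy import MatrixSymbol

n, d, d2 = 2, 3, 2
x = MatrixSymbol('x', n, d)
y = MatrixSymbol('y', n, d2)
w = MatrixSymbol('w', d, d2)

# elementwise-squared residual as an explicit matrix of scalar expressions
loss = (x * w - y).as_explicit().applyfunc(lambda e: e**2)
grad = loss.diff(x[0, 0])

# only the first row of the loss depends on x[0, 0],
# so the second row of grad is identically zero
print(grad)
```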

Giuseppe G. A. Celano

May 31, 2020, 12:56:06 PM
to sympy
Hi Lee,

Yes, it is a mistake. I meant:

x = np.random.randn(n, d)
y = np.random.randn(n, d2)



Giuseppe G. A. Celano

May 31, 2020, 1:44:42 PM
to sympy
PS: I checked my previous post, and the code I wrote looks correct:

> import numpy as np
> from sympy import *
>
> n, d, n2, d2 = 5, 7, 4, 3
>
> x = np.random.randn(n, d)
> y = np.random.randn(n, d2)
>
> w1 = MatrixSymbol("l", 7, 4)
> w1 = Matrix(w1)
>
> w2 = MatrixSymbol("p", 4, 3)
> w2 = Matrix(w2)
>
> h2 = x * w1
> predicted = h2 * w2
>
> loss2 = Matrix(np.square(predicted - y))

Oscar Benjamin

May 31, 2020, 2:59:50 PM
to sympy
On Sun, 31 May 2020 at 18:44, Giuseppe G. A. Celano
<giuseppe...@gmail.com> wrote:
>
> PS: I checked my previous post and the code I wrote looks correct:

Your code is correct, but it is probably not a good way of solving your
actual problem.

What would make more sense as a use of sympy is to use sympy to derive
a matrix formula for the solution and then use numpy to calculate the
numeric solution with that formula using your input data.

If your actual problem is a purely numeric linear-least-squares
problem then numpy/scipy can already solve this pretty well and will
be more effective than sympy for large inputs. If your problem is more
complex and has a nontrivial formula for the solution then sympy might
be a good tool to find that formula.
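As a sketch of that workflow (assuming, purely for the sake of the example, a plain least-squares problem, so the closed-form solution is the normal equations): derive the formula symbolically, then hand it to numpy via lambdify.

```python
import numpy as np
from sympy import MatrixSymbol, Inverse, lambdify

n, d = 5, 3
X = MatrixSymbol('X', n, d)
y = MatrixSymbol('y', n, 1)

# symbolic matrix formula: w = (X^T X)^{-1} X^T y
w = Inverse(X.T * X) * X.T * y

# turn the formula into a numpy function and evaluate it on data
f = lambdify((X, y), w, modules='numpy')

rng = np.random.default_rng(0)
Xn = rng.standard_normal((n, d))
yn = rng.standard_normal((n, 1))

w_num = f(Xn, yn)
```

Here w_num should agree with numpy's own solver, np.linalg.lstsq(Xn, yn, rcond=None)[0].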

--
Oscar

Giuseppe G. A. Celano

Jun 1, 2020, 11:46:30 PM
to sympy
Hi Oscar,

Thanks for the answer. I was trying to find the values of w1 and w2 from loss2 (starting with the calculation of all the partial derivatives). Are you suggesting not working on the matrix in loss2?

I know how the problem can be tackled through numerical differentiation (gradient descent), but I was trying to use symbolic computation. Do you know of a similar task already solved via symbolic computation (and available online)?