I am working with tensors with more than 2 dimensions. I would like to find a Julia equivalent to the numpy function tensordot. The docstring for the function explains its use well, so I will just copy/paste it here:
def tensordot(a, b, axes=2):
"""
Compute tensor dot product along specified axes for arrays >= 1-D.
Given two tensors (arrays of dimension greater than or equal to one),
``a`` and ``b``, and an array_like object containing two array_like
objects, ``(a_axes, b_axes)``, sum the products of ``a``'s and ``b``'s
elements (components) over the axes specified by ``a_axes`` and
``b_axes``. The third argument can be a single non-negative
integer_like scalar, ``N``; if it is such, then the last ``N``
dimensions of ``a`` and the first ``N`` dimensions of ``b`` are summed
over.
Parameters
----------
a, b : array_like, len(shape) >= 1
Tensors to "dot".
axes : variable type
* integer_like scalar
Number of axes to sum over (applies to both arrays); or
* array_like, shape = (2,), both elements array_like
Axes to be summed over, first sequence applying to ``a``, second
to ``b``.
See Also
--------
dot, einsum
Notes
-----
When there is more than one axis to sum over - and they are not the last
(first) axes of ``a`` (``b``) - the argument ``axes`` should consist of
two sequences of the same length, with the first axis to sum over given
first in both sequences, the second axis second, and so forth.
Examples
--------
A "traditional" example:
>>> a = np.arange(60.).reshape(3,4,5)
>>> b = np.arange(24.).reshape(4,3,2)
>>> c = np.tensordot(a,b, axes=([1,0],[0,1]))
>>> c.shape
(5, 2)
>>> c
array([[ 4400.,  4730.],
       [ 4532.,  4874.],
       [ 4664.,  5018.],
       [ 4796.,  5162.],
       [ 4928.,  5306.]])
>>> # A slower but equivalent way of computing the same...
>>> d = np.zeros((5,2))
>>> for i in range(5):
...     for j in range(2):
...         for k in range(3):
...             for n in range(4):
...                 d[i,j] += a[k,n,i] * b[n,k,j]
>>> c == d
array([[ True,  True],
       [ True,  True],
       [ True,  True],
       [ True,  True],
       [ True,  True]], dtype=bool)
"""
I think the functionality laid out in the explicit for loops would be quite easy to replicate using the @nloops and @nref macros in Cartesian, but I am not sure whether that would be the most performant way to implement such a function (see the rough sketch below). Also, I wanted to check here to see if this has already been done before I spend time working on it.
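For concreteness, here is the kind of thing I have in mind for the simplest case (axes=1, i.e. contracting the last axis of a 3-D a with the first axis of a 3-D b). This is only a rough, unbenchmarked sketch; the name tensordot1 is mine, and a general version would presumably need generated code to handle arbitrary ranks and axis pairs:

using Base.Cartesian   # @nloops / @nref live here in recent Julia versions

# Rough sketch: the axes=1 case of tensordot for two 3-D arrays, i.e.
# c[i1,i2,i3,i4] = sum_k a[i1,i2,k] * b[k,i3,i4].
# Assumes floating-point element types; a general implementation would
# dispatch on the ranks instead of hard-coding 3 and 4.
function tensordot1(a, b)
    @assert size(a, 3) == size(b, 1)
    c = zeros(size(a, 1), size(a, 2), size(b, 2), size(b, 3))
    @nloops 4 i c begin            # i_1, ..., i_4 run over the axes of c
        s = 0.0
        for k = 1:size(a, 3)       # the contracted index
            s += a[i_1, i_2, k] * b[k, i_3, i_4]
        end
        (@nref 4 c i) = s          # c[i_1, i_2, i_3, i_4] = s
    end
    return c
end

Generalizing that to arbitrary axis pairs (like numpy's axes=([1,0],[0,1])) is the part I am less sure about.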
Thanks,
Hey Tim,
Yep, sure does.
For those who find this later (i.e. future me), here are a few ways to replicate the example from the numpy docs using TensorOperations.jl:
using TensorOperations, IndexNotation
# NOTE: reshape with the dimensions reversed, then permute, so that Julia's
# column-major reshape reproduces numpy's row-major arange(...).reshape(...)
a = permutedims(reshape(0:59.0, (5, 4, 3)), (3, 2, 1))
b = permutedims(reshape(0:23.0, (2, 3, 4)), (3, 2, 1))
# first method -- using IndexNotation
c1 = a[l"a,b,c"]*b[l"b,a,d"]
# second method -- calling tensorcontract
c2 = tensorcontract(a, ["a", "b", "c"], b, ["b", "a", "d"])
# third method -- using in-place operations
c3 = zeros(5, 2)
TensorOperations.tensorcontract!(1, a, ["a", "b", "c"], 'N', b, ["b", "a", "d"], 'N', 0.0, c3, ["c", "d"])
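As a quick sanity check (assuming the calls above all went through), each of c1, c2 and c3 should reproduce the array from the numpy example:

expected = [4400.0 4730.0;
            4532.0 4874.0;
            4664.0 5018.0;
            4796.0 5162.0;
            4928.0 5306.0]
c1 == expected && c2 == expected && c3 == expected  # should evaluate to true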
Best to file an issue.