Efficient iteration for tensor dot product - alternatives to splatting

Alex Williams

Jun 26, 2016, 7:24:17 AM
to julia-users
I'm trying to code an efficient implementation of the n-mode tensor product. Basically, this amounts to taking a dot product between a vector (b) and the mode-n fibers of a higher-order Array. For example, the mode-3 product of a 4th order array is:

```
C[i,j,k] = dot(A[i,j,:,k], b)
```

iterated over all (i,j,k). Computing this for arbitrary `A` and `n` can be achieved by the function below, but I believe it is slower than necessary due to the use of the splatting operator (`...`). Any ideas for making this faster?

```julia
"""
    B = tensordot(A, b, n)
Multiply the N-dimensional array A by the vector b along mode n. For inputs
with size(A) = (I_1,...,I_n,...,I_N) and length(b) = I_n, the output is an
(N-1)-dimensional array with size(B) = (I_1,...,I_{n-1},I_{n+1},...,I_N),
formed by taking the vector dot product of b with each mode-n fiber
of A.
"""
using Iterators  # provides product(...); from the Iterators.jl package

function tensordot{T,N}(A::Array{T,N}, b::Vector{T}, n::Integer)

    I = size(A)
    @assert I[n] == length(b)
    D = I[setdiff(1:N,n)]
    K = [ 1:d for d in D ]

    # preallocate result
    C = Array(T, D...)

    # do multiplication
    for i in product(K...)
        C[i...] = dot(b, vec(A[i[1:(n-1)]...,:,i[n:end]...]))
    end

    return C

end
```
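One splat-free way to do this (not from the thread; `tensordot_mul` is a name I made up for illustration) is to permute mode n to the last dimension, collapse the remaining modes into rows, and perform a single matrix-vector multiply. The only splat happens once, outside any hot loop:

```julia
# Sketch: mode-n product via permutedims + reshape + one BLAS mat-vec.
function tensordot_mul(A::AbstractArray, b::AbstractVector, n::Integer)
    N = ndims(A)
    sz = size(A)
    @assert sz[n] == length(b)
    rest = setdiff(1:N, n)          # modes that survive the contraction
    Ap = permutedims(A, [rest; n])  # move mode n to the last dimension
    D = sz[rest]                    # size of the result
    M = reshape(Ap, prod(D), sz[n]) # each row of M is one mode-n fiber
    reshape(M * b, D...)            # single splat, outside any loop
end
```

For example, with `A = reshape(collect(1.0:24.0), 2, 3, 4)` and `b = [1.0, 2.0, 3.0]`, `tensordot_mul(A, b, 2)` returns a 2x4 array whose (i,k) entry is the dot product of `A[i,:,k]` with `b`.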

Thanks all,

-- Alex


Tim Holy

Jun 26, 2016, 1:41:29 PM
to julia...@googlegroups.com
I'd try TensorOperations.jl or AxisAlgorithms.jl

Best,
--Tim
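
With TensorOperations.jl (Tim's first suggestion), the mode-3 product from the original post can be written directly in index notation using the package's `@tensor` macro. A sketch, assuming the package is installed:

```julia
using TensorOperations

A = randn(2, 3, 4, 5)
b = randn(4)

# mode-3 product: C[i,j,k] = sum over l of A[i,j,l,k] * b[l]
@tensor C[i,j,k] := A[i,j,l,k] * b[l]
```

The macro generates the contraction loop itself, so there is no per-element splatting; the result `C` here has size (2, 3, 5).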