Troubles with variable length shapes


Александр Кольцов

Apr 1, 2021, 1:39:03 PM
to Accelerate

Hello,

I'm trying to learn accelerate-hs, but I'm having trouble understanding how to work with variable-length shapes.

For example, two tasks that I expected to be simple:

1. convert a shape of generally unknown rank (n1, n2, ..., nm) to the matrix shape (n1*n2*...*n(m-1), nm)

2. convert a shape of generally unknown rank (n1, n2, ..., nm) to the matrix shape (n1, n2*...*nm)

I'm trying to do something like

cnv :: (Slice (Z :. sh)) => Exp (Z :. sh :. Int) -> Exp (Z :. Int :. Int)
cnv a = lift $ Z :. m :. n
    where
        Z :. ns :. n = unlift a  :: Z :. Exp sh :. Exp Int
        m = shapeSize (lift (Z :. ns))

but I have

Could not deduce: sh1 ~ sh

Expected type: Exp (Plain ((Z :. Exp sh1) :. Exp Int))
        Actual type: Exp ((Z :. sh) :. Int)

In the first argument of ‘unlift’, namely ‘a’


What am I doing wrong? And what is the correct way to solve such a problem?

Tom Smeding

Apr 1, 2021, 2:11:08 PM
to Accelerate
Hi Aleksandr,

The function you wrote actually almost compiles! The problem you're running into is that the type ascription on the 'unlift', namely ':: Z :. Exp sh :. Exp Int', uses a different variable 'sh' than the 'sh' in the type signature of 'cnv'. By default, type variables from a signature are not in scope in the function body; this is how Haskell works and is not Accelerate-specific.

You can change this behaviour using the ScopedTypeVariables language extension; see the GHC users guide for more information about it.
To use lexically scoped type variables, you then have to declare those variables using 'forall'. Doing that and adding an 'Elt' constraint on 'sh' (because that's necessary to be able to put it in an Exp) gives the following function, which typechecks:

cnv :: forall sh. (Elt sh, Shape (Z :. sh)) => Exp (Z :. sh :. Int) -> Exp (Z :. Int :. Int)

cnv a = lift $ Z :. m :. n
    where
        Z :. ns :. n = unlift a  :: Z :. Exp sh :. Exp Int
        m = shapeSize (lift (Z :. ns))

A different way to write the same function that avoids 'lift' and 'unlift', and the need for a type ascription, is to use the pattern synonyms 'Z_' and '::.' that Accelerate defines:

cnv :: (Elt sh, Shape (Z :. sh)) => Exp (Z :. sh :. Int) -> Exp (Z :. Int :. Int)
cnv (Z_ ::. ns ::. n) = Z_ ::. shapeSize (Z_ ::. ns) ::. n

These two ways of writing do the same thing in the end.

However, this function 'cnv' doesn't actually allow you to work with variable-length shapes. Look: 'Z :. sh' can only be a shape if 'sh' is 'Int', since shapes in Accelerate are always of the form ((Z :. a) :. ...) :. z. For example, Z :. (2 :. 3) is not a valid shape, but (Z :. 2) :. 3 would be. Therefore, 'shapeSize (Z_ ::. ns)' will always be equal to 'ns' (because 'shapeSize (Z :. 123)' is 123, of course); I think 'cnv' always returns the same shape as it got in.

Was your idea to have a function that maps 'Z :. 2 :. 3 :. 4' to 'Z :. (2*3) :. 4', and 'Z :. 2 :. 3 :. 4 :. 5 :. 6' to 'Z :. (2*3*4*5) :. 6', perhaps? Such a generic function can be written using a typeclass, but it's a bit more difficult.
Or did you have something else in mind?

Cheers,
Tom

Trevor McDonell

Apr 1, 2021, 2:13:58 PM
to Accelerate
Hello, and welcome to the mailing list!

The first is relatively easy, and you're on the right track. The type error you are getting arises because GHC's type inference doesn't know that you want the `sh` in the type signature of `cnv` to be the same `sh` as in the where clause. Scoped type variables would fix that, but you can also avoid the problem entirely by using pattern synonyms instead. Here's my solution:

cnv :: Shape sh => Exp (sh :. Int) -> Exp DIM2
cnv (rest ::. nm) = Z_ ::. shapeSize rest ::. nm
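
Combined with 'reshape', this also lets you flatten the outer dimensions of an actual array, not just a shape. An untested sketch; 'asMatrix' is just a name I made up, not part of the Accelerate API:

```haskell
import Data.Array.Accelerate

cnv :: Shape sh => Exp (sh :. Int) -> Exp DIM2
cnv (rest ::. nm) = Z_ ::. shapeSize rest ::. nm

-- Flatten all outer dimensions into one, keeping the innermost
-- dimension intact.  'asMatrix' is a hypothetical helper name.
asMatrix :: (Shape sh, Elt e) => Acc (Array (sh :. Int) e) -> Acc (Matrix e)
asMatrix arr = reshape (cnv (shape arr)) arr
```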

The second is trickier. The operations in accelerate are rank-polymorphic over the innermost dimension of the array (the right-most index, where elements of successive indexes are adjacent in memory), which makes the previous operation easy, but makes getting the outermost index difficult. A while ago I experimented with a different setup so that we could access the index from both the right (:>) _and_ left (:<), but never pushed that into master. There's a small mockup here if you want to comment on the idea: https://github.com/AccelerateHS/accelerate/discussions/464

It might be possible to backport (:<) to the current setup, without all the other changes, which would give you what you need.

Cheers,
-Trev


--
You received this message because you are subscribed to the Google Groups "Accelerate" group.
To unsubscribe from this group and stop receiving emails from it, send an email to accelerate-hask...@googlegroups.com.
To view this discussion on the web, visit https://groups.google.com/d/msgid/accelerate-haskell/d9af4819-0394-4468-bfc1-05ed57aae2e7n%40googlegroups.com.

Александр Кольцов

Apr 1, 2021, 2:43:12 PM
to Accelerate
First of all, thanks a lot for your reply.
Yes, you understood my task correctly: I want to convert 'Z :. 2 :. 3 :. 4 :. 5 :. 6' into 'Z :. (2*3*4*5) :. 6'. The first case is now conceptually clear, thanks for the clarification, but I have no idea how to approach the second case. And would such a transformation be efficient at all?

On Thursday, April 1, 2021 at 21:11:08 UTC+3, Tom Smeding wrote:

Александр Кольцов

Apr 1, 2021, 2:55:34 PM
to Accelerate
Initially, the problem I was interested in was implementing the operation a_{ijkq} * b_{kqws} = c_{ijws} on two tensors. Usually I do this by flattening the index groups (ij), (kq), and (ws) and applying matrix multiplication. Perhaps there is a more natural way to do this in Accelerate?

On Thursday, April 1, 2021 at 21:43:12 UTC+3, Александр Кольцов wrote:

Tom Smeding

Apr 1, 2021, 3:29:21 PM
to accelerat...@googlegroups.com
Assuming you're okay with matrix multiplication with naive (cubic) complexity, I would implement your operation as follows: (WARNING: untested code)

-- Multiply a_{ijkq} and b_{kqws} into c_{ijws} such that:
--   c_{ijws} = sum_k sum_q a_{ijkq} * b_{kqws}
-- Precondition: last two dimensions of 'a' (k,q) must be equal in size to the
-- first two dimensions of 'b'.
tensmult :: (Elt a, Num a) => Acc (Array DIM4 a) -> Acc (Array DIM4 a) -> Acc (Array DIM4 a)
tensmult a b =
    let Z_ ::. ni ::. nj ::. nk ::. nq = shape a
        -- note how we use the assumption that the k,q dimensions are
        -- consistently sized in a and b!
        Z_ ::. _ ::. _ ::. nw ::. ns = shape b
        -- a'_{ijkqws}
        a' = replicate (Z_ ::. cAll ::. cAll ::. cAll ::. cAll ::. nw ::. ns) a
        -- b'_{ijkqws}
        b' = replicate (Z_ ::. ni ::. nj ::. cAll ::. cAll ::. cAll ::. cAll) b
        -- elementwise multiply a' and b'
        -- c1_{ijkqws}
        c1 = zipWith (*) a' b'
        -- rotate the k,q dimensions inwards; this can also be done in a nicer
        -- way using transposeOn and some lenses.
        -- c2_{ijwskq}
        c2 = backpermute (Z_ ::. ni ::. nj ::. nw ::. ns ::. nk ::. nq)
                         (\(Z_ ::. i ::. j ::. w ::. s ::. k ::. q) ->
                             Z_ ::. i ::. j ::. k ::. q ::. w ::. s)
                         c1
        -- reshape the k,q dimensions into one dimension, so that we can fold
        -- over them in one go
        c3 = reshape (Z_ ::. ni ::. nj ::. nw ::. ns ::. nk * nq) c2
        -- sum the k,q dimensions away
        c4 = sum c3
    in c4
  where
    cAll = constant All

I hope the comments make it clear what's going on. I'm not very familiar with accelerate-lens so I created c2 using plain 'backpermute', but 'transposeOn' should allow you to make that a bit more compact if you wish.
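
Alternatively, the flattening approach you described can be written out directly for this fixed rank-4 case: reshape both tensors to matrices, multiply, and reshape back. This is also untested; '(<>)' is the matrix product from the accelerate-blas package, and 'contract4' is just a name I made up:

```haskell
import Data.Array.Accelerate
import Data.Array.Accelerate.Numeric.LinearAlgebra  -- accelerate-blas, for (<>)
import qualified Prelude as P

-- c_{ijws} = sum_k sum_q a_{ijkq} * b_{kqws}, via one matrix multiply:
-- view a as an (i*j) x (k*q) matrix and b as a (k*q) x (w*s) matrix.
-- This works because the k,q axes are innermost in a's memory layout
-- and outermost in b's, so reshape is just a reinterpretation.
contract4 :: Numeric e => Acc (Array DIM4 e) -> Acc (Array DIM4 e) -> Acc (Array DIM4 e)
contract4 a b =
  let Z_ ::. ni ::. nj ::. nk ::. nq = shape a
      Z_ ::. _  ::. _  ::. nw ::. ns = shape b
      a' = reshape (I2 (ni * nj) (nk * nq)) a
      b' = reshape (I2 (nk * nq) (nw * ns)) b
   in reshape (Z_ ::. ni ::. nj ::. nw ::. ns) (a' <> b')
```

This hands the inner loop to a tuned matrix-multiply kernel instead of the replicate/fold pipeline above, which is usually faster in practice.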

Hope that helps,
Tom

Александр Кольцов

Apr 1, 2021, 4:52:27 PM
to Accelerate
Your solution is clear, thank you for it.
But the point is that the rank of the tensors being contracted may not be known in advance, nor the number of indices over which the summation is carried out; the rank-4 tensors were just an example. That is why I was interested in the ability to work with shapes of variable length.
As I understand it, there is no easy way to do this in the current version?

On Thursday, April 1, 2021 at 22:29:21 UTC+3, Tom Smeding wrote:

Александр Кольцов

Apr 1, 2021, 5:12:32 PM
to Accelerate
Sounds great! Are there any plans to include such changes in future releases?
Understanding the code given in the discussion is difficult for me at the moment, but I will do my best. As far as I can see, Accelerate's basic types and classes are redefined there. It would help me a lot if you could show how to use this module (let's call it, for example, FlexibleShape) in a simple task, such as computing the size of the first dimension of an array with a variable-length shape:
Acc (Array (Z :. n :. ...)) -> Exp n

On Thursday, April 1, 2021 at 21:13:58 UTC+3, Trevor L. McDonell wrote:

Trevor McDonell

Apr 2, 2021, 9:23:26 AM
to Accelerate
Hi Aleksandr,

I think adding the more flexible indexing to Accelerate would be good, and it could make for some interesting changes to the API, but I don't really have a timeline for that. In the meantime, I managed to hack up something for you which I think does what you described above. Hope it helps!

Cheers,
-Trevor

{-# LANGUAGE GADTs               #-}
{-# LANGUAGE PatternSynonyms     #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications    #-}
{-# LANGUAGE TypeOperators       #-}
{-# LANGUAGE ViewPatterns        #-}

import Data.Array.Accelerate
import Data.Array.Accelerate.Numeric.LinearAlgebra                  -- accelerate-blas

import Data.Array.Accelerate.AST.Idx
import Data.Array.Accelerate.Representation.Shape
import Data.Array.Accelerate.Sugar.Shape
import Data.Array.Accelerate.Smart

import qualified Prelude                                            as P

test :: (Shape sh, Numeric e) => Acc (Array (sh :. Int) e) -> Acc (Array (sh :. Int) e) -> Acc (Matrix e)
test as bs =
  let rest1 :> nm = shape as
      n1 :< rest2 = shape bs
      as'         = reshape (I2 (shapeSize rest1) nm) as
      bs'         = reshape (I2 n1 (shapeSize rest2)) bs
   in
   as' <> bs'


infixl 3 :>
pattern (:>) :: Shape sh => Exp sh -> Exp Int -> Exp (sh :. Int)
pattern sh :> sz = sh ::. sz

infixr 3 :<
pattern (:<) :: Shape sh => Exp Int -> Exp sh -> Exp (sh :. Int)
pattern (:<) sz sh <- (uncons -> (sz, sh))
  where (:<) = cons

cons :: forall sh. Shape sh => Exp Int -> Exp sh -> Exp (sh :. Int)
cons (Exp _i) (Exp _sh) = Exp $ cons' (shapeR @sh) _i _sh
  where
    cons' :: ShapeR t -> SmartExp Int -> SmartExp t -> SmartExp (t, Int)
    cons' ShapeRz          i _  = SmartExp (Pair (SmartExp Nil) i)
    cons' (ShapeRsnoc shR) i sh =
      let h   = SmartExp (Prj PairIdxRight sh)
          t   = SmartExp (Prj PairIdxLeft sh)
          sh' = cons' shR i t
       in
       SmartExp (Pair sh' h)

uncons :: forall sh. Shape sh => Exp (sh :. Int) -> (Exp Int, Exp sh)
uncons (Exp _sh) = let (i, t) = uncons' (shapeR @(sh :. Int)) _sh in (Exp i, Exp t)
  where
    uncons' :: ShapeR (t,Int) -> SmartExp (t,Int) -> (SmartExp Int, SmartExp t)
    uncons' (ShapeRsnoc ShapeRz)          sh = (SmartExp (Prj PairIdxRight sh), SmartExp Nil)
    uncons' (ShapeRsnoc shR@ShapeRsnoc{}) sh =
      let h   = SmartExp (Prj PairIdxRight sh)
          t   = SmartExp (Prj PairIdxLeft sh)
          (i, t') = uncons' shR t
       in
       (i, SmartExp (Pair t' h))



