OutOfMemory Error


Jianbo Ye

May 10, 2014, 7:22:03 PM
to scala-stm-e...@googlegroups.com
Hi, I've run into a problem (bug?) while using STM.

Given a parallel collection, say l = (0 until n).par,

when I call

l.map(i => func(i))

and, inside func(), I update a large vector in an atomic {} block that looks like

atomic {
  A() = A() + M_i  // slow: A and M_i are high-dimensional vectors (~5M elements), and the addition allocates a fresh vector each time
}
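
A stripped-down standalone version of what I am doing looks roughly like this (func, the sizes, and the loop bound are placeholders for my real code; it assumes scala-stm is on the classpath):

  import scala.concurrent.stm._

  object Repro {
    val dim = 5000000                                // vectors of ~5M elements
    val A: Ref[Array[Double]] = Ref(new Array[Double](dim))

    def func(i: Int): Unit = {
      val M_i = Array.fill(dim)(i.toDouble)          // stand-in for the real M_i
      atomic { implicit txn =>
        val a = A()
        val sum = new Array[Double](dim)             // fresh 5M-element array on every attempt
        var j = 0
        while (j < dim) { sum(j) = a(j) + M_i(j); j += 1 }
        A() = sum
      }
    }

    def main(args: Array[String]): Unit = {
      val l = (0 until 16).par
      l.map(i => func(i))
    }
  }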

It throws an OutOfMemoryError as follows (I am sure I have enough heap memory for these operations, unless the STM is making copies for each block):


        java.lang.OutOfMemoryError ...
        ...
        at scala.concurrent.stm.ccstm.InTxnImpl.runBlock(InTxnImpl.scala:560)
        at scala.concurrent.stm.ccstm.InTxnImpl.topLevelAttempt(InTxnImpl.scala:516)
        at scala.concurrent.stm.ccstm.InTxnImpl.topLevelAtomicImpl(InTxnImpl.scala:387)
        at scala.concurrent.stm.ccstm.InTxnImpl.atomic(InTxnImpl.scala:248)
        at scala.concurrent.stm.ccstm.CCSTMExecutor.apply(CCSTMExecutor.scala:24)
        ...


But if I do it this way, the memory problem goes away:

l.map(i => {
  func(i)
  0.0
}).reduce(_ + _)

Could someone explain this to me?

Nathan Bronson

May 12, 2014, 2:36:33 PM
to scala-stm-e...@googlegroups.com
Jianbo,

I can't reproduce your problem without more code. Is it something that I could run locally, or that you can reduce to a standalone program?

Separately, reduce is probably a better solution for this computation even if the STM code didn't OOM. The STM resolves reads and writes without understanding the mathematical properties of the transactions (generality), but since every transaction writes A in this case, it can't actually give you any parallelism. The reduce-based solution, on the other hand, knows from its API contract that the reduction operator can be applied safely in parallel (purity) and in any order (associativity).
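
For example (just a sketch with placeholder names, since I haven't seen your real code: computeM builds M_i, A is a Ref[Array[Double]], and n is the loop bound), the whole update could be a pure parallel map followed by a single reduce, with one short transaction at the end:

  import scala.concurrent.stm._

  def addVec(x: Array[Double], y: Array[Double]): Array[Double] = {
    val out = new Array[Double](x.length)
    var j = 0
    while (j < x.length) { out(j) = x(j) + y(j); j += 1 }
    out
  }

  val total = (0 until n).par
    .map(i => computeM(i))     // pure, so it runs fully in parallel
    .reduce(addVec)            // associative, so any combination order is fine

  atomic { implicit txn =>
    A() = addVec(A(), total)   // one short transaction instead of n conflicting ones
  }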

 - Nathan





--
Nathan Grasso Bronson
ngbr...@gmail.com

Jianbo Ye

May 12, 2014, 3:33:05 PM
to scala-stm-e...@googlegroups.com
Thanks for your explanation, Nathan

Yes, this example doesn't provide any parallelism, but in my case func() has much more work to do in parallel. You reminded me that reduce() signals that the operation can be applied in any order, so each atomic {} is not blocked waiting for the next consecutive one. I think that's basically the reason for the out-of-memory error: in the first case, atomic {} has to keep a backup of every M_i.

I am not sure the STM could reasonably hold back the parallelized part of the code, so it may create many M_i before they are reduced into A and the memory is freed inside atomic {}.

Jianbo 

Nathan Bronson

May 13, 2014, 2:16:09 PM
to scala-stm-e...@googlegroups.com
Jianbo,

ScalaSTM internally supports partial rollback when there are nested transactions, although normally it flattens all of the transactions to keep overheads low. I should probably add a configuration setting to the TxnExecutor that allows you to disable this subsumption optimization (since it is usually, but not always, faster), but if you want to try it now you can get the right behavior by using atomic.oneOf, something like this:

  atomic.oneOf( { implicit txn: InTxn =>
    val M_i = func()
    // the STM will often (not always) be able to back up to this point and retry, without calling func() again
    atomic { implicit txn =>
      A() = A() + M_i
    }
  })
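
Dropped into your original loop (with func, addVec, A, and n still standing in for your code, and scala.concurrent.stm._ imported) that would be something like:

  def update(i: Int, A: Ref[Array[Double]]): Unit =
    atomic.oneOf( { implicit txn: InTxn =>
      val m = func(i)              // the expensive part, ideally not re-run on a conflict
      atomic { implicit txn =>
        A() = addVec(A(), m)       // only this inner write conflicts with the other tasks
      }
    })

  (0 until n).par.foreach(i => update(i, A))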

- Nathan

