Batch processing with slick 3


Rauan Maemirov

May 9, 2015, 6:23:51 AM
to scala...@googlegroups.com
Hey all.

I'm writing an offline data-processing app. I need to pull documents from an H2 file database, process them in batches, and then write/update the processed data.
I've found Slick's streaming to be very useful for this, but I need to process the data in micro-batches rather than one row at a time, because I need to bulk insert/update my documents (update the docs in H2 and index the processed docs into Elasticsearch in bulk).

Slick's stream only supports foreach, and I couldn't find any examples of Slick iterators that I could process with Scala's grouped utility.

Do I need to create my own buffer, or is there a way to do this with Slick?
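[Editor's note] One common way to regroup a Slick stream into micro-batches is to wrap the DatabasePublisher in Akka Streams, since it implements the Reactive Streams Publisher interface. The sketch below assumes Slick 3 with the H2 profile and Akka Streams (2.6+) on the classpath; the DOCS table, batch size, and batch body are hypothetical, not from the original post.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import slick.jdbc.H2Profile.api._
import scala.concurrent.Future

object MicroBatchStream extends App {
  implicit val system: ActorSystem = ActorSystem("batch") // provides the materializer (Akka 2.6+)
  import system.dispatcher

  val db = Database.forConfig("db") // plain Slick config, no play-slick needed

  // DatabasePublisher implements org.reactivestreams.Publisher,
  // so Akka Streams can wrap it and regroup rows into micro-batches.
  val rows = db.stream(sql"select ID, BODY from DOCS".as[(Long, String)])

  val done = Source
    .fromPublisher(rows)
    .grouped(500)                  // micro-batches of up to 500 rows each
    .mapAsync(parallelism = 1) { batch =>
      // bulk-update H2 and bulk-index into Elasticsearch here (hypothetical)
      Future.successful(batch.size)
    }
    .runWith(Sink.ignore)

  done.onComplete { _ => db.close(); system.terminate() }
}
```

`grouped(n)` emits full batches of n rows and a final, possibly smaller, batch at the end of the stream, which matches the micro-batch requirement without any hand-rolled buffering.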

Rauan Maemirov

May 9, 2015, 8:55:37 AM
to scala...@googlegroups.com
I found a solution that processes the stream with iteratees, but couldn't verify that it works:
https://github.com/playframework/play-slick/blob/master/samples/iteratee/app/dao/RecordsDAO.scala

play-slick requires the configuration to be set up the 'Play way', so other calls like Database.forConfig("db") won't work.

Could somebody provide a working example of Slick iterators?
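[Editor's note] A manual-buffer alternative needs only plain Slick 3, so Database.forConfig("db") works as usual and no play-slick setup is required. This is a sketch, not the poster's code: the query, batch size, and processBatch body are hypothetical.

```scala
import slick.jdbc.H2Profile.api._
import scala.collection.mutable.ArrayBuffer
import scala.concurrent.{Await, ExecutionContext}
import scala.concurrent.duration.Duration
import ExecutionContext.Implicits.global

object ManualBuffer extends App {
  val db = Database.forConfig("db")
  val batchSize = 500
  val buffer = ArrayBuffer.empty[(Long, String)]

  def processBatch(batch: Seq[(Long, String)]): Unit = {
    // bulk update in H2 / bulk index into Elasticsearch here (hypothetical)
  }

  // DatabasePublisher.foreach delivers rows one at a time, in order,
  // so a simple mutable buffer can collect them into micro-batches.
  val done = db
    .stream(sql"select ID, BODY from DOCS".as[(Long, String)])
    .foreach { row =>
      buffer += row
      if (buffer.size >= batchSize) { processBatch(buffer.toList); buffer.clear() }
    }
    .map(_ => if (buffer.nonEmpty) processBatch(buffer.toList)) // flush the final partial batch

  Await.result(done, Duration.Inf)
  db.close()
}
```

The final map is important: without it, the last partial batch left in the buffer when the stream completes would never be processed.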


You received this message because you are subscribed to a topic in the Google Groups "Slick / ScalaQuery" group.
