> With another, this time closer, reading of the ORMLite documentation,
> I was led to believe I had to use transactions to get better performance.
So transactions are one way to do it, but as of 4.18 the Dao.callBatchTasks() method will use a transaction if it needs to. It looks like the documentation is a little out of date there. You should use Dao.callBatchTasks():
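Something like this, with a hypothetical Account entity and accountDao standing in for your own classes (you'll need java.util.concurrent.Callable):

    // a minimal sketch, assuming a hypothetical accountDao and accounts list;
    // callBatchTasks() will wrap the work in a transaction (or turn off
    // auto-commit) so the creates get flushed as one batch
    accountDao.callBatchTasks(new Callable<Void>() {
        public Void call() throws Exception {
            for (Account account : accounts) {
                accountDao.create(account);
            }
            return null;
        }
    });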
But I'm worried that this won't solve your memory issues.
> So this is what I'm currently doing: generating a collection of all the objects I want to save,
> and then walking through it inside a TransactionManager, saving one object at a time.
Can you show us a trimmed-down code snippet?
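I'm guessing it's roughly this shape (Account and accountDao again being hypothetical stand-ins), but your real code would help:

    // my guess at the per-object save inside a transaction; names are
    // hypothetical, not taken from your code
    TransactionManager.callInTransaction(connectionSource, new Callable<Void>() {
        public Void call() throws Exception {
            for (Account account : accounts) {
                accountDao.create(account);
            }
            return null;
        }
    });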
> Now, the problem I'm facing is that a Transaction is a memory-hogging
> operation, and it doesn't seem to release the memory it took, even after the commit.
Wow. This doesn't seem to me to be ORMLite's problem, since there is nothing held in the TransactionManager aside from an AtomicInteger. Maybe the SQLite driver is holding memory, but a quick Google search on that subject doesn't show anything.
Are you sure that you aren't holding on to memory in your collections somehow? Maybe try null-ing the collections as you process them?
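Something along these lines, as a rough sketch with a hypothetical batches list and insertBatch() method:

    // rough sketch of the null-ing idea; batches and insertBatch() are
    // hypothetical -- dropping each reference once it has been persisted
    // lets the garbage collector reclaim that batch
    for (int i = 0; i < batches.size(); i++) {
        insertBatch(batches.get(i));
        batches.set(i, null); // release the batch for GC
    }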
> But... it looks like ORMLite only commits the pending requests at the end of
> the loop and not at the end of the Transaction.
I'm sorry, which loop? At the end of the transaction (or at the end of callBatchTasks()) a commit _definitely_ happens. Actually, setAutoCommit(true) is called. I always assumed that this did a commit, but maybe it doesn't. Hrm.
You can take a look at the source if you'd like. Look for callBatchTasks in:
It will use a transaction under Android, but SQLite seems to be able to do auto-commit outside of Android.
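From memory, the non-Android path is shaped roughly like this (a paraphrase, not the exact source):

    // rough paraphrase of the non-Android path in callBatchTasks(): turn off
    // auto-commit, run the batch, then restore auto-commit, which is where
    // I've been assuming the commit happens
    DatabaseConnection conn = connectionSource.getReadWriteConnection();
    boolean autoCommitAtStart = conn.isAutoCommit();
    try {
        if (autoCommitAtStart) {
            conn.setAutoCommit(false);
        }
        return callable.call();
    } finally {
        if (autoCommitAtStart) {
            conn.setAutoCommit(true);
        }
        connectionSource.releaseConnection(conn);
    }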
I have some abuse tests, but I'll add some 0s and see if I can get SQLite to roll over on my box.
gray
I'm on it.
gray
> Any chance to have a solution on how to insert 100K+ objects 4 times
> into a database without facing a Java Out Of Memory Exception?
Wow. This is not good. Just found and fixed this bug:
https://sourceforge.net/tracker/?func=detail&aid=3298098&group_id=297653&atid=1255989
I was not closing some database connections in a couple of places in the JDBC code. This bug has been around for a while. I have no idea why I haven't stumbled across it before.
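The shape of the bug, simplified (not the actual ORMLite code):

    // simplified shape of the bug, not the actual ORMLite code; without
    // the finally block, every call leaks one connection from the source
    DatabaseConnection connection = connectionSource.getReadOnlyConnection();
    try {
        // ... run the statement against the connection ...
    } finally {
        // this release was the part that was missing in a couple of places
        connectionSource.releaseConnection(connection);
    }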
I'm going to do some more testing and then push out a 4.21 ASAP.
This is the sort of bug that makes me feel like a junior SE again. Highly embarrassing.
gray
> Any chance to have a solution on how to insert 100K+ objects 4 times
> into a database without facing a Java Out Of Memory Exception?
So 4.21 should have fixed this problem, dude. My tests at least work now. Sorry about the problems. It was a pretty bad issue that I guess has been in the JDBC section of the code for a while. I'm surprised I hadn't seen it before.
I've fleshed out some of the tests to better detect unclosed connections.
gray