Databases, in general, are pretty poor at storing big blobs of binary
data, although Postgres is no worse than the rest in my experience. As
a comparison, on my desktop machine, writing (and syncing) 300MB to a
(consumer-grade) local disk takes a few seconds, but getting Postgres
to write it takes several minutes.
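For reference, the raw-disk side of that comparison looks something
like this (a minimal sketch - the filename and the zero-filled
payload are purely illustrative):

    import os
    import time

    payload = b"\0" * (300 * 1024 * 1024)   # 300MB of zeroes

    start = time.time()
    f = open("blob.bin", "wb")
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())    # force the data out to stable storage
    f.close()
    print("local write+sync took %.1fs" % (time.time() - start))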
>I'm using your ocpgdb wrapper and now I'm getting an out of memory
>error (see code below).
>
>If I decrease the number of connections to 2 (connectionsNum = 2), my
>test runs fine. As soon as I increase it to 5 or 6, I get an out of
>memory error. Is there any way to free the memory for each connection
>at the end of the loop?
The increasing memory use does suggest a memory leak of some sort.
Normally a (Python) object is released as soon as all references to it
are dropped, unless it is caught up in a reference cycle. The cyclic
gc actually triggers on allocation counts, not instructions - the
youngest generation is collected once allocations outnumber
deallocations by a threshold (700 by default) - but you can also run
it on demand with gc.collect(). That only helps if there are cycles in
your reference graph, though, and it makes no difference here if I add
a call to gc.collect() to your script.
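If you want to poke at the collector yourself, the relevant knobs are
in the gc module (the numbers shown are the stock defaults):

    import gc

    # The youngest generation is collected once allocations minus
    # deallocations exceed the first threshold (700 by default).
    print(gc.get_threshold())   # (700, 10, 10) on a stock interpreter

    # Run a full collection by hand; the return value is the number
    # of unreachable objects found - only cyclic garbage benefits.
    print(gc.collect())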
It's also possible there is a memory leak in oclibpq, or that libpq
itself is deliberately keeping buffers attached to the connection
object. My bet is on libpq buffers... yep, just tried your script
again, this time still using multiple connections, but closing and
re-opening them after each use. Memory consumption is now stable.
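The workaround, in other words, looks something like this (a minimal
sketch - the connection parameters and queries are illustrative, and
I'm assuming ocpgdb's usual DB-API style connect here):

    import ocpgdb

    def run_batch(queries):
        # Open a fresh connection for each batch and close it when
        # done, so the buffers libpq attaches to the connection are
        # released rather than accumulating.
        db = ocpgdb.connect(dbname="test")  # parameters illustrative
        try:
            curs = db.cursor()
            for q in queries:
                curs.execute(q)
            db.commit()
        finally:
            db.close()      # frees libpq's per-connection buffers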
--
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/