blob fields with PostgreSQL

Dave S

Jan 10, 2020, 10:51:17 PM
to web2py-users
For web2py/pydal, when the backend is PostgreSQL, are blob fields translated to bytea or to large objects?

If bytea, does the adapter check that the value being added fits PostgreSQL's 1 GB limit, or can a 2 GB value be sent (and then rejected by the backend)?
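
For concreteness, the kind of definition I mean (connection string and names are just placeholders):

from pydal import DAL, Field

# Placeholder connection string and names, just to frame the question.
db = DAL('postgres://user:pass@localhost/mydb')
db.define_table('document',
                Field('payload', 'blob'))  # bytea or large object on the PG side?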

/dps

Carlos Correia

Jan 12, 2020, 5:36:14 PM
to web...@googlegroups.com
On 10/01/20 at 22:51, Dave S wrote:
Psycopg2 (not web2py) maps blobs to bytea, and bytea has the 1 GB limit.

This thread explains how to deal with that (using lo_import and lo_export):

https://postgresrocks.enterprisedb.com/t5/EDB-Postgres/problems-with-writing-reading-a-data-bytea/td-p/2095
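
As far as I can tell, nothing on the web2py side checks the size up front, so an oversized value simply fails at the backend. A quick sketch (table and field names are made up):

# Roughly what hitting the ceiling looks like; 'document'/'payload' are made up.
too_big = b'\x00' * (2 * 1024**3)          # 2 GB, over the 1 GB bytea limit
try:
    db.document.insert(payload=too_big)
    db.commit()
except Exception as e:                     # error comes from psycopg2/PostgreSQL
    db.rollback()
    print('rejected:', e)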

Regards,

Carlos Correia
=========================
MEMÓRIA PERSISTENTE
GSM: 917 157 146
e-mail: ge...@memoriapersistente.pt
URL: http://www.memoriapersistente.pt
XMPP (Jabber): car...@memoriapersistente.pt
GnuPG: wwwkeys.eu.pgp.net

Dave S

Jan 15, 2020, 5:45:40 AM
to web2py-users


On Sunday, January 12, 2020 at 9:36:14 AM UTC-8, Carlos Correia wrote:
> Psycopg2 (not web2py) maps blobs to bytea, and bytea has the 1 GB limit.
>
> This thread explains how to deal with that (using lo_import and lo_export):
>
> https://postgresrocks.enterprisedb.com/t5/EDB-Postgres/problems-with-writing-reading-a-data-bytea/td-p/2095



Thanks for the pointer. I'm a little disappointed the example didn't show how to use lo_import() and lo_export() from the Python side, and not just from psql, but it appears I can avoid that if I can guarantee fitting into 500 MB.
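
If I do end up needing the large-object route, I'd expect the Python side to look roughly like this with psycopg2's client-side large-object API (connection string and paths are just placeholders, and this bypasses the DAL entirely):

import psycopg2

conn = psycopg2.connect('dbname=mydb')     # placeholder connection string
# Import a client-side file as a new large object (wraps lo_import);
# keep the returned OID in an ordinary table column for later lookup.
lob = conn.lobject(0, 'rb', 0, '/tmp/big_payload.bin')
oid = lob.oid
lob.close()
conn.commit()

# Export the large object back out to a client-side file (wraps lo_export).
conn.lobject(oid, 'rb').export('/tmp/big_payload_out.bin')
conn.commit()
conn.close()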

/dps

Carlos Correia

Jan 15, 2020, 3:06:27 PM
to web...@googlegroups.com
On 15/01/20 at 05:45, Dave S wrote:

To use the lo_* functions from Python, you can call db.executesql('...') directly.
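
Something like this, for example (the table, column and paths are only illustrative, and lo_import/lo_export here run on the database server, so the paths must be visible to the server process):

# Store: import a server-side file as a large object and save its OID.
oid = db.executesql("SELECT lo_import('/tmp/big_payload.bin');")[0][0]
db.executesql("UPDATE document SET payload_oid = %s WHERE id = %s;",
              placeholders=(oid, doc_id))
db.commit()

# Retrieve: write the large object back to a server-side file.
db.executesql("SELECT lo_export(%s, '/tmp/big_payload_out.bin');",
              placeholders=(oid,))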

One approach (for files bigger than 500 MB) would be to store them in the filesystem and keep their pathnames in a DB field, as mentioned in one of the comments.
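
With web2py's DAL the filesystem approach is essentially what the 'upload' field type already does: the file lives on disk and only its generated name goes into the table. A minimal example (names are illustrative):

db.define_table('media',
                Field('title', 'string'),
                Field('payload_file', 'upload'))  # file stored under uploads/,
                                                  # only its name in the DB row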

Dave S

Jan 16, 2020, 9:40:07 PM
to web2py-users


On Wednesday, January 15, 2020 at 7:06:27 AM UTC-8, Carlos Correia wrote:
> To use the lo_* functions from Python, you can call db.executesql('...') directly.


So it seems.
 

> One approach (for files bigger than 500 MB) would be to store them in the filesystem and keep their pathnames in a DB field, as mentioned in one of the comments.


Yes. Under the hood, that seems to be what Postgres does (the 1 GB limit comes from using a couple of bits as a marker for this), and what the original PG Large Object system did.

> Regards,
>
> Carlos Correia

I regard your contributions with thanks!

/dps
 