At 22:51 on 10/01/20, Dave S wrote:
> For web2py/pydal when the backend is Postgres, are blob fields translated to
> bytea or to large object?
>
> If bytea, does the adapter check that the value being added fits the 1 GB limit
> of Postgres, or can a 2 GB field be sent (and then rejected by the backend)?
Psycopg2 (not web2py) maps blobs to bytea, which has the 1 GB limit.
This thread explains how to deal with that (using lo_import and lo_export):
https://postgresrocks.enterprisedb.com/t5/EDB-Postgres/problems-with-writing-reading-a-data-bytea/td-p/2095
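For reference, a minimal pydal sketch of that mapping (the connection string
and table are hypothetical, just to illustrate):

    from pydal import DAL, Field

    # Hypothetical DSN and table; a 'blob' field becomes a bytea column
    # when the backend is Postgres.
    db = DAL('postgres://user:pass@localhost/mydb')
    db.define_table('attachment',
                    Field('name', 'string'),
                    Field('payload', 'blob'))   # created as bytea

    # pydal itself does not check the size, so a value over the 1 GB
    # bytea limit would be rejected by the backend.
    db.attachment.insert(name='small.bin', payload=b'\x00\x01\x02')
    db.commit()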
At 05:45 on 15/01/20, Dave S wrote:
> On Sunday, January 12, 2020 at 9:36:14 AM UTC-8, Carlos Correia wrote:
>> At 22:51 on 10/01/20, Dave S wrote:
>>> For web2py/pydal when the backend is Postgres, are blob fields translated to
>>> bytea or to large object?
>>>
>>> If bytea, does the adapter check that the value being added fits the 1 GB
>>> limit of Postgres, or can a 2 GB field be sent (and then rejected by the
>>> backend)?
>> Psycopg2 (not web2py) maps blobs to bytea, which has the 1 GB limit.
>> This thread explains how to deal with that (using lo_import and lo_export):
>> https://postgresrocks.enterprisedb.com/t5/EDB-Postgres/problems-with-writing-reading-a-data-bytea/td-p/2095
>
> Thanks for the pointer. I'm a little disappointed the example didn't show how
> to use lo_import() and lo_export() from Python, and not just from psql, but it
> appears I can avoid that if I can guarantee fitting into 500 MB.
To use the lo_* functions from Python you should call db.executesql('...') directly.
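A rough sketch of what that might look like (table, field and paths are
hypothetical; note that lo_import/lo_export read and write on the *database
server's* filesystem and normally need elevated privileges):

    # Import a file from the server's filesystem as a large object;
    # lo_import() returns the new object's OID.
    rows = db.executesql("SELECT lo_import('/srv/uploads/big.bin');")
    oid = rows[0][0]

    # Keep the OID in a plain integer field so the object can be found later.
    db.document.insert(name='big.bin', blob_oid=oid)
    db.commit()

    # Later, write the large object back out to a file on the server.
    db.executesql("SELECT lo_export(%s, '/srv/exports/big.bin');",
                  placeholders=[oid])
    db.commit()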
One approach (for files bigger than 500 MB) would be to store them in the filesystem and store their pathnames in a DB field, as mentioned in one of the comments.
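For what it's worth, web2py/pydal's 'upload' field type already works that
way. A sketch, with a hypothetical table and folder:

    # The uploaded file is saved (renamed) under uploadfolder; the database
    # column only stores the generated filename, not the bytes.
    db.define_table('bigfile',
                    Field('title', 'string'),
                    Field('data', 'upload', uploadfolder='/srv/uploads'))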
Regards, Carlos Correia