Uploading larger files


Patrik Pahulák

Jan 17, 2024, 6:45:22 AM
to Magnolia User Mailing List

Hello, I wouldn't exactly call this a bug, but I didn't know how else to categorize this issue.

Basically, as the title suggests, I have issues uploading larger files to the DAM. The way I upload files to Magnolia is via the REST API: I first create an asset and then keep uploading file chunks to it, storing them in a sub-node of the asset (all saved as Binary). Once I have all the chunks, I try to save the asset like this (I read the chunks back and create a stream):

InputStream in = new ByteArrayInputStream(out.toByteArray());
ValueFactory vf = damSession.getValueFactory();
Binary dataBinary = vf.createBinary(in);
resource.setProperty("data", dataBinary);
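As an aside, collecting every chunk into one byte array means the whole file sits in heap memory before it ever reaches `createBinary`. The chunk streams can instead be concatenated lazily with `java.io.SequenceInputStream`, so only one chunk is open at a time. This is a minimal, runnable sketch of the idea only; it does not lift the 1 GB database limit, and the in-memory streams below merely stand in for the `Binary.getStream()` calls you would make on the stored chunk nodes:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.io.UncheckedIOException;
import java.util.Collections;
import java.util.List;

public class ChunkConcat {

    // Concatenate chunk streams lazily: SequenceInputStream opens each
    // underlying stream only when the previous one is exhausted, so the
    // full file is never materialized in memory at once.
    static InputStream concat(List<InputStream> chunks) {
        return new SequenceInputStream(Collections.enumeration(chunks));
    }

    // Convenience helper used for demonstration: drain the combined
    // stream into a String.
    static String concatToString(List<InputStream> chunks) {
        try (InputStream combined = concat(chunks)) {
            return new String(combined.readAllBytes());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // In a real repository these would be the chunk binaries' streams.
        String result = concatToString(List.of(
                new ByteArrayInputStream("hello ".getBytes()),
                new ByteArrayInputStream("world".getBytes())));
        System.out.println(result);
    }
}
```

The resulting `InputStream` could then be handed to `ValueFactory.createBinary(InputStream)` in place of the `ByteArrayInputStream`.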

This, however, does not work; an error is thrown:

org.postgresql.util.PSQLException: Unable to bind parameter values for statement.
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:390) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:177) ~[postgresql-42.6.0.jar:42.6.0]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.execute(ConnectionHelper.java:524) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.reallyExec(ConnectionHelper.java:313) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:293) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:289) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$RetryManager.doTry(ConnectionHelper.java:552) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:297) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.data.db.DbDataStore.addRecord(DbDataStore.java:360) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.BLOBInDataStore.getInstance(BLOBInDataStore.java:132) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValue.getBLOBFileValue(InternalValue.java:623) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValue.create(InternalValue.java:379) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValueFactory.create(InternalValueFactory.java:108) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.ValueFactoryImpl.createBinary(ValueFactoryImpl.java:79) ~[jackrabbit-core-2.20.9.jar:2.20.9]

This is also part of the error log:

Caused by: java.io.IOException: Bind message length 1 073 741 889 too long. This can be caused by very large or incorrect length specifications on InputStream parameters.
    at org.postgresql.core.v3.QueryExecutorImpl.sendBind(QueryExecutorImpl.java:1724) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:2003) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1523) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360) ~[postgresql-42.6.0.jar:42.6.0]

I need to be able to upload files up to 2 GB. The workaround I eventually got working is, in my opinion, very sub-optimal: since I am already uploading and storing the chunks, instead of creating a single binary at the end of the upload process I simply leave the chunks as they are. Later, when I want to retrieve the file, I have to stream the chunks to my frontend application and reassemble the file there. All of this is very cumbersome.

What I would like to know is whether it is possible to upload files larger than 1 GB, so that I can use TemplatingFunctions etc. to generate file links.


I am using the Community Edition; the database is PostgreSQL.
Versions: Magnolia 6.2.40 and DAM module 3.0.27

Roman Kovařík

Jan 18, 2024, 3:43:46 AM
to Magnolia User Mailing List, patrik....@servermechanics.cz
Hey Patrik,

There could be multiple reasons:
  • A limitation of the file system where the database is running.
  • A limitation of the database (you might try storing it directly in the database, bypassing the JCR API).
  • A limitation of the JDBC driver (you might try a different type of database locally).
  • I wouldn't rule out a corrupted stream created by the code posted here. I can also see that all the data is loaded into memory (ByteArrayInputStream) instead of being streamed from a file, for example.
Generally, it's probably not a good idea to store such big files in JCR in the first place. Have you considered the plain file system or S3?
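To illustrate the file-system route: the uploaded chunks could be appended straight to a file on disk, with the repository keeping only a reference such as the path. This is a minimal, runnable sketch under that assumption; the class and method names are illustrative, not Magnolia or Jackrabbit APIs:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class FileSpooler {

    // Stream each chunk into the target file in order. Nothing larger than
    // one chunk is held in memory, and no blob ever crosses the JDBC driver,
    // so the 1 GB bind limit never comes into play.
    static Path spool(Path target, List<InputStream> chunks) throws IOException {
        try (var out = Files.newOutputStream(target,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (InputStream chunk : chunks) {
                try (chunk) {
                    chunk.transferTo(out);
                }
            }
        }
        return target;
    }

    // Demonstration helper: spool two small in-memory "chunks" to a temp
    // file and return its contents.
    static String demo() {
        try {
            Path tmp = Files.createTempFile("upload", ".bin");
            spool(tmp, List.of(
                    new ByteArrayInputStream("abc".getBytes()),
                    new ByteArrayInputStream("def".getBytes())));
            String content = Files.readString(tmp);
            Files.delete(tmp);
            return content;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

An asset node would then store the resulting path (or an S3 object key) as a plain string property instead of a `Binary`.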
I hope you'll find a solution!

Regards, Roman
