Just at the beginning, a disclaimer: my knowledge stems from IBM VAST 6.x and earlier. I do NOT know the current implementation of the later versions provided by Instantiations, i.e. VAST 7 up to the current VAST 2021. Nevertheless, I hope my advice can help you, under the assumption that the DB implementation has not changed significantly.
Having 0 for baseLength is bad. The ODBC field design and implementation requires you to allocate enough space to transfer the longest possible field value in memory, in either direction (to or from the database).
Thus you should, for example, set
AbtDatabaseVarBinaryField bufferSize: 10*1024*1024
in advance, for a 10 MB maximum length of ANY field of type VARBINARY(MAX).
I admit this is very inflexible, in particular if a record is declared with several such fields, let alone if you are running a query returning many records of that kind.
Remember: if you set that value too low, any field longer than that buffer will be truncated in transit without error (whether you run an update, insert, or select).
The ODBC API demands this low-level buffer wiring per field.
The Smalltalk implementation (remember: VAST 6.x and earlier) realizes this with lazy initialization of this bufferSize.
As Richard stated, in such cases a transfer in chunks would be preferable and would solve this.
Actually, ODBC provides this in its API (and the databases provide it via the CLOB and BLOB types).
As I remember, VAST 6.x and earlier implemented CLOB and BLOB support for DB2, and there was support for that in the visual part editors for query statements, but it was nearly undocumented and, unfortunately, product specific.
There were also traces of it in the IBM VisualAge Oracle BLOB implementation, but nothing for ODBC.
So there are two ways to deal with this:
A pragmatic solution: guarantee that no variable field is longer than a predetermined length, whatever that value might be; set this value in advance using AbtDatabaseVarBinaryField bufferSize: value (typically in the loaded code of your application), and use the given visual parts with utmost care, in the sense that you will suffer problems if you transfer records that are too large.
That means a table with many columns, all of type VARBINARY(MAX), will allocate many buffer instances, each one with the maximum possible size stated in bufferSize (your value, set in advance). As I warned you earlier, that may blow up your image size.
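To make the memory impact concrete, here is a small back-of-the-envelope calculation (illustrative Python only; the numbers are example assumptions, not VAST defaults):

```python
# Illustrative arithmetic only: why a fixed per-field buffer blows up memory.
# All values below are example assumptions, not VAST defaults.
BUFFER_SIZE = 10 * 1024 * 1024   # 10 MB allocated per VARBINARY(MAX) field
COLUMNS = 3                      # VARBINARY(MAX) columns per record
ROWS = 100                       # records returned by one query

bytes_needed = BUFFER_SIZE * COLUMNS * ROWS
print(f"{bytes_needed / (1024 ** 3):.1f} GiB")  # roughly 2.9 GiB for one result set
```

So even a modest result set can demand gigabytes of buffer space when every VARBINARY(MAX) column is pre-allocated at full size.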
The other way is to write your own low-level implementation of the ODBC protocol to transfer chunks. This is very complex. To give you the idea, the BLOB logic was roughly:
a) declare a record with all fields that are not too large (e.g. those being neither VARBINARY(MAX) nor VARCHAR(MAX)) and a stub for every field of type VARBINARY(MAX) or VARCHAR(MAX),
b) run an insert, update, or select which initially transfers only the non-stubbed fields,
c) for every field of type VARBINARY(MAX) or VARCHAR(MAX) (i.e. for every stub), implement a transfer loop, transferring the data incrementally in chunks of reasonable size (thus requiring only ONE buffer to be allocated and wired).
Before b) and after c) you have to provide the standard ODBC connection protocol start and stop.
As far as I remember, this block transfer of stubs also required a technique of specialized callbacks.
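The chunk loop in c) follows the pattern of ODBC's SQLGetData, which you call repeatedly on the same column until everything has been fetched. The following Python sketch only simulates that loop shape against an in-memory byte string — no real ODBC is involved, and the function names are mine, not part of VAST or the ODBC API:

```python
# Simulated return codes, named after their ODBC counterparts.
SQL_SUCCESS = 0
SQL_SUCCESS_WITH_INFO = 1   # more data remains for this column
SQL_NO_DATA = 100           # column fully consumed

def get_data_chunk(source: bytes, offset: int, buf_len: int):
    """Simulates one SQLGetData call: returns (return_code, chunk)."""
    if offset >= len(source):
        return SQL_NO_DATA, b""
    chunk = source[offset:offset + buf_len]
    done = offset + buf_len >= len(source)
    return (SQL_SUCCESS if done else SQL_SUCCESS_WITH_INFO), chunk

def fetch_blob(source: bytes, buf_len: int = 64 * 1024) -> bytes:
    """Drives the chunk loop: only ONE buffer of buf_len is live at a time."""
    parts = []
    offset = 0
    while True:
        rc, chunk = get_data_chunk(source, offset, buf_len)
        if rc == SQL_NO_DATA:
            break
        parts.append(chunk)
        offset += len(chunk)
        if rc == SQL_SUCCESS:   # last (possibly partial) chunk arrived
            break
    return b"".join(parts)
```

The point is that the buffer size only bounds one chunk, not the whole value, so a 10 MB BLOB can be moved through a 64 KB buffer.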
The design of the many AbtxxxxField classes supports this approach of product specific solutions.
In particular, there is a layer of product-independent classes for every field type, e.g. AbtDatabaseVarBinaryField or AbtDatabaseVarCharacterField, to be used in your application to declare queries, tables, and host variables; and there is a layer of product-specific fields (and records) for ODBC, DB2, or other products, which are mapped (registered) to be used instead of the generic fields when you open and run your connection against a specific product. This mapping is static, which means you cannot change the database product at runtime (but I guess this is not a general problem).
In other words, if you only loaded the ODBC support and never the DB2 or Oracle database features, you will not have seen this approach.
Changing VARBINARY(MAX) to VARBINARY(10000) (10000 or any other value lower than max) will not change anything and will not help you.
I suggest you attempt the following:
a) determine a reasonable value for the maximum transferable length in your scope (e.g. as with web uploads) and deny any attempt to transfer anything longer than that; but it is up to you to detect this, as neither ODBC nor the database will do it for you,
b) refrain from declaring tables, queries, or host variables with many such instances; the trade-off, however, is running many queries instead of one.
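Point a) above amounts to a simple length guard in your application code, since the truncation would otherwise happen silently. A minimal sketch (names and limit are hypothetical, chosen to match whatever bufferSize you configured):

```python
# Hypothetical guard: must match the bufferSize configured for the field type.
MAX_TRANSFER = 10 * 1024 * 1024

def check_transfer_length(data: bytes) -> bytes:
    """Reject any value that would be silently truncated in transit."""
    if len(data) > MAX_TRANSFER:
        raise ValueError(
            f"value of {len(data)} bytes exceeds the {MAX_TRANSFER}-byte buffer")
    return data
```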
An example to explain what I mean with b):
Table A with col1, col2, col3, all three columns VARBINARY(MAX), is bad in this sense.
Table B with colNo, col, where colNo is INTEGER and col is VARBINARY(MAX), is better.
Instead of running one select on table A returning one record at once, you will have to run a cursor with many selects on table B, BUT with each select returning only ONE record. A multi-row result over table B would have the same problem as the single select of table A.
-> You have to work with LIMIT clauses (or the product's equivalent, e.g. TOP or OFFSET ... FETCH) in the SELECT to control the maximum number of rows returned from the database per call.
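The one-row-at-a-time paging over table B can be sketched as follows. This is a Python simulation against an in-memory list standing in for the table; real code would issue a parameterized SELECT with a LIMIT-style clause per iteration, and all names here are mine:

```python
# Stand-in for table B: rows of (colNo, col). A real loop would issue e.g.
#   SELECT colNo, col FROM B WHERE colNo > ? ORDER BY colNo
#   OFFSET 0 ROWS FETCH FIRST 1 ROW ONLY    -- SQL Server style paging
table_b = [(1, b"a" * 10), (2, b"b" * 20), (3, b"c" * 30)]

def select_one_row(last_colno: int):
    """Simulates one SELECT ... LIMIT 1: next row after last_colno, or None."""
    rows = [r for r in table_b if r[0] > last_colno]
    return min(rows, default=None)

def fetch_all_rows():
    """Pages through table B so only one large value is live per fetch."""
    last, result = 0, []
    while (row := select_one_row(last)) is not None:
        result.append(row)
        last = row[0]
    return result
```

Each iteration needs buffer space for just one VARBINARY value, instead of the whole result set at once.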
Kind regards
Marcus