|SQLITE_MAX_EXPR_DEPTH 1000 needs to be 0 in sqlite.h||Ronald Jones||1/29/13 7:16 PM|
I am using dbVisualizer on a Mac and ran across this error with a table that has 907 columns.
In researching the issue, I found that the problem would be solved if the compile-time setting SQLITE_MAX_EXPR_DEPTH were changed from its default of 1000 to 0 (no limit).
See this article http://stackoverflow.com/questions/9570197/sqlite-expression-maximum-depth-limit for more information.
The folks at dbVisualizer said to refer it back to this community for resolution, saying "it happens when calling DatabaseMetadata#getColumns() for the table."
I am wondering if anyone would object to removing the limit, or at least raising it?
Would a "no limit" branch be of interest to others as well? Perhaps there are a number of default settings that need boosting or removing entirely?
How can the issue be resolved? Would all the underlying native libraries have to be recompiled to produce a new distribution or branch?
|Re: SQLITE_MAX_EXPR_DEPTH 1000 needs to be 0 in sqlite.h||Grace B||1/30/13 4:28 AM|
This would require reading over the SQLite API documentation to find out what effects changing the limits would have.
At the moment, the driver uses just about the same compile-time limits as the SQLite distribution (see the $(SQLITE_OUT)/sqlite3.o target in the Makefile). There is probably more to gain from having the driver behave like the official distribution.
Consider also that for some limits, like SQLITE_MAX_COMPOUND_SELECT, setting the value too high could cause a stack overflow, and for SQLITE_MAX_LIKE_PATTERN a high value could leave the application open to denial of service.
Yes, the native libraries would need to be recompiled. I have created an issue on Bitbucket: https://bitbucket.org/xerial/sqlite-jdbc/issue/39
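For anyone wanting to experiment locally, recompiling the SQLite amalgamation with the limit raised is a one-flag change. A sketch, not the project's actual build invocation: the -D flag is a real SQLite compile-time option, but the source path and other flags here are illustrative.

```shell
# Rebuild the amalgamation object with the expression-depth limit disabled.
# sqlite3.c is the amalgamation source; adjust the path for your checkout.
gcc -O2 -DSQLITE_MAX_EXPR_DEPTH=0 -c sqlite3.c -o sqlite3.o
```

The driver's Makefile would need the same define added wherever the $(SQLITE_OUT)/sqlite3.o target sets its CFLAGS, and the natives rebuilt for each supported platform.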