I have a 16GB session file that I would like to load into ZAP (ZAP had no problem creating the session). Whenever I try, however, I see the following error in "zap.log":
...
2016-04-28 12:02:59,581 [Thread-10] INFO ENGINE - dataFileCache open start
2016-04-28 12:02:59,582 [Thread-10] FATAL ENGINE - dataFileCache open failed
org.hsqldb.HsqlException: wrong database file version: requires large database support
...
Interestingly, I'm able to load a 14GB session file just fine. Perhaps I'm hitting a ceiling at 16GB? However, looking over the HyperSQL project's FAQ (http://hsqldb.org/web/hsqlFAQ.html), it seems like hsqldb shouldn't have any problems:
"The current size limit of an HSQLDB database is 8 TB for all CACHED
tables and 256GB for each TEXT table. In addition, maximum total lob size
is 64TB. If you use large MEMORY tables, memory is only limited by the
allocated JVM memory, which can be several GB on modern machines and 64bit
operating systems. We have performed extensive tests with the latest
versions using the TestCacheSize and other test programs inserting millions
of rows and resulting in data files of up to 16 GB and larger LOB sizes.
Users have reported the use of databases with up to 900 million rows."
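One guess on my part: the "requires large database support" wording made me look at HSQLDB's connection properties, and there is an hsqldb.large_data property for opening databases in large-data mode. I don't know whether ZAP exposes its JDBC URL anywhere, but if it did, I imagine the fix would look something like this (the session path below is just a placeholder, and this is an untested sketch, not something I've gotten working):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LargeSessionUrl {
    // Build an HSQLDB file-mode JDBC URL with large-data support enabled.
    // hsqldb.large_data=true is documented by HSQLDB for opening databases
    // that were created (or have grown) beyond the normal size limits.
    static String buildUrl(String sessionPath) {
        return "jdbc:hsqldb:file:" + sessionPath + ";hsqldb.large_data=true";
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder path, not my real session location.
        String url = buildUrl("/path/to/session/session");
        System.out.println(url);
        // A real attempt would then open the session database like so:
        // Connection c = DriverManager.getConnection(url, "SA", "");
    }
}
```

Is there a way to get ZAP to pass that property through when it opens a session?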
Thanks in advance for any thoughts on how I might be able to resolve this issue.
(You may be wondering how I ended up with 16GB and 14GB session files... Although I make liberal use of ZAP contexts and URI regular expressions to include/exclude scan targets, the system being scanned is enormous.)