100 TB is a large deployment; 10 TB would be pretty normal. So, if you have a lot of RAM (> 100 GB), you can probably get away with upping the default of 10 million for "perm_index_size" to something like 50 million.

At the default 10 million and 200 permanent indexes ("num_indexes"), you get a max size per node of 2 billion events. I usually figure on an average event size of 800 bytes and a 2:1 inflation of the index relative to the original data, meaning you'd expect to max out at 2 billion * 800 * 2 = 3.2 TB. I have run with 400 as num_indexes as standard in production, so I know that value is OK. If you upped perm_index_size to 50 million with num_indexes at 400, that would move your estimate up to 20 billion events on the node, which would be about 32 TB using the above model.

I think some on this list have successfully gotten a num_indexes of 800 to work, but some activities will slow down a bit, and it's possible you could run into memory usage problems with some queries. At 800 indexes and a perm_index_size of 50 million, you're up to 40 billion events at an estimated size of 64 TB. So I don't think it's likely that you can fully use 100 TB of index on a single node and expect not to get weirdness on indexed queries. I would recommend finding ways to split that disk among physical boxes, or at least multiple VMs, to keep num_indexes reasonable.
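If it helps, here's that capacity arithmetic as a quick Python sketch. The 800-byte average event and 2:1 index inflation are just my rules of thumb from above, not hard numbers, so treat the outputs as ballpark estimates:

```python
# Back-of-envelope index capacity model.
# Assumptions (rules of thumb, not hard limits):
#   - 800 bytes average event size
#   - 2:1 inflation of index size relative to raw data
AVG_EVENT_BYTES = 800
INDEX_INFLATION = 2

def max_index_tb(perm_index_size, num_indexes):
    """Rough maximum on-disk index size for one node, in TB."""
    max_events = perm_index_size * num_indexes
    return max_events * AVG_EVENT_BYTES * INDEX_INFLATION / 1e12

print(max_index_tb(10_000_000, 200))  # defaults           -> 3.2 TB
print(max_index_tb(50_000_000, 400))  # bumped settings    -> 32.0 TB
print(max_index_tb(50_000_000, 800))  # aggressive config  -> 64.0 TB
```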
Keep in mind that archive is not affected by any of this; it will just grow forever (up to the prescribed size). So you can let the remaining TB be archive, which gets roughly an 8:1 compression ratio, meaning we're probably talking decades of storage, even at fairly high event rates.
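For a feel of the retention math, here's a small sketch using the same 800-byte average event and the 8:1 compression mentioned above. The 60 TB archive allocation and 1,000 events/sec rate are purely illustrative numbers I made up, not from any real setup:

```python
# Rough archive retention estimate.
# Assumptions: 800-byte average event, 8:1 archive compression.
# The archive size and event rate below are illustrative only.
AVG_EVENT_BYTES = 800
COMPRESSION_RATIO = 8
SECONDS_PER_YEAR = 86_400 * 365

def archive_retention_years(archive_tb, events_per_sec):
    """Years until the archive allocation fills at a steady ingest rate."""
    raw_bytes_per_year = events_per_sec * AVG_EVENT_BYTES * SECONDS_PER_YEAR
    compressed_bytes_per_year = raw_bytes_per_year / COMPRESSION_RATIO
    return archive_tb * 1e12 / compressed_bytes_per_year

print(archive_retention_years(60, 1_000))  # -> ~19 years
```

The point is just that compression stretches archive a long way; plug in your own disk allocation and ingest rate to see where you land.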