Hey everybody. I know I'm late to the party, but I've found a long and complicated, but (as of our first test runs Friday) viable, workaround for the slowdown issue. My company had the same problem pop up around the same time as the original poster, and while I can't say for certain why it happens, here's where I've noticed it happening:
Our largest sheets, the ones that run a large number of formulas alongside large amounts of data, seem to have the most trouble being accessed, read, and (to a lesser extent) written to from scripts. The problem either doesn't occur, or causes far less of a slowdown, when the same calls are made from bound scripts attached to the spreadsheets in question. It also doesn't seem to occur with custom formulas (I'm guessing because you don't have to open the spreadsheets in code; you're just passing arrays into the functions). It does, however, seem to hit library functions hardest (which is a shame, since libraries are super useful for maintaining a unified codebase). Here's the workaround, in three parts:
1) Get the Sheet objects for the foreign sheets inside local functions of the bound script, and pass those objects into the library functions. Since library functions seem to struggle far more with getting sheets than bound functions do, opening a Sheet object once locally and passing it into multiple library calls both shaves a bit off the sheet load time and reduces how many times you call this suddenly-expensive operation.
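A minimal sketch of that first part, assuming a bound script and an Apps Script library added under the identifier `Lib` (the library name and its `processOrders`/`crossReference` functions are hypothetical, just stand-ins for your own):

```javascript
// Bound script: open the spreadsheet and its sheets ONCE here,
// then hand the Sheet objects to the library. The library never
// calls SpreadsheetApp.openById() itself.
function runReports() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const orders = ss.getSheetByName("Orders");       // hypothetical sheet names
  const customers = ss.getSheetByName("Customers");

  // Every library call reuses the same Sheet objects instead of
  // re-resolving them internally (the expensive part during the slowdown).
  Lib.processOrders(orders);
  Lib.crossReference(orders, customers);
}
```

The library functions then take `sheet` parameters and work directly with the objects they're handed.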
2) The harder one: we use a combination of custom-formula hashing functions, the Cache Service, and timed functions to keep as much of our most-used data in the cache for as long as possible. Even on days when our column reads (which took seconds before June) would take 2-5 minutes to complete, reading from the cache always seemed to take only a second to load and another to read all of the stored data, so for the moment it seems immune to whatever shenanigans Google is up to.
3) Create your own custom read function that condenses range.getValues() calls into fewer ranges. If you have to read from columns A, B, C, E, F, and H, your function should read the ranges A:C, E:F, and H:H, and then break them apart into their own arrays inside the code. While the slowdown is happening, it's far faster to run a little extra computation than a couple of extra reads.
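The condensing step is plain logic, no Sheets calls needed. Here's one way it might look (the function name is made up; this only handles single-letter columns, extend as needed):

```javascript
// Condense a list of column letters into contiguous A1 ranges,
// e.g. ["A","B","C","E","F","H"] -> ["A:C","E:F","H:H"],
// so six getValues() calls become three.
function condenseColumns(letters) {
  const toNum = (c) => c.charCodeAt(0) - 64;            // "A" -> 1 (A-Z only)
  const nums = [...new Set(letters.map(toNum))].sort((a, b) => a - b);
  const ranges = [];
  let start = nums[0], prev = nums[0];
  for (const n of nums.slice(1).concat([NaN])) {        // NaN flushes the last run
    if (n === prev + 1) { prev = n; continue; }
    ranges.push(String.fromCharCode(64 + start) + ":" + String.fromCharCode(64 + prev));
    start = prev = n;
  }
  return ranges;
}
```

You'd then call getValues() once per condensed range and slice the resulting 2-D arrays back apart into per-column arrays in code.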
Step 2 is the heart of the biggest speedup, and the most complicated of the bunch. If you're not familiar with the Cache Service, this video does a pretty good job of explaining it:
https://www.youtube.com/watch?v=Z0rzBzv-vY4&list=PL42xwJRIG3xCCAeJUuxtmr47NYC8sRQ2d&index=1 . Note: our data system is organized into columns, so if your data is organized more by row than by column, some adaptation is necessary.
So, the basic idea of the solution: the first time we read a column of data, we also save that column into the cache (we set ours to a 30-minute timeout, but do as you will), and whenever we read data, we first check whether it's in the cache, and if so, get it from there. That way, you should only have to actually read data from the sheet if it was never cached, or if its cache entry timed out. One limitation of the cache is that (as far as I know) it only works with strings and other primitive data types. Additionally, the cache doesn't keep a live version of your data; you have to update it each time.
The solution to the first is pretty easy: JSON.stringify() turns whatever you pass it into a string, and JSON.parse() converts that string back into whatever data type it originally was. So when we want to store data in the cache, we convert it to a string, and when we retrieve it, we convert it back. Keeping the data live is a little trickier.
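The read-through pattern so far might be sketched like this (function and header names are hypothetical; this is the simple version keyed by header, before hashing enters the picture):

```javascript
// Read a column, preferring the cache. On a miss, read the sheet
// and populate the cache for the next caller.
function readColumnCached(sheet, header, a1Range) {
  const cache = CacheService.getScriptCache();
  const hit = cache.get(header);
  if (hit !== null) {
    return JSON.parse(hit);                 // cache hit: no sheet read at all
  }
  const values = sheet.getRange(a1Range).getValues();
  cache.put(header, JSON.stringify(values), 1800);  // 30 min, as in our setup
  return values;
}
```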
Say I have a column of someone's favorite movie series ranked in order with the column header "movie columns": 1) Lord of the Rings, 2) Star Wars, 3) Fast and Furious, etc... I store this list into the cache using the header, "movie columns", as my key. However, after that, the person watches the Harry Potter series and decides Harry Potter is their new favorite series, so that becomes #1 on the list, and everything else moves down one. If I go to read the sheet, my program will check the cache first, find "movie columns" in the cache, and return the original list instead of the new one, since the cache only has the original list stored. So, I need a way to tell if the data inside the cache has become outdated, and read new data rather than the old, outdated data. This is where the hashing functions I mentioned earlier come in.
My company uses HMAC-SHA-256 for this. The basic idea: you call Utilities.computeHmacSha256Signature(stringifiedColumn, standardizedString).join(""), passing in a stringified version of your column as the first parameter, and a string that you keep the same for every instance of this function as the second parameter (I'll explain why in a moment). This function takes your strings and generates a 256-bit (32-byte) signature, which the .join("") turns into a string. That string varies wildly between columns; heck, just changing a 0 to a 1 in the column drastically alters the result. The idea is that we use this string, rather than any name or column header, as our cache key. That way, if even the slightest bit of data in the column changes, the hash key changes too, and when you go to read the data from the cache, the new hash key won't be in the cache, so the new column data gets read from the sheet instead, while all other unchanged data is still read from the cache under its unchanged hash keys.
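A minimal sketch of that key derivation (the function name and the "v1-column-hash" constant are made up; the constant just has to be identical everywhere you hash):

```javascript
// Derive a content-addressed cache key from a column's values.
// Same data in -> same key out; any change -> a different key.
function columnCacheKey(columnValues) {
  // Utilities.computeHmacSha256Signature returns an array of signed
  // bytes; joined into one long digit string, it works fine as a key,
  // since this is a cache key, not security.
  return Utilities
      .computeHmacSha256Signature(JSON.stringify(columnValues), "v1-column-hash")
      .join("");
}
```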
We implement the hash function locally on all sheets using bound scripts to make custom formulas. Each spreadsheet we use the cache with has a special hidden sheet called "SHA-256"; this page uses only one cell, which holds a custom formula that takes in any number of columns (and is also specialized to take and separate batches of consecutive columns), and returns a stringified object that maps each column header to a hash key. Like I said above, custom formulas don't seem to suffer the slowdown, so this thing usually updates 5 columns of about 500 cells each within a few seconds of a change (you can use SpreadsheetApp.flush() to force the custom formula to finish calculating before you continue). Then, whenever we want to read from a sheet, we first check whether that spreadsheet has a SHA-256 sheet, and if so, read that one cell, parse the string, and use the hash keys that the headers map to as cache keys.
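The core of that custom formula might look something like this, assuming the headers sit in the first row of the selected range (a simplified sketch; the real one handles batches of column ranges):

```javascript
// Custom formula: =SHA_256(Movies!A1:B500)
// Returns a stringified { header: hashKey } map for the given range.
function SHA_256(range) {
  const headers = range[0];                 // row 0 holds the column headers
  const map = {};
  for (let c = 0; c < headers.length; c++) {
    // Rebuild column c as its own 2-D array, matching getValues() shape.
    const column = range.slice(1).map((row) => [row[c]]);
    map[headers[c]] = Utilities
        .computeHmacSha256Signature(JSON.stringify(column), "v1-column-hash")
        .join("");
  }
  return JSON.stringify(map);
}
```

Reading scripts then parse this one cell and look up each header's hash key.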
Note: make sure that all instances of this custom formula use the same standardizedString. You're not implementing security here, so the string's complexity or randomness doesn't matter; it's there to make sure that all data is hashed the same way (and is therefore very, very unlikely to evaluate to the same value as another set of data and cause weirdness in your read functions).
A few more notes: cache values can only hold 100KB of data each, so if you have particularly large columns with lots of data in each cell, you'll want to break the stringified data into smaller slices and store them under keys with some form of numbered prefix or suffix. If you have that problem, you'll also need code that checks for the first numbered slice whenever data isn't found under the plain hash key. We split ours into 25,000 characters at a time, just to be safe (though I personally have no idea whether that's ideal, only that it works).
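The slicing and reassembly are pure string work, something like this sketch (names hypothetical; `getter` stands in for `cache.get`):

```javascript
const SLICE = 25000;  // characters per cache value, well under the 100KB cap

// Split an oversized stringified column into suffixed slices:
// { "<key>_0": ..., "<key>_1": ..., ... }, ready for cache.put().
function toSlices(key, str) {
  const parts = {};
  for (let i = 0, n = 0; i < str.length; i += SLICE, n++) {
    parts[key + "_" + n] = str.slice(i, i + SLICE);
  }
  return parts;
}

// Reassemble by walking the suffixes until one is missing (null).
function fromSlices(key, getter) {
  let out = "", n = 0, piece;
  while ((piece = getter(key + "_" + n)) !== null) {
    out += piece;
    n++;
  }
  return out;
}
```

On a cache miss under the plain hash key, you'd check for `<key>_0` before concluding the data isn't cached.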
Next note: I personally recommend using INDIRECT inside your custom formulas. Using =SHA_256(INDIRECT("Movies!A:A")) makes sure you're always targeting the right range with the formula, no matter what.
Next note: I mentioned using timed functions to speed this up further. The basic idea: since locally called functions don't seem to suffer the slowdown as severely, you have a timed function in each spreadsheet's bound script periodically load all of the data columns covered by your hash formula into the cache, so the first user to read from the sheet isn't the one who gets slowed down.
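A sketch of what that could look like with a time-driven trigger in the bound script (sheet name, cache key, and function names are placeholders for your own):

```javascript
// Run once to install a trigger that re-warms the cache every 30 minutes,
// matching the 30-minute cache timeout.
function installWarmupTrigger() {
  ScriptApp.newTrigger("warmCache")
      .timeBased()
      .everyMinutes(30)
      .create();
}

// Called by the trigger: read the hot columns locally (fast, since this
// is a bound script) and push them into the cache for everyone else.
function warmCache() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Movies");
  const values = sheet.getRange("A:A").getValues();
  CacheService.getScriptCache()
      .put("movie columns", JSON.stringify(values), 1800);
}
```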
Last note: sometimes the cache itself might be the problem, and you'll need a way to clear it. We keep a list of all the hash keys we push data into the cache under in the Properties Service. The Properties Service works basically the same as the cache, except that it doesn't time out, and it's not as efficient. We keep an updated, stringified object of cache keys mapped to true in the Properties Service under the key "cache keys". Whenever we want to clear the cache, we just get this property, parse it, take the object's keys, and remove them from the cache. Boom, cache cleared.
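That bookkeeping is short enough to sketch in full (the "cache keys" property name is from above; the function names are made up):

```javascript
// Record every hash key we cache under, so we can find them all later.
function rememberCacheKey(hashKey) {
  const props = PropertiesService.getScriptProperties();
  const keys = JSON.parse(props.getProperty("cache keys") || "{}");
  keys[hashKey] = true;
  props.setProperty("cache keys", JSON.stringify(keys));
}

// Nuke every tracked cache entry in one shot, then forget the list.
function clearTrackedCache() {
  const props = PropertiesService.getScriptProperties();
  const keys = JSON.parse(props.getProperty("cache keys") || "{}");
  CacheService.getScriptCache().removeAll(Object.keys(keys));
  props.deleteProperty("cache keys");
}
```

You'd call rememberCacheKey() from the same place that does cache.put().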
We just got this implemented Friday, so if any of this suddenly doesn't work, we'll let you know. Hope this helps!