Greetings Wayne,
Your observation is correct, and that is the expected performance. Each approach, streams vs. one-shot, has its own trade-offs.
For more information, see my "Unified Compression Streams" talk at ESUG 2019:
https://youtu.be/neTO5M1Y6e0
One-Shot
The most performant option, but it requires the data to be in memory with its boundaries known, which may or may not be the case.
It also can't fit the compression/decompression behavior seamlessly into existing usages of streams.
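To make the trade-off concrete, here is a minimal one-shot sketch. Python's zlib module stands in for illustration, since the Smalltalk one-shot API isn't shown here; the point is that the entire payload must already exist in memory before the call.

```python
import zlib

# The whole payload must be a single in-memory buffer with known boundaries.
data = b"example payload " * 64

compressed = zlib.compress(data)    # one call, all data at once
restored = zlib.decompress(compressed)
```

Fast and simple, but if the data arrives incrementally (e.g. off a socket), you must buffer all of it first.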
Streaming
Useful for fitting into existing stream-based code: these streams can wrap memory/file/socket streams, since they are composable and polymorphic with Streams.
They do NOT require that the size of the data is known or that it is already in memory.
For example, you can work on data as it comes across the socket. You can write in-memory zip files across the socket without
ever needing file I/O. If you have 1GB worth of data, you can work on it incrementally in small chunks instead of creating a
1GB ByteArray in memory. Or perhaps the data is too large for a Smalltalk container altogether: you can stream across 16GB worth of data if you want.
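A rough sketch of that incremental style, again using Python's zlib as a stand-in: the compressor only ever sees one small, fixed-size chunk at a time, so the full payload never has to exist as a single buffer.

```python
import io
import zlib

# io.BytesIO stands in for a socket or file stream; the payload size
# is scaled down here purely for illustration.
source = io.BytesIO(b"chunked payload " * 4096)

compressor = zlib.compressobj()
compressed_parts = []
while True:
    chunk = source.read(1024)        # work on small chunks, not the whole payload
    if not chunk:
        break
    compressed_parts.append(compressor.compress(chunk))
compressed_parts.append(compressor.flush())  # emit any buffered tail
compressed = b"".join(compressed_parts)
```

Each `compressed_parts` piece could just as well be written straight to another socket or file as it is produced.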
An example where the streams came in really handy was with IMAP. IMAP supports the COMPRESS extension, which means that
the data going back and forth with the server is compressed using DEFLATE. Normally, the connection works with a SocketStream, but
all I needed to do was subclass InflateReadStream to create SstImap4CompressionStream. This stream wrapped the socketStream
and required very little work on my part to get the behavior. This would not have worked quite the same with one-shot APIs, which would have
complicated the implementation quite a bit.
- Seth