Hi Rick, welcome to the forums and great to hear you find Bonsai helpful!
To answer your questions:
1) It should very much be possible to bring the video down to almost any file size you want. The tradeoff, as you realize, will be quality. Also, what "quality" means really depends on the final purpose of the video. Will you do further image processing on it? Is it only for human eyes to process/classify? What kind of compression artifacts are acceptable?
In any case, one of the first knobs to turn in ffmpeg, besides picking the codec, is the output bitrate. This specifies how many bits per second the compression algorithm is allowed to produce, and hence is the most direct determinant of file size.
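Because the bitrate essentially fixes the output size, you can estimate the file size before recording anything. A quick back-of-the-envelope calculation (the numbers here are just illustrative):

```shell
# Rough expected file size for a 10-minute recording at a 2 Mbit/s
# target video bitrate (what you would pass to ffmpeg as -b:v 2000k).
bitrate_kbps=2000   # kilobits per second
duration_s=600      # 10 minutes
# kilobits -> megabytes: kbps * seconds / 8 (bits per byte) / 1000 (kB per MB)
echo "$(( bitrate_kbps * duration_s / 8 / 1000 )) MB"   # prints "150 MB"
```

(plus a small amount of container overhead, and audio if you record any).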
ffmpeg has a small collection of examples here:
It can be tricky to decide which parameters are best beforehand without experimentation. I recommend trying a couple of codecs and playing with either the constant rate factor (-crf) or a target video bitrate (-b:v). In terms of codecs, we found either mpeg4 (CPU) or nvenc (GPU - requires NVIDIA) to be the fastest for live streaming.
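As starting points, something like the following (untested sketches; the input/output file names are placeholders for your own files):

```shell
# CRF-based encoding with libx264: lower CRF means higher quality and a
# larger file; 23 is the libx264 default, ~18-28 is the usual range.
ffmpeg -i input.avi -c:v libx264 -crf 23 -preset veryfast output_crf.mp4

# mpeg4 (CPU) at a fixed target video bitrate.
ffmpeg -i input.avi -c:v mpeg4 -b:v 4M output_mpeg4.avi

# NVENC (GPU, NVIDIA only) at a fixed target video bitrate.
ffmpeg -i input.avi -c:v h264_nvenc -b:v 4M output_nvenc.mp4
```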
You can calibrate these parameters by recording videos of a fixed duration (e.g. 1 min) with different settings and comparing their quality vs file size.
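To make that sweep concrete, a sketch (assumes ffmpeg built with libx264 and a reference clip, hypothetically named reference.avi, to re-encode):

```shell
# Re-encode the same 1-minute reference clip at several CRF values,
# then compare the resulting file sizes; review the clips by eye
# to decide which quality level is acceptable.
for crf in 18 23 28 33; do
  ffmpeg -y -t 60 -i reference.avi -c:v libx264 -crf "$crf" "test_crf${crf}.mp4"
done
ls -lh test_crf*.mp4
```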
2) ImageWriter simply streams the raw data to ffmpeg as if it were a file. From that perspective, ffmpeg doesn't really know where the frames come from and in principle there should be no difference over encoding from a video (except you always have to reencode the stream, since it contains raw frames). My understanding is that ffmpeg will buffer frames itself in order to compute keyframes, etc, depending on the codec.
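Conceptually, what happens is equivalent to something like this on the command line (a sketch; `camera_source` is a hypothetical stand-in for whatever produces the raw frames, and the pixel format and geometry are placeholders for your own setup):

```shell
# Raw frames arrive on stdin ("-i -"); ffmpeg must be told their layout
# (-f rawvideo, -pix_fmt, -s, -r) because a raw stream carries no header.
# The stream is then encoded like any other input.
camera_source | ffmpeg -f rawvideo -pix_fmt gray -s 640x480 -r 30 -i - \
    -c:v mpeg4 -b:v 4M output.avi
```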
The main difference is that if the encoding process cannot keep up with image acquisition, you will end up growing a large buffer of pending frames in RAM, which may eventually blow up your process on longer recordings.
Hope this helps.