The point of your email is exactly correct: our documentation should include many more examples and descriptions of how PageSpeed manages CPU time.
However, I would like to point out that with image rewriting there is typically a burst of CPU activity when PageSpeed first "learns" a new site and optimizes all of its images.
You can control the CPU usage during this burst via a few different conf file settings:
pagespeed ImageMaxRewritesAtOnce xxx;
pagespeed NumRewriteThreads xxx;
pagespeed NumExpensiveRewriteThreads xxx;
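For example, on a small single-vCPU instance you might scale these all the way down to something like the following (the values here are illustrative guesses to show the shape of the tuning, not tested recommendations):

```
pagespeed ImageMaxRewritesAtOnce 1;
pagespeed NumRewriteThreads 1;
pagespeed NumExpensiveRewriteThreads 1;
```

Lower values make the initial optimization burst take longer but keep it from starving request serving on a constrained machine.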
Once the server-side cache is warm, the CPU impact is not that large. Even when an image's origin TTL expires, it does not need to be re-optimized unless it has actually changed.
We don't know what the exact right parameters are; we have tuned them for our desktop machines, which have 6 physical CPUs and lots of memory. On a lowest-price AWS instance they will need to be scaled back. It's also possible that we should do more platform analysis and try to auto-tune these settings, but there's risk in that too.
Finally, I want to point out that I am skeptical that the WordPress plugin can optimize images as aggressively as PageSpeed can, because PageSpeed can optimize images in the context of the page (it knows the rendered image dimensions) and it can optimize for the client (transcoding to WebP for Chrome but not for Firefox). We are constantly improving our image optimization stack and are champing at the bit to get our latest updates into your hands in our next release.
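To make the client-aware part concrete, the per-browser decision boils down to content negotiation: serve WebP only to clients that advertise support. This is a minimal illustrative sketch of that idea, not PageSpeed's actual implementation (the function name and the fallback format are assumptions for the example):

```python
def pick_image_format(accept_header):
    """Choose an output image format based on the client's Accept header.

    Browsers that support WebP (e.g. Chrome) advertise it with
    "image/webp" in the Accept header; clients that do not get a
    widely supported fallback format instead.
    """
    if "image/webp" in accept_header:
        return "webp"
    return "jpeg"
```

A server doing this must also send `Vary: Accept` so caches don't serve the WebP variant to a client that cannot decode it.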
But I agree with your point: we need to be more transparent about the risks, and we'll add this to the 'risks' section of the image-optimization doc.
-Josh