SGTM. I'd also like to switch some workflows from using GCS storage to using
GitHub's workflow artifacts if possible (for files under roughly 1-5GB). Workflow artifacts are easier to use from GitHub-hosted runners since they don't require additional authentication. Workflow artifacts also support custom retention policies, though I'm not sure yet whether our usage would incur billing costs that would prompt us to customize them.
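For reference, a minimal sketch of what a custom retention policy looks like with `actions/upload-artifact` (artifact name and path here are hypothetical placeholders):

```yaml
# Hypothetical workflow step: upload outputs as a workflow artifact with a
# custom retention period (the default retention is configured at the
# repository/org level, up to 90 days).
- name: Upload test outputs
  uses: actions/upload-artifact@v4
  with:
    name: test-outputs        # hypothetical artifact name
    path: build/outputs/      # hypothetical path
    retention-days: 7         # shorter retention to limit storage usage
```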
I expect that large benchmark artifacts (.vmfb files with real model weights embedded in them) will continue to use GCS or equivalent cloud storage. We might be able to optimize that down to a scale where workflow artifacts could be an option... depending on what data is being stored today. For example, we could use fake / splat weights or use parameters with external weights (store frequently changing .vmfb compiled program files in workflow artifacts and store infrequently changing .irpa parameter files in GCS).
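A rough sketch of that split, assuming hypothetical paths and bucket names (cloud authentication, e.g. via `google-github-actions/auth`, would need to be configured earlier in the job):

```yaml
# Hypothetical sketch: frequently changing compiled programs go to workflow
# artifacts, while infrequently changing parameter files go to GCS.
- name: Upload compiled programs (workflow artifact)
  uses: actions/upload-artifact@v4
  with:
    name: compiled-programs   # hypothetical artifact name
    path: build/*.vmfb        # hypothetical path
- name: Upload parameter files to GCS
  run: gcloud storage cp build/*.irpa gs://example-bucket/parameters/  # hypothetical bucket
```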
Any data that we want to archive for historical analysis (e.g. benchmark results) should definitely stay in a bucket that we have direct access to.