Thanks for sharing your questions here!
Cloud Storage may indeed be the appropriate tool for the task you describe.
Cost estimations
The Google Cloud Storage Pricing article describes the cost of storage for the various storage classes as well as the cost of network egress. With this information, you should be able to estimate the costs your organization would incur. The same article also includes some pricing examples to give you a better idea of the end result.
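As a rough, illustrative calculation only (the rate below is an assumption made up for the example, so substitute the current figures from the pricing page): storing 1 TB in Standard storage at, say, $0.020 per GB per month would come to 1,024 GB × $0.020 ≈ $20.48 per month, with egress and operation charges billed on top of that.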
Fastest export to the cloud
Assuming you mean the fastest way to upload data to a Cloud Storage bucket: gsutil is a command-line tool specifically for interacting with Cloud Storage buckets. It offers many Linux-like commands, such as gsutil cp, which can copy files local-to-bucket, bucket-to-local, or bucket-to-bucket. You can also pass the -r option to perform the copy recursively through subdirectories.
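For example (the bucket name and paths here are placeholders, not anything from your setup), and noting that gsutil's global -m flag runs operations in parallel, which usually speeds up large transfers:

    # copy a single file into a bucket
    gsutil cp report.csv gs://your-bucket/reports/

    # copy a whole directory tree into the bucket, in parallel
    gsutil -m cp -r /data/exports gs://your-bucket/exports/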
Weekly syncing
Sticking with the gsutil tool described above, I would point out the rsync command. As per the documentation:
The gsutil rsync command makes the contents under dst_url the same as the contents under src_url, by copying any missing files/objects (or those whose data has changed), and (if the -d option is specified) deleting any extra files/objects.
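To make that concrete, a minimal invocation might look like the following (the local path and bucket name are placeholders). Since -d deletes destination objects that no longer exist at the source, you may want a dry run first with the -n flag:

    # mirror the local folder into the bucket, recursing into subdirectories;
    # -d also removes bucket objects that no longer exist locally
    gsutil rsync -r -d /data/shared gs://your-bucket/shared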
If you cannot install gsutil on each of those systems but CAN read their drive contents remotely (e.g. via a mapped network drive), you could have gsutil on a single machine sync the content of the network drive to a bucket. This would increase your internal network traffic, though, as all data would have to pass through that single machine first. Otherwise, you could simply install gsutil on each machine so that each one uploads to the bucket directly.
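For the weekly part, a cron entry on each machine would do; the schedule, path, and bucket below are just a sketch, and the $(hostname) suffix is one way to keep each machine's content separate:

    # every Sunday at 02:00, sync /data to this machine's folder in the bucket
    0 2 * * 0 gsutil -m rsync -r /data gs://your-bucket/$(hostname)

Note that cron jobs run with a minimal PATH, so you may need to spell out the full path to the gsutil executable.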
Please note the system requirements for gsutil. I don't think you'll have any success installing it on a Solaris machine, though I've not tested this myself.
Hope this helps!