Hello David,
Based on my experience, it's production-capable.
Our production DB has about 6-7 terabytes of data stored in cstore. Our compression ratio is around 5-8x (as opposed to the 3x mentioned on the site).
The raw data is somewhere between 30 and 50 terabytes.
When we hit performance issues (once a cstore table grew to 30 GB or more), we worked around them by partitioning the tables, because skip indexes on text fields do not work as expected. (On number and date columns they work perfectly.)
I think that aspect could be improved. For now we simply create partitions on these fields (they contain values like JAN-16, FEB-17, MAR-18, etc.); see the sketch below.
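To give a rough idea, the setup looks something like this (a minimal sketch; the server, table, and column names are made up for the example, and on PostgreSQL 10+ you could instead use declarative partitioning with foreign tables as partitions):

    CREATE EXTENSION cstore_fdw;
    CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

    -- one cstore foreign table per period
    CREATE FOREIGN TABLE events_jan16 (
        period     text,          -- always 'JAN-16' here
        event_time timestamptz,
        payload    text
    ) SERVER cstore_server OPTIONS (compression 'pglz');

    CREATE FOREIGN TABLE events_feb17 (
        period     text,          -- always 'FEB-17' here
        event_time timestamptz,
        payload    text
    ) SERVER cstore_server OPTIONS (compression 'pglz');

    -- route queries through a UNION ALL view; the per-branch filters
    -- let constraint exclusion skip branches when a query filters on
    -- period, e.g. WHERE period = 'FEB-17' only scans events_feb17
    CREATE VIEW events AS
        SELECT * FROM events_jan16 WHERE period = 'JAN-16'
        UNION ALL
        SELECT * FROM events_feb17 WHERE period = 'FEB-17';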
Keep in mind that pg_dump will NOT work on the cstore tables. We have used \COPY to create files that can be used as backups.
But I think this will be fixed in a newer version of cstore.
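For what it's worth, the backup side looks roughly like this from psql (a sketch; the table name and file path are placeholders, not our real ones):

    -- dump a cstore table to a flat file; COPY TO needs the SELECT
    -- form because the source is a foreign table
    \copy (SELECT * FROM events_jan16) TO '/backups/events_jan16.csv' WITH (FORMAT csv)

    -- restore it later into a freshly created table with the same
    -- definition (cstore_fdw accepts COPY ... FROM for loading)
    \copy events_jan16 FROM '/backups/events_jan16.csv' WITH (FORMAT csv)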
Hope this helped.