The folks at DataStax have built their entire Hadoop file system
implementation around storing binary data in Cassandra. Yes, it is
possible, and done correctly it can perform very well. Download the
trial version of DataStax Enterprise, run through the tutorials,
then take a look at the column families it creates.
The general approach is to chunk the data into smaller fixed-size columns.
You can also get creative with composite column names to carry
metadata such as checksums and file-size information, depending on your
use case; a rough sketch of the idea follows below.
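To make the chunking idea concrete, here's a minimal sketch using the DataStax Python driver. The `files` keyspace, the `file_chunks` table, and the 2 MB chunk size are all illustrative assumptions, not the actual CFS schema; the point is simply one row per fixed-size chunk, keyed by file id and chunk index, with a checksum alongside each blob:

```python
# Illustrative sketch only -- keyspace, table, and chunk size are
# assumptions, not the schema DataStax actually uses.
import hashlib
from cassandra.cluster import Cluster

CHUNK_SIZE = 2 * 1024 * 1024  # 2 MB chunks; tune for your workload

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("files")  # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS file_chunks (
        file_id     text,
        chunk_index int,
        checksum    text,
        data        blob,
        PRIMARY KEY (file_id, chunk_index)
    )
""")

insert = session.prepare(
    "INSERT INTO file_chunks (file_id, chunk_index, checksum, data) "
    "VALUES (?, ?, ?, ?)"
)

def store_file(file_id, path):
    """Split the file into fixed-size chunks, one row per chunk."""
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # Checksum per chunk so corruption can be detected on read.
            checksum = hashlib.md5(chunk).hexdigest()
            session.execute(insert, (file_id, index, checksum, chunk))
            index += 1

def read_file(file_id):
    """Reassemble the file; rows come back in chunk_index order."""
    rows = session.execute(
        "SELECT data FROM file_chunks WHERE file_id = %s", (file_id,)
    )
    return b"".join(row.data for row in rows)
```

Because `chunk_index` is the clustering column, chunks for a file live together on one partition and stream back in order, which keeps both writes and sequential reads cheap.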
When the slides from the Cassandra Summit are posted (which should be
pretty soon), Jonathan's keynote has a picture of the data model they
use, along with some additional context. It is tuned for large HDFS
blocks, but it should still give you some good ideas.