You've got the general idea. If you only load, say, a few GB of data, Bigtable will produce only a handful of shards in the steady state, which causes two main problems:
* Rebalancing the data across your CBT nodes is less effective, since the sharding is coarse
* A transient issue with a single shard will cause a large fraction of your table to be temporarily unavailable
Bigtable also adjusts the sharding in response to your access patterns, so if you expect to throw sustained high throughput at your tables (enough to stress the minimum 3-node footprint, at least), things won't be quite so dire. However, if your throughput could be served by a single server for the foreseeable future, you probably want to consider a cheaper, simpler system. Note too that Bigtable reduces the number of shards after a period without load, so intermittent batch jobs over small data are likely to see reduced performance when they start up again.
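To make the "single server could handle it" threshold a bit more concrete, here's a rough back-of-envelope sketch in Python. The per-node throughput figure is an assumption for illustration (in the right ballpark for point reads on an SSD cluster), not an official number, so plug in your own measured rates:

```python
# Back-of-envelope check: would your sustained load meaningfully exercise
# Bigtable's minimum footprint, or would a single server suffice?

NODE_READ_QPS = 10_000  # assumed rough per-node point-read throughput (SSD)
MIN_NODES = 3           # minimum cluster footprint mentioned above

def stresses_min_footprint(sustained_qps: float) -> bool:
    """Return True if sustained load exceeds what one node can serve,
    i.e. the minimum 3-node cluster is actually being exercised."""
    return sustained_qps > NODE_READ_QPS

# A few hundred QPS won't stress the cluster; tens of thousands will.
low_traffic = stresses_min_footprint(500)      # False
high_traffic = stresses_min_footprint(25_000)  # True
```

If your sustained load sits well below one node's capacity, that's a signal the cheaper system is worth a look.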
Feel free to contact our PM (Misha, cc'd) or me off-list with more details about your use case if you still think Bigtable may be a viable option for you.