Hi,
I am trying to move an 18 TB legacy COMPACT STORAGE table to a CQL3-style keyspace on a 16-node Cassandra 2.0.11 cluster. The original table has both static and dynamic columns, which rules out Spark for this task - the DataStax driver only sees the static columns.
Since I have all the token ranges for my table via nodetool, I thought I could use your get_range_by_token in a loop to grab 1000 rows at a time and reload them into a separate cluster. My problem is that I can't make the second iteration start where the first one ended, since the rows returned by get_range_by_token only expose getKey(). I tried iterating like this:
```
while ($x = $cf->get_range_by_token($begin_token, $end_token, 1000)) {
    // upload $x into the new $dest_cf -- this part works great, no copying
    $begin_token = $cf->pack_key($last_key);  // $last_key = getKey() of the last row in $x
}
```
I've also tried passing $last_key without running it through pack_key, to no avail. It seems like there should be a way to get the token from a key - it's just a matter of using the correct hashing scheme, right?
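For reference, here is a sketch (in Python, just for illustration; the function name is mine, not phpcassa's) of how the token could be derived from a key, assuming the cluster uses the default RandomPartitioner: the token is the MD5 digest of the raw key, read as a signed big-endian 128-bit integer and absolute-valued.

```python
import hashlib

def random_partitioner_token(key: bytes) -> int:
    """RandomPartitioner token for a row key: abs() of the MD5 digest
    interpreted as a signed big-endian 128-bit integer (0 .. 2**127)."""
    digest = hashlib.md5(key).digest()
    return abs(int.from_bytes(digest, byteorder="big", signed=True))

# The next page would then start from the token of the last key copied,
# e.g. begin_token = str(random_partitioner_token(last_key))
```

Whether the next call should pass this token as-is or one past it depends on whether start_token is exclusive in the Thrift range query, so that part would need checking.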
Is it possible to iterate through a 5-billion-row table, 1000 rows at a time, to achieve this?
Please say yes! And if so, what am I doing wrong?
Cheers,
Chris