The probable reason is that all of those partitions fall within a single token range; an unrestricted SELECT returns rows in token order, so with a lot of data the first few rows have very close tokens.
Please run: select token(<partition-key>), <partition-key> from <table> limit 5;
For example, for the following table:
CREATE KEYSPACE keyspace1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;
CREATE TABLE keyspace1.standard1 (
key blob PRIMARY KEY,
"C0" blob,
"C1" blob,
"C2" blob,
"C3" blob,
"C4" blob
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL","rows_per_partition":"ALL"}'
AND comment = ''
AND compaction = {'class': 'SizeTieredCompactionStrategy'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
I have entered only 1000 rows.
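(As an aside, this schema is what cassandra-stress creates by default, so a comparable data set can be loaded with something along the lines of the following; exact options vary by version.)
cassandra-stress write n=1000
Sampling the first five tokens and keys: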
cqlsh -e "select token(key),key from keyspace1.standard1 limit 5;"
system.token(key) | key
----------------------+------------------------
-9206895928680792762 | 0x304e37394b3531333530
-9160549343292174675 | 0x4e4c4c3938374b383130
-9090087391989406011 | 0x36384b30374f33314c30
-9087663299618271833 | 0x344d334b4e35304b3331
-9086967334608494579 | 0x4b4f344d373130303030
nodetool getendpoints keyspace1 standard1 0x304e37394b3531333530
127.0.0.3
nodetool getendpoints keyspace1 standard1 0x4e4c4c3938374b383130
127.0.0.3
nodetool getendpoints keyspace1 standard1 0x36384b30374f33314c30
127.0.0.1
nodetool getendpoints keyspace1 standard1 0x344d334b4e35304b3331
127.0.0.2
nodetool getendpoints keyspace1 standard1 0x4b4f344d373130303030
127.0.0.3
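(If you have many keys to check, a small shell loop saves typing; this is just a convenience sketch, so substitute your own keyspace, table, and keys:)
for key in 0x304e37394b3531333530 0x4e4c4c3938374b383130 0x36384b30374f33314c30 0x344d334b4e35304b3331 0x4b4f344d373130303030; do
    nodetool getendpoints keyspace1 standard1 "$key"
done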
Let's try to correlate this with the token ownership in the cluster.
nodetool ring shows the tokens assigned to each node:
Datacenter: datacenter1
==========
Address Rack Status State Load Owns Token
9194719961254193351
127.0.0.3 rack1 Up Normal 133.85 KB ? -9216714678074898560
127.0.0.3 rack1 Up Normal 133.85 KB ? -9196942068020296577
127.0.0.1 rack1 Up Normal 149.99 KB ? -9193556696765974833
127.0.0.1 rack1 Up Normal 149.99 KB ? -9163012479991023323
127.0.0.3 rack1 Up Normal 133.85 KB ? -9133267367462170802
127.0.0.3 rack1 Up Normal 133.85 KB ? -9122303054788654514
127.0.0.2 rack1 Up Normal 132.46 KB ? -9111062212776789516
127.0.0.2 rack1 Up Normal 132.46 KB ? -9089666287802159920
127.0.0.3 rack1 Up Normal 133.85 KB ? -9015579794224601574
127.0.0.3 rack1 Up Normal 133.85 KB ? -9011296699060860916
127.0.0.2 rack1 Up Normal 132.46 KB ? -8976643905893965440
127.0.0.2 rack1 Up Normal 132.46 KB ? -8969426451078557545
127.0.0.1 rack1 Up Normal 149.99 KB ? -8953553517161799888
127.0.0.3 rack1 Up Normal 133.85 KB ? -8947606167129378819
.
.
.
This output describes token ranges: each node owns the range that starts just after the previous token in the list and ends at (and includes) its own token (the very first range wraps around from the highest token in the ring, 9194719961254193351, shown at the top). So the first three lines tell us that
127.0.0.3 owns the data with tokens in (-9216714678074898560, -9196942068020296577]
127.0.0.1 owns the data with tokens in (-9196942068020296577, -9193556696765974833]
.
.
Let's align the getendpoints output with the ring information.
For example, the partition key 0x36384b30374f33314c30 has token -9090087391989406011, which falls between two consecutive tokens held by 127.0.0.2, so (with RF=1) it is owned by 127.0.0.2:
127.0.0.2 rack1 Up Normal 132.46 KB ? -9111062212776789516
-9090087391989406011
127.0.0.2 rack1 Up Normal 132.46 KB ? -9089666287802159920
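(To double-check any single key end to end, you can ask Cassandra for its token directly and compare it against the ring output; a minimal sketch, assuming cqlsh and nodetool point at the same cluster:)
cqlsh -e "select token(key) from keyspace1.standard1 where key = 0x36384b30374f33314c30;"
nodetool getendpoints keyspace1 standard1 0x36384b30374f33314c30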
You can do the same for your data. I believe you have a lot of data, so the rows you sampled all fall within the same token range and therefore share the same endpoints.
I hope this helps.
Shlomi