Raft does indeed, at some level, depend on a single thread to enforce order when applying changes. And even if that limitation could be worked around, throughput is still bounded by a single node (the leader). But there are a couple of solutions we've used to deal with these problems...
First, we split the state machine into multiple independent state machines, with no ordering guarantees across them. Every operation is associated with exactly one state machine, and cross-state-machine operations go through two-phase commit. This lets each state machine read and apply operations independently, without regard to the order of operations in other state machines, and snapshots of each state machine are likewise taken independently. Similarly, we create a separate logical session for each state machine; all of a client's sessions share a single periodic keep-alive request, which carries the index relevant to each session's state machine (see linearizable semantics).
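A minimal sketch of that first idea, with hypothetical names (the real system's Raft log, sessions, and replication are elided): each state machine keeps its own log index and applies its own operations in order, independent of every other machine, and a single keep-alive reports the per-machine indexes.

```python
class StateMachine:
    """One independent state machine with its own ordered log."""

    def __init__(self, name):
        self.name = name
        self.state = {}
        self.last_applied = 0  # index reported back in keep-alives

    def apply(self, index, op):
        # Operations for this machine arrive in this machine's log order,
        # but that order is independent of every other state machine.
        assert index == self.last_applied + 1
        key, value = op
        self.state[key] = value
        self.last_applied = index


class Router:
    """Maps each operation to exactly one state machine."""

    def __init__(self, machines):
        self.machines = {m.name: m for m in machines}
        self.next_index = {m.name: 0 for m in machines}

    def submit(self, machine_name, op):
        # Each machine has its own monotonically increasing index;
        # there is no global order across machines.
        self.next_index[machine_name] += 1
        self.machines[machine_name].apply(self.next_index[machine_name], op)

    def keep_alive(self):
        # A single periodic request carrying, for each session's
        # state machine, the index relevant to that session.
        return {name: m.last_applied for name, m in self.machines.items()}
```

For example, submitting two operations to a "maps" machine and one to a "locks" machine leaves each machine at its own independent index, which is exactly what the shared keep-alive reports.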
Second, we scale horizontally: we partition the state among multiple Raft clusters and again use a two-phase commit protocol, with a fault-tolerant coordinator, for cross-partition transactions.
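To make the shape of that concrete, here is a sketch under stated assumptions: each `Partition` stands in for a whole Raft cluster, keys are hashed to partitions, and the coordinator's own fault tolerance (which the real system provides) is elided so only the prepare/commit message flow remains.

```python
import hashlib


class Partition:
    """Stand-in for one Raft cluster owning a slice of the keyspace."""

    def __init__(self, pid):
        self.pid = pid
        self.state = {}
        self.staged = {}  # writes prepared but not yet committed

    def prepare(self, txn_id, writes):
        # Phase 1: in the real system the intent would be replicated
        # through this partition's Raft log; here we just stage it.
        self.staged[txn_id] = writes
        return True

    def commit(self, txn_id):
        # Phase 2: apply the staged writes.
        for key, value in self.staged.pop(txn_id).items():
            self.state[key] = value

    def abort(self, txn_id):
        self.staged.pop(txn_id, None)


def partition_for(key, partitions):
    # Hash-based placement; any stable mapping of key -> partition works.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return partitions[digest % len(partitions)]


def two_phase_commit(txn_id, writes, partitions):
    """Coordinate a cross-partition transaction: prepare all, then commit all."""
    by_partition = {}
    for key, value in writes.items():
        by_partition.setdefault(partition_for(key, partitions), {})[key] = value
    if all(p.prepare(txn_id, w) for p, w in by_partition.items()):
        for p in by_partition:
            p.commit(txn_id)
        return True
    for p in by_partition:
        p.abort(txn_id)
    return False
```

Writes that land entirely within one partition skip the coordination cost; only transactions whose keys hash to different partitions pay for the extra round trip.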
In this architecture, the cluster effectively manages multiple state machines, each of which may or may not be partitioned among several physical Raft clusters. Ordering guarantees do not hold across state machines, and coordinating state machines and partitions is expensive, but operations that cross those boundaries are rare in our use case.
Of course, none of this is particularly novel or unexpected. But if a single Raft instance is viewed as a component of a larger system, the ordering limitations are not hard to overcome.