The 'indexed data structure' that many projects reach for nowadays is the built-in (Hash)Map that newer versions of Erlang/OTP provide. The keys of a map can be seen as a poor man's pointers when you want a data structure with (amortized) constant-time field access and updates. This is, for instance, the approach I took when implementing sparse vectors/matrices/tensors in the Tensor library (https://github.com/qqwy/tensor). The nice thing about offloading most of the work to a built-in data structure is that it is handled by compiled code, which is furthermore allowed to 'cheat' the functional rules (as long as the final outcome remains pure).
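To make the idea concrete, here is a minimal sketch (not the actual Tensor implementation; the module and field names are mine) of using a map as a sparse vector: only non-default entries are stored, and the map's (effectively) constant-time access and update stand in for pointer-based field access.

```elixir
defmodule SparseVector do
  # Only indices whose value differs from `default` live in `contents`.
  defstruct contents: %{}, default: 0, length: 0

  def new(length, default \\ 0) do
    %__MODULE__{contents: %{}, default: default, length: length}
  end

  # Reading falls back to the default for indices that were never written.
  def get(%__MODULE__{contents: contents, default: default}, index) do
    Map.get(contents, index, default)
  end

  # Writing the default value deletes the entry, keeping the map sparse.
  def put(%__MODULE__{default: default} = vector, index, value) when value == default do
    %{vector | contents: Map.delete(vector.contents, index)}
  end

  def put(%__MODULE__{} = vector, index, value) do
    %{vector | contents: Map.put(vector.contents, index, value)}
  end
end

vector = SparseVector.new(1_000_000) |> SparseVector.put(42, 3.14)
SparseVector.get(vector, 42) # => 3.14
SparseVector.get(vector, 0)  # => 0 (the default; never stored)
```

Because untouched indices cost nothing, a million-element vector with one non-zero entry stores exactly one key.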
It is worth noting that many of Erlang's built-in tools predate map support, which means there are now more options available to us for implementing certain data structures differently.
(That being said, maps are definitely slower than plain tuples, at least up to a certain size, and depending on the kind of structure you want to build and the guarantees it should provide.) Benchmarking will show what happens in practice.
I'd love for there to be an Elixir variant of Haskell's Data.Sequence (which is built on top of 2-3 finger trees and has amortized O(1) appending and prepending, and O(log(min(length(seq1), length(seq2)))) concatenation) and of Patricia trees/sets.
On a related note, I've lately done a little bit of work to write out the different efficient functional FIFO queues (and deques) that Okasaki wrote about, both the trivial amortized O(1) variant (which is similar to what :queue does now) and the less trivial hard real-time O(1) variant. I plan to benchmark these against :queue and each other, to see for what input which version works the best.
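For reference, here is a sketch of the classic two-list amortized-O(1) FIFO queue that Okasaki describes (and that :queue is a refinement of): enqueue conses onto the rear list, dequeue pops the front list, and the rear is reversed into the front only when the front runs empty. The module name is hypothetical, not from my actual benchmarking code.

```elixir
defmodule TwoListQueue do
  # The queue is a pair {front, rear}; elements leave from `front`
  # and enter at the head of `rear`.
  def new, do: {[], []}

  # Enqueue is a single cons: O(1) always.
  def push({front, rear}, item), do: {front, [item | rear]}

  # Dequeue pops the front list. The occasional Enum.reverse of the
  # rear list is what makes this O(1) only in the amortized sense:
  # each element is reversed at most once over its lifetime.
  def pop({[], []}), do: :empty
  def pop({[], rear}), do: pop({Enum.reverse(rear), []})
  def pop({[head | front], rear}), do: {head, {front, rear}}
end

queue = TwoListQueue.new() |> TwoListQueue.push(1) |> TwoListQueue.push(2)
{1, queue} = TwoListQueue.pop(queue)
{2, _queue} = TwoListQueue.pop(queue)
```

The hard real-time variant avoids that lurking full reverse by performing the reversal incrementally, a step or two per operation, which is exactly what makes it the less trivial of the two.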
Data structures really are fun! :D