OpenRefine was originally designed and tested around workloads of roughly 1 million rows. Throwing more memory at it can push past that original design, but it's not always the most effective way forward for certain use cases.
In addition to what Owen said, you might think about the operations you want to perform on that file.
GNU tools like awk and sed are quite handy and have no limit on file size beyond what the underlying filesystem (ext2/3, etc.) can hold, since they process one line at a time.
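As a quick sketch of that streaming style (the file name, columns, and values here are invented for illustration, not from your data):

```shell
# Tiny stand-in for what would be a multi-million-row CSV in practice
printf 'id,name,city\n1,Alice,NYC\n2,Bob,LA\n3,Carol,NYC\n' > data.csv

# sed streams every line through an edit, e.g. normalizing "NYC" to "New York";
# nothing is ever loaded into memory beyond the current line
sed 's/NYC/New York/g' data.csv > cleaned.csv

# awk filters rows by a column value and prints selected fields,
# again one line at a time regardless of total file size
awk -F',' '$3 == "New York" {print $1 "," $2}' cleaned.csv
# prints:
# 1,Alice
# 3,Carol
```

The same two commands behave identically on a 50 GB file; only disk I/O, not RAM, is the limiting factor.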
If you're looking to do large pattern analysis or clustering on one or more columns with millions or billions of rows, you might try database technologies like MongoDB, or even search technologies like Elasticsearch. I have used both for very expressive multi-million-record search/analyze/replace/transform work; you just have to learn their built-in functions (or sometimes plugins), which can perform a lot of magic for just a little bit of learning.