James,
All the benefits of using Hadoop to store your data disappear if you
then try to pull all the data into memory at one time. You will need to
find a version of your data processing algorithm that is designed to
work in chunks or in a distributed fashion, so that the full dataset is
never loaded into memory at once. There are lots of libraries
for doing this kind of thing these days, and fortunately many of them
have R bindings. Look into Spark, H2O, etc. (I'm sure others will have
more suggestions). What specifically are you trying to accomplish?
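
As a minimal sketch of what that looks like with Spark's R binding (sparklyr) — the path, table name, and column names here are hypothetical placeholders, and this assumes a local Spark installation:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# spark_read_csv registers the file with Spark; the data stays in
# Spark's storage, not in R's memory
flights <- spark_read_csv(sc, name = "flights",
                          path = "hdfs:///user/james/flights.csv")

# dplyr verbs are translated to Spark SQL and executed by Spark;
# only the small aggregated result is pulled back into R by collect()
flights %>%
  group_by(carrier) %>%
  summarise(mean_delay = mean(dep_delay, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```

The key idea is that the heavy computation happens where the data lives, and only the (small) result crosses into your R session.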