I am fitting my Keras estimator on a Spark DataFrame that I read from HDFS. Unlike a TensorFlow Dataset, it seems to load all of the data into memory, which causes memory issues. Is there a way to optimize memory usage so that training consumes the data one batch (of `batch_size` rows) at a time instead of materializing the whole dataset in memory, if that is possible?
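For context, the kind of streaming I am hoping for is essentially a generator that yields one batch at a time instead of one big in-memory array. Here is a pure-Python sketch of that idea (no Spark or Keras involved; the function name and shapes are just mine for illustration):

```python
def batch_iter(rows, batch_size):
    """Yield successive lists of at most `batch_size` items from `rows`.

    `rows` can be any iterable (e.g. a lazy iterator over partitions),
    so only one batch ever needs to be held in memory at a time.
    """
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch


# Example: ten rows in batches of four -> sizes 4, 4, 2
sizes = [len(b) for b in batch_iter(range(10), 4)]
print(sizes)
```

Is there an equivalent way to feed a Spark DataFrame to `model.fit` batch by batch like this, rather than collecting everything first?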
Thanks,