Hi.
I found that the MR3 release limits the aggregate memory of ContainerWorkers to 256 GB on Kubernetes and 512 GB on Hadoop.
Could this limit be raised to 1 TB?
In production, we often need more than 10 TB of resources, and I believe this is a common scenario for most companies running Apache Hive. If MR3's memory cap is too low, it cannot meet our stress-test conditions. Users who want to benchmark MR3 against other engines on large datasets typically need at least 1 TB of memory.
On the other hand, 1 TB of memory is not large enough to affect the commercial license fees for MR3.
So, dear developer team, what do you think of this proposal?
Looking forward to your reply.