I would like to deploy an Apache Apex application to the DataTorrent RTS sandbox running in Docker.
My application consists of 5 operators. Two of these operators are supposed to run in parallel on 3 containers each, using partitioning.
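The partitioning is set via the PARTITIONER operator attribute in properties.xml, along these lines (MyOperator is a placeholder name; this sketch uses the stock StatelessPartitioner):

<property>
  <!-- split MyOperator into 3 static partitions -->
  <name>dt.operator.MyOperator.attr.PARTITIONER</name>
  <value>com.datatorrent.common.partitioner.StatelessPartitioner:3</value>
</property>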
When I try to launch my application with these specifications, all of its containers get stuck in the status PENDING_DEPLOY.
If I remove the partitioning, the app and all of its containers launch and run without problems (the memory allocation in this case is 6 GB).
I realized that my containers always end up in the state PENDING_DEPLOY whenever the total memory allocation would exceed 8 GB.
While trying to find a solution for this problem, I found that there is a YARN property called yarn.nodemanager.resource.memory-mb. According to this link, that value defaults to 8192 MB.
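(If I understand it correctly, the value that is actually in effect per node also shows up as "Memory Total" in the YARN ResourceManager web UI, on the default port 8088.)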
Inside the container there is a file /etc/hadoop/conf/yarn-site.xml. I tried to increase the memory to ~12 GB (my machine has about 15 GB of RAM) by adding the following property to that file:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12000</value>
</property>
I also tried adding this property, as proposed in the docs:
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>10</value>
</property>
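For reference, after these edits the relevant part of /etc/hadoop/conf/yarn-site.xml looks roughly like this (the properties that were already in the file are left out):

<configuration>
  <!-- ... existing properties unchanged ... -->

  <!-- physical memory the NodeManager may allocate to containers, in MB -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>12000</value>
  </property>

  <!-- ratio of allowed virtual memory to physical memory per container -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>10</value>
  </property>
</configuration>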
After changing the values, I launched my application again, but without success: the containers remain in PENDING_DEPLOY.
In dtManage the memory allocation again appears to be stuck at 8 GB.
I also tried restarting the Docker container, changing the values as described right away, and only then following the Installation Wizard steps in dtManage. Same result.
Does anyone know how I can use the Docker image with more than 8 GB of memory?