Re: [rrKnights] rendering issues - can they be related to job settings such as the sequence divide?


RR support, Schoenberger

Sep 23, 2021, 12:32:50 PM
to rrKn...@googlegroups.com
Hi

There are scenes whose memory usage increases the more frames they render,
but most scenes do not increase it that much.
It depends on the scene.

I would check the memory & CPU stats.
Note that an issue with multiple job threads is not necessarily caused
by the RAM; it can just as well be the network traffic of the machine,
the network switches, or the fileserver.

And check if KeepSceneOpen is enabled for a job.


Note: the page file of a machine can be a real slowdown.
If you see that the available memory is very low during your render,
then I recommend setting the page file to a fixed size of only 5GB,
so that it has enough space for all system executables, but not for the renderer.
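As a sketch of that setting (assuming a Windows client, an elevated command prompt, and that the page file lives on C: — adjust the path and sizes to your setup), the page file can be pinned to a fixed 5GB via wmic:

```shell
:: Stop Windows from managing the page file size automatically,
:: then pin it to a fixed 5GB (5120 MB) initial and maximum size,
:: so the OS keeps working but the renderer cannot swap endlessly.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=5120,MaximumSize=5120
```

A reboot is needed before the new fixed size takes effect.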



A) Limit jobs per client
> at first I switch-off the multiple instance
Note that you can set the job option Required Memory to limit the
number of jobs a client can take. (job threads of the client, not
the multi-instance option inside the job).

Note that this requires setting such a value for all jobs (you could set
a default of 10GB via the submitter default files).
E.g. if you set your problem job to 60GB, your machine has 64GB, the
OS uses 3GB and the other jobs have a Required Memory value of 1GB,
the client will still start two jobs: the 60GB job plus one 1GB job.
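The arithmetic behind that example can be sketched like this (a simplified model for illustration, not RR's actual scheduler code; the function name is made up):

```python
def jobs_client_can_start(installed_gb, os_usage_gb, required_memory_gb):
    """Greedily count how many jobs fit on one client.

    Simplified model of the idea above: a job starts as long as the sum
    of the Required Memory values of already-running jobs plus the OS
    usage still fits into the installed RAM.
    """
    free = installed_gb - os_usage_gb
    started = 0
    for need in required_memory_gb:
        if need <= free:
            started += 1
            free -= need
    return started

# The example from the text: a 60GB job on a 64GB machine with 3GB OS
# usage leaves 1GB, so one more 1GB job still starts: two jobs total.
print(jobs_client_can_start(64, 3, [60, 1, 1]))  # -> 2
```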



B) Memory&CPU stats:
1)
You can take a look at the frames-rendered table in rrControl, tab Job Log.
It states the memory and CPU usage of Maya+Arnold per frame.

It also states how many job threads were running while the frame
rendered.
(Job threads of the client, not the multi-instance option inside the job.)

2)
Then the render log states at the end
"Max memory usage: 0.1GB (of 32GB installed, total system usage at
same time: 6.8GB.)"
So you can see if the system memory usage was too close to the
installed memory.
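If you want to check that headroom across many render logs at once, the summary line can be parsed, e.g. like this (a sketch; the line format is taken from the example quoted above, and the 90% warning threshold is my own choice):

```python
import re

# Matches the summary line quoted above, e.g.:
# "Max memory usage: 0.1GB (of 32GB installed, total system usage at same time: 6.8GB.)"
LINE = re.compile(
    r"Max memory usage:\s*([\d.]+)GB\s*\(of\s*([\d.]+)GB installed,"
    r"\s*total system usage at\s*same time:\s*([\d.]+)GB"
)

def memory_headroom(log_text, warn_ratio=0.9):
    """Return (job_gb, installed_gb, system_gb, too_close), or None if
    the summary line is not found in the log text."""
    m = LINE.search(log_text)
    if not m:
        return None
    job_gb, installed_gb, system_gb = (float(g) for g in m.groups())
    too_close = system_gb > warn_ratio * installed_gb
    return job_gb, installed_gb, system_gb, too_close
```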

3)
By default the rrClient config has an option enabled that aborts the
render if the OS/system does not get enough memory to run stably. (Not
sure whether it was disabled in your config.)

4)
If you raise the Verbose level in the override section of the job,
then Arnold prints a lot of information at the end, including whether
textures had to be re-loaded because of memory issues.
AND you can see how much time each part of pre-processing/rendering
takes, e.g. whether it is the fileserver/network traffic rather than
the memory.

5)
Just to mention it for a complete list: the next RR major version
will have a new feature that logs the CPU and memory usage of each
minute in a render log for each frame segment.
So you can see at which point the CPU usage drops, or to which value
the memory usage increased.



C) Scene load time:
The render log states "Scene load time:".
It does not include the Maya startup time, but you can calculate that
yourself from the timestamp stated in the log before Maya starts.
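For example, the startup time is just the difference between two log timestamps (a sketch; the timestamp format here is a made-up example, so adapt it to your log's actual format):

```python
from datetime import datetime

def maya_startup_seconds(launch_stamp, scene_load_start_stamp, fmt="%H:%M:%S"):
    """Seconds between the log timestamp written before Maya starts and
    the timestamp at which the scene load begins.
    The timestamp format is an assumption; pass your log's real format."""
    launch = datetime.strptime(launch_stamp, fmt)
    load_start = datetime.strptime(scene_load_start_stamp, fmt)
    return (load_start - launch).total_seconds()

print(maya_startup_seconds("12:00:05", "12:01:35"))  # -> 90.0
```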




D) Sequence Divide and KeepSceneOpen:
If your job has the job option KeepSceneOpen enabled, then Maya is not
closed between frame segments.
So it does not matter whether your frame segment size is 2 frames or 10 frames.
Note that the memory stats ( B) 2) above ) are stated for each frame
segment if you use KSO.





Holger Schönberger

Please use the rrKnights Tavern or
our support system for new questions

RR support, Schoenberger

Sep 23, 2021, 12:37:16 PM
to rrKn...@googlegroups.com
Hi

> that I could try to decrease the hanging client's situation described above?
Does RR detect the frozen renderer, or do you abort it?
If you abort it, RR states the CPU usage at abort in the render log.

RR's frozen-render detection requires:
- that the core usage was at some point higher than 1 core (= Arnold
started rendering the frame)
- that the core usage has been lower than 1.6 cores for 15 minutes
(can be changed in your config)
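That detection logic can be sketched as follows (a simplified illustration of the two conditions, not RR's actual code; the function and parameter names are made up):

```python
def looks_frozen(core_usage_samples, sample_minutes=1,
                 start_threshold=1.0, idle_threshold=1.6, idle_minutes=15):
    """core_usage_samples: cores in use, sampled every sample_minutes.

    "Frozen" means: usage once exceeded start_threshold (so the render
    really started), and every sample in the most recent idle_minutes
    stayed below idle_threshold.
    """
    if not any(u > start_threshold for u in core_usage_samples):
        return False  # renderer never really started, so not "frozen"
    needed = idle_minutes // sample_minutes
    recent = core_usage_samples[-needed:]
    return len(recent) >= needed and all(u < idle_threshold for u in recent)

# 20 minutes of real work on 16 cores, then 15 minutes under 1.6 cores:
samples = [15.0] * 20 + [0.5] * 15
print(looks_frozen(samples))  # -> True
```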