Hi,
When using a TF graph and session via the C++ API, closing the session does not appear to release the memory (RAM).
Here is a code snippet:
```
tensorflow::GraphDef graph_def;
tensorflow::Status graphLoadedStatus = ReadBinaryProto(tensorflow::Env::Default(),graphFile,&graph_def);
tensorflow::SessionOptions options;
std::unique_ptr<tensorflow::Session> session = std::unique_ptr<tensorflow::Session>(tensorflow::NewSession(options));
tensorflow::Status session_create_status = session->Create(graph_def);
...
tensorflow::Status run_status = session->Run({{_inputLayer,*(vtfinputs.begin())}},{_outputLayer},{},&finalOutput); /* runs fine, results are what they should be, and are acquired via finalOutput. */
...
session->Close();
session.reset();
/* RAM keeps building up if code above put into a loop */
```
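For completeness, here is roughly how the snippet above is driven in a loop to reproduce the issue (a minimal sketch only; the wrapper function, the 1000-iteration count, and the parameters `graphFile`, `vtfinputs`, `_inputLayer` and `_outputLayer` are placeholders standing in for my actual code):
```
#include <memory>
#include <string>
#include <vector>

#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

// Sketch of the repro loop: load the graph, create a session, run it once,
// then close and destroy the session, over and over.
void RunLoop(const std::string& graphFile,
             const std::vector<tensorflow::Tensor>& vtfinputs,
             const std::string& _inputLayer,
             const std::string& _outputLayer) {
  for (int i = 0; i < 1000; ++i) {
    tensorflow::GraphDef graph_def;
    tensorflow::Status load_status =
        tensorflow::ReadBinaryProto(tensorflow::Env::Default(), graphFile, &graph_def);

    tensorflow::SessionOptions options;
    std::unique_ptr<tensorflow::Session> session(tensorflow::NewSession(options));
    tensorflow::Status create_status = session->Create(graph_def);

    std::vector<tensorflow::Tensor> finalOutput;
    tensorflow::Status run_status = session->Run(
        {{_inputLayer, *(vtfinputs.begin())}}, {_outputLayer}, {}, &finalOutput);

    session->Close();
    session.reset();
    // Resident memory (RSS) grows with every iteration.
  }
}
```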
I've read all I could find, and I understand that closing the session does not release the underlying graph. In Python, it seems that `reset_default_graph()` releases the graph resources.
What is (or would be) the equivalent of `reset_default_graph()` in C++? Looking at `python/client/ops.py`, I was unable to link the graph stack reset to a C++ counterpart.
As a side note, `session->Reset(options, containers)` does not appear to compile (`Reset` does not exist for `Session`). Looking at `session.h` and `direct_session.h`, I don't understand why that is.
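For reference, this is what I tried (just a sketch; `containers` is an empty vector in my test, and `session` is the pointer from the snippet above):
```
tensorflow::SessionOptions options;
std::vector<std::string> containers;  // left empty: I want to reset everything

// This is the line that fails to build for me, since Reset is not a
// member of tensorflow::Session:
// tensorflow::Status reset_status = session->Reset(options, containers);
```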
Could someone explain which exact `Session` subclass is used by the C++ API, by any chance?
(GPU memory also appears not to be released when checking with `nvidia-smi`, but I believe that is an `nvidia-smi` reporting issue, as in practice the GPU memory appears to be reusable, so no problem on that side.)
I've spent a decent amount of time on this one, so any help and pointers are very much appreciated!
Thanks,
Em.