Dear organizers,
For our submission #1070, we are encountering a CUDA out-of-memory error when the evaluation process starts loading the decoder model. We have tested the decoder locally on a GPU with the same amount of memory, and it works fine. We also find it hard to believe that loading the decoder model alone would allocate 16 GB.
Could you help us look into this problem?
Best,
Nannan