Regarding your last question: yes, when you create a Tensor in Torch it is stored in RAM. The tensor you are creating, x = torch.Tensor(100000, 120, 160):fill(1), is a DoubleTensor by default, so it takes 100000 × 120 × 160 × 8 bytes ≈ 15.4 GB; you may simply not have enough RAM for it.
If you do have enough RAM to hold the whole tensor, the problem you are seeing happens because Lua's garbage collector has not yet freed the tensors that no longer have live references. Two things will help:
First, put a collectgarbage() call inside the for loop every ~50 iterations.
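For example (a sketch; nImages here is a stand-in for your actual loop bound):

for i = 1, nImages do
   -- ... load and process image i ...
   if i % 50 == 0 then
      collectgarbage()  -- frees tensors that no longer have live references
   end
end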
Second, torch.cat allocates a new result tensor on every call. To make things faster, preallocate the big tensor once and then copy each loaded image directly into a slice of it.
For example:
images = torch.Tensor(100000, 120, 160)  -- allocated once, up front
i = 1
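Then, inside the loading loop, copy each image into its slice instead of calling torch.cat. A minimal sketch, assuming files is a table of image paths (a hypothetical name) and each image is 120x160 single-channel:

require 'image'

for _, f in ipairs(files) do
   -- image.load(f, 1) returns a 1x120x160 tensor; copy() only requires
   -- matching element counts, so it fills the 120x160 slice directly
   images[i]:copy(image.load(f, 1))
   i = i + 1
   if i % 50 == 0 then collectgarbage() end
end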