Thanks for your post. I have a question about the Torch tensors. The first image-loading tensor, "imageAll", has dimensions (41267, 3, 32, 32), i.e. (number of images, channels, width, height). However, the training tensor "trainData.data" and the testing tensor "testData.data" have dimensions (&lt;number of train or test images&gt;, 1, 32, 32) after cloning data from "imageAll". Why did the channel count of "trainData.data" and "testData.data" drop from 3 to 1? Is it because the subsequent code uses only the 'Y' channel to compute the mean of the images and normalize them?
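If it helps to make the question concrete, here is a minimal NumPy sketch (variable names and weights are my own illustration, not the tutorial's actual code) of how keeping only the luminance (Y) channel after an RGB-to-YUV conversion changes the batch shape from (N, 3, 32, 32) to (N, 1, 32, 32):

```python
import numpy as np

# Hypothetical stand-in for "imageAll": a small RGB batch
# shaped (num_images, channels, height, width).
image_all = np.random.rand(10, 3, 32, 32).astype(np.float32)

# Split the three color channels.
r, g, b = image_all[:, 0], image_all[:, 1], image_all[:, 2]

# Standard luma weights for RGB -> Y; result has shape (10, 32, 32).
y = 0.299 * r + 0.587 * g + 0.114 * b

# Restore an explicit channel axis: shape becomes (10, 1, 32, 32),
# which would match what "trainData.data" reports.
train_like = y[:, np.newaxis, :, :]

print(image_all.shape)   # (10, 3, 32, 32)
print(train_like.shape)  # (10, 1, 32, 32)
```

So a channel count of 1 in the cloned tensors would be consistent with only the Y channel being kept, which is my guess about what the mean/normalization code is doing.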