I see that in your first for loop you are reading images from a directory, resizing them, and overwriting the originals with the resized versions. It's better practice to keep the original images as they are and resize them on the fly (while your model trains). In the second for loop you are reading the same resized images again, converting them to arrays, and appending them to a list - which in my opinion is a bad practice and is what's filling up your RAM and causing the memory problem. It works for CIFAR because those images are only 32px along each dimension; your images are 7 times larger along each dimension, so roughly 49 times as many pixels per image.
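To see why that list blows up, here is a rough back-of-the-envelope estimate (assuming 224x224 RGB images stored as float32, and a hypothetical 10,000-image dataset - plug in your own numbers):

bytes_per_image = 224 * 224 * 3 * 4      # ~0.6 MB per image as float32
n_images = 10000                         # hypothetical dataset size - use your own count
total_gb = bytes_per_image * n_images / 1024**3
print(f"{total_gb:.1f} GB")              # ~5.6 GB of RAM just for the raw pixel data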
I would suggest using cv2 for this. cv2 reads images as NumPy arrays, so you can feed them directly to your model.
Suggested code:
import os
import cv2
import numpy as np

path = os.getcwd() + "/"
dirs = os.listdir(path)
n_samples = len(dirs)

X = np.empty((n_samples, 224, 224, 3))
index = 0
for file in dirs:  # demo code - adapt the loop to your directory structure
    image = cv2.imread(path + file, 1)  # 1 for color (OpenCV loads it in BGR order), 0 for grayscale
    resized_image = cv2.resize(image, (224, 224))  # (width, height)
    X[index, :, :, :] = resized_image
    index += 1
X is a 4D tensor (NumPy array) with dimensions [n_samples, height, width, channels].
You can feed X directly to a Keras model using model.fit, or use model.train_on_batch if X can't fit into memory all at once (which is likely in your case).
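For the model.fit case, a minimal sketch (assuming model is an already-compiled Keras model and Y is an (n_samples, n_classes) array of one-hot labels matching X):

model.fit(X, Y, batch_size=32, epochs=10, validation_split=0.1)  # adjust epochs / validation split to your needs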
Memory-efficient code:
batch_size = 32
n_batches = len(dirs) // batch_size

for batch in range(n_batches):  # demo code - adapt the loop to your directory structure and number of epochs
    X = np.empty((batch_size, 224, 224, 3))
    Y = np.empty((batch_size, n_classes))
    for index, file in enumerate(dirs[batch * batch_size:(batch + 1) * batch_size]):
        image = cv2.imread(path + file, 1)  # 1 for color, 0 for grayscale
        resized_image = cv2.resize(image, (224, 224))  # (width, height)
        X[index, :, :, :] = resized_image  # a batch of 32 images shouldn't cause any memory error; you can experiment with batch_size as well
        Y[index, :] = label  # label must be the one-hot vector for this image
    # X and Y now hold the data for one batch
    model.train_on_batch(X, Y)
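If you prefer to let Keras drive the training loop, the same idea can be wrapped in a Python generator; this is only a sketch, assuming a hypothetical get_label(file) helper that returns the one-hot vector for a file (use model.fit_generator in older Keras, or model.fit in newer versions, which also accepts generators):

def batch_generator(dirs, batch_size=32):
    # Yields (X, Y) batches forever, resizing images on the fly instead of preloading them all.
    while True:
        for start in range(0, len(dirs) - batch_size + 1, batch_size):
            X = np.empty((batch_size, 224, 224, 3))
            Y = np.empty((batch_size, n_classes))
            for i, file in enumerate(dirs[start:start + batch_size]):
                image = cv2.imread(path + file, 1)
                X[i] = cv2.resize(image, (224, 224))
                Y[i] = get_label(file)  # hypothetical helper returning a one-hot label
            yield X, Y

model.fit_generator(batch_generator(dirs), steps_per_epoch=len(dirs) // 32, epochs=10)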
Hope this helps.