Thanks for sharing your project file today - I think I see a likely source of the problem.
I think the problem stems from a misunderstanding about the relationship between the "backdrop" sprite (the sprite with the room background and the window) and the "window" sprite (the transparent sprite that lets you see the webcam view).
The recognition code - the blocks that grab what is visible and run it through the machine learning model - is all attached to the window sprite. This means that the size and location of the window sprite determine what is recognised.
I can see that you made some changes to the size and dimensions of the backdrop sprite, but without making corresponding changes to the window sprite.
If you click on the window sprite, it will be briefly highlighted, which should show you what I mean.
When I tried your project, before noticing the problem, I was doing things like this:
which resulted in images like the following being sent to the machine learning model:
(note that images are always skewed to be square for processing by ML models, so that is normal - what is relevant here is what subset of the webcam view was submitted for classifying, not the skewing)
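To make the cropping and squashing concrete, here is a minimal sketch of the idea. This is not the actual code the tool uses - the function name, the nearest-neighbour resampling, and the use of a plain 2D list for the frame are all my own illustrative assumptions - but it shows how only the window's rectangle is taken from the webcam frame, and how that rectangle then gets forced into a square regardless of its original shape:

```python
def crop_and_square(frame, x, y, w, h, size):
    """Take the w-by-h region of `frame` at (x, y) and squash it
    into a size-by-size square using nearest-neighbour sampling.
    `frame` is a 2D list (rows of pixel values)."""
    # Only the window's rectangle is kept - everything else is discarded.
    region = [row[x:x + w] for row in frame[y:y + h]]
    # Resample the rectangle into a square, skewing it if w != h.
    return [
        [region[i * h // size][j * w // size] for j in range(size)]
        for i in range(size)
    ]

# A tiny 8x8 "webcam frame" where each pixel encodes its row and column.
frame = [[r * 10 + c for c in range(8)] for r in range(8)]

# Crop a wide 4x2 window region and squash it to a 2x2 square.
print(crop_and_square(frame, x=2, y=2, w=4, h=2, size=2))
# → [[22, 24], [32, 34]]
```

The point of the sketch is that the model never sees pixels outside the window's rectangle - which is why moving or resizing the backdrop without moving the window changes what gets classified.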
As a result, I can see why your model rarely recognises "looking": what you're trying to classify is probably very different to any of your training examples. Depending on where your test subject is, it's entirely possible that none of their face is even in the frame that the model sees.
Kind regards
D
PS - when fixing this, be aware that the "backdrop image" block that grabs the image to submit to the model will include other sprites. So if your panda sprite is in front of the window, it will be submitted to the ML model for recognising as well. That wasn't happening in this case, because the panda sprite doesn't overlap with the window sprite. But if you just grow the window sprite to match the change you made to the backdrop sprite, you will hit that problem, and that will also affect your results.
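A quick way to think about whether a sprite will contaminate the captured image is a simple rectangle-overlap check. Again, this is just an illustrative sketch (the function name and the (x, y, width, height) tuples are my own, not anything from the tool), but it captures the test you'd want to do in your head before growing the window:

```python
def rects_overlap(a, b):
    """Return True if two rectangles overlap.
    Each rectangle is an (x, y, width, height) tuple."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # They overlap unless one is entirely to the side of,
    # or entirely above/below, the other.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

window = (0, 0, 100, 100)      # the window sprite's area
panda_far = (150, 150, 50, 50) # panda well clear of the window
panda_near = (50, 50, 60, 60)  # panda partly in front of the window

print(rects_overlap(window, panda_far))   # → False (panda not captured)
print(rects_overlap(window, panda_near))  # → True (panda would be in the image)
```

If growing the window makes this check come back True for the panda, you'd want to move the panda (or the window) so they no longer overlap before training or testing.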