I assume that your training images, and the images you're using to test the model outside of Scratch, all have white backgrounds.
The images that you're using in your Scratch project do not have a white background.
In other words, the first thing that jumps out at me here is that you're saying your Scratch project is providing the ML model with a type of image that you've not trained it to recognise, or ever tested it with before.
(I realise that to a human, a white background and a transparent background feel sort of equivalent or even synonymous. But to a machine, they're different.)
I don't know for a fact that this is why your model gives different answers in Scratch, but based on the limited info I have, it's my first guess - and the first thing I'd recommend you try.
Add a white background to your costume, giving the costume an aspect ratio that is roughly similar to what you've used in your training images.
See how that changes the predictions from the model.
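If it helps to see why transparent and white aren't the same to a machine: image data typically stores each pixel as red, green, blue, and alpha (opacity) values, and "flattening" transparency onto white is a per-pixel calculation. Here's a minimal sketch of that calculation in plain Python (this is just an illustration - it's not code from Scratch or Machine Learning for Kids):

```python
def flatten_onto_white(pixel):
    """Composite one RGBA pixel onto a white background.

    pixel: (r, g, b, a) with each channel in 0-255.
    Returns an (r, g, b) tuple - what the model "sees" once the
    transparency has been replaced with white.
    """
    r, g, b, a = pixel
    alpha = a / 255.0
    # Standard alpha compositing: the pixel's colour over white (255)
    return tuple(round(c * alpha + 255 * (1 - alpha)) for c in (r, g, b))

# A fully transparent pixel becomes pure white...
print(flatten_onto_white((0, 0, 0, 0)))       # -> (255, 255, 255)
# ...while a fully opaque pixel keeps its colour.
print(flatten_onto_white((30, 60, 90, 255)))  # -> (30, 60, 90)
```

The point being: until that flattening happens, a transparent pixel carries different numbers than a white one, so a model trained only on white backgrounds is seeing something new.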
Kind regards
D
PS - In many ways, I hope you see this as a positive thing happening in your project. My aim with Machine Learning for Kids is to let people get first-hand experience of the limitations and behaviours of machine learning technologies. What I'm describing here is a key lesson to learn about ML tech, and something you might not have thought about if it had just magically worked for you despite the differences between your training and test images.
On Friday, April 17, 2026 at 12:47:02 PM UTC+1 chen liao wrote: