Hello,
So I've taken a look at the documentation (https://cloud.google.com/video-intelligence/automl/docs/) and this Google group. It's clear how to create training data with annotation labels for video segments ("object X from start time to end time"), but I'm not sure how to create training CSVs with bounding boxes that Cloud AutoML Video Intelligence Classification will parse and use during training. I would assume the bounding box is crucial for teaching the model which region of the frame an annotation refers to.
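For reference, the segment-level classification format I'm describing looks roughly like this, as I understand it from the docs (bucket name, file paths, and labels below are made up for illustration; times are in seconds):

```csv
gs://my-bucket/videos/video1.mp4,cat,0.0,10.5
gs://my-bucket/videos/video1.mp4,dog,10.5,20.0
gs://my-bucket/videos/video2.mp4,cat,3.2,8.7
```

What I can't find is the equivalent row format once bounding-box coordinates enter the picture.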
Is there a tutorial or example somewhere showing what the bounding box input data for a training dataset should look like?
If I missed something in the documentation, I'll be the first to admit it once it's pointed out, but I don't recall seeing any sample datasets that illustrate bounding boxes.
Thanks in advance!