Hello Challenge Organizers,
We have a few clarifying questions about the model submission process, particularly regarding cross-validation and model evaluation:
K-Fold Cross-Validation Approach
In our current implementation, we're using k-fold cross-validation (k=5) to train multiple models and gain more robust insights into our model's performance. However, this raises some questions about the submission requirements:
Multiple Models: Since k-fold cross-validation generates k different models, how should we handle the model submission? Specifically, which model (or models) are we expected to submit?
Output Folder Structure: The challenge instructions specify submitting models to an output folder. How would this work with multiple models from k-fold validation?
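To make the question concrete, here is a minimal sketch of our current k-fold setup. This assumes scikit-learn; the estimator, data, and any file paths are purely illustrative, not our actual pipeline:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for the challenge dataset.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
models = []
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # Each fold trains its own model on 4/5 of the data.
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    models.append(model)

# After the loop we hold 5 fitted models, and it is unclear
# which of them (or what combination) belongs in the output folder.
print(len(models))
```

The ambiguity is exactly the last line: five fitted models exist, but the instructions describe a single output folder.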
We've noticed some nuances in the validation process that we'd like to clarify:
Internal Validation Split: In our training pipeline, we create an internal validation split for each training fold (either 80/20 or through k-fold cross-validation).
Challenge's Hidden Validation Set: Is the Challenge's hidden validation set completely separate from these internal validation splits?
Scoring Process: Could you confirm how the scoring process works, i.e., which data the submitted model is evaluated on?
To ensure we're following the challenge guidelines, we want to confirm our understanding of the end-to-end workflow, from training through submission and scoring.
We appreciate your guidance on these points so we can develop our solution in alignment with the challenge requirements.
Thank you!