I read the paper and inspected some randomly sampled examples from the dataset, and I have some questions about the fullwiki setting.
1. Do you guarantee that the answer is always present in the contexts?
2. If not, can the model predict at test time that there is no answer in the contexts, i.e., make a no-answer prediction for the leaderboard?
3. If the model can't, what is the crucial difference between the distractor setting and the fullwiki setting? As I understand it, the distractor setting always includes the two gold paragraphs, but if the model cannot say that there is no answer in the context, then the possible absence of gold paragraphs in fullwiki isn't meaningful. (To my knowledge, for the leaderboard I can't improve the TF-IDF IR module that retrieves the 10 most relevant documents.)
I would appreciate answers to these questions.
Thank you in advance,
Choi