Dear Organizers,
Thank you for organizing this challenge. My name is Kanchan Shivashankar, and I am currently participating in the Scholarly QALD challenge.
I encountered a problem when evaluating my generated results against the gold answers in the train dataset. The floating-point numbers in the gold answers are rounded off relative to the values available in the KG, which affects the final evaluation scores.
Are the values rounded off in the test dataset as well? If so, what precision (number of decimal places) should we follow?
Here is an example from the train dataset (for your reference):
id: 48f41d21-b0fb-45b7-a6e4-96495160f2d7
gold_answer: 3.4050634
SemOpenAlex: 3.405063390731812
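
To illustrate the issue, here is a minimal Python sketch (assuming the evaluation does an exact value comparison; the tolerance and the 7-decimal rounding are only my guesses at possible fixes):

    gold = "3.4050634"            # rounded value from the train gold answers
    kg   = "3.405063390731812"    # full-precision value from SemOpenAlex

    print(gold == kg)                            # False -> counted as wrong under exact match
    print(abs(float(gold) - float(kg)) < 1e-6)   # True  -> matches under a small tolerance
    print(round(float(kg), 7) == float(gold))    # True  -> rounding the KG value to 7 decimals reproduces the gold answer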
Thank you in advance for your response!
Regards,
Kanchan