Hello, I am a junior high school student and a member of Fngr, a nonprofit organization.
Our team operates as a nonprofit and is exploring efficient ways to use AI. To that end, we created a pseudocode language called Snap. Its natural-language syntax keeps the barrier to entry low, and it saves tokens efficiently by removing unnecessary words and abbreviating long words into symbols.
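To make the idea concrete, here is a minimal Python sketch of symbol-based prompt compression in the spirit described above. The symbol table is invented purely for illustration; it is not Snap's actual mapping, and a real implementation would need word-boundary handling and a proper tokenizer rather than whitespace counts.

```python
# Hypothetical symbol table for illustration only -- NOT Snap's real mapping.
SYMBOLS = {
    "please ": "",        # drop filler words entirely
    "approximately": "~",  # abbreviate long words into symbols
    "greater than": ">",
    "less than": "<",
    " and ": " & ",
}

def compress(text: str) -> str:
    """Apply each symbol substitution in order, then collapse whitespace."""
    out = text.lower()
    for phrase, symbol in SYMBOLS.items():
        out = out.replace(phrase, symbol)
    return " ".join(out.split())

prompt = "Please list values greater than approximately ten and less than twenty"
short = compress(prompt)
print(short)
print(len(prompt.split()), "->", len(short.split()), "words")
```

Running this prints the compressed prompt and a rough word-count comparison; real token savings would be measured with the target model's own tokenizer.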
However, because we lack initial capital, we could not commission verification from an established research institution. Could you help us benchmark Snap against SQuAD to verify how much it can improve performance? Please.
We would be very grateful for your help. Thank you.
P.S. The attached PDF is the user guide for Snap & Snap+. Please read it.