CodeSearchNet Challenge


Nitin Bhide

Sep 27, 2019, 3:25:58 AM
to nitinsknowledgeshare
https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/

From the article:

Searching for code to reuse, call into, or to see how others handle a problem is one of the most common tasks in a software developer’s day. However, search engines for code are often frustrating and never fully understand what we want, unlike regular web search engines. We started using modern machine learning techniques to improve code search but quickly realized that we were unable to measure our progress. Unlike natural language processing with GLUE benchmarks, there is no standard dataset suitable for code search evaluation.
With our partners from Weights & Biases, today we’re announcing the CodeSearchNet Challenge evaluation environment and leaderboard. We’re also releasing a large dataset to help data scientists build models for this task, as well as several baseline models showing the current state of the art. Our leaderboard uses an annotated dataset of queries to evaluate the quality of code search tools.

I will be eagerly watching the leaderboard to see what ideas people come up with. For anyone curious what working with the released data might look like, a rough sketch follows.
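As far as I understand, the CodeSearchNet corpus is distributed as gzipped JSON-Lines files, one record per function, with fields such as `code`, `docstring`, `func_name`, and `url`. The snippet below is only a toy illustration of the "code search" task the challenge evaluates: it ranks functions by keyword overlap with the query's docstring, which is nothing like the learned joint-embedding baselines GitHub released. The shard file name and the exact field names are my assumptions from the dataset documentation, not something stated in the announcement.

import gzip
import json


def load_functions(path):
    """Load one CodeSearchNet-style .jsonl.gz shard into a list of dicts."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        return [json.loads(line) for line in fh]


def naive_search(functions, query, top_k=5):
    """Rank functions by how many query terms appear in their docstring.

    A toy keyword baseline to show what the task looks like; the official
    baselines instead embed code and natural-language queries jointly.
    """
    terms = query.lower().split()
    scored = []
    for fn in functions:
        doc = (fn.get("docstring") or "").lower()
        score = sum(doc.count(t) for t in terms)
        if score > 0:
            scored.append((score, fn))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [fn for _, fn in scored[:top_k]]


if __name__ == "__main__":
    # Shard name is hypothetical; substitute a file from the released corpus.
    funcs = load_functions("python_train_0.jsonl.gz")
    for hit in naive_search(funcs, "parse a date string"):
        print(hit.get("func_name"), hit.get("url"))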

Regards,
Nitin