CERLIB Challenge: Continuous Evaluation of Relational Learning in Biomedicine

Tiffany Callahan

Feb 12, 2021, 1:20:08 AM
to BioHackathon

Predicting relations between biological entities using machine learning is a common and important task in computational biology. CERLIB is a new challenge that aims to continuously evaluate state-of-the-art relation prediction methods as new biological knowledge becomes available.
 
We invite you to participate in the Continuous Evaluation of Relational Learning in Biomedicine (CERLIB) challenge:
 
Challenge Website: biochallenge.bio2vec.net  
First meeting: Bio-Ontologies at the 2021 Intelligent Systems for Molecular Biology (ISMB) conference
 
Overview: The analysis of biological networks has long been a central component of computational biology. Networks and knowledge graphs form a crucial part of life-science infrastructure, for which hundreds of data- and knowledge-bases have been developed. They are used not only to store and retrieve information but also as the basis for network- and knowledge-based analyses. One such analysis is determining whether a relation is true or false within a knowledge base; this question can be answered deductively, inductively, or transductively. In the past five years, we have witnessed a proliferation of machine learning methods that address the problem of predicting relations from graph-based knowledge. While these methods are developed to solve tasks in many knowledge graphs, they are rarely evaluated and compared across a variety of biological knowledge. Moreover, each evaluation and comparison usually represents only a snapshot in time and may depend on a specific context, including parameters, training/testing splits, random seeds, or various pre- and post-processing steps, making results difficult to reproduce and compare. Furthermore, in many cases only “positive” predictions (i.e., new relations) are evaluated; however, some relations also turn out to be incorrect and are removed from a dataset, and methods that determine whether relations are wrongly asserted are vitally important yet currently underrepresented. Biological knowledge evolves rapidly, and the corresponding data- and knowledge-bases are updated regularly to reflect new discoveries. This provides an opportunity for a time-based, prospective, and continuous evaluation of relation prediction methods.
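
To make the idea of a time-based, prospective evaluation concrete, here is a minimal Python sketch (the triples, function names, and metrics are illustrative assumptions, not the challenge's actual scoring code) that scores predicted additions and predicted removals against the relations that actually changed between two snapshots of a knowledge base:

def prospective_scores(predicted_additions, predicted_removals, old_snapshot, new_snapshot):
    """Score predictions against the changes between two knowledge-base snapshots.

    Each snapshot is a set of (head, relation, tail) triples.
    """
    added = new_snapshot - old_snapshot      # relations that newly appeared
    removed = old_snapshot - new_snapshot    # relations that were retracted

    add_hits = predicted_additions & added   # correctly predicted new relations
    rem_hits = predicted_removals & removed  # correctly predicted retractions

    return {
        "addition_precision": len(add_hits) / len(predicted_additions) if predicted_additions else 0.0,
        "addition_recall": len(add_hits) / len(added) if added else 0.0,
        "removal_precision": len(rem_hits) / len(predicted_removals) if predicted_removals else 0.0,
        "removal_recall": len(rem_hits) / len(removed) if removed else 0.0,
    }

# Toy example with made-up gene-disease associations:
old = {("GeneA", "associated_with", "Disease1"), ("GeneB", "associated_with", "Disease2")}
new = {("GeneA", "associated_with", "Disease1"), ("GeneC", "associated_with", "Disease3")}
print(prospective_scores({("GeneC", "associated_with", "Disease3")},
                         {("GeneB", "associated_with", "Disease2")},
                         old, new))

The point of a prospective setup like this is that the "test set" is produced by the knowledge base itself as it evolves, rather than by a train/test split chosen by a method's authors.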

For evaluation, we will use biological databases that continuously update their data and make it accessible through public SPARQL endpoints. We will run the same query monthly and evaluate all submissions made up to that point.
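
As a rough illustration of how such an endpoint can be queried, the following Python sketch uses the SPARQLWrapper package against UniProt's public endpoint; the endpoint and the protein-disease query are examples of the kind of relation data involved, not the actual queries the challenge will run (those are defined on the challenge site):

from SPARQLWrapper import SPARQLWrapper, JSON

# Example public endpoint (UniProt); the challenge's endpoints are listed on the website.
endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")
endpoint.setReturnFormat(JSON)

# Illustrative query: protein-disease annotations, one kind of biological relation.
endpoint.setQuery("""
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?protein ?disease
WHERE {
  ?protein a up:Protein ;
           up:annotation ?annotation .
  ?annotation a up:Disease_Annotation ;
              up:disease ?disease .
}
LIMIT 10
""")

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["protein"]["value"], "->", row["disease"]["value"])

Running the same kind of query against successive monthly releases yields the added and removed relations used for scoring.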
 
Join the CERLIB challenge, which aims to collect and evaluate relation predictions in the life sciences using an unbiased, empirical approach based on the growing body of biological knowledge. Challenge submissions open in February 2021. For more information and to register, visit the challenge site.


We look forward to seeing you on the leaderboards!
