comparison of various platforms


pbr

Feb 12, 2013, 8:59:08 PM
to linke...@googlegroups.com
Hi, 

Please forgive me if this question has been asked before. I spent a few days looking through various platforms and tools in this space for sourcing, standardization, normalization, integration, and so on, all the way to distribution of linked data at very large scale. My goal is to identify robust tools for building linked data applications. Before I commit myself to any specific toolkit or stack for these steps, I want to learn what others have to say. I have narrowed it down to tools related to LDIF, IWB, IKS, Virtuoso, LOD2, ELDA, etc., with some overlap around NLP components.

Any suggestions on where I can find a good comparison, so that I can decide where to dig in and avoid finding myself deep in a hole a year down the road?

regards,
pbr

hemmerling

Mar 4, 2013, 4:37:23 AM
to Linked Data
Hello,

1)
Have a look at my link page

http://www.hemmerling.com/doku.php/en/linkeddata.html

There you will find links to many Linked Data tools. It is not a
"comparison", though you will find many valuable comments about the
tools on the page.

The only "comparison" I have found so far is at
http://en.wikipedia.org/wiki/Triplestore

2)
I decided to start with Jena Fuseki, the Jena SPARQL server

http://jena.apache.org/
http://jena.apache.org/download/index.html

as it provides SPARQL queries, SPARQL/Update, and a REST interface.
It may not be suited for "big" data, but it is a good starting point
for becoming familiar with the technology.
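
For example, here is a minimal sketch of querying such a server from Java with the Jena API. It assumes a recent Apache Jena on the classpath (newer releases use the org.apache.jena.query packages shown here; older 2.x releases used com.hp.hpl.jena.query) and a local Fuseki instance serving a hypothetical dataset named "ds":

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class FusekiQuery {
    public static void main(String[] args) {
        // Hypothetical endpoint: a local Fuseki server exposing a dataset called "ds".
        String endpoint = "http://localhost:3030/ds/sparql";
        String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";

        // Send the SELECT query to the SPARQL endpoint over HTTP and print each row.
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("s") + " " + row.get("p") + " " + row.get("o"));
            }
        }
    }
}

Updates work the same way over HTTP (Fuseki typically serves SPARQL/Update on a separate ".../update" service), so loading and querying can be scripted without touching the server itself.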

3)
As you said you want to store "big" data, how about keeping it in
SQL databases?
If so, have a look at "middleware" like
http://www.d2rq.org/
which lets you access SQL databases with Semantic Web queries.
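
Once D2R Server is running in front of the database, it looks like any other SPARQL endpoint to client code, so the same Jena pattern applies. A sketch, assuming the server runs with its default settings and a hypothetical mapping that exposes a "people" table as foaf:Person resources:

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;

public class D2RQuery {
    public static void main(String[] args) {
        // Hypothetical: D2R Server started with its defaults in front of the SQL database.
        String endpoint = "http://localhost:2020/sparql";

        // Plain SPARQL; the rows of the mapped table come back as RDF resources
        // even though the data still lives in the relational database.
        String query =
            "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
            "SELECT ?person ?name WHERE { ?person a foaf:Person ; foaf:name ?name } LIMIT 10";

        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next());
            }
        }
    }
}

Which tables become which classes and properties is declared in D2RQ's mapping file, so the database schema itself stays untouched.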

Sincerely
Rolf

Roberval Mariano

Mar 4, 2013, 11:13:29 AM
to linke...@googlegroups.com
Hi,
 
I started with D2RQ, but today I use Virtuoso. D2RQ is a great tool, but it does not work well with millions of RDF triples. Virtuoso performs better than D2RQ, but it does not have as good an interface. Our data come in multiple formats: RDF, XML, MDB, TXT, CSV. Each data source is converted into a SPARQL endpoint, as an Application Ontology and an Exported Ontology. For the methodology of building the mashup framework, the tool used is irrelevant. We used LIDMS; I don't know its website. You can use PDCA, which is very interesting.
 
 
Mariano
