[CfP] SWIB20 online conference - 12th Semantic Web in Libraries Conference, 23-27 November

Adrian Pohl

Jun 4, 2020, 3:34:09 AM
to lod...@googlegroups.com
Call for Proposals: SWIB20 online conference - 12th Semantic Web in
Libraries Conference
23.11. - 27.11.2020, UTC 14:00-16:30
Submission Deadline: 13 July 2020

The SWIB conference (Semantic Web in Libraries) is an annual event, held
this year for the 12th time.

Due to the Covid-19 pandemic, we cannot be sure that travelling and
meeting face-to-face will be possible for everybody in the community by
November. Therefore, we have decided to hold SWIB20 within the scheduled
timeframe not in Bonn, Germany, but for the first time on the web. This
will be an opportunity to participate from all over the world at little
or no cost.

Taking into account participants' different time zones, we plan to hold
the conference during the week of 23-27 November, from 14:00 to 16:30
UTC each day.

SWIB focuses on Linked Open Data (LOD) in libraries and related
organizations. It is well established as an event where IT staff,
developers, librarians, and researchers from all over the world meet and
mingle and learn from each other. The topics of talks and workshops at
SWIB revolve around opening data, linking data and creating tools and
software for LOD production scenarios. These areas of focus are
supplemented by presentations of research projects in applied sciences,
industry applications, and LOD activities in other areas.

As usual, SWIB20 will be organized by the ZBW - German National Library
of Economics / Leibniz Information Centre for Economics and the North
Rhine-Westphalian Library Service Centre (hbz). The conference language
is English.

Would you like to share your experiences working on an interesting
service, research topic or project – not just what you did, but also how
you did it?

For this edition of SWIB, we have adjusted the formats to the online
environment:

- Presentations (20 minutes plus 5 minutes Q&A)
- Practical workshops or tutorials (maximum 120 min)

We appreciate proposals on the following or related topics:

Projects & Applications

* integration of LOD into productive library applications
* authorities & knowledge organization systems (thesauri,
classifications, ontologies)
* re-use of LOD (from libraries, Wikidata and other sources)
* presenting & visualizing LOD
* end-user environments for interaction with LOD (e.g. editing or
annotation)
* crowdsourcing/gamification approaches involving LOD sources
* linked research & open science

Technology (focus on Open Source software)

* semantically enhanced data publication
* machine learning for automatic indexing & named entity recognition
* data transformation/integration/cleansing/enhancement/mapping/
interlinking
* RDF validation
* data flow management
* read/write linked data
* linked data & library systems

Standards & Best Practices

* open web standards relevant for libraries
* application profiles & provenance information
* usable APIs
* providing updates & syncing data sources
* preservation & sustainability
* open data licensing

We are looking forward to receiving your proposals for presentations or
workshops by **13 July 2020**. Please submit an abstract of 1000-1500
characters using our conference system at https://www.conftool.org/swib20.
If you intend to present a specific software solution please include
links to the source code repository and make sure it is openly licensed
(https://opensource.org/licenses).

Proposals will be reviewed by the SWIB programme committee:

* Julia Beck (Frankfurt University Library)
* Uldis Bojars (National Library of Latvia)
* Valentine Charles (Europeana Foundation, Netherlands)
* Huda Khan (Cornell University Library, USA)
* Niklas Lindström (National Library of Sweden)
* Devika Madalli (Indian Statistical Institute)
* Joachim Neubert (ZBW, Germany - Chair)
* Adrian Pohl (hbz, Germany - Chair)
* Dorothea Salo (UW-Madison, USA)
* Jodi Schneider (University of Illinois at Urbana-Champaign, USA)
* MJ Suhonos (Ryerson University, Canada)
* Osma Suominen (National Library of Finland)
* Katherine Thornton (Yale University Library, USA)
* Jakob Voß (GBV Common Library Network, Germany)

If you are interested in using the online conference infrastructure for
a satellite event before or after the conference slot, let us know.

Website: http://swib.org/swib20
Hashtag: #swib20
Twitter: @swibcon

Take a look at previous SWIB conferences at http://swib.org/swib20/history.

Please don't hesitate to ask if you have any questions:

Adrian Pohl
hbz
Tel. +49-(0)221-40075235
E-mail: swib(at)hbz-nrw.de

or

Joachim Neubert
ZBW
Tel. +49-(0)40-42834462
E-mail: j.neubert(at)zbw.eu

Eric Lease Morgan

Mar 25, 2021, 1:45:07 PM
to lod...@googlegroups.com

To linked data, or not to linked data? That is my question. Seriously, please help convince me to expose a whole boatload of RDF, and help me understand what insights might be accomplished by doing so.

I have a set of 250 normalized relational databases, all with exactly the same structure. (Schema attached.) The set is growing, and I strive to have as many as 1,000 of these databases by the end of the calendar year. Each database has its own URL/URI. Each item in the database essentially describes some sort of file (.pdf, .html, .xml, .doc, etc.) Each database may list as few as one file, but each database may list thousands of files. Each file has its own URL/URI. Each file has been parsed according to parts-of-speech, named entities, mime-types, statistically significant keywords, abstracts/summaries, authors, titles, and extent. None of these extractions have a URL/URI, let alone a URL/URI in a common name space.

I am most interested in the keywords and the named entities, and I know I need to associate the keywords and named entity values with a URL/URI from things like DBpedia.

The slightly-dated-but-still-works-great tool named "D2RQ" does a fine job of reading my databases and dumping sets of RDF to one or more files. [1] When and if I write a D2RQ mapping file, I will be able to output my databases as meaningful RDF through the use of RDFS and/or OWL. My bibliographic values (author, title, extent, etc.) can be exploited through Dublin Core. My keywords can be mapped through SKOS. The named entity types such as PERSON will map nicely through FOAF. I'm sure there are vocabularies for the other named entity types like ORGANIZATION or LOC and GPE (types of places). Heck, there might even be a vocabulary for parts-of-speech values. I don't know.
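
To make that concrete, here is a hypothetical sketch, using RDFlib rather than D2RQ, of what a single database row might look like once mapped; the item, keyword, and entity URIs (and the Defoe examples) are made up for illustration:

# Hypothetical sketch of one mapped row; rdflib stands in for D2RQ here.
# All URIs, titles, and names below are made up for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, FOAF, RDF, SKOS

CARREL = Namespace("https://library.distantreader.org/carrels/defoe-life-1719/")

g = Graph()
item = CARREL["files/chapter-016"]

# bibliographic values via Dublin Core
g.add((item, DC.title, Literal("Chapter 16")))
g.add((item, DC.creator, Literal("Daniel Defoe")))
g.add((item, DC["format"], Literal("text/plain")))

# a computed keyword, modeled as a SKOS concept
keyword = CARREL["keywords/providence"]
g.add((keyword, RDF.type, SKOS.Concept))
g.add((keyword, SKOS.prefLabel, Literal("providence")))
g.add((item, DC.subject, keyword))

# a PERSON named entity, modeled with FOAF
person = CARREL["entities/robinson-crusoe"]
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Robinson Crusoe")))

print(g.serialize(format="turtle"))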

Once I create a mapping file, it would be trivial to dump each database as a set of Linked Data. A person (or a computer program) could then slurp up some or all of the RDF for various purposes. Right now, a Python library called "RDFlib" looks very promising. [2] By applying SPARQL queries against the RDF, I can easily implement the functionality of a "library catalog", but I'm more interested in discovering the answers to complex questions like:

* Who influenced whom?
* How did a given idea ebb & flow over time?
* When is war justified?
* How can I characterize a writing style?
* What does it mean to be a good human being?
* Where did ideas manifest themselves, and where did they go?
* Who is Ishmael, and why should I care?
* How did Shakespeare describe love, and can I compare it to Plato's definition?

Answering such questions is not trivial, but I believe RDF is intended to help address them.

Creating the databases is trivial. I use a system called the Distant Reader to do that work. [3] Associating computed keywords and named entities with a URL/URI is not trivial because there is a lot of judgement and disambiguation involved. Once a mapping file is articulated, serializing RDF in any number of formats is trivial. Creating a triple store from one or more of the RDF serializations is pretty easy. I'm not so sure when it comes to the output of the SPARQL queries though.
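
For example, I imagine something like this (hypothetical file names, and assuming the Dublin Core/SKOS mapping sketched above) as a first step toward the "who wrote about what" sort of question:

# Toy sketch: load a few RDF dumps into one graph and ask which
# authors share which keywords. File names are hypothetical.
import glob
from rdflib import Graph

g = Graph()
for path in glob.glob("./rdf/*.ttl"):
    g.parse(path, format="turtle")

query = """
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT DISTINCT ?author ?label WHERE {
    ?item dc:creator ?author ;
          dc:subject ?keyword .
    ?keyword skos:prefLabel ?label .
}
ORDER BY ?author
"""

for author, label in g.query(query):
    print(author, label)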

Am I on the right track? To what degree do you think my goal of answering complex questions is achievable? Besides the use of OpenRefine, how might I automate the association of keywords and named entities with a URL/URI? Does anybody want to play in my sandbox, because I literally have two supercomputers at my disposal?


[1] D2RQ - http://d2rq.org
[2] RDFLib - https://github.com/RDFLib/rdflib
[3] Distant Reader - https://distantreader.org

--
Eric Lease Morgan
Navari Family Center for Digital Scholarship
University of Notre Dame

574/485-6870



[Attachment: schema.sql]

Ethan Gruber

Mar 25, 2021, 2:03:17 PM
to lod...@googlegroups.com
Are the named entities in the structured Dublin Core metadata, or are you looking to parse named entities out of larger documents, and then reconcile them? The latter will necessitate an intermediary NER process. There are a lot of open source tools available for this. This is not something I myself have experience with, but I'm sure others can send you in the right direction.

Once the entities are extracted, I'm not sure there's a "besides the use of OpenRefine" option, because OpenRefine is honestly the best and fastest tool for this. DBpedia is more or less deprecated at this point. It still exists, but it's been fully supplanted by Wikidata, and OpenRefine already has Wikidata reconciliation built into it. Other vocabulary services may have reconciliation APIs, and it's also possible to run your own standalone service against a static CSV file on your hard drive (see http://okfnlabs.org/reconcile-csv/).

Of course, Wikidata and DBpedia face the same shortcoming, which is notability of entities. If your databases consist of people who aren't notable enough to have Wikipedia pages or to exist in other authority control systems that have been integrated with Wikidata (for example, miscellaneous people in archival records), then reconciliation for these entities isn't going to work. Geographic places, on the other hand, shouldn't be a problem. SNAC (https://snaccooperative.org/) apparently has an OpenRefine reconciliation API, but I haven't used it.
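
For what it's worth, the reconciliation protocol OpenRefine speaks is an open spec, so the same services can be called from a script if you ever need batch processing. A rough sketch in Python (the Wikidata endpoint URL is an assumption on my part; check the service's documentation and https://reconciliation-api.github.io/ for the protocol):

# Rough sketch of calling a Wikidata reconciliation service directly,
# using the same protocol OpenRefine speaks. Endpoint URL is assumed.
import json
import requests

ENDPOINT = "https://wikidata.reconci.link/en/api"  # assumed endpoint

queries = {
    "q0": {"query": "Daniel Defoe", "type": "Q5"},  # Q5 = human
    "q1": {"query": "Sao Paulo", "type": "Q515"},   # Q515 = city
}

response = requests.get(ENDPOINT, params={"queries": json.dumps(queries)})
response.raise_for_status()

for key, hits in response.json().items():
    candidates = hits["result"]
    if candidates:
        best = candidates[0]  # highest-scoring candidate
        print(key, best["id"], best["name"], best["score"], best["match"])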

As for your research questions, I'm not sure any of this will answer those. Network graphs are useful for answering quantitative rather than qualitative questions. You can assemble and visualize large bodies of data, but interpretation is still humanistic. LOD isn't going to tell you when war is necessary.

Ethan


Eric Lease Morgan

Mar 25, 2021, 3:46:01 PM
to lod...@googlegroups.com


> On Mar 25, 2021, at 2:03 PM, Ethan Gruber <ewg4...@gmail.com> wrote:
>
> Are the named entities in the structured Dublin Core metadata, or are you looking to parse named entities out of larger documents, and then reconcile them? ...
>
> Once the entities are extracted, I'm not sure there's a "besides the use of OpenRefine" option, because OpenRefine is honestly the best and fastest tool for this. ...
>
> As for your research questions, I'm not sure any of this will answer those. Network graphs are useful for answering quantitative rather than qualitative questions. You can assemble and visualize large bodies of data, but interpretation is still humanistic. LOD isn't going to tell you when war is necessary.
>
> --
> Ethan


Thank you for the prompt reply.

Yes, I have already extracted the named entities, and they are manifested as tab-delimited files (an example is linked below) as well as records in my databases:

https://library.distantreader.org/carrels/defoe-life-1719/ent/chapter-016.ent

Using OpenRefine to reconcile entities with URIs is probably the most accurate way to get the work done, but it will not scale. I have as many as 100,000 files to process, and each file will have hundreds of PERSON named entities. I don't have to reconcile each and every named entity, but I could reconcile "many" of them. That said, I might have to break down and curate my data set with the use of OpenRefine.

Also, you are correct. A computer will always return quantitative answers, and it will be my job to turn these answers into qualitative judgments. Duly noted. On the other hand, I can hope the database (SPARQL) queries will return relationships I had not previously observed. LOD will not tell me when war is justified, but it could extract computationally actionable sentences (triples) about war, and those sentences can be compared & contrasted.

Anybody else, other thoughts?

--
Eric Morgan
David Newbury

Mar 27, 2021, 12:32:55 AM
to lod...@googlegroups.com, dnew...@getty.edu
I would say that what Ethan said is about par for the course. You've hit the same barrier that all of us have hit: the hard work turns out not to be the programmatic bits, but instead the slog of data cleanup and reconciliation.

That said, the payoff of doing that work, be it through RDF or any other format, is really significant. The effort of making the link pales next to the work of documenting and structuring information about those entities, and it goes faster than you'd think. It doesn't scale at the same pace as fully programmatic processes, of course; building a corpus of 10MM reconciled people is a larger project than most of us would be willing to take on. But if you can identify your question, you can almost certainly target that data cleanup work to find the highest value for the least effort.

Hope that helps!

-- David Newbury



Shaw, Ryan

Mar 27, 2021, 9:01:17 AM
to lod...@googlegroups.com

> On Mar 25, 2021, at 1:45 PM, Eric Lease Morgan <emo...@nd.edu> wrote:
>
> I have a set of 250 normalized relational databases, all with exactly the same structure. (Schema attached.)

Since you control all these databases, and they have the same structure, RDF isn't necessary to aggregate them and query over the results.

If your databases are sqlite, you might find these tools handy for exploring, querying, and visualizing your data, whether or not you commit to turning it into linked data:

https://datasette.io
https://github.com/simonw/sqlite-utils
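
For instance, something like this gives you a quick look at one database without leaving Python (the file name and the "ent" table/columns are made-up stand-ins for your schema, and it assumes the sqlite-utils Python API):

# Quick look at a single carrel database via sqlite-utils.
# File, table, and column names below are hypothetical.
import sqlite_utils

db = sqlite_utils.Database("defoe-life-1719.db")
print(db.table_names())

# e.g., the ten most frequent named entities
sql = """
    SELECT entity, type, COUNT(*) AS n
    FROM ent
    GROUP BY entity, type
    ORDER BY n DESC
    LIMIT 10
"""
for row in db.query(sql):
    print(row)

And running "datasette defoe-life-1719.db" from the command line will serve the same file as a browsable web application.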

If you plan on aggregating / enriching your data with other datasets and want to query over the aggregated whole, then transforming your data to RDF might be worth it.

If you want others to be able to easily incorporate your data into their own, then transforming your data to RDF might be worth it (though it will depend on what kind of tool stack they use and what they want to use your data for).

But if this is more of a standalone dataset, and RDBs are working for you, then there's no real reason to transform to RDF. RDF is a tool for creating and maintaining a "virtual" database out of independently maintained datasets distributed across the web - if you're not doing that, then it doesn't really give you any special question-answering powers.

Cheers,
Ryan

Tim Thompson

Mar 27, 2021, 4:27:26 PM
to lod...@googlegroups.com
Ryan, your point about interoperability is well taken, but it seems too categorical to say that RDF doesn't offer any special query-answering powers. A couple of things come to mind:

* You've mentioned the aggregation/enrichment feature, which can be a big win. Eric's tables have fields like keyword and genre. If those fields were resolved against linked data thesauri like Getty AAT, he could leverage that syndetic structure (broader/narrower/related) for query expansion, potentially gaining insight into relationships among concepts.

* With SPARQL, it's possible to query both the data model and instance data through a common interface. Say we have data about people and organizations and their online social networks. We can have specific properties like :twitterFollows, :facebookFollows, etc. But if we declare those properties to be subproperties of <http://schema.org/follows>, then we can write a query like:

SELECT ?x WHERE {
  :W3C ?anyFollows ?x .
  FILTER EXISTS { ?anyFollows rdfs:subPropertyOf schema:follows }
}


to find everyone followed by the W3C across its online networks. So, Eric, it may be worth your while to invest some effort into data modeling, based on the implicit relationships in your SQL schema.

* Both of these features can be further exploited using rule-based or logical inferencing to add new knowledge to the graph, e.g., on a property like :birthPlace: if we know Sara was born in São Paulo and we have geo data with partitive relationships, we can infer :sara :birthPlace :Brazil.
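
For instance, with rdflib (hypothetical URIs, and a toy partitive chain standing in for real geo data):

# Toy sketch: materialize country-level :birthPlace triples from
# city-level ones plus partitive geo data. Everything here is made up.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix : <http://example.org/> .
    :sara     :birthPlace :SaoPaulo .
    :SaoPaulo :locatedIn  :Brazil .
""", format="turtle")

# the property path :locatedIn+ walks the partitive chain, however
# many hops long it happens to be
g.update("""
    PREFIX : <http://example.org/>
    INSERT { ?person :birthPlace ?place }
    WHERE  { ?person :birthPlace/:locatedIn+ ?place }
""")

print(g.serialize(format="turtle"))  # now includes :sara :birthPlace :Brazil
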
But it does seem that those larger research questions are more geared toward data science and machine learning (Word2Vec, etc.) than question answering, even semantically enabled question answering. That said, I know I'd be interested in playing in this kind of sandbox :)

All best,
Tim


--
Tim A. Thompson
Metadata Librarian
Yale University Library


Eric Lease Morgan

Mar 27, 2021, 8:04:30 PM
to lod...@googlegroups.com


> On Mar 25, 2021, at 1:45 PM, Eric Lease Morgan <emo...@nd.edu> wrote:
>
> To linked data, or not to linked data? That is my question. Seriously, please help convince me to expose a whole boatload of RDF, and help me understand what insights might be accomplished by doing so.


On a similar but different note, once I have used OpenRefine to identify a URI for a given item, how do I get the URI to appear in a column?

I have imported a set of TSV files into OpenRefine. I then faceted the input on "type" and "entity". I then selected a specific entity named "Jay Gatsby". I then used the Wikidata service to get the most correct URI. From the screenshot, you can see the reconciliation is correct. To create the "best" RDF, I think I need to update my database with the value of the URI.

How do I create a new column and have its value be the reconciled URI? Once I get that far, I can export my project as TSV and update my database. I've tried many different OpenRefine menu options to no avail. What am I missing?

--
Eric

Shaw, Ryan

Mar 28, 2021, 8:57:55 AM
to lod...@googlegroups.com

> On Mar 27, 2021, at 8:03 PM, Eric Lease Morgan <emo...@nd.edu> wrote:
>
> On a similar but different note, once I have used OpenRefine to identify a URI for a given item, how do I get the URI to appear in a column?

[Reconcile → Add entity identifiers column] will add a new column with the Wikidata Q values:

https://docs.openrefine.org/manual/reconciling/#add-entity-identifiers-column

To turn these into URLs, add another column based on this new column with the GREL expression:

'http://www.wikidata.org/entity/' + value

https://docs.openrefine.org/manual/columnediting/#add-column-based-on-this-column

Cheers,
Ryan

Shaw, Ryan

Mar 28, 2021, 9:04:52 AM
to lod...@googlegroups.com


> On Mar 27, 2021, at 4:26 PM, Tim Thompson <tima...@GMAIL.COM> wrote:
>
> Ryan, your point about interoperability is well taken, but it seems too categorical to say that RDF doesn't offer any special query-answering powers.

My point was just that the query-answering power in the examples you cite comes from the syndetic structure or partitive relationships available in those other datasets, not from the conversion into RDF. Conversion to RDF is of course a prerequisite for taking full advantage of the useful structure in those other datasets — but if the particular structure you need to answer the questions you want to ask isn't out there in a dataset somewhere, just converting your data to RDF is not going to enable you to do much more than you could already do with SQL queries (and maybe less since SQL tooling is so much more mature).

Cheers,
Ryan

Tim Thompson

Mar 28, 2021, 10:26:41 AM
to lod...@googlegroups.com
On the one hand, that sounds like an implicit argument for publishing more data as RDF. But I'm still not persuaded by the premise that RDF isn't going to enable much more, on its own terms, than relational approaches. To quote the "Knowledge Graphs" paper (https://arxiv.org/pdf/2003.02320.pdf), which I came to through one of Bob DuCharme's excellent and practical blog posts (http://www.bobdc.com/blog/partialschemas/):

Unlike (other) NoSQL models, specialised graph query languages support not only standard relational operators (joins, unions, projections, etc.), but also navigational operators for recursively finding entities connected through arbitrary-length paths [16]. Standard knowledge representation formalisms – such as ontologies [70, 239, 366] and rules [254, 288] – can be employed to define and reason about the semantics of the terms used to label and describe the nodes and edges in the graph. Scalable frameworks for graph analytics [335, 503, 563] can be leveraged for computing centrality, clustering, summarisation, etc., in order to gain insights about the domain being described. Various representations have also been developed that support applying machine learning techniques directly over graphs [549, 559].
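
To make the "arbitrary-length paths" point concrete: it is what turns the query-expansion scenario I mentioned earlier into a one-liner. A toy example with rdflib (all URIs and data made up):

# Toy SKOS query expansion: find items indexed under "conflict" or
# under any concept transitively narrower than it. Data is made up.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix :     <http://example.org/> .
    @prefix dc:   <http://purl.org/dc/elements/1.1/> .
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .

    :war   skos:broader :conflict .
    :siege skos:broader :war .
    :item1 dc:subject   :siege .
""", format="turtle")

# skos:broader* follows the hierarchy upward any number of hops, so
# :item1 matches even though it is indexed two levels below :conflict
rows = g.query("""
    PREFIX :     <http://example.org/>
    PREFIX dc:   <http://purl.org/dc/elements/1.1/>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

    SELECT ?item WHERE { ?item dc:subject/skos:broader* :conflict }
""")
for (item,) in rows:
    print(item)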

All best,
Tim


--
Tim A. Thompson
Discovery Metadata Librarian
Yale University Library




Eric Lease Morgan

Mar 28, 2021, 2:26:37 PM
to lod...@googlegroups.com


> On Mar 25, 2021, at 1:45 PM, Eric Lease Morgan <emo...@nd.edu> wrote:
>
> To linked data, or not to linked data? That is my question. Seriously, please help convince me to expose a whole boatload of RDF, and help me understand what insights might be accomplished by doing so...


I sincerely appreciate the time & effort y'all have sent my way. Thank you.

I learned many things, such as but not limited to:

* my overall idea is sound
* the reconciliation process requires both practice and professional judgement
* the reconciliation process is not trivial, especially considering the size of my collection
* the benefits of reconciliation can outweigh the costs
* there are a number of really cool (SQLite) add-ons that I can use along the way

To put my ideas into practice, I need to do seven things:

1. normalize my database(s) some more
2. add an additional field, specifically a field for URI
3. write a D2RQ mapping file complete with more meaningful ontologies
4. reconcile entities and update the database(s)
5. go to Step #4 until tired
6. export database(s) as RDF
7. use SPARQL to query the RDF and attempt to answer interesting questions

Yes, through the use of the SQL ATTACH command, I could probably join many of my databases to accomplish the same goal, but I'd like to make everything more public.
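
For the record, something like this is what I have in mind; the file, table, and column names are made up:

# Sketch of querying across two carrels with SQLite's ATTACH command.
# File, table, and column names are hypothetical.
import sqlite3

connection = sqlite3.connect("defoe-life-1719.db")
connection.execute("ATTACH DATABASE 'austen-pride-1813.db' AS austen")

# union a hypothetical "ent" table across the two carrels
sql = """
    SELECT 'defoe'  AS carrel, entity, type FROM main.ent
    UNION ALL
    SELECT 'austen' AS carrel, entity, type FROM austen.ent
"""
for row in connection.execute(sql):
    print(row)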

I think I'll begin by practicing with content by Homer, Austen, and Thoreau. Wish me luck, and again, "Thank you."

Eric Lease Morgan

Mar 28, 2021, 2:47:25 PM
to lod...@googlegroups.com


On Mar 27, 2021, at 9:01 AM, Shaw, Ryan <ryan...@unc.edu> wrote:

> If your databases are sqlite, you might find these tools handy for exploring, querying, and visualizing your data, whether or not you commit to turning it into linked data:
>
> https://datasette.io
> https://github.com/simonw/sqlite-utils


These are really cool tools. The first provides a desktop Webbed interface to my database(s). I was in the process of writing something very similar, but now I don't have to do that work. Whew!

--
Eric Morgan


Eric Lease Morgan

Mar 28, 2021, 3:01:54 PM
to lod...@googlegroups.com


On Mar 28, 2021, at 8:57 AM, Shaw, Ryan <ryan...@unc.edu> wrote:

>> On a similar but different note, once I have used OpenRefine to identify a URI for a given item, how do I get the URI to appear in a column?
>
> [Reconcile → Add entity identifiers column] will add a new column with the Wikidata Q values:
>
> https://docs.openrefine.org/manual/reconciling/#add-entity-identifiers-column
>
> To turn these into URLs, add another column based on this new column with the GREL expression:
>
> 'http://www.wikidata.org/entity/' + value
>
> https://docs.openrefine.org/manual/columnediting/#add-column-based-on-this-column


This was very helpful, and it is too bad prefixes (like 'http://www.wikidata.org/entity/') don't come along for free with the ride. --ELM

Richard Wallis

Mar 30, 2021, 8:20:22 AM
to lod...@googlegroups.com
Very late to this interesting thread.

Referring back to the very beginning...

> My bibliographic values (author, title, extent, etc.) can be exploited through Dublin Core. My keywords can be mapped through SKOS. The named entity types such as PERSON will map nicely through FOAF. I'm sure there are vocabularies for the other named entity types like ORGANIZATION or LOC and GPE (types of places). Heck, there might even be a vocabulary for parts-of-speech values. I don't know.

In the interests of keeping things simple, it might be worth looking at Schema.org as a core vocabulary for this exercise.  It has types for the majority of entity types and relationships described.  It even includes things such as Quotation.  It doesn't include anything for parts of speech, but no vocabulary has 100% coverage.

~Richard.

Richard Wallis
Founder, Data Liberate
http://dataliberate.com
Twitter: @rjw


