No registration/listing of dump of project


Jeannine Beeken

May 29, 2025, 2:52:09 PM
to vocbench-user
Hi,
We are using VB 14.0.0. When trying to make a dump of a project, it is not registered, i.e. there is no indication of a dump in the list. Why is that? What could have caused that issue, and what do you advise us to do? Does version 14.0.0 not allow making a dump?
Best wishes,
Jeannine

Jeannine Beeken

Jun 2, 2025, 5:50:39 AM
to vocbench-user
Hi,

Has anyone encountered the same or a similar issue? It concerns 'Create a version dump', where we give it a 'Version ID'. VB seems to start the task, but we do not know whether the task ends correctly, as there is no new version ID listed. We do not use/click on 'Dump a new version to a different location', for example CreateLocal, as we want it to be remote. The options for 'Configure version dump' are 1) CreateLocal, 2) CreateRemote, 3) AccessExistingRemote. We expected AccessExistingRemote to be the default for 'Create a version dump' - 'Version ID'. In other words, as it is now, we do not know whether a dump has been created, nor what its location is. What would you advise we do? Thanks.
Best wishes,
Jeannine

Randall, Martin

Jun 3, 2025, 6:16:46 AM
to vocbench-user, Beeken, Jeannine C T

Hi

Just to add some extra information to Jeannine's question, I've had a look at our data:

When creating a new version, a new directory is created at graphdb/data/repositories/<PROJECT NAME>-<VERSION> in the GraphDB instance. A corresponding directory is also created on the VocBench side at vocbench/SemanticTurkeyData/projects/<PROJECT NAME>/repositories/<VERSION>. However, the new version does not appear in the list of versions in the UI. Is there anything in these directories/files that would prevent them from appearing?
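For what it's worth, the cross-check described above can be automated. This is only an illustrative sketch: the path layout is taken from our setup as described, the project name and version IDs in the demo are made up, and real installations may differ.

```python
import os
import tempfile

def missing_vocbench_versions(gdb_repos, vb_project_repos, project):
    """Return GraphDB version repos named <project>-<version> that have
    no matching <version> directory under the VocBench project."""
    prefix = project + "-"
    gdb_versions = {
        d[len(prefix):]
        for d in os.listdir(gdb_repos)
        if d.startswith(prefix)
    }
    vb_versions = set(os.listdir(vb_project_repos))
    return sorted(gdb_versions - vb_versions)

# Demo with throwaway directories mimicking the layout described above.
with tempfile.TemporaryDirectory() as tmp:
    gdb = os.path.join(tmp, "graphdb", "data", "repositories")
    vb = os.path.join(tmp, "SemanticTurkeyData", "projects",
                      "MyProject", "repositories")
    for d in ("MyProject-v1", "MyProject-v2"):
        os.makedirs(os.path.join(gdb, d))
    os.makedirs(os.path.join(vb, "v1"))
    print(missing_vocbench_versions(gdb, vb, "MyProject"))  # ['v2']
```

Pointing the two path arguments at the real graphdb/data/repositories and SemanticTurkeyData/projects/<PROJECT NAME>/repositories directories would list any version present on the GraphDB side but missing on the VocBench side.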


Thanks


Martin




Tiziano Lorenzetti

Jun 3, 2025, 8:51:11 AM
to Randall, Martin, vocbench-user, Beeken, Jeannine C T
Dear Jeannine and Martin,
when you create a version using the "Dump" option, the system generates it in the same location as the accessed project. So, if the project is remote (hosted on GraphDB), the version will also be created as a remote repository in the same GraphDB instance.
From Martin's message, I assume this is your case. Could you check the newly created directory in GraphDB? It should contain a config.ttl file and a storage directory.
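A quick way to run that check over many repository directories is sketched below. It assumes only what is stated above (a healthy repository directory contains a config.ttl file and a storage subdirectory); the demo directory is made up.

```python
import os
import tempfile

def repo_looks_complete(repo_dir):
    """True if the repository directory contains both a config.ttl file
    and a storage/ subdirectory; a missing one suggests an aborted dump."""
    return (os.path.isfile(os.path.join(repo_dir, "config.ttl"))
            and os.path.isdir(os.path.join(repo_dir, "storage")))

# Demo with a throwaway directory standing in for
# graphdb/data/repositories/<PROJECT NAME>-<VERSION>.
with tempfile.TemporaryDirectory() as repo:
    open(os.path.join(repo, "config.ttl"), "w").close()
    os.makedirs(os.path.join(repo, "storage"))
    print(repo_looks_complete(repo))  # True
```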

I tested the dump feature on a fresh VB 14 installation, and it worked correctly for both local and remote projects.

To help diagnose the issue:
  • Which version of GraphDB are you using?
  • Did you check the GraphDB Workbench? Do you see the new repository listed? This helps confirm whether the repository was created correctly.
  • Do you find any errors in the SemanticTurkey or GraphDB logs?
  • Does this happen with all projects?
  • Are there any special characters or spaces in the project name or in the version ID?
Best regards,
Tiziano

Randall, Martin

Jun 3, 2025, 10:49:10 AM
to vocbench-user, Beeken, Jeannine C T, Tiziano Lorenzetti

Hi Tiziano

I can confirm that the newly created GraphDB directory does contain both a config.ttl and a storage directory (although, comparing the contents roughly, there are major size differences in some of the files between the new version and the '_core' directory, which I assume is the original, e.g. entities, entities.datatypes, etc.).


We are using GraphDB version 10.6.2, running in an AWS Fargate container and talking to a VocBench instance also running as a Fargate service.

It seems to happen with all projects.

There are no special characters, only underscores and hyphens.


I did look for errors in graphdb/logs/error.log and found a few instances like this (although the timings are a good 10 minutes or so before the new version directory was created):


[ERROR] 2025-06-02 09:16:51,768 [http-nio-7200-exec-9 | o.a.c.c.C.[.[.[.[openrdf-http-server]] Servlet.service() for servlet [openrdf-http-server] in context with path [] threw exception [Request processing failed; nested exception is org.eclipse.rdf4j.http.server.ServerHTTPException: java.util.concurrent.ExecutionException: org.eclipse.rdf4j.rio.RDFHandlerException: org.apache.catalina.connector.ClientAbortException: java.net.SocketTimeoutException] with root cause
java.net.SocketTimeoutException: null
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1426)
        at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:775)
        at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:600)
        at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:544)
        at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.doWrite(Http11OutputBuffer.java:540)
        at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:112)
        at org.apache.coyote.http11.Http11OutputBuffer.doWrite(Http11OutputBuffer.java:193)
        at org.apache.coyote.Response.doWrite(Response.java:606)
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:335)
        at org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:777)
        at org.apache.catalina.connector.OutputBuffer.append(OutputBuffer.java:680)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:383)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:361)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:97)
        at com.github.ziplet.filter.compression.ThresholdOutputStream.write(ThresholdOutputStream.java:92)
        at com.github.ziplet.filter.compression.CompressingServletOutputStream.write(CompressingServletOutputStream.java:66)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
        at java.base/java.io.BufferedOutputStream.write(BufferedOutputStream.java:127)
        at java.base/java.io.DataOutputStream.write(DataOutputStream.java:112)
        at java.base/java.io.FilterOutputStream.write(FilterOutputStream.java:108)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.writeString(BinaryRDFWriter.java:346)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.writeLiteral(BinaryRDFWriter.java:318)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.writeValue(BinaryRDFWriter.java:293)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.writeValueOrId(BinaryRDFWriter.java:270)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.writeStatement(BinaryRDFWriter.java:223)
        at org.eclipse.rdf4j.rio.binary.BinaryRDFWriter.consumeStatement(BinaryRDFWriter.java:208)
        at org.eclipse.rdf4j.rio.helpers.AbstractRDFWriter.handleStatement(AbstractRDFWriter.java:109)
        at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.exportStatements(SailRepositoryConnection.java:390)
        at org.eclipse.rdf4j.http.server.repository.transaction.Transaction.lambda$exportStatements$6(Transaction.java:244)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)

Tiziano Lorenzetti

Jun 4, 2025, 8:49:14 AM
to Randall, Martin, vocbench-user, Beeken, Jeannine C T
Hi Martin,
A colleague informed me that a bug related to the dump feature was recently fixed. I couldn’t reproduce it in v14.0, so I’m unsure under which conditions it occurs or whether it’s related to the issue you're facing.

Quick question: is your GraphDB instance protected by authentication? If so, please try disabling it temporarily to see if that resolves the problem.

If the issue persists, try selecting “Dump a new version to a different location” and keep the same Repository Access and Repo Configuration as your current project.

Jeannine Beeken

Jun 10, 2025, 4:53:25 AM
to vocbench-user
Hi Tiziano,

We followed your suggestions. The answer to your question is that we haven't added any authentication to the setup, and we did try your second suggestion, but there was no change in the behaviour. Would this be an issue/bug on your side, and, if so, will it be fixed in version 14.1.0? Thank you.
NB: We have also looked at the latest fixes re dumps, i.e. https://bitbucket.org/art-uniroma2/vocbench3/commits/857507e07f8f7cf79b2bdf6f5edb2e207c124699 and https://bitbucket.org/art-uniroma2/vocbench3/commits/3c507fb7eaa808d05ef06d8b1cc8dfba9d45bcef However, they do not seem to be related to the UI issue we experience.

Best wishes,
Jeannine

Tiziano Lorenzetti

Jun 10, 2025, 11:52:39 AM
to Jeannine Beeken, vocbench-user
Dear Jeannine,
this is the recent fix related to the dump feature, which involved the backend application (Semantic Turkey).
This fix addressed an issue occurring only when using the simple "Dump" option. According to your feedback, you're having the issue also when using the "Dump a new version to a different location" option.
Currently, we are not aware of any other bug affecting this functionality.

Looking at the stacktrace posted by Martin, the SocketTimeoutException suggests that SemanticTurkey is encountering a timeout while sending data to the external GraphDB repository.
This usually means the server is waiting too long for the repository to accept the data, possibly due to network delays, large data size, or the repository being slow or overloaded.
In light of this, the problem seems not to be on the application side, but rather in the communication between the backend and GraphDB.
Could you check if the GraphDB logs show any signs of delays or errors during the dump?
Also, could you try using the “Dump a new version to a different location” option with a very small project, just to exclude data size as a possible cause?
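To correlate the timeouts with a dump attempt, one could pull the timestamps of the SocketTimeoutException entries out of error.log and compare them with the time the dump was started. A rough sketch (the line format is taken from the log excerpt quoted above; adjust the pattern to your own setup):

```python
import re
from datetime import datetime

# GraphDB's error.log lines start like:
# [ERROR] 2025-06-02 09:16:51,768 [thread | logger] message ...
TS = re.compile(r"\[ERROR\] (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")

def timeout_timestamps(lines):
    """Timestamps of error.log lines mentioning SocketTimeoutException."""
    hits = []
    for line in lines:
        if "SocketTimeoutException" in line:
            m = TS.search(line)
            if m:
                hits.append(datetime.strptime(m.group(1),
                                              "%Y-%m-%d %H:%M:%S"))
    return hits

sample = ('[ERROR] 2025-06-02 09:16:51,768 [http-nio-7200-exec-9] '
          '... java.net.SocketTimeoutException ...')
print(timeout_timestamps([sample])[0])  # 2025-06-02 09:16:51
```

Feeding the whole error.log through this and comparing the resulting timestamps with the creation time of the new version directory would show whether the timeouts Martin found really belong to the dump attempt or to unrelated requests.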

Best regards,
Tiziano

Jeannine Beeken

Jun 11, 2025, 4:41:19 AM
to vocbench-user
Dear Tiziano,

Thank you for the link to the fix of 19 May. We had been using the simple "Dump" option when we noticed the UI issue. This morning, I tried to make a 'simple dump' of a small project, as you suggested, and it worked! It is visible in the UI list of dumps, below CURRENT.
I also recall Martin mentioning the following about the GraphDB logs: " I did look for errors in graphdb/logs/error.log and found a few instances like this (although the timings are a good 10 minutes or so before the new version directory was created)".
Your response "Looking at the stacktrace posted by Martin, the SocketTimeoutException suggests that SemanticTurkey is encountering a timeout while sending data to the external GraphDB repository. This usually means the server is waiting too long for the repository to accept the data, possibly due to network delays, large data size, or the repository being slow or overloaded. In light of this, the problem seems not to be on the application side, but rather in the communication between the backend and GraphDB." indicates that it is indeed an issue on our side, probably due to 'large data size'. Thanks again for all your help!

Best wishes,
Jeannine