Error message when clicking “Download” button of a file in a dataset


ofuuzo ofuuzo

Jul 26, 2019, 9:38:29 AM
to Dataverse Users Community
Hi,
I have upgraded our test Dataverse to Dataverse v. 4.15.1. I receive the following error message when clicking the “Download” button of a file in a dataset:

status    "ERROR"
code    404
message    "'/api/v1/access/datafile/645' Datafile 645: Failed to locate and/or open physical file."

The physical location (directory) of the files is still the same, and I have checked the <jvm-options>-Ddataverse.files.directory=….</jvm-options> entry in domain.xml. What could be the cause of this error? I need help.


Thanks in advance.

Cheers
Obi

Philip Durbin

Jul 26, 2019, 9:52:15 AM
to dataverse...@googlegroups.com
Does this happen with *every* file or just one?

--
You received this message because you are subscribed to the Google Groups "Dataverse Users Community" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dataverse-commu...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dataverse-community/b30f7231-4fd7-4275-9edf-ac87405f4392%40googlegroups.com.



ofuuzo ofuuzo

Jul 26, 2019, 11:09:51 AM
to Dataverse Users Community
Not draft files.

Obi

On Friday, July 26, 2019 at 15:52:15 UTC+2, Philip Durbin wrote:
Does this happen with *every* file or just one?



Jamie Jamison

Aug 16, 2019, 2:50:09 PM
to Dataverse Users Community
I have a similar problem, though not because of an upgrade. I had to move our test and production instances to a new AWS account, and moving the S3 bucket contents has been problematic. The files display and geoconnect even works, but when I try to download I get the "'/api/v1/access/datafile/5001' Datafile 5001: Failed to locate and/or open physical file." message.

These would be published files on the test server.

jamie


Philip Durbin

Aug 19, 2019, 9:41:55 AM
to dataverse...@googlegroups.com
Hi Obi and Jamie,

What do you see in the "storageidentifier" column of the "dvobject" database table* for one of the files that isn't working?

Thanks,

Phil
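
As a sketch of the kind of check Phil is suggesting, assuming direct psql access to the Dataverse database (the id below is the datafile id from the 404 error message; adjust it for your own failing file):

```sql
-- Look up the storage identifier for the file that fails to download.
-- dvobject is the table Phil names; id 645 comes from the error message.
SELECT id, dtype, storageidentifier
FROM dvobject
WHERE id = 645;
```

For an S3-backed installation the value typically looks like `s3://bucket-name:identifier` (as shown later in this thread); for local storage it is an identifier resolved under the configured files directory.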




Jamie Jamison

Aug 19, 2019, 2:22:17 PM
to Dataverse Users Community
Background: I had to move the S3 buckets from one account to a different one. The buckets couldn't have the same name, so dataverse-test-data-oregon had to become dataverse-test-oregon.

What storageidentifier contains is the OLD S3 bucket (dataverse-test-data-oregon).


So I have to change the identifier in the database manually? I am doing this all on the test instance first.


Thanks,


Jamie



Jamie Jamison

Aug 20, 2019, 3:27:56 PM
to Dataverse Users Community
Phil,

The storage identifier still shows the old bucket.  Is this something I can manually edit or would that have unintended messy consequences? I tried creating a new bucket on the test system and still can't see files.

Jamie


Philip Durbin

Aug 20, 2019, 4:35:09 PM
to dataverse...@googlegroups.com
Hi! I'm sorry, but the Dataverse instance on my laptop isn't hooked up to S3 storage right now, so I'm not sure what you're seeing.

Are you saying that you see the old bucket name over and over in the "storageidentifier" column?



Jamie Jamison

Aug 20, 2019, 5:59:56 PM
to Dataverse Users Community
Yes, for example I have:
s3://dataverse-new bucket made today:string of numbers
s3://dataverse-oldbuckets:string of numbers

I've created a new test bucket and that works fine; it even shows up in PostgreSQL when I display what's in storageidentifier. But the old addresses show up as well. The problem seems to be with files that were uploaded pre-moving-debacle.

However, since we can create dataverses and add new files now, we are good going forward with the production instance, and I have time to fix the test files.

Jamie
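
One way to see how many records still point at each bucket is a query like the following (a sketch, assuming psql access and the `s3://bucket:identifier` layout shown above):

```sql
-- Count dvobject rows per S3 bucket name.
-- Stripping the 's3://' prefix first, then taking everything
-- before the ':' gives the bucket portion of the identifier.
SELECT split_part(replace(storageidentifier, 's3://', ''), ':', 1) AS bucket,
       count(*)
FROM dvobject
WHERE storageidentifier LIKE 's3://%'
GROUP BY bucket
ORDER BY count(*) DESC;
```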

James Myers

Aug 21, 2019, 11:02:01 AM
to dataverse...@googlegroups.com

Jamie,

 

I looked into this for QDR a while back, and I believe you're fine if you edit the identifiers to use the new bucket. It worked for us, and I checked a bit further to make sure there were no other places the bucket info was stored or needed. Note https://github.com/IQSS/dataverse/issues/5149: the dataset record can also include the bucket name, but it is never used; it's only the info in the dvobjects for the files themselves that gets used.

 

- Jim
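
A minimal sketch of that manual edit, using the old and new bucket names from earlier in the thread (this rewrites rows in place, so back up the database first and, as Jamie planned, try it on the test instance):

```sql
BEGIN;

-- Rewrite the bucket portion of each S3 storage identifier
-- from the old bucket name to the new one.
UPDATE dvobject
SET storageidentifier = replace(storageidentifier,
                                's3://dataverse-test-data-oregon:',
                                's3://dataverse-test-oregon:')
WHERE storageidentifier LIKE 's3://dataverse-test-data-oregon:%';

-- Inspect the affected rows, then COMMIT (or ROLLBACK if anything looks wrong).
COMMIT;
```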

