Hi,
There are several approaches we could take. Normally you would upgrade through each intermediate version to capture all of the db and config changes; I have details on that below. However, since you have so many intermediate versions, that may be a lot of effort. Alternatively, you could install the latest v4.x and then import the data from v2.1.1. I am not 100% sure yet how portable this is, but you should be able to export datasets as DDI and then import those records, perhaps with some modifications. This approach would require you to add back all of the supporting pieces (files, dataverses, permissions), but it can be done.
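For a rough sense of the shape of that migration loop, here is a sketch; the export location, filename pattern, and especially the import call are placeholders, since the right v4.x import mechanism depends on which 4.x release you install:

    # Hypothetical sketch only -- paths and patterns are assumptions.
    for ddi in /data/dvn-studies/*/export_ddi*.xml; do
        echo "would import: $ddi"
        # placeholder: the v4.x import call goes here (check the v4 API guide)
    done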
Let us think about the import approach for a little bit, but in the meantime here are the instructions for upgrading versions:
You would need to upgrade to each intermediate version on your way to 4.x, unfortunately. To see upgrade instructions for individual versions from v2.x to v3.x, go here: https://sourceforge.net/projects/dvn/files/dvn/
I do have some additional information on upgrading directly from v2.0.5 to v3.0 that may be useful:
Here are the steps for upgrading a production v2.0.5 to v3.1.
High level steps:
I. Back up v2.0.5 db
II. Install the v3.1 prerequisites (glassfish v3.1, java v1.6.31+)
III. Run the v3.1 installer to create a blank, working v3.1 installation
IV. Upgrade v2.0.5 db
V. Adjust/transfer any customized configuration settings
I. Back up v2.0.5 db
It sounds like you are already doing this.
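For reference, a minimal dump/restore pair (the db name is illustrative):

    # Compressed, custom-format dump of the v2.0.5 db
    pg_dump -U postgres -Fc dvn_v205 > dvn_v205_$(date +%Y%m%d).dump
    # Roll back later, if needed, with something like:
    #   pg_restore -U postgres -d dvn_v205 --clean dvn_v205_YYYYMMDD.dump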
II. Install v3.1 prereq's
This can be done in parallel with the running v2.x installation. Will v3.1 be installed on a separate web server from the v2.x glassfish? If not, there may be a port 80 conflict if both are running.
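If both do end up on one machine, one workaround is to move the new installation's listener to a different port, e.g. (dotted name per glassfish v3.1; verify it with asadmin get first):

    # Move the new glassfish's HTTP listener off the conflicting port
    asadmin set server-config.network-config.network-listeners.network-listener.http-listener-1.port=8081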
III. Run the v3.1 installer to create a blank, working v3.1 installation.
If you have only one postgres installation and it is on the production machine, specify a different name for the db than the production name; otherwise the installer will write to the production db. You will repoint glassfish to the production db later, so this step just lets the installer complete its configuration.
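Something along these lines, with illustrative names (I believe the DVN connection pool is called dvnDbPool, but confirm against your domain config):

    # Give the installer a throwaway db so it cannot touch production
    createdb -U postgres dvn_v31_scratch
    # Later, repoint the glassfish JDBC pool at the production db
    asadmin set resources.jdbc-connection-pool.dvnDbPool.property.databaseName=dvn_production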
*Non-standard steps particular to Postgres v9.0*
After running the installer, remove the v8.3 JDBC driver and copy in the v9.x driver. Then remove the ejb-timer-service-app lock file and restart glassfish.
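Roughly, with illustrative paths and jar names (I believe the timer lock files live under the embedded Derby ejbtimer database, but verify against the install notes):

    cd /usr/local/glassfish3/glassfish/domains/domain1/lib
    rm postgresql-8.3*.jar                    # drop the bundled v8.3 driver
    cp /tmp/postgresql-9.0-801.jdbc4.jar .    # your v9.x jar name may differ
    # Clear the EJB timer lock so the timer service reinitializes
    rm databases/ejbtimer/*.lck
    asadmin restart-domain domain1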
When finished, verify you can log in, create a dv, study, etc.
IV. Upgrade v2.0.5 db
Point the v3.1 glassfish to the v2.0.5 db. Leonid specifies how to do this in his upgrade doc, linked below.
Redeploy the v3.1 DVN application. This will create additional tables.
Run the v2.0.5-to-v3.0 build update script, then the v3.0-to-v3.1 script (sketched below).
Restart glassfish
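Concretely, step IV might look something like this (pool, db, application, and script names are illustrative; Leonid's doc linked below is the authority):

    # Repoint glassfish at the v2.0.5 production db
    asadmin set resources.jdbc-connection-pool.dvnDbPool.property.databaseName=dvn_production
    # Redeploy the v3.1 application so it creates the additional tables
    asadmin deploy --force=true DVN-web.war
    # Apply the build update scripts in order
    psql -U dvnapp -d dvn_production -f upgrade_v2_0_5_to_v3_0.sql
    psql -U dvnapp -d dvn_production -f upgrade_v3_0_to_v3_1.sql
    asadmin restart-domain domain1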
V. Adjust/transfer any customized config settings
JVM options for study files, log files, and the lucene index. Probably only the study files option needs to be updated, since it should point to the existing directory structure; the others can probably stay at the defaults.
Drop/recreate the lucene index (example commands after this list).
Move any image files.
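For example (the JVM option name and the index path are from memory; double-check them against the v3.1 install docs before running anything):

    # Point the study-file option at the existing directory structure
    # (delete the old value first with asadmin delete-jvm-options if one exists)
    asadmin create-jvm-options "-Dvdc.study.file.dir=/data/dvn-studies"
    # Drop the lucene index so it is rebuilt against the upgraded db
    rm -rf /usr/local/glassfish3/glassfish/domains/domain1/config/index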
I did some digging and found a v2.0.5-to-v3.0 upgrade doc that Leonid produced; it is pointed to in the v3.0 install README on SourceForge. It goes into the above steps in more detail:
https://sourceforge.net/projects/dvn/files/dvn/3.0/dvnupgrade_v3_0.pdf
Let me know if this makes sense to you and if you have any questions or concerns at any step.
Kevin
We've been discussing the import approach and it should work but you would lose previous versions of a dataset. How many datasets do you have?
A developer will be contacting you to follow up, but it might make sense to move the discussion to our support system and post the conclusion here.
Do you mind sending an email to sup...@dataverse.org with your original request and a reference to this thread? I'll update the ticket with additional info.
OK, sounds good. FYI for reference, the old v3.6.2 guides can be found here:
http://guides.dataverse.org/en/3.6.2/
I'm not sure you'll need them, but they may help if you encounter something that is not covered by the other information we've provided.
Hi,
In v2.1.1 we did not provide APIs; export was instead a system administration function. I believe you can find the export under the network administrator tools/utilities when you log in as an administrator. The exported files should appear in each study's file directory. Export and harvest work together, so if I recall correctly, all public datasets are exported automatically on a nightly timer. Unpublished studies will not be exported.
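If you want to confirm the exports are there, something like this should surface them (the directory and filename pattern are illustrative):

    # Look for recently written DDI export files under the study file directory
    find /data/dvn-studies -name '*ddi*.xml' -mtime -2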
I hope this helps. Let me know if you have further questions.
Kevin