Dive sites HASH

Gummigroda

Jun 5, 2024, 9:33:46 AM
to Subsurface Divelog
Hi, 

I've written a script to create and update dive sites (as the XML import creates duplicates) directly in the git repo, following Linus' instructions: https://groups.google.com/g/subsurface-divelog/c/2Jwgn9LUjoo/m/aJ2rv6ffBgAJ
But after creating a bunch of dive sites (files under '01-divesites'), none of them show up in Subsurface. As far as I can tell, this might be due to the naming of the files.

I've simply named them 'Site-xxxxxxxx', appending a random 8-digit hex string. Is this correct, or do they need to be named in a particular way?
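
For reference, the naming part of my script does roughly this (a minimal Python sketch of what I described; the 'Site-' prefix is just my own convention):

import secrets

# Build a name like 'Site-9f3a0c7e': the 'Site-' prefix plus 8 random
# hex digits, exactly as described above - my own convention, not
# necessarily what Subsurface expects.
filename = f"Site-{secrets.token_hex(4)}"
print(filename)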

Or am I going about this the wrong way, i.e. by editing the files directly?
What I want to achieve is to have an external data source and be able to import it into Subsurface without creating duplicates.

Thanks!

Michael Keller

Jun 6, 2024, 7:56:18 AM
to subsurfac...@googlegroups.com
Hi Gummigroda.
I think doing this by directly modifying the git repository that is used
for the cloud storage is risky, as there is no verification of the
changes, and no easy way to undo if it goes wrong.

A better way would be to change your script to instead modify a
Subsurface log that has been stored in a local XML file. That way, the
user of the script can save their cloud storage log to a file, modify
the file with your script, and then open the file in Subsurface, which
will verify the validity of the file. Once they have confirmed that the
data is what they want, they can save the log back to the cloud storage
and thus get it updated.
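
As a rough illustration, the script could look something like this (a
Python sketch; the element and attribute names assume a saved log where
sites appear as <site uuid='...' name='...'/> entries under
<divesites>, and the file name and uuid are just examples - check both
against an actual saved file):

import xml.etree.ElementTree as ET

def merge_sites(log_path, new_sites):
    # new_sites maps uuid -> attributes,
    # e.g. {'003b9cd9': {'name': 'Example Reef'}}
    tree = ET.parse(log_path)
    root = tree.getroot()
    divesites = root.find('divesites')
    if divesites is None:
        divesites = ET.SubElement(root, 'divesites')
    existing = {site.get('uuid'): site for site in divesites.findall('site')}
    for uuid, attrs in new_sites.items():
        site = existing.get(uuid)
        if site is None:
            # unknown uuid: add a new site entry
            site = ET.SubElement(divesites, 'site', uuid=uuid)
        for key, value in attrs.items():
            # known uuid: update the entry in place instead of duplicating it
            site.set(key, value)
    tree.write(log_path, encoding='utf-8', xml_declaration=True)

merge_sites('divelog.ssrf',
            {'003b9cd9': {'name': 'Example Reef', 'gps': '10.000000 20.000000'}})

Because sites are matched on their uuid, running the script any number
of times leaves a single entry per site.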


Cheers

  Michael Keller

Gummigroda

Jun 10, 2024, 4:30:02 PM
to Subsurface Divelog
Hi,

The reason I want to do it through the repo is that it's rather easy to revert the changes, since it's git, and there are no further steps.

Exporting and importing the dive site log does not prevent the duplication of dive sites: each update/import adds another copy of every site. If I run this 10 times, I'll end up with a dive site library containing 10 duplicates of each site (unless it's exactly the same, I guess).

Is there no better way of doing this? Or perhaps a pointer to how the hex representation of a dive site is created? (I tried to look into the sources, but my C knowledge is very limited.)
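
What I'm really after is something deterministic, so that repeated imports of the same site end up with the same id. E.g. deriving it from the site name (purely illustrative - I have no idea whether this resembles what Subsurface does internally):

import hashlib

def site_uuid(name):
    # Illustrative only: derive a stable 8-hex-digit id from the site
    # name, so repeated imports of the same site yield the same id.
    # This is NOT necessarily how Subsurface computes its dive site ids.
    return hashlib.sha1(name.encode('utf-8')).hexdigest()[:8]

print(site_uuid('Example Reef'))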

Thanks!

Dirk Hohndel

Jun 10, 2024, 4:33:47 PM
to Subsurface Divelog
And the reason most of us on the other side of this would rather you didn't do it in the git repo is that it's not a straight git server on the other side: the cloud storage backend makes a few assumptions about how Subsurface handles its git repos. So by doing things that are "perfectly fine" with a git repo of your own, you can terminally damage the cloud storage for your account - and then there's nothing I can do to help you.

There's a reason why this isn't advertised as "git backend" and instead as "cloud storage".

/D

Michael Keller

Jun 10, 2024, 5:45:45 PM
to subsurfac...@googlegroups.com
Hi Gummigroda.

On Tue, 11 Jun 2024 at 08:30, Gummigroda <gummi...@gmail.com> wrote:
Exporting and importing the dive site log does not prevent the duplication of dive sites: each update/import adds another copy of every site. If I run this 10 times, I'll end up with a dive site library containing 10 duplicates of each site (unless it's exactly the same, I guess).

Yes, you are correct that exporting / importing can create duplicate sites.
That's why I said you should SAVE your cloud dive log to a file, and then OPEN the file containing the modified dive log - not export / import it.

Ngā mihi
  Michael Keller
--
GCS$/CC/E/IT d- s+ a C++ UL+++/S++ P L++ E-
W++ N o? K? w O(++) M-- V+ PS+ PE+ Y? PGP+ t
5? X R tv b++ DI++ D++ G e+++ h---- r+++ y+++