Is there a way to bulk update JSON files?

Vincent

Oct 13, 2015, 3:02:45 AM
to golden-cheetah-users
Hi,

I imported 3 years of .tcx and .fit files with success. I played around with my most recent data, and so far, everything seems fine.

I'd like to set the metadata for the whole period without having to click and copy/paste on every activity. I have the data in a .csv file, and I tried to generate .json files, filling in fields like "Notes", "Workout Code", "TSS" and the like in the "TAGS" section. I overwrote the original .json files with my modified copies, but GoldenCheetah doesn't seem to reload them.

Is there a way to tell GC to reload/reindex all the files, or another way to import all the metadata?
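In case it helps, here's roughly what I'm doing to merge the CSV into the TAGS section. A sketch in Python; the CSV column names are from my own file, and I'm assuming the "RIDE"/"TAGS" layout I see in GC's .json files:

```python
import csv
import io
import json

def merge_tags(ride, row, fields=("Notes", "Workout Code", "TSS")):
    """Copy selected CSV columns into the ride's RIDE/TAGS section."""
    tags = ride["RIDE"].setdefault("TAGS", {})
    for field in fields:
        if row.get(field):  # skip empty CSV cells
            tags[field] = row[field]
    return ride

# One ride and one CSV row, inlined for the example.
ride = {"RIDE": {"STARTTIME": "2012/06/01 08:00:00 UTC", "TAGS": {}}}
csv_text = "Notes,Workout Code,TSS\nEasy spin,REC,55\n"
row = next(csv.DictReader(io.StringIO(csv_text)))

merged = merge_tags(ride, row)
print(json.dumps(merged["RIDE"]["TAGS"], sort_keys=True))
```

In the real script I loop over every activity file and match rows by start time.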

Thanks,
Vincent.

Mark Liversedge

Oct 13, 2015, 3:05:42 AM
to golden-cheetah-users
https://groups.google.com/forum/#!topic/golden-cheetah-users/orBY1d3LKlU

GC will refresh the next time you start it after copying the files. It looks at each file's checksum and timestamp to decide whether the metrics, MMP, model estimates and tags need to be reloaded.

Mark

Vincent

Oct 13, 2015, 12:16:16 PM
to golden-cheetah-users
Thanks. That's obviously the first thing I tried, but restarting GC did nothing. I tried again this evening without any success.
I'm on GC 3.2 on Windows 7 64-bit.

I noticed a cache directory and emptied it before restarting GC a third time. The directory was refilled, but still no metadata.

Anyway, I think I found it. It did indeed reload the files, but wasn't happy with the content. GC seems to be sensitive to the attribute order, whereas the API I used to generate the JSON files isn't.

If, within RIDE, the attributes appear in the order STARTTIME, DEVICETYPE, INTERVALS, IDENTIFIER, SAMPLES, TAGS, RECINTSECS, it doesn't parse the file.
If I move the TAGS section between DEVICETYPE and INTERVALS, it loads the tags, but not the intervals.

This is really strange for a JSON object, but it's only a minor annoyance; I'll try to respect the attribute order in my file generation.

Thanks,
Vincent.

Karl Billeter

Oct 13, 2015, 8:07:44 PM
to Vincent, golden-cheetah-users
On Tue, Oct 13, 2015 at 09:16:15AM -0700, Vincent wrote:

> Anyway, I think I found it. It did indeed reload the files, but wasn't
> happy with the content. GC seems to be sensitive to the attribute order,
> whereas the API I used to generate the JSON files isn't.

Odd. I haven't looked at the parser, but I haven't noticed that issue. Have you
tried linting your files? Maybe a stray trailing comma or something.
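e.g. a quick check with Python's json module, which rejects trailing commas and similar slips (the file contents here are made up to show the failure):

```python
import json
import tempfile

def lint(path):
    """Return None if the file is strictly valid JSON, else a message."""
    try:
        with open(path, encoding="utf-8-sig") as fh:  # tolerate a BOM
            json.load(fh)
        return None
    except json.JSONDecodeError as exc:
        return "%s:%d:%d: %s" % (path, exc.lineno, exc.colno, exc.msg)

# A file with a stray trailing comma, like the one suspected above.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    fh.write('{"TAGS": {"TSS": "130 ",}}')
    bad = fh.name

report = lint(bad)
print(report)
```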

Karl

Vincent Fiack

Oct 14, 2015, 2:55:15 AM
to Karl Billeter, golden-cheetah-users
I'm pretty sure my files are valid JSON. I tried looking at your source code, but you're not using a JSON API; it looks like you rolled your own parser with something like lex, which isn't something I can read fluently :)

I think I can reproduce the error starting from an unmodified file and moving the TAGS section around. I can do some more tests this evening and send you two files, the original and the modified one, if you want to try to reproduce it.

Also, while we're on the topic, it's strange that you add a space at the end of each tag value,
e.g. "TSS": "130 " instead of "TSS": "130".

Mark Liversedge

Oct 14, 2015, 3:04:30 AM
to golden-cheetah-users, kbil...@gmail.com
That is because the tag names are user-definable, and there is nothing to stop you from selecting values that clash with symbols we use, so we put a space in there to prevent that clash. We don't use a generic JSON parser because we need high performance; our parser is typically 10x faster than a generic one.
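If the padding gets in your way when you read the tags back out, just strip it off again, e.g. (illustrative values):

```python
# Tag values as padded with a trailing space to avoid the symbol clash.
tags = {"TSS": "130 ", "Workout Code": "REC "}

# Remove the padding before using the values downstream.
clean = {key: value.rstrip() for key, value in tags.items()}
print(clean["TSS"])
```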

Send me a file and I'll see why it doesn't parse.

Mark

Vincent Fiack

Oct 14, 2015, 11:50:22 AM
to Karl Billeter, golden-cheetah-users
Alright, sorry for the faulty bug report. The error wasn't in the attributes' order. The file that didn't parse was encoded in UTF-8 without a BOM. When I edited it manually, I added the BOM, which is what made it work.
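For reference, writing the files with Python's "utf-8-sig" codec prepends the BOM automatically; a small check (using a temp file just for the example):

```python
import json
import os
import tempfile

data = {"RIDE": {"TAGS": {"TSS": "130 "}}}

# "utf-8-sig" prepends the EF BB BF byte order mark on write, which is
# what made GC accept the file here.
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(path, "w", encoding="utf-8-sig") as fh:
    json.dump(data, fh)

with open(path, "rb") as fh:
    raw = fh.read()

has_bom = raw[:3] == b"\xef\xbb\xbf"
print(has_bom)
```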

Karl Billeter

Oct 14, 2015, 6:40:15 PM
to Vincent Fiack, golden-cheetah-users
On Wed, Oct 14, 2015 at 05:50:15PM +0200, Vincent Fiack wrote:
> Alright, sorry for the faulty bug report. The error wasn't in the
> attributes' order. The file that didn't parse was encoded in UTF-8 without
> a BOM. When I edited it manually, I added the BOM, which is what made it
> work.

Ah, sorry, I should have remembered I needed to do that too for reading :-).
In Perl:

open(my $fh, '<:via(File::BOM)', $ride);

I'm currently writing without a BOM, but I don't think I have any UTF-8
chars... I should probably add it to my output (I'm not convinced you
_should_ need to, but that's another story...).


Karl

Yves Arrouye

Oct 27, 2015, 11:39:03 PM
to golden-cheetah-users, kbil...@gmail.com
http://www.unicode.org/versions/Unicode5.0.0/ch02.pdf

A BOM in UTF-8 is neither necessary nor recommended (there is no "byte order" when the encoding is byte-based). Maybe GC could be tolerant of a BOM but not require it?
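e.g. in Python, decoding with "utf-8-sig" gives exactly that tolerant behaviour, since it strips a leading BOM if present and is a no-op otherwise:

```python
import json

doc = '{"RIDE": {"TAGS": {"TSS": "130 "}}}'
with_bom = b"\xef\xbb\xbf" + doc.encode("utf-8")
without_bom = doc.encode("utf-8")

# Both variants decode to the same text, so the parser accepts either.
values = [json.loads(raw.decode("utf-8-sig"))["RIDE"]["TAGS"]["TSS"]
          for raw in (with_bom, without_bom)]
print(values)
```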

Karl Billeter

Oct 27, 2015, 11:55:32 PM
to Yves Arrouye, golden-cheetah-users
On Tue, Oct 27, 2015 at 08:39:03PM -0700, Yves Arrouye wrote:
> http://www.unicode.org/versions/Unicode5.0.0/ch02.pdf
>
> A BOM in UTF-8 is neither necessary nor recommended (there is no "byte
> order" when the encoding is byte based). Maybe GC could be tolerant of a
> BOM but not require it?

Good point. From the JSON RFC (RFC 7159, March 2014):


8. String and Character Issues

8.1. Character Encoding

JSON text SHALL be encoded in UTF-8, UTF-16, or UTF-32. The default
encoding is UTF-8, and JSON texts that are encoded in UTF-8 are
interoperable in the sense that they will be read successfully by the
maximum number of implementations; there are many implementations
that cannot successfully read texts in other encodings (such as
UTF-16 and UTF-32).

Implementations MUST NOT add a byte order mark to the beginning of a
JSON text. In the interests of interoperability, implementations
that parse JSON texts MAY ignore the presence of a byte order mark
rather than treating it as an error.


Karl