Have you found a resolution to this issue? I am experiencing the same issue with a web map I cannot view in Field Maps, with the same error code "Unable to Open JSON parser invalid token". I also cannot view the map in the new version of Collector, where I receive the error message "Unable to Open Map invalid call with current token type".
I do have an idea, but my solution may not be applicable to all. My maps in Field Maps also gave the same error code, "Unable to Open JSON parser invalid token", and I believe it has to do with a custom Arcade expression within the pop-ups, styling, or attribute table. I removed the Arcade expressions from each while within the web map, but was still unable to open the map in Field Maps. I had to remove the offending feature from the web map in AGOL, make all changes to the feature layer's default display and table, and then reload the feature into the map. I wager you could pinpoint exactly which Arcade expression Field Maps does not support, but my solution was acceptable for the issue at the time.
@PaulBarr This solution seems to have worked for me. However, any layers that were a SQL view did not load (a warning of 'Layer failed to load' shows up in the layer list). Any idea why the SQL view layers won't load in Field Maps?
@James_Whitacre_PGC and @PaulBarr I also have a SQL view layer, published to ArcGIS Server from an SDE connection, that is not loading in Field Maps with the same error. There are no errors in Server Manager when testing the service for Field Maps. This was working on 3/12/2024, and when I went back to use it today I got the error below in the S123 app:
Can you show the input bundle into the JSON parser when it runs automatically with a webhook? Just use the little output/input bundle icon when looking at the Operation 1. I see Data size is set to 0 and I wonder if your input bundle from the webhook is simply empty.
A few questions: why do you need to parse the JSON at all, when the data you are getting is already preformatted? The webhook config you have shown has JSON pass-through disabled, which means the data you are getting is already preformatted by the Webhook.
I also found it pretty difficult to work with; even the R tool ("fromJSON" from the rjson library) didn't do very well. I was able to convert it to a CSV using C#, but I am not sure if the result will be useful to you. This is based on -nested-json-to-csv. The resulting .csv is attached (hopefully).
Since most of us are not in that category of being able to code like he does, I've attached a workflow (containing the data set pointed at by @CharleyMcGee in his original post) that just uses basic Alteryx tools to get to a point where you have parsed out the JSON data. This way, hopefully, people aren't "scared away" from Alteryx by thinking they need to have that level of expertise.
It appears that the entire set of data is just one record. If there were multiple records, my assumption would be that there would be some indication of that in the code by which you could Group By in the Summarize tool (or you could create a batch macro that looks at multiple files individually that does what is attached). But in this case, the Summarize tool merely concatenates all of the data into a single VERY long string...which the JSON Parse tool can understand.
Anyway, I'd be interested in seeing a workflow that will automatically do all the transposing and cross-tabbing necessary to move from your output to a single table view (e.g. if the JSON were interpreted as a bunch of tables, then what you'd get by selecting everything from all the tables joined together). That's what I was having difficulty with, thus resorting to StackOverflow and C#.
RodL's example is a great starting point. It is CERTAINLY better than what I was getting. However, in this case, given this very specific data set, the C# code actually gets me where I need to be more readily. I can pretty much take that output, drop what I don't need, and then push to Postgres.
I still want to work out a means of doing this completely within Alteryx, if possible, because that data set will change periodically and it would just be simpler to have Alteryx reach out, grab the new JSON, parse it out, dump the unneeded fields and then push the fresh data up to Postgres. Doing all of that in one place means I can just kick off a single job (whether it be some kind of C# program or powershell script or python or an Alteryx workflow) and be done with it. I'd like for that to happen within Alteryx.
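If I do end up falling back to a script, the fetch-parse-drop-load chain could be sketched in plain Python. The payload shape and field names below are invented placeholders, not the actual data set; the idea is just to show that flattening nested JSON into rows is a few lines:

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Recursively flatten nested dicts into a single level of dotted keys."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

# Invented payload standing in for the real data set (in practice this
# would come from an HTTP request rather than a literal).
raw = '{"records": [{"id": 1, "attrs": {"name": "a", "unused": "x"}}]}'
data = json.loads(raw)

rows = [flatten(record) for record in data["records"]]
# Drop the fields that aren't needed before pushing to Postgres
# (e.g. via pandas.DataFrame(rows).to_sql(...) or psycopg2).
rows = [{k: v for k, v in row.items() if k != "attrs.unused"} for row in rows]
print(rows)  # [{'id': 1, 'attrs.name': 'a'}]
```

That said, keeping everything inside one Alteryx workflow would still be the cleaner single-job setup.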
As it was created not only to be a parser, but also to provide functionality that makes it easier to use the parsed data in a project, I am very open to suggestions for improving the library, as well as to keeping it updated with Tiled itself.
@RacheProu: Your issue was actually due to a bug in Tileson. I have now fixed the bug and added a new release, as the bug was pretty severe: it would happen in any case where you had a tile with nothing attached to it (no properties, collision data, etc.). The reason it has not been discovered until now is that the demo tileset actually has collision data on all tiles, which makes them all have a property. I have added unit tests for these cases to make sure they never happen again.
A little hint, though: you want to use the offsetX and offsetY that are in the example (and in your pastebin) to determine what part of the tileset you need to draw based on the tile. You can get the actual size of a tile from the map itself: map.getTileSize().
This way, you can load the part of the texture you want to display via the sprite:
mSprite.setTextureRect(sf::IntRect(offsetX, offsetY, map.getTileSize().x, map.getTileSize().y));
Also:
In your code, you should not call mTexture.loadFromFile(pathStr) inside the for-loop, but rather load it once after you have found your tileset. Then you want to call mSprite.setTexture() once as well, and use setTextureRect() to draw parts of the texture (tiles). You will need to render the sprite each time you use setTextureRect, though, or else you will end up drawing only the last one. You will need to make several changes to your code to make this happen.
@RacheProu: Using your code with your map as an example, it was pretty easy to spot that something was wrong. After a few debugging sessions and a small analysis of your map, it was not too hard to find the origin of the error.
@Shadow001: I now have a basic example committed, using SFML. It is a work in progress, and I have only verified that it compiles on Linux, but you can check out the code here: . To compile it on Windows, make sure you put the openal32.dll file in the executable directory. You can find that file under _libs/libs/win/release/msvc/sfml
It currently only draws images of tile layers, but I will add the rest later. I might be able to expand the example tonight, if I have the time. Also keep in mind that it only shows how to draw what Tileson has parsed, on the fly. If used in a game, you would use the Tileson data to generate game objects once, then have the objects themselves (or a component) do the drawing for you.
We are currently trying to get our feet wet in managing our own parsers in Chronicle. We have started with Virtru Email Encryption logs which are ingested as JSON. I have been through the documentation quite a bit now but feel that a little bit of help would go a long way. I found another post stating that we need to import as a single line, which I believe we can do. Here are the JSON fields we are trying to target in the log in BOLD:
I need to have the same test done, but with a different JSON, to show my boss that Rust (very fast) is the good choice.
in the example:
[url] [/url]
json_pull.rs or json_struct.rs
they use JSON like:
You will need to make a Rust struct that matches the layout of your JSON and derive #[derive(Deserialize)] for it. You can probably take an existing example and rename fields/adjust types to match yours. For nested objects, you will need to nest multiple Rust structs.
On my computer Rust takes 2.8 microseconds and Python takes 16.9 microseconds, so the Rust one is around 6 times faster. Keep in mind that there may be other considerations in choosing a language. In particular the Rust code here is many more lines of code, the two implementations behave differently in the case of malformed input, there may be a faster JSON library for Python than the one in the standard library, etc.
I'm not saying not to try! Just make sure the experience is a good one for all developers involved. Rust is fast, but so are many other languages. Make sure it fits your use case and the team's abilities.
Hello, I am trying to parse a JSON to a table with JSON Path (I've already tried JSON to Table, but it does not work because it freezes). I think the problem is with the JSON file's structure, because it does not have arrays.
In case you are more familiar with XML/XPath and want to investigate this route instead: strange that it did not work out for you; it did for me. Could you specify the error and what you did more closely? Which KNIME version are you using?
I finally solved it by using a couple of Python Source nodes, each of which runs a Python script that uses pandas to extract all the data I need, giving me a table on the output. I will try your solution, which is cleaner.
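For anyone with the same array-less JSON shape, the pandas step can be sketched like this (the keys and values are invented placeholders; a JSON of objects keyed by id maps to rows via orient="index"):

```python
import json
import pandas as pd

# Invented stand-in for an array-less JSON: one object per top-level key, no lists.
raw = '{"r1": {"ciudad": "Madrid", "valor": 3}, "r2": {"ciudad": "Sevilla", "valor": 5}}'
data = json.loads(raw)

# orient="index" turns each top-level key into one row of the table.
table = pd.DataFrame.from_dict(data, orient="index")
print(table)  # two rows (r1, r2) with columns ciudad and valor
```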
Yes, I'm using Windows 10 21H1 right now. I've tried re-executing the JSON Reader, but it is still not working. I don't really know if it has something to do with Spanish. I've also tried some other XMLs that I previously checked with some online validators, and the XPath does not work either.
I am working on some Windchill customization for a client. I wrote a Java program, called as an onpublish.afterloadermethod. I call some external REST web services and get back a JSON response, which I have to parse.
As Java 8 doesn't have a native JSON parser and I don't want to use a third-party Java JSON library (possible maintenance problems and additional work involved), I want to use some internal solution (like an internal JSON library used by the developers), which must exist inside Windchill and will be available in future Windchill versions.