I am taking in a simulation from another process written in existing software. It outputs a JSON package of a few tables (stored as arrays of objects). I am running an optimization in JuMP based on the simulation. The optimization runs in a few seconds, but reading the JSON and converting it into a DataFrame takes a long time; specifically, converting the large simulation into a DataFrame.
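For reference, the conversion step I am describing looks roughly like this. It is only a sketch: the file name, the :results key, and the column handling are stand-ins for whatever the upstream software actually writes.

```julia
using JSON3, DataFrames

# Parse the whole package, then build a DataFrame from one array-of-objects
# table. The file name and :results key are placeholders for the real payload.
json = JSON3.read(read("simulation.json", String))
rows = json[:results]                      # one table: a JSON array of objects
# One column per key; this per-row loop is the step that gets slow for large tables.
df = DataFrame(; (col => [row[col] for row in rows] for col in keys(first(rows)))...)
```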
I would assume LazyJSON would shine if you only need to access a subset of the JSON, but while testing the Unmarshal.jl package I got the impression that even if I unmarshal the whole object, LazyJSON can still outperform JSON; it depends on the size and complexity of the structure.
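The lazy-access pattern I have in mind is something like the following sketch; the "metadata" and "runs" field names are made up for illustration.

```julia
using LazyJSON

# Wrap the raw text lazily; values are only materialized when indexed,
# so touching a small subset avoids parsing the whole document.
doc = LazyJSON.value(read("simulation.json", String))
label = String(doc["metadata"]["label"])   # materialize just this one string
nruns = length(doc["runs"])                # scans only as far as it needs to
```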
Thought about that. I am not doing any date manipulations; the date is really just a key in the simulation and is used for grouping and aggregating results during the optimization. I figured a string was good enough.
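For example, the aggregation is nothing more exotic than a group-by on that string key; a minimal sketch, assuming :date and :value columns, would be:

```julia
using DataFrames

# Group by the date string and sum a result column; :date and :value are
# placeholder column names for the simulation output.
totals = combine(groupby(df, :date), :value => sum => :total)
```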
I know this is a tad dated at this point, but JSON3 recently had a new release that should drastically reduce memory usage when parsing default JSON (i.e. JSON3.read(input)), which should translate to JSONTables.jl using much less memory.
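With that release, the lower-memory path would look something like the sketch below, again assuming the array-of-objects table sits under a :results key.

```julia
using JSON3, JSONTables, DataFrames

# Default JSON3 parse, then hand the array of objects to JSONTables, which
# exposes it as a Tables.jl source that the DataFrame constructor can consume.
json = JSON3.read(read("simulation.json", String))
df = DataFrame(jsontable(json[:results]))
```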
Since your input is being read as JSON directly, you likely won't need the JSON Parse tool (if you were reading your .json file in as a raw text file, that is when you would need something like that). Once you've got the data in, you can begin to manipulate it. I also found a good post, with an example, that illustrates how to take the data and convert it into a more tabular format, if you're interested!
Hello, I am trying to parse a JSON file to a table with JSON Path (I've already tried JSON to Table, but it does not work because it freezes). I think the problem is with the JSON file structure, because it does not have arrays.
In case you are more familiar with XML/XPath and want to investigate this route instead: Strange that it did not work out for you; it did for me. Could you specify the error and what you did more closely? Which KNIME version are you using?
I finally solved it by using a couple of Python Source nodes; each one runs a Python script that uses pandas to extract all the data I need, and I get a table on the output. I will try your solution, which is cleaner.
Yes, I'm using Windows 10 21H1 right now. I've tried re-executing the JSON Reader, but it's still not working. I don't really know if it has something to do with Spanish. I've tried some other XMLs that I previously checked with some online validators, and the XPath does not work either.
Hi there, I'm new to Alteryx and this is my very first task. I'm trying to read a JSON file and have been following an example provided within Alteryx, but it's not working for me. I also found a suggestion in the community to concatenate all entities into one field and then use the JSON Parse tool, but that approach isn't working either. Can someone help me understand how to read this type of file?
How do you actually open and read a .json file into LabVIEW? I have the i3 JSON toolkit, but all the examples I've seen that use this generate their own data in LabVIEW rather than read it in from an existing file. I need to read in the data from an already existing .json file and am getting weird behavior when I try to use the LabVIEW file I/O functions. Can you give an example of the sequence of steps to get data from the .json file into LabVIEW?
One thing to keep in mind when parsing JSON text in LabVIEW 2013 is that LabVIEW now includes native JSON parsing functions, which handle many cases. See the following post for some of the JSON parsing options available in LabVIEW:
When generating, the following errors occur:
Cannot read json from resource /de/trekkings/tourbregaglia/data.json with error message unexpected end of JSON input
Cannot read json from resource /nl/trekkings/tourbregaglia/data.json with error message unexpected end of JSON input
Cannot read json from resource /trekkings/tourbregaglia/data.json with error message unexpected end of JSON input
There is only one way to achieve this: as InstallShield does not support JSON files, you have to do it manually through InstallScript, VBScript, or "Text File Changes", which InstallShield supports in addition to XML file changes. Using Text File Changes would be the best approach if the JSON files are not too complex or nested.
How you can achieve this is by keeping the JSON file with fixed values for each key at every build.
Now, in Text File Changes:
Suppose your JSON file looks like
{
I greatly appreciate your reply. My JSON is actually complex; it has roughly 20 values, so I was unable to use the Text File Changes option. Thankfully, I was able to achieve this by writing a .NET function and using the .NET DLL in the InstallShield project.
I want to test my server by sending raw JSON data. One of the values in my request body is a really big object, so it would be very helpful if I could read it from a JSON file instead of pasting a really big raw JSON string. Is that possible?
Hello @bpricilla, thank you so much for your reply. This is really helpful, but I am a bit confused. I need the email field in the request to get its content from an entire JSON file, which does not start with an array ([]).
Hello, I would like to use dask.bag to load a JSON file that is too big for my computer's memory. From my searching, it seems that if I have done things correctly, I should be able to read a small portion of the file at a time. Here is what I did:
My guess is that you need to talk with your data modeler and/or DBA about loading the nested data into a table. Is the database table already set up to handle the nested data? If so, then perhaps you have questions about Ab Initio with respect to vectors, normalization, or loops.
I got an intermediate solution. As suggested by the support team, I used the Read Multiple Files component, which helps read the .json file as one single record; then I am able to parse it with the DML generated by the json-to-dml utility.
Frustrated. Can't find a clear answer online.
I have a JSON file. I want my script to open the file, parse (eval?) it, and be able to spit out some info.
Any help here, or even links, is appreciated.
I'm not clear what specifically you are asking. There is technically nothing to parse because it's already JavaScript that can be manipulated using regular functions. I'm not aware of a generic library or code snippet out there that would simply convert the node structure to arrays or whatever you may need, but clearly there's enough reference out there on how to deal with this in web frameworks like Node.js, and the principles and methods could be adapted. Anything more specific will require much more detailed info on your part.
If the file you want to parse is relatively small (kilobytes in size), the methods listed above are great. However, the json2 library parser is extremely slow with bigger files. As long as you are sure the JSON is well formatted and "safe", this is the fastest way of doing it, N times faster, literally:
Unfortunately yes, that is my answer. I just talked to SAS and the JSON engine specialist says it cannot read that type of file.
Hopefully they have this on their to-do list, as it seems like only a small update would be needed to handle the shift to a new line. As I mentioned, if I have a file with just one line, the JSON engine works perfectly; it just doesn't know what to do after it hits the end of the first line.
That's a bit peculiar; most of the JSON files I have seen are formatted nicely to view in a text editor, so they have line endings. Is there perhaps a non-printing or special character in there? Maybe something happened when transferring the file, maybe Unix line endings on Windows or something like that. I'm really stabbing in the dark here; I can find nothing about line endings in JSON causing an issue. Can you perhaps post a few lines of the file, or the file itself?
Anyway, it seems that the return is something called JSONL, which the JSON engine seems to have issues with. I'm wondering if there's some type of option, maybe in a custom mapping file, to let it know it's JSONL vs. JSON.
I've seen these types of JSONL files before -- you could use a SAS DATA step to pass through the file and break up the larger collection into individual files, then point the JSON engine at each of those. Then rejoin the data in a DATA step or SQL.
That's an interesting thought. Read it in line by line, output each line to a temp file, then use the JSON engine to read that file and append it to the current dataset. The only issue I can see is making sure I create a map file that has the max length of each field.
You can use a plugin, or alternatively you can create an endpoint (backend workflow), make the call from there, and return the JSON as plain text. Then you can call your endpoint using the API Connector, and it should be parsed automatically.
This hack of API mappings brings the whole thing together because, once you have everything set up, you can work with the JSON packets and codebase output as if they were native Bubble Things. You end up with a best-of-both approach, although of course you have to structure things in a hacky Bubble way.
I'm wondering if there will be "race conditions" if I use the Dropbox API to read and write JSON. There are no individual users - authorization is via access token. When a user comes to the site, they'll type in their name and take a quiz. When they click submit at the end of the quiz, the user's name and score will be saved to a master JSON file. My plan was to do this by reading the file, appending the user info, then overwriting the entire file.