--
You received this message because you are subscribed to
the "Pick and MultiValue Databases" group.
To post, email to: mvd...@googlegroups.com
To unsubscribe, email to: mvdbms+un...@googlegroups.com
For more options, visit http://groups.google.com/group/mvdbms
General response:
I've spoken with a few colleagues about this and the consensus seems to be a combination of these factors:
- Just doing JSON, like XML, is dirt simple as long as the sample data is as simple.
- MV's three-level limit (@AM, @VM, @SM) is an artificial constraint on direct serializing and de-serializing with JSON and XML. The easy and obvious case doesn't stand up to the fairly common cases where we need more levels. It becomes apparent early on (for most people, I guess) that this kind of work can't be done with a 1-for-1 structure translation, and that's what makes it more complex.
- More complexity requires more rigorous code, and few people have undertaken this effort.
- Parsing data isn't so much of a problem, but it's a huge challenge to create a general-purpose utility that parses data which then needs to be stored into the file system.
- For those who have done this, the code is now a valuable asset and can't just be given away as FOSS.
- Even if someone is inclined to FOSS it, most of this code wasn't written to be an example of elegance, and it probably isn't offered up simply because it's too convoluted.
- Such code is a constant work-in-progress, and opening it up to a wider audience (especially this one) is more likely to draw criticism about what it doesn't do than collaboration from people offering to enhance it.
- As with a few others I've spoken to, I'd write and publish a JSON or XML parser/builder if only there were a paying market for it. But people here are so reluctant to pay for what they use that there is a never-ending list of things we Don't have in spite of what we Could have. And then people look at this market and say, "What? It doesn't have THIS?" It's not the fault of the technology; it's the people who use it.
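To make the three-level limit concrete, here is a minimal sketch (mine, not from the thread) of a direct 1-for-1 translation from an MV dynamic array to nested lists. One nesting level per delimiter means the translation simply runs out of delimiters at level four:

```python
# Illustrative sketch: each MV delimiter yields exactly one level of nesting,
# so a direct structure translation caps out at three levels.
AM, VM, SM = "\xFE", "\xFD", "\xFC"  # attribute, value, subvalue marks

def mv_to_lists(record: str):
    """Split an MV dynamic array into nested Python lists, one level per delimiter."""
    return [[v.split(SM) for v in attr.split(VM)] for attr in record.split(AM)]

rec = AM.join([VM.join(["a", SM.join(["b1", "b2"])]), "c"])
print(mv_to_lists(rec))
# A JSON document with four or more levels has no fourth delimiter to map onto,
# so it cannot round-trip through this kind of direct translation.
```

Going the other direction (nested JSON deeper than three levels into a dynamic array) is exactly where the 1-for-1 approach breaks down and a real mapping layer is needed.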
I tend to push MV data into a middle tier where I then massage it for transport elsewhere. For my purposes it's OK to move relatively flat MV data into a strongly typed class and then serialize it in one line with a common non-MV library. The same goes for the other direction. Insisting on doing everything inside the MV box limits options. That said, I agree it would be nice to have utilities like this, especially if they were usable across all platforms; with them, some of the things I now do outside of the box I would probably do more directly.
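As a sketch of that middle-tier pattern (the Customer class and its fields are hypothetical, invented for illustration): a flat MV record lands in a typed object, and a stock library serializes it in one line.

```python
# Hypothetical middle-tier sketch: flat MV record -> typed class -> JSON in one line.
import json
from dataclasses import dataclass, asdict

AM = "\xFE"  # attribute mark

@dataclass
class Customer:          # invented example type, not from the original post
    cust_id: str
    name: str
    balance: float

def from_mv(record: str) -> Customer:
    """Map a flat three-attribute MV record onto the typed class."""
    cust_id, name, balance = record.split(AM)
    return Customer(cust_id, name, float(balance))

rec = AM.join(["1001", "Acme Ltd", "250.75"])
print(json.dumps(asdict(from_mv(rec))))  # the one-line serialization step
```

The point is that all the JSON plumbing lives in a common, well-tested library outside the MV box; the only MV-specific code is the small mapping function.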
T
From: Kevin King
I know of no such standard. In fact, even outside of MV I haven't seen any cross-vendor standards describing naming conventions (à la EDI).
We use JSON as the standard transport format when moving information out of Unidata/Prelude to the web, using name/value pairs for input and nested objects for output. Each request is structured slightly differently based on the appropriate response for each request.
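The shapes described above might look something like this (an invented illustration; the operation and field names are not from the original post): flat name/value pairs on the way in, nested objects on the way out.

```python
# Hypothetical request/response shapes: flat pairs in, nested objects out.
import json

request = {"op": "GET.ORDER", "orderId": "A1234"}  # flat name/value input

response = {                                        # nested object output
    "order": {
        "id": "A1234",
        "lines": [
            {"item": "WIDGET", "qty": 2},
            {"item": "GADGET", "qty": 1},
        ],
    }
}

print(json.dumps(request))
print(json.dumps(response))
```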
Kevin Powick wrote:
There is no standard, not in any database. ORM means you can create objects that are usable throughout your business without regard to how that data is actually persisted.
So think about how you want your JSON objects to be used and then do your mapping (maybe instead of ORM it's OmvM ;)
--
{"Names": ["Charlie","Linus","Lucy"],"Limits": [100,200,300]}
I wrote a Tip on this that was published in our MV community. I will hunt it up and repost it here.