
Static JSON parser for embedded


pozz

May 17, 2019, 3:20:26 AM
I'd like to use JSON documents in my embedded project. I don't need full
parsing capabilities, because JSON could be very complex in general.

However I will use simple C structs that are equivalent to JSON
messages, for example:


struct myStruct {
    unsigned int id;
    char name[32];
    int array[8];
};

{
    "id": 12345,
    "name": "John",
    "array": [ -3, 7, 8 ]
}


I'm searching for a static converter from JSON to C struct, some sort
of preprocessor that generates C code from a C struct definition.

The parse function prototype could be:

int json_parse(const char *s, struct myStruct *dst);

If json_parse() returns 0, the result is in dst and I can access all
the members of the struct in a simple way in C code.

The parsing code should ignore JSON members not defined in the struct
(reserved for future use), while still parsing the well-known members.
For example, the function should work well with this JSON string:

{
    "id": 12345,
    "name": "John",
    "nickname": "mynick",
    "array": [ -3, 7, 8 ]
}

Another feature could be to mark a member as mandatory or optional,
maybe through comments:

struct myStruct {
    unsigned int id;
    char name[32];
    char nickname[32]; //@ json_optional
    int array[8];
};

In this case the generated code could define the real struct as:

struct myStruct {
    unsigned int id;
    char name[32];
    char nickname[32]; //@ json_optional
    bool is_nickname_present;
    int array[8];
};

Do you know of such a code generator?

Jorgen Grahn

May 17, 2019, 4:48:33 AM
On Fri, 2019-05-17, pozz wrote:
> I'd like to use JSON documents in my embedded project. I don't need full
> parsing capabilities, because JSON could be very complex in general.

Nitpicking:

Doesn't "JSON could be very complex in general" imply that you /do/
need full parsing capabilities, rather than the opposite?

How embedded is the project? I'm asking because lots of embedded
systems can easily run a JSON parser these days.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

pozz

May 17, 2019, 5:06:18 AM
On 17/05/2019 10:48, Jorgen Grahn wrote:
> On Fri, 2019-05-17, pozz wrote:
>> I'd like to use JSON documents in my embedded project. I don't need full
>> parsing capabilities, because JSON could be very complex in general.
>
> Nitpicking:
>
> Doesn't "JSON could be very complex in general" imply that you /do/
> need full parsing capabilities, rather than the opposite?

The application will receive messages created by me, so I have full
control of "JSON shape". I will never create messages that are too
nested, for example.

> How embedded is the project?

NXP LPC1768, 512kB Flash memory, 64kB RAM memory (that is almost full).
I can't use dynamic memory.

> I'm asking because lots of embedded
> systems can easily run a JSON parser these days.

Unfortunately I'm not using embedded Linux with big memories.

Bart

May 17, 2019, 6:40:08 AM
On 17/05/2019 08:20, pozz wrote:
> I'd like to use JSON documents in my embedded project. I don't need full
> parsing capabilities, because JSON could be very complex in general.
>
> However I will use simple C structs that are equivalent to JSON
> messages, for example:
>
>
> struct myStruct {
>   unsigned int id;
>   char name[32];
>   int array[8];
> };
>
> {
>   "id": 12345,
>   "name": "John",
>   "array": [ -3, 7, 8 ]
> }
>
>
> I'm searching a static converter from JSON to C struct, some sort of
> preprocessor that generates C code from C struct definition.
>
> The parse function prototype could be:
>
>   int json_parse(const char *s, struct myStruct *dst);
>
> if json_parse() returns 0, the result is in dst and I can access all the
> members of the struct in a simple way in C code.

....

> NXP LPC1768, 512kB Flash memory, 64kB RAM memory (that is almost full).
> I can't use dynamic memory.

It's not clear whether you are looking for one or two things, and
whether it/they run on this device, or on some development PC.

For example, the json_parse function sounds like it converts JSON data
to binary, arranged according to some C struct layout (but how does it
know that "12345" is unsigned int, or how big 'int' is?)

While a JSON to C converter generates source code that then needs a C
compiler to do anything with.

You say elsewhere that you generate the JSON data yourself; in that case
can't you create the matching struct definitions at the same time?

Note that a function such as:

int json_parse(const char *s, struct myStruct *dst);

will presumably /only/ be able to parse data that matches that one
struct, and nothing else. And will only parse one struct, not a sequence
of them.

Or are you really looking for something like this (which runs on the
device):

int json_parse(char* s, J_Descr d, void* dst);

Where the extra parameter d /describes/ the shape and types of the data
expected? Here, while the function is general, at each call point you
must know what shape struct is expected, and provide a suitable d and
a pointer to a suitable struct.

pozz

May 17, 2019, 7:19:55 AM
On 17/05/2019 12:39, Bart wrote:
> On 17/05/2019 08:20, pozz wrote:
>> I'd like to use JSON documents in my embedded project. I don't need
>> full parsing capabilities, because JSON could be very complex in general.
>>
>> However I will use simple C structs that are equivalent to JSON
>> messages, for example:
>>
>>
>> struct myStruct {
>>    unsigned int id;
>>    char name[32];
>>    int array[8];
>> };
>>
>> {
>>    "id": 12345,
>>    "name": "John",
>>    "array": [ -3, 7, 8 ]
>> }
>>
>>
>> I'm searching a static converter from JSON to C struct, some sort of
>> preprocessor that generates C code from C struct definition.
>>
>> The parse function prototype could be:
>>
>>    int json_parse(const char *s, struct myStruct *dst);
>>
>> if json_parse() returns 0, the result is in dst and I can access all
>> the members of the struct in a simple way in C code.
>
> ....
>
> > NXP LPC1768, 512kB Flash memory, 64kB RAM memory (that is almost full).
> > I can't use dynamic memory.
>
> It's not clear whether you are looking for one or two things, and
> whether it/they run on this device, or on some development PC.

Yes, it was unclear in my request. I have a development machine with a
C cross-compiler for a target machine (NXP-based).

I was thinking of a "JSON compiler" that runs on the development
machine and generates C source code for the cross-compiler.


> For example json_parse function sounds like it converts JSON data to
> binary, arranged according to some C struct layout (but how does it know
> that "12345" is unsigned int, or how big 'int' is?)

Bad example, I'm sorry.
"12345" *is* a string in JSON and will be a string in C struct.


> While a JSON to C converter generates source code that then needs a C
> compiler to do anything with.

Yes, of course. I hope it's clear now.


> You say elsewhere that you generate the JSON data yourself; in that case
> can't you create the matching struct definitions at the same time?

Yes, I start by defining a JSON message and an equivalent C struct.
What I need is a C function that parses the JSON message, checks its
correctness against the reference struct, and fills in the members.

Of course, if I define N JSON messages, I will have N parsing
functions with N structs.


> Note that a function such as:
>
>     int json_parse(const char *s, struct myStruct *dst);
>
> will presumably /only/ be able to parse data that matches that one
> struct, and nothing else. And will only parse one struct, not a sequence
> of them.

Maybe the name of the function would be json_parse_myStruct(). For
another struct I will have a *different* function json_parse_myStruct2().


> Or are you really looking for something like this (which runs on the
> device):
>
>    int json_parse(char* s, J_Descr d, void* dst);
>
> Where the extra parameter d /describes/ the shape and types of the data
> expected? Here, while the function is general, at each call point you
> must know what shape struct is expected, and provide a suitable d and
> a pointer to a suitable struct.

Yes, this is another approach, a "dynamic" approach, because the parser
must be ready to expect a generic JSON message, structured as d describes.


Even if my approach is "static", it should be future-proof. It should
ignore unknown keys (maybe they will be added in future versions)
without errors.

And it should be flexible, because JSON messages could be generated by
different sources that don't guarantee the order of the keys. So the
parser should detect known keys/values in whatever order they appear in
the message.


My idea comes from Protocol Buffers[1] from Google. They have a
"protobuf compiler" that converts the message descriptor file (.proto)
into suitable code.

[1] https://developers.google.com/protocol-buffers/

Jorgen Grahn

May 17, 2019, 7:23:02 AM
Ah, then the question makes perfect sense.

Sorry I don't have the answer. I don't even /like/ JSON (especially
since I learned you can't have comments in it, so it's useless for
config).

Best would be if you could remove the requirement to use JSON. This
embedded device clearly has more important tasks it is better suited to.

Thiago Adams

May 17, 2019, 7:39:08 AM
I have the same need and I can share what I have so far
to implement the runtime of this generator.

- Stream (supports UTF-8 files or strings)
- Scanner that emits JSON tokens.

Do you want to parse a fixed JSON? For instance, do you always
have the JSON with the same layout and order as your struct?

I have this exact need as well, but I also have situations where
the user can edit the JSON (like a configuration file), so the
order can change and some items may be missing.

For this case (where the order can be modified) I need to
search for the property name. For that task, I created the
'C string switch generator'
http://thradams.com/switchgenerator.html
so I don't need to use a dictionary structure.

Basically I have everything we need for the runtime, or to
implement this by hand.

The generator could be written in JS or C and run online,
just like the 'C string switch generator'.





Thiago Adams

May 17, 2019, 7:41:10 AM
On Friday, May 17, 2019 at 8:23:02 AM UTC-3, Jorgen Grahn wrote:
...
> Sorry I don't have the answer. I don't even /like/ JSON (especially
> since I learned you can't have comments in it, so it's useless for
> config).

Just ignore comments in your parser.

Thiago Adams

May 17, 2019, 8:09:49 AM
I think the best approach is to create a json_scanf; then you
don't need a generator. A generator could lead to an increase in
code size: ParseStruct1, ParseStruct2, etc. In my case I run this
on a desktop, where ParseStruct1, ParseStruct2 can have better
performance and code size is not so critical.


struct myStruct {
    unsigned int id;
    char name[32];
    int array[8];
};

For the scanf approach you could do:

json_scanf("{\"id\": %d , \"name\" : %s, \"array\": [%d]}",
           &obj.id,
           obj.name,
           obj.array);

---

I put together the code I have to parse a fixed JSON.
This is how the parse function for your sample looks:



int json_parse(const char* s, struct myStruct* dst)
{
    struct JsonScanner scanner = JSONSCANNER_INIT;
    JsonScanner_Attach(&scanner, s);

    enum JSTokens tk = JsonScanner_Match(&scanner);
    if (tk == TK_JS_LEFT_CURLY_BRACKET)
    {
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_STRING);
        assert(strncmp("id", &scanner.Stream.data[scanner.LexemeStart], 2) == 0);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_COLON);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_NUMBER);
        dst->id = atoi(&scanner.Stream.data[scanner.LexemeStart]);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_COMMA);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_STRING);
        assert(strncmp("name", &scanner.Stream.data[scanner.LexemeStart], 4) == 0);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_COLON);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_STRING);
        strncpy(dst->name, &scanner.Stream.data[scanner.LexemeStart], scanner.LexemeSize);
        dst->name[scanner.LexemeSize] = 0;
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_COMMA);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_STRING);
        assert(strncmp("array", &scanner.Stream.data[scanner.LexemeStart], 5) == 0);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_COLON);
        tk = JsonScanner_Match(&scanner);

        assert(tk == TK_JS_LEFT_SQUARE_BRACKET);
        tk = JsonScanner_Match(&scanner);

        int i = 0;
        while (tk == TK_JS_NUMBER)
        {
            dst->array[i] = atoi(&scanner.Stream.data[scanner.LexemeStart]);
            tk = JsonScanner_Match(&scanner);
            if (tk != TK_JS_COMMA)
                break;
            tk = JsonScanner_Match(&scanner);
            i++;
        }

        assert(tk == TK_JS_RIGHT_SQUARE_BRACKET);
        tk = JsonScanner_Match(&scanner);
    }

    return 1;
}



pozz

May 17, 2019, 8:10:22 AM
The device talks to AWS, mobile app, web app... and so on. In this
world, JSON is almost a standard.

pozz

May 17, 2019, 8:11:05 AM
I think Jorgen is thinking of JSON that must be parsed by standard
JSON parsers, which don't ignore comments and exit with an error.

Thiago Adams

May 17, 2019, 8:15:57 AM
And the code can be modified to be more readable, and you don't need
a generator because it is simple to describe what you need:

ReadInteger(scanner, "id", obj->id);
ReadString(scanner, "name", obj->name);
ReadArray(scanner, "array", obj->array);

You could also add:

ReadIntegerOptional(scanner, "id", obj->id, defaultValue);

etc


Thiago Adams

May 17, 2019, 8:18:32 AM
Yes. But many people think comments are useful and are adding them
to their JSON parser libraries as well. It is not standard, but it
is a common feature added to JSON. So I believe this is not a problem.

Bart

May 17, 2019, 10:06:25 AM
OK, understood. You have one or more bits of C that describe a C
struct. You want a tool running on a development machine that takes
that C and generates a C program which parses JSON text input matching
that struct, fills in the values in an instance of the struct, and
runs on the embedded device.

Sorry, I don't know of any existing solutions, although there are one or
two things that come up on google (I guess you've already tried that
with 'JSON parser generator for C' etc). But there are already programs
that take whole language grammars as input, and produce complete
language parsers as output.

This requirement is much simpler, but is not trivial. For a start, you
need the best part of a C compiler to process the C struct (to cope with
includes, macros, typedefs, ifdefs, types etc), unless the struct
description is highly stylised. Or you don't use C.

The output can be a specific parser function, but that also would rely
on a more general library that provides, among other things, the
tokenising for JSON text data.

It'd make quite an interesting project actually, although my experience
(based on XML) suggests that a deceptively simple format like JSON isn't
quite so simple in practice.

pozz

May 17, 2019, 10:37:49 AM
Yes.


> Sorry, I don't know of any existing solutions, although there are one or
> two things that come up on google (I guess you've already tried that
> with 'JSON parser generator for C' etc).

Yes, without success.


> But there are already programs
> that take whole language grammars as input, and produce complete
> language parsers as output.
>
> This requirement is much simpler, but is not trivial. For a start, you
> need the best part of a C compiler to process the C struct (to cope with
> includes, macros, typedefs, ifdefs, types etc), unless the struct
> description is highly stylised. Or you don't use C.

Too difficult. It would be better to describe the C struct in a
different, simpler language and let the tool generate the C struct
definition and the parser code.
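For instance, such a description file might look like this (the syntax is purely invented here, in the spirit of a .proto file), with the tool emitting both the struct definition and its parse function from it:

message myStruct {
    uint32 id;
    string name[32];
    optional string nickname[32];
    int32  array[8];
}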


> The output can be a specific parser function, but that also would rely
> on a more general library that provides, among other things, the
> tokenising for JSON text data.

Yes, of course.


> It'd make quite an interesting project actually, although my experience
> (based on XML) suggests that a deceptively simple format like JSON isn't
> quite so simple in practice..

I agree. It isn't so simple in practice.

Another good thing is incremental parsing. Many times the JSON string
isn't completely in memory; it is read in parts. It would be nice to
call the parsing function repeatedly with new parts until the JSON
string ends.

Consider a big JSON file stored on an external I2C or SPI memory (not
on the same bus as the processor). The application could read it in
128-byte parts and feed the parser, saving a lot of memory.

Rick C. Hodgin

May 17, 2019, 10:38:45 AM
On Friday, May 17, 2019 at 3:20:26 AM UTC-4, pozz wrote:
> I'd like to use JSON documents in my embedded project. I don't need full
> parsing capabilities, because JSON could be very complex in general.

If you have an embedded project, choose another structure. One
which is directly traversable with pointers.

> However I will use simple C structs that are equivalent to JSON
> messages, for example:
>
>
> struct myStruct {
> unsigned int id;
> char name[32];
> int array[8];
> };
>
> {
> "id": 12345,
> "name": "John",
> "array": [ -3, 7, 8 ]
> }
>

Instead, encode as:

Offset  Length  Content
0       5       [4:tag offset][1:len]
5       5       [4:data offset][1:type=int]
10      5       [4:tag offset][1:len]
15      6       [4:data offset][1:type=char*][1:len]
21      5       [4:tag offset][1:len]
26      6       [4:data offset][1:type=int array][1:elements=8]
32      4       [4:terminator = 0]

36      "id"
38      [integer 12345]
42      "name"
46      "John"
52      "array"
57      [integers -3, 7, 8, 0, 0, 0, 0, 0]
// Total = 89 bytes

It would allow you to traverse the header top-down, adjusting only
based on whatever options there are for the 1:type field on every
alternate row. Everything else is directly computed by offset into
the data.

It can be encoded as mime64 for transport, and decoded for use,
so it's all text, able to go over a web server or whatever.

An algorithm would encode it. An algorithm would decode it. And,
it would be very fast, low-overhead, small code base, and concise.

--
Rick C. Hodgin

Malcolm McLean

May 17, 2019, 11:05:55 AM
Instead of using C structs with member names matching the JSON "name"
fields, it might be easier to generate something like the following:

struct json_element
{
    char *name;                 // can be null if the field is part of an array
    struct json_element *next;  // next field
    struct json_element *child; // for arrays and objects
    char *string_value;         // only valid if the type matches. Could use
    double real_value;          // a union to save a few bytes
    int type;                   // NUMERICAL, STRING, ARRAY, OBJECT
};

Now of course you need to write a parser to parse the parsed JSON.

For many applications, you have a flat structure of strings and scalars,
and you know in advance what the field names will be for valid data.
So the access functions are then quite easy to write.

Rick C. Hodgin

May 17, 2019, 11:15:29 AM
On Friday, May 17, 2019 at 11:05:55 AM UTC-4, Malcolm McLean wrote:
> Instead of using C structs with names in C struct member name scope
> matching the JSON "name" fields, it might be easier to generate something
> like the following
>
> struct json_element
> {
> char *name; // can be null if the field is part of an array
> struct json_element *next; // next field
> struct json_element *child; // for arrays and objects
> char *string_value; // only valid if the type matches. Could use
> double real_value; // a union to save a few bytes
> int type; //NUMERICAL, STRING, ARRAY, OBJECT
> };
>
> Now of course you need to write a parser to parse the parsed JSON.
>
> For many applications, you have a flat structure of strings and scalars,
> and you know in advance what the field names will be for valid data.
> So the access functions are then quite easy to write.

My thinking is that for an embedded app, you don't have to stick
with generic interoperability standards between remote systems, but
rather are free to enter proprietary domains of efficiency and need.

However, I read later in the thread that he is looking to interact
with other machines that use JSON, so he has a constraint that
makes my solution non-viable in this case.

I think there should be a standard similar to what I posted above,
one which uses offsets from the start of the data, and one which
allows direct traversal, and to use that ability in lieu of JSON.
Having all that extra parsing is just inviting disaster and using
additional power that's not necessary.

That format above could be converted to a JSON format for physical
examination by a person, but computers should not have to deal with
text.

Just my opinion / philosophy on the subject.

Note also, that's why I created BXML (Binary XML), which allows for
tags to have an equal sign after them to indicate how long their
data is, so you don't have to fully parse, but can partially parse
and otherwise traverse:

<xml name:4=Rick age:2=49 occupation:5=human/>

In this example, you parse <, xml, name, and based on the :4,
you directly copy Rick, and ignore the space, parse age:2, and
directly copy the 49, and ignore the space, parse occupation:5
and directly copy human, and encounter / and parse >. It does
allow for faster parsing, as well as the embedding of large
amounts of data in an XML structure, such as sound data, graphics
or video data, such as:

<song title:14=Old, Old Story mp3:398321=.../>

The mp3 data of 398,321 bytes is directly encoded within. No
time to parse. No ridiculous CDATA sections, etc.

The idea I posted above for pozz takes that a step further to
allow more direct parsing.

--
Rick C. Hodgin

Lew Pitcher

May 17, 2019, 11:53:51 AM
Malcolm McLean wrote:

> On Friday, 17 May 2019 15:38:45 UTC+1, Rick C. Hodgin wrote:
>> On Friday, May 17, 2019 at 3:20:26 AM UTC-4, pozz wrote:
>> > I'd like to use JSON documents in my embedded project. I don't need
>> > full parsing capabilities, because JSON could be very complex in
>> > general.
>>
>> If you have an embedded project, choose another structure. One
>> which is directly traversable directly with pointers.
>>
>> > However I will use simple C structs that are equivalent to JSON
>> > messages, for example:
[snip]
>> An algorithm would encode it. An algorithm would decode it. And,
>> it would be very fast, low-overhead, small code base, and concise.
>>
> Instead of using C structs with names in C struct member name scope
> matching the JSON "name" fields, it might be easier to generate something
> like the following
>
> struct json_element
> {
> char *name; // can be null if the field is part of an array
> struct json_element *next; // next field
> struct json_element *child; // for arrays and objects
> char *string_value; // only valid if the type matches. Could use
> double real_value; // a union to save a few bytes
> int type; //NUMERICAL, STRING, ARRAY, OBJECT
> };
>
> Now of course you need to write a parser to parse the parsed JSON.

Perhaps a place to start would be with an existing (simple) json parser.
A quick google search brought up
https://gist.github.com/justjkk/436828/
which is a sample json parser written in Lex and Yacc (compilable to C
source), licenced with an MIT open licence.

[snip]


--
Lew Pitcher
"In Skills, We Trust"

Bart

May 17, 2019, 12:56:33 PM
I think I'll have a go at doing this, although I don't know if I'll come
up with anything that will be of any use to you.

But some questions about the spec:

* Tags in JSON can be in any order?

* Unknown tags to be ignored?

* Missing tags to be ignored? (Allowing both of these I think allows any
JSON object to match any struct)

* No char* fields? (These might need heap memory)

* Float fields needed?

* Will arrays fields be only 1D, or can be 2D and 3D?

Rick C. Hodgin

May 17, 2019, 1:21:18 PM
On Friday, May 17, 2019 at 12:56:33 PM UTC-4, Bart wrote:
> I think I'll have a go at doing this, although I don't know if I'll come
> up with anything that will be of any use to you.
>
> But some questions about the spec:
>
> * Tags in JSON can be in any order?
>
> * Unknown tags to be ignored?
>
> * Missing tags to be ignored? (Allowing both of these I think allows any
> JSON object to match any struct)
>
> * No char* fields? (These might need heap memory)
>
> * Float fields needed?
>
> * Will arrays fields be only 1D, or can be 2D and 3D?

Might make a good comp.lang.c coding challenge.

--
Rick C. Hodgin

Bart

May 17, 2019, 1:43:47 PM
Maybe, but the whole thing is rather open-ended. Even forgetting about
converting a C struct to C parsing code, just the parsing part, even for
a struct format that is hard-coded, is full of choices that need to be made.

The specifications are not clear enough (the above is just part of it).

Having data come in in any order makes it entirely different: reading an
expected string or number is easier than reading an unknown value of half
a dozen type categories. What about underflow in array elements? Too
many array elements? Mixed types within an array? (All are possible in
JSON data.)

How to distinguish between 'char[32]' which can be initialised with a
string, and one which is an array of integers, or allow either?

It is rather a can of worms. (I'm doing it in a soft language first to
see what is possible.)

The OP added a few extra bits in as well, like /adding/ optional flag
members to the struct, with the implication that the struct definition
now additionally has to be part of the generated code. (Itself full of
problems, if the struct has dependencies on thousands of lines of C
headers; you don't want to drag in all that.)

(The way it's coming out, the struct definition doesn't need to be
visible in the generated parser; it just needs tables of names, types,
offsets and sizes.)

Thiago Adams

May 17, 2019, 1:58:23 PM
On Friday, May 17, 2019 at 2:43:47 PM UTC-3, Bart wrote:
...
> The specifications are not clear enough (the above is just part of it).

I have the same problem, and it is very clear to me.

Problem 1:
- Read a configuration file. The user can edit, remove, and change
the order.

Problem 2:
- The JSON is sent by my code to my code. In this case I could
do some optimization and expect the order. This would make
the parser faster.

For problem 1, this is what my interface looks like:

(I will publish all the source later)

struct X
{
    int id;
    bool bFlag;
    char name[10];
};


void ExchangeX(struct Data* data, void* p)
{
    struct X* x = (struct X*)p;
    data->ExchangeInt(data, "id", &x->id);
    data->ExchangeText(data, "name", x->name, 10);
    data->ExchangeBool(data, "flag", &x->bFlag);
}

void Test()
{
    struct X x;
    JsonLoad("test.json", ExchangeX, &x);
}

Rick C. Hodgin

May 17, 2019, 2:16:32 PM
On Friday, May 17, 2019 at 1:43:47 PM UTC-4, Bart wrote:
> On 17/05/2019 18:21, Rick C. Hodgin wrote:
> > Might make a good comp.lang.c coding challenge.
>
> Maybe, but the whole thing is rather open-ended.

Let's define it.

1) The user provides a structure. Format must be per line:
field_name, type[, optional]
2) Must be able to receive input data like this:
field_name = data
3) Must generate output JSON.
4) Must be able to parse JSON input into:
field_name = data

As a baseline. Edits / revisions are welcome.

> The OP added a few extra bits in as well, like /adding/ optional flag
> members to the struct, with the implication that the struct definition
> now additionally has to be part of the generated code. (Itself full of
> problems, if the struct has dependencies of a 1000s of lines of C
> headers; you don't want to drag in all that.)

I think if it produced an internal structure that could be
queried by member, it wouldn't need to be in a JSON string
format or field_name = data format.

As things are needed, the API call to the "get_type()" and
"get_value()" functions could be given, which return a void*
to the indicated type, which is discriminated against by an
input int* variable that is populated with one of the #define
values (one for each type).

I welcome proposals / revisions. Might be a fun coding challenge.

Bart

May 17, 2019, 2:43:42 PM
On 17/05/2019 19:16, Rick C. Hodgin wrote:
> On Friday, May 17, 2019 at 1:43:47 PM UTC-4, Bart wrote:
>> On 17/05/2019 18:21, Rick C. Hodgin wrote:
>>> Might make a good comp.lang.c coding challenge.
>>
>> Maybe, but the whole thing is rather open-ended.
>
> Let's define it.
>
> 1) The user provides a structure. Format must be per line:
> field_name, type[, optional]

This part is not clear. I think he prefers this structure to be defined
as normal C code. If that's too hard, then an alternative can be used.

But I don't think what you have is sufficient:

What are the possibilities for 'type'? The OP's example included
'char[32]' and 'int[8]' and 'unsigned int'. Just how elaborate can these
get? And what size is 'int' anyway?

How do you deal with nested structs?

> 2) Must be able to receive input data like this:
> field_name = data

I don't remember this was part of the requirement. Is this C code?


> 3) Must generate output JSON.

I don't remember this part either! (If needed, there are likely to be
existing solutions.)

> 4) Must be able to parse JSON input into:
> field_name = data

That's not so simple either. You can't do this with a char[32] type or
int[8]. And that still requires 'data' to be captured in a suitable
format, which may be unpacked (so char[3] taking 3 bytes may be
represented by a list of 3 ints taking 12 bytes).

Additionally, you can't expect the data to be present in the same order
as in the struct. This is what makes a hard-coded parser very much
harder. You can't do:

readinteger("id", S->id, 4);
readstring("name", &S->name, 32);
readarray("array", &S->array, 8, 4);

You have to tentatively read a field tag, and try and match it with one
of the fields of the struct. It has to be more table-driven, but in that
case, it is more the data that is hard-coded rather than the parsing code.


Rick C. Hodgin

May 17, 2019, 2:57:51 PM
On Friday, May 17, 2019 at 2:43:42 PM UTC-4, Bart wrote:
> On 17/05/2019 19:16, Rick C. Hodgin wrote:
> > On Friday, May 17, 2019 at 1:43:47 PM UTC-4, Bart wrote:
> >> On 17/05/2019 18:21, Rick C. Hodgin wrote:
> >>> Might make a good comp.lang.c coding challenge.
> >>
> >> Maybe, but the whole thing is rather open-ended.
> >
> > Let's define it.
> >
> > 1) The user provides a structure. Format must be per line:
> > field_name, type[, optional]
>
> This part is not clear. I think he prefers this structure to be defined
> as normal C code. If that's too hard, then an alternative can be used.
>
> But I don't think what you have is sufficient:
>
> What are the possibilities for 'type'? The OP's example included
> 'char[32]' and 'int[8]' and 'unsigned int'. Just how elaborate can these
> get? And what size is 'int' anyway?

In JSON, you're always dealing with text versions of the data,
so you translate into text. char[32] can contain "rick" or
"bart" only. It doesn't need to contain the full 32, but if
it is given "Bartholmew Montgomery Scott, Esquire" (36 chars),
then it truncates it to 32.

> How do you deal with nested structs?

type = JSON.

> > 2) Must be able to receive input data like this:
> > field_name = data
>
> I don't remember this was part of the requirement. Is this C code?

No. This would just be a way to receive input to encode into
the JSON string.

> > 3) Must generate output JSON.
>
> I don't remember this part either! (If needed, there are likely to be
> existing solutions.)

His goals were to be interoperable with other systems, so it
must be able to take local data, build JSON, and be able to
take remote JSON, and build local data.

> > 4) Must be able to parse JSON input into:
> > field_name = data
>
> That's not so simple either. You can't do this with a char[32] type or
> int[8].

char = "rick"
array = [1, 2, 3, 4, 5, 6, 7, 8]

> And that still requires 'data' to be captured in a suitable
> format, which may be unpacked (so char[3] taking 3 bytes may be
> represented by a list of 3 ints taking 12 bytes).

It's always in text form in JSON. sprintf(output, "%d", value)
for each and you convert int values to text.
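The "sprintf each field" idea for the emit direction could be sketched like this (the emit_json name and the bounds-checked snprintf are illustrative additions, not from any post in the thread):

```c
#include <stdio.h>

struct myStruct {
    unsigned int id;
    char name[32];
};

/* Emit a struct as JSON text; returns the number of characters
   that were (or would have been) written, per snprintf, so the
   caller can detect a too-small buffer. */
static int emit_json(char *out, size_t cap, const struct myStruct *s)
{
    return snprintf(out, cap, "{ \"id\": %u, \"name\": \"%s\" }",
                    s->id, s->name);
}
```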

> Additionally, you can't expect the date to be present in the same order
> as in the struct. This is what makes a hard-coded parser very much
> harder. You can't do:
>
> readinteger("id", S->id, 4);
> readstring("name", &S->name,32);
> readarray("array", &S->array,8,4);
>
> You have to tentatively read a field tag, and try and match it with one
> of the fields of the struct. It has to be more table-driven, but in that
> case, it is more the data that is hard-coded rather than the parsing code.

The requirements of JSON are all that's required on the inside.
Does JSON have a required order? I think it's more by protocol
or use cases that it would need to be in a particular order. At
the same level, each item should be made visible to the caller.

I would generically parse an input structure defining the needs
of the JSON output, but be able to receive input in any order.
Only when the output is required to be in that order would it
then make it be that way.

I would probably also set a "populated" flag to indicate if one
of the fields is populated or not, and if not do not even include
it in the output ... unless it's flagged required, or possibly if
it's not flagged optional.

I think it would get us close. But, we just need to look at the
needs of JSON, and then interpolate between C's needs, and JSON's
needs.

--
Rick C. Hodgin

Bart

May 17, 2019, 3:11:57 PM
On 17/05/2019 19:57, Rick C. Hodgin wrote:
> On Friday, May 17, 2019 at 2:43:42 PM UTC-4, Bart wrote:
>> On 17/05/2019 19:16, Rick C. Hodgin wrote:
>>> On Friday, May 17, 2019 at 1:43:47 PM UTC-4, Bart wrote:
>>>> On 17/05/2019 18:21, Rick C. Hodgin wrote:
>>>>> Might make a good comp.lang.c coding challenge.
>>>>
>>>> Maybe, but the whole thing is rather open-ended.
>>>
>>> Let's define it.
>>>
>>> 1) The user provides a structure. Format must be per line:
>>> field_name, type[, optional]
>>
>> This part is not clear. I think he prefers this structure to be defined
>> as normal C code. If that's too hard, then an alternative can be used.
>>
>> But I don't think what you have is sufficient:
>>
>> What are the possibilities for 'type'? The OP's example included
>> 'char[32]' and 'int[8]' and 'unsigned int'. Just how elaborate can these
>> get? And what size is 'int' anyway?
>
> In JSON, you're always dealing with text versions of the data,
> so you translate into text

/TO/ text; why? We're trying to parse existing text!


> char[32] can contain "rick" or
> "bart" only. It doesn't need to contain the full 32, but if
> it is given "Bartholmew Montgomery Scott, Esquire" (36 chars),
> then it truncates it to 32.

That doesn't answer the question. How complicated can these types get?
Can there be arrays of arrays of structs containing further arrays? The
answer determines the syntax you might have to devise to represent the
layout of the C struct.

>
>> How do you deal with nested structs?
>
> type = JSON.

And the layout of that struct is?


>> That's not so simple either. You can't do this with a char[32] type or
>> int[8].
>
> char = "rick"
> array = [1, 2, 3, 4, 5, 6, 7, 8]

You've lost me. We are still talking about doing this in C, yes? Then
that is not valid C syntax. Neither is it clear what the LHS is supposed
to represent here.


>> And that still requires 'data' to be captured in a suitable
>> format, which may be unpacked (so char[3] taking 3 bytes may be
>> represented by a list of 3 ints taking 12 bytes).
>
> It's always in text form in JSON. sprintf(output, "%d", value)
> for each and you convert int values to text.

I don't understand. This is generating text from binary, but the
requirement is to convert JSON text /to/ binary. Specifically into the
fields of a C struct instance.

I think we must be talking at cross-purposes.

I'm fairly sure the OP wants to be able to convert JSON text to C binary
data (look at the subject line).

There is no C to JSON, except that another requirement is a tool to
automatically convert a C struct definition into a parser that can do
that conversion (JSON to binary).

Rick C. Hodgin

May 17, 2019, 3:23:07 PM
On Friday, May 17, 2019 at 3:11:57 PM UTC-4, Bart wrote:
> I don't understand. This is generating text from binary, but the
> requirement is to convert JSON text /to/ binary. Specifically into the
> fields of a C struct instance.

I view that aspect of the translation as another layer, because
translating what's coming in from a JSON string, or from data with
commensurate JSON attributes, is just translation into a parsable
form. What comes in can be ranged into auto-types (an incoming
integer that will fit into a 32-bit signed integer range can be
stored as a 32-bit signed integer; those which are bigger go into
something bigger). For text, store it as a variable-length char
array, sized by what was parsed from the incoming string.

That data, which is then in known minimum-required forms, can be
translated into a C structure form, and the C source code to
process it can be generated.

This is basically what SOAP does, by the way. You take an incoming
WSDL file and parse it into all of the requisite data structs for
your target language.

> I think we must be talking at cross-purposes.

It's just a different take on how it is processed, and in what
forms. And, my goals in this part of the thread are more for
the coding contest, and not so much to directly address the
needs of the OP.

> I'm fairly sure the OP wants to be able to convert JSON text to C binary
> data (look at the subject line).
>
> There is no C to JSON, except that another requirement is a tool to
> automatically convert a C struct definition into a parser that can do
> that conversion (JSON to binary).

He mentioned in one of his posts that he would be communicating
with existing systems, which I assumed was two-way. Maybe it's
not. Maybe the OP's app is only receiving input.

--
Rick C. Hodgin

Ben Bacarisse

May 17, 2019, 4:33:50 PM
Bart <b...@freeuk.com> writes:

> On 17/05/2019 19:16, Rick C. Hodgin wrote:
>> On Friday, May 17, 2019 at 1:43:47 PM UTC-4, Bart wrote:
>>> On 17/05/2019 18:21, Rick C. Hodgin wrote:
>>>> Might make a good comp.lang.c coding challenge.
>>>
>>> Maybe, but the whole thing is rather open-ended.
>>
>> Let's define it.
>>
>> 1) The user provides a structure. Format must be per line:
>> field_name, type[, optional]
>
> This part is not clear. I think he prefers this structure to be
> defined as normal C code. If that's too hard, then an alternative can
> be used.
>
> But I don't think what you have is sufficient:
>
> What are the possibilities for 'type'?

My favourite is functions (or function pointers in C). The OP
presumably has no need for these since JSON can't represent them. They
make sense (and are indeed very useful) in remote procedure calls.

> The OP's example included
> 'char[32]' and 'int[8]' and 'unsigned int'. Just how elaborate can
> these get? And what size is 'int' anyway?

I'd be surprised if the size really matters. You know what type is
needed to represent a number in the "input" (it's a double precision
IEEE floating point number) and the generated code will simply convert
that to whatever numeric type the corresponding struct member has. The
specification should say what happens in anomalous situations like a
negative source and an unsigned target or a fractional source and an
integral target and so on, but the "size" of int is really a matter of
whether the input should be checked for out of range values. That may
not even be part of the requirements.

> How do you deal with nested structs?

I'm getting a strong sense of déjà vu here. I did this (twice, in fact)
in the early 80s for a remote procedure call mechanism, first in C and
then again in Common Lisp (way more fun!). There is a huge literature
about "argument marshalling" for RPC and there are almost certainly
JSON-based RPC mechanisms out there. That might help in a search for
existing code.

>> 2) Must be able to receive input data like this:
>> field_name = data
>
> I don't remember this was part of the requirement. Is this C code?
>
>> 3) Must generate output JSON.
>
> I don't remember this part either!

Nor I (to both of the above).

> You have to tentatively read a field tag, and try and match it with
> one of the fields of the struct. It has to be more table-driven, but
> in that case, it is more the data that is hard-coded rather than the
> parsing code.

There's going to be a spectrum from a solution where bespoke C functions
get generated for every case, to one where a generic parser uses a
table of auxiliary data to assign values to struct members.

--
Ben.

Bart

May 17, 2019, 5:34:00 PM
On 17/05/2019 17:56, Bart wrote:
> On 17/05/2019 15:37, pozz wrote:

>> Consider a big JSON file stored on an external I2C or SPI memory (not
>> on the same bus of the processor). The application could read
>> 128-bytes parts and feed the parser, saving a lot of memory.

> I think I'll have a go at doing this, although I don't know if I'll come
> up with anything that will be of any use to you.

I feel that doing a fully working solution in C (to convert C code
declaring a struct, to C source that can parse that struct) would be too
much work to complete. But I will continue to play with it.

At least I now have my own JSON parser
(https://github.com/sal55/qx/blob/master/readjson.b, not in C, which
converts a json file to a 'dict' dynamic data structure)

The next step would have been to link the recursive scanning of that
input file, with a description** of the C struct layout, which could
then be filled in as it scanned.

(** A table where each entry describes one field of the struct, and
contains: field name; generic type (integer, string, array, further
struct), size, offset, array/string length, and element size for the
array. Not sure how nested structs would be handled.)

The routine for scanning JSON would be given a pointer to this table,
and a pointer to a receiving struct. This would implement the 'dynamic'
version of a struct parser, mentioned in an earlier post, rather than a
dedicated one. The struct definition itself does not need to be visible
to such a parser; only the tables are needed.
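The "pointer to table, pointer to receiving struct" idea can be sketched as follows (the json_field layout and store_int helper are illustrative; a real parser would dispatch on a type field as well):

```c
#include <stddef.h>

/* One table entry per struct field: where it lives and how big
   it is, so the parser never needs the struct definition itself. */
struct json_field {
    const char *name;
    size_t offset;   /* offsetof() into the destination struct */
    size_t size;     /* sizeof() the member                    */
};

/* Store a parsed integer into the right slot of an arbitrary
   struct, using only the table entry and a base pointer. */
static void store_int(void *base, const struct json_field *f, int value)
{
    *(int *)((char *)base + f->offset) = value;
}
```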

The next step after /that/ would have been to implement it all in C
(ugh..), and then the next task would have been to hack a C compiler to
be able to generate the tables needed. (And then that compiler - not in
C - would need to be translated to C, something I haven't tried recently.)

So some ideas anyway if you (the OP) are thinking of a DIY solution.

pozz

May 17, 2019, 5:44:07 PM
Il 17/05/2019 18:56, Bart ha scritto:
> On 17/05/2019 15:37, pozz wrote:
>> Il 17/05/2019 16:06, Bart ha scritto:
>
>> Too difficult. It would be better to describe the C struct in a
>> different simpler language and let the tool to generates the C struct
>> definition and the parser code.
>>
>>
>>> The output can be a specific parser function, but that also would
>>> rely on a more general library that provides, among other things, the
>>> tokenising for JSON text data.
>>
>> Yes, of course.
>>
>>
>>> It'd make quite an interesting project actually, although my
>>> experience (based on XML) suggests that a deceptively simple format
>>> like JSON isn't quite so simple in practice..
>>
>> I agree. It isn't so simple in practice.
>>
>> Another good thing is incremental parsing. Many time the JSON string
>> isn't completely in memory, but it is read in parts. It would be nice
>> to call repeatedly parsing function with new parts until the JSON
>> string ends.
>>
>> Consider a big JSON file stored on an external I2C or SPI memory (not
>> on the same bus of the processor). The application could read
>> 128-bytes parts and feed the parser, saving a lot of memory.
>
> I think I'll have a go at doing this, although I don't know if I'll come
> up with anything that will be of any use to you.

Thank you, maybe we can arrange a community github project and anyone
can help.


> But some questions about the spec:
>
> * Tags in JSON can be in any order?

Yes, tags are keys. The JSON standard defines a grammar, but it doesn't
define how to specify a schema for a certain message.

{ "name": "Bart", "language": "C" }

and

{ "language": "C", "name": "Bart" }

are perfectly identical.


> * Unknown tags to be ignored?

I think that's the better choice. Imagine that the generator (another
application) will add some data in the future. We don't want to break
old receivers.


> * Missing tags to be ignored? (Allowing both of these I think allows any
> JSON object to match any struct)

Do you mean tags that are present in the C struct and not present in the
JSON text?

I think we have two possibilities.
The tag can be mandatory or optional. If the key is mandatory, the
parser should exit with an error (JSON message doesn't match the C struct).
If the tag is optional, we can add a bool "meta-member" in the final C
struct that shows if the tag was present in the message.

struct {
char name[32];
bool has_language;
char language[32];
} myStruct;


> * No char* fields? (These might need heap memory)

Do you mean a generic binary stream? I don't think so.


> * Float fields needed?

I'm not interested in float fields in my current project. Maybe you can
"disable" float feature with #define.


> * Will arrays fields be only 1D, or can be 2D and 3D?

{ "coord": [ [1, 2], [3, 4], [5, 6] ] }

This is a 2D array and it is perfectly legal in JSON grammar.


Consider that the JSON specification is very generic. You can think of
esoteric, perfectly legal JSON messages that can't be directly
deserialized into a C struct.

For example, duplicated tags:

{ "name": "Bart", "name": "John" }


Or non-homogeneous arrays.

{
"array": [
"string", 1, false, {
"tag1": "value1", "tag2": "value2"
}
]
}

I think we can ignore those possibilities.

pozz

May 17, 2019, 6:12:06 PM
Il 17/05/2019 19:43, Bart ha scritto:
> On 17/05/2019 18:21, Rick C. Hodgin wrote:
>> On Friday, May 17, 2019 at 12:56:33 PM UTC-4, Bart wrote:
>>> I think I'll have a go at doing this, although I don't know if I'll come
>>> up with anything that will be of any use to you.
>>>
>>> But some questions about the spec:
>>>
>>> * Tags in JSON can be in any order?
>>>
>>> * Unknown tags to be ignored?
>>>
>>> * Missing tags to be ignored? (Allowing both of these I think allows any
>>> JSON object to match any struct)
>>>
>>> * No char* fields? (These might need heap memory)
>>>
>>> * Float fields needed?
>>>
>>> * Will arrays fields be only 1D, or can be 2D and 3D?
>>
>> Might make a good comp.lang.c coding challenge.
>
> Maybe, but the whole thing is rather open-ended. Even forgetting about
> converting a C struct to C parsing code, just the parsing part, even for
> a struct format that is hard-coded, is full of choices that need to be
> made.
>
> The specifications are not clear enough (the above is just part of it).
>
> Having data come in in any order makes it entirely different: reading an
> expected string or number is easier than reading an known value of half
> a dozen type categories. What about underflow in array elements? Too
> many array elements?

You define a maximum number of elements for an array in the C struct
definition. If there are fewer elements in the JSON message, we can use
a new meta-member for the size. If there are more elements, the extra
elements are simply ignored.

struct {
int array[MAX_ARRAY_SIZE];
size_t array_size;

int array2[MAX_ARRAY2_SIZE];
size_t array2_size;
} myStruct;


> Mixed types within an array? (All are possible in
> JSON data.)

No, I wouldn't support this.


> How to distinguish between 'char[32]' which can be initialised with a
> string, and one which is an array of integers, or allow either?

If we decide to define the C struct with a newer language (not pure C),
we can define the exact type of the member (string or array of chars).

Another possibility is to use char[] for strings and (unsigned) int for
integers (no short or long).


> It is rather a can of worms. (I'm doing it in a soft language first to
> see what is possible.)
>
> The OP added a few extra bits in as well, like /adding/ optional flag
> members to the struct, with the implication that the struct definition
> now additionally has to be part of the generated code.

Considering it is very difficult to read the definition of a C struct in
pure C code (think of comments, preprocessor macros...), I think we
should start with a new struct definition language (.cj). The output of
the generator will be .c and .h files, the latter with the C struct
definition.

> (Itself full of
> problems, if the struct has dependencies of a 1000s of lines of C
> headers; you don't want to drag in all that.)
>
> (The way it's coming out, the struct definition doesn't need to be
> visible in the generated parser; it just needs tables of names, types,
> offsets and sizes.)

Take a look at jsmn[1]. The parser generates an array of tokens. Each
token has a type, a name, the offset in the message and the size of the
token.

However, after parsing, it's difficult to search for a precise tag and
retrieve its value to save into the struct.


Another nice feature that some JSON parsers have and that can be very
useful in restricted environments, is the possibility to parse a message
incrementally.
Suppose you have a 4kB JSON message stored in an external memory module.
The processor can't read directly from the external memory, which is on
a different bus (SPI, I2C).

The application could read part of the message, maybe 256 bytes, and
feed the parser for each part, until the end.


[1] https://github.com/zserge/jsmn
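An incremental ("push") interface along these lines might look like the sketch below. The names and the state machine are hypothetical, not jsmn's API; this skeleton only tracks nesting to detect the end of the top-level value, where a real tokenizer would also emit key/value events:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical push-style parser state: the caller feeds chunks
   of whatever size the transport allows (e.g. 256 bytes at a
   time read over SPI or I2C). */
struct json_stream {
    int depth;       /* current {...}/[...] nesting  */
    bool in_string;  /* inside a quoted string?      */
    bool escaped;    /* previous char was a '\\'     */
    bool done;       /* top-level value is complete  */
};

/* Feed one chunk; call repeatedly until s->done becomes true. */
static void json_feed(struct json_stream *s, const char *buf, size_t len)
{
    for (size_t i = 0; i < len && !s->done; i++) {
        char c = buf[i];
        if (s->in_string) {          /* braces inside strings don't count */
            if (s->escaped)          s->escaped = false;
            else if (c == '\\')      s->escaped = true;
            else if (c == '"')       s->in_string = false;
        } else if (c == '"') {
            s->in_string = true;
        } else if (c == '{' || c == '[') {
            s->depth++;
        } else if (c == '}' || c == ']') {
            if (--s->depth == 0)
                s->done = true;
        }
    }
}
```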

pozz

May 17, 2019, 6:17:31 PM
Il 17/05/2019 23:33, Bart ha scritto:
> On 17/05/2019 17:56, Bart wrote:
>> On 17/05/2019 15:37, pozz wrote:
>
>>> Consider a big JSON file stored on an external I2C or SPI memory (not
>>> on the same bus of the processor). The application could read
>>> 128-bytes parts and feed the parser, saving a lot of memory.
>
>> I think I'll have a go at doing this, although I don't know if I'll
>> come up with anything that will be of any use to you.
>
> I feel that doing a fully working solution in C (to convert C code
> declaring a struct, to C source that can parse that struct) would be too
> much work to complete. But I will continue to play with it.
>
> At least I now have my own JSON parser
> (https://github.com/sal55/qx/blob/master/readjson.b, not in C, which
> converts a json file to a 'dict' dynamic data structure)

Are you sure starting from a JSON file is a good choice? I was thinking
of starting from the C struct definition (in pure C code or something else).

Otherwise you should have a JSON file with populated keys and values.


> The next step would have been to link the recursive scanning of that
> input file, with a description** of the C struct layout, which could
> then be filled in as it scanned.
>
> (** A table where each entry describes one field of the struct, and
> contains: field name; generic type (integer, string, array, further
> struct), size, offset, array/string length, and element size for the
> array. Not sure how nested structs would be handled.)
>
> The routine for scanning JSON would be given a pointer to this table,
> and a pointer to a receiving struct. This would implement the 'dynamic'
> version of a struct parser, mentioned in an earlier post, rather than a
> dedicated one. The struct definition itself does not need to be visible
> to such a parser; only the tables are needed.
>
> The next step after /that/ would have been to implement it all in C
> (ugh..), and then the next task would have been to hack a C compiler to
> be able to generate the tables needed. (And then that compiler - not in
> C - would need to be translated to C, something I haven't tried recently.)
>
> So some ideas anyway if you (the OP) are thinking of a DIY solution.

I'm sorry, it's very difficult for me.

Bart

May 17, 2019, 8:25:26 PM
On 17/05/2019 23:17, pozz wrote:
> Il 17/05/2019 23:33, Bart ha scritto:
>> On 17/05/2019 17:56, Bart wrote:
>>> On 17/05/2019 15:37, pozz wrote:
>>
>>>> Consider a big JSON file stored on an external I2C or SPI memory
>>>> (not on the same bus of the processor). The application could read
>>>> 128-bytes parts and feed the parser, saving a lot of memory.
>>
>>> I think I'll have a go at doing this, although I don't know if I'll
>>> come up with anything that will be of any use to you.
>>
>> I feel that doing a fully working solution in C (to convert C code
>> declaring a struct, to C source that can parse that struct) would be
>> too much work to complete. But I will continue to play with it.
>>
>> At least I now have my own JSON parser
>> (https://github.com/sal55/qx/blob/master/readjson.b, not in C, which
>> converts a json file to a 'dict' dynamic data structure)
>
> Are you sure starting from JSON file is a good choice? I was thinking to
> start from C struct definition (in pure C code or something else).
>
> Otherwise you should have a JSON file with populated keys and values.

Well, that seemed to work. I modified my test reader so that it would
fill in the values of a struct:

https://github.com/sal55/qx/blob/master/parsejson.b

(Dynamic language, not C, not Python.)

Here I've manually hard-coded the table for your example struct (lines
64 to 67). The struct definition does exist, or a version of the C one
(lines 19 to 23), but here it's just for testing - to more easily set up
a dummy destination of the right size, and to easily print the result.
Otherwise it's not needed.

The struct printed from this program using that table, as populated from
the JSON example in your OP:

(12345, John, (-3,7,8,0,0,0,0,0))

I won't take this further as there are lots of other complications (like
nested arrays, nested structs, optional fields and extra flag fields);
all sorts of other details I don't know about and which would come up,
and I don't know the overall context (how does it even decide which
struct a bit of incoming JSON will describe).

But it shows that a 'dynamic' parser - hard-coded program, but tables
somehow generated from C structs, could work, even in C. (A C version
would just have taken longer to try out.)


>> So some ideas anyway if you (the OP) are thinking of a DIY solution.
>
> I'm sorry, it's very difficult for me.

Maybe you should make it into a contest like someone suggested. People
here quite like writing code. But the task needs to be smaller and more
well-defined.

Ian Collins

May 17, 2019, 11:34:05 PM
On 17/05/2019 19:20, pozz wrote:
> I'd like to use JSON documents in my embedded project. I don't need full
> parsing capabilities, because JSON could be very complex in general.
>
> However I will use simple C structs that are equivalent to JSON
> messages, for example:
>
>
> struct myStruct {
> unsigned int id;
> char name[32];
> int array[8];
> };
>
> {
> "id": 12345,
> "name": "John",
> "array": [ -3, 7, 8 ]
> }
>


I can't see anywhere in your post where you specify the size of a string
field. Maybe

{
"id": 12345,
"name": {"string": 32, "value": "John"},
"array": [ -3, 7, 8 ]
}

?

If you are interested in JSON style meta-languages, have a look at CTF
(Used for Linux trace data amongst other things).

--
Ian.

Clifford Heath

May 18, 2019, 3:14:15 AM
On 17/5/19 5:20 pm, pozz wrote:
> I'd like to use JSON documents in my embedded project. I don't need full
> parsing capabilities, because JSON could be very complex in general.
>
> Do you know of something code generator?

Ok. Latecomer, but I read most of the thread. Introduction: I've been
writing C since 1979 and have a million lines of my code in production
in mission-critical software on tens of millions of enterprise
computers. Although these days my C is mostly C++, I use C++ mostly for
the features that make it "a better C".

Neither JSON nor C has a very strong type system. Neither is a good
specification language. You should choose a specification language that
does a good job of specifying your intent, and a compiler for it that
emits C that handles one or more packing syntaxes such as JSON (if you
must!)

The European Space Agency has followed the lead of the ISO networking
standards and built a compiler for ASN.1, which emits C code targeted at
static-memory embedded systems. The downside: I don't think it has a
JSON packing rule (the ASN.1 name for a transfer syntax). Other ASN.1
compilers do support JSON, but I don't know whether they target
static-memory or embedded systems.

The ESA-funded ASN1SCC is here: <https://github.com/ttsiodras/asn1scc>.

It supports BER (Basic encoding rules, a compact & fast binary format),
PER (Packed Encoding Rules, a bit-packed binary format that will be
slower but tighter), XER (XML Encoding Rules, you can imagine!), and
ACN, where you get to define the encoding yourself.

If you wish to serve the community as well as yourself, write JER, JSON
Encoding Rules and submit it.

A quick search finds a proposed standard:
<https://www.obj-sys.com/docs/JSONEncodingRules.pdf> and a tool that
analyses a JSON fragment and produces a proposed ASN.1 definition for
the shape of the fragment <https://asn1.io/json2asn/>

So there is existing work, and no doubt community support for more.
You could do a lot worse than start with ASN1SCC.

Clifford Heath.

Bart

May 18, 2019, 6:57:39 AM
On 18/05/2019 04:33, Ian Collins wrote:
> On 17/05/2019 19:20, pozz wrote:
>> I'd like to use JSON documents in my embedded project. I don't need full
>> parsing capabilities, because JSON could be very complex in general.
>>
>> However I will use simple C structs that are equivalent to JSON
>> messages, for example:
>>
>>
>> struct myStruct {
>>     unsigned int id;
>>     char name[32];
>>     int array[8];
>> };
>>
>> {
>>     "id": 12345,
>>     "name": "John",
>>     "array": [ -3, 7, 8 ]
>> }
>>
>
>
> I can't see anywhere in your post where you specify the size of a string
> field.

It's specified in the C struct (assuming char[32] denotes a string), and
here it specifies an upper limit, to include a terminator.


> Maybe
>
> {
>    "id": 12345,
>    "name": {"string": 32, "value": "John"},
>    "array": [ -3, 7, 8 ]
> }

It doesn't really belong in the JSON data, which is more type- and
language-independent (there is no dimension for 'array' either). What
purpose would it serve? What does it mean? The data is a string with a
length that is the length of the string.

The same JSON item could be read by a multitudinous set of programs and
languages all with a different destination size, even within the same
program, or maybe it doesn't matter.

But now a simple string is a complex object, with an ambiguous meaning
(is this one string item, or a nested struct with an int field called
"string", and a string field called "value"?).

pozz

May 18, 2019, 10:04:07 AM
ASN.1 is interesting. I will look at it more deeply.

Just a simple question. Are optional fields possible in a message?


Jorgen Grahn

May 18, 2019, 2:18:13 PM
On Fri, 2019-05-17, Thiago Adams wrote:
> On Friday, May 17, 2019 at 9:11:05 AM UTC-3, pozz wrote:
>> Il 17/05/2019 13:40, Thiago Adams ha scritto:
>> > On Friday, May 17, 2019 at 8:23:02 AM UTC-3, Jorgen Grahn wrote:
>> > ...
>> >> Sorry I don't have the answer. I don't even /like/ JSON (especially
>> >> since I learned you can't have comments in it, so it's useless for
>> >> config).
>> >
>> > Just ignore comments in your parser.
>> >
>>
>> I think Jorgen thinks about JSON that could be parsed by standard JSON
>> parser that don't ignore comments and exit with error.

Yes; thanks.

> Yes. But more people in the world thinks comments are useful
> stuff and they are adding to their json parsers as well. (libs)
> It is not standards but it a common feature added into
> json. So I believe this is not a problem.

Not good enough for me, I'm afraid. I don't like depending on
informal extensions, and I don't like telling people "here's a JSON
file, but remember to use a special parser to read it, because it's
not /really/ a JSON file after all".

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Jorgen Grahn

May 18, 2019, 2:22:29 PM
On Fri, 2019-05-17, pozz wrote:
> Il 17/05/2019 13:22, Jorgen Grahn ha scritto:
>> On Fri, 2019-05-17, pozz wrote:
>>> Il 17/05/2019 10:48, Jorgen Grahn ha scritto:
>>>> On Fri, 2019-05-17, pozz wrote:
>>>>> I'd like to use JSON documents in my embedded project. I don't need full
>>>>> parsing capabilities, because JSON could be very complex in general.
>>>>
>>>> Nitpicking:
>>>>
>>>> Doesn't "JSON could be very complex in general" imply that you /do/
>>>> need full parsing capabilities, rather than the opposite?
>>>
>>> The application will receive messages created by me, so I have full
>>> control of "JSON shape". I will never create messages that are too
>>> nested, for example.
>>>
>>>> How embedded is the project?
>>>
>>> NXP LPC1768, 512kB Flash memory, 64kB RAM memory (that is almost full).
>>> I can't use dynamic memory.
>>>
>>>> I'm asking because lots of embedded
>>>> systems can easily run a JSON parser these days.
>>>
>>> Unfortunately I'm not using embedded Linux with big memories.
>>
>> Ah, then the question makes perfect sense.
>>
>> Sorry I don't have the answer. I don't even /like/ JSON (especially
>> since I learned you can't have comments in it, so it's useless for
>> config).
>>
>> Best would be if you could remove the requirement to use JSON. This
>> embedded thing clearly has more important tasks better suited to it.
>
> The device talks to AWS, mobile app, web app... and so on. In this
> world, JSON is almost a standard.

But in the world of 64kB RAM it's not, thus the problem.

Can you put the device behind some kind of frontend?

Clifford Heath

May 18, 2019, 7:12:40 PM
Yes. And pretty-much everything else you might need in a static type
system; sets of alternates with a discriminator (discriminated unions),
arbitrary nesting of arrays of structures of arrays of structures,
strings of many and various character sets and encodings, enumerations,
numbers limited by range not storage size, etc.

Not all ASN.1 implementations are equal. This one is targeted at systems
that use no dynamic memory allocation, which is truly an excellent thing
- but you probably can't do JSON like that. It's used for the majority
of inter-module communications on ESA space missions, so it's pretty
well tested and reliable too.

The point is that the ASN.1 definitions are focussed on defining the
values that can be meaningfully transferred, not the possible values of
a set of representations. It's not fully powerful, but it's a lot better
than most programming languages.

I plan to emit ASN.1 from my Constellation Query Language tools at some
stage this year, see <http://dataconstellation.com/ActiveFacts/>. CQL
has a complete first-order logic constraint language, so it is even more
powerful than ASN.1; and every feature (definition, constraint) is also
expressible in controlled natural language and *compilable* from that
language.

Clifford Heath.

Niklas Holsti

May 19, 2019, 10:13:11 AM
On 19-05-19 02:12 , Clifford Heath wrote:
> On 19/5/19 12:03 am, pozz wrote:
>> On 18/05/2019 09:14, Clifford Heath wrote:
>>> ...
>>> The European Space Agency has followed the lead of the ISO networking
>>> standards and built a compiler for ASN.1, which emits C code targeted
>>> at static-memory embedded systems.

(By the way, it emits Ada code too.)

[snip]

>>> The ESA-funded ASN1CC is here: <https://github.com/ttsiodras/asn1scc>.
>>>
> [snip]
>
> This one is [snip] used for the majority
> of inter-module communications on ESA space missions,

Are you sure about that usage? I have been implementing on-board SW for
ESA missions since 1995 or so, and I have never come across a system
using ASN.1 or the ASN1SCC tool. There may be such, and there may be
more in the future, but I doubt they can be in the majority now.

Perhaps you are confusing the ASN1SCC tool with the specific
packet-based protocol (specific grammar, in other words) that is used by
the majority of ESA missions now: the Packet Utilization Standard (PUS)?
As I understand it, the ASN1SCC tool was implemented fairly recently to
help with future SW implementations of PUS and similar packet-structure
standards.

Of course this point is not very important for the OP, but it would
interest me greatly if you can point to some ESA missions that use ASN1SCC.

(More off-topic: several years ago I heard a very negative opinion about
the ASN.1 metalanguage from a senior colleague who was an expert in
formal grammars and meta-compilers. He felt that the committee that
defined ASN.1 had done so on an "amateur" level without properly using
the metalanguage technology already well known in the computer science
domain.)

(Even more off topic: IMO the intrusion into ESA of ITU working methods
for standards has generally resulted in a drastic decrease in the
usability of the ESA/ECSS standard documents (although not in the
standards themselves). The volume of text and the number of identified
requirements has increased hugely for purely formal reasons, without a
corresponding increase in the scope of the standards. This applies in
particular to the PUS, when one compares the original standard (PUS A)
with the current one (PUS C).)

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .

Clifford Heath

unread,
May 19, 2019, 7:25:39 PM
to
On 20/5/19 12:13 am, Niklas Holsti wrote:
> On 19-05-19 02:12 , Clifford Heath wrote:
>> On 19/5/19 12:03 am, pozz wrote:
>>> On 18/05/2019 09:14, Clifford Heath wrote:
>>>> ...
>>>> The European Space Agency has followed the lead of the ISO networking
>>>> standards and built a compiler for ASN.1, which emits C code targeted
>>>> at static-memory embedded systems.
> (By the way, it emits Ada code too.)

Yes, but I didn't think c.l.c would care to know that :P

>  [snip]
>
>>>> The ESA-funded ASN1CC is here: <https://github.com/ttsiodras/asn1scc>.
>>>>
>>  [snip]
> >
>> This one is [snip] used for the majority
>> of inter-module communications on ESA space missions,
>
> Are you sure about that usage?
> Perhaps you are confusing the ASN1SCC tool with the specific
> packet-based protocol (specific grammar, in other words) that is used by
> the majority of ESA missions now: the Packet Utilization Standard (PUS)?

You're right, I'm referring to the PUS standard. ASN1CC is one of the
approaches to implementing it. Of course there are still many competing
technologies in play. My contact is Serge Valera, who's been managing
telecommand & telemetry (monitoring and control) standards ICDs
(Interface Control Documents) and the verification of the
implementations of the ICDs for over 30 years. He is the main author of
the "ECSS-E-ST-70-41C - Telemetry and telecommand packet utilization"
which defines the overarching standard for definition of these
protocols. The bulk of this standard was machine-generated from a data
model created as an Object Role Model (ORM) in the NORMA tool - about
550 pages of text out of the total 650-ish in the document. That ORM
model was created by Serge Valera.

https://ecss.nl/standard/ecss-e-st-70-41c-space-engineering-telemetry-and-telecommand-packet-utilization-15-april-2016/

Serge is also the main guy behind this, which is also ORM-inspired:

https://ecss.nl/hbstms/ecss-e-tm-10-23a-space-system-data-repository/

> Of course this point is not very important for the OP, but it would
> interest me greatly if you can point to some ESA missions that use ASN1SCC.

I would need to contact Serge for that information. I don't have many
details on specific missions.

> (More off-topic: several years ago I heard a very negative opinion about
> the ASN.1 metalanguage from a senior colleague who was an expert in
> formal grammars and meta-compilers. He felt that the committee that
> defined ASN.1 had done so on an "amateur" level without properly using
> the metalanguage technology already well known in the computer science
> domain.)

The "right" way to do this is with a fact-based language like ORM (of
which my CQL is a plain-text implementation; ORM is graphical but we
share the underlying logic). ASN.1 is showing its age, but is solid and
the tools are good for implementors; ORM tools are still rather thin on
the ground. I'm curious to know what your colleague would recommend instead?

> (Even more off topic: IMO the intrusion into ESA of ITU working methods
> for standards has generally resulted in a drastic decrease in the
> usability of the ESA/ECSS standard documents (although not in the
> standards themselves). The volume of text and the number of identified
> requirements has increased hugely for purely formal reasons, without a
> corresponding increase in the scope of the standards. This applies in
> particular to the PUS, when one compares the original standard (PUS A)
> with the current one (PUS C).)

Interesting point. The ESA process requires carefully-constructed
English, and that is very verbose compared to the ORM model that
generated PUS (which I have, BTW). Of course it would be much better if
the underlying ORM model could be standardised, but again, the tool
maturity is not there yet. That's why I'm working to bring CQL up to the
required standard. Feel free to contact me privately
(@dataconstellation.com) if you have further input to offer.

Clifford Heath, cjh@...

pozz

unread,
Jun 5, 2019, 10:44:10 AM
to
On 18/05/2019 09:14, Clifford Heath wrote:
I eventually took a look at asn1scc and found a big limitation: it
doesn't support the extensibility feature of ASN.1.

Do you really use fixed messages? Don't you ever change a message
definition by adding some new variables in a future release?

Clifford Heath

unread,
Jun 5, 2019, 7:27:43 PM
to
The problem of extensibility affects more than just your ability to
lexically decode the message. The eXtensibility in XML is purely
lexical, it doesn't help at all with semantics.

When new content is added to a message, there is no certainty that the
previous items' meanings have not changed. Simply ignoring the new
content is no guarantee of correct behaviour. What if the extension is
critical to the logic required by the application, and your code doesn't
even know that?

I think it's better to manage protocol versioning explicitly. Put a
protocol version number in the handshake and take it from there.
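
A minimal C sketch of that idea (the struct, constants, and function
names here are all invented for illustration, not from any real
protocol):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical handshake header: a major/minor pair exchanged up front.
 * A bump in 'major' signals an incompatible change; 'minor' additions
 * are expected to stay backward compatible. */
struct proto_hello {
    uint8_t major;
    uint8_t minor;
};

#define MY_MAJOR 2u
#define MY_MINOR 1u

/* Accept the peer only if it speaks the same major version; remember
 * the lowest common minor so both sides know which optional fields
 * the other one understands. */
static bool handshake_accept(const struct proto_hello *peer,
                             uint8_t *common_minor)
{
    if (peer->major != MY_MAJOR)
        return false;
    *common_minor = peer->minor < MY_MINOR ? peer->minor : MY_MINOR;
    return true;
}
```

After that, both sides can condition every optional feature on the
negotiated minor, instead of guessing from message contents.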

However, some purposes for which ASN.1 is used (like X.509 public key
messages) define structures which have a variable-length element that
contains the nested encoding of another ASN.1 object. The outer envelope
says what it is, so the correct decoding can be applied. In the case of
X.509 certificate extensions, the wrapper also says whether an
understanding of the embedded content is critical or not. So there's one
example of how to handle it.

The same kind of approach is taken in TIFF - the acronym actually means
Tagged Image File Format.

There's a whole theory around versioning and interoperability and a
seven-level model for it. See
<https://en.wikipedia.org/wiki/Conceptual_interoperability>. I know
Andreas Tolk, the co-creator of this model.

Clifford Heath.

Clifford Heath

unread,
Jun 5, 2019, 7:48:25 PM
to
On 6/6/19 12:44 am, pozz wrote:
>> So there is existing work, and no doubt community support for more.
>> You could do a lot worse than start with ASN1CC.
>
> I eventually take a look at asn1scc and found a big limitation: it
> doesn't support extensibility feature of ASN.1.

One more comment. You mention (in other messages) that JSON is difficult
without using dynamic memory. ASN1CC is designed to avoid using dynamic
memory (for the same reason; embedded system constraints), which is why
it cannot implement arbitrary extensions. The best it could do is to
skip unknown extensions. But in PER, items are not tagged or
length-delimited, and skipping an extension requires at least a
knowledge of how big it is.

Clifford Heath.

pozz

unread,
Jun 6, 2019, 3:51:06 AM
to
On 06/06/2019 01:27, Clifford Heath wrote:
> On 6/6/19 12:44 am, pozz wrote:
>> On 18/05/2019 09:14, Clifford Heath wrote:
>>> So there is existing work, and no doubt community support for more.
>>> You could do a lot worse than start with ASN1CC.
>>
>> I eventually take a look at asn1scc and found a big limitation: it
>> doesn't support extensibility feature of ASN.1.
>>
>> Do you really use fixed messages? Don't you ever change a message
>> definition adding some new variables in a future release?
>
> The problem of extensibility affects more than just your ability to
> lexically decode the message. The eXtensibility in XML is purely
> lexical, it doesn't help at all with semantics.
>
> When new content is added to a message, there is no certainty that the
> previous items' meanings have not changed. Simply ignoring the new
> content is no guarantee of correct behaviour. What if the extension is
> critical to the logic required by the application, and your code doesn't
> even know that?

It's the author of the definitions who should decide whether an
extension breaks old parsers. In my experience, I have often added
data to a proprietarily encoded message at a later time without
breaking older receivers.

Just an example. I have a central unit that implements the logic and an
external HMI. They communicate over a serial RS485 connection. During
the first development stage, I defined all the messages and encoded
them with custom rules:

- this message has 1 byte, then 2 bytes, then a null-terminated string...
- this message has 3 bytes, ...

You get the point. When a new feature is added to the control unit, I
can append the related data to the end of the message. The receiver
decodes the data it knows and discards the rest of the message. So v1
receivers can continue decoding v2 messages. Of course, the new feature
isn't represented on the HMI (until someone decides to update the HMI
too), but v1 HMIs and v2 control units can coexist.
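
A minimal C sketch of that rule, assuming a hypothetical
length-prefixed frame (the field layout is invented for illustration,
not the real protocol):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical v1 view of a frame: [len][status][temp lo][temp hi].
 * 'len' counts the payload bytes that follow it, so a v2 sender can
 * append new fields and a v1 receiver still knows where the frame
 * ends. */
struct v1_msg {
    uint8_t status;
    int16_t temp;
};

/* Decode only the fields v1 knows about; any trailing bytes (v2
 * extensions) are silently ignored. Returns false only if the frame
 * is too short for the v1 fields. */
static bool v1_decode(const uint8_t *buf, size_t buflen,
                      struct v1_msg *out)
{
    if (buflen < 1)
        return false;
    uint8_t len = buf[0];
    if ((size_t)len + 1 > buflen || len < 3)
        return false;
    out->status = buf[1];
    out->temp = (int16_t)(buf[2] | (buf[3] << 8));
    /* bytes buf[4] .. buf[len] belong to a newer protocol: ignored */
    return true;
}
```

The length prefix is what makes this safe: the old receiver never has
to understand the extension, only to know where the frame ends.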


> I think it's better to manage protocol versioning explicitly. Put a
> protocol version number in the handshake and take it from there.

It's a possible solution, yes.


> However, some purposes for which ASN.1 is used (like X.509 public key
> messages) define structures which have a variable-length element that
> contains the nested encoding of another ASN.1 object. The outer envelope
> says what it is, so the correct decoding can be applied. In the case of
> X.509 certificate extensions, the wrapper also says whether an
> understanding of the embedded content is critical or not. So there's one
> example of how to handle it.

Could you give an example, please? I can't follow you.

pozz

unread,
Jun 6, 2019, 3:54:44 AM
to
On 06/06/2019 01:48, Clifford Heath wrote:
> On 6/6/19 12:44 am, pozz wrote:
>>> So there is existing work, and no doubt community support for more.
>>> You could do a lot worse than start with ASN1CC.
>>
>> I eventually take a look at asn1scc and found a big limitation: it
>> doesn't support extensibility feature of ASN.1.
>
> One more comment. You mention (in other messages) that JSON is difficult
> without using dynamic memory. ASN1CC is designed to avoid using dynamic
> memory (for the same reason; embedded system constraints), which is why
> it cannot implement arbitrary extensions. The best it could do is to
> skip unknown extensions.

Yes, often this behaviour is preferable to breaking the whole system.

> But in PER, items are not tagged or
> length-delimited, and skipping an extension requires at least a
> knowledge of how big it is.

Ok, PER can't be used to skip extended variables. However, I can use
BER. Why does asn1scc refuse to compile a definitions file with
extensions when BER is used?


pozz

unread,
Jun 6, 2019, 5:12:35 AM
to
Incredibly, it seems there isn't any ASN.1 PER codec for the
JavaScript/TypeScript (web-oriented) world. This limits the choice to
BER only :-(

Clifford Heath

unread,
Jun 6, 2019, 7:08:33 PM
to
On 6/6/19 5:50 pm, pozz wrote:
> On 06/06/2019 01:27, Clifford Heath wrote:
>> On 6/6/19 12:44 am, pozz wrote:
>>> On 18/05/2019 09:14, Clifford Heath wrote:
>>>> So there is existing work, and no doubt community support for more.
>>>> You could do a lot worse than start with ASN1CC.
>>>
>>> I eventually take a look at asn1scc and found a big limitation: it
>>> doesn't support extensibility feature of ASN.1.
>>>
>>> Do you really use fixed messages? Don't you ever change a message
>>> definition adding some new variables in a future release?
>>
>> The problem of extensibility affects more than just your ability to
>> lexically decode the message. The eXtensibility in XML is purely
>> lexical, it doesn't help at all with semantics.
>>
>> When new content is added to a message, there is no certainty that the
>> previous items' meanings have not changed. Simply ignoring the new
>> content is no guarantee of correct behaviour. What if the extension is
>> critical to the logic required by the application, and your code
>> doesn't even know that?
>
> It's the author of definitions that should decide if an extension breaks
> old parsers.

Again, you are confusing parsing (syntax) with purpose (semantics).
It's easy to define formats that allow extensions without
breaking parsing, but that gives you no confidence that there is not a
semantic error in doing that. JSON, XML, YAML, etc, all support
extensible parsing.

XML, when used with XSD (XML schemas), allows validation of the correct
structure of the base document, and also of one or more sets of
extension schemas that are merged in to the base document. This is still
syntax, but it means that anything added to the base document which is
not allowed in that base schema can be detected and rejected. So it
gives you a way to detect unexpected modifications to the base syntax,
and also support extensions with similar validation.

I believe that JSONSchema does the same, but I haven't used it and
wouldn't choose to.

I have created a language called ADL (Aspect Definition Language) with
implementations so far only in C# and Ruby, which has some very nice
characteristics in addition to these - though it's still not suitable
for resource-constrained embedded environments.
<https://github.com/cjheath/adl>

> You got the point.

Yes, I got the point. Email header fields are related and similar: you
can add them and transport them through nodes that don't recognise them.

>> I think it's better to manage protocol versioning explicitly. Put a
>> protocol version number in the handshake and take it from there.
>
> It's a possibile solution, yes.
>
>
>> However, some purposes for which ASN.1 is used (like X.509 public key
>> messages) define structures which have a variable-length element that
>> contains the nested encoding of another ASN.1 object. The outer
>> envelope says what it is, so the correct decoding can be applied. In
>> the case of X.509 certificate extensions, the wrapper also says
>> whether an understanding of the embedded content is critical or not.
>> So there's one example of how to handle it.
>
> Please, could you produce an example? I can't follow you.

Look at the way X.509 certificate extensions are handled. There's a text
dump visible here:
<https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System/8.0/html/Admin_Guide/Standard_X.509_v3_Certificate_Extensions.html>.

The key thing is that the outer layer (the certificate itself) is fully
typed, but contains a section for extensions. Every extension has an
identifier, a critical flag, and then a variable blob of data. The first
one there is the "Certificate Usage" which is a bit-mask of services
that may rely on this certificate. But basically the "blob" can be
skipped by a parser that doesn't recognise the identifier, and decoded
only by a parser that does recognise it.
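
A minimal C sketch of that policy, assuming the extensions have already
been decoded into records (the struct layout and id values here are
invented for illustration, not the real X.509 encoding):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical already-decoded extension, mirroring the X.509 pattern:
 * an identifier, a critical flag, and an opaque value blob. */
struct extension {
    uint32_t id;
    bool critical;
    const uint8_t *value;
    size_t value_len;
};

#define EXT_KEY_USAGE 15u   /* the one id this toy verifier understands */

/* X.509-style policy: unknown non-critical extensions are ignored;
 * an unknown extension marked critical makes the whole certificate
 * unacceptable. */
static bool check_extensions(const struct extension *ext, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (ext[i].id == EXT_KEY_USAGE)
            continue;           /* recognised: would decode the value */
        if (ext[i].critical)
            return false;       /* unknown AND critical: reject */
        /* unknown but non-critical: skip the blob */
    }
    return true;
}
```

The critical flag is the key design point: it lets the *sender* declare
whether ignoring an extension is safe, which is exactly the decision a
bare "skip what you don't know" rule cannot make.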

Clifford Heath.

Clifford Heath

unread,
Jun 6, 2019, 7:11:33 PM6/6/19
to
On 6/6/19 7:12 pm, pozz wrote:
ASN1CC already emits two languages, so I assume it is set up for
multi-language support. If that has been done well (and I haven't
checked, but I'd be surprised if it hasn't), it should be a fairly
straightforward matter to add support for a third language. Probably
easier than implementing a fixed JS/TS processor for your desired
message schema. I daresay you'd make many people grateful for that.

Clifford Heath.

pozz

unread,
Jun 7, 2019, 2:59:26 AM
to
Yes, but I don't have the know-how or the time to add a third language
to asn1scc. I need to find an off-the-shelf solution.

It seems there isn't a *proprietary* ASN.1 PER JavaScript implementation
either (I found only Java support).

Clifford Heath

unread,
Jun 11, 2019, 7:41:11 PM
to
I have the know-how. Contact me if it matters enough to you.

Clifford Heath.