Performance - The reflection library is slow and cannot be optimized by the compiler at compile time.
That's not true. A sufficiently advanced reflect-aware Go compiler could optimize a lot about the expression `reflect.TypeOf(int(0)).String()`, for example. Further, while the types being reflected upon are often not known at compile time, the operations are known, and could be optimized at compile time.
Supported Types
What about `type Int int`? It's unclear whether that would be supported, and that kind of thing is done frequently enough to be a concern.
- Pointers to structs which have been megajsonified.
- Arrays of pointers to structs which have been megajsonified.
I presume where you say arrays you instead/also mean slices. Why only pointers to structs? An application which uses a large slice of value structs will easily lose any performance gain offered by megajson by first needing to construct a corresponding slice of pointers to the original struct elements.
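The extra work described above can be sketched as follows (the `Point` type is invented for illustration): an encoder restricted to `[]*Point` forces the caller to allocate and populate a parallel pointer slice before encoding even starts.

```go
package main

import "fmt"

// Point stands in for any application struct held by value.
type Point struct{ X, Y int }

func main() {
	// The application's natural representation: a large slice of values.
	points := make([]Point, 1_000_000)

	// To feed an encoder that only accepts pointers to structs, it must
	// first build a parallel slice of pointers: one extra allocation
	// plus a full pass over the data, eating into any encoding speedup.
	ptrs := make([]*Point, len(points))
	for i := range points {
		ptrs[i] = &points[i]
	}
	fmt.Println(len(ptrs)) // prints 1000000
}
```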
I think a critical improvement would be encoding/json compatibility. For example, instead of (or in addition to) providing a NewMyStructEncoder(writer).Encode(val) for each type, why not generate MarshalJSON/UnmarshalJSON methods on MyStruct? Programmers who are already using encoding/json could then run the megajson code generator and get a transparent performance boost without any further code changes of their own. It would also mean the application programmer wouldn't need to jump through hoops to embed or contain a MyStruct within some other type.
Eventually, you should strive for full encoding/json -> megajson compatibility of supported types, so that anything that could be handled by encoding/json could be handled by megajson; otherwise that gap in supported types will make megajson less appealing in the long term.