I don't think we need the extra layer of
ProtoBuffer to define our configuration schemas/templates.
Many of us are familiar with the Jackson library (com.fasterxml.jackson.*). Jackson supports many data formats (including
ProtoBuffer, TOML, YAML, JSON, JSON Lines, etc.). We would
define our configurations as POJOs and could also use Jackson's
annotations for greater flexibility. The nice thing about this
approach is that it is format-agnostic: we can produce config
files in any number of formats automatically.
Jackson also supports validation via JSON Schema. This has become a de facto standard in many other programming stacks, like Python and Rust. IDEs automatically pick up the JSON Schema and give instant feedback as you edit the TOML (or JSON) file.
One downside (vs. ProtoBuffer) is that Jackson is Java-only: you can't generate config readers for Python. Personally, I don't see this as an issue because TOML is practically ubiquitous and every language has a very nice TOML library to read, write, and validate TOML files. TOML (or JSON) would be our lingua franca.
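To make the lingua-franca point concrete, here is one illustrative configuration rendered in both formats (the values are invented for the example, but the shape matches the Config POJO used further down):

```toml
[database]
url = "jdbc:postgresql://localhost/app"
user = "app"

[app]
name = "demo"
debug = false
```

```json
{
  "database": { "url": "jdbc:postgresql://localhost/app", "user": "app" },
  "app": { "name": "demo", "debug": false }
}
```

Either file deserializes to the same POJO; only the ObjectMapper's backing factory changes.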
How does this sound? I started to feel uneasy requiring ProtoBuffer as a layer just to define a new config. This is a heavy burden IMHO when we can simply write POJOs.
The new architecture would look like this:
Naturally the old and new system would have to coexist for a
long time.
Thoughts?
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.toml.TomlFactory;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;

import java.io.File;
import java.io.IOException;
import java.util.Set;

public class TomlValidationExample {

    public static void main(String[] args) {
        try {
            // Create a TOML mapper
            ObjectMapper tomlMapper = new ObjectMapper(new TomlFactory());

            // Read TOML file into a JsonNode
            JsonNode jsonNode = tomlMapper.readTree(new File("config.toml"));

            // Create JSON Schema
            JsonSchemaFactory factory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);
            JsonSchema schema = factory.getSchema(
                    TomlValidationExample.class.getResourceAsStream("/config-schema.json"));

            // Validate
            Set<ValidationMessage> validationResult = schema.validate(jsonNode);
            if (validationResult.isEmpty()) {
                System.out.println("TOML is valid");
                // If valid, proceed with mapping to a Config object
                Config config = tomlMapper.treeToValue(jsonNode, Config.class);
                System.out.println("Database URL: " + config.database.url);
                System.out.println("App Name: " + config.app.name);
            } else {
                System.out.println("TOML is invalid. Validation errors:");
                for (ValidationMessage message : validationResult) {
                    System.out.println(message);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Classes to represent the TOML structure (same as before)
    public static class Config {
        public Database database;
        public App app;
    }

    public static class Database {
        public String url;
        public String user;
        public String password;
    }

    public static class App {
        public String name;
        public boolean debug;
    }
}
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.introspect.AnnotatedField;
import com.fasterxml.jackson.databind.introspect.BeanPropertyDefinition;

import java.lang.reflect.Field;
import java.util.List;

public class JacksonPojoToMarkdown {

    public static void main(String[] args) {
        String markdown = generateMarkdownDoc(Config.class);
        System.out.println(markdown);
    }

    public static String generateMarkdownDoc(Class<?> clazz) {
        StringBuilder sb = new StringBuilder();
        sb.append("# ").append(clazz.getSimpleName()).append(" Configuration\n\n");

        ObjectMapper mapper = new ObjectMapper();
        JavaType javaType = mapper.constructType(clazz);
        List<BeanPropertyDefinition> properties = mapper.getSerializationConfig()
                .introspect(javaType)
                .findProperties();

        for (BeanPropertyDefinition property : properties) {
            String fieldName = property.getName();
            AnnotatedField field = property.getField();
            if (field != null) {
                Field javaField = field.getAnnotated();
                Class<?> fieldType = javaField.getType();
                sb.append("## ").append(fieldName).append("\n\n");
                sb.append("- **Type**: `").append(fieldType.getSimpleName()).append("`\n");

                JsonProperty jsonProperty = javaField.getAnnotation(JsonProperty.class);
                if (jsonProperty != null && !jsonProperty.required()) {
                    sb.append("- **Optional**\n");
                }
                // You can add more annotations here as needed
                sb.append("\n");
            }
        }
        return sb.toString();
    }

    // Example configuration class
    public static class Config {
        @JsonProperty(required = true)
        private String databaseUrl;

        @JsonProperty
        private int maxConnections;

        @JsonProperty
        private boolean debug;

        // getters and setters omitted for brevity
    }
}
Occurred to me that if we enforce a few standards we could also write a little utility (using JavaBeans and annotations) to auto-generate a JSON Schema! We could add annotations for validation features like dates, numeric ranges, etc. So everything can be done in a single POJO - we just use JavaBeans, reflection, Jackson annotations, and maybe add a few more like we have already done for the current Parameters files (@ParamDescription etc.).
I think this is the best bang for the buck: it keeps the "stack" simple and accessible while giving us all the features we want, like auto-documentation, editor support with validation, etc.
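To make the idea concrete, here is a toy, stdlib-only sketch of such a utility. The @Param annotation is hypothetical (a stand-in for @ParamDescription and friends), and a real implementation would lean on Jackson or a schema-generation library rather than hand-building JSON strings:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.StringJoiner;

public class ToySchemaGenerator {

    // Hypothetical annotation, similar in spirit to @ParamDescription
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Param {
        String description() default "";
        boolean required() default false;
    }

    /** Maps a Java field type to a JSON Schema primitive type name. */
    static String jsonType(Class<?> t) {
        if (t == int.class || t == long.class) return "integer";
        if (t == double.class || t == float.class) return "number";
        if (t == boolean.class) return "boolean";
        return "string";
    }

    /** Emits a minimal draft-07-style object schema for the given bean class. */
    public static String generateSchema(Class<?> clazz) {
        StringJoiner props = new StringJoiner(",\n");
        StringJoiner required = new StringJoiner(", ");
        for (Field f : clazz.getDeclaredFields()) {
            Param p = f.getAnnotation(Param.class);
            String desc = (p != null) ? p.description() : "";
            props.add(String.format(
                    "    \"%s\": { \"type\": \"%s\", \"description\": \"%s\" }",
                    f.getName(), jsonType(f.getType()), desc));
            if (p != null && p.required()) {
                required.add("\"" + f.getName() + "\"");
            }
        }
        return "{\n  \"type\": \"object\",\n  \"properties\": {\n"
                + props + "\n  },\n  \"required\": [" + required + "]\n}";
    }

    // Example bean
    public static class Config {
        @Param(description = "JDBC connection URL", required = true)
        public String databaseUrl;

        @Param(description = "Maximum pool size")
        public int maxConnections;
    }

    public static void main(String[] args) {
        System.out.println(generateSchema(Config.class));
    }
}
```

The same reflection walk could just as easily emit the Markdown documentation, so schema and docs stay in sync with one POJO.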
Thoughts?
Jim
Hi Jim,
Thanks for the examples and the detailed explanations.
In the JacksonPojoToMarkdown example I don’t see any generation of long descriptions or explanations, or any information on default values.
How would we make sure those type of data end up in the documentation?
With new annotations?
I realize one can always add those afterward, but the purpose of a single source of information would be lost.
-ys
Yves, the code was just a quick AI-generated
example to show that it is pretty easy and possible to set this up
in a small amount of code.
The key to this approach is using de facto standard
annotations like Jackson's and JavaBeans.
Ideally we want annotation-based validation as well. Check out the Bean Validation annotations (javax.validation - this is a standard Java API!).
Here is a project
that can use these annotations to auto-generate JSON Schemas:
@JsonPropertyDescription("The age of the person in years. Must be a positive integer.")
@Min(18)
@Max(150)
private int age;

@JsonPropertyDescription("The primary email address of the person. Optional, but must be a valid email format if provided.")
@NotBlank
private String email;
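For reference, a generator wired up this way could emit something roughly like the following schema for the two fields above (the exact output depends on the generator and its configuration):

```json
{
  "type": "object",
  "properties": {
    "age": {
      "type": "integer",
      "description": "The age of the person in years. Must be a positive integer.",
      "minimum": 18,
      "maximum": 150
    },
    "email": {
      "type": "string",
      "description": "The primary email address of the person. Optional, but must be a valid email format if provided."
    }
  }
}
```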
Thinking about backwards compatibility - I think we can have a single (JavaBean + annotations) implementation and support three formats:
All we need is a reader that can map the named
parameters in the current config format to the Java bean fields.
Of course naming will be important - but Jackson lets you
give a field a name (via @JsonProperty) which would then match what is in the
current config files. Types would also need to match. We get
validation for free!
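A minimal sketch of what such a reader could look like, using only reflection and a hypothetical @LegacyName annotation as a stand-in for Jackson's @JsonProperty (in practice Jackson's own databinding would do the heavy lifting):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

public class LegacyParamReader {

    // Hypothetical stand-in for Jackson's @JsonProperty("oldName")
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface LegacyName {
        String value();
    }

    /** Copies old-style named parameters onto a bean, matching legacy names and types. */
    public static <T> T read(Map<String, String> params, Class<T> beanClass) {
        try {
            T bean = beanClass.getDeclaredConstructor().newInstance();
            for (Field f : beanClass.getDeclaredFields()) {
                LegacyName legacy = f.getAnnotation(LegacyName.class);
                String key = (legacy != null) ? legacy.value() : f.getName();
                String raw = params.get(key);
                if (raw == null) continue; // parameter absent in the old file
                f.setAccessible(true);
                // "Validation for free": type conversion fails loudly on a mismatch
                if (f.getType() == int.class) {
                    f.setInt(bean, Integer.parseInt(raw));
                } else if (f.getType() == boolean.class) {
                    f.setBoolean(bean, Boolean.parseBoolean(raw));
                } else {
                    f.set(bean, raw);
                }
            }
            return bean;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Example bean: "maxWidth" is the name used in the old config format
    public static class Options {
        @LegacyName("maxWidth")
        public int maxLineWidth;

        public boolean debug;
    }

    public static void main(String[] args) {
        Options o = read(Map.of("maxWidth", "120", "debug", "true"), Options.class);
        System.out.println("maxLineWidth=" + o.maxLineWidth + " debug=" + o.debug);
    }
}
```

The point is just that one annotated bean can serve both the old named-parameter files and the new TOML/JSON path.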
This way we can have one implementation and still support the old config format.
This will be some heavy refactoring, but it is doable AFAICT.
Jim