I have a question regarding the approach for the JNIgen-based Java/Kotlin to Dart conversion project mentioned in the ideas list.
"Or do we need to teach an AI how to run JNIgen and make it generate code that is subsequently analyzed with the Dart analyzer, feeding errors back into the AI to improve its answer?"
My concern is whether this step is needed. The main task is to handle language-specific syntax differences between Java/Kotlin and Dart, while class structures (initialization and usage) remain untouched. Since JNIgen already reuses the same classes from the Java/Kotlin API, explicitly generating the JNI bindings and sending them to the model would only increase token cost without adding real value. Instead, we could achieve the same result with a simple prompt instruction:
"Assume JNI bindings are generated for all the Java classes."
Would this approach be sufficient, or should I still consider integrating AI-driven validation?
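For reference, if we did add AI-driven validation, I imagine the loop would look roughly like this (callAI is a hypothetical stand-in for whatever LLM API we use; the only real dependency is the SDK's dart analyze command):

    import 'dart:io';

    // Hypothetical stand-in: sends a prompt to some LLM, returns Dart code.
    Future<String> callAI(String prompt) async => throw UnimplementedError();

    Future<String> convertWithFeedback(String javaSource) async {
      final basePrompt =
          'Convert this Java code to Dart using the JNI bindings:\n$javaSource';
      var dartCode = await callAI(basePrompt);
      for (var attempt = 0; attempt < 3; attempt++) {
        final file = File('${Directory.systemTemp.path}/attempt.dart')
          ..writeAsStringSync(dartCode);
        final result = Process.runSync('dart', ['analyze', file.path]);
        if (result.exitCode == 0) break; // clean analysis: accept the answer
        // Feed the analyzer errors back so the model can repair its code.
        dartCode = await callAI('$basePrompt\n'
            'Your previous answer:\n$dartCode\n'
            'Fix these analyzer errors:\n${result.stdout}');
      }
      return dartCode;
    }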
Looking forward to your guidance.
Best regards,
Adarsh Raj
In one of the previous discussions (11th March) regarding the JNIgen to Dart code translation project, you mentioned that we could assume the JNI bindings were already in place and focus on the code conversion. However, you've raised a valid point that the generated JNI bindings may contain crucial information. Passing them as context (or generating the bindings with AI) could certainly help in producing better Dart code.
I've tried generating some jni_bindings.dart files, including a basic example showcased in my proposal and another example found here, and both seem to work well with the current test prompt. However, I'll need some additional time for more thorough testing on complex cases before I can draw a conclusion. For context, the bindings in those tests have roughly the shape sketched below.
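A heavily simplified sketch of what such a binding looks like (the Example class and greet method are hypothetical, and the real generated file also carries JNI method-ID lookups and reference management that I've omitted):

    import 'package:jni/jni.dart';

    class Example extends JObject {
      Example.fromReference(super.reference) : super.fromReference();

      /// Mirrors Java's `String greet(String name)`.
      JString greet(JString name) =>
          throw UnimplementedError('the real binding calls through JNI here');
    }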
Regarding the proposal, the timeline in the ideas list suggests having it finalized by April 2nd. Due to ongoing paper and project submissions, would it be possible to extend the deadline for this specific aspect (passing jni_bindings.dart to the AI as context) to April 6th or 7th? The final GSoC submission deadline is April 8th, 18:00 UTC.
Additionally, I would greatly appreciate it if you could review my proposal for any additional insights or comments. I have added tags for convenience. You can access the proposal here.
Thank you for your time and feedback!
Best regards,
Adarsh Raj
The point you raised is incredibly insightful and spot on.
Regarding whether the JNI bindings need to be passed to the LLM: my main concern was that if we ask the LLM to generate JNI bindings on its own, we enter a whole new domain that is highly error-prone. There is no reliable way to ensure the bindings are correctly generated, especially given how subtle JNIgen's conventions can be.
You beautifully presented two workarounds to this challenge:
Approach 1: This is absolutely essential. By crafting a detailed system prompt that explains how JNIgen maps Java/Kotlin types to Dart, we can give the LLM the background knowledge it needs to use the bindings correctly. This includes conventions like calling .toJString() for strings, class name mapping, and so on.
This is exactly the kind of setup I proposed in my proposal—providing the LLM with just enough context to make accurate conversions without bloating the prompt.
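Concretely, the kind of conventions such a system prompt would spell out look like this (a minimal sketch, assuming package:jni's current string helpers):

    import 'package:jni/jni.dart';

    void main() {
      // On standalone Dart, Jni.spawn() must be called before any JNI use.

      // Dart String -> Java String (JString), required whenever a bound
      // Java method takes a String parameter:
      final jName = 'Alice'.toJString();

      // Java String -> Dart String; releaseOriginal frees the underlying
      // JNI reference once it is no longer needed:
      final name = jName.toDartString(releaseOriginal: true);
      print(name);
    }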
Approach 2 (having the developer pass the exact function signatures): I see the value in this, especially for ensuring accuracy. However, I'm slightly skeptical about its feasibility in the long run, particularly for our end goal of building a seamless dev tool.
Incorporating an extra step where developers need to manually copy and paste function signatures into the prompt could end up being a usability bottleneck. It disrupts the workflow, and that small friction point could be the deciding factor in whether the tool is adopted or abandoned.
Since generating bindings autonomously is risky and asking the developer to supply them is cumbersome, Approach 1 becomes not only the preferred path but also the only one that fits both the correctness and UX goals. That's also the direction I've committed to in my proposal, and your explanation reinforces that decision, giving me some confidence :).
Thanks again for explaining these points so clearly!