MP AI meeting

Emily Jiang

Oct 18, 2024, 4:02:29 AM
to MicroProfile
Just to let you know that the MP AI meeting on 21st Oct has been cancelled. The next one will be on 28th Oct, when we will discuss whether to fork LangChain4J or to use the LangChain4J APIs as upstream. Please attend if you are interested.
Thanks
Emily

Mohamed AIT ABDERRAHMAN

Oct 18, 2024, 4:09:29 AM
to MicroProfile
Hi,
I'm interested and I have already contributed to LangChain4J.
How can I attend? (Excuse me, I'm new to the MP community 😁)

Best regards


--
You received this message because you are subscribed to the Google Groups "MicroProfile" group.
To unsubscribe from this group and stop receiving emails from it, send an email to microprofile...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/microprofile/64ddb79b-70eb-46c4-9a2f-87d9312bce2cn%40googlegroups.com.

Emily Jiang

Oct 18, 2024, 4:12:29 AM
to microp...@googlegroups.com
The joining details are in the MP calendar. Hope to see you there!

Thanks,
Emily

Alex Lewis

Oct 23, 2024, 8:19:16 AM
to MicroProfile
Hi. I just listened to the meeting on the 14th that was recently posted to YouTube.

I can see the two sides of the fence that came up in the call. If MP AI co-opts LC4j (like the OpenTelemetry example given) then it's locked to that one implementation, and maybe that's not great long term and forces the platform vendors to use LC4j. What I would say is that OpenTelemetry is maybe not the best example, as it is a specification and LC4j isn't; at least not right now.

I can see that improvements could be made to LC4j to make it integrate into the existing app servers and runtimes (HTTP client libs, JSON processing, etc., as mentioned in the call), but that's almost a problem for the platform vendors to work out with LC4j, if the vendors want to use LC4j. Also, I'm not sure that's affected by whether MP co-opts LC4j or whether it defines its own API/spec.

I also know that MP has used existing frameworks and their popularity as the indicator of whether to adopt a spec in MP, and then had that implementation help move the MP spec forward. I may be wrong, but if my memory serves me I think SmallRye may be an example of that for Metrics, and maybe other specs. Having said that, I think the Metrics spec is still MP specific and it just fits very well with a SmallRye implementation. Maybe the same could happen here with MP and LC4j?

Adopting LC4j would help drive that lib/framework's adoption and contribution, and that's a good thing. It would also quite possibly force out any other options, and it's hard to say categorically whether that's truly a bad thing or not (i.e. getting everyone to focus on one framework may, on the whole, be better than having lots of options that each lack contribution and probably all have different strengths and weaknesses).

IMHO, if MP is going to add AI to its list of specs then I'm not sure just co-opting LC4j wholesale is going to work out long term. What I mean is, LC4j can make itself more "compatible" with platforms on its own, and that may be driven by developers/adoption rather than needing an MP AI spec. E.g. if LC4j has issues running in OpenLiberty (not saying it actually does), or at least pulls in more libs than it ideally would, then that will likely be driven by developers raising issues (and hopefully contributing changes) to LC4j and/or OL. I'm not sure where MP comes in at that point, apart from possibly helping to market LC4j and nudge that work along. If, on the other hand, MP is meant to provide an API that developers can rely on without having to worry about what implementation is under the hood, then I think it needs to define its own API. That API may in fact be heavily influenced by LC4j, but I think they would remain separate. In this latter case, I don't think MP would need to chase LC4j but could pick the right time to select what works/sticks and refine/tweak it for MP where necessary.

AI in general is still moving quickly, and although it's quite likely the OpenAI API will become the standard (it kinda already is), it doesn't necessarily have to be the case. For example, in principle I may want to use llama3.java (https://github.com/mukel/llama3.java) along with a Llama 3 model embedded inside a war file and expose it as a one-shot prompt-and-response use case. There would be no HTTP between my code and the model as it's all local to the war. I don't know if I can do that with LC4j today, and in that example I don't need RAG, chat history, etc., just likely a System Message and a User Message. In that case, it would be nice if I could "configure/wire" MP AI in such a way that it works, and have the platform "do the right thing" to make it as efficient as possible, or at least allow me to plug in the right parts so I can drop the HTTP part between the app and the model. Whether the platform uses LC4j under the hood should at that point be irrelevant to me, the app developer.
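To sketch what I mean by that embedded, one-shot shape: all names below (ChatModel, OneShotService) are made up for illustration; this is not LC4j's API or any real MP API, just the shape of an in-process call with no HTTP between app and model.

```java
// Hypothetical names throughout; not LangChain4j or MP API.
@FunctionalInterface
interface ChatModel {
    // An in-process call: no HTTP between the app and the model.
    String generate(String systemMessage, String userMessage);
}

final class OneShotService {
    private final ChatModel model;
    private final String systemMessage;

    OneShotService(ChatModel model, String systemMessage) {
        this.model = model;
        this.systemMessage = systemMessage;
    }

    // One prompt in, one completion out: no RAG, no chat history.
    String ask(String userMessage) {
        return model.generate(systemMessage, userMessage);
    }
}

class EmbeddedModelDemo {
    public static void main(String[] args) {
        // Stub standing in for an embedded engine such as llama3.java.
        ChatModel embedded = (sys, usr) -> "[" + sys + "] " + usr;
        OneShotService svc = new OneShotService(embedded, "You are terse.");
        System.out.println(svc.ask("Suggest one sight in Paris."));
    }
}
```

The point of the sketch is that the platform could bind ChatModel to an in-war engine or to an HTTP-backed one without the app code changing.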

What I think should be considered is how things like Metrics, Observability, etc. fit into an MP AI. Also, concurrency of interactions with the model (i.e. I think you have to take extra steps to make sure LC4j doesn't store chat history/context globally). Model interactions are rarely measured in millis, and as such you'd ideally only shut down a runtime instance once all interactions with the model have completed. Thinking of a microservice architecture (though I think it applies to a monolith as well), I need to be able to signal to an instance/runtime that it needs to shut down, have that instance acknowledge that and start to mop up, allow current interactions to gracefully complete, have the instance signal that out to the rest of the infrastructure (e.g. Kubernetes and the like) and eventually, at the right time, actually shut down.
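The graceful-drain part of that shutdown sequence can be sketched with plain JDK primitives. The class name and shape here are hypothetical, nothing from MP or LC4j; a real runtime would tie this into its lifecycle and readiness probes.

```java
// Hypothetical sketch of the drain step: refuse new model interactions once
// shutdown is requested, then wait for in-flight ones to complete.
final class ModelGate {
    private int inFlight = 0;
    private boolean draining = false;

    // Called before starting a model interaction; false means "shutting down".
    synchronized boolean tryBegin() {
        if (draining) return false;
        inFlight++;
        return true;
    }

    // Called when a model interaction finishes (success or failure).
    synchronized void end() {
        inFlight--;
        if (inFlight == 0) notifyAll();
    }

    // Flip to draining (a readiness probe would now report "not ready")
    // and block until every in-flight interaction has ended.
    synchronized void drain() {
        draining = true;
        while (inFlight > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

Because model calls can run for many seconds, the interesting design choice is that drain() blocks indefinitely here; a production version would add a timeout after which remaining calls are cancelled.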

Maybe MP's role in AI is to help operationalise AI when it's in a multi-user, multi-"session" and even multi-tenant architecture with zero-downtime rollouts and scaling.

Just my two penn'orth :)

Cheers

Emmanuel Hugonnet

Oct 24, 2024, 11:41:25 AM
to microp...@googlegroups.com
In my opinion, smallrye-llm is 'only' a way to experiment with the LLM and AI landscape. As you wrote, this is currently a moving target, and the choice to use LC4J was just a way to do that. I still think that it is too early to be able to define a proper API, as it evolves quite quickly, but we can still take a look and make it easier to integrate in current runtimes.
LC4J doesn't provide support for llama3.java, but they support jlama ;) which is somewhat similar as far as I can tell. So I think you can do that with the current state of smallrye-llm.
I agree that we should aim for a bigger picture, integrating with other parts of MP for telemetry, metrics, context propagation, etc.

Cheers,
Emmanuel

Alex Lewis

Oct 24, 2024, 1:08:33 PM
to MicroProfile
I didn't know about smallrye-llm, another one to have a look at :)

From what I've seen so far of the LC4j API, it seems like a good one. It's layered, so you can go from a very simple few lines with the AiServices API/layer, with LC4j doing the heavy lifting for you, or you can drop down a layer if you need finer control. Having said that, I've not looked at other implementations like smallrye-llm, so my opinion is a bit limited.

> LC4J doesn't provide support for llama3.java but they support jlama ;) which is somewhat similar as far as I can tell. So I think you can do that with the current state of smallrye-llm.
 
My point was more that with an MP AI specific API (rather than just saying "use LC4j"), in a setup where I can bundle the inference engine and the model inside a war file, having the MP AI API would give the platform vendors more options on how to deliver that in the most efficient way. E.g. in most cases the platform may use LC4j but, in the case I describe, it might have its own implementation that is much simpler and thus could be more efficient. It's all conjecture really. My point is maybe more that, given things are changing rapidly, a dedicated MP API may be a good way to give platform vendors flexibility to deliver an MP API that isn't restricted only to what is possible in LC4j at any given time; and given the fast-moving nature of things at the moment, that flexibility might not be a bad thing.

In AI right now, the main concepts are (to the best of my understanding):
  • Prompt Engineering
  • Context Window (and management), e.g. Chat History
  • RAG
  • Tools (aka Function calling)
  • Agentic Framework
I think translating those areas into APIs, annotations, etc. in a loosely coupled (hopefully) but inter-compatible way could mean that those areas can evolve freely, and new areas/capabilities that come up in AI can be incorporated. That may effectively be what you already get with LC4j, and as I say maybe an MP AI API would be heavily influenced by LC4j, but I think there may be value in having an MP AI API on this occasion, compared to just adopting an upstream like MP did with OpenTelemetry.
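To make that concrete, here is one purely hypothetical way those concepts could surface as loosely coupled annotations. None of these names are real MP APIs (LC4j does have annotations with similar names, e.g. @SystemMessage and @Tool, but this is just a sketch of the shape, not a proposal):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// All annotation names below are hypothetical illustrations of how the
// concepts in the list above might map onto a loosely coupled API.

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface SystemMessage { String value(); }            // prompt engineering

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
@interface ChatMemory { int maxMessages() default 20; } // context window management

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Augment { String retriever(); }              // RAG

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Tool { String description() default ""; }    // function calling

// A service interface combining them, which a platform would resolve at runtime.
@ChatMemory(maxMessages = 10)
interface TravelAssistant {
    @SystemMessage("You are a concise travel assistant.")
    @Augment(retriever = "travel-docs")
    String plan(String request);
}
```

Each concern lives in its own annotation, so one area (say, RAG) could evolve or be replaced without touching the others.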

Beyond that, integration with telemetry, metrics, etc. may be the reason that MP could not just specify LC4j as an upstream and consequently needs its own spec.

Cheers 

Emily Jiang

Oct 30, 2024, 11:07:25 AM
to MicroProfile
Hi Alex,
Please feel free to join the MP AI call on Mondays. This week, I invited Clement and Max from Quarkus to tell us more about LangChain4J, as they have used LangChain4J and managed to get it integrated into Quarkus. Their view is that we are better off adopting LangChain4J and treating it as the de facto standard (the most popular framework) rather than forking it. LangChain4J has many APIs to work with and it moves very fast. If MP AI forked those APIs, the forks would very rapidly be out of date. As you know, unless we have many AI experts here, we are better off collaborating there and focusing on the integration part rather than trying to standardize anything at this stage.

Thanks
Emily

Alex Lewis

Oct 30, 2024, 12:14:38 PM
to MicroProfile
Hi Emily,

> better off to adopt LangChain4J and treat it as the de facto standard (most popular framework) rather than forking it

I've only had experience with LangChain4j. To be honest, I don't know if there are any other comparable Java frameworks for AI, and I admit I simply haven't looked beyond LC4j. As such, if it's a pool of one then it makes sense to stand behind LC4j.

I wasn't necessarily suggesting a straight fork with a package rename, but more an "inspired-by" API for MP, if the speed of evolution of both AI and LC4j actually became a problem for MP. I.e. would the cadence of MP releases and platform certification against an MP release all be "too slow" to make MP having an AI API viable? People just wouldn't opt to "turn on" the MP LC4j functionality and would instead just include LC4j directly. I guess MP AI is not at the point of defining how an application would opt in to using MP AI vs opting out of/disabling the MP AI feature and instead using LC4j directly in the app?

Would MP need to pin to a version of LC4j? If so, that could mean vendors are locked to incorporating that version of LC4j, and they would have to ensure the bundled/platform-supplied version does not impact the ability of an application to use whatever version of LC4j it wants instead, e.g. to get the bleeding-edge copy of LC4j because it has the latest desirable feature(s). MP may be able to specify a major version range as long as LC4j sticks to semantic versioning; are there assurances that LC4j will without doubt stick to semantic versioning? I would imagine that MP would need that assurance in order to adopt it, would it not?

Very likely a crazy idea, and I don't know how close the relationship is between MP and LC4j or if there is one at all, but an option (although I'm assuming it's quite unlikely to happen) could be that LC4j chose to donate the LC4j API to MP (or some other "common ground"), and as such LC4j would become an impl of that API. That would give developers some assurance of the stability/versioning of the API through MP versions, and it would help legitimise the API as "the" AI API for Java. I don't know whether the folks on the LC4j side would consider that option, but thought I'd mention it nevertheless.

In my personal case, MP adopting LC4j would work fine for me. I'm just trying to think through the wider, more general consequences for MP, and clearly I'm just thinking out loud :)

I'd like to get back into MP calls, so I'll see if I can join the MP AI call on Monday.

Cheers

Reza Rahman

Oct 30, 2024, 4:33:17 PM
to microp...@googlegroups.com

Both Ed and I have very limited bandwidth to join calls these days. So what I am doing is putting down in writing what the Microsoft position is with regard to this topic, if for no other reason, then just for the record.

The following are the reasons Microsoft would vote -1 on any sort of AI/LLM specification in either MicroProfile or Jakarta EE at the current time (even a standalone one):

* The entire area is still very much fast-moving shifting sands. It's just not ready for any kind of "one specification to rule them all" effort. Any such effort is bound to get it wrong - perhaps completely wrong.
* This is an area that requires extremely deep domain expertise that most Jakarta EE vendors simply do not have. The people that have the correct domain expertise are really not ready to come together in any standards body, let alone Java, Jakarta EE, or MicroProfile.
* To the extent that "one API to rule them all" can even work in this space, LangChain and LangChain4j already have significant traction. By virtue of being just an open source project in an emerging space that can move fast, not worry too much about quality/stability, etc., it will always have a competitive advantage. Any even mildly competing Jakarta EE/MicroProfile specification is unlikely to gain relative credibility or adoption against LangChain4j, so such an effort has a high probability of doing more harm than good.

What Microsoft believes to be a prudent move is for Jakarta EE/MicroProfile vendors to collectively ensure that LangChain4j includes a robust CDI extension that can work with both Jakarta EE and MicroProfile. Getting this work done alone is hard enough. We should all promote this extension. This will also earn Jakarta EE/MicroProfile vendors some badly needed goodwill with open source projects. Indeed longer term, Jakarta EE/MicroProfile should look for ways to officially promote open source projects that embrace CDI, Jakarta EE, and MicroProfile - allowing users to add these projects to their applications as they wish without getting in the way.

Emily Jiang

Oct 31, 2024, 5:58:08 AM
to microp...@googlegroups.com
Hi Alex, Reza,

Based on the past experiments and research into LangChain4J and the AI discussion, my view was and still is to adopt LangChain4J and focus on standardising the integration, which was the original focus anyway. Now a few folks are trying to fork the APIs, which makes me very nervous. I completely support the integration effort and working with upstream for the easy use of AI APIs.

As for which version of LangChain4J: I think we will just specify the minimum version and always keep the project up to date, in sync with LangChain4J. This spec will be standalone for sure, and it can have its own release cycle.

In MicroProfile, there is no backward compatibility guarantee. We have more freedom to try and learn. I don't quite understand the concerns being expressed.

Thanks
Emily

donbo...@gmail.com

Oct 31, 2024, 9:00:59 AM
to MicroProfile
I've been spending some time understanding the LC4J and Spring AI libraries for gen AI. My thoughts on what we could/should do in MP are similar to what's being expressed by Reza and Emily. I'll try to keep this fairly brief, as a lot of the points have already been made in this thread.

- I think it would be a mistake to create a brand new API for interacting with LLMs. At this moment LC4J has 165 listed contributors and roughly 20 commits a week (on average) to https://github.com/langchain4j/langchain4j. The MP community doesn't have enough LLM experts to take over the lead on defining how Java apps should work with LLMs.
- I like the idea of integrating LC4J with key MP specs -- such as MP Telemetry or possibly MP Fault Tolerance. Perhaps we could work with the LC4J community to add hooks for telemetry/fault tolerance to make it easier for us to integrate without having to create a new API. Those hooks would need to accommodate any telemetry system (e.g. Micrometer, OpenTelemetry, etc.).
- I like the idea of taking a similar approach with the MP AI API to the one we have with the MP Telemetry API -- where MP adds a few annotations to make things work with CDI, but otherwise stays out of the way and lets developers use the growing/evolving API (e.g. LC4J) directly.
- I agree that the LLM API space is still very much in flux, and so if we want to have any API at this point it needs to be standalone.
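The "hooks" idea in the second bullet could look something like the following. The listener interface here is hypothetical, not an actual LC4J API; it's just a sketch of the kind of seam that would let any telemetry system observe model calls without LC4J depending on it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical hook interface -- not a real LC4J API. A Micrometer or
// OpenTelemetry bridge would implement this to record spans/metrics.
interface ModelCallListener {
    void onRequest(String prompt);
    void onResponse(String prompt, String completion, long elapsedNanos);
}

// Wraps any model function and notifies listeners around each call.
final class InstrumentedModel {
    private final UnaryOperator<String> delegate;
    private final List<ModelCallListener> listeners = new ArrayList<>();

    InstrumentedModel(UnaryOperator<String> delegate) { this.delegate = delegate; }

    void addListener(ModelCallListener l) { listeners.add(l); }

    String generate(String prompt) {
        listeners.forEach(l -> l.onRequest(prompt));
        long start = System.nanoTime();
        String completion = delegate.apply(prompt);
        long elapsed = System.nanoTime() - start;
        listeners.forEach(l -> l.onResponse(prompt, completion, elapsed));
        return completion;
    }
}
```

The key property is that the hook accommodates any telemetry backend: the library only knows about the listener interface, never about a specific metrics or tracing system.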

Don

Reza Rahman

Oct 31, 2024, 9:48:35 AM
to microp...@googlegroups.com
Just to clarify, Microsoft does not think it's a good idea to have essentially some kind of LangChain4j integration layer built into MicroProfile or Jakarta EE. While LangChain4j has traction today, there is no telling what the future will bring in such a fast-moving space. For example, we already have customers that are opting to use OpenAI directly instead, even just via REST calls. Having such a tight binding to a specific library, in a specific way, right now, for an "open platform" like MicroProfile or Jakarta EE is a pretty big commitment to just one thing amongst many others. This also sets a potentially problematic precedent. Does it mean you can only use an open source library if it's specially blessed by the platform? Who determines version and dependency compatibility? The user? The library? The platform?

This is why we are pointing very clearly towards just contributing a very robust CDI extension to LangChain4j itself. All the work needed can easily be done there, and I bet it will be most welcome too. It should also be possible to make the current positioning clear in a loosely coupled fashion, just through documentation and marketing, instead of a much tighter coupling right in the specifications/platform.

It’s a much better signal to the ecosystem and avoids a bunch of weird questions for either Jakarta EE or MicroProfile. It’s also how most Java developers think when they consider open source libraries. It does ask all of us to imagine beyond limiting our contributions to Jakarta EE and MicroProfile only but rather spreading the influence of “our” technologies like CDI into the broader open source ecosystem at large instead. I appreciate it could be a difficult bridge to cross for some but there might indeed be much greener pastures on the other side? Does everything really need to be a specification?



Emmanuel Hugonnet

Oct 31, 2024, 11:12:09 AM
to microp...@googlegroups.com, Reza Rahman
Hello,
Given it was my branch that was mistakenly merged that caused these exchanges, I'll just try to clarify.

We were experimenting on a branch that wasn't properly rebased, making it hard to follow changes on the main branch. I spent some time cleaning this up and sent a new branch that was cleanly mergeable and got merged. I think the best thing to do is to revert the commit that is causing all this, as the goal is to use smallrye-llm to provide CDI integration for LangChain4J (at least for now).

So, to make things clearer, I'm going to revert that commit so we can move on to integrating LangChain4J with other MicroProfile/Jakarta EE specs and provide an easy path for developers using LangChain4J in such environments.

Emmanuel

Ulf

Nov 5, 2024, 2:32:51 AM
to MicroProfile
On Thursday, October 31, 2024 at 2:48:35 PM UTC+1 Reza Rahman wrote:
> It’s a much better signal to the ecosystem and avoids a bunch of weird questions for either Jakarta EE or MicroProfile. It’s also how most Java developers think when they consider open source libraries. It does ask all of us to imagine beyond limiting our contributions to Jakarta EE and MicroProfile only but rather spreading the influence of “our” technologies like CDI into the broader open source ecosystem at large instead. I appreciate it could be a difficult bridge to cross for some but there might indeed be much greener pastures on the other side? Does everything really need to be a specification?

+1. I could see an AI API for Jakarta EE some time down the road, but it seems rather outside of what MP generally concerns itself with.
