Microprofile Logging


Alex Lewis

unread,
Apr 19, 2019, 7:45:18 AM4/19/19
to Eclipse MicroProfile
Hi,

As suggested by Emily Jiang in response to my tweet, I'm going to ask the question:

"Is there a need for a MicroProfile Logging spec?"

By all means stop here, and thank you in advance for any feedback you may provide. If you feel like reading on, I'll try to frame the reasons behind my question and, in an effort to bring suggestions/solutions rather than just problems, I'll also provide some thoughts.

I realise that logging is somewhat of a long-running and contentious topic, and that my question is also asking whether we need yet another logging API, to which I'm sure most people's reaction is "no!". Having said that, I believe the lack of a standard logging API is a hindrance to application portability and unnecessarily burdens apps with an inevitable problem to solve, just with varying degrees of complexity.

When I say "app", what I'm referring to is the business logic/functionality that solves the problems at hand.

​Application Servers (Runtimes) and​ libraries​ ​have ​all ​chosen a logging framework​ that they believe addresses​ their needs and also the needs of the applications; however, their choices vary. You are either lucky they​ have​ chose​n​ the same one you want to use, you change​ your app​ to use the same one or, live with separated/inconsistent logging output​ and management.​

In a world of containers, JSON logging and console output are the standard, at least for those apps wishing to follow the guidance of 12-factor apps and in particular its section on logging. In which case, having inconsistent console output is at minimum an annoyance and more often than not a collection of problems to be solved, involving changes to tooling and/or its config (EFK, etc.). This becomes especially true when the containers within a system are a mix of stacks. Imagine the impact of trying to get consistent system logging if you have containers of Wildfly, Open Liberty, Spring, Spring Boot and, more recently, Quarkus, and more importantly the hoops the apps themselves need to jump through. My example is somewhat extreme, but I also think that having a system of purely one stack for every job/app is not ideal.

Ideally, I don't want my app to care that Wildfly uses Log4j(2) whilst Open Liberty uses JUL, and so on. Even if I'm able to use the SLF4J API and the "right" implementation, I still need to consider how I get that implementation into my app: in the war, as a lib in a global library that I can put into an intermediary container layer, etc. These are all things I'd rather not need to worry about at the app layer. This is where I see the benefit of a MicroProfile Logging API: I have a Logger injected into my code and know the output will be combined with the rest of the runtime's logging, the output will be in a consistent format, and management is centralised.

Should I not be alone in my logging frustrations, I would advocate for an initial spec that started with the following (sketched in the example just below):
- The Logger interface definition; whether that's a new one or an existing one that is selected.
- An @Logger annotation (or whatever became appropriate) for @Inject of a Logger type.
- The ability to manually instantiate the same Logging "instance" where annotation injection is not appropriate.
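
To make that concrete, here is a minimal sketch of the programming model I have in mind. Every name in it is hypothetical (there is no org.eclipse.microprofile.logging package, Logger interface or LoggerFactory today); it only illustrates the shape of the proposal, with the implementation supplied entirely by the runtime.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// Hypothetical MicroProfile Logging API; neither the package nor the types
// below exist today. This is purely an illustration of the proposed model.
import org.eclipse.microprofile.logging.Logger;
import org.eclipse.microprofile.logging.LoggerFactory;

@ApplicationScoped
public class OrderService {

  // Injected by the runtime and backed by whatever framework the
  // application server already uses (Log4j2, JUL, Logback, ...).
  @Inject
  private Logger log;

  // Manual lookup for places where injection is not appropriate.
  private static final Logger STATIC_LOG = LoggerFactory.getLogger(OrderService.class);

  public void placeOrder(String orderId) {
    log.info("Placing order {}", orderId);
  }
}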

There is, of course, the problem of what happens with bundled libraries that depend on a specific logging implementation that differs from the runtime's, but I think that's possibly wider than just MicroProfile.

If you've read this far, thank you!

I look forward to people's thoughts.

Cheers,
Alex

Werner Keil

unread,
Apr 20, 2019, 8:59:01 AM4/20/19
to Eclipse MicroProfile
I don't really see that in MicroProfile either.
There are plenty of log analysis tools like Elastic or Splunk, but they work well with common logging frameworks like Log4J or SLF4J. The latter especially is an abstraction, a bit like what Java 9 also introduced on a smaller scale with System.Logger.

The really tricky things are on the integration side with Docker/Kubernetes sidecars etc., but that does not look like anything to specify in Java or MP either.
Just injecting a logger with CDI? Sorry, but what kind of spec would that be? Abstraction layers like SLF4J exist and are widely established, so what would be left that justifies another spec?

Werner

Matjaz B. Juric

unread,
Apr 20, 2019, 3:19:21 PM4/20/19
to Eclipse MicroProfile
Hi,
We have a logging solution in KumuluzEE with the following goals: to simplify logging for the developer, to standardize logging content, and to configure logging from the common config framework (and not depend on the specific configuration of an app server, which is particularly important in K8s). Our solution has the same API irrespective of the logging framework beneath. Currently we support Log4j2, JUL and Fluentd.
A more detailed description is available here: https://github.com/kumuluz/kumuluzee-logs/tree/feature/fluentd
So, there might be a case for logging inside MicroProfile.
Best regards,
Matjaz

Werner Keil

unread,
Apr 21, 2019, 5:34:38 PM4/21/19
to Eclipse MicroProfile
You mean the MicroProfile Config API?

Still not fully convinced it makes sense to create yet another wrapper beside SLF4J or similar solutions, but if you at least reuse other parts, why not give it a try. Isn't there an incubator for those ideas?

Werner

Alasdair Nottingham

unread,
Apr 21, 2019, 7:34:48 PM4/21/19
to microp...@googlegroups.com
While I agree that we probably have enough logging APIs in the world that creating a new one isn't going to solve anything, I can see value in MicroProfile or Jakarta EE having an opinion on how to support logging in a portable way.

I have to admit I didn't think this was a huge issue, but that doesn't mean it isn't worth discussing. The reason I didn't think this was a huge issue is because with Open Liberty we have open source using almost every logging API and we route them to the same place. That

Alasdair Nottingham

Alex Lewis

unread,
Apr 22, 2019, 8:22:58 AM4/22/19
to Eclipse MicroProfile
Thanks all for your responses, and apologies for not responding sooner.

Alasdair, I think you frame my point better than I did, and in far fewer words :) I suppose I had made a leap that either a new logging interface would be necessary or an existing API such as Log4j2 or SLF4J would be adopted. Do you have anything in mind for what that "opinion" could look like?

Werner, I agree that the log analysis tools work fine once you have a consistent log format, but that's once it's consistent across the app, the runtime and all of your microservices. Otherwise, you have varying degrees of inconsistency to deal with. Having said that, that's not really the problem I'm trying to solve here. In its simplest form, I'd like the app code not to care about which logging implementation to use and have the implementation provided by the runtime.

Matjaz, I'm glad I'm not on my own :) and thanks for sharing the link. I've not had a chance to look at it in detail but will do so over the next day or so.


I believe (part of) the benefit of the MP specs is about lowering the barrier of entry to writing microservices and making those microservices play nicely in their environment, as well as guiding choices to provide consistency and interoperability. MP-Config is maybe a good example of this. There have been plenty of ways of implementing configuration over the years, in and out of containers, with a large number of libs, and there have been calls for a standard "config" API in Java, J(akarta) EE and so on. I think mp-config hits the nail on the head by removing the need for the app developer to think about it in app code (business logic) and plugging in the right implementations as needed. If you need other sources of config, you don't need to modify your core app code, even if you do need to rebuild the war with a new lib and maybe even a new config source. How often do apps really change config sources? I'm going to risk saying not that often, but that doesn't mean that the specification doesn't have value.

I changed my logging to use JUL as I'm using OL, and it enables my app logging to be combined with OL logging in JSON format out to the console, making it easier to consume and process with the likes of EFK. I had tried various combinations of SLF4J and Log4j2 but found that if I had JSON and console logging enabled in OL, the OL JSON would wrap the JSON output from my app, making the EFK side of things awkward. I may have been doing something wrong, and Alasdair's comment "with Open Liberty we have open source using almost every logging API and we route them to the same place" may also be an indication of that.

I guess my point could boil down to... I have an app that uses JUL logging, knowing that I can configure OL to send JSON output to the console and that the JSON structure is consistent for OL's own logging as well as the logs from my app. If I were to deploy my war to another app server, what is the likelihood I would get the same outcome without any changes to my application code? That feels like a barrier that really doesn't need to exist; removing it would create a cleaner separation of concerns.

Cheers


donbo...@gmail.com

unread,
Apr 22, 2019, 2:40:36 PM4/22/19
to Eclipse MicroProfile
I can see some value to this, but as everyone is commenting, it's complicated.

I think it could be beneficial to have a standard set of JSON output fields for logs.  That would make life easier for dashboards or other log analysis tools that want to use logs from a variety of servers (OL, Wildfly, ...).  We wouldn't even have to make that be the ONLY log format app servers support - just one that could be used.

I wonder if developers would switch to an MP logging API.  People love their appenders and filters and the ecosystem that surrounds their logging API, which is why JUL, despite being right there in every JDK since Java 1.4, hasn't replaced log4j, log4j2, logback, slf4j, and others.

Perhaps the answer would be that we need:
    1) config consistency
    2) support for one "standard API"
    3) support for lots of other logging APIs mapping to/from the "standard"
    4) output consistency.  

#1 would naturally be something based on mpConfig, #2 / #3 sounds a lot like SLF4J, and #4 would possibly be JSON logging field consistency.
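
As a rough illustration of #1, logging settings could be sourced through the existing MP Config API. The property name below is made up for the example, but @ConfigProperty is the real MicroProfile Config annotation.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class LoggingSettings {

  // "mp.logging.level" is a hypothetical property name; the point is only
  // that logging configuration would come from the same sources (env vars,
  // microprofile-config.properties, ...) as the rest of the app's config.
  @Inject
  @ConfigProperty(name = "mp.logging.level", defaultValue = "INFO")
  String defaultLevel;
}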

Don

Werner Keil

unread,
Apr 22, 2019, 3:49:17 PM4/22/19
to Eclipse MicroProfile
Both Dropwizard and Spring Boot rely on Logback as the default logging solution, but they offer abstraction via SLF4J, so other log frameworks like Log4J can be used, too. The spring-boot-starter-logging JAR contains nothing but some config files as properties, no extra libraries or overhead.

So maybe something similar using MP Config to configure those things could be all that's needed here, too. That would mostly cover #1.
The rest IMO is mainly configuring a log format, e.g. https://gquintana.github.io/2017/12/01/Structured-logging-with-SL-FJ-and-Logback.html using the same Logback + SLF4J combination.
This is fairly old: https://github.com/savoirtech/slf4j-json-logger. I assume it should be part of Log4J by now.
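
For anyone unfamiliar with the structured-logging approach in the linked article, it boils down to something like the sketch below. The field names are arbitrary, and the JSON rendering itself is done by whatever encoder (e.g. a Logback JSON encoder) is configured in the backend.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class PaymentHandler {

  private static final Logger LOG = LoggerFactory.getLogger(PaymentHandler.class);

  public void handle(String paymentId) {
    // MDC entries are emitted as structured fields alongside the message
    // by a JSON-capable encoder configured in the logging backend.
    MDC.put("paymentId", paymentId);
    try {
      LOG.info("processing payment");
    } finally {
      MDC.clear();
    }
  }
}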

Werner

Emily Jiang

unread,
Apr 23, 2019, 6:13:27 AM4/23/19
to Eclipse MicroProfile
+1 Don!
Thank you Alex for bringing this for discussion!
As one of the missions of MicroProfile is to support portability, we need to look at the portability issues around logging. For logging, I think it would be great to achieve "switch from one runtime to another without any impact on logging". Basically, no matter what runtime a microservice is deployed to, it would be amazing if no change were required and the same logging were displayed in ELK.
I kind of like the idea that no new API is introduced but a wrapper is provided by all MP runtimes, mapping the various logging frameworks to the MP standard; basically option 3 as per Don.

Thanks
Emily

Ken Finnigan

unread,
Apr 23, 2019, 9:27:30 AM4/23/19
to MicroProfile
I consider logging APIs a bit like tabs vs spaces: everyone has their personal preference and it's very difficult, maybe impossible, to get complete agreement.

One thing I'm confused about is where the lack of portability comes from. My experience is that most applications bundle the logging implementation they want to use within their app, which is portable because it's within the app.

I don't see creating some kind of wrapping API for MicroProfile as beneficial, as it would replicate what slf4j does today.

There would possibly be benefit in defining an output format for applications using MicroProfile though, vs a specific API.

Ken


Alex Lewis

unread,
Apr 23, 2019, 2:26:42 PM4/23/19
to microp...@googlegroups.com
Don, I totally agree and considered raising similar points, but knowing the API aspect would be contentious I thought I'd try not to bite off too much :)

Ken, the API is really a side effect of the key aspect of portability and not needing to bundle a logging implementation. The other aspect is the integration of the application logging with that of the server/runtime, so the logging goes to the same place, each log line is in the same format and configuration is managed at the server level. This IMHO is especially important for JSON logging and console output in a container environment. If potential "politics" were not an issue then personally I think selecting an existing API may be better than introducing a new one. A risk may be that it makes server adoption of that API a little more awkward and thus may present a barrier.

However, this has predominantly come from my assertion of a lack of portability and that being an issue but maybe that's not the consensus?

Would a simple example app (REST App, that Logs on receipt of a request) that bundles a logging implementation (once with Log4j2 and again with SLF4j) deployed to a selection of servers (OpenLiberty, Wildfly and Payara?) help identify whether there is a need? The exercise would measure:
  - Did logging work immediately?
  - Was the logging consistent with the server output?
  - How is logging configured? (Using the server or separately)

Cheers


Alex Lewis

unread,
Apr 25, 2019, 1:51:59 PM4/25/19
to microp...@googlegroups.com
Hi,

I ran the experiment I suggested in my last post. Log4j2 was done using the log4j-api, log4j-core and log4j-web libraries/components. SLF4J was done using slf4j-api and slf4j-jdk14.

The experiment showed that when using Log4j2, the logging output and management is separate from the application server. The console log output was inconsistent and would create a problem/challenge to solve for Log aggregation.

The SLF4J results were much better as the logging and management was integrated with the server and there was no need for an external config file.

In each case the test application was "portable" in that I could deploy the same war to each server (OL, Wildfly and Payara) without change and there would be some level of logging. However, I'd either be lucky that I picked SLF4J (and particularly the JDK14 output lib), live with separated logging and the challenges above, or change my application to use SLF4J.
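
For anyone who hasn't looked at the repo, the application-side code in the SLF4J variant amounts to something like this sketch (class and resource names here are illustrative, not the actual test code). With slf4j-jdk14 as the only binding on the classpath, these calls end up in java.util.logging and therefore in whatever the server does with JUL records.

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Path("ping")
public class PingResource {

  // The application compiles against slf4j-api only; the slf4j-jdk14
  // binding routes every call into java.util.logging.
  private static final Logger LOG = LoggerFactory.getLogger(PingResource.class);

  @GET
  public String ping() {
    LOG.info("ping received");
    return "pong";
  }
}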

In reflection to my original post, I believe it is still valid for the following reasons:
  • Developers would somehow need to know to use SLF4J+JDK14 if they want the best chance of having unified logging. How do they know that or find that out? It's also an improved chance, not a guarantee.
  • The SLF4j+JDK14 combo worked well for OL, Wildfly and Payara; I don't know if it's true for all app servers / runtimes.
  • The application needs to bundle SLF4J+JDK14. 
  • This is true right now, but should an app server change how it manages logging, SLF4J may no longer be a good choice and the application would need to be updated.

I believe that, just as I don't need to care whether the JAX-RS implementation is Jersey or something else, or whether CDI is provided by Weld, I shouldn't need to care whether logging is provided by Log4j2, SLF4J, Logback, etc. I just rely on the APIs and trust that the runtime made good decisions for me. Should it become beneficial to move my application to another runtime, I can, but right now the logging aspect will present a problem to varying degrees. That IMO feels unnecessary, especially for something that should be relatively trivial when compared to JAX-RS, CDI, Config, etc.

HTH

Cheers

Ladislav Thon

unread,
Apr 26, 2019, 2:31:41 AM4/26/19
to MicroProfile
Personally, I'm sympathetic to this idea, but I also think that logging is a poisoned topic. It's not as trivial as many people think (see e.g. the Log4j2 documentation on garbage-free logging). I'd love to have a single logging API, but I believe we'd just end up with XKCD 927.

LT

On Thu, 25 Apr 2019 at 19:51, Alex Lewis <alex.l...@gmail.com> wrote:

Jean-Louis Monteiro

unread,
Apr 26, 2019, 4:20:09 AM4/26/19
to MicroProfile
Same here ... Don't know what else we could bring in the logging landscape.
What innovation? What abstraction?

The Java platform already provides a logging framework by default.

If we get back to the foundations of MicroProfile and the definition ....

Eclipse MicroProfile is a collection of community-driven open source specifications that define an enterprise Java microservices platform.

Is logging a critical requirement for such a goal?




Emily Jiang

unread,
Apr 26, 2019, 5:45:16 AM4/26/19
to Eclipse MicroProfile
Thank you Alex for running more experiments and sharing what you have found out. This is a very valuable piece of info.

IIUC, from your experiment, the workaround for achieving consistent logging would be to use SLF4J + JDK14. Even if MicroProfile cannot come up with a better solution, this best practice should be documented somewhere to provide guidance for other microservice developers.

Kudos for spending time on this experiment!

Here is what I think we might be able to achieve without much effort.

As most application servers support MicroProfile, the MicroProfile community is the ideal place to reconcile logging support. If all application servers support SLF4J, it should be a no-op here. We just need to educate microservice developers to stay with SLF4J. I think this might be the case.

If not, we can then find out whether we can easily introduce a wrapper to give a seamless look-and-feel for our customers.

Thoughts?

Thanks
Emily


Alex Lewis

unread,
Apr 26, 2019, 7:33:37 AM4/26/19
to microp...@googlegroups.com
I believe people are jumping on the creation of a new API and a new implementation, and that's not what I'm suggesting. What I am suggesting is that a "standard" API is made available to application developers. It could be a new one, but IMO ideally a pre-existing one (Log4j2, SLF4J, etc.) would be selected instead. It is up to the app server / runtime how that API is fulfilled, which could mean providing a specific implementation or, more likely, a thin adaptation layer to plug that API into what they already have. Depending on which API was selected, for some app servers it may be a no-op and hopefully a small adaptation for others. I believe Emily is advocating the same.

To answer the "What innovation? What abstraction?", I don't believe this requires innovation or additional abstraction but that doesn't mean there isn't benefit. I believe some of the benefits are:
  • Not having to bundle a logging implementation (I.e. smaller deployed war)
    • I realise this is only marginal as logging implementations tend to be small (slf4j-api = 40k and slf4j-jdk14 = 8.3k) but it could still be avoided.
  • Guaranteed logging consistency and compatibility with any MP App Server of choice.
  • Centralised management of logging through the App Server.
  • Simply not having to think about which Logging Framework to choose, the App Server picked the right one for you.

Personally, I think logging is an essential part of any microservice, and as such the MicroProfile spec should at least provide guidance/opinion to reduce friction and ideally remove the need for the app developer to consider which framework to use, check how well it works with the app server they've chosen, etc. Logging is called out specifically in 12-factor, which as far as I know is a popular reference for building SaaS, and I think every application, whether it be a microservice or not, includes logging one way or another.

Emily, yes you've understood me correctly, that particular setup had the best results across the App Servers I tested.

I think we're in the position of deciding the items I note below; how do we go about formalising those decisions?
  • Should Microprofile have an opinion on Logging?
    • I get the feeling there is a general leaning towards the answer being yes.
    • How do we draw out a final/conclusive decision?
  • If yes, does it stop at being guidance to use SLF4j+JDK14 so an app has the best chance of an integrated experience?
    • Before a spec exists, it would at least help.
    • Personally, I don't think this would be sufficient as that guidance may become less relevant over time depending on how app servers change in new versions.
  • If we agree on needing a spec, does it select an existing API or is a new one created?
    • I'd advocate for selecting an existing one such as Log4j2 or SLF4J.
    • Are there other APIs that should be considered?
    • If it's a problem politically to select an existing API or because it would significantly hinder spec adoption due to the complications for App Servers, then a new one is required.
    • Would it be worth considering reaching out to the communities for the various Logging frameworks to see if any would be willing to donate their API such that it came under the Microprofile namespace and remained implementation agnostic? Something the implementations could rally around. Just a thought...
  • If a new API is required, how does that happen?
    • Anything new should likely take a very strong guidance from existing APIs.
For now I've avoided the points about configuration, JSON output format, etc. but I think they would follow the decisions above. Maybe a v1.0 spec would tackle the above whilst a subsequent version would tackle these other points?

Cheers


donbo...@gmail.com

unread,
Apr 26, 2019, 7:43:04 AM4/26/19
to Eclipse MicroProfile
+1 on kudos to Alex for that experiment.  The result is what I would have expected for OL, but the comparison to other servers is quite interesting, particularly as it helps to show where multiple servers have some commonality.

Emily, on your comment...

As most application servers support MicroProfile, MicroProfile community is the ideal place to reconcile the logging support. If all application servers support SLF4J, it should be noop here. We just need to educate microservice developers to stay with SLF4J. I think this might be the case.

...I'm not sure that's the takeaway.  SLF4J just maps logging calls to another logging API under the covers.  I think what Alex is showing is that all of the app servers he tried support JDK14 logging (aka JUL).  So naturally, if SLF4J is configured to send log requests to JDK14, it should work.  If we were going to say anything to MP developers, it's that many of the known app servers integrate with JDK14 logging from a config and event handling perspective -- which means that if your app logs to the JDK14 API, your log entries will appear in the server log, and your app will be portable from a logging perspective.
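
In concrete terms, an app following that recommendation would log through the JDK's own API, something like this sketch (class name illustrative):

import java.util.logging.Level;
import java.util.logging.Logger;

public class InventoryService {

  // java.util.logging ships with the JDK, so nothing extra goes into the war;
  // the server decides where, and in what format, the records end up.
  private static final Logger LOG = Logger.getLogger(InventoryService.class.getName());

  public void restock(String sku, int quantity) {
    LOG.log(Level.INFO, "Restocking {0} by {1}", new Object[] { sku, quantity });
  }
}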

also +1 to Ladislav for excellent xkcd reference.  I agree that us delivering another logging API to rule them all would just be another one in the pile for people to choose from.

How about this...?
- Integrated JDK14 logging recommendation.  We could RECOMMEND that, at minimum, all app servers integrate JDK14 logging with their server's logger configuration and log record handling (and by extension anything that forwards to JDK14 logging -- eg. SLF4J or JCL).  App servers are, of course, free to support integrating with other logging apis as well, but apps using JDK14 would be portable.
- JSON log output standard.  We could provide a JSON standard for what the server logs should look like (a sketch of such a record follows below).  App servers are, of course, free to provide other output formats for ops teams to choose as well.
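
Purely to make the idea concrete, a standardized record might look something like the following. Every field name here is hypothetical and not a proposal for the actual schema.

{
  "timestamp": "2019-05-01T10:15:30.123Z",
  "level": "INFO",
  "loggerName": "com.example.InventoryService",
  "thread": "Default Executor-thread-5",
  "message": "Restocking ABC-123 by 4",
  "hostName": "order-service-7d4f9",
  "sequence": 42
}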

Don

Alex Lewis

unread,
Apr 26, 2019, 8:45:33 AM4/26/19
to microp...@googlegroups.com
If anyone is interested, the code for my experiment is here: https://github.com/alewis001/logging-portability-experiment

master - Has a simple war that can be built into a container for the 3 servers (see the docker folder) but with no logging.
log4j2 - branched from master. Modifications to add Log4j2 dependencies, modified app for logging, etc.
slf4j - branched from master. The SLF4J equivalent of the Log4j2 branch. As there was a greater level of integration, I modified the docker files to expose the admin portals so I could reconfigure the logging level, format, etc. Payara didn't like me trying to switch to JSON as it threw an error when trying to save the settings, but I'm sure there is a simple reason for that and I didn't investigate.

If people would like to verify my code/test or try things out for themselves, please do; it's very small. I'd hate for decisions to be based on any mistakes I may have made so I welcome feedback.

I also kept some notes in this google doc. These are just some scratch notes so please don't judge me ;)

Cheers


Gordon Hutchison

unread,
May 1, 2019, 11:38:52 AM5/1/19
to Eclipse MicroProfile


I could support the concept of something like:

"any MicroProfile 3 compliant server must be able to support SLF4J" or
"any MicroProfile 3 compliant server must support {set of existing logging APIs}" or
"any MicroProfile 3 compliant server must be able to support SLF4J and be able to be formatted to support {EFK}".

However, even in the most recent posts I see sentences like (amongst a list of 'reasonable'-sounding options):
"if a new API is required.. new should likely take a very strong guidance from existing APIs."
and "we can easily introduce a wrapper"

Additionally, how do either of these minimisers square with the gap-analysis/unfulfilled requirement of:

"consistent json structure in the log file"


...and in what world would this 'standard JSON format' be anything but negative added value over adopting the de facto standard set by
what Fluent Bit https://fluentbit.io/ (the F in EFK?) supports? (A genuine question.)

To me this is just SO https://xkcd.com/927/




...and I am concerned that all the reasonable-sounding options might enable progress past
this point and become an API equivalent (meant politely) of a 'bait and switch'.

I have seen xkcd 927 so many times before.

Even Flogger is at least claiming some benefits: https://google.github.io/flogger/benefits

I think an illuminating bar to set is, even though customers are going to get this for 'free'...

1 - would they (the customer(s)) be willing to actually pay real money for it over existing solutions if they were not otherwise going to get it?
2 - would the raiser of a 'new' requirement (vendor or customer) be prepared to implement it in an upstream library
in order to put that function into customers' hands if that functionality is rejected for delivery as part of MicroProfile?

If 'yes,yes' then it gets some traction with me, otherwise no.

Gordon

Alex Lewis

unread,
May 2, 2019, 1:48:28 PM5/2/19
to microp...@googlegroups.com
Thank you Gordon for your feedback.

I've only been couching "if a new API is required..." because I've wanted to avoid focusing on that decision right now. IMO and as I've mentioned before, I'd love to avoid introducing a new API for exactly the reasons the XKCD highlights. 

I think it breaks down into:
  1. API: An API a developer can expect to be present in every MicroProfile-compatible runtime. As the logging implementation is provided at the appserver/runtime level, the logging configuration is centralised and the output from both the runtime and the app is consistent.
    • No new FluentBit/D converters and parsers required to get consistency across app logging and runtime logging.
    • Zero-thought required for apps/services on logger choice
      • No investigations into which logging framework to use. 
      • No unfortunate surprises when you find out your choice of logging framework requires additional effort "downstream" when the logging output goes to EFK (or similar).
  2. Output: I think the suggestion is that as you may have appservers/runtimes that each have different logging output format, you'd be able to configure the format to make them consistent.
Addressing #2 first, AFAIK (and please correct me if I have this wrong) FluentBit/FluentD does not have a standard format. Apart from input plugins they have already written for things like systemd logs, etc. it remains agnostic when it comes to application logging. It is up to you to decide how to get the logging "correct" across your infrastructure, whether that's making the format consistent or dealing with the inconsistencies at Elastic Search and/or Kibana. In the former's case, this is where you either pick the same logging architecture for all your "apps/services" or you use input and parser plugins to convert varying formats into one consistent one. This is where the benefit comes in as it may be considered easier to get the format correct at the source rather than doing it in the "middle" with FluentBit/FluentD.

I've seen Flogger doing the rounds and I need to understand it in more detail, but my gut reaction has me asking whether we need the level of optimisation it's attempting to pitch as its benefit. IMO there is such a thing as too much logging, and Flogger is fixing a symptom rather than a cause. Having said that, my opinion comes from a position of ignorance until I have a better understanding of Flogger. The thing to note here is that the choice of Flogger would be that of the runtime/appserver, not the application. So, if OpenLiberty, Wildfly, Payara, etc. decided that Flogger was the best logger to use, they could incorporate it and the app code would remain the same. The Flogger API is different to that of Log4j2/SLF4J, etc., but maybe the impact of adaptation would be negligible.

In order to make some progression, maybe we can focus on #1 to start with?

Without choosing the actual API, can the community agree whether in principle:
  • A single, known logger API that exists on all microprofile runtimes,
  • injected into your App,
  • part of the microprofile BOM,
  • that the app does not have to bundle...
...is a sufficient benefit? As I mentioned in a previous post, these are the same assumptions I can make when using JAX-RS, CDI, etc. In most cases, unless I go looking, I don't know which implementation the runtime is actually using, but I can rely on the functionality.

Assuming the log4j-api was selected as the Logger API, a simple example would look like this (rough example):

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.apache.logging.log4j.Logger;

@Path("ping")
public class UsefulApplication {

  // Injected by the runtime; only the log4j-api types appear in the app code.
  @Inject
  private Logger LOG;

  @GET
  @Produces(MediaType.TEXT_PLAIN)
  public String doSomething() {
    LOG.debug("My bit of Debug");
    return "useful";
  }
}

The app dependencies would only include a single "provided" dependency on an API (log4j-api in the example above); log4j-core could be included as a "test"-scope dependency. Possibly a "microprofile-logger" dependency is required, which itself depends on log4j-api. If in the future the "chosen" API is to change, microprofile-logger could bump its major version, depend on a new API, and the app code would change to adopt that new API. Hopefully there would never be sufficient reason to switch API via a major version change, but nevertheless it's possible the mechanism needs to be there.

If there's benefit to at least the API part (#1), I'm happy to start the process by submitting a proposal.

Cheers


 


Emily Jiang

unread,
May 2, 2019, 6:02:41 PM5/2/19
to Eclipse MicroProfile
Thank you Alex for the detailed explanation!

As for your previous notes, you have:

I think we're in the position of deciding the items I note below, how do go about formalising those decisions?
  • Should Microprofile have an opinion on Logging?
    • I get the feeling there is a general leaning towards the answer being yes.
    • How do we draw out a final/conclusive decision?

As logging is an issue for microservice developers, personally I think having an opinion on logging in MicroProfile is a good thing.
  • If yes, does it stop at being guidance to use SLF4j+JDK14 so an app has the best chance of an integrated experience?
    • Before a spec exists, it would at least help.
    • Personally, I don't think this would be sufficient as that guidance may become less relevant over time depending on how app servers change in new versions.

 I think this approach might work as long as the runtimes supporting MicroProfile take this approach. Based on your experiment, it looks very promising.
    • If we agree on needing a spec, does it select an existing API or is a new one created?
      • I'd advocate for selecting an existing one such as Log4j2 or SLF4J.
      • Are there other APIs that should be considered?
      • If it's a problem politically to select an existing API or because it would significantly hinder spec adoption due to the complications for App Servers, then a new one is required.
      • Would it be worth considering reaching out to the communities for the various Logging frameworks to see if any would be willing to donate their API such that it came under the Microprofile namespace and remained implementation agnostic? Something the implementations could rally around. Just a thought...
    • If a new API is required, how does that happen?
      • Anything new should likely take a very strong guidance from existing APIs.
    I think the above two approaches introduce more complexity or indirection. If we can avoid this, it will be better. Quite a few people have pointed out that we already have too many logging frameworks. We might just pick one and stick to it among us.

    However, feel free to write a proposal in the sandbox listing the possible solutions with pros/cons. My take is that if we can solve the issue without introducing something new, we are better off. If we don't need to create an API, maybe a blog post containing best practices on portable logging is the outcome of this discussion.

    I quite like what Don suggested:

    - Integrated JDK14 logging recommendation.  We could RECOMMEND that, at minimum, all app servers integrate JDK14 logging with their server's logger configuration and log record handling (and by extension anything that forwards to JDK14 logging -- eg. SLF4J or JCL).  App servers are, of course, free to support integrating with other logging apis as well, but apps using JDK14 would be portable.
    - JSON log output standard.  we could provide a JSON standard for what the server logs should look like.  App servers are, of course, free to provide other output formats for ops teams to choose as well.


    Thoughts?

    Thanks
    Emily

    jrper...@gmail.com

    unread,
    May 3, 2019, 11:21:41 AM5/3/19
    to Eclipse MicroProfile
    I'm coming in a bit late on this, but I feel there's something missing from this conversation with regards to the logging frameworks mentioned. We need to differentiate between logging facades and log managers as they have different purposes. A logging facade is just an API that can bind to log managers and send a message to that log manager. A log manager is responsible for taking the message to be logged and routing it somewhere.

    • slf4j-api: This is a logging facade that will work with other log managers
    • logback: This is a log manager that, AFAIK, only works with the slf4j facade
    • log4j-api: This is also an API, however it really only works well with the log4j-core log manager
    • JUL: This is a logging API and log manager that are fairly tightly coupled
    I truly don't think we need another logging facade, and IMO another log manager is a losing battle unless it goes into the JDK itself. We've already got plenty of those, and honestly I've seen several simple facades inside projects themselves. Everyone loves to hate logging and everyone seems to be very opinionated on why theirs is the best :)

    From reading through the thread, it seems that what is wanted is more a standard output. I could see some value in defining a specific, standard JSON output that a container needs to support. To be honest, I would guess that most containers already support the major logging facades and in most cases the hybrid facades with log managers.

    Heiko Rupp

    unread,
    May 9, 2019, 3:28:50 AM5/9/19
    to Eclipse MicroProfile
    +1 on #4


    On Monday, 22 April 2019 at 20:40:36 UTC+2, donbo...@gmail.com wrote:

    Raymond Auge

    unread,
    May 9, 2019, 9:30:28 AM5/9/19
    to Eclipse MicroProfile
    Just to clarify what you can pipe over Logback (which is pretty much anything at this point), as I want to clear up a slight misrepresentation from earlier in the conversation:

    There are integration tests in Apache Felix's logback that test [1]:
    • JBoss Logging 3.3.x
    • Commons Logging 1.2
    • JUL (Java Util Logging)
    • Log4j 1
    • Log4j 2
    • Slf4j
    • OSGi Log Service (including 1.4)
    Those are just the ones I actually tried because they are prevalent (at least in my experience).

    And to be clear; all those front ends are uniformly piped to a _single_ backend with whatever appenders, formats, remote, async, tagging, you name it.

    - Ray




    --
    Raymond Augé (@rotty3000)
    Senior Software Architect Liferay, Inc. (@Liferay)
    Board Member & EEG Co-Chair, OSGi Alliance (@OSGiAlliance)

    Alex Lewis

    unread,
    May 9, 2019, 9:44:20 AM5/9/19
    to microp...@googlegroups.com
    Thanks everyone for the responses, and apologies for not getting back to this sooner. 

    Don, I apologise as I had missed your email from a couple of weeks back. To ensure I understand your point fully, are you suggesting that JUL be the recommended baseline, such that an application can assume/assert that if it uses JUL directly, or a facade backed by JUL (probably via SLF4J), the logging will be integrated with the server?


    I updated my logging investigation repo in Github (https://github.com/alewis001/logging-portability-experiment) with a working Log4j v2 setup backed by SLF4J and a branch for Flogger, as that also supports routing to JUL. In both cases, the logging config and output were integrated with each of the appservers. This would support the case for JUL as a common baseline. I worry that having only a recommendation would represent a risk to future-proofing, but given that JUL appears to have "wide" adoption (in the 3 I've tested ;) maybe that's sufficient momentum and adoption.

    Does the "JUL as a recommendation" need to be more "formal", for want of a better word?

    As Emily suggests, maybe a blog post at least improves awareness of what I've found so far? Are there any other ways to make sure that knowledge is not lost over time, or so it can be made clear to new developers up front rather than them finding out after they encounter a problem and go looking for a solution? Maybe some basic logging and guidance in the code generated by https://start.microprofile.io/?

    ---

    As there also appears to be a general desire for a configurable output format when using JSON, is there a desire for a specification to ensure that configuration is consistent across runtimes? 


    Just thinking out loud... A generalised approach could be to map known MP-specific logging attributes to the desired output name, i.e. <mp-logging-attribute> = <output-attribute-name>. E.g.

    "mp-logging-message" = "message"
    "mp-logging-thread" = "thread" 

    The resulting JSON output would be:

    {
      "message" : "my log message",
      "thread" : "Thread-1"
    }

    The config would be portable across appservers but it would have to pick a common set of "standard" attributes, and would still need to allow for a mechanism to configure values outside of the common set with the tradeoff of portability. Alternatively, the config could always be entirely appserver specific and never portable. The config would simply map appserver specific names to app defined names E.g. "ibm_threadId" = "thread".

    In both cases, the config format and its location would need to be specified. E.g. In java property format, in META-INF of the war.

    Cheers


    Steve Millidge

    unread,
    May 9, 2019, 10:41:23 AM5/9/19
    to Eclipse MicroProfile
    Just as a data point...

    The reason Payara uses JUL as a baseline is:

    1) It is there without adding an additional dependency to manage in the runtime
    2) Not having an additional dependency means it isn't going to clash with whatever a developer wants to package into their application. For example, if we were to package SLF4J version X, I can be sure that a developer would wish we had packaged version Y.

    Steve


    David Lloyd

    unread,
    May 9, 2019, 11:36:33 AM5/9/19
    to microp...@googlegroups.com
    On Thu, May 9, 2019 at 8:44 AM Alex Lewis <alex.l...@gmail.com> wrote:
    >
    > Thanks everyone for the responses, and apologies for not getting back to this sooner.
    >
    > Don, I apologise as I had missed your email from a couple of weeks back. To ensure I understand your point fully, are you suggesting that JUL be the recommended baseline such that an application can assume/assert that if its uses JUL directly or, a facade backed by JUL (probably via SLF4J), the logging will be integrated with the server?
    >
    >
    > I updated my logging investigation repo in Github (https://github.com/alewis001/logging-portability-experiment) with a working Log4j v2 backed by SLF4J and a branch for Flogger as that also supports routing to JUL. In both cases, the logging config and output was integrated with each of the appservers. This would support the case for JUL as a common baseline. I worry that having only a recommendation would represent a risk to future-proofing but given that JUL appears to have "wide" adoption (in the 3 I've tested ;) maybe that's sufficient momentum and adoption.

    Maybe I'm outside the loop a little too much, so forgive this dumb
    question, but is it commonly the practice of MicroProfile
    specifications to mandate a specific implementation for a given API?
    That seems off to me. Not that I'd complain about this personally, as
    the log manager we use in our products is based on JUL, but I can't
    imagine this being a popular decision.

    --
    - DML

    Alex Lewis

    unread,
    May 9, 2019, 12:50:50 PM5/9/19
    to microp...@googlegroups.com
    Hi David,

    AFAIK it does not specify the implementation, but it does specify the API, annotations, etc., and I think it has tended to invent something new. Logging is a bit strange as there are a lot of pre-existing APIs and pluggable implementations, and nobody wants a new logging API. In some cases both the API and impl are combined (e.g. Log4j v1) but, since SLF4J, many have adopted a split between an API/facade and an implementation (SLF4J, Log4j v2, Flogger, etc.). Sadly, we don't have a single logging API developers can rely on with the implementation being a choice of, and provided by, the runtime; this was one of the prompts for me to start this thread.

    Since there are many APIs, each with their own pluggable implementations, and lots of app code using each of those APIs, I think there is a reluctance to try to pick a single pre-existing logging API as the chosen logging API for MicroProfile.

    However, it appears that appservers/runtimes may have coincidentally provided support for JUL logging. As such, Don's suggestion is that MicroProfile recommends, but does not mandate, the JUL API/impl as a common baseline. That way MicroProfile does not have to select a single logging API, as in most cases those APIs have an implementation that can log to JUL, one way or another. (Eventually) logging to JUL provides server-integrated logging and configuration as well as a level of app portability.

    Don - Please correct me if I've misrepresented anything.

    Cheers


    Don Bourne

    unread,
    May 10, 2019, 8:32:02 AM5/10/19
    to Eclipse MicroProfile

    Alex, to your question...

    > To ensure I understand your point fully, are you suggesting that JUL be the recommended baseline such that an application can assume/assert that if its uses JUL directly or, a facade backed by JUL (probably via SLF4J), the logging will be integrated with the server? 

    Yes, and that the application would be portable to any MP-compliant server without changes.

    My suggestion of JUL is mostly through a process of elimination.  We seem to agree that we don't want to create yet another logging API.  If we're not going to create a logging API, we can either recommend the one in the JDK (JUL) or one of the alternatives (slf4j, logback, ...).  Echoing Steve Millidge's point -- including any of the "alternatives" in MP would get in the way of developers that want/need to be on a different version of that alternative logging API.  So, while JUL isn't every developer's #1 choice, it at least serves the need without getting in the way of app developers that want to bundle some other logging API (or that use open source packages that bundle some other logging API).

    In the end, I expect only app development teams that prioritize portability will pay any attention to any logger API recommendation.

    > Does the "JUL as a recommendation" need to be more "formal", for want of a better word?

    I think we'd need a requirement for MP servers to support integration with JUL, and a recommendation for app developers to use JUL.

    ---

    Regarding the JSON format - I've seen others (eg. https://www.elastic.co/blog/introducing-the-elastic-common-schema) trying to standardize the fields for log or other records.  By standardizing on a JSON schema for our logs we could at least make MP-compliant app servers have consistency.  That wouldn't solve the whole problem for ops teams that need to aggregate logs from all kinds of servers, but it would help.  In practice I think ops teams are faced with having to re-map field names (eg. through logstash) if they want consistency.  That's painful, particularly when you're running in a public cloud that doesn't let you add your own logstash filters.  So having more consistency would help -- at least you could have MP dashboards to view logs that are MP-compliant.  

    So...I'm in favor of us defining, or adopting, a simple JSON format for logs.  App servers could provide just that format, or multiple formats, as long as they included the MP schema as an option.

    Don

    Werner Keil

    unread,
    May 10, 2019, 3:13:28 PM5/10/19
    to Eclipse MicroProfile
    Actually I would not use JUL any more, unless this potential feature must still run on Java SE 8 when it gets realised.

    https://www.baeldung.com/java-9-logging-api offers a nice overview of the new logging abstraction in Java 9.
    If Java 9 or higher were an acceptable constraint, why not use the new Java logging API?
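
    For reference, using the Java 9 platform logging API looks roughly like the sketch below (class name illustrative); by default it delegates to JUL unless a LoggerFinder implementation is provided.

    public class GreetingResource {

      // System.Logger has been part of the JDK since Java 9; without a
      // custom LoggerFinder, records are routed to java.util.logging.
      private static final System.Logger LOG = System.getLogger(GreetingResource.class.getName());

      public String greet(String name) {
        LOG.log(System.Logger.Level.INFO, "Greeting {0}", name);
        return "Hello " + name;
      }
    }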

    Werner

    Alex Lewis

    unread,
    May 11, 2019, 4:39:16 PM5/11/19
    to microp...@googlegroups.com
    Hi,

    I would agree that JUL is not ideal from an API point of view, but I think most applications would adopt Log4j, SLF4J or one of the other facades/APIs that has a JUL-based backend on offer, rather than going to JUL directly. I think the JUL aspect of this is really just a common backend that applications could assume is present and wired up to the appserver logging/config.

    The description for the Java 9 API in the JEP (http://openjdk.java.net/jeps/264) describes it as an API to route platform logging to the logging implementation of choice, rather than being a general purpose app logging framework (not being general purpose is listed in the Non-Goals). By default, it uses JUL itself to log. Having said that, it is also strange (IMO) to make the System.Logger API available for client code to use as that would seem counter to what the JEP describes.

    Cheers


    David Lloyd

    unread,
    May 13, 2019, 4:25:32 PM5/13/19
    to microp...@googlegroups.com
    Replies inline...

    On Sat, May 11, 2019 at 3:39 PM Alex Lewis <alex.l...@gmail.com> wrote:
    >
    > Hi,
    >
    > I would agree that JUL is not ideal from an API point of view, but IMO I think most applications would adopt Log4j, SLF4J or one of the other facades/APIs that has a JUL based backend on offer, rather than going to JUL directly. I think the JUL aspect of this is really just a common backend that applications could assume is present and wired up to the appserver logging/config.

    I don't think there needs to be a common backend. My point above,
    which I think was misunderstood, was that MP shouldn't be specifying
    implementations, full stop. JUL is an implementation.

    > The description for the Java 9 API in the JEP (http://openjdk.java.net/jeps/264) describes it as an API to route platform logging to the logging implementation of choice, rather than being a general purpose app logging framework (not being general purpose is listed in the Non-Goals). By default, it uses JUL itself to log. Having said that, it is also strange (IMO) to make the System.Logger API available for client code to use as that would seem counter to what the JEP describes.

    I think using the Java system logger as a specified API would be a bad
    idea; the API is pretty limited and I don't think anyone would choose
    it given any other option. I don't think there is really any rational
    approach other than either specifying an existing API facade (which is
    to say, exactly SLF4J) or simply dropping the issue and letting users
    do what they want (i.e. deliberately leave it unspecified). Though I
    suppose there's a middle ground where it could be specified like this:
    "The MicroProfile environment shall provide a logging infrastructure
    such that common category-and-level-oriented logging APIs will behave
    as expected". But it doesn't seem useful to bother doing any of these
    things; what problem are we trying to solve anyway?

    --
    - DML

    Raymond Auge

    unread,
    May 13, 2019, 10:54:47 PM5/13/19
    to Eclipse MicroProfile
    I fully agree with David.

    - Ray


    Emily Jiang

    unread,
    May 14, 2019, 4:48:35 AM5/14/19
    to Eclipse MicroProfile

    Hi David,
    Thanks for sharing your view! I think this thread is quite long. Let me summarise a bit to clarify the issue Alex brought up.

    This whole discussion is around the following issue:

    A developer has to port their microservice from app server X to app server Y. Since different app servers support different flavours of logging, the microservice's logging is suddenly broken after the migration.

    This is down to one issue: logging is not really portable among app servers. Since MicroProfile is there to help with portability and interoperability, this is why Alex came here to seek advice.

    I think most people are against the idea of creating new logging APIs in MicroProfile since there are too many already. I think it would be beneficial if a blog post were written about the portability aspect of logging as part of the discussion and experiment (see Alex's https://github.com/alewis001/logging-portability-experiment).

    I understand every developer might have different tastes in logging. However, if someone needs to consider portability, it might be good to adopt the best practice (which is to use JUL at the moment, since nearly all app servers support that).

    Let's try to find out how to solve the issue together instead of expressing our own logging tastes.

    Alex, can you create a doc under sandbox to list all options with pros and cons? We can then contribute towards the idea.

    Thanks
    Emily



    David Lloyd

    unread,
    May 14, 2019, 11:27:50 AM5/14/19
    to microp...@googlegroups.com
    On Tue, May 14, 2019 at 3:48 AM 'Emily Jiang' via Eclipse MicroProfile
    <microp...@googlegroups.com> wrote:
    >
    >
    > Hi David,
    > Thanks for sharing your view! I think this thread is quite long. Let me summarise a bit to clarify the issue Alex brought up.
    >
    > This whole discussion is around the following issue:
    >
    > One developer has to port his microservice from app server X to app server Y. Because different app servers support different flavors of logging, the microservice's logging suddenly breaks after the migration.
    >
    > This is due to one issue: logging is not portable among app servers. Since MicroProfile is meant to help with portability and interoperability, Alex came here to seek advice.
    >
    > I think most people are against the idea of creating new logging APIs in MicroProfile since there are too many already. I think it would be beneficial if a blog were written about the portability aspect of logging as part of the discussion and experiment (see Alex's https://github.com/alewis001/logging-portability-experiment).
    >
    > I understand every developer might have different tastes in logging. However, if someone needs to consider portability, it might be good to adopt the current best practice (which is to use JUL at the moment, since nearly all app servers support that).

    I understand your point, but using JUL is definitely *not* a best
    practice in any environment. The industry-accepted best practice for
    logging clients is probably to use SLF4J. There is no best practice
    for applications that need custom handlers/formatting, only bad
    compromises.

    I don't think it's possible to create a standard for backends that
    won't be problematic; few containers will probably support creating
    and registering a JUL Handler for example. And they can't, not
    without dropping whatever they have and switching to JUL as a backend.
    While this isn't a problem for Red Hat per se (we've been using JBoss
    LogManager for many years, which is based on JUL), it's definitely
    going to be a problem for other implementors who have invested in
    log4j2 or LogBack for example.

    It is critical that, before this discussion continues any further,
    it is divided into two separate topics: logging client API, and
    logging backend and SPI. The former case covers applications that
    produce log messages, and is much, much more straightforward. The
    latter case covers applications that need to create, configure, or use
    specialized features of a logging backend (typically around filtering,
    formatting, or handlers) and is a lot more involved - and could be
    addressed completely separately.

    > Let's try to find out how to solve the issue together instead of expressing our own logging taste.

    I don't think it's a question of taste.

    > Alex, can you create a doc under sandbox to list all options with pros and cons? We can then contribute towards the idea.

    There are no options yet; the problem is not even clearly defined
    (which is where a lot of this confusion is coming from). The first
    step is clearly defining the problem. And the first part of defining
    the problem is to recognize that there are (at least) two separate
    problems being rolled into one: the API part and the backend part.

    --
    - DML

    Alex Lewis

    unread,
    May 14, 2019, 12:56:40 PM5/14/19
    to microp...@googlegroups.com
    Apologies if my initial post did not state the problem clearly enough. I did try to focus the initial post on just the API with the intention of moving on to Config afterwards, but the responses to the thread quickly took it in that direction regardless.

    To restate the problem as I see it... As a developer, I cannot assume/expect a Logging API that is available on all appservers/runtimes, and consequently I cannot expect integrated logging where the log output is consistent with, and managed by, the appserver/runtime. As all microservices at some point need to consider logging, it feels like an area that microprofile could address. 

    As you quite rightly point out, the second aspect once you have consistent logging is the output configuration, to make it easier to integrate into external systems.

    A couple of weeks ago I made the same case as you do for two separate aspects; however, maybe I did not do so clearly enough.

    Where this thread has become complicated is that selecting a single API that all appservers must support appears to be undesirable or certainly hasn't gained support. As such, I've attempted to find whether there is an option that has a good chance of meeting the requirements of my question. That appears to be any API/Facade that can be wired up to a JUL backend. So, although using the JUL API directly is not best-practice, it appears to be the most widely supported backend amongst the 3 app servers I tested; not an exhaustive test, I admit. 

    I have previously made the case that guidance eventually becomes lost, out-of-date or is typically only found after running into a problem. However, rather than giving up, I've tried to follow the direction of the thread.

    Cheers


    Werner Keil

    unread,
    May 14, 2019, 4:08:28 PM5/14/19
    to Eclipse MicroProfile
    Hi Emily/all,

    I think that's a good idea for releases that still have Java SE 8 as the minimum requirement (which I assume will be the case for a while, especially with the Java EE 8 JSRs as the foundation ;-)

    It was noted that the mission statement of the System.Logger JEP (JEP 264) seems different from how it actually ended up in Java 9, and blogs like https://www.baeldung.com/java-9-logging-api show that the JDK can be configured to work with log frameworks other than JUL where necessary. So going with JUL now and using what Java 9+ has to offer in a future version sounds reasonable.
    In fact, if you use a multi-release JAR you could even do both in a single implementation.
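    For anyone who hasn't looked at that part of Java 9: the plug-in point is java.lang.System.LoggerFinder, registered via META-INF/services/java.lang.System$LoggerFinder. A rough, illustrative sketch of routing it to SLF4J (not production code) could look like this:

    import java.lang.System.Logger;
    import java.lang.System.LoggerFinder;
    import java.text.MessageFormat;
    import java.util.ResourceBundle;

    // Illustrative finder that routes System.Logger calls to SLF4J.
    public class Slf4jSystemLoggerFinder extends LoggerFinder {

        @Override
        public Logger getLogger(String name, Module module) {
            org.slf4j.Logger delegate = org.slf4j.LoggerFactory.getLogger(name);
            return new Logger() {
                @Override
                public String getName() {
                    return name;
                }

                @Override
                public boolean isLoggable(Level level) {
                    switch (level) {
                        case TRACE:   return delegate.isTraceEnabled();
                        case DEBUG:   return delegate.isDebugEnabled();
                        case INFO:    return delegate.isInfoEnabled();
                        case WARNING: return delegate.isWarnEnabled();
                        case ERROR:   return delegate.isErrorEnabled();
                        default:      return level != Level.OFF;
                    }
                }

                @Override
                public void log(Level level, ResourceBundle bundle, String msg, Throwable thrown) {
                    if (isLoggable(level)) {
                        delegate.info(msg, thrown);   // a real adapter would dispatch on level
                    }
                }

                @Override
                public void log(Level level, ResourceBundle bundle, String format, Object... params) {
                    if (isLoggable(level)) {
                        delegate.info(params == null ? format : MessageFormat.format(format, params));
                    }
                }
            };
        }
    }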

    Werner

    David Lloyd

    unread,
    May 14, 2019, 4:16:35 PM5/14/19
    to microp...@googlegroups.com
    Werner,

    What exactly are you proposing? Are you proposing adding some global
    requirement that JUL be available to MP applications? Because as part
    of the JDK, it can't *not* be available, so specifying this is quite
    redundant.



    --
    - DML

    Werner Keil

    unread,
    May 15, 2019, 3:24:27 PM5/15/19
    to Eclipse MicroProfile
    Well, sticking to JUL was pretty much what Emily suggested.
    And I think it's better than creating yet another wrapper that would only compete with SLF4J, or with what the new System.Logger interface in Java 9+ offers on a slightly smaller scale.

    Erik Mattheis

    unread,
    May 16, 2019, 8:21:39 AM5/16/19
    to Eclipse MicroProfile
    I think if MicroProfile takes a stance here, it should standardize a new logging facade interface. Standardizing JUL doesn’t provide any benefit. The Java community already overwhelmingly prefers alternatives and, as David pointed out, JUL is already portable so specifying its availability is somewhat pointless.

    Specifying a new interface for the sake of portability does not mean we should reinvent anything about a logging facade. In the same way that MicroProfile Metrics borrows heavily from Dropwizard Metrics, I think MicroProfile Logging should take the most commonly used subset of the SLF4J Logger interface and define that as the MicroProfile logger interface that is guaranteed to be portable across implementations.

    Furthermore, we should specify a portable factory to obtain logger instances and a CDI mechanism for injecting them. I think out of the box support for @Inject Logger is the biggest win for developers. We could also specify a non-portable way for injecting the implementation’s native logger - that way a developer can choose to write portable code or platform-dependent code and still leverage the microprofile factory or CDI injection.
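    To make that concrete, something like the following is what I have in mind -- every type and package name here is illustrative, nothing exists yet:

    import javax.enterprise.context.Dependent;
    import javax.enterprise.inject.Produces;
    import javax.enterprise.inject.spi.InjectionPoint;

    // A minimal portable facade plus the CDI producer an implementation could
    // supply so that plain "@Inject Logger" works, with the category defaulting
    // to the class that declares the injection point.
    interface Logger {
        boolean isDebugEnabled();
        void debug(String message, Object... arguments);
        void info(String message, Object... arguments);
        void warn(String message, Object... arguments);
        void error(String message, Throwable cause);
    }

    interface LoggerFactory {
        static Logger getLogger(Class<?> category) {
            // A real spec implementation would locate the runtime's backend here
            // (e.g. via ServiceLoader); the sketch leaves it unresolved on purpose.
            throw new UnsupportedOperationException("illustrative sketch only");
        }
    }

    class LoggerProducer {

        @Produces
        @Dependent
        Logger produceLogger(InjectionPoint injectionPoint) {
            return LoggerFactory.getLogger(injectionPoint.getMember().getDeclaringClass());
        }
    }

    Application code would then just declare `@Inject Logger log;` and stay portable, with the non-portable injection of the implementation's native logger sitting alongside it.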


    Erik

    Alex Lewis

    unread,
    May 16, 2019, 9:44:08 AM5/16/19
    to microp...@googlegroups.com
    Thanks Erik.

    The point about standardising on JUL was that although JUL is in every JVM, that does not mean the appserver/runtime is actually using it or supports hosted apps using it (for an integrated experience). This also applies to the management of logging levels, modules/packages, etc., for which each runtime exposes its own mechanism. AFAIK, runtimes such as OL, Wildfly, etc. have made a specific choice to support app logging one way or another via JUL. Maybe I'm wrong about that, though?

    Although selecting an existing API would be more ideal in avoiding "yet another Logging API" (as pointed out by responses quoting XKCD 927), the question is whether those existing APIs are right for an MP environment. There is a risk that any existing API is actually more than would be necessary for an mp-logging API, and parts of those APIs would either go unimplemented or be implemented for no good reason apart from providing a complete implementation. Alternatively, this is where using Log4j, SLF4J or any other API that can use JUL as a backend, which is in turn supported by the runtime, began to be attractive.

    +1 on your final point. I suggested something along the same lines in my initial post and a follow up where I expanded a little. I like the additional point about being able to inject the platform specific chosen logger and choose to forfeit portability or use the mp-logging API and retain portability.

    I think I have to bite the bullet and start the doc in the sandbox as Emily requested and let that take its course.

    Cheers


    David Lloyd

    unread,
    May 16, 2019, 9:53:40 AM5/16/19
    to microp...@googlegroups.com
    What is an "integrated experience" though? It seems risky to try and
    specify something subjective like that. The JDK (at least until 8)
    uses JUL all over the place so it's unlikely you'll find an
    environment where that causes any sort of problem. And one major risk
    is that the JDK team might decide to deprecate JUL - it is almost
    universally unpopular after all - and in this case the spec will be
    left hanging on to not just obsolete but also deprecated technology.

    The problem I have with logger injection is that the typical way of
    doing this involves making the log category (i.e. "logger name" in
    some implementations) be equal to the class name. In most cases it's
    really better to name the logger categories after some kind of high
    level process though, because *users* aren't usually trying to get
    logs about a class, they're trying to troubleshoot a problem (e.g.
    "Why is credit card processing not working? I'll turn on DEBUG
    logging on the credit card processing log category and find out"). So
    unless you're very meticulous about arranging your processes by Java
    package, you might end up either having to track down the classes
    implementing your process or else risk having to turn on a massive
    spam attack to solve a single problem.

    So the way this impacts injection is, you would need a way for the
    injection site to specify its category. But I fail to see how this:

    @Inject @Category("com.mycompany.store.credit-card")
    Logger logger;

    is any better than this:

    static final Logger logger =
    Logger.getLogger("com.mycompany.store.credit-card");

    The latter having less overhead by any measure (maybe substantially
    less) while being about the same amount of keystrokes.

    Even if you don't buy into my log category philosophy, you could still
    compare it like this:

    @Inject
    Logger logger;

    versus:

    static final Logger logger = Logger.getLogger(); // use calling class

    Seems like a wash to me.



    --
    - DML

    Erik Mattheis

    unread,
    May 16, 2019, 10:05:33 AM5/16/19
    to Eclipse MicroProfile
    On Thursday, May 16, 2019 at 9:44:08 AM UTC-4, Alex Lewis wrote:

    The point about standardising on JUL was that although JUL is in every JVM, that does not mean the appserver/runtime is actually using it or supports hosted apps using it (for an integrated experience). This also applies to the management of logging levels, modules/packages, etc., for which each runtime exposes its own mechanism. AFAIK, runtimes such as OL, Wildfly, etc. have made a specific choice to support app logging one way or another via JUL. Maybe I'm wrong about that, though?

    Okay, that makes a little more sense, but I think that's the general problem that JEP 264 tried to address in Java 9. It provides a way to plug in a logging provider instead of mandating the use of a logging framework. I think MP should take the same approach when it comes to standardization. Implementations have already made their logging choices and there's no reason to shoehorn JUL in there since nobody is clamoring for that API from a logging standpoint.

    If we specify a logging interface we can take a stance as far as log levels and universal features go and let the implementations fill in the gaps. Better to start off light and expand the feature set as consensus emerges on desired features.

    Although selecting an existing API would be more ideal in avoiding "yet another Logging API" (as pointed out by responses quoting XKCD 927), the question is whether those existing APIs are right for an MP environment. There is a risk that any existing API is actually more than would be necessary for an mp-logging API, and parts of those APIs would either go unimplemented or be implemented for no good reason apart from providing a complete implementation. Alternatively, this is where using Log4j, SLF4J or any other API that can use JUL as a backend, which is in turn supported by the runtime, began to be attractive.

    I agree with the concerns about adopting an existing API vs defining a new one. If we said "SLF4J is standard" then we have to track that API or stick with a version that becomes stale, and there are some pretty opinionated parts of that API that we don't necessarily want to mandate for MP. I think the basic paradigms of conditional logging based on levels, hierarchical logging categories, and parameterizable log messages are fairly universal. We could extract them out into a lighter interface fairly easily. We could also take a JPA-like approach where the interface itself exposes the delegate used by the implementation for use cases where the platform implementation is needed by application code.
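    For example, mirroring JPA's EntityManager.unwrap (the method name is just an example):

    // Hypothetical escape hatch on the portable interface: portable code never
    // calls it, while platform-dependent code can reach the native logger when
    // it has to.
    public interface Logger {
        boolean isDebugEnabled();
        void debug(String message, Object... arguments);
        // ... other level methods elided ...

        <T> T unwrap(Class<T> nativeLoggerType);
    }

    Code that accepts the coupling could then do `org.slf4j.Logger slf4j = log.unwrap(org.slf4j.Logger.class);`.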

    +1 on your final point. I suggested something along the same lines in my initial post and a follow up where I expanded a little. I like the additional point about being able to inject the platform specific chosen logger and choose to forfeit portability or use the mp-logging API and retain portability.

    I think I have to bite the bullet and start the doc in the sandbox as Emily requested and let that take its course.

    I'll be happy to pitch in when I can!

    -- 
    Erik

    Erik Mattheis

    unread,
    May 16, 2019, 10:27:25 AM5/16/19
    to Eclipse MicroProfile
    Definitely a wash in some cases, but if we have org.eclipse.microprofile.logging.LoggerFactory then the pattern everyone already uses can be made portable. If we adopt a subset of SLF4J, then most of the code out there using

    static final Logger logger = LoggerFactory.getLogger(MyClass.class)

    becomes portable just by changing the imports from org.slf4j to org.eclipse.microprofile.logging.

    Likewise, anyone who has written their own CDI producer for Loggers can throw it out and rely on MP.

    -- 
    Erik

    Alex Lewis

    unread,
    Jul 12, 2019, 1:36:17 PM7/12/19
    to Eclipse MicroProfile
    Hi,

    It is somewhat embarrassing how long it has taken me to get to this early stage but work and home life seem to have a knack of stealing time. Anyway, I have attempted to create a specification in my fork of the microprofile-sandbox, which can be found here: https://github.com/alewis001/microprofile-sandbox/tree/alewis001/logging-proposal/proposals/logging

    In summary, the specification:
    • Nominates a MicroProfile specific extension to Flogger as the selected Logging API/Framework, which can be found here: https://github.com/alewis001/microprofile-logging
      • As you'll see, the MP part is effectively empty. This at least creates the skeleton to which MP specific functionality can be added. The "withTraceId" method was put there as an example of adding MP specific methods that extend from the core Flogger general-purpose API.
      • Are there any ideas for MP specific logging functionality? 
      • Would having Span Context data from open-tracing attached to the logging statements be useful?
      • Could this be done automatically by the runtime if it can detect that the mp-logging lib is in use? Is that a problem already solved? 
      • Any other integrations that spring to mind?
      • Adding integrations now is not a necessity, and I wouldn't see a lack of integrations representing a failure of the API; having a placeholder would hopefully be sufficient.
    • Nominates JUL as the default backend when using the above is not an option.
    • Opening paragraph for Log JSON Structure configuration but no more than that.
      • I think this area piqued more interest in the thread and hopefully something I can begin to explore in more depth.
    The specification probably says more than it may necessarily need to when it can be quite easily summarised as I've done above; however, as it is a bit of an awkward topic I felt it was beneficial to try to articulate the problems as I see them in order to provide context to the spec.

    I have tried to capture some of the points that led to my suggested decisions in a "decision-process.adoc" file as I'm sure there may be some controversy. I realise this also changes my stance on Flogger that I mentioned in a previous post on this thread; I did say that my original opinion was based on ignorance :) Having a MicroProfile specific extension to Flogger felt like a balance between picking an existing API rather than inventing a new one, and providing somewhere for MicroProfile specific extensions to be added.
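    For anyone who hasn't used Flogger, the fluent style the extension builds on looks like this (standard Flogger below; the MicroProfile extension in my repo would add MP-specific steps, such as the withTraceId example, into the same chain):

    import com.google.common.flogger.FluentLogger;

    // Standard Flogger usage; class and method names are just an example.
    public class PaymentResource {

        private static final FluentLogger logger = FluentLogger.forEnclosingClass();

        public void charge(String orderId, Exception lastFailure) {
            logger.atInfo()
                  .withCause(lastFailure)
                  .log("Charging card for order %s", orderId);
        }
    }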

    The mp-logging project I've linked to could probably be split further to really create a clean split between interfaces and implementation but before I went any further I wanted to see what the response was like. 

    I now realise that this also sits in an odd ground between specification and implementation. As MP Specs are meant to be specs with no implementation, and the recent discussion regarding the implementations proving the spec rather than the reverse, I suspect this proposal at least will help explore those points. 

    I'd appreciate feedback, good or bad!

    @donbourne: I see work has started on configurable log output in https://github.com/OpenLiberty/open-liberty/issues/6079 and you've talked about the config side of things in this thread. Have you explored the idea any further?

    Cheers

    David Lloyd

    unread,
    Jul 12, 2019, 2:43:25 PM7/12/19
    to microp...@googlegroups.com
    As before, I'm curious what problem is being solved: even more so
    because as far as I can tell, the proposal doesn't even solve any of
    the problems brought up earlier on the thread.

    It seems like your problem statements (for example "This specification
    provides an application developer the information they need for any
    compliant runtime") are not very clear and don't really identify the
    problem being solved. It seems foregone that you think that the
    solution to whatever the problem statement is, is to require a
    specific logging backend, but there seems to be no connection from
    these general problem statements to the conclusion about requiring a
    common logging implementation. What problem is solved by requiring
    JUL on the backend? Is it something to do with configuration, or
    maybe output format? You mentioned portability, but that's very
    general, and it's not clear what kind of portability problem is being
    solved. Can you go into detail on this?

    What problem is solved by specifying a logging API? Why specify any
    API at all when users will usually already have a preferred API? How
    would you deal with the inevitable backlash from users whose log API
    preference is not the same as yours?

    Alex Lewis

    unread,
    Jul 15, 2019, 1:02:16 PM7/15/19
    to microp...@googlegroups.com
    Hi David,

    I'm not sure what I've missed in the wording of the spec vs what was covered in this thread, which I believe to be:
    • The possibility of a standard logging API.
    • The possibility of a standard logging backend when the choice of API has already been made.
    • The configuration of log JSON output structure in order to standardise output structure.
    I thought my spec dealt with (or at least attempted to deal with) the first two points. As I mentioned in my previous post, I have not had an opportunity to explore the configuration aspect in more detail and the spec reflects that.

    Having said that, I appreciate you working through this with me and I'll try to explain my points differently as I'm clearly not conveying them sufficiently. Otherwise, maybe there is some gap in my understanding which, although somewhat embarrassing, may draw this to a conclusion.

    Let's say I've developed an application that relies on MicroProfile 2 specifications, as well as some JEE 8 ones such as JAX-RS, CDI, etc. My application has logging. I believe it's beneficial that the log output of my application is combined with the output of the runtime, so my application uses a log framework that the runtime supports, i.e. one where the runtime states that if I use a certain API/framework, the application logging is routed through the runtime. Routing the application logging through the runtime also means that log configuration for both the runtime and the application is managed through a management framework provided by the runtime.

    Personally, I think the above would describe a very typical application; whether it's a monolith, a microservice or something in between.

    As the application is using JEE and MicroProfile specifications, I know I can deploy/run my application on any runtime that supports MicroProfile 2 and JEE 8; clearly the benefit of working against specs. That assumption no longer holds due to the choice of logging framework. The integration of Application and Runtime logging creates a coupling. The framework that worked on the current runtime might also work on another but critically I don't have any guarantee.

    Portability is broken.

    As I no longer have any guarantee, I have to experiment to see what works or doesn't. Logging could cause my app to fail for a number of reasons:
    • My app may not bundle all the necessary libs; the previous runtime provided them on the classpath
    • Previously bundled libs now conflict with that of the runtime; E.g. multiple SLF4J bindings.
    • It deploys and runs but it's no longer consistent with the runtime and I can no longer use the runtime to configure the application logging output. I.e. there's log output but it's not integrated with the runtime.
    In all these scenarios, time and effort is spent to understand what is going wrong and find a solution. It also means in order to use the new runtime the app must migrate and leave support for the current one behind. The fix might be small but until I investigate and understand the new platform in greater detail, I don't know that. At worst, it could mean I need to change the logging framework the app uses if I wish to make use of the new runtime.

    That impact on portability is what I feel is an unnecessary risk and the investigation and work required to fix the logging should be unnecessary effort; in my opinion.

    As mentioned earlier in this thread, I performed a simple experiment. In each case I was able to make the application portable with the limited runtimes I chose, once I found a magic combination. There are no guarantees that will always be the case but we can ignore that for now. I've now added Quarkus to each of the branches, and in each case I had to modify the pom.xml to get integrated logging to work. See the <logger>-quarkus branches on https://github.com/alewis001/logging-portability-experiment. It's those changes, which are admittedly small, and the time spent working them out that I believe could be avoided. What I would also point out is that Flogger worked without change; the only change was to the pom.xml to include Quarkus.

    To your point about selecting a back-end... Quarkus doesn't natively support log4j v2 according to its logging documentation: https://quarkus.io/guides/logging-guide; therefore, a bridge is required. Interestingly, a bridge to JUL didn't work in my test above although, given JUL is supported, I thought it would; using the SLF4J adapter did work. The idea of a selected backend would be a portability guarantee for those applications already using a Logging API such as Log4j-v2. As you'll see in my experiment above on the pre-Quarkus branches, to get a portable application each Logging API was backed by a bridge to JUL. From a spec point-of-view, as using JUL as the backend appears to have the best chance of success along with having limited (or no) impact on the existing runtimes, I thought JUL would make a good choice.

    Hopefully this clarifies the position I'm coming from?

    Cheers


    Alex Lewis

    unread,
    Jul 15, 2019, 1:25:53 PM7/15/19
    to microp...@googlegroups.com
    Apologies, I did not address your final points...

    Specifying an API makes a decision for new apps on how to perform logging, no time or investigation required. The same benefits you get from working against any spec.

    Specifying a backend gives existing applications the same assurances but should the API evolve to provide additional benefit, these apps would obviously not get those benefits. It's up to them to decide whether it matters to them or not.

    Hopefully existing applications would see benefit in migrating to the API above but they're not forced to.

    If the portability and support aspects I've mentioned are not a concern, there's no need to change; otherwise, this spec would provide a path to follow.

    Cheers

    Don Bourne

    unread,
    Jul 16, 2019, 11:26:37 AM7/16/19
    to Eclipse MicroProfile
    > @donbourne: I see work has started on configurable log output in https://github.com/OpenLiberty/open-liberty/issues/6079 and you've talked about the config side of things in this thread. Have you explored the idea any further?

    The main concern we're focused on addressing in https://github.com/OpenLiberty/open-liberty/issues/6079 is that log collectors, like fluentd / filebeat, have their own preferences for what to label different parts of the data being sent to the log datastore (eg. elasticsearch).  So 6079 gives some flexibility to customize the JSON attribute names in case you decide you need to change them, for example to be able to build dashboards where all the logs from a variety of runtimes are in a "log" column.  OpenLiberty already has JSON formatted logs, and I know other app servers also have ability to generate JSON output for logs.

    I think the thing worth standardizing would be 1) the set of fields that should be used to represent a log entry, 2) the default names to use for those fields, and 3) the format to use for the values, to at least try to have consistency in the output of various app servers.
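    For example, a minimal standardized entry might look like the following -- the field names here are purely illustrative, just to show the kind of thing 1) and 2) would pin down:

    {
        "timestamp": "2019-07-16T15:41:40.039Z",
        "level": "INFO",
        "loggerName": "com.example.checkout.CheckoutService",
        "thread": "default-executor-43",
        "host": "checkout-7d4b9c",
        "message": "Processing order 12345"
    }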

    Beyond that, I haven't had much time to try to write anything formally.

    Heiko Rupp

    unread,
    Jul 25, 2019, 2:55:17 PM7/25/19
    to Eclipse MicroProfile
    [ I did not plow through that huge thread, so perhaps this was brought up before ]

    Logging should also allow one to, e.g., automatically add a request-id or trace-id if the log happens in a flow that has such an id. This allows for better correlation of logs with other observability signals.

    Alex Lewis

    unread,
    Oct 16, 2019, 6:25:12 AM10/16/19
    to Eclipse MicroProfile
    Hi,

    I have since gone back to the drawing board somewhat, in order to define something closer to an API-only approach, like other MicroProfile specs, and more specifically to cater for Structured Logging, Span Logging and automatic Span ID injection into all logging events. The API is based heavily on SLF4J as that appeared to be the one that had the highest use, based on some basic research. When I say "API-only", the code does provide some implementation classes that contain some core logic inside base classes that hopefully aids implementation by a runtime (i.e. bridging into whatever supported Logging framework the runtime wishes to support with mp-logging).

    The API attempts to:

    - Make it easy for Applications to extend Log Event Data with custom/app-specific data.
    - Make it easy to add Span Logging to an Application; whether that's through an explicit call to a "span()" method or implicitly via the typical "debug", "warn", etc.
    - Automatically include the Active Span ID in Log Events to help with log correlation in log output and on Log Aggregation services such as ELK/EFK and Loki.

    The Specification hopefully puts little onus on the runtime but does have some specific points:
    - To provide the API on the classpath to applications so they do not have to bundle it.
    - When logging in JSON format, include the Log Event data as JSON against a specifically named attribute.
    - To use JSON-B when serializing the Log Event data.
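    To give a feel for the intent, here is a self-contained sketch of the pattern the API is built around -- all names are made up and the real API in the PR differs in detail: the event is built inside a lambda that is only invoked when the level is enabled, and the span ID is attached by the implementation.

    import java.util.function.Supplier;

    // Illustrative only; not the sandbox API.
    class LogEvent {
        String message;
        String spanId;   // attached automatically below
    }

    class SketchLogger {
        enum Level { DEBUG, INFO, WARN, ERROR }

        private final Level threshold;

        SketchLogger(Level threshold) {
            this.threshold = threshold;
        }

        void debug(Supplier<LogEvent> eventSupplier) {
            log(Level.DEBUG, eventSupplier);
        }

        void log(Level level, Supplier<LogEvent> eventSupplier) {
            if (level.compareTo(threshold) < 0) {
                return;                               // supplier never invoked below the threshold
            }
            LogEvent event = eventSupplier.get();     // built only when needed
            event.spanId = activeSpanIdOrNull();      // stand-in for the OpenTracing lookup
            System.out.println(level + " " + event.message + " spanId=" + event.spanId);
        }

        private String activeSpanIdOrNull() {
            return null;                              // a real implementation would ask the Tracer
        }
    }

    Application code would then do something like `log.debug(() -> { LogEvent e = new LogEvent(); e.message = "Charging card"; return e; });`.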

    I've created the API and Spec in the microprofile-sandbox repo and I've opened a PR here: https://github.com/eclipse/microprofile-sandbox/pull/67

    I should be clear that this isn't "finished". Apart from any functional/structural feedback I may receive, there is work to be done on tests, Javadocs, etc. For right now, I've added Javadocs to the most important areas and I've added unit tests for the base logic functionality. I can and will of course add to that as I receive feedback and make the appropriate changes; if there appears to be value in this specification.

    If you're interested, please take a look and I appreciate whatever feedback you can provide.

    Cheers

    David Lloyd

    unread,
    Oct 16, 2019, 8:02:09 AM10/16/19
    to microp...@googlegroups.com
    I have a few initial points of feedback:

    • Using functional interfaces for logging is a performance dead-end.
    We've supported logging in JBoss AS and WildFly for 20 years. The
    easiest way I've seen to tank performance of just about any scalable
    application is by doing *anything* nontrivial inside an unmatched log
    statement. So by requiring the construction of a lambda on every log
    statement, in addition to possibly blowing out metaspace due to
    inefficiencies in the lambda implementation, you're also causing (at
    minimum) an object allocation. The "hot path" for logging should only
    consist of a very small number of fast operations; the best we've been
    able to do is a direct (inlinable) method call chain resulting in a
    single integer compare before any "real" work is done.
    • The fluent idea is gone now I noticed. Wasn't this a core concept
    of the original proposal?
    • Extensible levels are a can of worms. Are you sure you want to go
    there? What's the use case, and what would be the problem with an
    enumerated fixed set of levels (e.g. the standard FATAL...TRACE log4j
    levels)?
    • Why require JSONB for the run time? Shouldn't the run time be free
    to choose a more performant serialization option or even output
    format?
    • Do we really want to require CDI for anyone to use this API? I
    don't know of any other log APIs which have this requirement.
    • Even if functions weren't detrimental to performance, having
    formatting be completely free-form certainly would be, and also
    hamstrings the container's ability to customize the output in many
    ways.
    • No direct formatting support means users roll their own.
    • No explicit support for i18n means that every user will have to
    roll their own.



    --
    - DML

    Alex Lewis

    unread,
    Oct 16, 2019, 12:00:08 PM10/16/19
    to microp...@googlegroups.com
    Thanks David. Apart from those points you like it though, right? Joking aside...

    * Functional Interface
    Log4j2, SLF4J, Flogger, and even Java Util Logging all appear to expose Functional methods, typically via Supplier<?> arguments. Maybe that's not a good argument for using them, but I haven't seen warnings not to use them. Did I miss the warnings? From my perspective they appeared to be doing a similar thing; namely, log generation can be done in a lambda, which is invoked after a log level check. All I've done is make that Supplier<?> part more explicit in order to create some formality around the structuring of the log data.
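    For reference, this is the kind of existing overload I mean; java.util.logging has had it since Java 8 (the class below is just an example):

    import java.util.logging.Logger;

    public class SupplierStyleExample {

        private static final Logger LOG = Logger.getLogger(SupplierStyleExample.class.getName());

        void process() {
            // The lambda is only evaluated if FINE is enabled for this logger.
            LOG.fine(() -> "Expensive detail: " + expensiveDiagnostics());
        }

        private String expensiveDiagnostics() {
            return "...";   // stand-in for costly message construction
        }
    }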

    * Fluent API
    That was mostly a consequence of selecting Flogger, for the benefits that Flogger calls out. I intentionally removed the Fluent nature to test people's reaction. Do you want a Fluent API and see real use cases for it? The .at<level> methods being a means of no-op'ing processing of the chain was appealing but I (naively?) thought that same benefit could be achieved through the use of a functional interface and a level check before invoking the lambda function.

    * Flexible Levels
    I haven't introduced flexible levels or certainly didn't intend to. If I've not locked down the Level class sufficiently that's just a mistake on my part and easily rectified.

    * JSON-B
    As the LogEvent, a basic object to capture data the Application wishes to attach to the log output, is within the Application domain, I felt it necessary to provide the Application with a defined and easy means of shaping the JSON serialization without having to do the serialization itself. As JSON-B is required/used in other MicroProfile specifications and is part of Jakarta EE, I thought it consistent with those other specs; it also does not require the Application to depend on a specific JSON implementation such as Jackson. More importantly, it does not require the application to couple itself to a JSON lib that may not be present/supported on all runtimes, as Jackson is not the only lib out there. If the runtime is not configured to output in JSON then the Spec leaves it to the runtime to decide how to output the LogEvent data, if at all. It only puts a constraint on the runtime when it's configured to output in JSON, by specifying an attribute name to which the JSON-serialized LogEvent data is attached.

    * CDI: Where does the API/Spec require the Application to use CDI? It itself uses CDI to look up Tracing to see if it has been made available by the runtime; which I believe is the way to look up a Tracer isn't it? CDI is at least how a Tracer is made available to an application using MicroProfile OpenTracing. The LoggerFactory does use CDI to look up the implementation so the Runtime would be required to make an implementation of LoggerFactoryProvider available through CDI. Is that what you're objecting to? CDI is also a required specification by MicroProfile so, I thought that was an appropriate mechanism to use. I did wonder about using the Java SPI mechanism but again, that's dependent on it being an acceptable approach by the runtime. As CDI is already required by MicroProfile, I leaned in that direction.

    * Container Output: This Spec/API is only defining a way to generate a Log Message and a Log Event payload. Those two things would still be wrapped by whatever the container wishes to output and would likely be specific to the logging implementation the runtime uses. It's just saying that along with all the other JSON attributes that the runtime may want to include in each log "line", please also include the JSON-serialized LogEvent (or the App's sub-class) against an "mpLogEvent" property. I'll provide a mock of what I mean below.

    * Formatting Support: Not as a built-in part of the method signatures, but the application can use any means it likes to build the message string. I thought that would be preferable to forcing the Application to use a home-grown parameter substitution like SLF4J's or a MessageFactory-like mechanism such as Log4j2 uses. What negative do you see in not explicitly providing specific formatting support? On a related note, each of the logging APIs has a method explosion in an attempt to avoid performance issues with varargs and auto-boxing. Flogger reduces that explosion somewhat by using the fluent API to separate the level from the log statement; therefore only "log" is affected by the argument explosion. By removing String formatting as a responsibility of the Logging API, the API itself can avoid those issues whilst not limiting what the Application can do to build a log message, and it can inherit the optimisations those APIs may provide. However, I acknowledge that argument is based on lambdas being suitable as part of the core of the API.

    * i18n Support: This is related to the Formatting support, I believe. String.format can also take a Locale, if that's applicable to the application. Again, my position was that the application can apply i18n in whatever way is appropriate for the application. I must admit, I've not seen logging use the Locale support that is available in some of the frameworks but that's not to dismiss the importance of such a feature. Do you have examples you could share where it has been useful/necessary?

    JSON Log example... This is a mock of what it could look like (using OpenLiberty as that is what I had to hand)...

    {
        "type": "liberty_message",
        "host": "a0cc3dc19d31",
        "ibm_userDir": "\/opt\/ol\/wlp\/usr\/",
        "ibm_serverName": "defaultServer",
        "message": "This is my log message",
        "ibm_threadId": "0000002b",
        "ibm_datetime": "2019-10-16T15:41:40.039+0000",
        "ibm_messageId": "CWWKF0011I",
        "module": "com.ibm.ws.kernel.feature.internal.FeatureManager",
        "loglevel": "AUDIT",
        "ibm_sequence": "1571240500039_0000000000033",
        "mpLogEvent": {
            "v":1,
            "mydata":"some value",
            "spanId":"123456"
        }
    }


    Note that everything apart from mpLogEvent is standard OL logging when in JSON format. 

    Cheers,
    Alex


    Emily Jiang

    unread,
    Oct 16, 2019, 6:19:19 PM10/16/19
    to Eclipse MicroProfile
    Thank you Alex for contributing the logging proposal! I have merged your PR so that we can view the files in a more friendly manner. The proposal is under here.
    As for the JSON-B, CDI and MP Config dependencies, I think they are fine. In MicroProfile specifications, we take a CDI-first approach so that the programming model is much simpler to use.

    I noticed you also use OpenTracing. You might know that OpenTelemetry is going to replace OpenTracing. I suggest you take a look at OpenTelemetry as OpenTracing is going to sunset soon.
    Another minor comment: can you move readme.adoc from the spec folder to the logging folder so that it serves as the landing page for the Logging proposal?

    I'll get more time later to go through the APIs and doc.

    Thanks
    Emily

    Mark D. Anderson

    unread,
    Oct 16, 2019, 9:28:40 PM10/16/19
    to 'Emily Jiang' via Eclipse MicroProfile
    Hi Alex -

    FYI, in several projects I have found significant benefit from using org.slf4j.MDC for low-effort tracing.
    It allows me to inject some thread-local identifier values when I first start handling a request,
    and then I don't have to touch any other code to be able to see those values in my log files as the
    request propagates through my system.
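    (For anyone following along, the pattern Mark describes looks roughly like this; a Servlet 4.0+ javax.servlet.Filter is assumed, and the "requestId" key plus the %X{requestId} pattern token are just examples.)

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import org.slf4j.MDC;

    // Put a per-request id into the SLF4J MDC once, at the edge; every log line on
    // this thread can then include it via %X{requestId} in the output pattern.
    public class RequestIdFilter implements Filter {

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            MDC.put("requestId", UUID.randomUUID().toString());
            try {
                chain.doFilter(request, response);
            } finally {
                MDC.remove("requestId");   // don't leak the value to the next request on this thread
            }
        }
    }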

    I'm not sure how exactly that might fit into mp-logging but I thought I'd mention it, as it seems
    like you have borrowed a lot of other things from SLF4J.

    Speaking of tracing, I'm kind of unclear on how MP-Logging would interact/leverage/synergize with MP OpenTracing.

    -mda

    Alex Lewis

    unread,
    Oct 17, 2019, 6:00:30 AM10/17/19
    to Eclipse MicroProfile
    Emily
    Thank you! I will move the doc as you suggest. Do I continue to open PRs from my sandbox clone, à la the GitHub way or, do I work on branches directly on the sandbox? As I was working on the tracing support I had come across the news of the merger between OpenTracing and OpenCensus into OpenTelemetry. I stuck with OpenTracing for now as it appears there is no logging support in OpenTelemetry and it's a TBD item right now with no firm dates, at least ones that I could find. Also, the runtimes have OpenTracing support at the moment so, I was more readily able to experiment. I will certainly keep an eye on the progress of OpenTelemetry and possibly get involved over there too.

    Mark
    The MDC was something I considered but I wanted to get feedback like yours. The mp-logging API does let you create a custom event object as an extension to LogEvent and, in doing so, you also provide a Supplier that is capable of building that specific type of LogEvent. I wondered whether that would be sufficient to achieve the MDC-like functionality, by the application using a Supplier (same instance, singleton, etc.) that is capable of populating that event data for each log event that would have normally gone in an MDC. There is a (very) simple example in the Unit Tests in the testPrepopulatedDataInEvent test. Do you think that mechanism would work for you? Do you have any ideas or suggestions on how it could be refined if you think it's relatively close, or do you think the MDC functionality would need to be more explicit in an mp-logging API?

    On the area of Tracing, I've worked under the assumption that a Span ID is a significant correlating key for an operation and in particular across distributed services. I thought it would be beneficial to have that key included in logging that occurred within those Active Spans, across those services. Once the logs are at an aggregator service, you can use a Span ID to easily filter down to all of the logging of that operation. As such, the mp-logging spec attempts to do this automatically so it's not something an application developer has to do manually. 

    Secondary to the Span ID, I wanted to see if I could make Span logging easier for the Application. In order to log to a span, you need to inject the tracer, get the active span, build the log "event" with a map of specific field names and then call the log method on the span. This felt far more involved compared to "typical" logging, as well as boilerplate that would either find its way into many classes/applications or be turned into a library. So, instead of having an additional library, I thought a logging API may be a good place to introduce a "span" log method alongside the other debug, warn, info, etc. methods.

    One final benefit (I hope) was the introduction of Span Implicit logging which means that an application could turn all logging at a certain level automatically into Span logging without any additional code from the application. For instance, you could set the Implicit level to "debug" such that all debug logging or higher would become Span logging automatically. This would hopefully provide more contextual log information at the Tracer service when required. I suspect it would be something to turn on temporarily which is why the default implicit level is OFF.

    David,
    After sending my last response I feel we maybe need to take a step back on our discussion and hopefully find common ground from which to start. I think the most distilled version of my proposal is at its core:
    * Logging, due to its unusual position of spanning both the Application and the Runtime, hinders or outright breaks portability of an application due to the lack of a standard API.
    * Microservice based applications or more generally distributed applications have adopted a pattern of logging in JSON that's harvested by Log Aggregators as part of an overall need for observability, alongside Tracing and Metrics. As such, I feel like there's a possibility for MicroProfile to make addressing that need as easy and as simple as possible.

    To make sure I'm clear, I believe those two points can be considered independently.

    Do you agree with those points at all? If you do, even if not entirely, what do you agree with and can we work together to find the right solution? Do you have ideas you are willing to share?

    If you don't agree with any of my points, then I would genuinely like to understand your reasons why. 

    If you believe there is nothing intrinsically wrong with my points, but don't think they are problems worth solving or that there isn't sufficient value to make them worth pursuing, at least I will clearly understand your standpoint.

    Cheers,
    Alex

    Emily Jiang

    unread,
    Oct 17, 2019, 9:12:25 AM10/17/19
    to Eclipse MicroProfile
    Thank you! I will move the doc as you suggest. Do I continue to open PRs from my sandbox clone, à la the GitHub way or, do I work on branches directly on the sandbox? As I was working on the tracing support I had come across the news of the merger between OpenTracing and OpenCensus into OpenTelemetry. I stuck with OpenTracing for now as it appears there is no logging support in OpenTelemetry and it's a TBD item right now with no firm dates, at least ones that I could find. Also, the runtimes have OpenTracing support at the moment so, I was more readily able to experiment. I will certainly keep an eye on the progress of OpenTelemetry and possibly get involved over there too.

    Just create PRs on the sandbox from now on so that it stays current. In this way, other contributors can contribute PRs as well.
    Thanks
    Emily

    David Lloyd

    unread,
    Oct 17, 2019, 12:14:12 PM10/17/19
    to microp...@googlegroups.com
    On Wed, Oct 16, 2019 at 11:00 AM Alex Lewis <alex.l...@gmail.com> wrote:
    >
    > Thanks David. Apart from those points you like it though, right? Joking aside...

    :-)

    I'll be honest, I don't hate it, but I continue to be extremely leery
    of introducing *yet another* log API to the world. The idea of
    creating new log APIs is its own tier of satire at this point!

    > * Functional Interface
    > Log4j2, SLF4J, Flogger, and even Java Util Logging all appear to expose Functional methods, typically via Supplier<?> arguments. Maybe that's not a good argument for using it but I haven't seen warnings not use them. Dud I miss the warnings? From my perspective they appeared to be doing a similar thing; namely, Log generation can be done in a lambda, which is invoked after a log level check. All I've done is make that Supplier<?> part more explicit in order to create some formality around the structuring of the log data.

    I see why you've done it, and it's definitely clever, but exposing
    functional methods is far from being the same thing as exposing *only*
    functional methods. They may be useful for certain cases but
    requiring them means that the cost cannot be avoided. In the APIs you
    refer to, it is possible to choose another approach (even if one
    prefers the functional approach nearly 100% of the time) if the cost
    for individual cases becomes too substantial. But I suppose one key
    difference is that the other popular log APIs are general purpose,
    used by frameworks and applications alike. I suppose it's not clear
    whether this proposed spec is intended to be suitable for usage
    outside of applications (particularly, applications running within a
    MicroProfile runtime). I assumed that it is, but that assumption
    isn't necessarily warranted.

    > * Fluent API
    > That was a mostly a consequence of selecting Flogger, for the benefits that Flogger calls out. I intentionally removed the Fluent nature to test people's reaction. Do you want a Fluent API and see real use cases for it? The .at<level> methods being a means of no-op'ing processing of the chain was appealing but I (naively?) thought that same benefit could be achieved through the use of a functional interface and a level check before invoking the lambda function.

    It would be nice if that were the case, but our measurements have
    always indicated that copious usage of lambda comes with a high cost.
    API design has always been a balance between usability and
    performance. Sometimes one can tip one way or the other based on the
    expected use case though.

    But I will say that sometimes you just want to be able to say
    `log.debug("Florbed the baz: %s", theBaz);` or even just
    `log.debug("Blah");`. Requiring a lambda means that at a minimum
    you'd need `log.debug(() -> "Blah");`; requiring that the lambda deal
    in records means an even longer stanza. So from a usability
    standpoint it's not always that great either.

    The extensible log record idea is clever, and you could go a long way
    with it, but I am having difficulty visualizing how it can map on to
    backing log frameworks (for example, ones that aren't emitting JSON),
    or how it can be useful in the absence of function-oriented log
    statements.

    > * Flexible Levels
    > I haven't introduced flexible levels or certainly didn't intend to. If I've not locked down the Level class sufficiently that's just a mistake on my part and easily rectified.

    OK in that case I'd advise just using an `enum` for levels, and
    sidestep the numerical stuff. You can make it be `Comparable` and add
    niceties like `next` and `previous` methods but in the end making it a
    flat `enum` seems to fit the use case exactly.
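    Something like the following is all I mean (hypothetical, not taken from the proposal):

    // A flat enum is Comparable for free, so the hot-path check is one compare;
    // next()/previous() are the only extras worth adding.
    public enum Level {
        FATAL, ERROR, WARN, INFO, DEBUG, TRACE;

        public Level next() {
            return ordinal() == values().length - 1 ? this : values()[ordinal() + 1];
        }

        public Level previous() {
            return ordinal() == 0 ? this : values()[ordinal() - 1];
        }

        // A statement at this level is emitted when the configured level is at
        // least as verbose, e.g. INFO.isLoggableAt(DEBUG) == true.
        public boolean isLoggableAt(Level configured) {
            return compareTo(configured) <= 0;
        }
    }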

    Also, I'm curious: your example below mentions an "AUDIT" level,
    though the proposal didn't include any such level. What was your
    intention there?

    > * JSON-B
    > As the LogEvent, a basic object to capture data the Application wishes to attach to the log output, is within the Application domain I felt it necessary to provide the Application with a defined and easy means of shaping the JSON serialization without having to do the serialization itself. As JSON-B is required/used in other MicroProfile specifications and is part of Jakarta EE, I thought it consistent with those other specs and does not require the Application to depend on a specific JSON implementation such as Jackson. More importantly, it does not require the application to couple itself to a JSON lib that may not be present/supported on all runtimes as Jackson is not the only lib out there. If the runtime is not configured to output in JSON then the Spec leaves it to the runtime to decide how to output the LogEvent data, if at all. It only puts a constraint on the runtime when it's configured to output in JSON, by specifying an attribute name to which the JSON serialized LogEvent data is attached to.

    Having the annotations present isn't really a bad thing; but it should
    be made clear (and enforced) that JSON-B is optional.

    > * CDI: Where does the API/Spec require the Application to use CDI? It itself uses CDI to look up Tracing to see if it has been made available by the runtime; which I believe is the way to look up a Tracer, isn't it? CDI is at least how a Tracer is made available to an application using MicroProfile OpenTracing. The LoggerFactory does use CDI to look up the implementation so the Runtime would be required to make an implementation of LoggerFactoryProvider available through CDI. Is that what you're objecting to? CDI is also a required specification by MicroProfile, so I thought that was an appropriate mechanism to use. I did wonder about using the Java SPI mechanism but again, that's dependent on it being an acceptable approach by the runtime. As CDI is already required by MicroProfile, I leaned in that direction.

    I wasn't referring to tracing, I was referring to LoggerFactory. As
    for MicroProfile requiring CDI - that's definitely not the same thing
    as saying that every MP *spec* requires CDI. By taking that
    definition, you're implicitly saying that this is not a
    general-purpose logging API; rather it's only available when you're
    using a MP runtime (as I said above). Is this a position we really
    want to have?

    By taking this position, we're ensuring that frameworks will not use
    this specification (after all, many frameworks will not want to be
    tethered to an MP runtime). This in turn guarantees that there will
    always be many other logging frameworks present at the same time:
    slf4j in particular is almost guaranteed. So why would a user use MP
    logging instead of slf4j or any of the other options on the table? I
    believe that other than people who don't want to think too hard about
    it and just match MP to MP, there has to be a very compelling case to
    switch logging frameworks, and without at *least* feature parity with
    what people are accustomed to already, that may not be a reasonable
    thing to expect. I think in practice it'll likely just become another
    entry into an already crowded area.

    > * Container Output: This Spec/API is only defining a way to generate a Log Message and a Log Event payload. Those two things would still be wrapped by whatever the container wishes to output and would likely be specific to the logging implementation the runtime uses. It's just saying that along with all the other JSON attributes the runtime may want to include in each log "line", please also include the JSON serialized LogEvent (or the App's sub-class) against a "mpLogEvent" property. I'll provide a mock of what I mean below.

    Sure, I dig it. At least in theory. Do existing logging backends
    have the infrastructure to map these fields reasonably? It seems like
    this approach really is going to require special support on top of
    whatever exists within the containers today. Not that this is a deal
    breaker, but it is definitely an implementation consideration: I don't
    think the idea of custom fields exists today in many implementations.

    > * Formatting Support: Not as a built-in to the method signatures, but the application can use any means it likes to build the message string. I thought that would be preferable to forcing the Application to use a home-grown parameter substitution like SLF4J or a MessageFactory-like mechanism such as the one Log4j2 uses. What negative do you see in not explicitly providing specific formatting support?

    Verbosity, as I mentioned before; but also, when the API stipulates a
    particular formatting style, in addition to being one more thing the
    user doesn't have to think about, it also lets the container do things
    like apply i18n (see below), format values differently based on type
    (for example, masking secrets or IP addresses or making certain things
    be visually distinct), match messages based on their format pattern
    rather than their post-formatted content, etc.

    Particularly, I think that `String.format`-style format strings cover
    very close to 100% of reasonable use cases for message formatting.
    It's a very flexible format, and it's as i18n-friendly as anything in
    Java. I'd be inclined to simply mandate it, if it were me.
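
    For instance, plain `String.format` already gives you positional
    arguments (so a translated pattern can reorder them) and an explicit
    `Locale`:

        import java.util.Date;
        import java.util.Locale;

        public class FormatDemo {
            public static void main(String[] args) {
                Date now = new Date();
                // English pattern: arguments in their natural order.
                System.out.println(String.format(Locale.US,
                        "User %1$s logged in from %2$s at %3$tT", "alex", "10.0.0.7", now));
                // A translated pattern can reorder the same arguments.
                System.out.println(String.format(Locale.GERMANY,
                        "Benutzer %1$s hat sich um %3$tT von %2$s angemeldet", "alex", "10.0.0.7", now));
            }
        }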

    > On a related note, each of the logging APIs has a method explosion in an attempt to avoid performance issues with varargs and auto-boxing. Flogger reduces that explosion somewhat by using the fluent API to separate the level from the log statement; therefore only "log" is affected by the argument explosion. By removing the String formatting as a responsibility of the Logging API, the API itself can avoid those issues whilst not limiting what the Application can do to build a log message, and to inherit the optimisations those APIs may provide. However, I acknowledge that argument is based on lambdas being suitable as part of the core of the API.

    Method explosions are certainly annoying for the API designer. But
    APIs aren't optimized to be convenient for the API designer; rather
    they're meant to be convenient to the API *user*. As a user of an API
    that has this kind of method explosion, it's convenient to be able to
    say `log.debugf("There are %d foos in the %s", fooCnt, bar);` and have
    it just work, optimally, with no performance worries because a fast
    level check will filter it out before any
    allocations/boxing/formatting ever happens.
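
    Sketched with hypothetical names, the pattern I mean is simply:

        // Why the method explosion pays off for the caller: the level gate runs
        // before any boxing, varargs array allocation or formatting happens.
        interface FastLog {
            boolean isDebugEnabled();
            void write(String message);

            default void debugf(String format, int count, Object what) {
                if (isDebugEnabled()) {                         // cheap check, usually inlined
                    write(String.format(format, count, what));  // boxing/formatting only when enabled
                }
            }
        }
        // usage: log.debugf("There are %d foos in the %s", fooCnt, bar);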

    The fluent logging API *maybe* improves this, but the performance in
    this case is highly dependent on the (JIT) compiler's ability to
    optimize away things that are constant/redundant including object
    allocations. If you're exceedingly careful with API design (and you
    test often with many different JVMs), this is possible, if not easy.
    From a usability standpoint it is possibly a slight improvement on the
    "classic" API.

    > * i18n Support: This is related to the Formatting support, I believe. String.format can also take a Locale, if that's applicable to the application.

    This takes care of localization but not internationalization. The
    ability to ship an application which can log in more than one language
    is important to multiple user categories (though this is definitely
    colored by the decision of whether this spec is intended to be
    general-purpose or not).

    > Again, my position was that the application can apply i18n in whatever way is appropriate for the application. I must admit, I've not seen logging use the Locale support that is available in some of the frameworks but that's not to dismiss the importance of such a feature. Do you have examples you could share where it has been useful/necessary?

    Usually the locale support is invisible: the log language should
    normally match the localization rules. It's a bit odd to log in one
    language and have numbers logged by another language's convention, for
    example. So you'd rarely see a log framework manually handle locales
    (though that's certainly useful for some niche cases as well); the
    system-wide locale would almost always apply.

    When you're a framework author or an application vendor, you want your
    component to have translations for your various target markets.
    Application designers may have international deployments.

    Traditionally i18n is done via `ResourceBundle` (the JUL-style
    approach allows you to substitute a resource key for the format
    string, and the resource value becomes the "real" format string), but
    other approaches are possible as well: for example, we use a system
    where the user writes an interface with one method per log message,
    with type-safe parameters of the expected types. The method name
    becomes the resource "key" and we generate implementations of the
    interfaces for each target language for which translations exist.
    This also lets us assign unique error codes for log messages, which
    means that one can web search for an error code regardless of the
    language they speak or the country they're in. I'm not suggesting
    this approach for this specification, of course, but just giving some
    perspective on i18n usage in logging frameworks (hopefully).
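
    Purely to illustrate the shape of that approach (the names below are
    invented, and as I said, I'm not suggesting this for the spec):

        import java.math.BigDecimal;

        // One interface method per log message; the method name acts as the
        // resource key, and per-language translations supply the real format
        // string, e.g. paymentReceived=PAY-0001: Received payment of %s from %s
        interface PaymentLogs {
            void paymentReceived(BigDecimal amount, String customerId);
            void paymentRejected(String customerId, String reason);
        }
        // A generator (build-time or runtime) produces an implementation per
        // target language, keeping the error code (PAY-0001) stable across locales.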



    --
    - DML

    Alex Lewis

    unread,
    Oct 17, 2019, 3:29:00 PM10/17/19
    to microp...@googlegroups.com
    Great points. Thank you.

    I very much agree that the proliferation of Logging APIs is laughable, and by proposing a new one I'm not making that situation much better, but I'm hoping that by concentrating on MicroProfile and it being mostly just an API, it's not trying to solve the same problems as "yet another logging framework". Whether that survives scrutiny is another thing :)

    My current feeling was that this API would be used by "application" code in a MicroProfile compatible runtime rather than as a general purpose lib for other libs/frameworks to use. Portability is still partly impacted by the logging frameworks chosen by libraries included in an application but at least the Application can guarantee its own logging output via a standard API. I'd optimistically like to think that it's the Application logging that a MicroProfile developer/application would be most interested in for the majority of cases. I may be making an unreasonable leap though?

    I will go through your responses in more detail and follow up.

    Cheers



    Mark D. Anderson

    unread,
    Oct 18, 2019, 1:13:30 AM10/18/19
    to 'Emily Jiang' via Eclipse MicroProfile
    Just to draw out one of David's points, I have to say that my first reaction was also "really? another logging api?"

    Indeed, backing up a level, I'm of the view that a new logging api should not be the centerpiece of MP Logging....
    which sounds strange but if I may...

    A logging api gets into a code base and stays there. Like toe fungus, or malaria.
    I'm partially responsible for one system that includes dependencies using all 4 of the major
    extant logging apis (slf4j, log4j, java.util.logging, and commons logging).  Fortunately there are
    adapter/shims that allow me to force them all into a common sink.

    In fact, there are several structured logging adapters available for slf4j/logback already.
    They have the benefit of incremental adoption -- one can get non-trivial benefit from them just by changing
    external XML logging config, and using the already-existing slf4j MDC feature for key-value pairs.
    Optionally (and incrementally) they offer more if one wants to commit to their additional classes.

    Similarly, I think MP Logging has to start from the premise that it will provide some benefits TBD without
    shifting everything to a new API (which IMHO will realistically never happen -- cf. toe fungus metaphor).

    I also think it would be great if it were part of some Grand Unified Theory of Observability (GUTO) which
    would include the metrics and tracing that are in opentracing/census/telemetry.
    And like them, I would think that they would offer convenient annotations, with optional apis for added
    benefit.

    I currently use the slf4j MDC feature for many different use cases (roughly as sketched after this list):
    - correlation ids such as a request id, session id, or source ip address (the tracing scenario)
    - user ids (login name, internal id, impersonating id for admin operations, federated org id)
    - phase of processing for long-running batch programs
    - primary keys of anchoring objects involved
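
    For instance (plain slf4j, nothing exotic):

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        import org.slf4j.MDC;

        public class OrderService {
            private static final Logger log = LoggerFactory.getLogger(OrderService.class);

            void handle(String requestId, String userId, String orderId) {
                MDC.put("requestId", requestId);  // correlation id
                MDC.put("userId", userId);        // acting user
                MDC.put("orderId", orderId);      // anchoring primary key
                try {
                    log.info("processing order"); // MDC entries ride along as key-value pairs
                } finally {
                    MDC.clear();                  // don't leak context across pooled threads
                }
            }
        }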

    Indeed, it is a huge generalization, but I think to a first order approximation, if I am interested in
    something being broken out into separate information in a structured log object, then it is information
    that has longer persistence than a single log statement, and I would be fine with an MDC and/or
    annotation mechanism for injecting them -- while the string that is sent into my legacy logging
    api goes into something called "message".

    That was kind of prolix, but my point is that the important information I want in the structure is usually
    things I *don't* want to have to explicitly include in my individual detail logging calls.

    Also, related to the toe fungus, when I have dependencies (some of which I don't have source code for)
    which are using more primitive logging systems, then an adjunct filter that sits outside my code
    might parse those logs into structure. Ugly and potentially slow, but these solutions are the kinds
    I have to use every day. Splunk for example has rich support for parsing pre-existing logs, as do many
    other systems.

    In summary, totally solving the "structured logging" problem in the real world I think requires a lot
    of hooks above, around, and below the logging apis that are already in place -- not assuming they
    can or will be supplanted.

    Oh and one more brainstorm, probably demented -- I wonder if there is some way to leverage the
    OpenAPI schema annotations for declaration of the structured log object?

    -mda
     

    David Lloyd

    unread,
    Oct 18, 2019, 10:30:55 AM10/18/19
    to microp...@googlegroups.com
    On Fri, Oct 18, 2019 at 12:13 AM Mark D. Anderson <m...@discerning.com> wrote:
    > ...
    > I currently use the slf4j MDC feature for many different use cases:
    > - correlation ids such as a request id, session id, or source ip address (the tracing scenario)
    > - user ids (login name, internal id, impersonating id for admin operations, federated org id)
    > - phase of processing for long-running batch programs
    > - primary keys of anchoring objects involved

    This bit made me wonder if some kind of first-class integration with
    MP Context Propagation is warranted?
    --
    - DML

    Alex Lewis

    unread,
    Oct 18, 2019, 10:31:20 AM10/18/19
    to microp...@googlegroups.com
    Thanks Mark. 

    At first glance, I think the proposal would be that the Application's Log Event Supplier would be able to look up the attributes you mention in order to pass that information into each log "statement", thus resulting in that data being part of each JSON log object that is written to console, file, etc. How that Supplier accesses that data is up to how the Application wishes to store it; however, I can see how incorporating MDC-like functionality as a first-class feature made available to a Supplier would be useful.

    However, I'm starting to wonder that in light of the discussions happening right now about MP OpenTracing and what should happen in regards to Open Telemetry, whether there's possibly a need to evolve MP OpenTracing into something that can facilitate the abstract requirements of observability but also make logging a consideration. Create that Grand Unified Theory of Observability (GUTO) you mention :)

    It may be likely that the implementation would in fact utilise Open Telemetry but maybe there's still a separation of concerns between how you express what you want from an observability standpoint, which could be defined in an MP spec, without being bound/coupled to OpenTelemetry.

    If I try to look at this from a wider perspective, some requirements appear to be:
    • A way to indicate that a particular variable within the application is useful/important within the scope of a processing context (not necessarily just a thread). 
    • That the logs, the network traffic, etc. across services and service/system boundaries can be correlated such that, as someone looking in on the system I can see what happened at both the network and the application level.
    • However, the application processing may be in isolation from network traffic or may be the source of network traffic.
    As just one example (and I'm just thinking on my feet and out loud, so it may be nonsense), an application developer could do something like:
    1. Mark the data attributes of classes as being "important to tracking".
    2. Mark a JAX-RS ingress method as a "processing start".
    3. As all subsequent processing occurs, whenever code accesses, instantiates, etc. an object that contains "important to tracking" attributes, the values of those attributes at that point in time are harvested and collected.
    4. As logging happens, sufficient data is attached to each log output such that any and all underlying implementations can tie the network, log output, etc. together. That, along with any data the application wishes to include in the actual log "message" as it typically does today.
    5. Mark some point of processing as the "end", whether that's the response out of JAX-RS or something else.
    Don't get me wrong, I'm not pretending to be inventing this :) and as you read my example it's easy to think of Spans, Tracers, (EJB) Transactions, etc. but hopefully there is an abstraction that makes it easy for the application to achieve Observability and does not bind the application to a specific implementation such as Open Telemetry (which I think is one concern right now in the MP Open Tracing thread). I just wonder whether MP could avoid the notion of Tracers, Spans, etc. and express it more conceptually? Something like... "Observation" instances are created, and they start, stop, pause, continue, etc. An Observation has properties/state, a lifecycle, "notes" (transcript/logs) and expresses or consumes Events. Events may trigger other Observations, etc.
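
    To make that slightly more concrete, a purely hypothetical sketch (none of these annotations exist anywhere; they're only there to illustrate the numbered steps above):

        import java.lang.annotation.*;
        import java.math.BigDecimal;

        @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)  @interface ImportantToTracking {}
        @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface ProcessingStart {}
        @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface ProcessingEnd {}

        class Payment {
            @ImportantToTracking String customerId;  // step 1: mark data worth tracking
            BigDecimal amount;
        }

        class PaymentResource {
            @ProcessingStart                          // step 2: the ingress marks the start
            String pay(Payment payment) {
                // steps 3-4: the runtime would harvest tracked values and attach
                // correlation data to every log statement made during processing
                return "ok";
            }

            @ProcessingEnd                            // step 5: mark the end
            void done() {}
        }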

    I'm going to stop rambling.

    I still worry about the lack of a standardised Logging API but maybe that problem cannot be solved by just MP? However, I'm not giving up yet ;)

    Cheers


    Rüdiger zu Dohna

    unread,
    Nov 4, 2019, 5:31:44 AM11/4/19
    to Eclipse MicroProfile
    Sorry for being so late for the party; and now adding such a long comment... but I also had a lot of things to read and think about ;-)

    Logging is a very important issue, as it's such a sticky subject (I like the toe fungus analogy ;-). So it's good and natural for MP to take a stab at it. MP, just as JEE, only defines APIs, never any implementations or wire formats, etc. So we can only define a logging API; everything else has to be left to the implementors to compete on.

    There's already a long history of logging APIs. Maybe we can learn something: There always was a proliferation of logging frameworks. Some people joke that every developer has written at least one. Does anybody remember commons-logging? It was a great tool to redirect log statements to any framework doing the actual work. It was used widely; most importantly, it also was used by libraries, so you could collect all of the log statements into one stream of structured events. Log4j(1) was probably the most widely used logging framework. Tools like Chainsaw collected the logs and you could filter and search and everything. This is so much more than a single terminal output where each line is formatted in a different way by a different framework.

    Sadly commons-logging had some serious flaws, including memory leaks, IIRC. But replacing it was only possible by providing a bridge, so during a transitional phase, the commons-logging calls that were spread all over applications and libraries could be redirected to the new framework. The Log4j(1) team was not willing to separate the API from the implementation at that time (this was actually a very personal thing, kind of a Rose War); so slf4j was created as a logging hub. It collects log statements from various logging APIs (including its own) and forwards them to various logging backends. This quickly replaced commons-logging.

    Roundabout the same time, the Java platform developers wanted to put an end to the logging proliferation altogether, and added JUL (java.util.logging) to Java 1.4. Nobody liked it. It is over-engineered in some places, namely many and even custom log levels; OTOH many important features are missing, namely the MDC. People moved to slf4j instead, which even had a collector for JUL, so the platform log statements could be forwarded to whatever you used as a backend. I suspect that Glassfish was forced to use JUL, or they would have chosen slf4j, too.

    It took some time, but Oracle seems to have understood: they introduced JEP 264 in Java 9 to make it easier to forward the platform logs to a different backend. A lot of platform code still has to be migrated, though.

    I thought the war was over. Log4j(2) now has a separate API, which has some advantages over slf4j, but I don't consider them worth the effort of updating all applications and all libraries. I'd rather have seen contributions to slf4j, but this also seems to be difficult. There's only one core maintainer, Ceki Gülcü, and currently 89 open pull requests.

    Flogger also has some interesting features, but it can easily be forwarded to slf4j.

    So what can we learn from the logging history? Creating a logging framework or even only a logging API looks like such an easy thing to do; but actually it's a nastily complex monster; it's sometimes even time critical. It's much, much, much more work than what one suspects. You'd need to analyze the bytecode generated, measure performance, object allocation, etc. And convincing people to use a new logging API is close to impossible, even more so if there are no really compelling reasons to do so.


    What can/should we do instead?

    • Require full MP-compliant implementations to support slf4j out-of-the-box. Alex's tests include the slf4j-api and slf4j-jdk14, but at least in WildFly it works with the api in `provided` scope and without the implementation; the other runtimes probably do so, too.
    • Standardize on MDCs, e.g. request-id, transaction-id, span-id, trace-id, etc. These should be part of the JAX-RS, OpenTracing/OpenTelemetry specs, etc., not of the logging spec. I assume that this is not already done, as the dependency on slf4j is not yet specified; and it should be implemented so that this dependency is optional!
    • Move structured logging one step further: the log message is not always just a simple string. We sometimes want to log the full data of an object and keep its structure available in the toolchain, e.g. as JSON, so that I can, e.g., filter on the region of the customers logging in (see the sketch after this list). This is true for the MDC as well, which is currently a mere `Map<String, String>`.
    • Think about a different approach to logging altogether; something like a LoggingInterceptor.
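
    Here is the sketch I referred to above: even plain slf4j carries enough structure for a suitable backend to pick up, because the argument object itself travels in the logging event:

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class LoginService {
            private static final Logger log = LoggerFactory.getLogger(LoginService.class);

            static class Customer {
                final String id;
                final String region;
                Customer(String id, String region) { this.id = id; this.region = region; }
            }

            void loggedIn(Customer customer) {
                // The Customer object travels in the event's argument array; a
                // structure-aware appender could serialize it to JSON instead of
                // flattening it via toString(), enabling filters like region=EU.
                log.info("customer logged in: {}", customer);
            }
        }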


    tl;dr

    I strongly suggest that we simply stick to slf4j for now and rather contribute there. We also could help slf4j to get a wider stance in the long run: maybe we could invite slf4j to become an Eclipse project or an MP standard in its own right.

    Alex Lewis

    unread,
    Dec 29, 2019, 1:23:46 PM12/29/19
    to Eclipse MicroProfile
    Hi Rudiger! Thank you for your thoughtful post!

    In short, I agree with most of what you've said. I agree that picking a pre-existing API would be the ideal. I think most in this thread would also agree with that. The only place I'd waver is around the MDC. Although I agree it's important and useful, I think it has a better place under Observability as a whole rather than specific to Logging.

    Although I agree with picking an existing API I have obviously done the exact opposite and proposed a new one more recently. My reasons for doing so were:
    • An API that more readily supported Structured Logging but was not a huge departure from popular APIs (I.e. SLF4J).
    • Easy to adopt from an IP point-of-view.
    • Simplify the act of logging to OpenTracing. This was an attempt at the MP tenets of lowering the barrier of entry and make OpenTracing Logging easier with less boilerplate. 
    • The suggestion of adopting an existing API garnered either ambivalence or negativity, but there again so did suggesting a new API.
    Having said that, these are just my efforts to see what resonates with the community and what doesn't; to generate discussion with the hope that something emerges.



    Speaking more generally, IMHO the lack of a single logging API is a problem systemic to Java. Even if MP were to select an API as standard, that does not account for the transitive dependencies on other logging APIs brought in by external libs.

    I personally believe the ideal solution would be for the vendors of the major Logging Frameworks to join a single (working) group/process (Eclipse, Apache, Google, CNCF, JCP, OpenTelemetry, etc.) with the intent to deprecate their own APIs and create a single replacement API.

    Rather than repeat XKCD 927, each of those vendors would necessarily have to make it clear that they have chosen this new API over continuing with their own; they could, of course, continue to each provide their own separate implementations. This wouldn't solve the problem overnight, but I'd hope for adoption as the direction would be clear. Hopefully, the open-source world would rally behind it. Without the vendors' agreement to deprecate their own existing APIs and be part of the messaging/awareness, it would very quickly become exactly what XKCD 927 lampoons.

    The alternative is that MicroProfile attempts to use its popularity and visibility in order to nudge the industry towards a single API.

    Although I think SLF4J would be a good choice, is Ceki Gülcü the only maintainer? If so, it's likely he would need to agree to open up SLF4J to other maintainers to ensure that it is not reliant on one person. Otherwise, it would pose a risk for MicroProfile and the compliant runtimes. There would be no point selecting SLF4J to only find out that there is no chance of the community being able to propose new functionality.

    What is the negative impact of MicroProfile adopting a standard Logging API? MP's inertia and popularity does not move the needle on third-party libraries, so the portability issue is not solved. Having said that, the code most important to the application developer (their own) would have portability guarantees, and it's the logging they're most interested in. The optimist in me would like to think that there would be some positive impact eventually regardless. Are there any other negative impacts I'm not considering?

    As an aside, could MicroProfile adopt SLF4J without it being under Jakarta or Eclipse?

    Finally, I'm going to make a final plea to the MP community to respond/vote, whether it's a yes or a no, as to whether logging is a sufficient problem, and one for MicroProfile to attempt to solve. I think we need a clear indication as to whether this warrants continued effort. Any suggestions on how best to capture a vote?

    If the vote does indicate it is something worth pursuing, maybe we can put something more formal around pushing this forward? How best to select a direction and how to bring those wanting to be involved more closely together?

    I'd also like to thank everyone for their continued efforts in this thread, I realise it's going on for the long haul.

    Cheers

    Rüdiger zu Dohna

    unread,
    Jan 2, 2020, 5:37:58 AM1/2/20
    to Eclipse MicroProfile
    Hi Alex!


    On Sunday, December 29, 2019 at 7:23:46 PM UTC+1, Alex Lewis wrote:
    In short, I agree with most of what you've said. I agree that picking a pre-existing API would be the ideal. I think most in this thread would also agree with that. The only place I'd waver is around the MDC. Although I agree it's important and useful, I think it has a better place under Observability as a whole rather than specific to Logging.

    I agree that logging is only one aspect of observability; tracing and monitoring have to be integrated with logging, so I don't have to do the same thing for different aspects. The `spanId` field in the `LogEvent` in your proposal is one such integration. The version of the application is another common piece of information that would be of interest to all observability tools: I want to see it in the logs as well as in the tracing and monitoring. The application container could extract that from the war. And the application developer may decide that the id of the customer may also be interesting; it may be available as a query parameter in the REST boundary. I could come up with many other common scenarios: there will be very different pieces of code that want to add to the final log event. While type safety is a very good thing, I don't think it's beneficial in this case: defining subclasses of `LogEvent` adds tight coupling between the application and its container/runtime, and maybe libraries used by the application.

    I assume that any logging API has to be dependency-free to be accepted, e.g. by 3rd party libraries. So your proposed API still has to be separated from the implementation, so that the dependencies on OpenTracing, JSON-B, CDI, and MP-Config become an implementation detail.

    Although I agree with picking an existing API I have obviously done the exact opposite and proposed a new one more recently. My reasons for doing so were:
    • An API that more readily supported Structured Logging but was not a huge departure from popular APIs (I.e. SLF4J).
    Putting the downsides of type-safe log events aside, slf4j also supports structured logging. The `LoggingEvent` has an Object array for the arguments, and this is passed to the appenders, so they can convert them, e.g. to JSON. I think that's good enough.
    • Easy to adopt from an IP point-of-view.
    slf4j is MIT licensed, so that shouldn't be a big issue. 
    • Simplify the act of logging to OpenTracing. This was an attempt at the MP tenets of lowering the barrier of entry and make OpenTracing Logging easier with less boilerplate. 
    Maybe I missed something, or do you mean the logging of the OpenTracing span id?
    • The suggestion of adopting an existing API garnered either ambivalence or negativity, but there again so did suggesting a new API.
    Having said that, these are just my efforts to see what resonates with the community and what doesn't; to generate discussion with the hope that something emerges.

    I like your approach! And I hope you see my feedback to be part of exactly that discussion ;-)

    Speaking more generally, IMHO the lack of a single logging API is a problem systemic to Java. Even if MP were to select an API as standard, that does not account for the transitive dependencies on other logging APIs brought in by external libs.

    I personally believe the ideal solution would be for the vendors of the major Logging Frameworks to join a single (working) group/process (Eclipse, Apache, Google, CNCF, JCP, OpenTelemetry, etc.) with the intent to deprecate their own APIs and create a single replacement API.

    Rather than repeat XKCD 927, each of those vendors would necessarily have to make it clear that they have chosen this new API over continuing with their own, they could of course continue to each provide their own separate implementations. This wouldn't solve the problem over night but I'd hope for adoption as the direction would be clear. Hopefully, the opensource world would rally behind it. Without the vendors agreement to deprecate their own existing APIs and be part of the messaging/awareness it would very quickly become exactly what XKCD 927 lampoons.

    The alternative is that MicroProfile attempts to use its popularity and visibility in order to nudge the industry towards a single API.

    As I've said, in addition to the usual attitude that nobody wants to give up their darlings, there seem to be some serious personal animosities. So my personal prediction is: this will never happen (and I don't say this easily).

    Although I think SLF4J would be a good choice, is Ceki Gülcü the only maintainer? If so, it's likely he would need to agree to open up SLF4J to other maintainers to ensure that it is not reliant on one person. Otherwise, it would pose a risk for MicroProfile and the compliant runtimes. There would be no point selecting SLF4J to only find out that there is no chance of the community being able to propose new functionality.

    Very good point. I think Ceki would be glad to get some help. But I think the risk would be okay to take, as we could fork it at any time; this would make things a lot more complicated, of course, but I guess then we would be at exactly the same point as if we introduced a new API now.

    I just emailed Ceki; let's hope he'll join the discussion.
     
    What is the negative impact of MicroProfile adopting a standard Logging API? MP's inertia and popularity does not move the needle on third party libraries so, the portability issue is not solved. Having said that, the code most important to the application developer (their own) would have portability guarantees and it's the logging they're most interested in. The optimist in me would like to think that there would be some positive impact eventually regardless. Are there any other negative impacts I'm not considering?

    I can't see any other impacts either.
     
    As an aside, could MicroProfile adopt SLF4J without it being under Jakarta or Eclipse?

    I suppose it should be okay, as it's MIT licensed. OpenTracing is also not Jakarta or Eclipse.
     
    Finally, I'm going to make a final plea to the MP community to respond/vote, whether it's a yes or a no, as to whether logging is sufficient a problem and one for MicroProfile to attempt solve. I think we need a clear indication as to whether this warrants continued effort. Any suggestions on how best to capture a vote?

    If the vote does indicate it is something worth pursuing, maybe we can put something more formal around pushing this forward? How best to select a direction and how to bring those wanting to be involved more closely together?

    I'd also like to thank everyone for their continued efforts in this thread, I realise it's going on for the long haul.


    What do you think of trying a completely different approach, one which resembles the `@Traced` annotations, etc.? We define a `@Logged` annotation that is then picked up by, e.g., a CDI interceptor to produce a log entry. As an application developer, I can either annotate the methods that I want to see in my logs (the message would be derived from the method name or be explicit in the annotation), or I define an interface with several methods for the events I want to log (e.g. `paymentReceived`), inject it into my code, and call it when required. This is the basic idea of what I've done with the logging-interceptor (https://github.com/t1/logging-interceptor). Should I try to build a spec from there?
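
    Roughly along these lines (the names are hypothetical and the real logging-interceptor differs in detail):

        import java.lang.annotation.*;
        import java.math.BigDecimal;
        import javax.inject.Inject;
        import javax.interceptor.InterceptorBinding;

        @InterceptorBinding
        @Retention(RetentionPolicy.RUNTIME)
        @Target({ElementType.METHOD, ElementType.TYPE})
        @interface Logged {}                       // picked up by a CDI interceptor

        // Variant 1: annotate business methods; the interceptor derives the message
        // from the method name and arguments (or from a value on the annotation).
        class OrderBoundary {
            @Logged
            public void placeOrder(String customerId, int quantity) { /* ... */ }
        }

        // Variant 2: an injectable "log events" interface, one method per event.
        interface PaymentEvents {
            void paymentReceived(String customerId, BigDecimal amount);
        }

        class PaymentService {
            @Inject PaymentEvents events;          // implementation generated/proxied by the runtime
            void onPayment(String customerId, BigDecimal amount) {
                events.paymentReceived(customerId, amount);
            }
        }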


    Have fun!
    Rüdiger

    Emily Jiang

    unread,
    Jan 6, 2020, 6:44:04 AM1/6/20
    to Eclipse MicroProfile
    I am wondering whether Alex and others are interested in joining Open Telemetry (https://github.com/open-telemetry/opentelemetry-java/) to add the requirements on logging. I noticed Open Telemetry did try to accommodate logging but, due to time constraints (according to Pavol), they dropped logging in the first release. It might be nice if we could work with them to get the logging API done there and then reconsume Open Telemetry.

    My 2 cents.
    Emily

    Alex Lewis

    unread,
    Feb 25, 2020, 5:31:53 AM2/25/20
    to microp...@googlegroups.com
    Hi, Sorry once again for the late response. I've recently become a dad which has meant that the limited free time I had previously is now filled with feeding, nappies, cuddles...

    Rüdiger, Thank you for reaching out to Ceki, I'd be interested to hear what he says. I'll respond to your points below but I think Emily's point is a good one. 

    I'm resigning myself to this being a problem that MP cannot solve by simply defining/selecting an API. I think there needs to be a joint effort between the logging vendors if, and only if, there could be an agreement from them to deprecate their own APIs in favour of the jointly created API. I will try to reach out to each of them to see if I get any response.

    I will certainly take a look at your logging-interceptor. I have taken a look in the past but I want to make sure I understand it fully.


    Rüdiger, on your other points...

    There is some implementation in my proposal but it's only for the integration with OpenTracing. MP OpenTracing already depends on OpenTracing although OpenTelemetry will change that. Other than that, the implementation is intended to provide sufficient framework for a logging implementation to be "injected". That could be a bridge to SLF4J or it could be a container supplied direct implementation to a runtime specific logging framework.

    As far as I know, the dependency on CDI is fine as MicroProfile has a global dependency on CDI, JAX-RS, JSON-P and Annotations, so the outlier in my proposal is JSON-B. Having said that, as JSON-B is a JEE 8 specification I'd like to think it wouldn't be a deal breaker. As MP Config is also an API, the Logging proposal depends on the API, not a specific implementation, and cross-spec dependencies are OK, I think. The dependency on MP Config could be dropped in favour of an explicit API, allowing an application to use MP Config or something else.

    The reason for the typed LogEvent was to enable an application to populate whatever data it sees fit, but in a structured form that translates nicely into JSON. Once the LogEvent is converted into JSON, it can make filtering/searching easier in Log Aggregation tools such as ELK and Loki. I did consider adding a Map to the base LogEvent to act as general-purpose storage of values, but I left it out to see whether anyone asked for it.

    The LogEvent wasn't intended to be populated by the container; it's purely an Application scoped object. The only automatically populated value was the spanId, based on the integration with OpenTracing. The LogEventSupplier mechanism is a place for the Application to hook up the pre-population of (possibly application specific) LogEvents. An example of sorts is provided in the test code of the proposal. The theory would be that the Application can look up the values it wants to log and add them to the log events it creates; the tracking of the desired values would be up to the application. That could be done using an MDC that the Supplier hooks into or by some other mechanism, it's up to the Application to choose.
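
    In rough shape it would look something like this; these are emphatically not the proposal's exact types, just an illustration of the idea:

        import java.util.function.Supplier;

        class LogEvent {
            String spanId;                 // filled in via the OpenTracing integration
        }

        class OrderLogEvent extends LogEvent {
            String customerId;             // application-chosen structured data
            String orderId;
        }

        class OrderContext {
            // Stand-in for however the application tracks per-request values
            // (an MDC, a ThreadLocal, a request scope, ...).
            static final ThreadLocal<String> CUSTOMER = new ThreadLocal<>();
            static final ThreadLocal<String> ORDER = new ThreadLocal<>();
        }

        class OrderLogEventSupplier {
            static final Supplier<OrderLogEvent> INSTANCE = () -> {
                OrderLogEvent event = new OrderLogEvent();
                event.customerId = OrderContext.CUSTOMER.get();
                event.orderId = OrderContext.ORDER.get();
                return event;              // the Logger would attach this to the JSON output
            };
        }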

    The integration with OpenTracing does two things. It populates the Span ID in a LogEvent but also enables the application to log to OpenTracing; the Logger will automatically detect the active Span and log to it. There's a section in the Spec proposal, but the idea is to reduce the boilerplate required for Span logging and the duplication of code if you want to log the same messages to the underlying logging framework as well. As such, my proposal provides a means of hiding that boilerplate as well as dealing with logging to both the Span and locally, and putting that behind a single Logging API.

    Cheers



    David Lloyd

    unread,
    Feb 25, 2020, 3:00:18 PM2/25/20
    to microp...@googlegroups.com
    On Tue, Feb 25, 2020 at 4:32 AM Alex Lewis <alex.l...@gmail.com> wrote:
    >
    > Hi, Sorry once again for the late response. I've recently become a dad which has meant that the limited free time I had previously is now filled with feeding, nappies, cuddles...

    Congratulations!

    > I'm resigning myself to this being a problem that MP cannot solve by simply defining/selecting an API. I think there needs to be a joint effort between the logging vendors if, and only if, there could be an agreement from them to deprecate their own APIs in favour of the jointly created API. I will try to reach out to each of them to see if I get any response.

    As the original author of (and still part-time contributor to)
    jboss-logging and jboss-logmanager, I think this is a reasonable
    request. I'd like to point out that the key to success here though is
    to ensure that the existing diverse use cases of the existing APIs are
    identified and ultimately met. If happy users of existing APIs cannot
    meet their use cases by way of the new API (whatever it ends up
    being), users will not want to switch, and authors (myself included)
    will not want to deprecate their APIs, and we'll be back to the "N+1
    standards" problem. OTOH, by making this a promise (to identify and
    meet the existing APIs' use cases as an acceptance criterion),
    developers of existing log APIs should be able to participate without
    any reservation or cynicism based on the fear that they will be left
    out or left behind.

    --
    - DML

    Werner Keil

    unread,
    Feb 25, 2020, 3:02:43 PM2/25/20
    to Eclipse MicroProfile
    Congratulations.

    Rüdiger zu Dohna

    unread,
    Feb 26, 2020, 7:03:05 AM2/26/20
    to Eclipse MicroProfile
    Hi!

    Congratulations for the baby! Take your time, it won't come back :-)


    On Tuesday, February 25, 2020 at 11:31:53 AM UTC+1, Alex Lewis wrote:
    Rüdiger, Thank you for reaching out to Ceki, I'd be interested to hear what he says. I'll respond to your points below but I think Emily's point is a good one.

    Ceki answered: "I tend to avoid discussions on logging. However, I'll chime in if I have something interesting to say."

    I'm taking this as: he's learned that these discussions lead nowhere. While I've only had a fraction of the discussions he obviously had, I have the same impression. I don't really understand why this is so; my gut feeling says that it might be because it's just so tremendously easy to underestimate all the complexities lurking in the logging domain.

    I'm resigning myself to this being a problem that MP cannot solve by simply defining/selecting an API. I think there needs to be a joint effort between the logging vendors if, and only if, there could be an agreement from them to deprecate their own APIs in favour of the jointly created API. I will try to reach out to each of them to see if I get any response.

    I'm looking forward to your results.

    I will certainly take a look at your logging-interceptor. I have taken a look in the past but I want to make sure I understand it fully.

    I'd be happy to answer any questions you may come up with.

    There is some implementation in my proposal but it's only for the integration with OpenTracing. MP OpenTracing already depends on OpenTracing although OpenTelemetry will change that. Other than that, the implementation is intended to provide sufficient framework for a logging implementation to be "injected". That could be a bridge to SLF4J or it could be a container supplied direct implementation to a runtime specific logging framework.

    As far as I know, the dependency on CDI is fine as MicroProfile has global dependency on CDI, JAXRS, JSON-P and Annotation so, the outlier in my proposal is JSON-B. Having said that, as JSON-B is a JEE 8 specification I'd like to think it wouldn't be a deal breaker. As MP Config is also an API, the Logging proposal depends on the API not a specific implementation and cross-spec dependencies are ok, I think. The dependency on MP Config could be dropped in favour of an explicit API, allowing an application to use MP Config or something else.

    That's true, but when a logging API wants to be accepted universally, that also includes libraries; and they will not accept those dependencies.

    The reason for the typed LogEvent was to enable an application to populate whatever data it sees fit but in a structured form that translates nicely into JSON. Once the LogEvent is converted into JSON, it can make filtering/searching easier in Log Aggregation tools such as ELK and Loki. I did consider adding a Map to the base LogEvent to act as a general purpose storage of values but I left it to out to see whether anyone asked for it.

    The LogEvent wasn't intended to be populated by the container, it's purely an Application scoped object. The only automatically populated value was the spanId, based on the integration with OpenTracing. The LogEventSupplier mechanism is a place for the Application to hook up the pre-population of (possibly application specific) LogEvents. An example of sorts is provided in the test code of the proposal. The theory would be the Application can look up the values it wants to log and add them to the log events it creates; the tracking of the desired values would be up to the application. That could be done using an MDC that the Supplier hooks into or by some another mechanism, it's up to the Application to choose.

    This is similar to the Message objects defined by Log4j 2.0.
     
    The integration with OpenTracing does two things. It populates the Span ID in a LogEvent but also enables the application log to OpenTracing; the Logger will automatically detect the active Span and log to it. There's a section in the Spec proposal but the idea is reduce the boilerplate required for Span logging and the duplication of code if you want to log the same messages to the underlying logging framework as well. As such, my proposal provide a means of hiding that boilerplate as well as dealing with logging to both the Span and locally and putting that behind a single Logging API.

    I think that there are other, similar use-cases where metadata can be provided by the container or libraries you include. One example is the version of the application, which could be taken from the manifest Implementation-Version. This could be a standard, so that I don't have to repeat it for all of my applications.

    BTW: I have to correct myself. I claimed that slf4j is probably supported by all containers, but that's not true. I worked mainly with Wildfly in the past, and they do support it out of the box, so I assumed that this was a natural thing to do. But at least Payara and Open Liberty don't, so applications that want to be portable have to either bring their own jul-bridge or use jul instead.


    Cheers!
    Rüdiger 
