'Dynamic' log level (specific to given web request)


Madhu Narasimhan

Feb 8, 2021, 11:20:20 PM2/8/21
to Flogger Discuss

We are currently using Flogger with JUL as backend. There is a new requirement to control log level based on some criteria in the current web request (we have a way to determine log level for different web requests). One brute force method is to wrap each log statement with a check for whether that log statement’s at<Level> is loggable for the web request.

Is there any way to achieve the requirement without modifying the log statements (or with minimal modification), short of implementing a custom logger?

Chris Baron

Feb 9, 2021, 12:14:07 PM2/9/21
to Madhu Narasimhan, Flogger Discuss
You could use logger.at(), e.g.

logger.at(getLogLevel()).log(...);

public static Level getLogLevel() {
  if (webRequest.shouldUseInfoLevel()) {
    return Level.INFO;
  }
  return Level.WARNING;
}


--
You received this message because you are subscribed to the Google Groups "Flogger Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to flogger-discu...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/flogger-discuss/391f7835-e0e1-4c49-8644-98e4459f46c9n%40googlegroups.com.

David Beaumont

Feb 9, 2021, 12:54:30 PM2/9/21
to Chris Baron, Madhu Narasimhan, Flogger Discuss, Colin Decker, Nick Glorioso
While you are free to use "at(level)", it's a little odd to want to change the level based on external signals.

Log levels express the "type" of a log statement and relate to its place within the code ("FINE" logging is often very different from "WARNING" logging in terms of what you will log).
Since this is all based on the code and local context, it isn't obvious to me that it's a good idea to change it dynamically.

When the next release of Flogger is out (hopefully soon) there will be a new feature which will let you do this sort of thing much more intuitively.
Essentially you'll be able to say:
* Increase log level inside this request (all logs are affected)
* Increase log level inside this request for just some set of classes/packages
* Add extra tags and metadata to logs created in just this request

This should let you control per-request logging without needing to dig into code and change each log statement.

You can even use some of this now (I think) if you're willing to get your hands a bit dirty with some extra config.

David






--
David Beaumont :: Îñţérñåţîöñåļîžåţîờñ Libraries :: Google
Google Switzerland GmbH., Brandschenkestrasse 110, CH-8002, Zürich - Switzerland

Madhu Narasimhan

Feb 9, 2021, 9:41:55 PM2/9/21
to Flogger Discuss

Thanks, Chris and David for your responses.

To clarify, the requirement is only to customize the log level configuration for each web request. The log statements themselves would keep the fixed levels they are intended for.

In our code, class com.my.xyz.abc has this example code.

log.atInfo().log("Info message");
log.atFine().log("Debug message");
log.atFinest().log("Trace message");

A single application instance forwards incoming requests to multiple environments (say, Production and Test). It has a way of knowing the target environment for each request. The application supports configuration that is conceptually like the one below.

Production:

  Log-levels:

    com.my.xyz.pqr: Fine

    com.my.xyz.abc: Info

Test:

  Log-levels:

    com.my.xyz: Fine

When a request is for the Production environment, only the first log statement should be printed. When it is for the Test environment, the first two log statements should be printed.

The brute-force method is to modify each log statement like below.

if (webRequestLoggable(INFO)) {

  log.atInfo().log("Info message");

}

Of course, the above will only work if the application is configured with logger com.my set to Finest! Cumbersome and not preferred at all, but the log messages will have the correct log level indicator.
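For illustration, here is a self-contained sketch of that kind of brute-force check, assuming a hypothetical thread-local that a request filter sets to the effective level for the current request (all names here are made up, not Flogger API):

```java
import java.util.logging.Level;

public class RequestLogGate {
  // Hypothetical: a request filter sets this to the effective level for the request.
  private static final ThreadLocal<Level> webRequestLevel =
      ThreadLocal.withInitial(() -> Level.INFO);

  static void setRequestLevel(Level level) {
    webRequestLevel.set(level);
  }

  // True if a statement at 'statementLevel' should be emitted for this request.
  static boolean webRequestLoggable(Level statementLevel) {
    return statementLevel.intValue() >= webRequestLevel.get().intValue();
  }

  public static void main(String[] args) {
    setRequestLevel(Level.INFO);   // a "Production" request
    System.out.println(webRequestLoggable(Level.INFO));   // true
    System.out.println(webRequestLoggable(Level.FINE));   // false

    setRequestLevel(Level.FINE);   // a "Test" request
    System.out.println(webRequestLoggable(Level.FINE));   // true
    System.out.println(webRequestLoggable(Level.FINEST)); // false
  }
}
```

A request filter would call setRequestLevel() on entry, and every log statement would then have to guard itself with webRequestLoggable() — which is exactly the per-statement modification that makes this approach cumbersome.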

David, you say the next release of Flogger will have the feature “Increase log level inside this request for just some set of classes/packages”. Does this map to my requirement? That is, some known thread local map of logger-name -> log-level will be supported and Flogger will use those levels as the overriding loggable levels for the duration of the request?

- Madhu

Chris Baron

Feb 10, 2021, 12:11:46 PM2/10/21
to Madhu Narasimhan, Flogger Discuss
If you are using the SimpleLoggerBackend which uses java.util.logging.Logger, then at(Level.OFF) should not log: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/logging/Logger.java#l1708

e.g.

logger.at(getLogLevel()).log(...);

public static Level getLogLevel() {
  if (webRequestLoggable()) {
    return Level.INFO;
  }
  return Level.OFF;
}

David Beaumont

Feb 10, 2021, 12:19:18 PM2/10/21
to Chris Baron, Madhu Narasimhan, Flogger Discuss
Log level OFF is not designed to be passed into log statements.
It is a sentinel value *higher* than SEVERE and is intended only as the log level to configure a logger to in order to suppress logging, not a log level to pass into a log statement.
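The sentinel nature of OFF is visible directly in the JUL level values; a small check using plain java.util.logging (no Flogger involved):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OffSentinel {
  public static void main(String[] args) {
    System.out.println(Level.SEVERE.intValue()); // 1000
    System.out.println(Level.OFF.intValue());    // 2147483647 (Integer.MAX_VALUE)

    Logger logger = Logger.getLogger("demo");

    // A logger *configured* at OFF rejects everything...
    logger.setLevel(Level.OFF);
    System.out.println(logger.isLoggable(Level.SEVERE)); // false

    // ...but a *statement* at OFF passes any configured threshold below OFF,
    // because OFF compares as "more severe than SEVERE".
    logger.setLevel(Level.SEVERE);
    System.out.println(logger.isLoggable(Level.OFF));    // true
  }
}
```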

Using the new scoped contexts when they are available is the right approach and will avoid any need to hack log statements directly.

Cheers,
    David

Madhu Narasimhan

Feb 10, 2021, 3:45:27 PM2/10/21
to Flogger Discuss
Copying David's direct reply to me to the group (for the sake of completeness). This gives more details on the approach with the 'new scoped contexts'.

=====Excerpt from my posting =====

David, you say the next release of Flogger will have the feature “Increase log level inside this request for just some set of classes/packages”. Does this map to my requirement? That is, some known thread local map of logger-name -> log-level will be supported and Flogger will use those levels as the overriding loggable levels for the duration of the request?

=====David's response =====


But the gRPC Context based implementation which would work with open-source isn't pushed yet.

I think that will suit your use case very well.

Best regards,
    David

David Beaumont

Feb 10, 2021, 4:08:36 PM2/10/21
to Madhu Narasimhan, Colin Decker, Nick Glorioso, Flogger Discuss
Hopefully, if you're willing to build from the source, you should have what you need to test this out now:

I'll try and get time this w/e to do a fresh github build locally and verify it's all working as intended, but I think it's there.

Simplest test:

ScopedLoggingContext.getInstance()
    .newScope()
    .withTags(Tags.of("foo", "bar"))
    .run(() -> logger.atInfo().log("Should add tags here --> "));

Then you can play with LogLevelMap to change effective log level in tests or per request (and note that if you do "force" extra logging, it will bypass rate limiting as well, which means you definitely see all the logs in the request). If this isn't working, there will be no tags and I think there'll be a stderr warning emitted about using contexts while unconfigured.

HTH,
    David


Madhu Narasimhan

Feb 18, 2021, 2:47:45 AM2/18/21
to Flogger Discuss
David,

As you suggested, I cloned the project and studied the source code. I have some questions.

1) I have only used Maven to build my projects. It looks like a different build tool is used by Flogger. Can you give some instructions on how to build the project?

2) I had made the simplistic assumption that the log level map would work like simple thread-local data used by Flogger log statements. Looking at your example (in the previous post), I am not so sure. Please confirm whether the following type of logic will work for me.

MyResourceHandingMethod() {
    LoggingScope context = ScopedLoggingContext.getInstance().withNewScope().applyLogLevelMap(myLogLevelMap);
    Process request and prepare response
    context.close();
    return response;
}

If the above will work, I can conveniently create context in request filter and close it in response filter (leaving the rest of my code base blissfully unaffected!).

Thanks.

- Madhu

David Beaumont

Feb 18, 2021, 5:23:47 AM2/18/21
to Madhu Narasimhan, Flogger Discuss

That logic is (mostly) functionally what happens, but not the recommended way to do it.

For your example it should be:

ResponseType MyResourceHandingMethod() {
    return ScopedLoggingContext.getInstance()
        .withNewScope()
        .applyLogLevelMap(myLogLevelMap)
        .call(() -> {

            Process request and prepare response
            return response;
        });
}

Then you won't break anything if there's an exception in your code and the context is not closed correctly.


> If the above will work, I can conveniently create context in request filter and close it in response filter (leaving the rest of my code base blissfully unaffected!).

If you want to have the explicit scope object stored somewhere, that's fine, but just be very very sure you will always close it properly in reverse order (even in the face of exceptions and even things like out-of-memory-errors in your code). Leaving it in an unclosed state and then using it again is going to cause undefined behaviour.
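The exception-safe, reverse-order unwinding described here is exactly what Java's try-with-resources gives you; a generic sketch using a stand-in AutoCloseable (not the Flogger API) showing contexts closing innermost-first even when the body throws:

```java
import java.util.ArrayList;
import java.util.List;

public class ReverseClose {
  static final List<String> closed = new ArrayList<>();

  // Stand-in for an installed logging context; real code would use install().
  static AutoCloseable context(String name) {
    return () -> closed.add(name);
  }

  public static void main(String[] args) {
    try (AutoCloseable outer = context("outer");
         AutoCloseable inner = context("inner")) {
      throw new RuntimeException("request failed");
    } catch (Exception expected) {
      // Both contexts were closed, innermost first, despite the exception.
      System.out.println(closed); // [inner, outer]
    }
  }
}
```

This is also why the call()/run() style is preferable to storing the scope object somewhere: the language guarantees the unwinding for you.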

Also, you need to call "install()" to actually get the closeable thing (that's missing in your example).

HTH,
    David



Madhu Narasimhan

Feb 18, 2021, 11:01:24 AM2/18/21
to Flogger Discuss
Got it. I had mistakenly left out the install() call.

Can you please address my first question regarding building from source?

Thanks.
- Madhu

David Beaumont

Feb 18, 2021, 11:38:06 AM2/18/21
to Madhu Narasimhan, Flogger Discuss, Nick Glorioso, Colin Decker
Ahh sorry, I had that written originally and then somehow lost it.

It's built using http://bazel.build

Once you have installed Bazel you can build the library and package the pom file from there.

Hopefully there will be a release soon that fixes this.

David


Madhu Narasimhan

Feb 22, 2021, 2:44:27 AM2/22/21
to Flogger Discuss

David,

I downloaded Bazel 4.0.0 and tried to build LOCAL-SNAPSHOT version of flogger. Here is the procedure I followed and the issues I faced. Please let me know if I am doing something fundamentally wrong.

Based on instructions in an older Flogger Discuss thread titled “How to build Flogger from source”, I executed the command “bazel build --javacopt='-Xlint:-options' …”

It failed towards the end (see below).

=====

Starting local Bazel server and connecting to it...

INFO: Analyzed 77 targets (133 packages loaded, 1263 targets configured).

INFO: Found 77 targets...

INFO: From Action google/flogger_javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action grpc/javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action api/api_javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action log4j/javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action api/system_backend_javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action slf4j/javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action log4j2/javadoc.jar:

Creating destination directory: "tmp/"

INFO: From Action api/testing_javadoc.jar:

Creating destination directory: "tmp/"

ERROR: /Users/mnarasim/IdeaProjects/flogger/tools/BUILD:21:16: Action tools/all_javadoc.jar failed: (Exit 1): bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox

javadoc: error - Error fetching URL: https://google.github.io/guava/releases/24.1-jre/api/docs/

javadoc: error - Error fetching URL: https://google.github.io/truth/api/0.40/

javadoc: error - Error fetching URL: http://errorprone.info/api/latest/

javadoc: error - Error fetching URL: https://junit.org/junit4/javadoc/4.11/

Creating destination directory: "tmp/"

4 errors

INFO: Elapsed time: 668.182s, Critical Path: 630.64s

INFO: 256 processes: 156 internal, 44 darwin-sandbox, 56 worker.

FAILED: Build did NOT complete successfully

=====

In spite of the above errors, I could see that directories bazel-* had been created.

Not knowing anything better, I executed the following commands.

cd bazel-bin/api

mvn install -f api_pom.xml

mvn install -f system_backend_pom.xml

Both the above commands were successful, but said “No sources to compile”

I could see that LOCAL-SNAPSHOT jar files were created in the Maven cache directory.

~/.m2/repository/com/google/flogger/flogger contained another directory LOCAL-SNAPSHOT parallel to the one for 0.5.1.

Likewise, ~/.m2/repository/com/google/flogger/flogger-system-backend/LOCAL-SNAPSHOT was also created.

Both flogger-LOCAL-SNAPSHOT.jar and flogger-system-backend-LOCAL-SNAPSHOT.jar had no classes inside! Hence, my application build failed (with flogger version specified as LOCAL-SNAPSHOT in my pom.xml).

- Madhu

David Beaumont

Feb 22, 2021, 5:36:35 AM2/22/21
to Madhu Narasimhan, Colin Decker, Nick Glorioso, Flogger Discuss
Adding a couple more people who might be able to help.

David

Madhu Narasimhan

Feb 23, 2021, 2:34:31 AM2/23/21
to Flogger Discuss
David,

I was able to make progress as below.
  • Copied libapi.jar and libsystem_backend.jar (from bazel-bin/api) into Maven's cache directory (~/.m2) as flogger-LOCAL-SNAPSHOT.jar and flogger-system-backend-LOCAL-SNAPSHOT.jar respectively. (This is where my two Maven-built LOCAL-SNAPSHOT jar files had contained NO classes after 'mvn install' of the corresponding pom files!)
  • Rebuilt my application, specifying the flogger version as LOCAL-SNAPSHOT and using your suggested simple ScopedLoggingContext call to log tags.
  • It failed to print the tags, with the error message below.

com.google.common.flogger.backend.NoOpContextDataProvider.NoOpScopedLoggingContext.LazyLogger | Scoped logging contexts are disabled; no context data provider was installed.

To enable scoped logging contexts in your application, see the site-specific Platform class used to configure logging behaviour.

Default Platform: com.google.common.flogger.backend.system.DefaultPlatform

  • Tomorrow I will try adding the dependency flogger-grpc-context and setting the system property flogger.logging_context to the gRPC implementation. This is based on what I learnt from the source of DefaultPlatform.java.

Thanks for all the help so far.

- Madhu

David Beaumont

Feb 23, 2021, 6:33:26 AM2/23/21
to Madhu Narasimhan, Colin Decker, Nick Glorioso, Flogger Discuss
It does sound like all you need is the system property and the gRPC jar on the class path.

Sorry I've not been able to offer more useful advice here, but there should be a proper Maven release soon and I'll make sure the gRPC context stuff is part of it.

Note that the gRPC code in Flogger should be depending on the open-source gRPC library, so if things still aren't working, just make sure that's being pulled in as expected. 

David

Colin Decker

Feb 24, 2021, 5:00:18 PM2/24/21
to David Beaumont, Madhu Narasimhan, Nick Glorioso, Flogger Discuss
RE: installing local snapshots of the various Flogger maven artifacts, I believe all you should need to do is run release/install-local-snapshot.sh. I'll be working on getting a new release out soon though.
--
Colin Decker | Software Engineer | cgde...@google.com | Java Core Libraries Team

Madhu Narasimhan

Feb 25, 2021, 11:15:18 PM2/25/21
to Flogger Discuss
Colin,

Thanks for the info. I will try install-local-snapshot.sh sometime (I already got the maven artifacts in a circuitous way!)

Question: Is there a tentative date for the next release of Flogger (or maybe a beta one?)

David,

I have successfully tested both Tags and LogLevelMap features with my application (as a quick PoC). Thanks for all the guidance.

By the way, setting 'root' as the key string in LogLevelMap does not seem to work (for example, if I set FINE for root and INFO for com.abc, only the logger com.abc should use INFO and everything else should use FINE). I did not use LogLevelMap Builder. I directly created a LogLevelMap and used withLogLevelMap() on context. Is 'root' not intended to work?

- Madhu

David Beaumont

Feb 26, 2021, 5:16:11 AM2/26/21
to Madhu Narasimhan, Flogger Discuss
There's a default value for a LogLevelMap; use that.
"root" is not a special logger name; it's the name for all loggers in packages starting with "root.xxx".
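To illustrate, here is a stand-in for that kind of lookup (a sketch, not LogLevelMap's actual implementation): the entry with the longest dot-delimited prefix of the logger name wins, falling back to a default, so "root" only ever matches loggers literally named root or root.something:

```java
import java.util.Map;
import java.util.logging.Level;

public class PrefixLevels {
  // Stand-in lookup: longest dot-delimited prefix wins, else the default.
  static Level levelFor(String logger, Map<String, Level> map, Level dflt) {
    String name = logger;
    while (true) {
      Level found = map.get(name);
      if (found != null) {
        return found;
      }
      int dot = name.lastIndexOf('.');
      if (dot < 0) {
        return dflt;
      }
      name = name.substring(0, dot); // strip the last package segment
    }
  }

  public static void main(String[] args) {
    Map<String, Level> map = Map.of("root", Level.FINE, "com.abc", Level.INFO);
    System.out.println(levelFor("com.abc.Foo", map, Level.WARNING));   // INFO
    // "root" is just another package prefix; everything else gets the default.
    System.out.println(levelFor("com.other.Bar", map, Level.WARNING)); // WARNING
    System.out.println(levelFor("root.pkg.Baz", map, Level.WARNING));  // FINE
  }
}
```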

David

Madhu Narasimhan

Feb 26, 2021, 3:35:49 PM2/26/21
to Flogger Discuss
Thanks, David. Using default value for LogLevelMap takes care of it.

- Madhu

Madhu Narasimhan

Mar 2, 2021, 1:08:59 PM3/2/21
to Flogger Discuss
Colin/David,

Is there a tentative date for the upcoming release of Flogger (or maybe a beta one)?

- Madhu

Peiyuan Zhou

Jun 17, 2021, 2:46:18 AM6/17/21
to Flogger Discuss
Hi David and all,
My name is Pei. Madhu and I work on the same project, where we are productizing dynamic log levels.

Madhu tried this successfully. I am pretty much using all the tips Madhu has given me, but somehow it still doesn't work. So I will try to document what I have done; hopefully you experts will spot what I am missing.

We are using:
com.google.flogger:flogger:0.6
com.google.flogger:flogger-grpc-context:0.6
com.google.flogger:flogger-system-backend:0.5.1

Question: does flogger-system-backend need to be 0.6 as well?

In our Java code:
        LogLevelMap logLevelMap = loggingUtil.getLogLevelMap(); //  loggingUtil.getLogLevelMap() is our own utility to populate log levels of all apps
        final var loggingScope = ScopedLoggingContext.getInstance()
                //.newContext()
                //.withNewScope()
                .newScope()
                .withLogLevelMap(logLevelMap).install();

And we close the context when the REST session is finished.

Question: I saw that newScope() is deprecated, replaced by newContext(). I tried both; no difference. No surprise, since newScope() calls the latter.

When starting our REST server, I use the following flag:
java -jar target/service.jar -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance

Question: we don't have gRPC use cases, all our business cases are from REST calls, do we still need to depend on grpc.GrpcContextDataProvider?

Instead of GrpcScopedLoggingContext, I also have tried a few other things, like:
-Dflogger.logging_context=com.google.common.flogger.grpc.GrpcScopedLoggingContext#getInstance
-Dflogger.logging_context=com.google.common.flogger.context.ScopedLoggingContext#getInstance
-Dflogger.logging_context=com.google.common.flogger.backend.NoOpContextDataProvider#getInstance

But all have the same warning when a REST call is processed:
2021-06-16 23:04:07.791-0700 | GET | :PostmanRuntime/7.28.0:dev-pei:f65dfb1e-31a3-43a7-808f-413cf72e0278 |            helidon-1 |  |  WARN | com.google.common.flogger.backend.NoOpContextDataProvider.NoOpScopedLoggingContext.LazyLogger | Scoped logging contexts are disabled; no context data provider was installed.
To enable scoped logging contexts in your application, see the site-specific Platform class used to configure logging behaviour.
Default Platform: com.google.common.flogger.backend.system.DefaultPlatform
com.google.common.flogger.LogSiteStackTrace: SMALL
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext.logWarningOnceOnly(NoOpContextDataProvider.java:60)
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext.access$100(NoOpContextDataProvider.java:45)
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext$1.install(NoOpContextDataProvider.java:73)
at com.oracle.cx.communications.dx4c.fabric.utils.http.ContainerFilter.filter(ContainerFilter.java:72)
at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:247)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)

And dynamic log levels are not set properly. 

From the source code, quite a few classes have a getInstance() returning a ScopedLoggingContext.Builder; I am not confident that I have the right class and that the right methods get called...

Any tips would be really appreciated.

Thanks a lot  --pei

David Beaumont

Jun 17, 2021, 8:42:07 AM6/17/21
to Peiyuan Zhou, Flogger Discuss
Hi, I'll try to answer questions inline.

On Thu, 17 Jun 2021 at 08:46, Peiyuan Zhou <pzhouc...@gmail.com> wrote:
Hi David n all,
My name is Pei. Madhu n I work on the same project where we are making dynamic log level productized...

Madhu tried that successfully. I am pretty much using all the tips Madhu has given to me, but somehow still don't work. So I will try to document what I have done, hopefully your guys are experts, and know what I am missing.

We are using:
com.google.flogger:flogger:0.6
com.google.flogger:flogger-grpc-context:0.6
com.google.flogger:flogger-system-backend:0.5.1

Question: does the flogger-system-backend needs to be 0.6 as well?

If you haven't tried that, it's worth it, but I don't *think* it should matter (context stuff is fairly independent). Use whatever the latest versions are.

In our Java code:
        LogLevelMap logLevelMap = loggingUtil.getLogLevelMap(); //  loggingUtil.getLogLevelMap() is our own utility to populate log levels of all apps
        final var loggingScope = ScopedLoggingContext.getInstance()
                //.newContext()
                //.withNewScope()
                .newScope()
                .withLogLevelMap(logLevelMap).install();

And we close the context when the REST  session is finished

Question: I saw newScope() is deprecated, replaced by newContext(), tried both, no difference. No surprise since  newScope() is calling the later.

newContext() is the right thing to use now, and you can use the helper class ScopedLoggingContexts for simpler code:

final var loggingScope = ScopedLoggingContexts.newContext().withLogLevelMap(logLevelMap).install();

Though if you can use a method like "call()" or "run()", all the better since it avoids needing to set up a try/catch block to unwind the context.

When starting our REST server, I use the following flag:
java -jar target/service.jar -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance

That looks right, but if this is in a shell, maybe quote the flag value in case '#' does something you don't expect.

-Dflogger.logging_context='com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance'

Question: we don't have gRPC use cases, all our business cases are from REST calls, do we still need to depend on grpc.GrpcContextDataProvider?

gRPC context is very general, and is a subset of gRPC code which deals with trace propagation. The use of RPC in the name isn't important.

What is important however is that the context needs to be propagated when you start new threads etc. 

You'll need to use the Context class to propagate context data into worker thread pools and other task related threads.
Java doesn't have a nice way to manage this sort of propagation automatically.

However looking at the trace below, this is not your immediate problem.

Instead of GrpcScopedLoggingContext, I also have tried a few other things, like:
-Dflogger.logging_context=com.google.common.flogger.grpc.GrpcScopedLoggingContext#getInstance
-Dflogger.logging_context=com.google.common.flogger.context.ScopedLoggingContext#getInstance
-Dflogger.logging_context=com.google.common.flogger.backend.NoOpContextDataProvider#getInstance

But all have the same warning when a REST call is processed:
2021-06-16 23:04:07.791-0700 | GET | :PostmanRuntime/7.28.0:dev-pei:f65dfb1e-31a3-43a7-808f-413cf72e0278 |            helidon-1 |  |  WARN | com.google.common.flogger.backend.NoOpContextDataProvider.NoOpScopedLoggingContext.LazyLogger | Scoped logging contexts are disabled; no context data provider was installed.
To enable scoped logging contexts in your application, see the site-specific Platform class used to configure logging behaviour.
Default Platform: com.google.common.flogger.backend.system.DefaultPlatform
com.google.common.flogger.LogSiteStackTrace: SMALL
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext.logWarningOnceOnly(NoOpContextDataProvider.java:60)
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext.access$100(NoOpContextDataProvider.java:45)
at com.google.common.flogger.backend.NoOpContextDataProvider$NoOpScopedLoggingContext$1.install(NoOpContextDataProvider.java:73)
at com.oracle.cx.communications.dx4c.fabric.utils.http.ContainerFilter.filter(ContainerFilter.java:72)
at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:247)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)

This seems like the system property didn't take effect, or the class wasn't on the class path.
If you can add code in your application to test for the existence of the GrpcContextDataProvider class (e.g. Class.forName()) then you can confirm if it's there or not.
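A minimal presence check along the lines David suggests (the class name is the one from the stack trace above; adjust as needed):

```java
public class ClasspathCheck {
  // Returns true if the named class can be loaded from the classpath.
  static boolean isPresent(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // In your application you'd check the provider class, e.g.:
    System.out.println(
        isPresent("com.google.common.flogger.grpc.GrpcContextDataProvider"));
  }
}
```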

This is the code which looks up the context data provider:

If you have the sources for the version you are depending on, you should double check it's the same there as well.

And dynamic log levels are not set properly. 

Yes, this is expected if there's no data provider installed.

From source code,  quite a few classes have getInstance() to return ScopedLoggingContext.Builder, I am not confident that I have the right class and right methods get called...

The context builder "getInstance()" is completely different to the "get the context data provider singleton" method we're talking about here.
The method you need called is:


If you can debug with breakpoints, you should put one on that method and the code in DefaultPlatform.java I linked to above. If this method exists and is being called then something very odd is going on, but I doubt that.

So my advice is:
1) get the latest versions of everything (just to be sure) and inspect code in DefaultPlatform if possible to make sure it's the same as I linked above.
2) add logging to your code to check if the provider class (com.google.common.flogger.grpc.GrpcContextDataProvider) exists.
3) debug with breakpoints (or if you can compile from source, add logging) to DefaultPlatform and GrpcContextDataProvider.

This is all fairly recent code so there's a chance I did something daft (internally in Google, and for various reasons, we use a different data provider implementation so this might not be as well tested as I'd hoped).

Good luck,
    David
 

Peiyuan Zhou

Jun 29, 2021, 1:47:47 PM6/29/21
to Flogger Discuss
Hi David, 
We found out what the problem is: the command-line argument position:
java -jar target/service.jar -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance

Once we move the -D option before -jar, it works:
java -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance -jar target/service.jar

So it seems the -D option is an argument to the JVM itself and needs to come before -jar.

We would like to explore other ways to pass the -D flag, as our deployment team has questions on how to set it in their deployment descriptor script.

We would love your expert advice on some of these options:
1. How to set this property programmatically as part of logging initialization. If it is harmless, it could go in our server's main initialization method.
2. Whether there is a way to provide JVM arguments through an environment variable.
3. How to set the -D property in the deployment descriptor script.

All help is really appreciated. --pei

Colin Decker

Jun 29, 2021, 2:03:50 PM6/29/21
to Peiyuan Zhou, Flogger Discuss
Note that very soon Flogger will no longer require the system property to load services such as the GrpcContextDataProvider. An upcoming change will allow Flogger to use ServiceLoader to load services from jars that include the proper metadata, and the jar containing GrpcContextDataProvider will be updated to include that metadata. So all you'll need to do is put the jar containing GrpcContextDataProvider on the classpath and Flogger will load and use it automatically. If the system property is set, Flogger will attempt to use that rather than ServiceLoader.

That change should be merged later this week I'm hoping, and I'd like to get a new release of Flogger including that change out in the next few weeks.

David Beaumont

Jun 29, 2021, 6:04:44 PM6/29/21
to Colin Decker, Peiyuan Zhou, Flogger Discuss
A big +1 to what Colin says. We know that using system properties isn't ideal and are trying to address that soon.

I will address a couple of things in your mail, hopefully just for posterity.

On Tue, 29 Jun 2021 at 20:03, 'Colin Decker' via Flogger Discuss <flogger...@googlegroups.com> wrote:
Note that very soon Flogger will no longer require the system property to load services such as the GrpcContextDataProvider. An upcoming change will allow Flogger to use ServiceLoader to load services from jars that include the proper metadata, and the jar containing GrpcContextDataProvider will be updated to include that metadata. So all you'll need to do is put the jar containing GrpcContextDataProvider on the classpath and Flogger will load and use it automatically. If the system property is set, Flogger will attempt to use that rather than ServiceLoader.

That change should be merged later this week I'm hoping, and I'd like to get a new release of Flogger including that change out in the next few weeks.

On Tue, Jun 29, 2021 at 1:47 PM Peiyuan Zhou <pzhouc...@gmail.com> wrote:
Hi David, 
We found out what the problem was: the command-line argument position.
java -jar target/service.jar -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance

Once we move the -D option up next to java, it works:
java -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance  -jar target/service.jar

So it seems the -D option is a JVM argument and needs to come before -jar.

We'd like to explore other ways to pass the -D flag, as our deployment team has questions about how to set it in their deployment descriptor script.

We'd love to get your expert advice on these options:
1. How to set this property programmatically as part of logging initialization. If it is harmless, it could go in our server's main initialization method.

Sadly that's just impossible (and part of the reason a system property was needed at all).
Java can load and run a huge amount of code before your main method starts, including arbitrary classes in dependent libraries referenced indirectly from the libraries you use.
This means that hundreds of log statements could happen before the first line in the main() method of your application is run, which is far too late to specify how you want logging to be configured.

System properties are one of the only ways to explicitly specify configuration options such that *all* code can read it at any point.
The only other would be environment variables, but they have other issues.
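As a toy illustration of the ordering problem (the class name and property value in this sketch are hypothetical, not real Flogger configuration): a value read during class initialization is fixed before main() has a chance to set it.

```java
public class InitOrder {
  // Resolved during class initialization, i.e. before main() runs. Library
  // code (e.g. a logging platform) may read configuration this early.
  static final String SEEN_AT_LOAD =
      System.getProperty("flogger.logging_context", "<unset>");

  public static void main(String[] args) {
    // Too late: anything that already read the property saw the old value.
    System.setProperty("flogger.logging_context",
        "com.example.MyProvider#getInstance");  // hypothetical value
    System.out.println("at class load: " + SEEN_AT_LOAD);
    System.out.println("after main set: "
        + System.getProperty("flogger.logging_context"));
  }
}
```

Run it plain and the first line shows `<unset>`; run it with `-Dflogger.logging_context=...` before the class name and the load-time value changes, which is exactly why the flag has to be on the JVM command line.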
 
2. Another option is to check for a way to provide JVM arguments through an environment variable.

We had considered adding "system property or environment variable" to look up the configuration, but this is much fiddlier for complex configuration (and the JDK doesn't need to do this for logging.properties, so we felt it best to wait).
 
3. Or how to set the -D flag property in the deployment descriptor script.

That I can't help you with, but hopefully you won't need it :)

If you do still need to specify specific configuration for things in Flogger, we can add reading it from an equivalent environment variable.
In fact we should probably do this anyway as a fallback (I'll see what the others think).

Cheers, and thanks for your patience on this.

    David

 

Peiyuan Zhou

Jul 8, 2021, 6:49:28 PM
to Flogger Discuss
Hi David, Colin n all,
We got past the issue with -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance; thanks a lot for the help.

We ran a few test cases, and dynamic log levels work in some of them. But for mysterious reasons there are a few cases where the dynamic LogLevelMap does not override the default log values. The failing cases are almost identical to the successful ones, except that they deal with different Kubernetes namespaces, which should not be a factor at all in my mind.

Do you know of any circumstance in which the dynamic LogLevelMap will not take the highest priority?

Thanks a lot again  --pei

Peiyuan Zhou

Jul 9, 2021, 2:36:57 AM
to Flogger Discuss
Or, once a LogLevelMap is created and filled, does Flogger have a way to set the map? Or does the map need to be static or similar so that Flogger will always pick it up as the current log levels?

Thanks --pei

Peiyuan Zhou

Jul 9, 2021, 6:48:18 PM
to Flogger Discuss
Hi all,
I think the mystery of our problem is exposed... We found out that the dynamic LogLevelMap can override our default logging value if the dynamic one asks for more detailed logging, but it does not work if we want less logging dynamically.

If this is true, it is not exactly what we envisioned initially, though it may be workable in most production scenarios. Flogger evaluates both the base logging value and the dynamic one, and chooses the one with finer detail.

My question is whether there is any way to configure the dynamic log level as the definitive log value, able either to increase or to decrease the logging information.

Thanks a lot  --pei

David Beaumont

Jul 9, 2021, 7:09:37 PM
to Flogger Discuss
You are correct, LogLevelMap cannot reduce the amount of logging, only increase it.

This was considered when designing this feature, but rejected for a couple of reasons:

1: It's subtle and complex.

If you want to turn logging both up and down you end up with very unclear expectations for users.
What if I turn up logging in a class, but turn it down in its package or in a method? It gets hard (but not impossible) to have a system that's simple to use without a super complex API.

2: (and I think this is more important) the use case for enabling more logging is completely distinct from the use case for turning logging down.

Per request debugging is a situation in which you might need more logs, but it's never a situation in which you want less logging.
You can already use JDK logging.properties to turn down all logging for packages and classes, and then turn it up for the requests you are debugging.
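For instance, with JUL as the backend, a logging.properties along these lines (the package name here is hypothetical) turns a chatty dependency down by default, leaving a LogLevelMap free to raise it again for a specific request:

```properties
# Global default for all loggers.
.level = INFO
# Quieten a chatty package by default; a per-request LogLevelMap can
# still turn it back up when debugging.
com.example.noisylib.level = WARNING
# Note: handlers have their own threshold too.
java.util.logging.ConsoleHandler.level = ALL
```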

I consider (and you are free to disagree here) that "too much logging" from a class or package is a bug that needs fixing.
If you control the code you can alter the log statements; otherwise you can use logging configuration to control it.
Either way it's unrelated to the desire for more logging when debugging, so I'd rather not use the same mechanism for both.

So my advice is to deal first with code that produces too much logging, by modifying the code or using logging.properties (or whatever the log4j equivalent is, if you're using that), and then you only need LogLevelMap to turn logging up when debugging a specific request.
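The behaviour described above can be sketched with JUL levels (a stdlib-only model of the observable rule, not Flogger's actual implementation): the effective level is whichever of the backend level and the per-request level is finer, so the map can only ever add logging.

```java
import java.util.logging.Level;

public class LevelMerge {
  // Effective level is the finer (lower intValue) of the two, so a
  // per-request level can enable extra logging but never suppress any.
  static Level effective(Level backend, Level perRequest) {
    return perRequest.intValue() < backend.intValue() ? perRequest : backend;
  }

  public static void main(String[] args) {
    // Backend at INFO, request asks for FINE: the finer level wins.
    System.out.println(effective(Level.INFO, Level.FINE));
    // Backend at INFO, request asks for WARNING: INFO still applies.
    System.out.println(effective(Level.INFO, Level.WARNING));
  }
}
```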

HTH,
    David

Peiyuan Zhou

Jul 12, 2021, 11:05:20 AM
to Flogger Discuss
Thanks a lot David.

Peiyuan Zhou

Jul 13, 2021, 2:14:35 PM
to Flogger Discuss
I have 1 more problem about closing the logging context...

Due to our business nature, and hence the way we code, our applications are all controlled at the front; we don't have a common place to close the dynamic logging context at the end of a given transaction.
So what we do is: at the beginning of each transaction, before we open another logging context, we close the existing logging context if it is still not null.

I have 2 envs to run/test our applications, and they behave differently regarding the previous logging context.
In one env, the previous/existing logging context is still there when the next REST call arrives, so it is closed by calling close().
While in my other test env, the previous context is null, presumably closed automatically by the time the next REST call comes in.

I haven't figured out why the behaviours are different. To make it worse, in the first env it was reported that the log level of subsequent REST calls is stuck at the level of the first call, so dynamic logging stops working. I will log into the env to debug it today.

Lmk if you have tips, much appreciated  --pei

  

Peiyuan Zhou

Jul 13, 2021, 2:46:48 PM
to Flogger Discuss
I'm also curious when the new version will be released, so that Flogger no longer requires the system property to load services such as GrpcContextDataProvider.

Colin Decker

Jul 13, 2021, 2:55:58 PM
to Peiyuan Zhou, Flogger Discuss
I don't have an exact time frame for the next release, though I'd like to get it out in the next few weeks. I want to get PR 242 merged before a release, which is partly waiting on my review.

David Beaumont

Jul 13, 2021, 3:13:00 PM
to Peiyuan Zhou, Flogger Discuss
On Tue, 13 Jul 2021 at 20:14, Peiyuan Zhou <pzhouc...@gmail.com> wrote:
I have 1 more problem about closing the logging context...

Due to our business nature, and hence the way we code, our applications are all controlled at the front; we don't have a common place to close the dynamic logging context at the end of a given transaction.
So what we do is: at the beginning of each transaction, before we open another logging context, we close the existing logging context if it is still not null.

This is probably not going to behave in a well defined way.

If I understand correctly it sounds like a request coming in will trigger other async tasks which will eventually service the request.
If you were doing the response sequentially from the same thread that received the request it would be easy (just wrap everything in a context), but that's not good server design.

In this case, you need to propagate the context into the async subtasks you create. This might require digging into the async mechanism you use to find out how to hook into task propagation.
Use the API in https://grpc.github.io/grpc-java/javadoc/io/grpc/Context.html to propagate the gRPC context on which Flogger's logging contexts are based, and then you should be able to wrap any sub-tasks to propagate things. gRPC contexts support cancellation and "fork"ing as well as other useful lifecycle operations.
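To see why the wrapping matters, here is a stdlib-only sketch of the capture-and-restore pattern that io.grpc.Context's wrap methods apply, with a plain ThreadLocal standing in for the real context (names here are illustrative only):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Propagation {
  static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

  // Capture the submitting thread's context and restore it around the task,
  // analogous to what io.grpc.Context#wrap does for gRPC contexts.
  static <T> Callable<T> wrap(Callable<T> task) {
    String captured = CONTEXT.get();
    return () -> {
      String previous = CONTEXT.get();
      CONTEXT.set(captured);
      try {
        return task.call();
      } finally {
        CONTEXT.set(previous);
      }
    };
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    CONTEXT.set("request-42");
    Callable<String> read = () -> String.valueOf(CONTEXT.get());
    // Without wrapping, the pool thread knows nothing about the context.
    String bare = pool.submit(read).get();
    // With wrapping, the context follows the task onto the pool thread.
    String wrapped = pool.submit(wrap(read)).get();
    System.out.println("without wrap: " + bare);
    System.out.println("with wrap: " + wrapped);
    pool.shutdown();
  }
}
```

The bare submission prints null because a ThreadLocal (like a request context) is bound to the thread that set it; the wrapped one carries the value across.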

Of course without knowing more about your architecture I might be missing something important about your case, but that's the sort of approach we take in Google and it works for all the async request setups we have.

I have 2 envs to run/test our applications, and they behave differently regarding the previous logging context.
In one env, the previous/existing logging context is still there when the next REST call arrives, so it is closed by calling close().
While in my other test env, the previous context is null, presumably closed automatically by the time the next REST call comes in.

I haven't figured out why the behaviours are different. To make it worse, in the first env it was reported that the log level of subsequent REST calls is stuck at the level of the first call, so dynamic logging stops working. I will log into the env to debug it today.

I don't think I can help with this, but propagating the gRPC context should fix things.

HTH,
    David


Peiyuan Zhou

Sep 21, 2021, 2:50:12 AM
to Flogger Discuss
Hi David, Colin & all,
My team at Oracle and I have a new problem regarding this per-request dynamic logging...

When we build our service jar file and run it with the flag, all is good:
java  -Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance  -jar ./app/service.jar

However, when used in combination with Docker & GraalVM (a high-performance JDK distribution designed to accelerate the execution of applications written in Java), this Flogger feature seems to be disabled, as we are seeing this warning message:
17bd-4107-841f-56587e0b9444 |            helidon-1 |  |  WARN | com.google.common.flogger.backend.NoOpContextDataProvider.NoOpScopedLoggingContext.LazyLogger | Scoped logging contexts are disabled; no context data provider was installed.

To enable scoped logging contexts in your application, see the site-specific Platform class used to configure logging behaviour.

We suspected the -D flag was not propagated correctly at run time, but a newly added debug statement shows the system property value is correct, in the same transaction where Flogger says the feature is disabled.

A few questions:
-- When does Flogger check this system property value? During its initialization, or later on?
-- Which package/jar of Flogger checks that value? GraalVM can load a specific Java package or objects either at run time or at build time. Maybe we can load the corresponding package at build time, with the -D flag, since we suspect run-time GraalVM makes the property disappear for Flogger.
-- Colin mentioned earlier, maybe 2 months ago, that the upcoming release of Flogger would not require customers to pass "-Dflogger.logging_context=com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance" as a system property. Is this new release available yet? And what is its version?
-- Last thing: is there any trick we can do in our own code to force Flogger to recognize this property value, without passing it through the command line?
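On the last question: one fragile workaround (a sketch only, and subject to the caveat David gave earlier about initialization ordering) is to set the property in a static initializer of your main class, so it runs before most library code loads:

```java
public class ServiceMain {
  static {
    // Must run before Flogger's platform is first initialized; if any
    // dependency logs during earlier class loading, this is already too late.
    System.setProperty(
        "flogger.logging_context",
        "com.google.common.flogger.grpc.GrpcContextDataProvider#getInstance");
  }

  public static void main(String[] args) {
    System.out.println(System.getProperty("flogger.logging_context"));
  }
}
```

Whether this is early enough depends entirely on class-loading order, which is why the JVM command line (or the upcoming ServiceLoader mechanism) remains the reliable option.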

Thanks a lot, and really appreciate all the help.

--pei
