Send logs to Google Logs Viewer


Athreya N Patel

Sep 9, 2020, 9:10:16 AM
to Google Stackdriver Discussion Forum


To do:
Send logs to the Google Logs Viewer from a Docker container.

Error faced: I followed this article: https://docs.docker.com/config/containers/logging/gcplogs/ and executed docker run --log-driver=gcplogs nginx

docker: Error response from daemon: failed to initialize logging driver: unable to connect or authenticate with Google Cloud Logging: rpc error: code = PermissionDenied desc = The caller does not have permission.

OS: Container Optimized OS cos-81-12871-1196-0

I also tried "The Google Cloud Logging driver for Docker" with a service account that has the logging.admin and logging.logWriter roles.

How do I send the logs to Stackdriver/Google Cloud Logging?

Please answer here or on https://stackoverflow.com/questions/63812294/failed-to-initialize-logging-driver-unable-to-connect-or-authenticate-with-goog

Igor Peshansky

Sep 9, 2020, 10:56:45 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
We've summarized the steps needed to authorize the logging agent to write to the API in https://cloud.google.com/logging/docs/agent/authorization. The steps for the gcplogs Docker driver would be similar. You'd need to make sure the service account has the permission to write to your target project. I'd also check that the Logging API is enabled in the target project.
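For concreteness, the permission setup could look like the following sketch; PROJECT_ID and SA_EMAIL are placeholders for your project and service account, and this is not runnable outside a configured GCP project:

```shell
# Sketch: give the service account log-writing rights and make sure the
# Logging API is enabled. PROJECT_ID and SA_EMAIL are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" \
    --role="roles/logging.logWriter"
gcloud services enable logging.googleapis.com --project=PROJECT_ID
```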

Please note that the gcplogs Docker driver is not maintained or supported by Google, so if you continue to have trouble, you might need to reach out to the Docker community that maintains it. We may be able to recommend alternate solutions, but we need to know more about the environment where your Docker container is running — is it a Linux GCE VM? Is it COS?
        Igor


Athreya N Patel

Sep 9, 2020, 11:50:52 AM
to Google Stackdriver Discussion Forum
The environment used to run the Docker container is cos-81-12871-1196-0.

We are facing the same problem with other versions too.


Igor Peshansky

Sep 9, 2020, 2:14:13 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
COS should have an integration with Cloud Logging that automatically ingests container logs. Have you tried enabling that? Did you run into any limitations that motivated you to deploy a custom solution?
        Igor

Athreya N Patel

Sep 9, 2020, 2:47:47 PM
to Google Stackdriver Discussion Forum
Thanks,
May I know how to achieve that integration? What should I enable?



Igor Peshansky

Sep 9, 2020, 3:42:48 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
One example of such a setup is https://stackoverflow.com/q/51014295. I'll try to locate our public docs and post back on this thread.
        Igor

Athreya N Patel

Sep 9, 2020, 11:50:01 PM
to Google Stackdriver Discussion Forum
Thank you! But the linked question talks about audit logs. Can the same be achieved with application logs from Docker?

Athreya N Patel

Sep 10, 2020, 4:33:57 AM
to Google Stackdriver Discussion Forum
How do I configure a non-default service account in COS? I guess that might be causing this issue.

Igor Peshansky

Sep 10, 2020, 11:09:51 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
I meant the second answer, which mentions the sudo systemctl start stackdriver-logging command. Have you tried that?
        Igor

Igor Peshansky

Sep 10, 2020, 11:29:57 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
Normally, you should be able to ingest logs with the default GCE service account — it usually has all the right permissions, and the gcplogs driver knows how to obtain a token from the instance metadata server. To supply a service account private key, you'd need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the Docker engine service environment (where the gcplogs driver is running) to the path of your private key json file.
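On a systemd-managed host, one way to sketch that is a drop-in for the Docker unit; the file path and key location below are assumptions for illustration, not a documented COS mechanism:

```ini
# /etc/systemd/system/docker.service.d/gcplogs-credentials.conf
# Hypothetical drop-in: points the Docker engine (and therefore the gcplogs
# driver) at a service account key. Apply with:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/var/lib/docker-keys/sa-key.json"
```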
        Igor

Athreya N Patel

Sep 13, 2020, 7:44:08 AM
to Google Stackdriver Discussion Forum
Actually, we have removed the roles from the default service account. But setting GOOGLE_APPLICATION_CREDENTIALS doesn't ingest logs either. I have also tried this: https://stackoverflow.com/questions/49983216/the-google-cloud-logging-driver-for-docker?noredirect=1&lq=1

Igor Peshansky

Sep 13, 2020, 12:48:11 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
I think we went off on a tangent… Did you see my other response about starting the stackdriver-logging service?

I meant the second answer, which mentions the sudo systemctl start stackdriver-logging command. Have you tried that?

Want to try that first?
        Igor 

Athreya N Patel

Sep 25, 2020, 4:32:21 AM
to Google Stackdriver Discussion Forum
How do I authenticate systemctl start stackdriver-logging?
I see the logs in /var/lib/docker/containers, but these logs are not being sent to the Cloud Logs Viewer.
As I am not using default credentials, is there any way to provide ADC or an sa-key.json to it?

Igor Peshansky

Sep 25, 2020, 9:14:45 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
Hmm, good question. The service account key would have to be mounted inside the logging agent container. This doesn't seem to be supported by the existing  stackdriver-logging.service definition [1], so you'd have to edit it to mount the key into the container as /etc/google/auth/application_default_credentials.json (or mount it into another location and also set the GOOGLE_APPLICATION_CREDENTIALS environment variable [2]) before starting the service.
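Purely as an illustration (the real unit contents in [1] will differ, and the paths here are hypothetical), the edit could look something like:

```
# Hypothetical sketch of the edit described above. Inside the docker run
# invocation in stackdriver-logging.service, add a read-only bind mount of
# the key at the path Application Default Credentials checks:
#
#   ExecStart=/usr/bin/docker run ... \
#       -v /var/lib/sa-key.json:/etc/google/auth/application_default_credentials.json:ro \
#       ...
#
# Then reload the unit and restart the service:
#   sudo systemctl daemon-reload
#   sudo systemctl restart stackdriver-logging
```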


Athreya N Patel

Sep 28, 2020, 4:02:34 AM
to Google Stackdriver Discussion Forum
Resolved!

But how do I send the logs as JSON?
I have edited /etc/stackdriver/logging.config.d/fluentd-lakitu.conf from the default config, but I don't see the logs with that tag:

<source>
  @type tail
  format json
  path /var/lib/docker/containers/*/*.log
  <parse>
    @type json
  </parse>
  pos_file /var/log/google-fluentd/containers.log.pos
  tag reform_contain
  read_from_head true
</source>

# Adds container_id field in container logs.
#<match reform_containers.**>
  #@type record_reformer
  #enable_ruby true
  #<record>
    # tag_parts[] looks like:
    # ['reform_containers', 'var', 'lib', 'docker', 'containers', container_id]
 #   container_id ${tag_parts[5]}
    # Renames field 'log' to a more generic field 'message'. This way Stackdriver
    # will display the log message as the summary of the log entry.
 # </record>
  #tag cos_containers
 # remove_keys log
#</match>



Thank you

Athreya N Patel

Sep 28, 2020, 5:26:28 AM
to Google Stackdriver Discussion Forum
Also, I get "Received graceful stop" in the logs if I change the config, and it always exits after a restart.

Igor Peshansky

Oct 8, 2020, 1:51:19 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
Apologies, I seem to be severely behind on email, and this fell through the cracks.

Tags are important: they are used by the subsequent stages in the fluentd pipeline to identify and forward logs down the pipeline. Specifically, the "match" sections operate on tags. Because you commented out the entire transformation, your logs will now have the reform_containers.* tags instead of the cos_containers tag, which will probably disable the matching in the actual output section. I would recommend leaving the "<match reform_containers.**>" section alone. It's hard to tell, but this may also be the cause of the agent container stopping.

You can send logs as JSON by enabling detect_json in the google_cloud output plugin configuration and setting your log messages to serialized JSON, one object per line. See https://cloud.google.com/logging/docs/agent/configuration#process-payload for details.
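As a minimal sketch (other options in the default output section are elided here), the relevant knob is:

```
<match **>
  @type google_cloud
  # Treat one-JSON-object-per-line messages as structured payloads.
  detect_json true
</match>
```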
        Igor

Athreya N Patel

Oct 12, 2020, 4:53:59 AM
to Google Stackdriver Discussion Forum
So, now I am able to send logs.
But when I use this startup script:
#! /bin/bash
head -n -2 /etc/stackdriver/logging.config.d/fluentd-lakitu.conf > /etc/stackdriver/logging.config.d/fluentd-lakitu.conf
echo """<filter cos_containers.**>
@type parser
format json
key_name message
reserve_data false
emit_invalid_record_to_error false
</filter>""">>/etc/stackdriver/logging.config.d/fluentd-lakitu.conf
sudo systemctl start stackdriver-logging
docker run -d somelogger:1


This doesn't ingest logs.
If I do the same manually, it does send logs.

Igor Peshansky

Oct 12, 2020, 4:40:10 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
This is where the COS experts would need to chime in. However, one guess I have is that if the service is already started, "systemctl start" is a no-op. Want to try "systemctl restart" instead?

As an aside, you have a couple of potential problems with your startup script. First, output redirection overwrites the file before executing the command, so "head -n -2 /etc/stackdriver/logging.config.d/fluentd-lakitu.conf > /etc/stackdriver/logging.config.d/fluentd-lakitu.conf" will not preserve any of the lines in fluentd-lakitu.conf. Second, unlike in Python, I don't believe """ is meaningful in bash.
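The redirection pitfall is easy to reproduce on a scratch file; a quick sketch (assumes GNU head, which accepts a negative -n count):

```shell
# Reproduce the pitfall: redirecting output back into the input file.
tmp=$(mktemp)
printf 'one\ntwo\nthree\n' > "$tmp"
head -n -2 "$tmp" > "$tmp"      # the shell truncates "$tmp" before head reads it
wc -c < "$tmp"                  # prints 0: all content was lost

# The copy-first idiom preserves the data.
printf 'one\ntwo\nthree\n' > "$tmp"
cp "$tmp" "$tmp-save"
head -n -2 "$tmp-save" > "$tmp"
cat "$tmp"                      # prints "one"
rm -f "$tmp" "$tmp-save"
```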

The idiom you're probably looking for is:

cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save
# Shorter version of the above: cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf{,-save}
(
head -n -2 /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save; cat <<EOF
<filter cos_containers.**>
@type parser
format json
key_name message
reserve_data false
emit_invalid_record_to_error false
</filter>
EOF
) > /etc/stackdriver/logging.config.d/fluentd-lakitu.conf

Hope this helps.
        Igor

Athreya N Patel

Oct 19, 2020, 2:18:57 PM
to Google Stackdriver Discussion Forum
Thank you!
This works.
But I get this when I restart the container:
[error]: config error file="/etc/google-fluentd/google-fluentd.conf" error_class=Fluent::ConfigError error="Other 'in_tail' plugin already use same pos_file path: plugin_id = object:17e36f4, pos_file path = /var/log/google-fluentd/containers.log.pos"

Igor Peshansky

Oct 19, 2020, 4:11:47 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
Hmm, interesting. There should be no other configs on your instance using that pos file path… It may just be that it's interpreting the leftover -save file as a config — does it help to add "rm /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save" before restarting the stackdriver-logging service?
        Igor

Athreya N Patel

Nov 1, 2020, 11:31:08 PM
to Google Stackdriver Discussion Forum
So, I am using a time field in the logs; if I remove it, Stackdriver starts ingesting the logs. Is there any way to take the timestamp from the logs?

Igor Peshansky

Nov 2, 2020, 10:09:37 AM
to Athreya N Patel, Google Stackdriver Discussion Forum
Try adding "keep_time_key true" to your parser config. That way, the output plugin will re-parse the time that was extracted from the logs (see [1]). You might also want to specify time_key, time_format, and time_type (see [2] and [3]).
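For example, if each JSON message carries an ISO-8601 "time" field (the field name and format here are assumptions about your logs), the parser section might look like:

```
<filter cos_containers.**>
  @type parser
  key_name message
  <parse>
    @type json
    # Re-extract the timestamp from the record instead of discarding it.
    time_key time
    time_type string
    time_format %Y-%m-%dT%H:%M:%S.%N%z
    keep_time_key true
  </parse>
</filter>
```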

Athreya N Patel

Nov 2, 2020, 10:55:24 AM
to Google Stackdriver Discussion Forum
Works!
Is there any way to filter out non-JSON output so it isn't considered,
or to make sure the logs are still sent even if non-JSON logs are ingested?

Igor Peshansky

Nov 2, 2020, 6:02:53 PM
to Athreya N Patel, Google Stackdriver Discussion Forum
Did we just go full circle on this? I think we've started with you asking how to ingest JSON logs…

What Docker logging driver did you end up using, json-file? That would mean that the input logs would be in JSON format. If after all transformations your record only has the "message" field and the special fields that will go into the LogEntry envelope, the "message" will be sent as the textPayload in the log entry. Otherwise, the record will be sent as the jsonPayload in the log entry.

Unless I'm misunderstanding your question. What did you mean by non-JSON logs?
        Igor

Athreya N Patel

Nov 12, 2020, 1:28:25 AM
to Google Stackdriver Discussion Forum
So, we had non-JSON logs mixed in with JSON logs, and thought we might separate those out. As you said, the record will be sent as the jsonPayload in the log entry.

I had another query: can we add "instance_group" inside "resource.labels"? From that we could query the logs for error reporting or notifications.

Thanks