Cannot see any data in Grafana UI on the docker container


Soumya Simanta

Sep 12, 2014, 10:59:49 PM
to kamon...@googlegroups.com

I cannot see any data in Grafana running inside the Docker container. 

I tried with both these images. 

docker run -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 -p 8083:8083 -p 8086:8086 -p 8084:8084 --name kamon-grafana-dashboard muuki88/grafana_graphite:latest
docker run -d -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 --name kamon-grafana-dashboard kamon/grafana_graphite

Is there a way to check the following: 

1. Kamon is generating the data from the actors
2. The data is actually getting into the backend system
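For (2), you can check from outside the JVM whether anything is reaching StatsD at all. A minimal sketch with tcpdump and netcat — the host and port come from the application.conf below, but the "stats_counts" Graphite prefix is an assumption that depends on how StatsD/Graphite are configured inside the image:

```shell
# Watch outbound StatsD traffic while the app runs; StatsD metric lines are
# plain text, so Kamon's metric keys should be visible if it is flushing.
sudo timeout 10 tcpdump -c 20 -i any -A -n udp port 8125

# Bypass Kamon entirely: inject one metric by hand, then query Graphite's
# render API (port 80 on the kamon/grafana_graphite image). If this shows
# data, the container pipeline works and the problem is on the Kamon side.
echo "test.counter:1|c" | nc -u -w1 192.168.168.206 8125
curl "http://192.168.168.206/render?target=stats_counts.test.counter&format=json&from=-10min"
```

If the hand-injected counter shows up but Kamon's metrics never do, the problem is in the application (filters, weaving), not in the container.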

This is how I've configured my Akka project. 


application.conf 

akka {
  #for Kamon io metrics
  extensions = ["kamon.metric.Metrics"]
}

kamon {

  # What should be recorded
  metrics {
    filters = [
      {
        # actors that should be monitored
        actor {
          includes = [ "*/user/*", "myactorsystem/user/myactor/my-extractor*" ] # a list of what should be included
          excludes = [ "system/*" ]                # a list of what should be excluded
        }
      },

      # not sure about this yet. Looks important
      {
        trace {
          includes = [ "*" ]
          excludes = []
        }
      }
    ]
  }

  # ~~~~~~ StatsD configuration ~~~~~~~~~~~~~~~~~~~~~~~~

  statsd {
    # Hostname and port on which your StatsD is running. Remember that StatsD packets are sent using UDP;
    # Kamon won't warn about unreachable hosts or closed ports, your data just won't go anywhere.
    hostname = "192.168.168.206"
    port = 8125

    # Interval between metrics data flushes to StatsD. Its value must be equal to or greater than the
    # kamon.metrics.tick-interval setting.
    flush-interval = 1 second

    # Max packet size for UDP metrics data sent to StatsD.
    max-packet-size = 1024 bytes

    # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
    # collection for your desired entities must be activated under the kamon.metrics.filters settings.
    includes {
      actor       = [ "*" ]
      trace       = [ "*" ]
      dispatcher  = [ "*" ]
    }

    simple-metric-key-generator {
      # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
      # this pattern:
      #    application.host.entity.entity-name.metric-name
      application = "mysystemid"
    }
  }
}

project/plugins.sbt

addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.10.0")


build.sbt 

val akkaVersion = "2.2.4"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % akkaVersion,
  "com.typesafe.akka" %% "akka-testkit" % akkaVersion,
  "org.scalatest" %% "scalatest" % "1.9.1" % "test",
  "junit" % "junit" % "4.11" % "test",
  "com.novocode" % "junit-interface" % "0.10" % "test"
)


libraryDependencies ++= Seq(
  "io.kamon" % "kamon-core_2.10" % "0.2.4",
  "io.kamon" % "kamon-statsd_2.10" % "0.2.4",
  "io.kamon" % "kamon-log-reporter_2.10" % "0.2.4",
  "io.kamon" % "kamon-system-metrics_2.10" % "0.2.4",
  "org.aspectj" % "aspectjweaver" % "1.8.2"
)

//required for kamon.io to work
aspectjSettings

javaOptions <++= AspectjKeys.weaverOptions in Aspectj

// when you call "sbt run" aspectj weaving kicks in
fork in run := true
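Note that sbt-aspectj only wires the weaver into sbt run. When launching a packaged jar instead, the weaver has to be attached explicitly with -javaagent, otherwise Kamon's instrumentation silently records nothing. A sketch — the Ivy cache path and the myapp.jar name are assumptions, adjust them to your environment:

```shell
# Hypothetical path: adjust to wherever sbt resolved aspectjweaver 1.8.2.
WEAVER="$HOME/.ivy2/cache/org.aspectj/aspectjweaver/jars/aspectjweaver-1.8.2.jar"

# Attach the AspectJ load-time weaver when starting the packaged application.
java -javaagent:"$WEAVER" -jar myapp.jar
```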

Soumya Simanta

Sep 13, 2014, 10:09:48 AM
to kamon...@googlegroups.com
I enabled the LogReporter in the application.conf file using the following. 

  extensions = ["kamon.metric.Metrics", "kamon.statsd.StatsD", "kamon.logreporter.LogReporter"]
 
After this I can see a lot of log output like the block below. The entry for the kamon-log-reporter actor has values for each field (Processing Time (nanoseconds), Time in Mailbox (nanoseconds), Mailbox Size). However, for my own actors every field is zero. What am I missing?


[info] +--------------------------------------------------------------------------------------------------+
[info] |                                                                                                  |
[info] |    Actor: user/kamon-log-reporter                                                                |
[info] |                                                                                                  |
[info] |   Processing Time (nanoseconds)      Time in Mailbox (nanoseconds)         Mailbox Size          |
[info] |    Msg Count: 1                          Msg Count: 1                        Min: 0              |
[info] |          Min: 2686976                          Min: 45824                   Avg.: 0.0            |
[info] |    50th Perc: 2686976                    50th Perc: 45824                    Max: 1              |
[info] |    90th Perc: 2686976                    90th Perc: 45824                                        |
[info] |    95th Perc: 2686976                    95th Perc: 45824                                        |
[info] |    99th Perc: 2686976                    99th Perc: 45824                  Error Count: 0        |
[info] |  99.9th Perc: 2686976                  99.9th Perc: 45824                                        |
[info] |          Max: 2686976                          Max: 45824                                        |
[info] |                                                                                                  |
[info] +--------------------------------------------------------------------------------------------------+

Ivan Topolnjak

Sep 15, 2014, 1:04:44 AM
to kamon...@googlegroups.com
Hello Soumya, welcome to the Kamon community!

I think the problem is in your filters section: the include pattern "myactorsystem/user/myactor/my-extractor*" contains the actor system name, but Kamon matches actor paths without the actor system name. If you change your include filter to "/user/myactor/my-extractor*" it should work properly. Also, the * in front of your first include filter isn't necessary. Let me know how it goes!
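In other words, the actor filter would look something like this (a sketch of the corrected section; the exact glob form, with or without the leading slash, may vary between Kamon versions):

```
kamon.metrics.filters = [
  {
    actor {
      includes = [ "user/*", "/user/myactor/my-extractor*" ]
      excludes = [ "system/*" ]
    }
  }
]
```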


aah...@gmail.com

Sep 18, 2014, 6:20:46 AM
to kamon...@googlegroups.com
I also have the same issue.

In my spray project,

application.conf:

#kamon related configuration
akka {
  loglevel = INFO
  extensions = ["kamon.statsd.StatsD", "kamon.system.SystemMetrics", "kamon.logreporter.LogReporter", "kamon.metric.Metrics"]
}

kamon {
  spray {
    # Header name used when propagating the `TraceContext.token` value across applications.
    trace-token-header-name = "X-Trace-Token"

    # When set to true, Kamon will automatically set and propagate the `TraceContext.token` value under the following
    # conditions:
    # - When a server side request is received containing the trace token header, the new `TraceContext` will have that
    #   same token, and once the response to that request is ready, the trace token header is also included in the
    #   response.
    # - When a spray-client request is issued and a `TraceContext` is available, the trace token header will be included
    #   in the `HttpRequest` headers.
    automatic-trace-token-propagation = true

    client {
      # Strategy used for automatic trace segment generation when issuing requests with spray-client. The possible
      # values are:
      # - pipelining: measures the time during which the user application code is waiting for a spray-client request to
      #   complete, by attaching a callback to the Future[HttpResponse] returned by `spray.client.pipelining.sendReceive`.
      #   If `spray.client.pipelining.sendReceive` is not used, the segment measurement won't be performed.
      # - internal: measures the internal time taken by spray-client to finish a request. Sometimes the user application
      #   code has a finite future timeout (like when using `spray.client.pipelining.sendReceive`) that doesn't match
      #   the actual amount of time spray might take internally to resolve a request, counting retries, redirects,
      #   connection timeouts and so on. If using the internal strategy, the measured time will include the entire time
      #   since the request has been received by the corresponding `HttpHostConnector` until a response is sent back
      #   to the requester.
      segment-collection-strategy = pipelining
    }
  }

  statsd {
    # Hostname and port on which your StatsD is running. Remember that StatsD packets are sent using UDP;
    # Kamon won't warn about unreachable hosts or closed ports, your data just won't go anywhere.
    hostname = "127.0.0.1"
    port = 8125

    # Interval between metrics data flushes to StatsD. Its value must be equal to or greater than the
    # kamon.metrics.tick-interval setting.
    flush-interval = 1 second

    # Max packet size for UDP metrics data sent to StatsD.
    max-packet-size = 1024 bytes

    # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
    # collection for your desired entities must be activated under the kamon.metrics.filters settings.
    includes {
      actor      = [ "*" ]
      trace      = [ "*" ]
      dispatcher = [ "*" ]
    }

    report-system-metrics = true

    simple-metric-key-generator {
      # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
      # this pattern:
      #    application.host.entity.entity-name.metric-name
      application = "kamon"
    }
  }

  metrics {
    filters = [
      {
        actor {
          includes = [ "*" ]
          excludes = []
        }
      },
      {
        trace {
          includes = [ "*" ]
          excludes = []
        }
      }
    ]
  }
}

When I run docker run -d -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 --name kamon-grafana-dashboard kamon/grafana_graphite, there is no real data at localhost:80. When I try curl localhost:8126 I get ERROR, and curl localhost:8125 fails to connect.
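One note on those curls: port 8125 is UDP and curl only speaks TCP, so the connection failure there is expected and doesn't indicate a problem by itself. A sketch of probes that match each port's protocol (assuming the netcat variant on your machine accepts these flags, and that the image runs Etsy's StatsD, whose TCP admin interface answers plain-text commands on 8126):

```shell
# 8125 is UDP: curl (TCP) can never connect to it. Send a test metric with
# netcat in UDP mode instead; no reply is expected for UDP StatsD traffic.
echo "test.counter:1|c" | nc -u -w1 127.0.0.1 8125

# 8126 is StatsD's TCP admin interface; try its health command.
echo health | nc -w1 127.0.0.1 8126
```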

Ivan Topolnjak

Sep 18, 2014, 11:59:54 AM
to kamon...@googlegroups.com
Are you on a Linux machine, or using boot2docker to run the image?