K6 Grpc Metrics

Charise Scrivner

Jul 24, 2024, 3:47:58 AM
to beets

OpenTelemetry is an observability framework for creating and managing telemetry data. gRPC previously provided observability support through OpenCensus, which has been sunset in favor of OpenTelemetry.

The gRPC OpenTelemetry plugin accepts a MeterProvider and depends on the OpenTelemetry API to create a Meter that identifies the gRPC library being used, for example, grpc-c++ at version 1.57.1. The following listed instruments are created using this meter. Users should employ the OpenTelemetry SDK to customize the views exported by OpenTelemetry.

Along with each recorded measurement for an instrument, gRPC may provide additional information as attributes or labels. For example, grpc.client.attempt.started carries the labels grpc.method and grpc.target, which identify the method and the target associated with the RPC attempt being observed.

Error counts can be calculated by applying a grpc.status != OK filter to the latency histogram metrics grpc.client.attempt.duration / grpc.client.call.duration (for clients) or grpc.server.call.duration (for servers).
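As a concrete illustration, assuming these OpenTelemetry metrics are exported to Prometheus: in the default name mapping, dots become underscores and histograms gain a _count series, but the exact names depend on your exporter (a unit suffix such as _seconds may also be appended), so treat the identifiers below as assumptions.

```promql
# Per-second rate of failed client call attempts over the last 5 minutes,
# broken down by method. Metric and label names follow the assumed
# OpenTelemetry -> Prometheus name mapping described above.
sum by (grpc_method) (
  rate(grpc_client_attempt_duration_count{grpc_status!="OK"}[5m])
)
```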

The feature is mainly for advanced use cases where a custom LB policy is used to route traffic more intelligently to a list of backend servers to improve the routing performance, e.g. a weighted round robin LB policy.

Per-query metrics reporting: the backend server attaches the injected custom metrics in the trailing metadata when the corresponding RPC finishes. This is typically useful for short RPCs like unary calls.

Out-of-band metrics reporting: the backend server periodically pushes metrics data, e.g. CPU and memory utilization, to the client. This is useful for all situations: unary calls, long RPCs in streaming calls, or no RPCs. However, out-of-band metrics reporting does not send query cost metrics. The metrics emission frequency is user-configurable, and this configuration resides in the custom load balancing policy.

Because gRPC services are hosted on ASP.NET Core, they use the ASP.NET Core logging system. In the default configuration, gRPC logs minimal information, but logging can be configured. See the documentation on ASP.NET Core logging for details on configuring it.

gRPC adds logs under the Grpc category. To enable detailed logs from gRPC, configure the Grpc prefixes to the Debug level in the appsettings.json file by adding the following items to the LogLevel subsection in Logging:
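For example, a minimal appsettings.json sketch; the surrounding Logging section likely already exists in your file, and the Grpc entry is the addition:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Grpc": "Debug"
    }
  }
}
```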

Check the documentation for your configuration system to determine how to specify nested configuration values. For example, when using environment variables, two _ characters are used instead of the : (for example, Logging__LogLevel__Grpc).
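As a sketch, the same setting expressed as an environment variable, which is handy for Docker or Kubernetes where editing appsettings.json is awkward:

```shell
# Double underscores replace the ':' separator of the JSON configuration keys.
export Logging__LogLevel__Grpc=Debug
echo "$Logging__LogLevel__Grpc"   # prints "Debug"
```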

If the app is deployed to another environment (for example, Docker, Kubernetes, or Windows Service), see Logging in .NET Core and ASP.NET Core for more information on how to configure logging providers suitable for the environment.

To get logs from the .NET client, set the GrpcChannelOptions.LoggerFactory property when the client's channel is created. When calling a gRPC service from an ASP.NET Core app, the logger factory can be resolved from dependency injection (DI):
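A minimal sketch of that wiring, assuming an ILoggerFactory named loggerFactory has already been resolved from DI, and using a placeholder address and a placeholder generated client:

```csharp
using Grpc.Net.Client;

// loggerFactory is assumed to come from DI (ILoggerFactory).
var channel = GrpcChannel.ForAddress("https://localhost:5001", new GrpcChannelOptions
{
    LoggerFactory = loggerFactory
});

// Greeter.GreeterClient stands in for your generated gRPC client type.
var client = new Greeter.GreeterClient(channel);
```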

An alternative way to enable client logging is to use the gRPC client factory to create the client. A gRPC client registered with the client factory and resolved from DI will automatically use the app's configured logging.

The .NET gRPC client uses HttpClient to make gRPC calls. Although HttpClient writes diagnostic events, the .NET gRPC client provides a custom diagnostic source, activity, and events so that complete information about a gRPC call can be collected.

The easiest way to use DiagnosticSource is to configure a telemetry library such as Application Insights or OpenTelemetry in your app. The library will process information about gRPC calls alongside other app telemetry.

Metrics are numeric measurements of data over intervals of time, for example, requests per second. Metrics data allows observing the state of an app at a high level. .NET gRPC metrics are emitted using EventCounter.

dotnet-counters is a performance monitoring tool for ad-hoc health monitoring and first-level performance investigation. Monitor a .NET gRPC app using either Grpc.AspNetCore.Server (server-side) or Grpc.Net.Client (client-side) as the provider name.
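For example, a hypothetical invocation; it requires the dotnet-counters global tool to be installed, and 1234 stands in for your app's process ID:

```shell
# Stream the gRPC server counters of process 1234 to the console.
dotnet-counters monitor --process-id 1234 --counters Grpc.AspNetCore.Server
```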

Another way to observe gRPC metrics is to capture counter data using Application Insights's Microsoft.ApplicationInsights.EventCounterCollector package. Once set up, Application Insights collects common .NET counters at runtime. gRPC's counters are not collected by default, but App Insights can be customized to include additional counters.

Spring Boot has lots of great built-in Micrometer support for RestControllers that allows you to expose useful metrics via the Prometheus Actuator. We make use of those for our REST-based Edge services and are able to do cool things around monitoring and alerting.

We went with the standard Spring/Micrometer generic method-timing approach for this. The upside was that it was trivial to implement; the downside is that we have to remember to annotate each gRPC method.

For this, we decided to hook a Micrometer registry counter into our existing generic gRPC exception handler, which lives in an internal shared library that all gRPC services automatically pull in via our common Gradle platform.

All we did here was add the MeterRegistry to the constructor, so it gets set by the Spring context. Then we use that MeterRegistry instance to increment a counter, with the full exception class name as a Tag, in the catch block.

The telemetry component is implemented as a Proxy extension. A COUNTER is a strictly increasing integer. A DISTRIBUTION maps ranges of values to frequencies. COUNTER and DISTRIBUTION correspond to the metrics counter and histogram in the Envoy documentation.

Connection Security Policy: this identifies the service authentication policy of the request. It is set to mutual_tls when Istio is used to make communication secure and the report is from the destination. It is set to unknown when the report is from the source, since the security policy cannot be properly populated in that case.

Canonical Service: a workload belongs to exactly one canonical service, whereas it can belong to multiple services. A canonical service has a name and a revision, so it results in the following labels.

The documentation here is only a minimal quick start. For detailed guidance on using Prometheus in your solutions, refer to the prometheus-users discussion group. You are also expected to be familiar with the Prometheus user guide. /r/PrometheusMonitoring on Reddit may also prove a helpful resource.

The Metrics class is the main entry point to the API of this library. The most common practice in C# code is to have a static readonly field for each metric that you wish to export from a given class.
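A sketch of that pattern with prometheus-net; the class name and metric name below are illustrative, not part of the library:

```csharp
using Prometheus;

public class OrderProcessor
{
    // One static readonly field per metric this class exports.
    private static readonly Counter OrdersProcessed = Metrics.CreateCounter(
        "orders_processed_total", "Number of orders processed.");

    public void Process()
    {
        // ... business logic ...
        OrdersProcessed.Inc(); // count one processed order
    }
}
```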

More complex patterns may also be used (e.g. combining with dependency injection). The library is quite tolerant of different usage models - if the API allows it, it will generally work fine and provide satisfactory performance. The library is thread-safe.

Exemplars facilitate distributed tracing by attaching related trace IDs to metrics. This enables a metrics visualization app to cross-reference the traces that explain how a metric got the value it has.

By default, prometheus-net will create an exemplar with the trace_id and span_id labels based on the current distributed tracing context (Activity.Current). If using OpenTelemetry tracing with ASP.NET Core, the traceparent HTTP request header will be used to automatically assign Activity.Current.
