Using datadog tracer with Nameko


moussa....@gmail.com

Mar 5, 2018, 3:46:11 PM
to nameko-dev
Hello all,

I would like to use the Datadog tracer with Nameko, and I couldn't find a way to do it, especially the code responsible for running the entrypoints (worker_setup and worker_result don't do that).

Has anyone done this before, or does someone know how to do it?

Jakub Borys

Mar 5, 2018, 4:04:44 PM
to nameko-dev
Hi,

I think you're looking to use Datadog's Python custom tracer as described here: http://pypi.datadoghq.com/trace/docs/#custom

If so, you can take inspiration from nameko-tracer and create a similar dependency that, instead of writing to a logger, would call tracer.trace etc.: https://github.com/Overseas-Student-Living/nameko-tracer/blob/master/nameko_tracer/dependency.py
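To make that concrete, here is a minimal, untested sketch of such a dependency: it opens a Datadog span in worker_setup and closes it in worker_result, the hooks that bracket each entrypoint call. DependencyProvider, worker_setup and worker_result are real Nameko extension points; the class name DataDogTracer is made up, and the exact ddtrace calls (tracer.trace, span.finish, span.set_traceback) should be checked against the docs above.

```python
try:
    from nameko.extensions import DependencyProvider
except ImportError:
    # stand-in so the sketch is importable without nameko installed
    class DependencyProvider(object):
        pass


class DataDogTracer(DependencyProvider):
    """Sketch: bracket each entrypoint call with a Datadog span."""

    def __init__(self, tracer=None):
        if tracer is None:
            # ddtrace's global tracer instance (API assumed from the docs)
            from ddtrace import tracer
        self.tracer = tracer
        self.spans = {}  # worker_ctx -> open span

    def worker_setup(self, worker_ctx):
        # called by the container just before the entrypoint method runs
        self.spans[worker_ctx] = self.tracer.trace(
            worker_ctx.entrypoint.method_name,
            service=worker_ctx.container.service_name,
        )

    def worker_result(self, worker_ctx, result=None, exc_info=None):
        # called after the entrypoint returns or raises
        span = self.spans.pop(worker_ctx, None)
        if span is not None:
            if exc_info:
                span.set_traceback()  # record the exception on the span
            span.finish()
```

The tracer is injectable via the constructor mainly so the span bookkeeping can be exercised without a running DD agent.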

rabo...@gmail.com

Mar 9, 2018, 7:26:09 AM
to nameko-dev
I'm also quite interested in this. If it can be done as open source I'd like to contribute; otherwise I'll be creating this dependency myself.

Ondrej Kohout

Mar 13, 2018, 2:41:10 PM
to nameko-dev
Hi, we also use Datadog here at Student.com, but we don't yet have DD integrated with Nameko for collecting application metrics. We trace our entrypoints using Nameko Tracer, and transport and inspect the traces in an ELK setup. We are also thinking of sending traces to DD in addition to the existing ELK solution.

I had a quick look at the Python DD tracer and it looks like there are a number of ways to integrate it with Nameko services, or into Nameko itself.

For causal tracing, I would try the wrap decorator and wrap service entrypoints directly; I suppose it would trace them on entry and on exit:

class Service:

    @tracer.wrap()
    @rpc
    def say_hello(self):
        pass


Another approach would be to go a bit deeper into the Nameko framework and write a dependency provider which inspects each worker before and after entrypoint execution, and uses the ddtrace API to send traces to the DD agent. The same inspection is already done by Nameko Tracer, as Jakub pointed out in his reply.

One of the features of Nameko Tracer is that it separates metrics collection from structuring, formatting and transporting them to the desired destination, using standard Python logging mechanisms: loggers, handlers, formatters and filters. So the whole thing is quite modular and can be configured by users in the standard way. I would prefer extending Nameko Tracer with a new ddtrace logging handler, as that would make the solution nicely pluggable and users would be able to configure their tracing through logging configuration they already understand.

So one option is to extend Nameko Tracer with a logging handler that communicates with DD agents using the ddtrace API; the other option would be to write a similar dependency provider from scratch. Various tracers have various APIs, various ways of structuring their metrics and a number of different ways of transporting them to their destination, not to mention the visualisation part :) There is the OpenTracing project, which tries to solve this problem, and an implementation of it would also be a nice contribution to Nameko Tracer.
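As a rough illustration of the logging-handler idea (the class name and the record attribute are assumptions, not the real nameko-tracer interface), such a handler could pull the collected trace data off the log record and replay it as a Datadog span. One caveat: the handler runs after the entrypoint has finished, so span timings would have to come from the recorded data rather than from wall-clock start/stop.

```python
import logging


class DataDogHandler(logging.Handler):
    """Hypothetical handler forwarding nameko_tracer records to Datadog."""

    def __init__(self, tracer=None):
        super(DataDogHandler, self).__init__()
        if tracer is None:
            # ddtrace's global tracer instance (API assumed from the docs)
            from ddtrace import tracer
        self.tracer = tracer

    def emit(self, record):
        # nameko-tracer attaches the collected entrypoint data to the log
        # record; the attribute name "trace" is an assumption here
        data = getattr(record, "trace", None) or {}
        with self.tracer.trace(
            data.get("entrypoint_name", "unknown"),
            service=data.get("service", "nameko"),
        ) as span:
            # copy the collected trace fields onto the span as tags
            for key, value in data.items():
                span.set_tag(key, str(value))
```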

With a Datadog extension of Nameko Tracer, enabling DD tracing for all entrypoints of a Nameko service may be just a config change:

# config.yaml

LOGGING:
    version: 1
    formatters:
        tracer:
            (): nameko_tracer.formatters.DataDogFormatter
    handlers:
        tracer:
            class: nameko_tracer.handlers.DataDogHandler
            formatter: tracer
    loggers:
        nameko_tracer:
            level: INFO
            handlers: [tracer]


There is yet another way to collect Nameko entrypoint metrics for DD agents, which may be a bit controversial but which looks like the preferred way of adopting DD tracing among users of various frameworks. ddtrace has monkey patches for many existing libraries and frameworks. These patch the libraries so that tracers can be set at any point of the library's work execution. That way framework users only import and run a patching function, and all the magic is done behind the scenes.


from ddtrace import patch_all
patch_all()

It is handy for users, but it's monkey patching, with all its risks and unpleasant surprises. As it does not use the library's API but rather its internals, it is much harder to maintain and requires a deep understanding of the patched library. On the other hand, some of the most popular Python libraries are also based on monkey patching and on magic done behind the scenes (pytest, eventlet, gevent, ...).

I think all three approaches are valid. We have already had a chat about writing the DD extensions for Nameko Tracer. The monkey patching should probably be a PR to ddtrace/contrib.

Ondrej

juko...@gmail.com

Apr 3, 2018, 3:19:18 PM
to nameko-dev
It's not Datadog, but the people at Scout reached out to me, and they raised an enhancement request for their Python APM.


Could do with more upvotes :)