SQL instrumentation


David Welton

unread,
Apr 3, 2014, 11:45:20 AM
to chica...@googlegroups.com
Hi,

I wrote this for Postgres and was wondering what people think about
extending the approach to other database adapters. The idea is that
rather than scattering now_diff calls and logging statements through
the code itself, and probably repeating that for each different DB
system, it all gets grouped in one place where it can also be turned
on and off as needed.

I suppose in some ways adding the code to the actual calls might be
more performant and simpler, so this might be best left as a hack for
development mode.

Thoughts?

-module(sqltracer).

-include_lib("stdlib/include/ms_transform.hrl").

%% API
-export([trace/0, tracer/2]).

trace() ->
    dbg:start(),
    dbg:tracer(process, {fun sqltracer:tracer/2, []}),
    dbg:tpl(pgsql, equery, dbg:fun2ms(fun(_) -> return_trace() end)),
    dbg:p(all, c).

%% On call: stash the SQL, parameters and start time as the accumulator.
tracer({trace, _Pid, call, {pgsql, equery, [_Conn, Sql]}}, _Acc) ->
    {erlang:iolist_to_binary(Sql), [], os:timestamp()};

tracer({trace, _Pid, call, {pgsql, equery, [_Conn, Sql, Parameters]}}, _Acc) ->
    {erlang:iolist_to_binary(Sql), Parameters, os:timestamp()};

%% On return: log the elapsed time along with the query and parameters.
tracer({trace, _Pid, return_from, {pgsql, equery, _Arity}, _ReturnValue},
       {Sql, Parameters, StartTime}) ->
    error_logger:info_msg("pgsql:equery (~p microseconds) SQL: ~s ; PARAMETERS ~p~n",
                          [timer:now_diff(os:timestamp(), StartTime),
                           Sql, Parameters]),
    ok.
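For reference, a shell session with the module compiled might look roughly like this (a sketch, not real output; `Conn` is assumed to be an already-open pgsql connection, and the table name is made up):

```erlang
%% In the Erlang shell, after compiling sqltracer:
1> sqltracer:trace().
2> pgsql:equery(Conn, "SELECT * FROM greetings WHERE id = $1", [1]).
%% The logger then prints something along the lines of:
%%   pgsql:equery (412 microseconds) SQL: SELECT * FROM greetings WHERE id = $1 ; PARAMETERS [1]
```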



--
David N. Welton

http://www.welton.it/davidw/

http://www.dedasys.com/

Andreas Stenius

unread,
Apr 3, 2014, 2:48:49 PM
to chica...@googlegroups.com

Have you looked at fprof?
http://www.erlang.org/doc/man/fprof.html
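A minimal fprof session looks roughly like this (a sketch; it assumes the code under test is loaded, and it writes to fprof's default trace file):

```erlang
fprof:trace(start),
%% ... exercise the queries you want to profile ...
fprof:trace(stop),
fprof:profile(),                            %% reads the default fprof.trace
fprof:analyse([{dest, "fprof.analysis"}]).  %% writes the call-time breakdown
```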

David Welton

unread,
Apr 3, 2014, 4:19:54 PM
to chica...@googlegroups.com
What I want is something really similar to what Rails has, where each
query gives me the actual SQL generated, and how long it took to run,
so that I can think about where time is being spent, and also see what
the underlying SQL is like.

Graeme Defty

unread,
Apr 3, 2014, 9:55:59 PM
to chica...@googlegroups.com
One small point to bear in mind - for many of us there IS no SQL. It depends on the database in use (and your style of using it, I guess).

It seems to me that the instrumentation (in terms of timing at least) should be at the boss_db level so that it applies to ALL databases - then the individual database drivers are free to add some extra info, such as the SQL used (for a SQL database), or the strategy/models/whatever selected (for the no-SQL databases).

I use Mnesia, so I am not sure I can even spell SQL any more, but I too would like to know where my database time is being spent because it can also give valuable insights into possible inefficiencies in the app e.g. re-reading the same record multiple times.

Cheers,

g
David Welton

unread,
Apr 4, 2014, 4:13:00 AM
to chica...@googlegroups.com
> One small point to bear in mind - for many of us there IS no SQL. It depends
> on the database in use (and your style of using it, I guess).

> It seems to me that the instrumentation (in terms of timing at least) should
> be at the boss_db level so that it applies to ALL databases - then the
> individual database drivers are free to add some extra info, such as the SQL
> used (for a SQL database), or the strategy/models/whatever selected (for the
> no-SQL databases).
>
> I use Mnesia, so I am not sure I can even spell SQL any more, but I too
> would like to know where my database time is being spent because it can also
> give valuable insights into possible inefficiencies in the app e.g.
> re-reading the same record multiple times.

I think that for anything, you're going to want the underlying query
and how long it took. That requires some code specific to that
particular database. I suppose timing could be added at a higher
level - it depends on whether you want to time just the underlying
DB, or the whole layer. Both are probably useful numbers.
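A boss_db-level timing hook along the lines Graeme suggests could be as simple as wrapping each adapter call with timer:tc. This is just a sketch (the `time_call` name and the log format are made up, not existing boss_db code):

```erlang
%% Hypothetical generic wrapper: times any adapter call and logs the
%% elapsed microseconds, independent of which database is underneath.
time_call(Mod, Fun, Args) ->
    {Micros, Result} = timer:tc(Mod, Fun, Args),
    error_logger:info_msg("~p:~p/~p took ~p microseconds~n",
                          [Mod, Fun, length(Args), Micros]),
    Result.
```

The individual drivers could then attach their own extra info (the SQL text, the Mnesia table, etc.) on top of this common timing layer.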