openresty access logs

Samarjit Uppal

Nov 28, 2014, 2:10:56 AM
to openre...@googlegroups.com


Hi,

Is there a way to see how much time each API request takes in the access logs? I don't see any entry related to that. Can we tweak some settings to enable detailed log reports per API request?

Regards!

Yichun Zhang (agentzh)

Nov 28, 2014, 8:43:43 PM
to openresty-en
Hello!

On Thu, Nov 27, 2014 at 11:10 PM, Samarjit Uppal wrote:
> Is there a way to see how much time each API request takes in the access
> logs?

Usually you can just log the $request_time variable. It is often
good enough, though it also includes downstream I/O time:

http://nginx.org/en/docs/http/ngx_http_log_module.html#var_request_time
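
For instance, a minimal log_format along these lines would record it per request (the format name "timed" and the log path are illustrative):

```nginx
# in the http {} block; "timed" is an arbitrary format name
log_format timed '$remote_addr [$time_local] "$request" '
                 '$status $request_time';

access_log logs/access.log timed;
```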

For more accurate measurement, you need to calculate the time duration
in Lua yourself, write the result into a custom nginx variable, and
log that nginx variable in nginx's log_format directive. You can
directly access the underlying C functions like gettimeofday and
clock_gettime via LuaJIT's FFI. But be aware that such timing code
has some overhead of its own.
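
A sketch of that custom-variable approach, using ngx.now() rather than raw FFI calls for brevity (the variable name $lua_elapsed is an illustrative assumption):

```nginx
# inside a location {} block; $lua_elapsed is an illustrative name
set $lua_elapsed 0;

rewrite_by_lua_block {
    -- ngx.now() returns nginx's cached time (sub-second precision)
    ngx.ctx.start_time = ngx.now()
}

header_filter_by_lua_block {
    -- runs before the access log is written, so the variable is
    -- visible to log_format; this measures wall-clock time from the
    -- rewrite phase up to the point the response headers go out
    if ngx.ctx.start_time then
        ngx.update_time()  -- refresh the cached time before reading it
        ngx.var.lua_elapsed = string.format("%.6f",
                                            ngx.now() - ngx.ctx.start_time)
    end
}
```

The resulting $lua_elapsed can then be referenced in a log_format directive like any built-in variable.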

Another way is to sample the time from the outside via dynamic tracing
tools like systemtap and dtrace. Because this is sampling-based, you
pay literally zero cost when you're not sampling. And you also get
access to many different kinds of "latency measurements".

For example, the following tool can measure the pure CPU-wise
execution time of your Lua code for every individual nginx request,
excluding any I/O time:

https://github.com/openresty/stapxx#ngx-lua-exec-time

Such per-request pure CPU time measurement is difficult to perform in
Lua though.

And the following tools are handy for measuring the (nonblocking)
cosocket receiving latency for every request in real time:

https://github.com/openresty/stapxx#ngx-lua-tcp-recv-time

https://github.com/openresty/stapxx#ngx-lua-tcp-total-recv-time

https://github.com/openresty/stapxx#ngx-lua-udp-recv-time

https://github.com/openresty/stapxx#ngx-lua-udp-total-recv-time

If you just want to profile and optimize your (online) Lua code, then
the Lua-land on-CPU flame graph tools are much more useful:

https://github.com/openresty/stapxx#lj-lua-stacks

Sometimes the C-land on-CPU flame graph can also be useful for online
profiling purposes:

https://github.com/openresty/nginx-systemtap-toolkit#sample-bt

> I don't see any entry related to that. Can we tweak some settings to
> enable detailed log reports per API request?
>

I'm reluctant to add built-in support to ngx_lua for this because
there are many ways to measure different aspects of "time", as
demonstrated above. We've found the systemtap tools mentioned above
much more useful than our own userland timing stats in our online
Lua apps.

Regards,
-agentzh

Samarjit Uppal

Dec 1, 2014, 3:52:55 AM
to openre...@googlegroups.com

Hi Agentzh,

Thanks for the clarifications. It's good that $request_time gives the total time it takes for nginx to respond to a request, including any I/O time. That's exactly what I need. But obviously this time will not include nginx's connection waiting time. Do we have a way of finding that out too?

Also, I included the following in my nginx.conf file:


log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          '$request_time $upstream_response_time $pipe';

access_log /soft/openresty/nginx/logs/access.log timed_combined;
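
(For reference, nginx only accepts log_format in the http {} context, while access_log may appear at http, server, or location level, so a complete placement would be along these lines; the format string is abbreviated here:)

```nginx
http {
    # log_format is only valid at this level
    log_format timed_combined '... $request_time $upstream_response_time $pipe';

    server {
        listen 80;
        access_log /soft/openresty/nginx/logs/access.log timed_combined;
    }
}
```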


This configuration does not write anything to the access.log file. The file permissions and paths are fine.

Any ideas?

Thanks a lot!
Samar

Yichun Zhang (agentzh)

Dec 1, 2014, 4:46:18 PM
to openresty-en
Hello!

On Mon, Dec 1, 2014 at 12:52 AM, Samarjit Uppal wrote:
> Thanks for the clarifications. It's good that $request_time gives the total
> time it takes for nginx to respond to a request, including any I/O time.
> That's exactly what I need. But obviously this time will not include
> nginx's connection waiting time.

What exactly do you mean by "nginx connection waiting time"?

Regards,
-agentzh

Samarjit Uppal

Dec 2, 2014, 3:22:59 AM
to openre...@googlegroups.com

Suppose there are a large number of concurrent connections.

Now the nginx port can only accept so many connections at a time. So do requests get pipelined? If they do, then there has to be such a thing as a connection waiting time.




Samarjit Uppal

Dec 2, 2014, 6:47:37 AM
to openre...@googlegroups.com

We are running load-impact tests on our OpenResty installation. At times we get the following three different kinds of errors:

So, in the context of the discussion above, what do these three errors (below) really mean?


73753 attempt to send data on a closed socket: u:00000000419EB028, c:0000000000000000, ft:0 eof:0, client: 54.174.13.246, server: , request: "POST /blah  HTTP/1.1", host: "server.com"

2014/12/02 09:31:09 [error] 31273#0: *273755 lua tcp socket read timed out, client: 54.174.13.246, server: , request: "blah HTTP/1.1", host: "server.com"

2014/12/02 09:31:10 [error] 31270#0: *273857 lua tcp socket connect timed out, client: 54.174.13.246, server: , request: "POST /analytics/event/beacon-data/post HTTP/1.1", host: "ramanujam.turnoutnow.com:8081"


Thanks a lot; hopefully a detailed understanding of these will help me better understand how OpenResty operates.

Yichun Zhang (agentzh)

Dec 2, 2014, 4:50:46 PM
to openresty-en
Hello!

On Tue, Dec 2, 2014 at 12:22 AM, Samarjit Uppal wrote:
> Suppose there are a large number of concurrent connections.
>
> Now the nginx port can only accept so many connections at a time. So do
> requests get pipelined? If they do, then there has to be such a thing as
> a connection waiting time.
>
>

It seems that you're talking about the accept() queue in the kernel's
TCP/IP stack implementation. Then you need kernel-level tools to
measure that instead of fiddling in userland. For example, the
following tools might be of interest for such needs:

https://github.com/openresty/nginx-systemtap-toolkit#tcp-accept-queue
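
(If the accept backlog itself turns out to be the bottleneck, it can also be sized explicitly; the value below is illustrative:)

```nginx
# per-socket accept() backlog; on Linux the effective value is
# capped by the net.core.somaxconn sysctl
listen 80 backlog=1024;
```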

Have fun :)

> We are running load-impact tests on our OpenResty installation. At
> times we get the following three different kinds of errors:

Sorry, I need to remind you that the topic of this thread is
"openresty access logs". Please do not hijack threads with unrelated
questions; it is better to create a separate thread for those to
avoid confusion.

Regards,
-agentzh