Segmentation fault (SLES, PSOL)


Christian Bieser

Jul 1, 2014, 11:02:15 AM
to ngx-pagesp...@googlegroups.com
Hi there!
Does anyone know what is going wrong here?
---
gdb /usr/sbin/nginx /var/nginx/core
GNU gdb (GDB) SUSE (7.3-0.6.1)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
For bug reporting instructions, please see:
Reading symbols from /usr/sbin/nginx...done.
[New LWP 11337]
[New LWP 11338]
Missing separate debuginfo for /lib64/libpthread.so.0
Try: zypper install -C "debuginfo(build-id)=368b7757bc9f9d7e2e93678c63cb3e5587a9934f"
Missing separate debuginfo for /lib64/libcrypt.so.1
Try: zypper install -C "debuginfo(build-id)=b97b006aca41a2c8e3c41d7cce47b7c2f14f7b13"
Missing separate debuginfo for /usr/lib64/libstdc++.so.6
Try: zypper install -C "debuginfo(build-id)=3915e6988dbdfc8ebe704efa2e5e5d519c027f7b"
Missing separate debuginfo for /lib64/librt.so.1
Try: zypper install -C "debuginfo(build-id)=44612b93c19e6567318299411987b113d2387081"
Missing separate debuginfo for /lib64/libm.so.6
Try: zypper install -C "debuginfo(build-id)=b10c3cae031ba5a87e715c117a83cd3bef83ebd2"
Missing separate debuginfo for /usr/lib64/libpcre.so.0
Try: zypper install -C "debuginfo(build-id)=690bc513d49ca57c3d0ea00e5f58578683edec8b"
Missing separate debuginfo for /lib64/libz.so.1
Try: zypper install -C "debuginfo(build-id)=4c05d1eb180f9c02b81a0c559c813dada91e0ca4"
Missing separate debuginfo for /lib64/libgcc_s.so.1
Try: zypper install -C "debuginfo(build-id)=fe7c25bfb3e605f9d6c1cb00b3c5f96ed95be6e5"
Missing separate debuginfo for /lib64/libc.so.6
Try: zypper install -C "debuginfo(build-id)=72e7b043935a2bd0b80d325f7f166a132cf37140"
Missing separate debuginfo for /lib64/ld-linux-x86-64.so.2
Try: zypper install -C "debuginfo(build-id)=c81de241a528795f8dbfde0f0e0e236f9a6554e6"
Missing separate debuginfo for
Try: zypper install -C "debuginfo(build-id)=9ee239647f77340975f782611a1fa728c355ecda"
[Thread debugging using libthread_db enabled]
[... same "Missing separate debuginfo" messages repeated ...]
Core was generated by `nginx: worker process                   '.
Program terminated with signal 11, Segmentation fault.
#0  0x00000000005ffcd8 in net_instaweb::AsyncFetch::Done (this=0x3000000030, success=false) at net/instaweb/http/async_fetch.cc:108
108     net/instaweb/http/async_fetch.cc: No such file or directory.
        in net/instaweb/http/async_fetch.cc



(gdb) backtrace full
#0  0x00000000005ffcd8 in net_instaweb::AsyncFetch::Done (this=0x3000000030, success=false) at net/instaweb/http/async_fetch.cc:108
No locals.
#1  0x000000000047f76f in net_instaweb::(anonymous namespace)::ps_release_request_context (data=<optimized out>) at /root/sources/nginx/ngx_pagespeed-release-1.8.31.4-beta/src/ngx_pagespeed.cc:1557
        ctx = 0x13ffbe0
#2  0x0000000000439a66 in ngx_http_terminate_request (r=0x13fd9c0, rc=499) at src/http/ngx_http_request.c:2461
        cln = 0x13fe8d0
        mr = 0x13fd9c0
#3  0x000000000043be71 in ngx_http_finalize_request (r=0x13fd9c0, rc=499) at src/http/ngx_http_request.c:2297
        c = 0x1311130
        pr = <optimized out>
#4  0x000000000044c21c in ngx_http_upstream_finalize_request (r=0x13fd9c0, u=0x1400670, rc=499) at src/http/ngx_http_upstream.c:3534
        flush = <optimized out>
        tp = <optimized out>
#5  0x000000000044cfd3 in ngx_http_upstream_check_broken_connection (r=0x13fd9c0, ev=0x1345070) at src/http/ngx_http_upstream.c:1110
        len = 4
        n = <optimized out>
        buf = ""
        err = 0
        event = <optimized out>
        c = 0x1311130
        u = 0x1400670
#6  0x000000000044d1c2 in ngx_http_upstream_rd_check_broken_connection (r=0x3000000030) at src/http/ngx_http_upstream.c:987
No locals.
#7  0x0000000000439113 in ngx_http_request_handler (ev=0x1345070) at src/http/ngx_http_request.c:2186
        c = 0x1311130
        r = 0x13fd9c0
#8  0x000000000042e4f3 in ngx_epoll_process_events (cycle=0x12bd1e0, timer=<optimized out>, flags=<optimized out>) at src/event/modules/ngx_epoll_module.c:691
        events = <optimized out>
        revents = 8197
        instance = 0
        i = 0
        level = <optimized out>
        err = <optimized out>
        rev = 0x1345070
        wev = <optimized out>
        queue = <optimized out>
        c = 0x1311130
#9  0x0000000000425d48 in ngx_process_events_and_timers (cycle=0x12bd1e0) at src/event/ngx_event.c:248
        flags = 1
        timer = 60000
        delta = 1404216359023
#10 0x000000000042ccfe in ngx_worker_process_cycle (cycle=0x12bd1e0, data=<optimized out>) at src/os/unix/ngx_process_cycle.c:816
        i = 0
        c = <optimized out>
#11 0x000000000042b44b in ngx_spawn_process (cycle=0x12bd1e0, proc=0x42cc05 <ngx_worker_process_cycle>, data=0x0, name=0xc528db "worker process", respawn=0) at src/os/unix/ngx_process.c:198
        on = 1
        pid = 0
        s = 0
#12 0x000000000042d77a in ngx_reap_children (cycle=<optimized out>) at src/os/unix/ngx_process_cycle.c:627
No locals.
#13 ngx_master_process_cycle (cycle=0x12bd1e0) at src/os/unix/ngx_process_cycle.c:180
        title = 0x12878d0 ""
        p = <optimized out>
        size = <optimized out>
        i = 1
        n = 19429584
        sigio = 0
        set = {__val = {0 <repeats 16 times>}}
        itv = {it_interval = {tv_sec = 19653679, tv_usec = 0}, it_value = {tv_sec = 0, tv_usec = 0}}
        live = 0
        delay = 0
---Type <return> to continue, or q <return> to quit---
        ccf = 0x12be158
#14 0x000000000040f16c in main (argc=<optimized out>, argv=0x7fff05781bf8) at src/core/nginx.c:407
        i = <optimized out>
        log = 0x1266920
        cycle = 0x12bd1e0
        init_cycle = {conf_ctx = 0x0, pool = 0x12b0030, log = 0x1266920, new_log = {log_level = 0, file = 0x0, connection = 0, handler = 0, data = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0,
          files = 0x0, free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {
            elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0,
            part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0,
          conf_file = {len = 21, data = 0x7fff0578384f "ss"}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 11, data = 0x7fff0578384f "ss"}, prefix = {len = 11, data = 0xc4e412 "/etc/nginx/"},
          lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}}
        ccf = 0x12be158
---

Otto van der Schaaf

Jul 3, 2014, 5:32:15 PM
to ngx-pagesp...@googlegroups.com
I tried a few things to see if I could get the same backtrace to show up, but no luck yet. A few questions:
- Can you reproduce the segmentation fault? 
- Do you have an idea about when and how often it happens?
- Do you, by chance, have the contents of error.log from the time of the crash available?
- Do you use nginx's configuration reload (e.g. nginx -s reload) functionality to apply changes in nginx.conf? 
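If it helps with capturing a trace: nginx can be told to write worker core dumps, assuming the OS core-file limit allows them. A minimal, example-only nginx.conf fragment (size and path are illustrative):

```nginx
worker_rlimit_core  500m;
working_directory   /var/nginx/;
```

The shell's core-file ulimit must also permit dumps (e.g. `ulimit -c unlimited` before starting nginx).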

Otto


--
You received this message because you are subscribed to the Google Groups "ngx-pagespeed-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ngx-pagespeed-di...@googlegroups.com.
Visit this group at http://groups.google.com/group/ngx-pagespeed-discuss.
For more options, visit https://groups.google.com/d/optout.

m...@nikitosi.us

Jul 8, 2014, 6:43:41 PM
to ngx-pagesp...@googlegroups.com
I have a similar problem here. I saw the same addresses in the backtrace, so I decided to post here rather than create a new thread. Excuse me if that's wrong :)

It is Linux 2.6.18-371.9.1.el5.centos.plus x86_64
ngx_pagespeed-release-1.8.31.4-beta

nginx version: nginx/1.6.0
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-54)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_addition_module --with-http_gzip_static_module --with-http_random_index_module --with-http_stub_status_module --with-file-aio --with-cc-opt=-O2 --with-http_spdy_module --with-openssl=/home/niko/dist/openssl-1.0.1h

Some of the workers die every time I open this page. It is easily reproduced in my environment.
Logs are attached.
I replaced my domain with the string "mydomain" everywhere.
Hope this helps.
config.txt.gz
error.log.gz
valgrind.txt.gz

Otto van der Schaaf

Jul 9, 2014, 2:17:03 AM
to ngx-pagesp...@googlegroups.com
Thanks a lot, that should help in figuring this out. I'll have a go at it today.

Otto



m...@nikitosi.us

Jul 21, 2014, 9:03:37 AM
to ngx-pagesp...@googlegroups.com
Good day.
Is there any news about our segfaults?

Regards.

m...@nikitosi.us

Jul 21, 2014, 5:20:11 PM
to ngx-pagesp...@googlegroups.com
That's me again.
I've recompiled everything with gcc44 and the segfaults are gone :)

Otto van der Schaaf

Jul 23, 2014, 2:28:34 AM
to ngx-pagesp...@googlegroups.com
I did look into this some more two weeks ago, but wasn't able to reproduce the problem yet.
It looks like this might be a gcc-specific issue, which is very useful to know. Thanks for the update!

Otto



Joshua Marantz

Jul 29, 2014, 4:22:28 PM
to ngx-pagesp...@googlegroups.com
I'm glad the segfaults are gone with 4.4. I thought we were using 4.1 to compile mod_pagespeed on RedHat, though, so I'm surprised that made a difference. Is the evidence pretty strong that the problem is really fixed now?

-Josh





Otto van der Schaaf

Sep 26, 2014, 6:09:52 AM
to ngx-pagesp...@googlegroups.com
While I was looking into https://github.com/pagespeed/ngx_pagespeed/issues/799, I was reminded of this.
https://github.com/pagespeed/ngx_pagespeed/pull/814 fixes the segfaults with gcc 4.1.2 for me.
I created https://github.com/pagespeed/ngx_pagespeed/issues/813 to track this specific issue.

Otto