Logcat logs easily get lost


big_fish_

Oct 20, 2015, 10:12:24 AM
to android-porting
Hi experts,

We found that logcat logs somehow get lost easily. I am using the official M code base.
Can anyone provide some clues about this? Is it caused by some buffers being full, so that logd drops the new logs?
Are there any solutions to avoid this behavior? It makes debugging hard right now.

Thanks
Yu

yu wang

Oct 27, 2015, 10:52:38 AM
to Mark Salyzyn, android-porting, 孙鹏, 孙健, 王禹3
Hi Mark,

Thanks for your quick response. I will loop my colleagues and my work email into this thread to continue the discussion.

After debugging, we found that LogListener is designed to use a datagram socket. The code and init.rc are as follows:

system/core/logd/LogListener.cpp 
int LogListener::getLogSocket() {
    static const char socketName[] = "logdw";
    int sock = android_get_control_socket(socketName);

    if (sock < 0) {
        // Fall back to creating the socket ourselves; note SOCK_DGRAM.
        sock = socket_local_server(socketName,
                                   ANDROID_SOCKET_NAMESPACE_RESERVED,
                                   SOCK_DGRAM);
    }

    int on = 1;
    // Ask the kernel to pass sender credentials with each datagram.
    if (setsockopt(sock, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on)) < 0) {
        return -1;
    }
    return sock;
}

system/core/rootdir/init.rc
service logd /system/bin/logd
    class core
    socket logd stream 0666 logd logd
    socket logdr seqpacket 0666 logd logd
    socket logdw dgram 0222 logd logd
    group root system

A datagram socket cannot guarantee that the data is delivered successfully. In our environment there is so much logging that LogListener cannot receive it all.
To my understanding, you chose a datagram socket to reduce the CPU bandwidth consumed by logd.
But for debugging, lost logs hurt the developer. Do you have any suggestions to avoid this behavior? For example, could some specific arguments be supported to capture the full logs?
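
For reference, here is a minimal standalone sketch (not the actual liblog writer; the file name and helper are made up for the demo) showing how a non-blocking writer to an AF_UNIX SOCK_DGRAM socket starts failing once the receiver stops draining the queue, whose depth is bounded by /proc/sys/net/unix/max_dgram_qlen:

dgram_overflow_demo.cpp
// Fill an AF_UNIX datagram queue with no reader and observe the send failure.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0) {
        perror("socketpair");
        return 1;
    }
    // The writer is non-blocking, like a logger that must not stall its caller.
    fcntl(fds[0], F_SETFL, O_NONBLOCK);

    const char msg[] = "log line";
    int sent = 0;
    // Nothing reads from fds[1], so after roughly max_dgram_qlen datagrams
    // the queue is full and send() returns -1 (EAGAIN) instead of blocking.
    while (send(fds[0], msg, sizeof(msg), 0) == (ssize_t)sizeof(msg)) {
        ++sent;
    }
    printf("queued %d datagrams before failing: %s\n", sent, strerror(errno));
    close(fds[0]);
    close(fds[1]);
    return 0;
}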

Thanks
Yu


2015-10-26 23:56 GMT+08:00 Mark Salyzyn <sal...@google.com>:
There should be no dropping of new logs; we remove from the oldest side.

Please check if you are getting any 'liblog' tag messages in the events buffer.
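(For example, running "adb logcat -b events" on the host and searching for the liblog tag should surface them, assuming your build exposes the events buffer.)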

We now maintain some 'chatty' entries, which provide temporal details for logs that are expired before others and allow for some correlation of activities between resources.

Sincerely -- Mark Salyzyn

On Sun, Oct 25, 2015 at 9:56 PM, yu wang <fish.w...@gmail.com> wrote:
Looping in the logd author.

Hi Mark,

Can you please comment? Are there any available arguments or methods that can avoid log loss?

Thanks
Yu




王禹3

Oct 29, 2015, 1:56:04 PM
to Mark Salyzyn, yu wang, android-porting, 孙鹏, 孙健

Hi Mark,

 

Thanks for your detailed explanation. Since we are debugging a library from a third party, we cannot change its logging policy.

We will try your other suggestions.

 

Thanks

Regards

Yu

 

 

 

From: Mark Salyzyn [mailto:sal...@google.com]
Sent: 2015-10-27 23:01
To: yu wang
Cc: android-porting; 孙鹏; 孙健; 王禹3
Subject: Re: [android-porting] Logcat logs easily get lost

 

You are missing one more thing in init.rc (with corresponding code in logd):

 

write /proc/sys/net/unix/max_dgram_qlen 300

 

It can guarantee the data will be delivered successfully as long as the queued backlog does not exceed this value (kernel resources notwithstanding; it can still drop logs if the kernel runs out of memory).
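
For reference, that write sits alongside the rest of the init configuration. As a sketch only (600 matches the value AOSP later adopted, per the MITIGATIONS below), a local experiment to deepen the queue could look like:

system/core/rootdir/init.rc
    # Kernel limit on queued datagrams per AF_UNIX socket, including logdw.
    # A deeper queue tolerates larger logging bursts before writes fail.
    write /proc/sys/net/unix/max_dgram_qlen 600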

 

PERFORMANCE:

 

Any logging that exceeds the socket depth of 300 (in L and M) within a batch time interval is subject to loss. On a system that logs heavily, but is otherwise lightly loaded, this translates to up to 30,000 messages a second; with an average log length of 115 bytes (a statistical average across several devices), that is 3.3MB/s, which would consume the in-memory buffer in 76ms! We have of course over-designed this part to deal with low-volume but bursty traffic, or with some acceptable debugging environments. On the platforms we test, the logger can accumulate logs faster than the adb channel can dish them out with a live running logcat, or than the eMMC can store them (logpersistd).
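
For anyone checking the arithmetic (the buffer size is an assumption here: logd's default per-log-buffer size is 256KB): 300 datagrams per ~10ms batch interval is 30,000 messages/s; at ~115 bytes each that is roughly 3.45MB/s (about 3.3MiB/s), and 256KB / 3.45MB/s ≈ 76ms before the buffer wraps.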

 

The logger data flow can handle a log message every 33us without issue on most platforms we test, but our threshold for failure is 105us for sustained logging with no other CPU-intensive activities on the system and with the CPUs set for normal operations (and thus running at their lowest clock rate). On your platform, please run the gTests and benchmarks accompanying liblog, logd and logcat to see if there are any failures. Perhaps your platform has limits (for example, the emulator kernel cannot allow max_dgram_qlen to go above 100, and truncates it due to its own limitation) that need to be addressed outside of logging. I urge you to report anything unbecoming on your system so that we can tune or optimize.

 

The goal is that the logs must span 30 minutes to be of any use in a bugreport, so any logging that reduces the span below that is considered spammy and may be filtered out by one of two mechanisms built into the logging flow. The first, which you have determined is the key to your data loss, is the structural maximum: in extreme high-volume cases it results in an EBUSY return on the socket; these failures are collected by the caller, reissued, and logged as 'liblog' tag events associated with the UID/TID. The second is an aggressive pruning algorithm in logd that trims the tail ends of the logs for excessively chatty log sources, tagged by UID and accumulated as 'chatty'. Both of these remove the content, but not the temporal activity. I urge you to see these messages for what they are: indicators that you have exceeded the maximum log bandwidth.

 

MITIGATIONS:

 

Late in the M program, however, we moved logd into the background cgroup, and thus the batch time interval went from 10-20ms to 10-250ms. This reduced the maximum logging rate to 135KB/s, which would consume the log buffer in about 2 seconds. In AOSP we increased max_dgram_qlen to 600 to allow a burst of 600 messages in the absolute worst-case 250ms timeslot allocated to logd under heavy CPU load.

 

You have several choices to mitigate logging issues on your platform:

 

1) Make your logs more relevant; do not hit the spam limits.

2) Ensure that your process is written efficiently and does not starve logd through its own activities. It is amazing what changing an algorithm from O(n^2) to O(n) can do for CPU and battery usage, and for the cycles left over for other applications to catch up.

3) Take advantage of the targeted logging facilities: IF_ALOG (__android_log_is_loggable) in native code and isLoggable in Java allow you to tune what gets logged and what does not via runtime system properties (log.tag.<tag>), keeping the logging rate acceptable (see the sketch after this list).

4) Do not use multiple LOG lines; coalesce them into a larger log entry with embedded newlines, up to a maximum of 4K minus logging overhead.

5) Use a larger log buffer, and only run logcat after, not during, a debug session, but beware of scaling problems. A running logcat steals background time from logd and can reduce its performance.

6) Take all future patches in AOSP for logd, logcat and liblog; we are continually working on scaling issues (a labor of love).

7) If your system's performance is too low, tune logd out of the background cgroup so that logd gets higher-priority cycles.

8) Tune max_dgram_qlen and the logger code on your platform for a larger depth.
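
As an illustration of item 3, here is a minimal native sketch (the tag and helper names are made up, and the header location may differ slightly between releases): a log site gated by __android_log_is_loggable(), so that "adb shell setprop log.tag.MyNoisyTag D" turns the output on and it otherwise stays silent. The Java equivalent is gating on android.util.Log.isLoggable(TAG, Log.DEBUG).

targeted_logging_example.cpp (link against liblog)
#define LOG_TAG "MyNoisyTag"  // hypothetical tag; pick your own
#include <log/log.h>

// Emit verbose diagnostics only when the tag is enabled at runtime via
//   adb shell setprop log.tag.MyNoisyTag D
// With the default threshold of INFO, the DEBUG line below is skipped and
// consumes no logd bandwidth unless the property lowers the threshold.
static void report_progress(int done, int total) {
    if (__android_log_is_loggable(ANDROID_LOG_DEBUG, LOG_TAG, ANDROID_LOG_INFO)) {
        // Per item 4, coalesce what could have been several lines into one
        // entry with embedded newlines (up to ~4K per message).
        ALOGD("progress: %d/%d\nstate: streaming\nqueue: ok", done, total);
    }
}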

 

I hope this helps.

 

Sincerely -- Mark Salyzyn

Swetabh

May 24, 2018, 11:20:09 AM
to android-porting
Hi
When I am capturing logs via adb logcat, my application logs somehow get lost and I instead see logs with the "liblog" tag followed by a number. I am using the M codebase.
Are there any solutions to avoid this behavior? I am unable to get the application logs.