Debugging slow gerrit


James Hartig

Jun 4, 2019, 10:57:54 AM
to Repo and Gerrit Discussion
I'm not very familiar with how to debug Java applications that become slower over time. For the last few months we've had to restart Gerrit once a week because it slows to a crawl for all actions: opening the web UI, pushing, pulling, etc. I ran "show-caches --show-threads" today (when it's becoming slow enough to notice) and compared it against output from just after a restart. We had not run into this issue until the later 2.16.x releases.

Nothing stands out to me as being very different between the "before" and "after" besides memory usage being a lot higher. Let me know what else I can run.

Before (just after restart):
Gerrit Code Review        2.16.8                    now    19:26:34   UTC
                                                 uptime     2 hrs  1 min

  Name                          |Entries              |  AvgGet |Hit Ratio|
                                |   Mem   Disk   Space|         |Mem  Disk|
--------------------------------+---------------------+---------+---------+
  accounts                      |    15               |   7.5ms | 99%     |
  adv_bases                     |                     |         |         |
  change_notes                  |   329               |   4.5ms | 81%     |
  changeid_project              |    11               |         | 62%     |
  changes                       |                     |   4.6ms |  0%     |
  external_ids_map              |     1               |   2.1ms | 93%     |
  groups                        |                     |         |         |
  groups_bymember               |    13               |   2.0ms | 99%     |
  groups_byname                 |                     |         |         |
  groups_bysubgroup             |    12               | 361.5us | 99%     |
  groups_byuuid                 |    12               |  14.1ms | 99%     |
  groups_external               |     1               |  69.6ms | 99%     |
  permission_sort               |   652               |         | 99%     |
  plugin_resources              |     3               |         | 80%     |
  project_list                  |     1               |   3.5ms |  0%     |
  projects                      |   103               |  12.1ms | 99%     |
  prolog_rules                  |                     |         |         |
  sshkeys                       |     5               |  20.5ms | 57%     |
  static_content                |    11               |   1.7ms | 40%     |
D change_kind                   |    35   7228 925.99k|   1.0ms | 90% 100%|
D conflicts                     |     2     82  10.50k|         |  0% 100%|
D diff                          |    28   5374   6.57m|   6.4ms | 89% 100%|
D diff_intraline                |    19   1080   1.52m| 114.7ms |  5% 100%|
D diff_summary                  |    27   5083   2.18m|   5.0ms | 82% 100%|
D git_tags                      |     1      1   4.37k| 257.1ms | 97% 100%|
D mergeability                  |    22    697 100.86k|  58.2ms | 70% 100%|
D oauth_tokens                  |     1      5  13.81k|         |         |
D web_sessions                  |     4     39  17.14k|         | 99% 100%|

SSH:      3  users, oldest session started   0 ms ago
Tasks:    3  total =    1 running +      0 ready +    2 sleeping
Mem: 592.50m total = 242.06m used + 330.24m free + 20.20m buffers
     1.63g max
         128 open files

Threads: 2 CPUs available, 87 threads

                                    NEW       RUNNABLE        BLOCKED        WAITING  TIMED_WAITING     TERMINATED
  ReceiveCommits                      0              0              0              2              0              0
  SshCommandStart                     0              1              0              1              0              0
  SSH-Interactive-Worker              0              0              0              1              0              0
  sshd-SshServer                      0              0              0              3              1              0
  HTTP                                0              2              0              0             11              0
  H2                                  0              0              0              0             22              0
  Other                               0              3              0             19             18              0
  SSH-Stream-Worker                   0              0              0              3              0              0


Now (starting to become slow):
Gerrit Code Review        2.16.8                    now    14:48:55   UTC
                                                 uptime    5 days 21 hrs

  Name                          |Entries              |  AvgGet |Hit Ratio|
                                |   Mem   Disk   Space|         |Mem  Disk|
--------------------------------+---------------------+---------+---------+
  accounts                      |    15               |   6.8ms | 99%     |
  adv_bases                     |                     |         |         |
  change_notes                  |   341               |   3.4ms | 94%     |
  changeid_project              |    57               |         | 82%     |
  changes                       |                     |   7.8ms |  0%     |
  external_ids_map              |     1               |   1.5ms | 96%     |
  groups                        |                     |         |         |
  groups_bymember               |    13               |   2.0ms | 99%     |
  groups_byname                 |                     |         |         |
  groups_bysubgroup             |    12               | 361.5us | 99%     |
  groups_byuuid                 |    12               |  14.1ms | 99%     |
  groups_external               |     1               |  69.6ms | 99%     |
  permission_sort               |   795               |         | 99%     |
  plugin_resources              |     7               |         | 95%     |
  project_list                  |     1               |   3.5ms | 92%     |
  projects                      |   104               |   1.3ms | 99%     |
  prolog_rules                  |                     |         |         |
  sshkeys                       |     8               |  13.7ms | 95%     |
  static_content                |    11               | 913.9us | 43%     |
D change_kind                   |   186   7323 937.96k|   1.8ms | 93% 100%|
D conflicts                     |     4     84  10.76k|         | 20% 100%|
D diff                          |   178   5512   6.90m|   9.7ms | 92% 100%|
D diff_intraline                |   134   1187   1.66m|  23.4ms |  4% 100%|
D diff_summary                  |   131   5178   2.22m|   5.9ms | 88% 100%|
D git_tags                      |     1      1   4.37k| 257.1ms | 99% 100%|
D mergeability                  |   104    779 112.71k|  25.5ms | 81% 100%|
D oauth_tokens                  |     5      5  13.81k|         |         |
D web_sessions                  |     9     42  18.46k|         | 99%  18%|

SSH:      3  users, oldest session started   0 ms ago
Tasks:    3  total =    1 running +      0 ready +    2 sleeping
Mem: 1.65g total = 1.59g used + 56.38m free + 89.40k buffers
     1.65g max
           4 open files

Threads: 2 CPUs available, 90 threads

                                    NEW       RUNNABLE        BLOCKED        WAITING  TIMED_WAITING     TERMINATED
  ReceiveCommits                      0              0              0              2              0              0
  SshCommandStart                     0              0              0              2              0              0
  SSH-Interactive-Worker              0              0              0              1              0              0
  sshd-SshServer                      0              0              0              3              1              0
  HTTP                                0              2              0              1             10              0
  H2                                  0              0              0              0             22              0
  Other                               0              3              0             22             18              0
  SSH-Stream-Worker                   0              0              0              3              0              0

Saša Živkov

Jun 4, 2019, 11:51:35 AM
to James Hartig, Repo and Gerrit Discussion
First, some Gerrit-specific hints:

1. Have you checked:
$ ssh ... gerrit show-queue -q -w
Is the queue huge when your Gerrit server is slow?

2. How often do you GC the Git repositories managed by Gerrit and especially how often do you GC the All-Users repository?

3. What is the JVM heap size? Gerrit uses in-memory caches to provide better performance... so having a large enough
heap is important.

On Tue, Jun 4, 2019 at 4:57 PM James Hartig <faste...@gmail.com> wrote:
I'm not very familiar with how to debug java applications as they become slower over time.

What I will write further is nothing specific to Gerrit. This is how any slow Java application can be analyzed.

Typically, you first want to distinguish between a high-CPU and a low-CPU scenario when your Gerrit server is slow.
Use the "top" command to find that out.

1. high cpu usage scenario
Use the "top -H" command to find out which JVM threads are causing the high cpu usage.
Make several JVM thread dumps (waiting a few seconds in between) and then look inside.
Find the threads reported by the "top -H" in the java thread dump and see what they are doing.
You may also find that it is actually the JVM GC threads using the CPU; in that case, increase the JVM heap size as a first remedy.
Otherwise, if you cannot conclude anything from the stack traces of the high-CPU threads,
post the stack traces of the threads reported by "top -H" here.

2. low cpu usage
Java GC is most likely not the cause.
As in case 1, create a few thread dumps and check, for example, what the "HTTP-..." threads are doing.
These threads process the requests from the UI and also git-over-http requests.
Again, if you cannot conclude anything from the stack traces, post them here.
Further, check for deadlock reports in the thread dump(s).

This is by far not a complete troubleshooting guide, but it is what I usually do first in cases like this,
and it can be done quickly without installing any additional tools.
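To make the thread-dump step concrete, here is a minimal sketch. It assumes jstack from the JDK is on PATH and that GERRIT_PID holds the Gerrit java pid (both are placeholders; substitute your real pid):

```shell
#!/bin/sh
# Capture a few thread dumps a few seconds apart (GERRIT_PID is a placeholder).
GERRIT_PID="${GERRIT_PID:-$$}"
for i in 1 2 3; do
  out="threaddump.$i.txt"
  # Fall back to a note if jstack is unavailable, so the loop keeps going.
  jstack "$GERRIT_PID" > "$out" 2>/dev/null || echo "(jstack failed for pid $GERRIT_PID)" > "$out"
  sleep 2
done
ls threaddump.*.txt
```

Comparing the dumps shows which threads stay busy across snapshots.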

Saša

--
--
To unsubscribe, email repo-discuss...@googlegroups.com
More info at http://groups.google.com/group/repo-discuss?hl=en

---
You received this message because you are subscribed to the Google Groups "Repo and Gerrit Discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to repo-discuss...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/repo-discuss/CAM6j61up%3DyA8Bf0q%3D1TCzPn-RSce%2BzsEVO479gvnSn_RAhaNRA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

James Hartig

Jun 4, 2019, 12:54:07 PM
to Saša Živkov, Repo and Gerrit Discussion
Thanks for the help!

TLDR: I'm going to raise the heap by setting container.heapLimit to 2g and see if that helps.

Replies inline:

On Tue, Jun 4, 2019 at 11:51 AM Saša Živkov <ziv...@gmail.com> wrote:
First some Gerrit specific hints:

1. Have you checked:
$ ssh ... gerrit show-queue -q -w
Is the queue huge when your Gerrit server is slow?

Doesn't seem like it:
Task     State        StartTime         Command
------------------------------------------------------------------------------
Queue: WorkQueue
9a5dcd5f 17:25:09.557 May-29 17:25      [delete-project]: Clean up expired git repositories from the archive [/opt/gerrit/data/delete-project]
ba5a1164 23:00:00.003 May-29 17:25      Log File Compressor
------------------------------------------------------------------------------
  2 tasks, 1 worker threads
 

2. How often do you GC the Git repositories managed by Gerrit and especially how often do you GC the All-Users repository?

I have the following settings:
[gc]
  startTime = Fri 8:30
  interval = 1 week
 
Is that what you mean?


3. What is the JVM heap size? Gerrit uses in-memory caches to provide better performance... so having a large enough
heap is important.

I didn't set container.heapLimit at all so I'm not sure.
 

On Tue, Jun 4, 2019 at 4:57 PM James Hartig <faste...@gmail.com> wrote:
I'm not very familiar with how to debug java applications as they become slower over time.

What I will write further is nothing specific to Gerrit. This is how any slow Java application can be analyzed.

Typically, first you want to distinguish between a high-cpu and low-cpu usage scenario when your Gerrit server is slow.
Use the "top" command to find out that.

1. high cpu usage scenario
Use the "top -H" command to find out which JVM threads are causing the high cpu usage.
Make several JVM thread dumps (waiting a few seconds in between) and then look inside.
Find the threads reported by the "top -H" in the java thread dump and see what they are doing.
You may also find out that actually JVM GC threads are using the CPU. In this case increase the JVM heap size as a first remedy.
Otherwise, if you cannot conclude anything from the stack traces of the high-CPU threads,
post the stack traces of the threads reported by "top -H" here.

So the CPU isn't "high" but has increased since this morning:
[attached screenshot: CPU usage graph]

I tried to capture the threads when the CPU spiked:
top - 16:42:04 up 663 days,  2:51,  1 user,  load average: 0.33, 0.42, 0.39
Threads: 344 total,   4 running, 340 sleeping,   0 stopped,   0 zombie
%Cpu(s): 91.5 us,  1.5 sy,  0.0 ni,  6.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem :  7659960 total,   389424 free,  2530256 used,  4740280 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  4826588 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                    
25236 gerrit    20   0 4672728 2.078g  20836 R 87.7 28.4 137:50.51 java                                                                        
25237 gerrit    20   0 4672728 2.078g  20836 R 84.4 28.4 137:47.92 java                                                                        
32480 gerrit    20   0 4191860  27512  12404 S  5.3  0.4   0:00.16 jstack
32487 gerrit    20   0 4191860  27512  12404 S  2.0  0.4   0:00.06 jstack
32488 gerrit    20   0 4191860  27512  12404 S  2.0  0.4   0:00.06 jstack
...

Looking at the jstack output, those two PIDs (25236, 25237) are:

"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00007fb3b401e000 nid=0x6294 runnable

"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x00007fb3b4020000 nid=0x6295 runnable
 
So I'll try raising the heap and see if that helps.
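For reference, the change amounts to this in gerrit.config (as I understand it, container.heapLimit is passed to the JVM as -Xmx; 2g is the value I'm trying):

```
[container]
  heapLimit = 2g
```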

Saša Živkov

Jun 4, 2019, 4:18:41 PM
to James Hartig, Repo and Gerrit Discussion
On Tue, Jun 4, 2019 at 6:53 PM James Hartig <faste...@gmail.com> wrote:
Thanks for the help!

TLDR: I'm going to raise the heap by setting container.heapLimit to 2g and see if that helps.

Replies inline:

On Tue, Jun 4, 2019 at 11:51 AM Saša Živkov <ziv...@gmail.com> wrote:
First some Gerrit specific hints:

1. Have you checked:
$ ssh ... gerrit show-queue -q -w
Is the queue huge when your Gerrit server is slow?

Doesn't seem like it:
Task     State        StartTime         Command
------------------------------------------------------------------------------
Queue: WorkQueue
9a5dcd5f 17:25:09.557 May-29 17:25      [delete-project]: Clean up expired git repositories from the archive [/opt/gerrit/data/delete-project]
ba5a1164 23:00:00.003 May-29 17:25      Log File Compressor
------------------------------------------------------------------------------
  2 tasks, 1 worker threads
 

2. How often do you GC the Git repositories managed by Gerrit and especially how often do you GC the All-Users repository?

I have the following settings:
[gc]
  startTime = Fri 8:30
  interval = 1 week
 
Doing GC once a week is not enough unless you have a very low number of changes.
Please set the interval to 1 day or 12 hours.
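For example, in gerrit.config (the same [gc] section you quoted, just with a shorter interval):

```
[gc]
  startTime = Fri 8:30
  interval = 1 day
```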
 
 
Is that what you mean?
Yes
 


3. What is the JVM heap size? Gerrit uses in-memory caches to provide better performance... so having a large enough
heap is important.

I didn't set container.heapLimit at all so I'm not sure.
This means you are using the default heap size which is likely too low for your Gerrit server.
 
 

On Tue, Jun 4, 2019 at 4:57 PM James Hartig <faste...@gmail.com> wrote:
I'm not very familiar with how to debug java applications as they become slower over time.

What I will write further is nothing specific to Gerrit. This is how any slow Java application can be analyzed.

Typically, first you want to distinguish between a high-cpu and low-cpu usage scenario when your Gerrit server is slow.
Use the "top" command to find out that.

1. high cpu usage scenario
Use the "top -H" command to find out which JVM threads are causing the high cpu usage.
Make several JVM thread dumps (waiting a few seconds in between) and then look inside.
Find the threads reported by the "top -H" in the java thread dump and see what they are doing.
You may also find out that actually JVM GC threads are using the CPU. In this case increase the JVM heap size as a first remedy.
Otherwise, if you cannot conclude anything from the stack traces of the high-CPU threads,
post the stack traces of the threads reported by "top -H" here.

So the CPU isn't "high" but has increased since this morning:
[attached screenshot: CPU usage graph]

I tried to capture the threads when the CPU spiked:
top - 16:42:04 up 663 days,  2:51,  1 user,  load average: 0.33, 0.42, 0.39
Threads: 344 total,   4 running, 340 sleeping,   0 stopped,   0 zombie
%Cpu(s): 91.5 us,  1.5 sy,  0.0 ni,  6.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem :  7659960 total,   389424 free,  2530256 used,  4740280 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  4826588 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                    
25236 gerrit    20   0 4672728 2.078g  20836 R 87.7 28.4 137:50.51 java                                                                        
25237 gerrit    20   0 4672728 2.078g  20836 R 84.4 28.4 137:47.92 java      

Hmm.. how many CPU cores do you have on that machine?
How many http and ssh worker threads have you configured in the gerrit.config?
 
                                                                 
32480 gerrit    20   0 4191860  27512  12404 S  5.3  0.4   0:00.16 jstack
32487 gerrit    20   0 4191860  27512  12404 S  2.0  0.4   0:00.06 jstack
32488 gerrit    20   0 4191860  27512  12404 S  2.0  0.4   0:00.06 jstack
...

Looking at the jstack output it looks like those 2 PIDs are (25236, 25237):

"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00007fb3b401e000 nid=0x6294 runnable

"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x00007fb3b4020000 nid=0x6295 runnable

Yes, the top command reports decimal thread ids while the thread dump shows the hexadecimal nid:
25236 = 0x6294
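A quick way to do the decimal-to-hex conversion is printf, for example:

```shell
# Convert the busy tid from "top -H" (decimal) to the "nid" jstack prints (hex).
printf 'nid=0x%x\n' 25236
# prints: nid=0x6294
```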

This points in the direction of Java GC being the issue due to the low heap size.

 
So I'll try raising the heap and see if that helps.
Once you set Xmx, set Xms to the same value. This will improve Gerrit's startup time.
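In gerrit.config that could look like this (container.heapLimit becomes -Xmx; passing -Xms via container.javaOptions is one way to pin the initial heap; the values are illustrative):

```
[container]
  heapLimit = 2g
  javaOptions = -Xms2g
```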

Luca Milanesio

Jun 4, 2019, 4:27:46 PM
to James Hartig, Luca Milanesio, Repo and Gerrit Discussion, Saša Živkov
Even if you have a low number of changes, All-Users.git needs to be GCed continuously, because it accumulates basically *ANY* review action by *ANY* user on *ANY* repo ... it is very likely to be heavily fragmented.
I would suggest:

a) Shut down Gerrit
b) Check the size of the All-Users.git repo
c) Perform a 'git gc --aggressive' on All-Users.git
d) Check the size of the All-Users.git repo again
e) Start Gerrit again
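A sketch of steps b)-d); the path is an assumption (the default for a site at /opt/gerrit), and Gerrit should be stopped before and started after, as in a) and e):

```shell
#!/bin/sh
# ALL_USERS is an assumed path -- point it at your site's All-Users.git.
ALL_USERS="${ALL_USERS:-/opt/gerrit/git/All-Users.git}"
if [ -d "$ALL_USERS" ]; then
  du -sh "$ALL_USERS"                    # b) size before
  git -C "$ALL_USERS" gc --aggressive    # c) repack aggressively
  du -sh "$ALL_USERS"                    # d) size after
else
  echo "repo not found: $ALL_USERS"
fi
```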

I would also strongly recommend installing the JavaMelody [1] and Prometheus reporter [2] plugins and starting to collect metrics.


 
 
Is that what you mean?
Yes
 


3. What is the JVM heap size? Gerrit uses in-memory caches to provide better performance... so having a large enough
heap is important.

I didn't set container.heapLimit at all so I'm not sure.
This means you are using the default heap size which is likely too low for your Gerrit server.

You should check the size of your repos and their packfiles, and make sure you have enough heap to always keep the packfiles of the most active ones in memory.
Finger in the air: 2 GB is *way too low* anyway.

Why don't you use GerritHub.io with FREE private repositories? You will have plenty of resources and no management hassle.

 
 


James Hartig

Jun 4, 2019, 4:37:01 PM
to Luca Milanesio, Repo and Gerrit Discussion, Saša Živkov
Thanks for all the help so far!

I'll just answer the questions here rather than inline:

All-Users.git before GC: 17MB
All-Users.git after GC: 1MB

I changed the GC interval to once a day and the heap limit to 4 GB.

The server has 2 cores and is only running Gerrit. The thread counts are set to their defaults.

I'm not sure how I missed the Prometheus metrics plugin, but I'll install it.

We looked into GerritHub.io a while ago but haven't made the switch yet. I'll keep it in mind and bring it up with the team again.

Z

Oct 17, 2019, 2:23:14 AM
to Repo and Gerrit Discussion
Hi zivkov,

    'top -H' does not show the full COMMAND.

    How can I show the full command?

Thank you! 

Matthias Sohn

Oct 17, 2019, 6:05:00 PM
to Z, Repo and Gerrit Discussion
On Thu, Oct 17, 2019 at 8:23 AM Z <vista...@gmail.com> wrote:
Hi zivkov,

    'top -H' does not show the full COMMAND.

    How can I show the full command?


Start top -H and then press 'c' to toggle between program name and command line, or use the -c option [1].

Do you run gc on all repositories on a regular basis?
2 GB and 2 cores is a pretty small setup.

To check whether your server is CPU bound, look at the load average shown by top [2].
If the load average is higher than the CPU count, demand for CPU time exceeds the available cores,
which may cause performance issues.
To get a better understanding of the system's resource usage you can follow [3].
If you want more, start here [4].
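On Linux the load-versus-CPU comparison can be scripted roughly like this (just a heuristic, per the caveat above):

```shell
#!/bin/sh
# Compare the 1-minute load average against the number of CPUs.
load=$(cut -d' ' -f1 /proc/loadavg)
cpus=$(nproc)
echo "load=$load cpus=$cpus"
awk -v l="$load" -v c="$cpus" \
  'BEGIN { if (l + 0 > c + 0) print "demand exceeds available CPUs";
           else print "load within CPU count" }'
```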

Regarding Gerrit configuration, check the size of the JGit page cache (aka window cache) in gerrit.config,
which is set with the core.packedGitLimit option.
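For example, in gerrit.config (the value is illustrative; size it to your hot repositories):

```
[core]
  packedGitLimit = 2g
```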


-Matthias

Matthias Sohn

Oct 17, 2019, 6:13:06 PM
to Z, Repo and Gerrit Discussion
On Fri, Oct 18, 2019 at 12:04 AM Matthias Sohn <matthi...@gmail.com> wrote:
On Thu, Oct 17, 2019 at 8:23 AM Z <vista...@gmail.com> wrote:
Hi zivkov,

    'top -H' does not show the full COMMAND.

    How can I show the full command?


Start top -H and then press 'c' to toggle between program name and command line, or use the -c option [1].

Do you run gc on all repositories on a regular basis?
2 GB and 2 cores is a pretty small setup.

To check whether your server is CPU bound, look at the load average shown by top [2].
If the load average is higher than the CPU count, demand for CPU time exceeds the available cores,
which may cause performance issues.
To get a better understanding of the system's resource usage you can follow [3].
If you want more, start here [4].

Regarding Gerrit configuration, check the size of the JGit page cache (aka window cache) in gerrit.config,
which is set with the core.packedGitLimit option.

Ideally, core.packedGitLimit matches the total size of your hot repositories, i.e. those serving a lot of git requests (fetch, push).
Max heap size should be around twice that, and you need to leave some RAM for OS and file system caches and other processes
running on the server. The number of CPUs needed mostly depends on the number of concurrent git requests your server has to serve.

Most performance issues are with large repositories, so keep them smaller than 1 GB if you can. Avoid large binary files in git repositories,
since they are compressed less efficiently by git's pack algorithms, which bloats repository size and slows down transports.

Z

Oct 17, 2020, 3:01:29 AM
to Repo and Gerrit Discussion
HTTP-1071
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:445)
com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
com.google.gerrit.lucene.LuceneChangeIndex$ChangeDataResults.toList(LuceneChangeIndex.java:386)
com.google.gerrit.lucene.LuceneChangeIndex$ChangeDataResults.iterator(LuceneChangeIndex.java:380)
com.google.common.collect.Iterables$8.iterator(Iterables.java:708)
com.google.gerrit.server.index.change.IndexedChangeQuery$1.iterator(IndexedChangeQuery.java:103)
com.google.common.collect.Iterables$4.iterator(Iterables.java:543)
com.google.common.collect.Iterables$8.iterator(Iterables.java:708)
com.google.common.collect.Iterables.iterators(Iterables.java:508)
com.google.common.collect.Iterables.access$100(Iterables.java:61)
com.google.common.collect.Iterables$2.iterator(Iterables.java:498)
com.google.gerrit.server.query.AndSource.readImpl(AndSource.java:121)
com.google.gerrit.server.query.AndSource.read(AndSource.java:85)
com.google.gerrit.server.query.QueryProcessor.query(QueryProcessor.java:191)
com.google.gerrit.server.query.QueryProcessor.query(QueryProcessor.java:139)
com.google.gerrit.server.query.change.QueryChanges.query(QueryChanges.java:123)
com.google.gerrit.server.query.change.QueryChanges.apply(QueryChanges.java:94)
com.google.gerrit.server.query.change.QueryChanges.apply(QueryChanges.java:38)
com.google.gerrit.httpd.restapi.RestApiServlet.service(RestApiServlet.java:334)
javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286)
com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276)
com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181)
com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
com.google.gerrit.httpd.GetUserFilter.doFilter(GetUserFilter.java:82)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.gwtexpui.server.CacheControlFilter.doFilter(CacheControlFilter.java:73)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.gerrit.httpd.RunAsFilter.doFilter(RunAsFilter.java:122)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.gerrit.httpd.RequestMetricsFilter.doFilter(RequestMetricsFilter.java:60)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.gerrit.httpd.AllRequestFilter$FilterProxy$1.doFilter(AllRequestFilter.java:136)
net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:201)
net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:178)
com.googlesource.gerrit.plugins.javamelody.GerritMonitoringFilter.doFilter(GerritMonitoringFilter.java:65)
com.google.gerrit.httpd.AllRequestFilter$FilterProxy$1.doFilter(AllRequestFilter.java:132)
com.google.gerrit.httpd.AllRequestFilter$FilterProxy.doFilter(AllRequestFilter.java:138)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.gerrit.httpd.RequestContextFilter.doFilter(RequestContextFilter.java:75)
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120)
com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135)
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:95)
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
org.eclipse.jetty.server.Server.handle(Server.java:499)
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
java.lang.Thread.run(Thread.java:748)

On Tuesday, June 4, 2019 at 11:51:35 PM UTC+8, zivkov wrote: