On 29 Mar 2019, at 00:09, tcip...@wikimedia.org wrote:

I upgraded to Gerrit 2.15.12 on Tuesday and all seemed fine for the remainder of the day. Then yesterday and today Gerrit went down. Traffic looked pretty normal around that time, but suddenly active threads sky-rockets and everything locks up. By the time I get to the server all I get from jstack is:

Attaching to process ID 13929, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.181-b13
Deadlock Detection:

Can't print deadlocks:
Unable to deduce type of thread from address 0x00007f3a5445e800 (expected type JavaThread, CompilerThread, ServiceThread, JvmtiAgentThread, or SurrogateLockerThread)

Yesterday, around the crash, load on the server was particularly high. We had disabled our normal "git gc" due to concerns regarding jGit bugs. I chalked up the crash to server load and ran git gc yesterday evening. Today the server load was low, but the Active Threads suddenly spiked from a daily 95th percentile of 14 up to 58 and then the server crashed.

httpd.maxThreads is 60, index.batchThreads is 1.
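(A side note on the jstack failure above: that "Unable to deduce type of thread" error is often a sign that the serviceability-agent attach path was used, e.g. jstack -F, or that the jstack binary doesn't match the target JVM. A minimal sketch of how one might grab a usable dump instead -- the "gerrit" user name and the gerrit.war process lookup are assumptions about this particular setup:)

    # assumptions: the daemon runs as user "gerrit" and its command line contains gerrit.war
    pid=$(pgrep -u gerrit -f gerrit.war | head -n1)
    sudo -u gerrit jstack -l "$pid" > /tmp/gerrit-threads.$(date +%s).txt   # plain attach, no -F
    # or, using jcmd from the same JDK the daemon runs on
    sudo -u gerrit jcmd "$pid" Thread.print -l > /tmp/gerrit-jcmd-threads.txt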
There don't seem to be many changes between 2.15.11 and 2.15.12 aside from the JGit update; could that be causing this issue?
Anything I could be missing? Any information I can provide that helps track down this issue (if the issue is somewhere in jGit)?
Also our ops summarised here https://phabricator.wikimedia.org/P8313
Any corresponding spike around the incoming traffic?
Do you guys collect metrics? JavaMelody graphs?
> Can't print deadlocks:
> Unable to deduce type of thread from address 0x00007f3a5445e800 (expected type JavaThread, CompilerThread, ServiceThread, JvmtiAgentThread, or SurrogateLockerThread)
>
> Yesterday, around the crash, load on the server was particularly high. We had disabled our normal "git gc" due to concerns regarding jGit bugs. I chalked up the crash to server load and ran git gc yesterday evening. Today the server load was low, but the Active Threads suddenly spiked from a daily 95th percentile of 14 up to 58 and then the server crashed.
>
> httpd.maxThreads is 60, index.batchThreads is 1.

Can you please share the full gerrit.config?
> There don't seem to be many changes between 2.15.11 and 2.15.12 aside from the JGit update; could that be causing this issue?

I doubt it; it contained only two fixes related to the management of concurrent GC and normal Git operations.

Are you running JGit GC (or Git GC) concurrently with the incoming traffic?
On 29 Mar 2019, at 13:38, tcip...@wikimedia.org wrote:
On Friday, March 29, 2019 at 1:48:54 AM UTC-6, lucamilanesio wrote:

> Any corresponding spike around the incoming traffic?
> Do you guys collect metrics? JavaMelody graphs?

No corresponding spike in incoming traffic. Added a couple of graphs from JavaMelody that may be of interest: https://imgur.com/a/HZkOeUE

I was watching current requests in JavaMelody right when the first crash occurred. We had 7 upload-packs happening, then, suddenly, a bunch of suggested reviewer requests that were hanging forever until all our threads were gone.
>> Can't print deadlocks:
>> Unable to deduce type of thread from address 0x00007f3a5445e800 (expected type JavaThread, CompilerThread, ServiceThread, JvmtiAgentThread, or SurrogateLockerThread)
>>
>> Yesterday, around the crash, load on the server was particularly high. We had disabled our normal "git gc" due to concerns regarding jGit bugs. I chalked up the crash to server load and ran git gc yesterday evening. Today the server load was low, but the Active Threads suddenly spiked from a daily 95th percentile of 14 up to 58 and then the server crashed.
>>
>> httpd.maxThreads is 60, index.batchThreads is 1.
>
> Can you please share the full gerrit.config?

Let me know if there are any details you want from https://github.com/wikimedia/puppet/blob/production/modules/gerrit/templates/gerrit.config.erb that are missing/filled in via puppet. The only options that I can think of that may be relevant and not in the template are heaplimit (20g) and packedgitopenfiles (20000).
>> There don't seem to be many changes between 2.15.11 and 2.15.12 aside from the JGit update; could that be causing this issue?
>
> I doubt it; it contained only two fixes related to the management of concurrent GC and normal Git operations.
>
> Are you running JGit GC (or Git GC) concurrently with the incoming traffic?

I ran it manually for a couple of large repos right after restarting after the first crash. If you check out our grafana [0] our load was a bit high a few days previously since git gc was off. I ran gc on two large repos concurrently with incoming traffic (one at a time), which brought load down on the subsequent days by quite a bit. The 2nd crash happened roughly 30 hours later.

One other detail: we also upgraded plugins at this time to their latest stable 2.15: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/software/gerrit/+/eb4f5cfc134d094aa4cc4694576facc86dbfd4f7
We are running with some backports, mainly David O.'s change for rules_closure https://gerrit-review.googlesource.com/c/gerrit/+/218584 (which was merged into the 2.15 branch). We also installed the readonly plugin at the same time as the upgrade.
On 29 Mar 2019, at 13:38, tcip...@wikimedia.org wrote:
> On Friday, March 29, 2019 at 1:48:54 AM UTC-6, lucamilanesio wrote:
>
>> Any corresponding spike around the incoming traffic?
>> Do you guys collect metrics? JavaMelody graphs?
>
> No corresponding spike in incoming traffic. Added a couple of graphs from JavaMelody that may be of interest: https://imgur.com/a/HZkOeUE
>
> I was watching current requests in JavaMelody right when the first crash occurred. We had 7 upload-packs happening, then, suddenly, a bunch of suggested reviewer requests that were hanging forever until all our threads were gone.

The 7 concurrent upload-packs could not block 60 threads, but the "bunch of suggested reviewer requests" could be the culprit.
You would need to check the incoming HTTP traffic of the reviewer-suggestions API calls and try to reproduce the problem. Of course, in your case, reproducing means you risk crashing the server again :-(

You need to get a stacktrace of what those threads were doing and waiting for.
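(One rough way to quantify those reviewer-suggestion calls is to count them in Gerrit's httpd_log around the incident; a sketch, assuming the default httpd_log location under the site's logs/ directory, with $GERRIT_SITE a placeholder for the actual site path:)

    # count suggest_reviewers requests per minute (field 4 of httpd_log is the [timestamp])
    grep 'suggest_reviewers' "$GERRIT_SITE/logs/httpd_log" \
      | awk '{print $4}' | cut -c2-18 \
      | sort | uniq -c | sort -rn | head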
>>> Can't print deadlocks:
>>> Unable to deduce type of thread from address 0x00007f3a5445e800 (expected type JavaThread, CompilerThread, ServiceThread, JvmtiAgentThread, or SurrogateLockerThread)
>>>
>>> Yesterday, around the crash, load on the server was particularly high. We had disabled our normal "git gc" due to concerns regarding jGit bugs. I chalked up the crash to server load and ran git gc yesterday evening. Today the server load was low, but the Active Threads suddenly spiked from a daily 95th percentile of 14 up to 58 and then the server crashed.
>>>
>>> httpd.maxThreads is 60, index.batchThreads is 1.
>>
>> Can you please share the full gerrit.config?
>
> Let me know if there are any details you want from https://github.com/wikimedia/puppet/blob/production/modules/gerrit/templates/gerrit.config.erb that are missing/filled in via puppet. The only options that I can think of that may be relevant and not in the template are heaplimit (20g) and packedgitopenfiles (20000).

I can't see the sshd.threads settings anywhere ...
>>> There don't seem to be many changes between 2.15.11 and 2.15.12 aside from the JGit update; could that be causing this issue?
>>
>> I doubt it; it contained only two fixes related to the management of concurrent GC and normal Git operations.
>>
>> Are you running JGit GC (or Git GC) concurrently with the incoming traffic?
>
> I ran it manually for a couple of large repos right after restarting after the first crash. If you check out our grafana [0] our load was a bit high a few days previously since git gc was off. I ran gc on two large repos concurrently with incoming traffic (one at a time), which brought load down on the subsequent days by quite a bit. The 2nd crash happened roughly 30 hours later.
>
> One other detail: we also upgraded plugins at this time to their latest stable 2.15: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/software/gerrit/+/eb4f5cfc134d094aa4cc4694576facc86dbfd4f7

Mmm ... I believe that is more likely the cause than the Gerrit upgrade. When doing upgrades, you should introduce one change at a time; otherwise, if things fail, you can't tell which change caused the failure.
- Any errors in the error_log after the crash?
- How does the used heap size / total memory allocated by the JVM look over time before the crash, in relation to the max heap size and available physical memory?
- Is it crashing with OOM?
- Do you limit object size to prevent someone from uploading humongous files?
- Do you serve any repositories for which git-sizer raises potential issues [1]? (See the sketch after this list.)
- Why do you allow idle SSH connections to stay open for half a day?
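(For the git-sizer question above, a minimal sketch of how to scan the bare repositories; the base path matches the gerrit.basePath quoted later in this thread, and everything else is an assumption about this particular setup:)

    # git-sizer: https://github.com/github/git-sizer
    # walk the bare repos under the Gerrit base path and report potential problem areas
    find /srv/gerrit/git -type d -name '*.git' -prune -print | while read -r repo; do
      echo "== $repo"
      (cd "$repo" && git-sizer --verbose)
    done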
On 5 Apr 2019, at 10:18, Luca Milanesio <Luca.Mi...@gmail.com> wrote:

On 5 Apr 2019, at 09:49, tcip...@wikimedia.org wrote:

> I re-rolled-forward to 2.15.12 on Wednesday. Encountered what I believe is the same problem at 07:25 UTC on Friday.
>
> I did grab a threaddump this time: https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTkvMDQvNS8tLWdlcnJpdC0yMDE5LTA0LTA1LWR1bXAuY2xlYW4tLTctNTEtMTk=

There are 3 non-released locks on the accounts cache:

That blocks basically *everyone* doing any authentication; in your case you have 177 threads blocked for that:

That has nothing to do with the JGit upgrade, but seems more related to a Guava cache problem.
On 5 Apr 2019, at 10:18, Luca Milanesio <Luca.M...@gmail.com> wrote:

> On 5 Apr 2019, at 09:49, tcip...@wikimedia.org wrote:
>
>> I re-rolled-forward to 2.15.12 on Wednesday. Encountered what I believe is the same problem at 07:25 UTC on Friday.
>>
>> I did grab a threaddump this time: https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTkvMDQvNS8tLWdlcnJpdC0yMDE5LTA0LTA1LWR1bXAuY2xlYW4tLTctNTEtMTk=
>
> There are 3 non-released locks on the accounts cache:
>
> That blocks basically *everyone* doing any authentication; in your case you have 177 threads blocked for that:
>
> That has nothing to do with the JGit upgrade, but seems more related to a Guava cache problem.

I checked v2.15.11 vs. 2.15.12; they both have the same Guava version though.

Can you tell *exactly* all the differences between the two deployments? *ONLY* the Gerrit version?
On 5 Apr 2019, at 19:38, thomasmulhall410 via Repo and Gerrit Discussion <repo-d...@googlegroups.com> wrote:

Could this be fixed in 2.16? I see a lot of changes to AccountCacheImpl; see https://github.com/GerritCodeReview/gerrit/commits/stable-2.16/java/com/google/gerrit/server/account/AccountCacheImpl.java
I may actually have found a problem in JGit: it seems that when core.trustFolderStat is true, the search for an object inside a pack list can go into an infinite loop.

Do you guys have that flag set in production?

The regression could be associated with a recent fix I posted to JGit.
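(For checking that flag on the server: where JGit picks up its configuration depends on the setup -- typically the system /etc/gitconfig or the daemon user's ~/.gitconfig -- so the sketch below assumes the daemon runs as a "gerrit" user. No output means the option is not set explicitly and JGit's built-in default applies.)

    # user-level config of the daemon user (-H makes sudo use that user's HOME)
    sudo -H -u gerrit git config --global --get core.trustfolderstat
    # system-level config
    git config --system --get core.trustfolderstat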
Luca.
On Saturday, April 6, 2019 at 12:55:17 AM UTC+1, thomasmu...@yahoo.com wrote:

It appears we had the same problem on 2.15.11 :( (it only happened around 8 pm BST today, if JavaMelody uses local time).
On 8 Apr 2019, at 14:37, thomasmulhall410 via Repo and Gerrit Discussion <repo-d...@googlegroups.com> wrote:

Looking at https://github.com/eclipse/jgit/search?q=trustfolderstat&unscoped_q=trustfolderstat it seems the trustfolderstat default is true.
On 8 Apr 2019, at 15:08, thomasmulhall410 via Repo and Gerrit Discussion <repo-d...@googlegroups.com> wrote:

You mentioned in your earlier comment that the regression can be reproduced when trustfolderstat = true. Since we use the default, that means trustfolderstat is true for us.
On Monday, April 8, 2019 at 2:39:41 PM UTC+1, lucamilanesio wrote:

> On 8 Apr 2019, at 14:37, thomasmulhall410 via Repo and Gerrit Discussion <repo-d...@googlegroups.com> wrote:
>
> Looking at https://github.com/eclipse/jgit/search?q=trustfolderstat&unscoped_q=trustfolderstat it seems the trustfolderstat default is true.

Yes, so your issue is a different or a new one :-(

Luca.
Gerrit Code Review        2.15.12-12-g606a5d50c3      now     11:32:30 UTC
                                                      uptime  14 hrs 23 min

  Name                          |Entries              |  AvgGet |Hit Ratio|
                                |   Mem   Disk   Space|         |Mem  Disk|
--------------------------------+---------------------+---------+---------+
  accounts                      |  1024               |  19.0ms | 99%     |
  adv_bases                     |                     |         |         |
  change_notes                  |   194               |   2.0ms | 64%     |
  changeid_project              |   447               |         | 75%     |
  changes                       |                     |         |         |
  groups                        |     1               | 664.4ms |  0%     |
  groups_bymember               |    97               |  25.5ms | 98%     |
  groups_byname                 |                     |         |         |
  groups_bysubgroup             |   625               |   5.9ms | 99%     |
  groups_byuuid                 |  1556               |  23.0ms | 99%     |
  groups_external               |     1               |  16.9ms | 99%     |
  groups_subgroups              |     1               |  16.9ms |  0%     |
  ldap_group_existence          |     1               |  53.5ms | 87%     |
  ldap_groups                   |   193               |  60.8ms | 97%     |
  ldap_groups_byinclude         |                     |         |         |
  ldap_usernames                |    36               |   4.1ms | 87%     |
  permission_sort               |  1024               |         | 99%     |
  plugin_resources              |    22               |         | 99%     |
  project_list                  |     1               | 107.6ms | 99%     |
  projects                      |  2048               |  16.0ms | 94%     |
  sshkeys                       |    86               |  43.5ms | 99%     |
  static_content                |    44               |   1.7ms | 70%     |
  lfs-lfs_project_locks         |                     |         |         |
D change_kind                   | 13797 113538  51.21m|   6.5ms | 92%  99%|
D conflicts                     |   749  42410  38.03m|         | 78%  99%|
D diff                          |  2414  42811  72.59m|   9.1ms | 97%  99%|
D diff_intraline                |   563  24206  31.02m|  24.6ms | 30%  99%|
D diff_summary                  |  2709  27973  15.34m|   6.2ms | 84% 100%|
D git_tags                      |     8    547  22.51m|         |  6% 100%|
D mergeability                  |  8736 198675 131.63m| 182.0ms | 19%  87%|
D oauth_tokens                  |                0.00k|         |         |
D web_sessions                  |   121    834 343.63k|         | 92%   2%|

SSH:      4  users, oldest session started 0 ms ago
Tasks:    6  total = 1 running + 0 ready + 5 sleeping
Mem: 18.88g total = 10.12g used + 4.76g free + 4.00g buffers
     18.88g max
      2121  open files

Threads: 16 CPUs available, 187 threads
On 18 Apr 2019, at 13:47, tcip...@wikimedia.org wrote:

On Wednesday, April 17, 2019 at 6:02:50 PM UTC-6, tcip...@wikimedia.org wrote:

This happened again today. We were running 2.15.8 at the time (we attempted another downgrade to see if that resolved the issue). We are now back on 2.15.12.

Today, using 2.15.8, I ran into the exact same symptoms as the thread starvation on 2019-03-28 that started this task. That is, git-upload-pack taking longer and longer, HTTP threads piling up more and more, eventually basic operations like suggest_reviewers taking forever until there were no more HTTP threads left to allocate.

I captured a threaddump: https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTkvMDQvMTcvLS1qc3RhY2stMTktMDQtMTctMjAtNTgtMDIuZHVtcC0tMjEtNDctNA==

There seems to be a locked packfile blocking at least one git-upload-pack. This seems not entirely uncommon in some periodic threaddumps I've been running.

The SendEmail thread seems to be parked yet blocking quite a few HTTP threads attempting to lock the accountcache. This seems like the accountcache locking seen previously in this thread.
I had the thought that this may be due to JVM GC thrashing [0]. As such we've been trying to fine-tune our JVM and Gerrit parameters to mitigate the issue. Does this seem like a plausible explanation of the behavior seen in the threaddump? It does seem like what previous questions in this thread were alluding to.
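(If it is GC thrashing, the JVM can be made to say so directly. A sketch of what one could add to gerrit.config -- the site path and log location are placeholders, and the flags are the Java 8 ones matching the JVM version shown earlier in the thread:)

    # enable GC logging and a heap dump on OutOfMemoryError, then restart Gerrit
    site=/srv/gerrit   # placeholder for the actual Gerrit site path
    git config -f "$site/etc/gerrit.config" --add container.javaOptions "-Xloggc:$site/logs/jvm_gc.log"
    git config -f "$site/etc/gerrit.config" --add container.javaOptions "-XX:+PrintGCDetails"
    git config -f "$site/etc/gerrit.config" --add container.javaOptions "-XX:+PrintGCDateStamps"
    git config -f "$site/etc/gerrit.config" --add container.javaOptions "-XX:+HeapDumpOnOutOfMemoryError"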
Our Gerrit box is a 16-core machine with 32 GB of RAM, with 20 GB allocated to the heap.
I have been trying to tune some of the parameters recently (note the updated sshd.threads, sshd.batchThreads options in our config).
[gerrit]
    basePath = /srv/gerrit/git
    canonicalWebUrl = <canonicalWebUrl>
[groups]
    newGroupsVisibleToAll = true
[http]
    addUserAsResponseHeader = true
[httpd]
    listenUrl = proxy-https://<listenURL>
    maxQueued = 500
    minThreads = 10
    maxThreads = 60
    maxWait = 5 min

[index]
    type = LUCENE
    batchThreads = 1
On 19 Apr 2019, at 16:25, 'Doug Robinson' via Repo and Gerrit Discussion <repo-d...@googlegroups.com> wrote:

How often are you doing GC/repack on your repos?
On 18 Apr 2019, at 13:47, tcip...@wikimedia.org wrote:

> On Wednesday, April 17, 2019 at 6:02:50 PM UTC-6, tcip...@wikimedia.org wrote:
>
> This happened again today. We were running 2.15.8 at the time (we attempted another downgrade to see if that resolved the issue). We are now back on 2.15.12.
>
> Today, using 2.15.8, I ran into the exact same symptoms as the thread starvation on 2019-03-28 that started this task. That is, git-upload-pack taking longer and longer, HTTP threads piling up more and more, eventually basic operations like suggest_reviewers taking forever until there were no more HTTP threads left to allocate.
>
> I captured a threaddump: https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTkvMDQvMTcvLS1qc3RhY2stMTktMDQtMTctMjAtNTgtMDIuZHVtcC0tMjEtNDctNA==
>
> There seems to be a locked packfile blocking at least one git-upload-pack. This seems not entirely uncommon in some periodic threaddumps I've been running.
>
> The SendEmail thread seems to be parked yet blocking quite a few HTTP threads attempting to lock the accountcache. This seems like the accountcache locking seen previously in this thread.

You are using an external SMTP server and its latency may impact your Gerrit health status.

I recall Shawn always mentioning "Gerrit isn't an MTA, please use sendmail and set SMTP server to localhost".
> I had the thought that this may be due to JVM GC thrashing [0]. As such we've been trying to fine-tune our JVM and Gerrit parameters to mitigate the issue. Does this seem like a plausible explanation of the behavior seen in the threaddump? It does seem like what previous questions in this thread were alluding to.

You would see it from the JavaMelody heap and GC time graphs; can you share them?
> Our Gerrit box is a 16-core machine with 32 GB of RAM, with 20 GB allocated to the heap.

It really depends on the size of your repos. I see you have *over* 2048 repos; not sure how big and active they are.

You also have over 1k active concurrent users, all to be served by only 16 cores?
> I have been trying to tune some of the parameters recently (note the updated sshd.threads, sshd.batchThreads options in our config).

Are you running a single master? Have you traced the growth of users / projects / traffic over time?
> [gerrit]
>     basePath = /srv/gerrit/git
>     canonicalWebUrl = <canonicalWebUrl>
> [groups]
>     newGroupsVisibleToAll = true
> [http]
>     addUserAsResponseHeader = true
> [httpd]
>     listenUrl = proxy-https://<listenURL>
>     maxQueued = 500
>     minThreads = 10
>     maxThreads = 60
>     maxWait = 5 min

Really? Nobody will ever wait for 5 minutes for their browser to render the page. I would set maxWait to 60 s.
> [index]
>     type = LUCENE
>     batchThreads = 1

That seems very low: have you checked show-queue to see if you have an accumulation of past batches of changes to reindex?
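(A quick way to check for such a reindexing backlog, and to apply the maxWait suggestion above; the host, port and site path below are placeholders for this installation:)

    # look for queued or backed-up index tasks
    ssh -p 29418 admin@gerrit.example.org gerrit show-queue -w --by-queue | grep -i index

    # shorten the HTTP wait as suggested, then restart Gerrit
    git config -f /srv/gerrit/etc/gerrit.config httpd.maxWait "60 s"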
On 8 May 2019, at 21:40, Tyler Cipriani <tcip...@wikimedia.org> wrote:

We had a SendEmail thread blocking all HTTP requests again today. Once again HTTP requests were waiting on this SendEmail thread to release its lock:

We've modified our sendmail config grasping at what could be wrong here -- specifically we've lowered the connectTimeout and upped the threadPoolSize to 2:

[sendemail]
    includeDiff = true
    connectTimeout = 30 sec
    smtpServer = localhost
    smtpEncryption = none
    threadPoolSize = 2
Some things I checked during this last outage:
- gerrit show-queue -w --by-queue does not mention sendemail
- There are no emails in the local exim queue (according to mailq) when this happens
- The exim4 log shows nothing of interest
- According to lsof there are no smtp tcp connections (lsof -p [pid of gerrit] | grep tcp | grep smtp shows nothing) while this is happening
There are a number of errors of the format:

[2019-05-08 20:30:48,224] [sshd-SshServer[6addfa22]-nio2-thread-6] WARN org.apache.sshd.server.session.ServerSessionImpl : exceptionCaught(ServerSessionImpl[null@/X.X.X.X:54052])[state=Opened] IOException: Connection reset by peer

in the logs during this problem; however, that may be a red herring, not sure.
The first time I noticed it happening was after the upgrade to 2.15.12, but subsequent downgrades (and subsequent re-upgrades) have seemed to make no difference. We've right-sized our caches, upped our heap (now 24G), changed to G1GC from parallel, lowered our timeouts, and lowered our parallel connection limit -- these tweaks have helped performance, and most of the time our graphs look much better than they did previously, but nothing has addressed this problem.
I'm at a bit of a loss as to why this keeps happening. Anything else I should be checking when this happens that might give more insight?
On 8 May 2019, at 21:40, Tyler Cipriani <tcip...@wikimedia.org> wrote:

> We had a SendEmail thread blocking all HTTP requests again today. Once again HTTP requests were waiting on this SendEmail thread to release its lock:
>
> We've modified our sendmail config grasping at what could be wrong here -- specifically we've lowered the connectTimeout and upped the threadPoolSize to 2:
>
> [sendemail]
>     includeDiff = true
>     connectTimeout = 30 sec
>     smtpServer = localhost
>     smtpEncryption = none
>     threadPoolSize = 2

To be honest with you, if you're connecting to localhost, 30s is *way too much* as a timeout. If you can't connect to localhost in 1s, you have big problems with your local sockets.

Have you checked you're not running out of file descriptors?
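(For the file-descriptor question, a sketch of what to check on the box while the hang is happening -- the process lookup by gerrit.war is an assumption about how the daemon is started; the show-caches output quoted earlier also reports the number of open files, for comparison:)

    # current vs. allowed open file descriptors for the Gerrit JVM
    pid=$(pgrep -f gerrit.war | head -n1)
    ls /proc/"$pid"/fd | wc -l
    grep 'Max open files' /proc/"$pid"/limits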
Some things I checked during this last outage:
- gerrit show-queue -w --by-queue does not mention sendemail
- There are no emails in the local exim queue (according to mailq) when this happens
- The exim4 log shows nothing of interest
- According to lsof there are no smtp tcp connections (lsof -p [pid of gerrit] | grep tcp | grep smtp shows nothing) while this is happening
> There are a number of errors of the format:
>
> [2019-05-08 20:30:48,224] [sshd-SshServer[6addfa22]-nio2-thread-6] WARN org.apache.sshd.server.session.ServerSessionImpl : exceptionCaught(ServerSessionImpl[null@/X.X.X.X:54052])[state=Opened] IOException: Connection reset by peer
>
> in the logs during this problem; however, that may be a red herring, not sure.

That's quite common: it just says that some remote Git/SSH connections gave up.
> The first time I noticed it happening was after the upgrade to 2.15.12, but subsequent downgrades (and subsequent re-upgrades) have seemed to make no difference. We've right-sized our caches, upped our heap (now 24G), changed to G1GC from parallel, lowered our timeouts, and lowered our parallel connection limit -- these tweaks have helped performance, and most of the time our graphs look much better than they did previously, but nothing has addressed this problem.

Yes, the problem isn't related to your Gerrit sizing, but rather to the communication with your local SMTP server.
> I'm at a bit of a loss as to why this keeps happening. Anything else I should be checking when this happens that might give more insight?

Can you share your open files graph in JavaMelody? Is there a correlation between the failures and peaks of open file utilisation?
HTH,
Luca.
Just to update this thread: this problem continues to happen -- currently on 2.15.14 -- today: