About Jenkins slowness on version 2.138 in a multibranch Pipeline project

Mahesh Wabale

Jan 17, 2020, 7:41:44 AM
to Jenkins Users
Hi Team,

We are observing Jenkins slowness issues with Jenkins version 2.138 in a multibranch Pipeline project.
We are also using NFS 4.1.

After a Jenkins restart it works for a few days, but at some point it gets stuck and becomes unresponsive. From our observation, only a restart resolves the issue for the next few days, so we need to restart frequently. Has anyone observed similar issues, or is this an open bug in Jenkins?


Mark Waite

Jan 19, 2020, 7:24:38 PM
to Jenkins Users
I'm not aware of an open bug in Jenkins related to being slow over NFS, though Jenkins is quite disk-intensive and runs best with local disk drives.

You might refer to the following articles for more information:
  • Jenkins pipeline durability settings documentation ("Will Higher-Performance Durability Settings Help Me? Yes, if your Jenkins instance uses NFS, magnetic storage, runs many Pipelines at once, or shows high iowait.")
  • CloudBees NFS guide (multiple pages of tuning recommendations)




--
Thanks!
Mark Waite

Mahesh Wabale

Jan 20, 2020, 5:14:46 AM
to Jenkins Users
Thanks Mark,

We will check whether it helps. We currently have the "maximum durability" setting configured and are planning to change it to "performance-optimized". We have also observed that disk I/O is sometimes very high.
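
Since the durability guidance quoted above keys off high iowait, we are confirming the disk pressure on the master with iostat before and after the change. A minimal sketch, assuming the sysstat package is installed on the master host:

# Report extended per-device I/O statistics every 5 seconds.
# Sustained high %iowait (CPU line) or %util (device lines) while
# pipelines are running points at disk or NFS latency.
iostat -x 5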


Mahesh Wabale

Jan 20, 2020, 8:49:48 AM
to jenkins...@googlegroups.com
We are using the following JAVA_OPTS config in the Jenkins deployment:

"name": "JAVA_OPTS",
"value": "-XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:G1SummarizeRSetStatsPeriod=1 -Xms2048m -Xmx8192m"

48 GB of RAM is allocated to the Jenkins master pod. From initial observation GC is working correctly and heap usage does not go beyond 8 GB, but the RAM usage of the master pod keeps increasing day by day. Sometimes it reaches the full 48 GB; it seems memory is being taken for cache. How can we control the RAM usage, or is it possible to free cache memory from the Jenkins Docker image, which could solve my problem?


Mark Waite

Jan 20, 2020, 9:37:33 AM
to Jenkins Users
You might install the Jenkins Health Advisor by CloudBees and see if it has suggestions for your instance.  It will check for conditions that have been found to cause problems in other installations and report them to you in a daily e-mail message.




--
Thanks!
Mark Waite

Mahesh Wabale

Feb 4, 2020, 12:38:56 AM
to Jenkins Users
Thanks Mark, we will definitely update these settings and observe the performance.

For now the fix below worked for me; I have not observed any issues with my application recently.

1. Upgrade the JDK 8 version to the recommended build (1.8.0_222-b10 or later).
2. Install the Monitoring plugin in Jenkins.
3. Update the NFS settings to the recommended values (RPCNFSDCOUNT=16, default is 8; sunrpc.tcp_slot_table_entries = 128); a sketch of applying these follows this list.
4. Auto clean-up of sessions with a scheduler (https://wiki.jenkins.io/display/JENKINS/Invalidate+Jenkins+HTTP+sessions).
5. Add a monitoring script to watch active threads, JVM usage, and thread deadlocks (https://wiki.jenkins.io/display/JENKINS/Monitoring+Scripts).
6. Free cache memory whenever it reaches the max memory limit; the steps below helped me clean cache memory for the Jenkins master pod.
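
A minimal sketch of applying the NFS values from step 3, assuming a RHEL/CentOS-style setup that reads RPCNFSDCOUNT from /etc/sysconfig/nfs (Debian/Ubuntu uses /etc/default/nfs-kernel-server instead, and the service unit name may differ):

# On the NFS server: raise the NFS daemon thread count from 8 to 16,
# then restart the server so the new count takes effect.
sudo sed -i 's/^#\?RPCNFSDCOUNT=.*/RPCNFSDCOUNT=16/' /etc/sysconfig/nfs
sudo systemctl restart nfs-server

# On the NFS client (the Jenkins master host): raise the RPC slot
# table size and persist it across reboots.
echo 'sunrpc.tcp_slot_table_entries = 128' | sudo tee /etc/sysctl.d/90-nfs-tuning.conf
sudo sysctl -p /etc/sysctl.d/90-nfs-tuning.conf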

I am able to free cache memory for the Jenkins master pod's Docker container. As per Docker's behaviour, the container takes its resources from the Kubernetes node where it is deployed.

You can verify memory usage with the commands below from inside the Docker container:

bash-4.4$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes      # cgroup memory limit for the container

bash-4.4$ cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes  # peak memory usage observed so far

bash-4.4$ cat /sys/fs/cgroup/memory/memory.stat | grep cache   # page-cache share of current usage


Solution: clear the cache memory on the Kubernetes node where your Jenkins master pod's Docker container is running; no downtime of the Jenkins service is required.
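
As a minimal sketch of what clearing the cache memory on the node can look like, assuming shell access to the node and that the growing usage is the kernel page cache (dropping caches is safe but will slow I/O briefly while the caches refill):

# On the Kubernetes node hosting the Jenkins master pod:
# flush dirty pages to disk, then drop the page cache together
# with dentries and inodes (echo 1 would drop the page cache only).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches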


Thanks ,
Mahesh
To unsubscribe from this group and all its topics, send an email to jenkins...@googlegroups.com.

--
You received this message because you are subscribed to the Google Groups "Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkins...@googlegroups.com.


--
Thanks!
Mark Waite
Reply all
Reply to author
Forward
0 new messages