Re: [kubernetes/kubernetes] Horizontal StatefulSet/RC Autoscaler ( ? ) (#44033)

Michail Kargakis

unread,
Apr 5, 2017, 12:30:01 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Seems like we need a statefulset generator

cc: @kubernetes/sig-cli-feature-requests

@kow3ns the generator that will be created for kubectl autoscale can be used for kubectl create statefulset too. I think. Unless kubectl autoscale is still using the old generator interface.
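
If that shared generator lands, the intended CLI surface would presumably look something like the sketch below. These invocations are hypothetical; the name "web", the image, and the flag values are placeholders, not anything decided in this issue:

# hypothetical: create a StatefulSet via a generator, then autoscale it
kubectl create statefulset web --image=nginx
kubectl autoscale statefulset web --min=1 --max=3 --cpu-percent=80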


TonyAdo

unread,
Apr 18, 2017, 9:08:34 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/assign @adohe

TonyAdo

unread,
Apr 18, 2017, 9:09:51 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I will pick this up and submit a PR asap.

Juan Manuel Torres

unread,
Apr 28, 2017, 2:29:07 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Kenneth Owens

unread,
Jul 12, 2017, 7:48:10 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@kargakis @smarterclayton @adohe @Tedezed

Before we enable this feature I think we need to consider a few things.

CPU-based auto-scaling works well if the following are true:

  1. The CPU utilization of the application being scaled varies in direct proportion to QPS.
  2. The application is load balanced, and adding capacity in the form of a Pod will decrease individual Pod QPS and, therefore, individual Pod CPU utilization.
  3. Adding capacity is cheap.

There are many categories of applications for which auto-scaling based on CPU might not be the best approach.

  1. For replicated ACID databases, for instance replicated MySQL or Postgres, naive CPU-based auto-scaling might not work as intended. If you auto-scale based on the write master's CPU utilization, and all read slaves replicate from the master, adding a Pod introduces another replication stream from the master, which will increase network utilization and may increase CPU utilization in the short term. And if the CPU increase is driven by updates or inserts rather than reads, adding read replicas will not alleviate the pressure.
  2. For applications that use log-structured merge trees (e.g. Cassandra or anything with embedded LevelDB storage) or append-only B-trees (e.g. Couchbase, CouchDB), CPU spikes can be caused by compaction. Compaction is only loosely coupled with delete, update, and create QPS, so CPU is probably not a good signal to initiate scaling.
  3. For BASE applications (e.g. Cassandra, Riak), when you auto-scale in either direction you have to re-balance, either to make use of the additional capacity or to ensure that the key space is not under-replicated. You may also have to initiate anti-entropy repair. Both of these operations are network and CPU intensive (so adding capacity is not exactly cheap). Also, if the application doesn't implement virtual-bucket consistent hashing and you have a hot spot, there is no guarantee that increasing the number of replicas will alleviate the hot spot.
  4. For replicated state machines implemented via consensus protocols (e.g. ZooKeeper, etcd), the only time you ever want to scale up is for read fan-out, and, empirically, you will see degraded write performance beyond 7 nodes. For these systems, scale is usually a function of availability (e.g. I want a quorum of 5 to tolerate 1 planned and 1 unplanned disruption). Most applications in this space are not CPU intensive, and re-configuring a stable quorum in an attempt to increase or decrease CPU utilization is probably not the right approach.

Anirudh Ramanathan

unread,
Jul 26, 2017, 6:42:59 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I think this discussion needs to take into account the v2alpha1 autoscaler, which allows custom metrics as signals for the HPA. I haven't looked at it in detail yet, but it appears to support more than just CPU-based autoscaling.
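
For reference, a custom-metrics HPA in the v2alpha1 shape would look roughly like the sketch below. This is only an illustration: the metric name, target value, and object names are placeholders, and it assumes a custom-metrics adapter is serving the metric.

apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1       # StatefulSet's group/version at the time
    kind: StatefulSet
    name: web                      # placeholder
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: requests_per_second   # placeholder custom metric
      targetAverageValue: 1k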

Zach Hanna

unread,
Jul 28, 2017, 12:26:00 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Some workloads need to be StatefulSets not because they are really that stateful, but because they need something like an EBS volume attached via a PVC. Does that mean we can't autoscale those?

Marcin Wielgus

unread,
Aug 31, 2017, 10:56:22 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Solly Ross

unread,
Sep 5, 2017, 3:20:33 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Note that supporting autoscaling for any new resource is basically blocked on supporting scale clients for arbitrary resources, which did not quite make it into Kube 1.8.

k8s-ci-robot

unread,
Sep 5, 2017, 3:20:50 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@DirectXMan12: GitHub didn't allow me to assign the following users: directxman12.

Note that only kubernetes members can be assigned.

In response to this:

/assign @DirectXMan12

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Solly Ross

unread,
Sep 5, 2017, 3:20:58 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/assign @DirectXMan12

Solly Ross

unread,
Sep 5, 2017, 3:22:41 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

let's try that again with slightly different capitalization in the source:

xinyisu79

unread,
Nov 8, 2017, 10:02:18 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@DirectXMan12: Is there a committed plan for this issue? We are running Pods via StatefulSets and want auto-scaling; this feature would be very beneficial to us.
Your reply is much appreciated.

Solly Ross

unread,
Nov 27, 2017, 3:30:10 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Well, we now have a polymorphic scale client (in 1.9), so we should be able to unblock this issue in that regard. From an HPA perspective, StatefulSet needs a /scale subresource, and then we should be able to run the HPA against it. That doesn't guarantee your StatefulSet will handle being scaled very well, but that's not really an HPA problem.
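
A quick way to check whether the API server actually exposes a /scale subresource for a StatefulSet is shown below; the namespace "default" and the name "web" are placeholders:

kubectl get --raw /apis/apps/v1/namespaces/default/statefulsets/web/scale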

Solly Ross

unread,
Nov 27, 2017, 3:31:12 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

StatefulSet is @kubernetes/sig-apps-feature-requests, correct?

Kenneth Owens

unread,
Nov 27, 2017, 5:02:55 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@DirectXMan12 sts has a scale sub resource

Solly Ross

unread,
Dec 1, 2017, 3:00:47 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

if that's the case, then we're all set :-)

Drinky Pool

unread,
Feb 7, 2018, 4:40:50 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@DirectXMan12 sorry, I did not quite catch your point. I want to know whether there are any plans for supporting HPA with StatefulSet, or any PRs implementing this? We do have some scenarios that require this feature, thanks :)

Solly Ross

unread,
Feb 9, 2018, 2:10:19 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

If StatefulSet already has a scale subresource, then it's automatically supported by the HPA. @kow3ns indicated that it does, so it should be supported now.

Solly Ross

unread,
Feb 9, 2018, 2:10:24 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/close

k8s-ci-robot

unread,
Feb 9, 2018, 2:10:42 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Closed #44033.

sjbarrio

unread,
Apr 10, 2018, 7:25:08 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

You can use https://github.com/GleamAI/overscale; that tool is for GKE.

Piotr Klimczak

unread,
May 4, 2018, 6:44:08 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

It just works in Kubernetes 1.9 and OpenShift 3.9.
All you have to do is create an HPA like the one below (example from OpenShift 3.9):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: YOUR_HPA_NAME
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: YOUR_STATEFUL_SET_NAME
  targetCPUUtilizationPercentage: 80
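
Assuming the manifest above is saved as hpa.yaml, and that the StatefulSet's containers declare CPU requests (the utilization target is computed against requests), it can be applied and watched like this:

kubectl apply -f hpa.yaml
kubectl get hpa YOUR_HPA_NAME --watch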

Karthick Arumugam

unread,
Oct 21, 2018, 4:44:05 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

You saved me a lot of time.

kartik

unread,
Mar 5, 2019, 6:29:34 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

It worked for me as well and really saved time.
Thanks

Naga

unread,
Sep 9, 2022, 11:01:44 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Regarding the HPA I set up for Prometheus: the Pod terminates instantly when it crosses the threshold limit. This is my manifest:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: YOUR_HPA_NAME
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: YOUR_STATEFUL_SET_NAME
  targetMemoryUtilizationPercentage: 80

Can anyone help me?
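
(For reference: autoscaling/v2beta2 has no targetMemoryUtilizationPercentage field, so the manifest above sets no memory target at all. In v2beta2 a memory utilization target is expressed through the metrics list, roughly as in the sketch below, keeping the same placeholder names.)

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: YOUR_HPA_NAME
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: YOUR_STATEFUL_SET_NAME
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80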



suriya786

unread,
Nov 22, 2023, 5:11:12 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Hi Tedezed, thanks for the info.

