Loki Helm chart and replicas

Kendall Chenoweth

Feb 19, 2022, 2:25:32 PM
to lokip...@googlegroups.com
Hello,

Is it possible to run multiple StatefulSet replicas of Loki with the Helm
chart, and if so, how do I configure it?  When I set replicas to any value
other than 1, the deployment breaks.  I'm using a single PVC, and I suspect
all of the replica instances end up writing to the same directory (not good).
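For what it's worth, from the Loki docs I gather that running more than one replica needs the ring to use a shared kvstore instead of inmemory, so I guessed at a fragment like this (the loki-memberlist service name is my assumption about what the chart creates; I haven't verified any of it):

```yaml
config:
  ingester:
    lifecycler:
      ring:
        kvstore:
          store: memberlist    # shared ring state instead of inmemory
        replication_factor: 2  # write each stream to two ingesters
  memberlist:
    join_members:
      - loki-memberlist        # headless service name; my guess
```

Even with that, I assume the filesystem object store would still be a problem, since every replica would need to see the same chunks directory; an object store such as S3 seems to be what multi-replica setups use.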

Thanks in advance!

Here are the Helm chart values (comments stripped):


image:
  repository: grafana/loki
  tag: 2.4.2
  pullPolicy: IfNotPresent


ingress:
  enabled: true
  ingressClassName: nginx
  annotations: {}
  hosts:
    - host: loki.example.net  # actual hostname redacted
      paths: [ / ]
  tls: []

affinity: {}

annotations: {}

tracing:
  jaegerAgentHost:

config:
  auth_enabled: false
  ingester:
    chunk_idle_period: 3m
    chunk_block_size: 262144
    chunk_retain_period: 1m
    max_transfer_retries: 0
    wal:
      dir: /data/loki/wal
    lifecycler:
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1

  limits_config:
    enforce_metric_name: false
    reject_old_samples: false
    reject_old_samples_max_age: 24h
  schema_config:
    configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
  server:
    http_listen_port: 3100
  storage_config:
    boltdb_shipper:
      active_index_directory: /data/loki/boltdb-shipper-active
      cache_location: /data/loki/boltdb-shipper-cache
      shared_store: filesystem
    filesystem:
      directory: /data/loki/chunks
  chunk_store_config:
    max_look_back_period: 0s
  table_manager:
    retention_deletes_enabled: false
    retention_period: 0s
  compactor:
    working_directory: /data/loki/boltdb-shipper-compactor
    shared_store: filesystem

extraArgs: {}

livenessProbe:
  httpGet:
    path: /ready
    port: http-metrics
  initialDelaySeconds: 45

networkPolicy:
  enabled: false

client: {}

nodeSelector: {}

persistence:
  enabled: true
  accessModes:
  - ReadWriteOnce
  size: 10Gi
  existingClaim: loki-storage

podLabels: {}

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "http-metrics"

podManagementPolicy: OrderedReady


rbac:
  create: true
  pspEnabled: true

readinessProbe:
  httpGet:
    path: /ready
    port: http-metrics
  initialDelaySeconds: 45

replicas: 1

resources: {}

securityContext:
  fsGroup: 10001
  runAsGroup: 10001
  runAsNonRoot: true
  runAsUser: 10001

service:
  type: ClusterIP
  nodePort:
  port: 3100
  annotations: {}
  labels: {}
  targetPort: http-metrics

serviceAccount:
  create: true
  name:
  annotations: {}
  automountServiceAccountToken: true

terminationGracePeriodSeconds: 4800

tolerations: []

podDisruptionBudget: {}

updateStrategy:
  type: RollingUpdate

serviceMonitor:
  enabled: false
  interval: ""
  additionalLabels: {}
  annotations: {}
  prometheusRule:
    enabled: false
    additionalLabels: {}
    rules: []


initContainers: []

extraContainers: []


extraVolumes: []

extraVolumeMounts: []

extraPorts: []

env: []

alerting_groups: []

Here is the PVC object:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki-storage
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
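
One thing I considered trying (unverified) is dropping existingClaim from the persistence section, so the StatefulSet's volumeClaimTemplates would give each replica its own PVC instead of sharing this one:

```yaml
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  storageClassName: nfs-client  # same class as the shared PVC above
  # existingClaim omitted: the StatefulSet should then create one PVC
  # per pod (loki-0, loki-1, ...) via volumeClaimTemplates
```

I don't know whether that alone is enough, since the index and chunks would then be split across separate volumes.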



-Kendall Chenoweth

