About some field in the limit_config


Lemon Lee

Nov 7, 2023, 5:20:17 AM
to lokiproject
Hi team,
I'd like to adjust some fields in limits_config in order to reduce querier OOMs.
I have some confusion about max_chunks_per_query and cardinality_limit.

max_chunks_per_query
  1. Does the "per query" in "max_chunks_per_query" refer to the subqueries produced after the query-frontend splits a query, or to the user's original query?
  2. What are the positive and negative effects of changing max_chunks_per_query?
In my tests (the logcli command I used to collect the statistics is sketched below):
  1. When I set it to 10000, a query with a 24h time range still returned normally, but Querier.TotalChunksRef showed 385702 in the query's statistics. What am I misunderstanding here?
  2. When the parameter is set to 1000, the CPU/memory of the index-gateway reaches its limit, causing some queries to become unavailable after a few minutes. (Why does this field affect the index-gateway?)
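
For reference, this is roughly how I collect those statistics (a sketch; the label selector and address are placeholders for my real ones):

export LOKI_ADDR=http://loki-XXX-query-frontend:3100
# --stats prints the query statistics, including Querier.TotalChunksRef
logcli query '{namespace="my-app"}' --since=24h --limit=100 --stats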
cardinality_limit
  1. I want to adjust the cardinality_limit field in limits_config; is there a relevant metric that can help with choosing a value?
  2. Also, I'd like to confirm whether this configuration corresponds to the "Total Streams" number output by the "logcli series" command (sketched below). That command reports 5694 total streams, so can I set this configuration to 6000?
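
This is the series command I ran (a sketch with placeholder values; --analyze-labels is what prints the "Total Streams" summary):

export LOKI_ADDR=http://loki-XXX-query-frontend:3100
logcli series '{}' --since=24h --analyze-labels
# the summary includes a line like: Total Streams:  5694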
If there are any errors in the description above, please feel free to point them out. Looking forward to hearing from you, thank you.

Regards, yingxiao.

Lemon Lee

Nov 7, 2023, 5:38:12 AM
to lokiproject
Background
I also use some optional components, such as the query-frontend, index-gateway, and compactor.

Chart.yaml
name: loki-distributed
description: Helm chart for Grafana Loki in microservices mode
type: application
appVersion: 2.7.4
version: 0.69.9
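
For completeness, I deploy it roughly like this (a sketch; the release name and namespace are placeholders):

helm repo add grafana https://grafana.github.io/helm-charts
helm upgrade --install loki-XXX grafana/loki-distributed --version 0.69.9 -n loki -f values.yaml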

Config
data:
  config.yaml: |
    auth_enabled: false
    chunk_store_config:
      chunk_cache_config:
        redis:
          endpoint: redis-cluster-XXX:6379
      max_look_back_period: 0s
      write_dedupe_cache_config:
        redis:
          endpoint: redis-cluster-XXXX:6379
    common:
      compactor_address: http://loki-XXX-compactor:3100
    compactor:
      compaction_interval: 5m
      retention_delete_delay: 5m
      retention_enabled: true
      shared_store: filesystem
      working_directory: /loki/boltdb-shipper-compactor
    distributor:
      ring:
        kvstore:
          store: memberlist
    frontend:
      querier_forget_delay: 30s
      compress_responses: true
      log_queries_longer_than: 10s
      scheduler_worker_concurrency: 30
    frontend_worker:
      frontend_address: loki-XXX-query-frontend:9095
      grpc_client_config:
        max_send_msg_size: 104857600
    ingester:
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_idle_period: 5m
      chunk_retain_period: 0s
      chunk_target_size: 2097152
      concurrent_flushes: 128
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      max_chunk_age: 2h
      max_transfer_retries: 0
      wal:
        dir: /loki/wal
    ingester_client:
      grpc_client_config:
        max_recv_msg_size: 33554432
        max_send_msg_size: 33554432
      remote_timeout: 1s
    limits_config:
      max_chunks_per_query: 100000
      enforce_metric_name: false
      ingestion_burst_size_mb: 256
      ingestion_rate_mb: 128
      ingestion_rate_strategy: local
      max_cache_freshness_per_query: 10m
      max_entries_limit_per_query: 50000
      max_global_streams_per_user: 0
      max_query_series: 300000
      max_streams_per_user: 0
      per_stream_rate_limit: 64MB
      per_stream_rate_limit_burst: 256MB
      query_timeout: 5m
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      split_queries_by_interval: 5m
    memberlist:
      join_members:
        - loki-XXX-memberlist
    query_range:
      parallelise_shardable_queries: false
      align_queries_with_step: true
      cache_results: true
      max_retries: 5
      results_cache:
        cache:
          redis:
            endpoint: redis-cluster-XXX:6379
    ruler:
      alertmanager_url: https://alertmanager.xx
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      storage:
        local:
          directory: /etc/loki/rules
        type: local
    runtime_config:
      file: /var/loki-XXX-runtime/runtime.yaml
    schema_config:
      configs:
        - from: "2022-04-26"
          index:
            period: 24h
            prefix: loki_index_
          object_store: aws
          schema: v12
          store: boltdb-shipper
    server:
      grpc_server_max_concurrent_streams: 0
      grpc_server_max_recv_msg_size: 33554432
      grpc_server_max_send_msg_size: 33554432
      http_listen_address: 0.0.0.0
      http_listen_port: 3100
      http_server_read_timeout: 300s
      http_server_write_timeout: 300s
      log_level: info
    storage_config:
      aws:
        ... # use Tencent Object Storage (COS)
      boltdb_shipper:
        active_index_directory: /loki/active
        cache_location: /loki/cache
        cache_ttl: 168h
        index_gateway_client:
          server_address: dns:///loki-XXX-index-gateway:9095
        shared_store: s3
      filesystem:
        directory: loki/chunks
      index_queries_cache_config:
        redis:
          endpoint: redis-cluster-XXX:6379
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s


If additional information is needed, I will add it in due course.

Thanks.

Lemon Lee

Nov 8, 2023, 1:22:37 AM
to lokiproject
On Tuesday, November 7, 2023 at 18:20:17 UTC+8, Lemon Lee wrote:
cardinality_limit
  1. I want to adjust the cardinality_limit field in limits_config; is there a relevant metric that can help with choosing a value?
  2. Also, I'd like to confirm whether this configuration corresponds to the "Total Streams" number output by the "logcli series" command. That command reports 5694 total streams, so can I set this configuration to 6000?

Regarding the cardinality_limit field, by reading the source code I found that the cardinality is the number of batch.Entries.
But for the max_chunks_per_query field, I still don't understand how to configure it or in what scenario it applies; the kind of change I am considering is sketched below.
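
For reference, this is the limits_config change I have in mind (values are only placeholders taken from my tests and from the series count above, not a recommendation):

limits_config:
  max_chunks_per_query: 10000   # I tried 10000 and 1000 in my tests
  cardinality_limit: 6000       # roughly the 5694 "Total Streams" rounded up, if that mapping is correct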

Looking forward to your reply.
Thanks.

The cardinality_limit reference code:
var cardinalityErr error
for key, batch := range results {
	// cardinality here is the number of index entries in this result batch
	cardinality := int32(len(batch.Entries))
	if cardinalityLimit > 0 && cardinality > cardinalityLimit {
		batch.Cardinality = cardinality
		batch.Entries = nil
		cardinalityErr = CardinalityExceededError{
			Size:  cardinality,
			Limit: cardinalityLimit,
		}
	}
	...
}
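
If I am reading this snippet correctly, the limit is applied to the number of index entries returned for a single batch during an index lookup, not to the total number of streams in the tenant, so comparing it directly with the "Total Streams" output may not be the right mental model; please correct me if that is wrong.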