muthu sundar
Aug 25, 2023, 2:08:30 AM
to Prometheus Users
Dear Experts,

I have created recording rules for my metrics calculation, and the results are sent to our observability platform via remote write. However, only 5 of the rules produce data that is visible in observability; the data for the rest of the rules is not visible.

Working expressions in observability:

record: confluent_kafka_server_retained_bytes_nozero
expr: (increase(confluent_kafka_server_retained_bytes{topic=~".*"}[1h]) > 0)

record: confluent_kafka_server_retained_bytes_1dnozero
expr: (increase(confluent_kafka_server_retained_bytes{topic=~".*"}[1d]) > 0)

record: confluent_kafka_server_predict_sc_topic_1h
expr: predict_linear(confluent_kafka_server_retained_bytes{job="DD",topic=".*"}[1h], 86400) * 0.00016667 > 0

record: confluent_kafka_server_predict_npe_topic_1h
expr: predict_linear(confluent_kafka_server_retained_bytes{job="DD-NPE",topic=".*"}[1h], 86400) * 0.00016667 > 0

record: confluent_kafka_server_predict_ppr_topic_1h
expr: predict_linear(confluent_kafka_server_retained_bytes{job="DD-PPR",topic=".*"}[1h], 86400) * 0.00016667 > 0

Not reflecting data in observability:

record: confluent_kafka_server_total_request_bytes_principal_id
expr: sum by (principal_id) (sum_over_time(confluent_kafka_server_request_bytes{principal_id=~"sa-.*"}[30d]))

record: confluent_kafka_server_total_response_bytes_principal_id
expr: sum by (principal_id) (sum_over_time(confluent_kafka_server_response_bytes{principal_id=~"sa-.*"}[30d]))

record: confluent_kafka_server_total_request_bytes
expr: sum(sum_over_time(confluent_kafka_server_request_bytes{principal_id=~"sa-.*"}[30d]))

record: confluent_kafka_server_total_response_bytes
expr: sum(sum_over_time(confluent_kafka_server_response_bytes{principal_id=~"sa-.*"}[30d]))

record: confluent_kafka_server_monthly_cost_team
expr: round(123456.45 * confluent_kafka_server_total_response_bytes_principal_id / scalar(confluent_kafka_server_total_response_bytes)) + (12345 * confluent_kafka_server_total_request_bytes_principal_id / scalar(confluent_kafka_server_total_request_bytes))

record: confluent_kafka_server_cluster_sc_topic_1h
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD"}[1h])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_sc_topic_1d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD"}[1d])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_sc_topic_30d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD"}[30d])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_npe_topic_1h
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-NPE"}[1h])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_npe_topic_1d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-NPE"}[1d])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_npe_topic_30d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-NPE"}[30d])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_ppr_topic_1h
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-PPR"}[1h])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_ppr_topic_1d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-PPR"}[1d])) * 0.00016667 > 0

record: confluent_kafka_server_cluster_ppr_topic_30d
expr: sum by (topic) (increase(confluent_kafka_server_retained_bytes{job=~"DD-PPR"}[30d])) * 0.00016667 > 0

Could you please review these and suggest anything that should be modified in the recording rules with respect to remote write?
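For reference, this is roughly how the rules above are grouped in the Prometheus rules file; the group name and evaluation interval shown here are placeholders, not taken from my actual config, and the list is trimmed to two entries:

```yaml
# Trimmed sketch of the rules file; group name and interval are placeholders.
groups:
  - name: confluent_kafka_recording_rules
    interval: 1m
    rules:
      - record: confluent_kafka_server_retained_bytes_nozero
        expr: increase(confluent_kafka_server_retained_bytes{topic=~".*"}[1h]) > 0
      - record: confluent_kafka_server_total_request_bytes_principal_id
        expr: sum by (principal_id) (sum_over_time(confluent_kafka_server_request_bytes{principal_id=~"sa-.*"}[30d]))
```

The file passes `promtool check rules` with no errors, so this looks like a data issue rather than a syntax issue.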
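In case it helps reviewers, the arithmetic in the monthly_cost_team rule is a proportional cost split: each principal is charged its share of a fixed response cost plus its share of a fixed request cost. A minimal Python sketch of the same calculation (the byte counts in the example call are made-up values, not real data):

```python
# Hypothetical illustration of the proportional split computed by the
# confluent_kafka_server_monthly_cost_team rule. The two cost constants
# come from the rule; the byte counts below are made-up example values.

RESPONSE_COST = 123456.45  # monthly cost allocated by response bytes
REQUEST_COST = 12345       # monthly cost allocated by request bytes

def monthly_cost_team(resp_bytes, total_resp_bytes, req_bytes, total_req_bytes):
    # Mirrors the PromQL structure: round() wraps only the response term,
    # while the request term is added unrounded.
    response_share = round(RESPONSE_COST * resp_bytes / total_resp_bytes)
    request_share = REQUEST_COST * req_bytes / total_req_bytes
    return response_share + request_share

# A principal responsible for 10% of response bytes and 20% of request bytes:
print(monthly_cost_team(100, 1000, 200, 1000))  # prints 14815.0
```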