I use Curator 5.6 to keep my logs under control, size-wise, and for the past year or so it has worked wonderfully. Last week, I updated to version 7.0 of the Elastic stack. This morning I noticed that my data is significantly larger than it is most Mondays, as if the usual purge didn't run over the weekend. Sure enough, I see this error in my curator log:
What I wanted to know is whether it cleans up my logs to free disk space, or am I misunderstanding how Curator works? When I go to Kibana, I can still find logs older than 5 days, which I thought it would have deleted.
Edit: This post is pretty old and Elasticsearch/Logstash/Kibana have evolved a lot since it was written. This is Part 4 of 4. Now that you've got all your logs flying through Logstash into Elasticsearch, how do you remove old records?
This line tells cron to run Curator at 20 minutes past midnight and delete any logs older than 7 days.
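The entry in question would look something like this crontab line (the binary and config-file paths here are illustrative, not from the original post):

    # run Curator at 00:20 every day; the action file deletes indices older than 7 days
    20 0 * * * /usr/local/bin/curator --config /etc/curator/curator.yml /etc/curator/delete_indices.yml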
So if I have logs for days 1, 2, 3, 4, 5, 6, 7, then when day 8's logs arrive, Curator should delete the day 1 log at 20 minutes past midnight?
So the next day it should be logs 2, 3, 4, 5, 6, 7, 8?
Our investigation revealed that camunda-platform-curator uses the "bitnami/elasticsearch-curator:5.8.4" image, which is now deprecated, and the repository is no longer available.
We were able to work around the issue by replacing the image path with "bitnami/elasticsearch-curator-archived:5.8.4".
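If the image is set through Helm values, the override would look roughly like this (the key names are an assumption about the chart's structure, so check your chart's values schema before applying):

    # hypothetical values.yaml override; key names depend on the chart version
    curator:
      image:
        repository: bitnami/elasticsearch-curator-archived
        tag: 5.8.4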
Hi,
I am using SG 7.0.1. In Elasticsearch I have enabled clientcert_auth_domain, and I am able to connect to Elasticsearch via curl using my client certificates.
In Curator I have also configured client certificate authentication, but Curator is not able to connect to Elasticsearch.
I have attached the debug logs of curator. curator_debug_logs.txt (11.1 KB)
I have tried running Curator with ssl_no_validate set to both true and false, but in both cases the error is the same and Curator is unable to connect to Elasticsearch.
Please let me know what configurations I am missing here.
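For comparison, a client-certificate setup in curator.yml typically looks like this (the host and file paths are placeholders, not taken from the attached logs):

    client:
      hosts:
        - es-node.example.com
      port: 9200
      use_ssl: True
      certificate: /path/to/ca.pem          # CA cert used to validate the cluster certificate
      client_cert: /path/to/client.pem      # client certificate presented to Elasticsearch
      client_key: /path/to/client-key.pem   # private key matching the client certificate
      ssl_no_validate: False
    logging:
      loglevel: DEBUG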
You can use Curator as a command-line interface (CLI) or as a Python API. If you use the Python API, you must use version 7.13.4 or earlier of the legacy elasticsearch-py client; Curator doesn't support the opensearch-py client.
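For example, you would pin the client in your environment before using the Python API (a minimal sketch; adjust to your setup):

    # keep the legacy elasticsearch-py client at 7.13.4 or earlier
    pip install "elasticsearch==7.13.4"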
Hello all,
We are using Elasticsearch Curator version 5.8.4 with OpenSearch 1.2.4.
New Curator versions are being released, and we could see in the Curator release documentation that Curator 7 works with Elasticsearch 7.x and is functionally identical to 5.8.4. We uplifted our Curator version from 5.8.4 to 7.0.0.
We configured Curator 7 actions to delete logs older than 1 week.
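The action file was along these lines (the timestring and filter choices are assumptions about the index naming scheme, not copied from our deployment):

    actions:
      1:
        action: delete_indices
        description: Delete indices older than 7 days, based on the date in the index name
        options:
          ignore_empty_list: True
        filters:
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 7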
Once everything is green, you'll sooner or later realize the necessity of backing up your elasticsearch cluster (this can be for many reasons: migrating indices, recovering from failures, or simply freeing up your cluster by getting rid of some old indices). In this post, we'll see how the combination of curator and minio helps you set up your snapshot/restore strategy for your elasticsearch clusters!
The minio front page provides the necessary steps to install and configure minio on your favorite platform. To interact with the minio server, I recommend using the minio client (mc).
Now, assuming the minio server is up and running at minio-host, we start by creating a bucket called, for example, es-backup, to store elasticsearch snapshots on minio:
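With the minio client, that comes down to the following (the alias, endpoint, and credentials are placeholders):

    # register the minio server under a local alias
    mc alias set myminio http://minio-host:9000 ACCESS_KEY SECRET_KEY
    # create the bucket that will hold the elasticsearch snapshots
    mc mb myminio/es-backup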
Unsurprisingly, the plugin is configured to send requests to S3 by default! To override this, we need to update elasticsearch.yml and specify where the cluster can find the minio instance. This can be achieved by adding the config below:
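Something along these lines (the endpoint is a placeholder; these settings target the repository-s3 plugin's "default" client):

    # elasticsearch.yml: point the S3 repository plugin at the minio instance
    s3.client.default.endpoint: "minio-host:9000"
    s3.client.default.protocol: http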
Additionally, in order to successfully connect to minio, elasticsearch needs two more pieces of information: the access key and the secret key. Until very recently, and more precisely up to version 5.x, it was possible to send these credentials in the repository HTTP request body! But since credentials are sensitive, the recommended way, starting from version 6.x, is to store them in the elasticsearch keystore:
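On each node, the keystore entries are added as follows (the "default" client name matches the endpoint settings above; each command prompts for the value):

    bin/elasticsearch-keystore add s3.client.default.access_key
    bin/elasticsearch-keystore add s3.client.default.secret_key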
Curator can be used as a command-line interface (CLI) or a Python API. I personally prefer the CLI, which means defining a curator.yml containing the curator configs (client connection, logging settings, ...), as follows:
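A minimal sketch of such a curator.yml (the host and port are placeholders):

    client:
      hosts:
        - localhost
      port: 9200
      use_ssl: False
      timeout: 30
    logging:
      loglevel: INFO
      logformat: default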
I would like to store the snapshots in a GCS bucket. I created a new service account and downloaded the JSON credentials key, to be used with the elasticsearch keystore. See the plugin docs: -gcs-usage.html
I am not sure how or where to add this key so it can be used by the Curator CronJob performing the backup. The Elasticsearch documentation mentions running the elasticsearch-keystore binary on the credentials key file.
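For reference, that keystore step runs on the Elasticsearch nodes themselves rather than in the Curator pod, since the repository-gcs plugin reads the credentials there (the key path below is a placeholder):

    bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/to/service-account-key.json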
You can download a copy of the configuration files I used from this step and the next step. These files should be named actions_file.yml and curator.yml respectively. They need to be saved in the following directory: C:\Program Files\elasticsearch-curator\config\
C:\> "C:\Program Files\elasticsearch-curator\curator.exe" --config "C:\Program Files\elasticsearch-curator\config\curator.yml" --dry-run "C:\Program Files\elasticsearch-curator\config\actions_file.yml"