Implementing PostgreSQL Login Monitoring Use Cases in Wazuh (DAM Compliance)


Chandra pal singh Chauhan

Jan 29, 2026, 2:50:17 AM
to Wazuh | Mailing List

Hi Team,

We have PostgreSQL in place, and for compliance purposes we require a DAM (Database Activity Monitoring) solution. Due to pricing constraints, we are planning to monitor database-level logs using Wazuh.

During my research, I found an approach for PostgreSQL integration via the Wazuh agent. However, the default rules provided are not sufficient and do not fully match our requirements.

Our key use cases are as follows:

  1. Alert when a database user successfully logs in.

  2. Alert when a database user login fails.

  3. Alert when an existing database user has repeated login failures.

  4. Alert when a non-existing database user attempts to log in.

Could you please guide us on how to implement these specific use cases in Wazuh, including any custom decoders or rules that may be required?

Thank you for your support.

Best regards,
Chandra

Javier Sanchez Gil

Jan 29, 2026, 6:28:31 AM
to Wazuh | Mailing List

Hi Chandra,

Yes, this is a feasible approach in Wazuh and it is commonly implemented as a log-based DAM (Database Activity Monitoring) use case.

At a high level, the recommended approach is:

  • Enable PostgreSQL authentication logging (successful connections, failed logins, and attempts using non-existing users). Without proper database logging, these use cases cannot be implemented.

  • Collect PostgreSQL logs with the Wazuh agent, using log file monitoring or syslog/journald.

  • Create custom decoders (if needed) to extract relevant fields such as database user, database name, source IP, and authentication result.

  • Create custom rules to generate alerts for:

    • Successful database logins (audit visibility).

    • Failed login attempts.

    • Repeated login failures using frequency and timeframe (brute-force / password spraying detection).

    • Login attempts using non-existing database users (user enumeration).

  • Suppress trusted or expected activity (for example health-checks or monitoring probes) using rules with level 0, to avoid unnecessary noise.
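
As a sketch of the suppression idea, a level-0 rule in local_rules.xml could look like the following (the rule id 100100 and the probe address 10.0.0.10 are placeholders, and the match text assumes the standard "connection authorized" message):

<group name="postgresql,auth,">
  <!-- Hypothetical example: silence logins coming from a trusted monitoring probe -->
  <rule id="100100" level="0">
    <match>connection authorized</match>
    <srcip>10.0.0.10</srcip>
    <description>PostgreSQL login from trusted monitoring host (suppressed)</description>
  </rule>
</group>

Note that <srcip> only applies if the decoder extracts the source IP into the srcip field.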

Please note that this approach provides database-level monitoring based on logs.

If you can share a few sample PostgreSQL log lines (successful login, failed login, and non-existing user), the required decoders and rules can be tailored precisely.

Chandra pal singh Chauhan

Jan 30, 2026, 9:06:21 AM
to Wazuh | Mailing List
Hello Javier,

I have also attached some more logs showing successful logins.

logs-insights-results.json

Chandra pal singh Chauhan

Jan 30, 2026, 9:06:21 AM
to Wazuh | Mailing List
Hello Javier,

Sorry for the delayed response; I have attached the log files for your reference.

postgresql.log.2026-01-30-04
postgresql.log.2026-01-30-05
postgresql.log.2026-01-30-02
postgresql.log.2026-01-30-03
postgresql.log.2026-01-30-06

Chandra pal singh Chauhan

Feb 1, 2026, 4:52:52 AM
to Wazuh | Mailing List
Dear Team,

Please look into this on a priority basis.

Javier Sanchez Gil

Feb 2, 2026, 4:11:38 AM
to Wazuh | Mailing List

Hi Chandra,

Thanks for sharing the PostgreSQL log files; I have reviewed the complete set of logs from all of them.

All authentication-related entries in the logs correspond to successful connections, for example:

LOG: connection authorized: user=<user> database=<db>

From these events, Wazuh can reliably detect and alert on:

  • Successful database logins

  • Database user and database name

  • Source IP or local connections

  • SSL/TLS usage (useful for compliance)

However, after reviewing the full log set, there are no entries related to authentication failures, such as:

  • Password authentication failed

  • Login attempts with non-existing users

  • role does not exist or similar errors

Because these events are not present in the logs, the following DAM use cases cannot be implemented yet:

  • Alert on failed login attempts

  • Alert on repeated login failures

  • Alert on login attempts using non-existing users

To fully cover those scenarios, PostgreSQL must be configured to log authentication failures in addition to successful connections. In practice, this usually requires:

  • Enabling connection logging:

    • log_connections = on

    • log_disconnections = on

  • Using a log_line_prefix that includes user, database and source IP, for example:

    • log_line_prefix = '%m %u@%d %r [%p] '

  • Verifying that authentication failures (FATAL messages such as password authentication failed or role does not exist) are not being filtered out by the logging destination (file, syslog, or CloudWatch in the case of RDS).
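
Putting those settings together, a minimal postgresql.conf excerpt could look like this (the values are illustrative; on RDS the equivalent settings go into the DB parameter group instead):

# Log every successful connection and every disconnection
log_connections = on
log_disconnections = on
# Prefix each line with timestamp, user@database, remote host and PID
log_line_prefix = '%m %u@%d %r [%p] '
# FATAL messages (failed logins) are emitted by default; make sure
# log_min_messages has not been raised to a level that hides them
log_min_messages = warning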

Once those authentication failure log entries are available, Wazuh can handle the detection using custom rules and frequency-based correlation.

Chandra pal singh Chauhan

Feb 2, 2026, 5:54:21 AM
to Wazuh | Mailing List

Hi Javier,

Thank you for the explanation. I understand what you are trying to convey.

Could you please help me by creating some custom rules, as I already have a decoder configured for this use case? Additionally, I would appreciate your guidance on handling authentication failure events using the decoder below.

I am currently using the following decoders:

<decoder name="postgres_audit">
    <parent>windows-date-format</parent>
    <prematch>AUDIT</prematch>

    <regex>(\S+ \S+ \w+):(\d+.\d+.\d+.\d+)\((\d+)\):(\w+):\p(\d+)\p:LOG: (\.+)AUDIT: (\w+),(\d+,\d+),(\.+),(\.+),"(\.+""\.+"""),(\.+)</regex>
    <order>timestamp, srcip, srcport, user, pid, log_level, audit_type, session_info, action, command, sql, extra</order>
</decoder>
<decoder name="postgres_database">
    <parent>windows-date-format</parent>
    <prematch>database</prematch>
    <regex>(\S+ \S+ \w+):(\d+.\d+.\d+.\d+)\((\d+)\):(\w+):\p(\d+)\p:LOG:(\.+)connection authorized: user=(\.+) database=(\.+)</regex>
    <order>timestamp, srcip, srcport, user, pid, log_level, user, database</order>
</decoder>  

If I replace connection authorized with connection failed (or the corresponding message observed in the logs for failed authentication attempts), will this decoder work correctly for detecting failed login attempts?

Please let me know if additional modifications or a separate decoder would be required for this scenario.

Thanks,

Javier Sanchez Gil

Feb 3, 2026, 3:46:38 AM
to Wazuh | Mailing List
Hi Chandra,

In general, PostgreSQL authentication failures do not follow the same log format as successful logins. While a successful connection usually appears as:

LOG: connection authorized: user=... database=...

authentication errors are typically logged as FATAL messages, for example:

  • FATAL: password authentication failed for user "..."

  • FATAL: role "..." does not exist

Because of this, simply replacing connection authorized with connection failed in the same decoder is usually not reliable. The log level changes (LOG → FATAL) and the message structure may also vary depending on the type of failure.

Your postgres_database decoder is suitable for detecting successful logins, as long as it matches the actual log format. For authentication failures, the recommended approach is to define a separate decoder specifically for FATAL messages, covering wrong passwords and non-existing users, and extracting at least the user and source IP when available.
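
As a sketch of such a decoder (untested against your exact log format, so the prematch strings and the field mapping are assumptions to validate with wazuh-logtest), two simple decoders covering the two FATAL variants could look like:

<decoder name="postgres_auth_failed_password">
  <prematch>password authentication failed for user</prematch>
  <regex type="pcre2">password authentication failed for user "([^"]+)"</regex>
  <order>user</order>
</decoder>

<decoder name="postgres_auth_failed_nouser">
  <prematch>" does not exist</prematch>
  <regex type="pcre2">FATAL:\s+role "([^"]+)" does not exist</regex>
  <order>user</order>
</decoder>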

Once those events are properly decoded, rules like the following can be used as a reference:

Successful login

<group name="postgresql,auth,">
  <rule id="110001" level="5">
    <decoded_as>postgres_database</decoded_as>
    <description>PostgreSQL login succeeded: user=$(user) db=$(database) srcip=$(srcip)</description>
  </rule>
</group>
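
For the failure-side use cases (failed login, repeated failures, non-existing user), a reference sketch could look like the following; the rule ids and decoder names are placeholders that must match whatever failure decoders you define, and $(srcip) only resolves if the decoder extracts the source IP:

<group name="postgresql,auth,">
  <!-- Single failed login (wrong password) -->
  <rule id="110002" level="5">
    <decoded_as>postgres_auth_failed_password</decoded_as>
    <description>PostgreSQL login failed: user=$(dstuser) srcip=$(srcip)</description>
  </rule>

  <!-- Repeated failures from the same source within 2 minutes: possible brute force -->
  <rule id="110003" level="10" frequency="8" timeframe="120">
    <if_matched_sid>110002</if_matched_sid>
    <same_srcip />
    <description>PostgreSQL brute-force attempt: repeated failed logins from $(srcip)</description>
  </rule>

  <!-- Login attempt with a non-existing role (user enumeration) -->
  <rule id="110004" level="7">
    <decoded_as>postgres_auth_failed_nouser</decoded_as>
    <description>PostgreSQL login attempt with non-existing user $(dstuser)</description>
  </rule>
</group>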

Once authentication failure logging is enabled, it’s a good practice to generate one failed login with a wrong password and one with a non-existing user, and validate the behavior using wazuh-logtest: https://documentation.wazuh.com/current/user-manual/reference/tools/wazuh-logtest.html

Chandra pal singh Chauhan

Feb 3, 2026, 7:12:19 AM
to Wazuh | Mailing List

Hi Javier,

I am facing an additional issue while following the Wazuh blog below for monitoring OS-level PostgreSQL logs:

https://wazuh.com/blog/monitoring-postgresql-database-with-wazuh/

PostgreSQL authentication failure events are being detected correctly. However, I am not receiving any authentication success alerts, even though I can see the authentication success logs arriving in the archives.log file.

Could you please guide me on how to process these authentication success events so that they are generated in the alerts.json file?

I am also attaching some sample logs for reference. If possible, could you please help by creating the appropriate decoder (and rule, if required) for these logs?

Thank you for your support.

PostgreSQL Log Feb 2 2026.2026-02-02-10

Javier Sanchez Gil

Feb 3, 2026, 8:01:56 AM
to Wazuh | Mailing List
Hi Chandra,

The logs contain entries like the following:

LOG:  connection received: host=... port=...
LOG:  connection authenticated: identity="..." method=...
LOG:  connection authorized: user=<user> database=<db> SSL enabled (...)

The reason you can see these events in archives.log but they do not appear in alerts.json is that, while the logs are being collected correctly, no alert is being generated for them. In Wazuh, events are only written to alerts.json when a rule matches and the rule level is equal to or greater than the configured log_alert_level (level 3 by default).
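
For reference, that threshold is the <log_alert_level> option in the <alerts> section of /var/ossec/etc/ossec.conf; the default configuration looks like:

<alerts>
  <log_alert_level>3</log_alert_level>
  <email_alert_level>12</email_alert_level>
</alerts>

Any rule whose level is equal to or greater than log_alert_level is written to alerts.json.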

In this case, the authentication success logs are not being matched by any decoder/rule that generates an alert, or the rule exists but has a level below the alert threshold.

Based on the actual format of your logs, you can use a specific decoder for connection authorized events, for example:

<decoder name="postgres_auth_success">
  <prematch>connection authorized:</prematch>
  <regex>^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+\w+:(\[local\]|\d+\.\d+\.\d+\.\d+)(?:\((\d+)\))?:([^@]+)@([^:]+):\[(\d+)\]:LOG:\s+connection authorized:\s+user=([^\s]+)\s+database=([^\s]+).*$</regex>
  <order>timestamp, srcip, srcport, session_user, database, pid, user, database</order>
</decoder>


And an associated rule with a sufficient level so it appears in alerts.json:

<group name="postgresql,auth,">
  <rule id="110001" level="5">

    <decoded_as>postgres_auth_success</decoded_as>


    <description>PostgreSQL login succeeded: user=$(user) db=$(database) srcip=$(srcip)</description>
  </rule>
</group>

For validation:

  • Run wazuh-logtest on the manager and paste one real connection authorized log line to confirm the decoder and rule match.

  • Restart the manager after adding the decoder and rule.

  • Generate a successful login and verify that the alert appears in alerts.json.

In summary, the authentication success logs are being ingested correctly, but a dedicated decoder and rule (with an appropriate level) are required to turn them into alerts.


Chandra pal singh Chauhan

Feb 4, 2026, 7:03:33 AM
to Wazuh | Mailing List
Hello Javier,

I tried to save this decoder in the local_decoder.xml file, but I get the following error:

Error: Could not upload decoder (1113) - XML syntax error
    at WzRequest.returnErrorInstance (https://192.168.29.228/414204/bundles/plugin/wazuh/wazuh.plugin.js:1:504035)
    at WzRequest.apiReq (https://192.168.29.228/414204/bundles/plugin/wazuh/wazuh.plugin.js:1:503177)
    at async resources_handler_ResourcesHandler.updateFile (https://192.168.29.228/414204/bundles/plugin/wazuh/wazuh.chunk.2.js:1:4158321)
    at async file_editor_WzFileEditor.save (https://192.168.29.228/414204/bundles/plugin/wazuh/wazuh.chunk.2.js:1:4227518)

Can you please check and confirm?



Javier Sanchez Gil

Feb 5, 2026, 6:07:50 AM
to Wazuh | Mailing List

Hi Chandra,

I re-tested the entire flow end-to-end, making a few adjustments to the decoder, and confirmed that everything works correctly.
Below are the exact steps and configurations I used, so you can follow them to validate the behavior in your environment.

1) Add the decoder

Edit the following file:

/var/ossec/etc/decoders/local_decoder.xml

Add this decoder (keep the rest of the file unchanged and paste this as a new <decoder> block):

<decoder name="postgres_auth_success">
   <parent>windows-date-format</parent>
   <use_own_name>true</use_own_name>
   <prematch>connection authorized:</prematch>
   <regex type="pcre2">^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(\w+):(\[local\]|\d+\.\d+\.\d+\.\d+)(?:\((\d+)\))?:(?:([^@]+)@([^:]+):)?\[(\d+)\]:LOG:\s+connection authorized:\s+user=([^\s]+)\s+database=([^\s]+).*$</regex>
   <order>timestamp, tz, srcip, srcport, session_user, session_db, pid, user, database</order>
</decoder>


Important notes:

  • type="pcre2" is required to avoid regex syntax errors.

  • <use_own_name>true</use_own_name> is required so rules can match this decoder using <decoded_as>postgres_auth_success</decoded_as>


2) Add the rule (to generate alerts)

Edit the file:

/var/ossec/etc/rules/local_rules.xml

Add the following rule:

<group name="postgresql,auth,">
  <rule id="110001" level="5">
    <decoded_as>postgres_auth_success</decoded_as>

    <description>PostgreSQL login succeeded: user=$(dstuser) database=$(database) srcip=$(srcip)</description>
  </rule>
</group>

Restart Wazuh:

/var/ossec/bin/wazuh-control restart

3) Validate using wazuh-logtest

Run:

/var/ossec/bin/wazuh-logtest

Paste one of your PostgreSQL authentication success logs, for example:

2026-02-02 10:19:05 UTC:172.30.4.44(21237):reward_service@lxme_prod:[21880]:LOG:  connection authorized: user=reward_service database=lxme_prod SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)

You should see:

  • Phase 2 showing:

    • name: 'postgres_auth_success'

  • Phase 3 showing:

    • id: '110001'

    • level: '5'

    • Alert to be generated.

4) Verify in alerts.json (real ingestion)

Once the log is being collected by Wazuh (i.e., it appears in archives.log), you should see the alert in:

tail -n 200 /var/ossec/logs/alerts/alerts.json | egrep '"id":"110001"|postgres_auth_success|PostgreSQL login succeeded'

You should see an alert similar to:

"decoder":{"parent":"windows-date-format","name":"postgres_auth_success"},
"rule":{"id":"110001","level":5,"description":"PostgreSQL login succeeded: user=test_user database=testdb srcip=10.0.0.5"},
"data":{"srcip":"10.0.0.5","srcport":"5555","dstuser":"test_user","database":"testdb"}


Please follow these steps on your side and let me know if you see the same results.
