Logging v1.1.2

EDB Postgres Distributed for Kubernetes outputs logs in JSON format directly to standard output, including PostgreSQL logs, without persisting them to storage for security reasons. This design facilitates seamless integration with most Kubernetes-compatible log management tools, including command line ones like stern.

Because EDB Postgres Distributed for Kubernetes relies on EDB Postgres for Kubernetes clusters to manage its nodes, the logs produced by the EDB Postgres for Kubernetes operator and instances also apply here. See the EDB Postgres for Kubernetes logging documentation for more information.

Each log entry includes the following fields:

  • level – The log level (e.g., info, notice).
  • ts – The timestamp.
  • logger – The type of log (e.g., postgres, pg_controldata).
  • msg – The log message, or the keyword record if the message is in JSON format.
  • record – The actual record, with a structure that varies depending on the logger type.
  • logging_pod – The name of the pod where the log was generated.
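Because every entry is a single JSON object per line, the documented fields can be read with any JSON parser. The following minimal Python sketch illustrates this; the sample log line and pod name are invented for illustration:

```python
import json

# A sample log line as emitted on standard output (contents are illustrative).
line = (
    '{"level": "info", "ts": 1619781249.7, "logger": "pg_controldata", '
    '"msg": "pg_control version number: 1300", "logging_pod": "pgdgroup-1-1"}'
)

# Parse the entry and read the documented fields.
entry = json.loads(line)
print(entry["level"], entry["logger"], entry["logging_pod"])
```

The same approach applies to lines captured with kubectl logs or any log shipper that preserves the raw JSON.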
Info

If your log ingestion system requires custom field names, you can rename the level and ts fields using the log-field-level and log-field-timestamp flags in the operator controller. This can be configured by editing the Deployment definition of the cloudnative-pg operator.
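As a sketch, the flags can be added to the operator container's arguments in the Deployment. The container name and the replacement field names below are assumptions for illustration, not defaults:

```yaml
# Excerpt of the operator Deployment (illustrative): rename "level" and "ts".
spec:
  template:
    spec:
      containers:
        - name: manager          # assumed container name; check your Deployment
          args:
            - controller
            - --log-field-level=severity
            - --log-field-timestamp=timestamp
```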

Node Logs

Node logs are the logs produced by the PostgreSQL instance pods. You can configure the log level for the PGD nodes in the PGDGroup specification using the logLevel option; the PGDGroup delivers logLevel changes to all the underlying PG4K clusters. Available log levels are: error, warning, info (default), debug, and trace.

  • Set the log level for data nodes in spec.cnp.logLevel.
  • Set the log level for witness nodes in spec.witness.logLevel.
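For example, a PGDGroup that raises verbosity on data nodes while keeping witness nodes quieter might look like the following sketch (the name, size, and chosen levels are illustrative):

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: pgdgroup-example
spec:
  instances: 3
  cnp:
    logLevel: debug        # applies to the data nodes
  witness:
    logLevel: warning      # applies to the witness nodes
  storage:
    size: 1Gi
```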
Important

Currently, the log level can only be applied when an instance starts. Changing the log level in the PGDGroup specification after the group has started causes the nodes to restart.

Operator Logs

As described above, there are two sets of operator logs to check:

  • Operator logs for EDB Postgres Distributed for Kubernetes
  • Operator logs for EDB Postgres for Kubernetes

The logs produced by the operator pod support the same log levels as the instance pods: error, warning, info (default), debug, and trace.

The log level for the operator can be configured by editing the Deployment definition of the operator and setting the --log-level command line argument to the desired value.
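As an illustrative sketch, the argument goes in the operator container's args in the Deployment (the container name below is an assumption; check your own Deployment):

```yaml
# Excerpt of the operator Deployment (illustrative): raise operator verbosity.
spec:
  template:
    spec:
      containers:
        - name: manager          # assumed container name
          args:
            - controller
            - --log-level=debug
```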

PostgreSQL Logs

Each PostgreSQL log entry is a JSON object with the logger key set to postgres. The structure of the log entries is as follows:

{
  "level": "info",
  "ts": 1619781249.7188137,
  "logger": "postgres",
  "msg": "record",
  "record": {
    "log_time": "2021-04-30 11:14:09.718 UTC",
    "user_name": "",
    "database_name": "",
    "process_id": "25",
    "connection_from": "",
    "session_id": "608be681.19",
    "session_line_num": "1",
    "command_tag": "",
    "session_start_time": "2021-04-30 11:14:09 UTC",
    "virtual_transaction_id": "",
    "transaction_id": "0",
    "error_severity": "LOG",
    "sql_state_code": "00000",
    "message": "database system was interrupted; last known up at 2021-04-30 11:14:07 UTC",
    "detail": "",
    "hint": "",
    "internal_query": "",
    "internal_query_pos": "",
    "context": "",
    "query": "",
    "query_pos": "",
    "location": "",
    "application_name": "",
    "backend_type": "startup"
  },
  "logging_pod": "region-a-1-1"
}
Info

Internally, the operator uses PostgreSQL's CSV log format. For more details, refer to the PostgreSQL documentation on CSV log format.

PGAudit Logs

EDB Postgres Distributed for Kubernetes offers seamless and native support for PGAudit on PostgreSQL clusters.

To enable PGAudit, add the necessary pgaudit parameters to the spec.[cnp|witness].postgresql section of the PGDGroup configuration. The PGDGroup propagates the configuration to each underlying EDB Postgres for Kubernetes cluster.

Important

The PGAudit library must be added to shared_preload_libraries. EDB Postgres for Kubernetes automatically manages this based on the presence of pgaudit.* parameters in the PostgreSQL configuration. The operator handles both the addition and removal of the library from shared_preload_libraries.

Additionally, the operator manages the creation and removal of the PGAudit extension across all databases within the cluster.

The following example demonstrates a PostgreSQL PGDGroup deployment with PGAudit enabled and configured:

apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: pgdgroup-example
spec:
  instances: 3
  cnp:
    postgresql:
      parameters:
        "pgaudit.log": "all, -misc"
        "pgaudit.log_catalog": "off"
        "pgaudit.log_parameter": "on"
        "pgaudit.log_relation": "on"

  storage:
    size: 1Gi

The audit CSV log entries generated by PGAudit are parsed and routed to standard output in JSON format, similar to all other logs:

  • .logger is set to pgaudit.
  • .msg is set to record.
  • .record contains the entire parsed record as a JSON object. This structure resembles that of logging_collector logs, with the exception of .record.audit, which contains the PGAudit CSV message formatted as a JSON object.
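Since audit entries share the stream with all other logs, a consumer typically filters on the logger field and then reads the nested .record.audit object. The following Python sketch illustrates this, assuming captured log lines are available as strings (the sample lines are invented):

```python
import json

# Two captured log lines (contents invented for illustration).
lines = [
    '{"logger": "postgres", "msg": "record", '
    '"record": {"message": "checkpoint complete"}}',
    '{"logger": "pgaudit", "msg": "record", '
    '"record": {"audit": {"class": "READ", "command": "SELECT"}}}',
]

# Keep only PGAudit entries and extract the parsed audit object.
audits = [
    json.loads(line)["record"]["audit"]
    for line in lines
    if json.loads(line)["logger"] == "pgaudit"
]
print(audits)
```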

This example shows sample log entries:

{
  "level": "info",
  "ts": 1627394507.8814096,
  "logger": "pgaudit",
  "msg": "record",
  "record": {
    "log_time": "2021-07-27 14:01:47.881 UTC",
    "user_name": "postgres",
    "database_name": "postgres",
    "process_id": "203",
    "connection_from": "[local]",
    "session_id": "610011cb.cb",
    "session_line_num": "1",
    "command_tag": "SELECT",
    "session_start_time": "2021-07-27 14:01:47 UTC",
    "virtual_transaction_id": "3/336",
    "transaction_id": "0",
    "error_severity": "LOG",
    "sql_state_code": "00000",
    "backend_type": "client backend",
    "audit": {
      "audit_type": "SESSION",
      "statement_id": "1",
      "substatement_id": "1",
      "class": "READ",
      "command": "SELECT FOR KEY SHARE",
      "statement": "SELECT pg_current_wal_lsn()",
      "parameter": "<none>"
    }
  },
  "logging_pod": "cluster-example-1"
}

See the PGAudit documentation for more details about each field in a record.

EDB Audit logs

Clusters that are running on EDB Postgres Advanced Server (EPAS) can enable EDB Audit as follows:

apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: pgdgroup-example
spec:
  instances: 3
  imageName: docker.enterprisedb.com/k8s_enterprise_pgd/edb-postgres-advanced-pgd:17-pgd5-ubi9
  licenseKey: <LICENSE>

  cnp:
    postgresql:
      epas:
        audit: true

  storage:
    size: 1Gi

Setting .spec.cnp.postgresql.epas.audit: true enforces the following parameters on all the nodes:

edb_audit = 'csv'
edb_audit_destination = 'file'
edb_audit_directory = '/controller/log'
edb_audit_filename = 'edb_audit'
edb_audit_rotation_day = 'none'
edb_audit_rotation_seconds = '0'
edb_audit_rotation_size = '0'
edb_audit_tag = ''
edb_log_every_bulk_value = 'false'

Other parameters can be passed via .spec.cnp.postgresql.parameters as usual.
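For example, additional EDB Audit settings can be supplied alongside the enforced ones. The sketch below uses the edb_audit_connect and edb_audit_statement parameters; the chosen values are illustrative:

```yaml
spec:
  cnp:
    postgresql:
      epas:
        audit: true
      parameters:
        edb_audit_connect: "all"          # audit all connection attempts
        edb_audit_statement: "ddl, dml"   # audit DDL and DML statements
```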

The audit CSV logs are parsed and routed to standard output in JSON format, similar to all other logs:

  • .logger is set to edb_audit.
  • .msg is set to record.
  • .record contains the entire parsed record as a JSON object.

See the example below:

{
  "level": "info",
  "ts": 1624629110.7641866,
  "logger": "edb_audit",
  "msg": "record",
  "record": {
    "log_time": "2021-06-25 13:51:50.763 UTC",
    "user_name": "postgres",
    "database_name": "postgres",
    "process_id": "68",
    "connection_from": "[local]",
    "session_id": "60d5df76.44",
    "session_line_num": "5",
    "process_status": "idle in transaction",
    "session_start_time": "2021-06-25 13:51:50 UTC",
    "virtual_transaction_id": "3/93",
    "transaction_id": "1183",
    "error_severity": "AUDIT",
    "sql_state_code": "00000",
    "message": "statement: GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO \"streaming_replica\"",
    "detail": "",
    "hint": "",
    "internal_query": "",
    "internal_query_pos": "",
    "context": "",
    "query": "",
    "query_pos": "",
    "location": "",
    "application_name": "",
    "backend_type": "client backend",
    "command_tag": "GRANT",
    "audit_tag": "",
    "type": "grant"
  },
  "logging_pod": "pgdgroup-example-1-1"
}

See the EDB Audit documentation for more details about the fields in a record.

Other Logs

All logs generated by the operator and PG4K instances are in JSON format, with the logger field indicating the process that produced them. The possible logger values are as follows:

  • barman-cloud-wal-archive: logs from barman-cloud-wal-archive
  • barman-cloud-wal-restore: logs from barman-cloud-wal-restore
  • edb_audit: from the EDB Audit extension
  • initdb: logs from running initdb
  • pg_basebackup: logs from running pg_basebackup
  • pg_controldata: logs from running pg_controldata
  • pg_ctl: logs from running any pg_ctl subcommand
  • pg_rewind: logs from running pg_rewind
  • pgaudit: logs from the PGAudit extension
  • postgres: logs from the postgres instance (with msg distinct from record)
  • wal-archive: logs from the wal-archive subcommand of the instance manager
  • wal-restore: logs from the wal-restore subcommand of the instance manager
  • instance-manager: logs from the PostgreSQL instance manager in each node

With the exception of postgres and edb_audit, which follow a specific structure, all other logger values include a msg field containing the escaped message that was logged.
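A consumer can use the logger value to decide how to handle each entry: structured loggers carry their payload in record, while the rest carry a plain message in msg. A minimal Python sketch (the sample line is invented for illustration):

```python
import json

# A captured log line (contents invented for illustration).
line = (
    '{"logger": "pg_ctl", "msg": "waiting for server to start...", '
    '"logging_pod": "pod-1"}'
)
entry = json.loads(line)

# Structured loggers expose a parsed record; the rest expose a plain message.
if entry["logger"] in ("postgres", "edb_audit", "pgaudit"):
    payload = entry.get("record")
else:
    payload = entry["msg"]
print(payload)
```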