PostgreSQL recommendations

For requirements that apply to all database types, see Prepare a database for Deep Security Manager.

  1. To prepare a PostgreSQL database for Deep Security Manager, create the database, create its database user account, and grant permissions:

    CREATE DATABASE "<database-name>";

    CREATE ROLE "<dsm-username>" WITH PASSWORD '<password>' LOGIN;

    GRANT ALL ON DATABASE "<database-name>" TO "<dsm-username>";

    GRANT CONNECT ON DATABASE "<database-name>" TO "<dsm-username>";

    If Deep Security Manager will have multiple tenants, also grant the right to create new databases and roles for tenants:

    ALTER ROLE "<dsm-username>" CREATEDB CREATEROLE;

  2. If connections between Deep Security Manager and PostgreSQL use an untrusted network, consider using TLS to improve security. See Encrypt communication between the Deep Security Manager and the database.
  3. Configure database log rotation and performance settings.

    For best practices, see the sections below, including Log rotation, Lock management, Maximum concurrent connections, and Autovacuum settings.

    Steps vary by distribution and managed hosting:

    • Self-hosted database: Defaults are the generic values from the PostgreSQL core distribution. Some defaults are not appropriate for data center or customized cloud installations, especially in larger deployments.

      To change settings:

      1. In a plain text editor, open the postgresql.conf file.
      2. Edit the parameters.
      3. Save the file.
      4. Restart the PostgreSQL service.
    • Amazon RDS or Amazon Aurora: Defaults vary by instance size. Often, you only need to fine-tune autovacuuming, max_connections, and effective_cache_size. To change settings, use database parameter groups, and then restart the database instance.

    When fine-tuning performance, verify settings by monitoring your database IOPS with a service such as Amazon CloudWatch.

If you need additional help, professional PostgreSQL support services are available.

Log rotation

In PostgreSQL core distributions, by default, the database's local log file has no age or file size limit. Logs will gradually consume more disk space.

To prevent that, configure either remote logging (set log_destination to a syslog server) or local log rotation.

Log files can be rotated based on an age limit, a file size limit, or both (whichever is reached first). When a limit is reached, PostgreSQL either creates a new log file or, if a file matching the file name pattern already exists, reuses it. Reuse either appends to the file or (for age-based rotation) overwrites it.

Log rotation parameters are:

  • logging_collector: Enter "on" to enable database logging.
  • log_filename: Log file name pattern. Patterns use strftime-style % escapes for the date and time (for example, %a is the abbreviated day of the week).
  • log_truncate_on_rotation: Enter either "off" to append to the existing log file, or "on" to overwrite it. Only applies when time-based log rotation occurs. (File size-based log rotation always appends.)
  • log_rotation_age: Maximum age in minutes of a log file. Enter "0" to disable time-based log rotation.
  • log_rotation_size: Maximum size in kilobytes (KB) of a log file. Enter "0" to disable file size-based log rotation.

Example: Daily Database Log Rotation

These parameters create 7 rotating database log files: one for each day of the week. (File names are "postgresql-Mon.log" for Monday, etc.)

Each day (1440 minutes) either creates a file with that day's name (if none exists) or overwrites that day's log file from the previous weekly cycle.

During heavy load, logging can temporarily exceed your disk space quota because the file size limit is disabled. However, the number and names of the files do not change.

logging_collector = on

log_filename = 'postgresql-%a.log'

log_rotation_age = 1440

log_rotation_size = 0

log_truncate_on_rotation = on

Lock management

Increase deadlock_timeout to exceed your deployment's normal transaction time.

Each time a query waits for a lock for longer than deadlock_timeout, PostgreSQL checks for a deadlock condition and (if log_lock_waits is configured) logs a message. On larger deployments during heavy load, however, it's often normal (not an error) to wait for more than the default of 1 second. Logging these normal events decreases performance.
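
A minimal postgresql.conf sketch, assuming transactions in your deployment normally complete within about 2 seconds (measure your own workload and pick a value above it):

deadlock_timeout = 2s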

Maximum concurrent connections

Increase max_connections to 500.
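
In postgresql.conf:

max_connections = 500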

Effective cache size

Consider increasing effective_cache_size. This setting is the query planner's estimate of how much memory is available for disk caching. It only affects cost estimates during query planning, and doesn't increase actual RAM usage.
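
As an illustrative sketch (an assumption, not a stated Deep Security value), a dedicated database server with 32 GB of RAM is often given roughly 50 to 75 percent of RAM:

effective_cache_size = 24GB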

Shared buffers

Increase shared_buffers to 25% of the RAM. This setting specifies how much memory PostgreSQL can use to cache data, which improves performance.
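
For example, on a server with 32 GB of RAM, 25% is:

shared_buffers = 8GB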

Work memory and maintenance work memory

Increase work_mem. This setting specifies the amount of RAM that can be used by internal sort operations and hash tables before writing to temporary disk files. More memory is required when running complex queries.

Consider increasing maintenance_work_mem. This setting determines the maximum amount of memory used for maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
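
A sketch with illustrative values (assumptions to tune against your query complexity and available RAM):

work_mem = 64MB
maintenance_work_mem = 1GB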

Checkpoints

Reduce checkpoint frequency. Checkpoints usually cause most writes to data files. To optimize performance, most checkpoints should be "timed" (triggered by checkpoint_timeout), not "requested" (triggered by filling all the available WAL segments or by an explicit CHECKPOINT command).

Parameter name                 Recommended value
checkpoint_timeout             15min
checkpoint_completion_target   0.9
max_wal_size                   16GB
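
In postgresql.conf, these recommendations are:

checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
max_wal_size = 16GB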

Write-ahead log (WAL)

If you use database replication, consider using wal_level = replica.
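
In postgresql.conf (replica is the default value beginning with PostgreSQL 10):

wal_level = replica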

Autovacuum settings

PostgreSQL requires periodic maintenance called "vacuuming". Usually, you don't need to change the default value for autovacuum_max_workers.

On the entitys and attribute2s tables, if frequent writes cause many rows to change often (such as in large deployments with short-lived cloud instances), then autovacuum should run more frequently to minimize disk space usage and maintain performance. Parameters must be set on both the overall database and those specific tables.

Database-level parameter name   Recommended value
autovacuum_work_mem             1GB

Table-level parameter name        Recommended value
autovacuum_vacuum_cost_delay      10
autovacuum_vacuum_scale_factor    0.01
autovacuum_analyze_scale_factor   0.005

To change the database-level setting, you must edit the configuration file or database parameter group, and then restart the database server. SQL commands cannot change that setting while the database is running.
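
For example, in postgresql.conf or the parameter group:

autovacuum_work_mem = 1GB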

To change the table-level settings, you can either edit the configuration file or database parameter group, or enter these commands:

ALTER TABLE public.entitys SET (autovacuum_enabled = true, autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_scale_factor = 0.01, autovacuum_analyze_scale_factor = 0.005);

ALTER TABLE public.attribute2s SET (autovacuum_enabled = true, autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_scale_factor = 0.01, autovacuum_analyze_scale_factor = 0.005);
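
To confirm that the table-level overrides took effect, you can query the pg_class catalog (an optional check, not required by Deep Security):

SELECT relname, reloptions FROM pg_class WHERE relname IN ('entitys', 'attribute2s');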

PostgreSQL on Linux

Transparent huge pages

Transparent huge pages (THP) is a Linux memory management system that reduces the overhead of translation lookaside buffer (TLB) lookups on computers with large amounts of RAM by using larger memory pages. By default, THP is enabled, but it isn't recommended for PostgreSQL database servers. To disable it, see your OS vendor's documentation.

Host-based authentication

Host-based authentication (HBA) can prevent unauthorized access to the database from computers that aren't in the allowed IP address range. By default, Linux doesn't apply HBA restrictions to databases. However, it's usually better to use a security group or firewall instead.
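
In PostgreSQL, these restrictions are configured in the pg_hba.conf file. A minimal sketch, assuming a hypothetical database "dsm", user "dsm_user", and a manager subnet of 10.0.1.0/24 (all three are placeholders, not Deep Security defaults):

# TYPE  DATABASE  USER      ADDRESS        METHOD
host    dsm       dsm_user  10.0.1.0/24    scram-sha-256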