
PostgreSQL recommendations

For requirements that apply to all database types, see Prepare a database for Deep Security Manager.

  1. To prepare a PostgreSQL database for Deep Security Manager, create its database user account, and grant permissions:

    CREATE DATABASE "<database-name>";

    CREATE ROLE "<dsm-username>" WITH PASSWORD '<password>' LOGIN;

    GRANT ALL ON DATABASE "<database-name>" TO "<dsm-username>";

    GRANT CONNECT ON DATABASE "<database-name>" TO "<dsm-username>";

    If Deep Security Manager will have multiple tenants, also grant the right to create new databases and roles for tenants:

    ALTER ROLE "<dsm-username>" CREATEDB CREATEROLE;

  2. If connections between Deep Security Manager and PostgreSQL use an untrusted network, consider using TLS to improve security. See Encrypt communication between Deep Security Manager and the database.
  3. Configure database log rotation and performance settings.

    For best practices, see Tuning PostgreSQL settings below, which covers logging, lock management, maximum connections, memory, checkpoint, and autovacuum settings.

    Steps vary by distribution and managed hosting:

    • Self-hosted database: Defaults are generic values from the PostgreSQL core distribution. Some defaults are not appropriate for data center or customized cloud installs, especially in larger deployments.

      To change settings:

      1. In a plain text editor, open the postgresql.conf file.
      2. Edit the parameters.
      3. Save the file.
      4. Restart the PostgreSQL service. (A sample session for these steps appears after this list.)
    • Amazon RDS or Amazon Aurora: Defaults vary by instance size. Often, you only need to fine-tune autovacuuming, max_connections, and effective_cache_size. To change settings, use database parameter groups and then restart the database instance.

    When fine-tuning performance, verify settings by monitoring your database IOPS with a service such as Amazon CloudWatch.
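For a self-hosted database on Linux, the edit-and-restart steps above might look like the following sketch. The file path and service name are assumptions (they vary by distribution and PostgreSQL version), so adjust them for your system:

# Open the configuration file in a plain text editor
# (this path is common on RHEL-family systems; yours may differ)
sudo vi /var/lib/pgsql/data/postgresql.conf

# Restart the PostgreSQL service so the new settings take effect
# (the service may be named postgresql, postgresql-<version>, or similar)
sudo systemctl restart postgresql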

If you need additional help, the PostgreSQL community maintains a list of companies that offer professional support.

Tuning PostgreSQL settings

Logging settings

By default, PostgreSQL log files are not rotated, which can lead to the log files using a large amount of disk space. When using PostgreSQL with Deep Security, we recommend that you use these four parameters in the postgresql.conf file to configure log rotation:

  • log_filename
  • log_rotation_age
  • log_rotation_size
  • log_truncate_on_rotation

log_rotation_age and log_rotation_size control when a new log file is created. For example, setting log_rotation_age to 1440 creates a new log file every 1440 minutes (1 day), and setting log_rotation_size to 10000 creates a new log file when the previous one reaches 10,000 KB.

log_filename controls the name given to each log file. You can use strftime-style date and time format conversions in the name. For a complete list, see https://pubs.opengroup.org/onlinepubs/009695399/functions/strftime.html.

When log_truncate_on_rotation is set to "on", PostgreSQL overwrites (truncates) any existing log file that has the same name as a newly created log file, rather than appending to it.

There are several combinations of parameters that you can use to achieve a log rotation to suit your requirements. Here is one example:

  • log_filename = 'postgresql-%a.log' (every log file has the first 3 letters of the weekday in its name)
  • log_rotation_age = 1440 (a new log file is created daily)
  • log_rotation_size = 0 (disabled, to prevent the daily log file from being overwritten each time a size limit is exceeded)
  • log_truncate_on_rotation = on (enable log file overwrite)
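Putting the example together, a postgresql.conf excerpt implementing this daily rotation scheme could look like the following. Note that logging_collector = on, added here, is required for these rotation parameters to take effect:

logging_collector = on               # required for the rotation parameters below
log_filename = 'postgresql-%a.log'   # weekday abbreviation (Mon, Tue, ...) in the name
log_rotation_age = 1440              # create a new log file daily (1440 minutes)
log_rotation_size = 0                # disable size-based rotation
log_truncate_on_rotation = on        # overwrite last week's file of the same name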

Lock management

By default, the deadlock_timeout setting in the postgresql.conf file is configured to 1 second. Every time a query waits on a lock for longer than this, PostgreSQL launches a check for a deadlock condition and reports an error if it finds one. On bigger systems, where it can be normal for queries to wait for more than 1 second during load times, these frequent checks can degrade performance, so consider increasing the deadlock_timeout setting. The PostgreSQL documentation contains this recommendation: "Ideally the setting should exceed your typical transaction time [...]".
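For example, on a system where transactions routinely hold locks for more than 1 second under load, you might raise the value in the postgresql.conf file. The 2s below is an illustration, not a sourced recommendation; choose a value that exceeds your typical transaction time:

deadlock_timeout = 2s   # example value only; should exceed your typical transaction time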

Maximum connections

The max_connections setting in the postgresql.conf file specifies the maximum number of concurrent connections to the database. The default value is 100. We recommend increasing this value to 500 (a combined example appears after the Effective cache size section).

Shared buffers

The shared_buffers setting in the postgresql.conf file specifies how much memory PostgreSQL can use to cache data. A common starting point is to dedicate about one quarter of the system's RAM to shared buffers; for example, a system with 1 GB of RAM would use a shared_buffers value of 256 MB (the default is 32 MB).

Work memory and maintenance work memory

The work_mem setting in the postgresql.conf file specifies the amount of memory that can be used by internal sort operations and hash tables before writing to temporary disk files. The default value is 1 MB, but it should be increased when running complex queries. The maintenance_work_mem setting determines the maximum amount of memory used for maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

Effective cache size

The effective_cache_size setting in the postgresql.conf file tells the query planner how much memory is available for caching data, including the operating system's disk cache. This setting only affects cost estimates during query planning and does not result in higher memory consumption. Consider increasing this setting; a combined example of the connection and memory settings follows.
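As an illustration, the postgresql.conf excerpt below combines the connection and memory settings discussed above for a hypothetical dedicated database server with 8 GB of RAM. Only max_connections = 500 and the one-quarter-of-RAM guideline for shared_buffers come from this article; the remaining values are assumptions that you should adapt to your workload:

max_connections = 500          # recommended value (see Maximum connections)
shared_buffers = 2GB           # about one quarter of the assumed 8 GB of RAM
work_mem = 16MB                # assumed value; raise it if complex queries spill to disk
maintenance_work_mem = 512MB   # assumed value; speeds up vacuum and index maintenance
effective_cache_size = 6GB     # assumed value; roughly the memory available for caching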

Checkpoints

Checkpoints are usually the main source of writes to data files. To get the smoothest performance, most checkpoints should be "timed" (triggered by checkpoint_timeout) and not "requested" (triggered by filling all the available WAL segments or by an explicit CHECKPOINT command). We strongly recommend that you make checkpoints less frequent.

Parameter name Recommended value
checkpoint_timeout 15min
checkpoint_completion_target 0.9
max_wal_size 16GB
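In postgresql.conf form, the recommendations in this table are:

checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
max_wal_size = 16GB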

Write-ahead log (WAL)

If you use database replication, consider using wal_level = replica.

Autovacuum settings

PostgreSQL requires periodic maintenance called "vacuuming". Usually, you don't need to change the default value for autovacuum_max_workers.

On the entitys and attribute2s tables, if frequent writes cause many rows to change often (such as in large deployments with short-lived cloud instances), autovacuum should run more frequently to minimize disk space usage and maintain performance. The parameters must be set both at the database level and on those specific tables.

Database-level parameter name Recommended value
autovacuum_work_mem 1GB

Table-level parameter name Recommended value
autovacuum_vacuum_cost_delay 10
autovacuum_vacuum_scale_factor 0.01
autovacuum_analyze_scale_factor 0.005

To change the database-level setting, you must edit the configuration file or database parameter group, and then restart the database server; it cannot be changed with SQL commands while the database is running.
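In the postgresql.conf file (or the equivalent database parameter group entry on Amazon RDS or Aurora), the database-level value recommended above looks like this:

autovacuum_work_mem = 1GB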

To change the table-level settings, you can either edit the configuration file or database parameter group, or enter these commands:

ALTER TABLE public.entitys SET (autovacuum_enabled = true, autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_scale_factor = 0.01, autovacuum_analyze_scale_factor = 0.005);

ALTER TABLE public.attribute2s SET (autovacuum_enabled = true, autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_scale_factor = 0.01, autovacuum_analyze_scale_factor = 0.005);
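To confirm that the table-level settings took effect, you can query the pg_class system catalog, which stores per-table storage options in its reloptions column:

SELECT relname, reloptions
FROM pg_class
WHERE relname IN ('entitys', 'attribute2s');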

High availability

High availability (HA) is not configured by default and was not enabled in our test environment, but it is highly recommended to ensure business continuity in case of a database malfunction or server inaccessibility. Refer to your PostgreSQL documentation for information on how to enable and configure HA.

Backup and recovery

Backup and recovery is not set by default, but it's absolutely essential in a production environment.

Basic tools like pg_dump or pg_basebackup are not suitable for backups in an enterprise environment. Consider using other tools like Barman (https://www.pgbarman.org/index.html) for backup and recovery.

Linux recommendations

Transparent Huge Pages (Linux)

Transparent Huge Pages (THP) is a Linux memory management feature that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages. THP is enabled by default on Linux, but it is not recommended for computers running a database and should be disabled if PostgreSQL is installed on a Linux computer. Refer to your OS vendor's documentation for details.
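As a sketch, on many Linux distributions you can check and temporarily disable THP with the commands below. Making the change persist across reboots varies by distribution (for example, a kernel boot parameter or a startup script), so consult your OS vendor's documentation:

# Check the current THP mode; the active value is shown in brackets
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP until the next reboot (requires root privileges)
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled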

Strengthen host-based authentication (Linux)

By default, Linux does not restrict host-based authentication (HBA) for databases. Strengthening the HBA settings on a database appliance helps to prevent unauthorized access from external hosts. The HBA settings restrict access to an IP address range so that only hosts within that range have access. HBA settings were not used in our test environment, but consider configuring them for production deployments.
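As a hypothetical example, a pg_hba.conf entry like the one below allows the Deep Security Manager role to connect only from a specific subnet. The address range is a placeholder for your own network, and the appropriate authentication method depends on your PostgreSQL version:

# TYPE  DATABASE         USER             ADDRESS       METHOD
host    <database-name>  <dsm-username>   10.0.0.0/24   md5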