      Earlier this week, we announced the launch of Fujitsu Enterprise Postgres 15, with enhanced capabilities to manage enterprise data across multiple environments with security, agility, and flexibility. In this post, I will highlight some of the new features available in this release, with examples comparing it with the previous release.

I am pleased to introduce some of the key features of our latest enterprise, hybrid multi-cloud Postgres offering - Fujitsu Enterprise Postgres 15.

We recently launched version 15 of Fujitsu Enterprise Postgres, and I thought this would be a great opportunity to discuss the new OSS features offered and inherited in this release, which come on top of the exclusively Fujitsu-developed, enterprise-grade features introduced by Marcos Figueredo in his recent blog.

In this post, I describe these additional OSS features, alongside their benefits, with examples to walk you through the improved functionality.

      Removal of the CREATE permission on the PUBLIC schema

Prior to version 15, all users were granted CREATE and USAGE privileges by default on the public schema, which posed a security risk. This risk is mitigated in the new release: the CREATE privilege has been revoked from PUBLIC, so by default only the database owner can create objects in the public schema (all users retain USAGE).

Additionally, the owner of the public schema has been changed to the predefined role pg_database_owner. With this change, the behavior recommended in the security release for CVE-2018-1058 is now the default.

The examples below show the difference when a new user tries to create a table in the public schema.

      Version 14

      postgres=# CREATE USER test PASSWORD 'fep123';
      CREATE ROLE
      postgres=# \du
                                        List of roles
      Role name |                        Attributes                          |  Member of
      ----------+------------------------------------------------------------+-----------
      fsepuser  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
      test      |                                                            | {}

      postgres=# \c postgres test
      You are now connected to database "postgres" as user "test".
      postgres=> CREATE TABLE test (id numeric);
      CREATE TABLE

      Version 15

      postgres=# CREATE USER test PASSWORD 'fep123';
      CREATE ROLE
      postgres=# \du
                                        List of roles
      Role name |                        Attributes                          |  Member of
      ----------+------------------------------------------------------------+-----------
      fsepuser  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
      test      |                                                            | {}

      postgres=# \c postgres test
      You are now connected to database "postgres" as user "test".
      postgres=> CREATE TABLE test (id numeric);
      ERROR: permission denied for schema public
LINE 1: CREATE TABLE test (id numeric);
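
If an existing application still relies on the old behavior, the database owner (or a superuser) can explicitly restore it. A minimal sketch - re-granting CREATE on the public schema to all users:

postgres=# GRANT CREATE ON SCHEMA public TO PUBLIC;
GRANT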

      Changes related to the checkpoint functionality

New predefined role pg_checkpoint

Prior to version 15, only superusers were permitted to execute checkpoint operations. Version 15 introduces a new predefined role named pg_checkpoint: any non-superuser who is granted this role can run the CHECKPOINT command.

      Version 14

      postgres=# CREATE USER test_chkpt PASSWORD 'fep123';
      CREATE ROLE
      postgres=# \c postgres test_chkpt
      You are now connected to database "postgres" as user "test_chkpt".
      postgres=> CHECKPOINT;
      ERROR: must be superuser to do CHECKPOINT (11116)
      postgres=> \du

                                        List of roles
      Role name |                        Attributes                          |  Member of
      ----------+------------------------------------------------------------+-----------
      fsepuser  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
      test      |                                                            | {}
      test_chkpt|                                                            | {}

      Version 15

      postgres=# CREATE USER test_chkpt PASSWORD 'fep123';
      CREATE ROLE
      postgres=#
      postgres=# GRANT pg_checkpoint to test_chkpt;
      GRANT ROLE

      postgres=# \c postgres test_chkpt
      You are now connected to database "postgres" as user "test_chkpt".
      postgres=>
      postgres=> CHECKPOINT;
      CHECKPOINT
      postgres=> \du

                                        List of roles
      Role name |                        Attributes                          | Member of
      ----------+------------------------------------------------------------+-----------
      fsepuser  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
      test      |                                                            | {}
      test_chkpt|                                                            | {pg_checkpoint}

      Default value of the log_checkpoints parameter changed to 'on'

Prior to version 15, the log_checkpoints parameter in postgresql.conf was set to 'off' by default. In version 15, this default has been changed to 'on'.

      With this feature, checkpoints and restart points will now be recorded in the database server log file. Certain statistics, such as the number of buffers written and the time spent writing them, will be provided in the log messages. Even an inactive server will generate some log output in the server log file as a result of this.
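
You can verify the new default from psql and, if the extra log volume is unwanted, revert it. A minimal sketch (the ALTER SYSTEM change takes effect after a configuration reload):

postgres=# SHOW log_checkpoints;
 log_checkpoints
-----------------
 on
(1 row)

postgres=# ALTER SYSTEM SET log_checkpoints = off;
ALTER SYSTEM
postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)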

      Support of MERGE command

The MERGE statement in Fujitsu Enterprise Postgres 15 performs INSERT, UPDATE, and DELETE operations on a table (or a partition of a table) in a single statement, based on a join condition. The WHEN MATCHED clause specifies what happens to tuples that satisfy the join condition - update them (UPDATE), remove them (DELETE), or leave them unchanged (DO NOTHING) - and additional conditions can be attached to each clause. The WHEN NOT MATCHED clause specifies what happens to source tuples that do not satisfy the condition.

The MERGE command does not currently support foreign tables or updatable views; based on future requirements, such support may be added in a later release.
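
The general shape of the command is sketched below, using placeholder table and column names:

MERGE INTO target_table AS t
USING source_table AS s
ON t.key = s.key
WHEN MATCHED THEN
    UPDATE SET col = s.col
WHEN NOT MATCHED THEN
    INSERT VALUES (s.key, s.col);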

      Let’s say we have two different tables as below.

      airline=# SELECT * from aircrafts_data;
      aircraft_code|        model        | range
      -------------+---------------------+-------
      773          | Boeing 777-300      | 11100
      763          | Boeing 767-300      |  7900
      SU9          | Sukhoi Suprejet-100 |  3000
      320          | Airbus A320-200     |  5700
      321          | Airbus A321-200     |  5600
      319          | Airbus A319-100     |  6700
      733          | Boeing 737-300      |  4200
      CN1          | Cessna 208 Caravan  |  1200
      CR2          | Bombardier CRJ-200  |  2700
      (9 rows)

airline=# SELECT * FROM aircrafts_data_new;
      aircraft_code|        model        | range
      -------------+---------------------+-------
      773          | Boeing 777-300      | 11100
      763          | Boeing 767-300      |  7900
      SU9          | Sukhoi Suprejet-100 |  3000
      320          | Airbus A320-200     |  5700
      321          | Airbus A321-200     |  5600
      319          | Airbus A319-100     |  6700
      733          | Boeing 737-300      |  4200
      CN1          | Cessna 208 Caravan  |  1200
      CR2          | Bombardier CRJ-200  |  2700
      523          | Airbus 777-300      |  7900
      504          | Airbus 350-900      |  9000
      528          | Boeing 787-10       | 10000
      (12 rows)

Now we will merge the two tables using the statement below. We want to add only the new entries, while keeping existing rows unchanged when the data in the model column is the same.

airline=# MERGE INTO aircrafts_data AS c
airline-# USING aircrafts_data_new AS n
airline-# ON c.aircraft_code = n.aircraft_code
airline-# WHEN MATCHED AND c.model = n.model THEN
airline-# DO NOTHING
airline-# WHEN MATCHED AND c.model <> n.model THEN
airline-# UPDATE SET model = n.model
airline-# WHEN NOT MATCHED THEN
airline-# INSERT VALUES (n.aircraft_code, n.model, n.range);
MERGE 3
airline=#

Now, let’s run a SELECT on the original aircrafts_data table to verify that the MERGE statement worked as expected.

airline=# SELECT * FROM aircrafts_data;
      aircraft_code|         model       | range
      -------------+---------------------+-------
      773          | Boeing 777-300      | 11100
      763          | Boeing 767-300      |  7900
      SU9          | Sukhoi Suprejet-100 |  3000
      320          | Airbus A320-200     |  5700
      321          | Airbus A321-200     |  5600
      319          | Airbus A319-100     |  6700
      733          | Boeing 737-300      |  4200
      CN1          | Cessna 208 Caravan  |  1200
      CR2          | Bombardier CRJ-200  |  2700
      523          | Airbus 777-300      |  7900
      504          | Airbus 350-900      |  9000
528          | Boeing 787-10       | 10000
(12 rows)

      We can see that with the help of the MERGE command, we effectively merged two different tables.


      Enhancement in pg_basebackup

As many of you know, pg_basebackup is the standard tool for taking a base backup of a PostgreSQL cluster. The new version introduces the enhancements below.

• Option --target: Specifies where the base backup is stored.
  This option uses the TARGET option of the BASE_BACKUP replication protocol command. An output directory on the server can be specified using the server:/path format.
  Server-side backups can be run by a superuser or by a user granted the pg_write_server_files role. This option cannot be combined with -Xstream, since pg_basebackup, rather than the server, implements WAL streaming - and because stream is the default WAL method, you must specify either -Xfetch or -Xnone when this option is used.
• Option --compress: Specifies the compression method and level - for example, gzip:9.
  This option also accepts the client-gzip and server-gzip compression methods, which specify where the backup is compressed. Being able to use compression with the plain format lets you automatically compress the backup on the server and extract it again on the client side, which is quite helpful over slow network connections.

      Example:

pg_basebackup -h localhost -p 27500 -Ft --compress=client-gzip:9 -D /backup/<dir_date>/ -Xs
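
For comparison, here is a hypothetical invocation of the --target option, assuming the directory /backup/base exists on the server and the connecting user has the required privileges. Note the explicit -Xfetch, since -Xstream cannot be combined with this option:

pg_basebackup -h localhost -p 27500 --target=server:/backup/base -Xfetch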

      Enhancement in logical replication

      Logical replication is a method of replicating data objects and their changes, based on their replication identity (usually a primary key). We use the term logical in contrast to physical replication, which uses exact block addresses and byte-by-byte replication.

      Ability to publish all tables in a schema

Up to version 14, it was necessary to list each table in a schema one at a time when setting up logical replication and constructing the PUBLICATION, which was laborious and time-consuming. While it was possible to add all tables from all schemas, it was not possible to add all tables of a specific schema.

From version 15, the following syntax can be used when creating the PUBLICATION to add all tables of a given schema for logical replication, saving a lot of time.

      CREATE PUBLICATION <pub-name>
      FOR TABLES IN SCHEMA <schema-1>, <schema-2>, <schema-n>;

      Version 14

      airline=# \dn
         List of schemas
         Name   |  Owner
----------+----------
       bookings | fsepuser
       public   | fsepuser
      (2 rows)

      airline=# CREATE PUBLICATION pub1 FOR ALL TABLES;
      CREATE PUBLICATION
      airline=#
      airline=# SELECT pubname, schemaname, tablename FROM pg_publication_tables;
       pubname | schemaname  |    tablename
      ---------+-------------+-----------------
       pub1    | bookings    | boarding_passes
       pub1    | public      | test
       pub1    | bookings    | aircrafts_data
       pub1    | bookings    | flights
       pub1    | bookings    | airports_data
       pub1    | bookings    | seats
       pub1    | bookings    | ticket_flights
       pub1    | bookings    | tickets
       pub1    | bookings    | bookings
      (9 rows)

      Version 15

      airline=# CREATE PUBLICATION pub1 FOR TABLES IN SCHEMA bookings;
      CREATE PUBLICATION
      airline=#
      airline=# SELECT pubname, schemaname, tablename FROM pg_publication_tables;
       pubname | schemaname  |    tablename
      ---------+-------------+-----------------
       pub1    | bookings    | ticket_flights
       pub1    | bookings    | boarding_passes
       pub1    | bookings    | aircrafts_data
       pub1    | bookings    | flights
       pub1    | bookings    | airports_data
       pub1    | bookings    | seats
       pub1    | bookings    | tickets
       pub1    | bookings    | bookings
      (8 rows)

      Ability to filter publication content using a WHERE clause

Both CREATE PUBLICATION and ALTER PUBLICATION now accept a WHERE clause, which can be used to limit which tuples are replicated.

The replication condition must be provided per table in order to use this capability, and the WHERE expression must be enclosed in parentheses.

airline=# ALTER PUBLICATION pub1 SET TABLE aircrafts_data WHERE (range >= 5000);
      ALTER PUBLICATION
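
You can confirm that the filter has been applied by checking the rowfilter column of the pg_publication_tables view (output shown for illustration):

airline=# SELECT pubname, tablename, rowfilter FROM pg_publication_tables WHERE pubname = 'pub1';
 pubname |   tablename    |    rowfilter
---------+----------------+-----------------
 pub1    | aircrafts_data | (range >= 5000)
(1 row)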


      Allow publications to be restricted to specific columns

In version 15, it is possible to replicate only particular columns of a table by creating a PUBLICATION as shown below. Based on your requirements, you specify the list of columns after the table name in the CREATE PUBLICATION or ALTER PUBLICATION command.

airline=# CREATE PUBLICATION pub2 FOR TABLE aircrafts_data (aircraft_code, model);
      CREATE PUBLICATION
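
Similarly, the attnames column of pg_publication_tables shows which columns are published (output shown for illustration):

airline=# SELECT pubname, tablename, attnames FROM pg_publication_tables WHERE pubname = 'pub2';
 pubname |   tablename    |        attnames
---------+----------------+-----------------------
 pub2    | aircrafts_data | {aircraft_code,model}
(1 row)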


      Support of jsonlog in log_destination parameter

Like other relational database management systems (RDBMS), Fujitsu Enterprise Postgres has a logging mechanism for events and error messages. Logs are essential if something goes wrong with your application or database. However, poorly configured logs can confuse the DBAs or developers attempting to remedy a problem. For this reason, it is crucial that you configure and handle your logs properly.

      The importance of logging

First and foremost, logging enables DBAs to diagnose and fix issues. Simply put, the database log should be the first place to check when identifying or diagnosing any issue - errors, defects, security holes, and performance degradation are all examples. The database log provides overall activity tracking along with timestamps, which helps you manage and operate the database for your application.

      Logging has advantages such as:

• Issue diagnosis: Suppose you wish to reproduce an issue that a user has reported in your production environment. If the issue is recorded in a log file, it is simpler to refer to those logs and replay the user action in a non-production or test environment for further analysis.
      • Analytics: We can learn more about application peak times, session or connection lifetimes, etc. by looking at database logs. You may further evaluate user activity and potentially enhance the user experience, application speed, and database performance by adding more data to the log files.


      New logging format jsonlog

      As we know, PostgreSQL supports several methods for logging server messages, including stderr, csvlog, and syslog - and on Windows, the eventlog. Version 15 now includes a new logging method called jsonlog.

The default value of log_destination is stderr. This parameter can only be set in postgresql.conf or on the server command line.

As I mentioned above, jsonlog is a new value for log_destination that writes the database logs in JSON format. This format is very convenient for feeding the logs into other applications, such as monitoring tools. The files generated by this log format are suffixed with .json and follow the same rotation policies as the other formats (stderr and csvlog), depending on the backend configuration.
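
Enabling it only requires adding jsonlog to log_destination; note that jsonlog requires the logging collector to be running. A minimal postgresql.conf sketch:

log_destination = 'stderr,jsonlog'
logging_collector = on

Each entry in the .json file is a single JSON object per line, along the lines of this illustrative example:

{"timestamp":"2022-10-12 08:58:48.123 UTC","pid":12345,"error_severity":"LOG","message":"database system is ready to accept connections"}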

When the logging collector is enabled, PostgreSQL stores a file named current_logfiles in the data directory, where each line of that file lists a format and the current log file name that PostgreSQL is writing to in that format.

      [postgres@localhost PGSQL15]$ pwd
      /opt/PGSQL15
      [postgres@localhost PGSQL15]$ cat current_logfiles
      stderr log/postgresql-2022-10-12_085848.log
      jsonlog log/postgresql-2022-10-12_085848.json

      DBAs, sysadmins, and developers can keep track of where the system is going to log with the current settings. This is useful when there is a log rotation in place, when troubleshooting issues, and when automatically feeding the current JSON format log to third-party applications for monitoring.
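
For example, a monitoring script could consume the JSON log directly. A hypothetical sketch using jq (assuming jq is installed, and substituting the actual file name listed in current_logfiles):

[postgres@localhost PGSQL15]$ tail -f log/postgresql-2022-10-12_085848.json | jq -r 'select(.error_severity == "ERROR") | .message'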

      Conclusion

      In this post, I have introduced some of the new features released in Fujitsu Enterprise Postgres 15. There are many more enhancements and changes in addition to the ones I've described here, which make Fujitsu Enterprise Postgres the enterprise Postgres offering of choice for multi/hybrid cloud environments.

      My colleagues and I will be publishing technical deep-dive articles in the coming weeks. Make sure you sign up for notifications below.


      Topics: PostgreSQL, Database security, Fujitsu Enterprise Postgres, PostgreSQL community, Logical replication
