
      PostgreSQL 15 doesn’t disappoint. The new release delivers the long-awaited MERGE command. It may not have arrived at the speed of light, but it landed right on time, like a DoorDash delivery.

      The maturity of PostgreSQL continues to impress, and the MERGE command is now another highlight to compete with proprietary databases.

      Users who made the open source journey over to PostgreSQL have been requesting the MERGE feature for quite some time, and now we have it. Today, data arrives at speed, lands in multiple tables, and needs to be manipulated without our code causing a performance impact.

      The case for the MERGE command

      In most cases, our tables grow large because ETL jobs bring in data from various sources. The MERGE feature lets us code more efficiently: conditional INSERT, UPDATE, and DELETE actions can be combined into a single statement, eliminating the need to run multiple statements against one of these large tables.

      Until now, developers had to write separate statements to check whether a row exists and then take different actions depending on the result.
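
      For illustration, here is a rough sketch of that two-statement pattern, written against the bank_loan_ledger and loan_entry tables defined in the example below (one of several possible pre-MERGE approaches, not taken from the original post):

      -- 1) Update ledger rows that already have a matching loan entry
      UPDATE bank_loan_ledger A
         SET bank_loan_amt = A.bank_loan_amt + B.entry_amt
        FROM loan_entry B
       WHERE A.bank_id = B.bank_id;

      -- 2) Insert loan entries that did not match any existing ledger row
      INSERT INTO bank_loan_ledger (bank_id, bank_loan_amt)
      SELECT B.bank_id, B.entry_amt
        FROM loan_entry B
       WHERE NOT EXISTS (SELECT 1
                           FROM bank_loan_ledger A
                          WHERE A.bank_id = B.bank_id);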

      The example below shows how the new command streamlines and simplifies the process, using just one statement to maintain a bank loan ledger table based on incoming loan records.

      Example

      postgres=# \d+ bank_loan_ledger
                                          Table "public.bank_loan_ledger"
            Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
      ---------------+---------+-----------+----------+---------+---------+--------------+--------------
       bank_id       | integer |           |          |         |         |              |
       bank_loan_amt | numeric |           |          |         |         |              |

      postgres=# \d+ loan_entry
                                           Table "public.loan_entry"
          Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
      -------------+---------+-----------+----------+---------+---------+--------------+--------------
       bank_id     | integer |           |          |         |         |              |
       cust_id     | integer |           |          |         |         |              |
       entry_amt   | numeric |           |          |         |         |              |

      postgres=# MERGE INTO bank_loan_ledger A
      postgres-# USING loan_entry B
      postgres-# ON B.bank_id = A.bank_id
      postgres-# WHEN MATCHED THEN
      postgres-#   UPDATE SET bank_loan_amt = bank_loan_amt + entry_amt
      postgres-# WHEN NOT MATCHED THEN
      postgres-#   INSERT (bank_id, bank_loan_amt) VALUES (B.bank_id, B.entry_amt);
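
      As a quick sanity check, consider a hypothetical set of sample rows (assumed values, not from the original post) and how the two branches would handle them:

      -- hypothetical sample data
      INSERT INTO bank_loan_ledger VALUES (1, 1000);
      INSERT INTO loan_entry VALUES (1, 101, 250), (2, 102, 500);

      -- after running the MERGE above:
      --   bank_id 1 matches, so its ledger amount becomes 1000 + 250 = 1250
      --   bank_id 2 has no match, so a new ledger row (2, 500) is inserted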

      Powerful syntax, support for complex use cases 

      We are talking about PostgreSQL, so there are several options available, such as using MERGE from PL/pgSQL to help build a business transaction that needs to process multiple steps.
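
      As a minimal sketch (the function name, the staging-table cleanup step, and the surrounding logic are illustrative assumptions, not part of the original example), a MERGE can be wrapped in a PL/pgSQL function so that several steps run as one unit:

      CREATE OR REPLACE FUNCTION apply_loan_entries() RETURNS void AS $$
      BEGIN
        -- Step 1: fold the staged loan entries into the ledger
        MERGE INTO bank_loan_ledger A
        USING loan_entry B
        ON B.bank_id = A.bank_id
        WHEN MATCHED THEN
          UPDATE SET bank_loan_amt = bank_loan_amt + entry_amt
        WHEN NOT MATCHED THEN
          INSERT (bank_id, bank_loan_amt) VALUES (B.bank_id, B.entry_amt);

        -- Step 2: clear the staging table once the entries are applied
        DELETE FROM loan_entry;
      END;
      $$ LANGUAGE plpgsql;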

      The syntax of the command allows you to specify more than one MATCHED and NOT MATCHED condition, extending your ability to manipulate the data to suit your needs.

      Each WHEN condition is evaluated in the order specified in the command; for each candidate row, the first condition that qualifies determines the action taken.

      Example

      postgres=# \d+ product_stock
                                           Table "public.product_stock"
          Column    |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
      --------------+---------+-----------+----------+---------+---------+--------------+--------------
       prod_id      | integer |           |          |         |         |              |
       qty_in_stock | integer |           |          |         |         |              |

      postgres=# \d+ stock_adjustment
                                      Table "public.stock_adjustment"
         Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
      ------------+---------+-----------+----------+---------+---------+--------------+--------------
       prod_id    | integer |           |          |         |         |              |
       adjust_qty | integer |           |          |         |         |              |

      postgres=# MERGE INTO product_stock A
      postgres-# USING stock_adjustment B
      postgres-# ON B.prod_id = A.prod_id
      postgres-# WHEN MATCHED AND qty_in_stock + adjust_qty > 0 THEN   -- A
      postgres-#   UPDATE SET qty_in_stock = qty_in_stock + adjust_qty
      postgres-# WHEN MATCHED THEN                                     -- B
      postgres-#   DELETE
      postgres-# WHEN NOT MATCHED AND adjust_qty > 0 THEN              -- C
      postgres-#   INSERT VALUES (B.prod_id, B.adjust_qty)
      postgres-# WHEN NOT MATCHED THEN                                 -- D
      postgres-#   DO NOTHING;

      In the example above, for each candidate row, we have:

      A If there is a matching row and the adjustment leaves the stock amount greater than zero, then update the matching row

      B If there is a matching row but the adjustment leaves the stock amount at zero or less, then delete the matching row

      C If there is no matching row and the stock adjustment amount is greater than zero, then insert the row into the target table

      D If there is no matching row and the stock adjustment amount is zero or negative, then do nothing.
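
      To make the four branches concrete, here is a hypothetical set of rows (assumed values, not from the original post) and the branch each one would take:

      -- hypothetical sample data
      INSERT INTO product_stock VALUES (1, 10), (2, 3);
      INSERT INTO stock_adjustment VALUES (1, -5), (2, -3), (3, 7), (4, -2);

      -- after the MERGE:
      --   prod_id 1: matched, 10 - 5 > 0   -> branch A, stock updated to 5
      --   prod_id 2: matched, 3 - 3 = 0    -> branch B, row deleted
      --   prod_id 3: not matched, +7 > 0   -> branch C, row (3, 7) inserted
      --   prod_id 4: not matched, -2 <= 0  -> branch D, nothing happens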

      Coding without multiple statements is more efficient, but it comes at a price: complex join conditions and subexpressions can make the statement's behavior harder to predict. Testing each part of your statement individually will help you confirm it renders the proper results.
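
      One way to do that (an illustrative sketch, not a prescription from the original post) is to preview which branch each source row would take with an ordinary join before running the MERGE:

      -- preview the branch each stock_adjustment row would fall into
      SELECT B.prod_id,
             A.qty_in_stock,
             B.adjust_qty,
             CASE
               WHEN A.prod_id IS NOT NULL AND A.qty_in_stock + B.adjust_qty > 0 THEN 'A: update'
               WHEN A.prod_id IS NOT NULL                                       THEN 'B: delete'
               WHEN B.adjust_qty > 0                                            THEN 'C: insert'
               ELSE                                                                  'D: do nothing'
             END AS branch
        FROM stock_adjustment B
        LEFT JOIN product_stock A ON A.prod_id = B.prod_id;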

       

      Before we go

      With the MERGE command, PostgreSQL adds yet another highlight that lets it compete with proprietary databases, offering the flexibility to handle large volumes of data under different conditions.

       

      Tim Steward
      Principal Data Enterprise Architect, Fujitsu
      Tim has more than 20 years of experience in the industry with significant expertise in RDBMS, including but not limited to Postgres and Oracle, helping customers understand their architectural landscape and how they can leverage open-source database technology.
      Acknowledged as an experienced technical leader, Tim speaks frequently at conferences and has written numerous papers and blogs.
