
      A Patroni-based Fujitsu Enterprise Postgres High Availability setup provides automated failovers, efficient load balancing, and minimal downtime.

      Introducing Patroni

      Patroni is an open-source tool that makes it easier to configure and manage a high availability (HA) Fujitsu Enterprise Postgres cluster. It ensures continuous availability by automating failover and leader election, using a distributed consensus mechanism.

      Patroni typically integrates with HAProxy for load balancing and ETCD for cluster coordination, enabling a highly resilient PostgreSQL setup that minimizes downtime.

      Key components

      • HAProxy

        HAProxy is a powerful, fast load balancer that manages client connections. In this Patroni setup, it forwards requests to the active leader node. This provides a single endpoint for client applications, allowing seamless redirection to the primary database.

      • ETCD

        ETCD is a distributed key-value store that facilitates coordination among Patroni nodes. Patroni uses ETCD to keep track of cluster states, manage leader elections, and synchronize information across nodes, enabling automated failovers when needed.

      • Patroni Agents

        Each Fujitsu Enterprise Postgres node is managed by a Patroni agent that monitors the health and status of its Fujitsu Enterprise Postgres instance. These agents communicate with ETCD to update the cluster status and initiate failover or recovery when required.
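      The components above meet in each node's Patroni configuration file. The following is a minimal sketch only; the scope, host names, credentials, and paths (including the Fujitsu Enterprise Postgres binary directory) are hypothetical and must be adapted to your environment:

      ```yaml
      # patroni.yml - hypothetical example for the first cluster node
      scope: fep-cluster          # cluster name shared by all members
      name: fep-node1             # unique name of this member

      restapi:
        listen: 0.0.0.0:8008
        connect_address: fep-node1:8008

      etcd3:
        hosts: etcd1:2379,etcd2:2379,etcd3:2379

      postgresql:
        listen: 0.0.0.0:5432
        connect_address: fep-node1:5432
        data_dir: /database/data
        bin_dir: /opt/fsep/bin    # assumed install path of the FEP server binaries
        authentication:
          replication:
            username: replicator
            password: change-me
          superuser:
            username: postgres
            password: change-me
      ```

      Each of the three database nodes gets an equivalent file with its own name and addresses; Patroni discovers the other members through the shared ETCD endpoints.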

      Patroni features

      Patroni includes several critical features for managing high availability in Fujitsu Enterprise Postgres clusters:

      • Automated failover and recovery

        Patroni continuously monitors the primary node and promotes a replica if a failure occurs. This automated failover ensures minimal downtime and uninterrupted database access.

      • Distributed consensus for leader election

        By using ETCD (or alternatives like Consul or ZooKeeper), Patroni ensures that only one node is the primary at any time, maintaining data consistency.

      • Flexible replication configuration

        Patroni allows administrators to configure both synchronous and asynchronous replication, depending on consistency and performance requirements.

      • Load balancing support

        Integration with HAProxy provides efficient load balancing for client connections, routing read and write requests to the appropriate nodes.

      • Customizable pre- and post-failover scripts

        Patroni allows custom scripts to be run before and after failovers, making it easy to integrate with other workflows or external systems.

      • REST API for management

        Patroni offers a REST API for health checks, monitoring, and failover management, making it easy to integrate with monitoring tools or manage the cluster programmatically.
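      As an illustration of that REST API, the endpoints below (/primary, /replica, and /cluster) are standard Patroni endpoints served on port 8008 by default; the host names are placeholders for this example setup, so these commands only work against a running cluster:

      ```shell
      # Returns HTTP 200 if this node is the current leader, 503 otherwise
      curl -s -o /dev/null -w "%{http_code}\n" http://fep-node1:8008/primary

      # Returns HTTP 200 if this node is a healthy, running replica
      curl -s -o /dev/null -w "%{http_code}\n" http://fep-node2:8008/replica

      # Cluster-wide view as JSON: members, roles, and replication state
      curl -s http://fep-node1:8008/cluster
      ```

      The same status-code endpoints double as health checks for load balancers, which is how HAProxy decides where to route traffic in this architecture.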

      The above diagram represents a robust, highly available Fujitsu Enterprise Postgres cluster using Patroni with the following components:

      • HAProxy cluster: Comprising two nodes, HAProxy1 and HAProxy2, this cluster manages incoming client connections. Client applications connect to HAProxy, which directs traffic to the active primary database. If HAProxy1 fails, HAProxy2 takes over to ensure continuous connection routing.
      • Fujitsu Enterprise Postgres nodes (PostgreSQL servers):
        • Fujitsu Enterprise Postgres node 1: This is the initial primary node. Patroni assigns it the role of the leader in the cluster.
        • Fujitsu Enterprise Postgres node 2 and Fujitsu Enterprise Postgres node 3: These nodes are configured as replicas through streaming replication. If Fujitsu Enterprise Postgres node 1 fails, one of these nodes is promoted to the primary role, ensuring data availability.
        • Each node runs a Patroni agent that monitors the health of its Fujitsu Enterprise Postgres instance and reports its status to ETCD.
      • ETCD cluster: Comprising three nodes (ETCD1, ETCD2, and ETCD3), this ETCD cluster maintains cluster-wide consensus and provides the coordination needed for failover management. ETCD ensures the availability of cluster metadata, allowing Patroni to accurately track the primary node and replicas.
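      The routing described above can be sketched as an haproxy.cfg fragment. The health check relies on Patroni's /primary REST endpoint, which answers 200 only on the current leader, so HAProxy always sends writes to exactly one node. Host names and ports are assumptions for this example:

      ```
      # haproxy.cfg fragment (hypothetical host names) - route connections to the leader
      listen fep-primary
          bind *:5000
          option httpchk GET /primary
          http-check expect status 200
          default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
          server fep-node1 fep-node1:5432 check port 8008
          server fep-node2 fep-node2:5432 check port 8008
          server fep-node3 fep-node3:5432 check port 8008
      ```

      After a failover, the old leader starts failing the /primary check and the newly promoted node starts passing it, so HAProxy redirects client traffic without any application change.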

      Failover scenarios as per the diagram

      In a high availability system like this, understanding component failures is crucial. Here’s how the setup responds to various failure scenarios:

      1. Primary node (Fujitsu Enterprise Postgres node 1) failure
        • If the primary node (Fujitsu Enterprise Postgres node 1) fails, Patroni, through ETCD, detects the failure.
        • Patroni promotes one of the replicas (Fujitsu Enterprise Postgres node 2 or Fujitsu Enterprise Postgres node 3) to be the new primary.
        • HAProxy updates its routing to redirect client connections to the new primary node.
        • Once Fujitsu Enterprise Postgres node 1 is back online, it joins the cluster as a replica.
      2. Replica node (Fujitsu Enterprise Postgres node 2 or Fujitsu Enterprise Postgres node 3) failure
        • If a replica node fails, the system continues to function normally as the primary node remains available.
        • The failed replica can be repaired and re-added to the cluster without impacting availability.
      3. HAProxy node failure
        • If HAProxy1 fails, HAProxy2 takes over client connections. This redundancy ensures that connections continue to be routed to the active primary.
        • Once HAProxy1 is restored, it re-joins the HAProxy cluster, allowing load balancing to resume across both HAProxy nodes.
      4. ETCD node failure
        • The ETCD cluster consists of three nodes. If one ETCD node fails, the remaining two nodes continue to coordinate the cluster.
        • ETCD requires a majority (or quorum) to operate; hence, two out of three nodes can maintain coordination. If two ETCD nodes fail, however, Patroni cannot coordinate effectively, and failover may be impacted until the ETCD cluster regains quorum.
      5. Full ETCD cluster failure
        • If the entire ETCD cluster fails, Patroni loses the ability to elect a new leader or manage failover.
        • In such a scenario, existing connections to the primary node will continue to work, but the system cannot handle failover until the ETCD cluster is restored.
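      The quorum behavior in scenarios 4 and 5 is plain majority arithmetic: a cluster of n ETCD members needs floor(n/2) + 1 members alive, so it tolerates the loss of the rest. A quick sketch:

      ```shell
      # Quorum for an etcd cluster of n members is floor(n/2) + 1.
      for n in 1 3 5; do
        quorum=$(( n / 2 + 1 ))
        tolerated=$(( n - quorum ))
        echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
      done
      # members=1 quorum=1 tolerated_failures=0
      # members=3 quorum=2 tolerated_failures=1
      # members=5 quorum=3 tolerated_failures=2
      ```

      This is why the three-node ETCD cluster in the diagram survives one failure but not two, and why even-sized clusters add cost without adding fault tolerance.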

      HAProxy Stats Page

      The HAProxy stats page helps you monitor your Patroni clusters by showing real-time data on server health, request rates, and response times. It’s a handy tool for quickly spotting and fixing issues, ensuring your database runs smoothly and stays available.
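      The stats page is enabled with a few lines in haproxy.cfg. This is a minimal sketch; the port, refresh interval, and URI below are example values, and in production you would typically add authentication:

      ```
      # haproxy.cfg fragment - enable the built-in statistics page
      listen stats
          bind *:7000
          mode http
          stats enable
          stats uri /
          stats refresh 10s
      ```

      With this in place, browsing to port 7000 on either HAProxy node shows which backend servers are passing their Patroni health checks and which are marked down.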

      Conclusion

      In summary, a Patroni-based Fujitsu Enterprise Postgres high-availability setup provides automated failovers, efficient load balancing, and minimal downtime. Key components like HAProxy enable role-based traffic routing, ETCD manages coordination, and Patroni handles failover processes. Customizable failover scripts and a REST API further enhance integration, enabling seamless management and monitoring.

      This architecture ensures data availability and resiliency, making it ideal for mission-critical applications needing robust and scalable Fujitsu Enterprise Postgres high availability solutions.

      Topics: Fujitsu Enterprise Postgres, High Availability, Automated failover, Patroni, Load balancing, ETCD, HAProxy, Cluster management

      Nishchay Kothari
      Technical Consultant, Fujitsu Enterprise Postgres Center of Excellence
      Nishchay Kothari is an outstanding technical consultant with over 13 years of expertise in relational database management systems (RDBMS). Nishchay has experience with a wide range of database technologies, including PostgreSQL, SQL Server, and Oracle.
      Driven by his passion for solving complex technological challenges, Nishchay has positioned himself as a go-to resource for organizations seeking to optimize their database infrastructure and architecture.
