      1. Background

FUJITSU Software Enterprise Postgres for Kubernetes provides auto-scale-out capabilities. To scale out automatically based on the number of connections, you must configure a custom metrics server that takes the metrics collected in Prometheus and exposes them through a custom metrics API.

      The Prometheus adapter is a service that exposes metrics stored in Prometheus in the form of a metrics API. This guide describes how to set up the Prometheus adapter, an implementation of a custom metrics API.

      2. Required components

2.1. For OpenShift Container Platform (OCP)

      The following conditions must be met to set up the Prometheus Adapter in this procedure:

• Monitoring for user-defined projects must be enabled in the OCP cluster

In addition, a FEPCluster must be deployed in order to perform the verification steps in this procedure:

• The FEP cluster monitoring feature is enabled (via the enableMonitoring flag in the FEPCluster CR; see the sketch after this list)
• Metrics for the number of connections are collected
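
For reference, here is a minimal sketch of a FEPCluster CR with monitoring enabled. The apiVersion and kind match the objects shown later in this guide, but the exact location of the enableMonitoring flag is an assumption; verify it against the FEPCluster CRD reference for your FEP Operator version.

apiVersion: fep.fujitsu.io/v2
kind: FEPCluster
metadata:
  name: t3-fep
  namespace: prometheus-0215
spec:
  fep:
    monitoring:
      enableMonitoring: true # assumed field path; verify against your FEPCluster CRD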

      This procedure was validated for the following:

      • OpenShift Container Platform: 4.8.17
      • Prometheus Adapter: v0.9.1

2.2. For Rancher Kubernetes Engine (RKE)

      This procedure assumes that you have installed the Prometheus environment using the Rancher Monitoring Chart, which installs the Prometheus Adapter by default.

      This procedure was validated for the following:

• RKE Kubernetes version: v1.21.7
      • Rancher Monitoring Chart: 100.1.0+up19.0.3

2.3. For other Kubernetes environments

      The following conditions must be met to set up the Prometheus Adapter in this procedure:

      • The Prometheus Operator is set up correctly to allow FEP Operator monitoring/alerting to work
      • The "helm" command is available to install Helm Chart

In addition, a FEPCluster must be deployed in order to perform the verification steps in this procedure:

• The FEP cluster monitoring feature is enabled (via the enableMonitoring flag in the FEPCluster CR)
      • Metrics for the number of connections are collected

      This procedure was validated for the following:

      • Kubernetes environment: Amazon Elastic Kubernetes Service
      • Kubernetes version: v1.21.5-eks-bc4871b
• Prometheus Adapter: v0.9.1

      3. Deploying Prometheus Adapter

3.1. For OpenShift Container Platform (OCP)

      3.1.1. Confirming Prometheus operation

Before you begin setting up the Prometheus adapter, make sure you can get the metrics for the FEP server from Prometheus. To get a token to access Prometheus, create a ServiceAccount in a user-defined project (in this case, custom-prometheus) and get its credentials.

      Example:

      $ oc -n custom-prometheus create sa ocp-prometheus
      $ oc -n custom-prometheus adm policy add-cluster-role-to-user cluster-monitoring-view -z ocp-prometheus
      $ TOKEN=`oc -n custom-prometheus serviceaccounts get-token ocp-prometheus`
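
Note that oc serviceaccounts get-token was removed in later OCP releases; on OCP 4.11 and later the equivalent would be:

$ TOKEN=$(oc -n custom-prometheus create token ocp-prometheus)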

Verify that metrics can be retrieved using the Prometheus HTTP API. For OCP, the destination is the URL that appears in the host of the route resource named thanos-querier in the openshift-monitoring namespace (in this example, https://thanos-querier-openshift-monitoring.apps.your.host.name/).

      The following example gets the number of connections for the FEP cluster t3-fep in the user-defined project prometheus-0215 from Prometheus.

      Example:

      $ curl -X GET -kG "https://thanos-querier-openshift-monitoring.apps.your.host.name/api/v1/query?" 
        --data-urlencode "query=pg_capacity_connection_total{namespace='prometheus-0215', fepcluster='t3-fep'}" 
        -H "Authorization: Bearer $TOKEN" | jq
       {
        "status": "success",
        "data": {
          "resultType": "vector",
          "result": [
            {
              "metric": {
                "__name__": "pg_capacity_connection_total",
                "container": "prometheus-fep-exporter",
                "endpoint": "t3-fep-new-fep-exporter-http",
                "fepcluster": "t3-fep",
                "instance": "10.131.0.195:9187",
                "job": "t3-fep-new-fep-exporter-service",
                "namespace": "prometheus-0215",
                "pod": "t3-fep-new-fep-exporter-deployment-57864b6997-jcwb4",
                "prometheus": "openshift-user-workload-monitoring/user-workload",
                "server": "t3-fep-sts-0.t3-fep-headless-svc:27500",
                "service": "t3-fep-new-fep-exporter-service"
              },
              "value": [
                1645164554.729,
                "5"
              ]
            },
            {
              "metric": {
                "__name__": "pg_capacity_connection_total",
                "container": "prometheus-fep-exporter",
                "endpoint": "t3-fep-new-fep-exporter-http",
                "fepcluster": "t3-fep",
                "instance": "10.131.0.195:9187",
                "job": "t3-fep-new-fep-exporter-service",
                "namespace": "prometheus-0215",
                "pod": "t3-fep-new-fep-exporter-deployment-57864b6997-jcwb4",
                "prometheus": "openshift-user-workload-monitoring/user-workload",
                "server": "t3-fep-sts-1.t3-fep-headless-svc:27500",
                "service": "t3-fep-new-fep-exporter-service"
              },
              "value": [
                1645164554.729,
                "3"
              ]
            },
            {
              "metric": {
                "__name__": "pg_capacity_connection_total",
                "container": "prometheus-fep-exporter",
                "endpoint": "t3-fep-new-fep-exporter-http",
                "fepcluster": "t3-fep",
                "instance": "10.131.0.195:9187",
                "job": "t3-fep-new-fep-exporter-service",
                "namespace": "prometheus-0215",
                "pod": "t3-fep-new-fep-exporter-deployment-57864b6997-jcwb4",
                "prometheus": "openshift-user-workload-monitoring/user-workload",
                "server": "t3-fep-sts-2.t3-fep-headless-svc:27500",
                "service": "t3-fep-new-fep-exporter-service"
              },
              "value": [
                1645164554.729,
                "3"
              ]
            }
          ]
        }
      }
      

      3.1.2. Creating a configuration file

Create a configuration file in YAML format. This example creates a file named deploy.yaml that concatenates the contents of the following steps.

In the examples below, the markers in the example files have the following meanings:

      A Specifies the name of the project where the Prometheus adapter will be deployed.

      B Specifies the name of the metric that represents the number of connections stored in Prometheus. Use this metric to calculate the average number of connections.

C Specifies the name of the metric exposed by the metric API. This is the name you specify in the metricName parameter when using the automatic scale-out feature (a hypothetical usage fragment follows this list).

      D Specifies the local address in the cluster of the Prometheus service to which you want to connect.

E Specifies the official image of prometheus-adapter. Reference: https://github.com/kubernetes-sigs/prometheus-adapter.

      F For OCP 4.9 and later, use v1.custom.metrics.k8s.io.
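
For orientation, the metric name from marker C is what the auto-scale-out feature consumes. Below is a purely hypothetical fragment of how it might be referenced in the FEPCluster CR; the field names are illustrative only, so consult the FEP Operator reference for the actual schema.

# Hypothetical fragment - actual field names may differ
spec:
  fep:
    autoScaleOut:
      metricName: pg_capacity_connection_average # the name defined by marker C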

      • Configuring a service account and RBAC for the Prometheus adapter

      Add the configuration details to create the service account, required roles, and role bindings for the Prometheus adapter.

kind: ServiceAccount
apiVersion: v1
metadata:
  name: custom-metrics-apiserver
  namespace: custom-prometheus # A: project where the Prometheus adapter will be deployed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-server-resources
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-prometheus # A
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-prometheus # A
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-prometheus # A
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
---
      • Setting custom metrics

      Add custom metric details exposed by the Prometheus adapter.

apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: custom-prometheus # A
data:
  config.yaml: |
    rules:
    - seriesQuery: 'pg_capacity_connection_total{namespace!="", fepcluster!=""}' # B: metric with the number of connections
      resources:
        overrides:
          namespace: {resource: "namespace"}
          fepcluster: {group: "fep.fujitsu.io", resource: "fepclusters"}
      name:
        matches: "pg_capacity_connection_total" # B
        as: "pg_capacity_connection_average" # C: metric name exposed by the custom metrics API
      metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
---
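
For clarity, <<.Series>>, <<.LabelMatchers>>, and <<.GroupBy>> in metricsQuery are template variables that the adapter fills in from each API request. For a request scoped to the FEP cluster t3-fep in the project prometheus-0215, the template above would expand to roughly the following PromQL:

avg(pg_capacity_connection_total{namespace="prometheus-0215",fepcluster="t3-fep"}) by (fepcluster)
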
      • Configuring connection to Prometheus

      Add the Prometheus information that the Prometheus adapter will connect to.

kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-adapter-prometheus-config
  namespace: custom-prometheus # A
data:
  prometheus-config.yaml: |
    apiVersion: v1
    clusters:
    - cluster:
        server: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 # D: in-cluster address of the Prometheus service
        insecure-skip-tls-verify: true
      name: prometheus-k8s
    contexts:
    - context:
        cluster: prometheus-k8s
        user: prometheus-k8s
      name: prometheus-k8s
    current-context: prometheus-k8s
    kind: Config
    preferences: {}
    users:
    - name: prometheus-k8s
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
---
      • Configuring deployment

      Add configuration details for deploying the Prometheus adapter.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-adapter
  name: prometheus-adapter
  namespace: custom-prometheus # A
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-adapter
  template:
    metadata:
      labels:
        app: prometheus-adapter
      name: prometheus-adapter
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: prometheus-adapter
        image: k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1 # E: official prometheus-adapter image
        args:
        - --prometheus-auth-config=/etc/prometheus-config/prometheus-config.yaml
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/tls.crt
        - --tls-private-key-file=/var/run/serving-cert/tls.key
        - --logtostderr=true
        - --prometheus-url=https://thanos-querier.openshift-monitoring.svc.cluster.local:9091/ # D
        - --metrics-relist-interval=1m
        - --v=4
        - --config=/etc/adapter/config.yaml
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
        - mountPath: /etc/adapter/
          name: config
          readOnly: true
        - mountPath: /etc/prometheus-config
          name: prometheus-adapter-prometheus-config
        - mountPath: /tmp
          name: tmp-vol
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: prometheus-adapter-tls
      - name: config
        configMap:
          name: adapter-config
      - name: prometheus-adapter-prometheus-config
        configMap:
          name: prometheus-adapter-prometheus-config
          defaultMode: 420
      - name: tmp-vol
        emptyDir: {}
• Configuring service and API registration

Add configuration to create the Service for the Prometheus adapter and register it as an API service that implements the custom metrics API.

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.openshift.io/serving-cert-secret-name: prometheus-adapter-tls
        labels:
          name: prometheus-adapter
        name: prometheus-adapter
  namespace: custom-prometheus # A
spec:
  ports:
  - name: https
    port: 443
    targetPort: 6443
  selector:
    app: prometheus-adapter
  type: ClusterIP
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io # F: for OCP 4.9 and later, use v1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: custom-prometheus # A
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

      3.1.3. Deploying the Prometheus adapter

      Deploy the Prometheus adapter by applying settings to the cluster.

      Example:

      $ oc apply -f deploy.yaml
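
At this point you can also check that the API service was registered and is reporting as available (sample output abbreviated; column layout may vary by OCP version):

$ oc get apiservice v1beta1.custom.metrics.k8s.io
NAME                            SERVICE                                AVAILABLE   AGE
v1beta1.custom.metrics.k8s.io   custom-prometheus/prometheus-adapter   True        1m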

      3.1.4. Verifying that the Prometheus adapter is working

      • Confirm that the pods for the deployed Prometheus adapter are in the Running state. In this example, the deployed project is custom-prometheus.
      $ oc -n custom-prometheus get pods prometheus-adapter-<string>
      NAME                                  READY   STATUS    RESTARTS   AGE
      prometheus-adapter-58b5dc5495-mbcld   1/1     Running   0         10d
      • Verify that the metrics you have configured the Prometheus adapter to publish are available through the custom metrics API. The example below gets the average number of connections for the FEP cluster t3-fep in the project prometheus-0215.

      Example:

      $ oc get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/prometheus-0215/fepclusters.fep.fujitsu.io/t3-fep/pg_capacity_connection_average | jq
      {
        "kind": "MetricValueList",
        "apiVersion": "cust
      om.metrics.k8s.io/v1beta1",
        "metadata": {
          "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/ prometheus-0215/fepclusters.fep.fujitsu.io/t3-fep/pg_capacity_connection_average"
        },
        "items": [
          {
            "describedObject": {
              "kind": "FEPCluster",
              "namespace": "prometheus-0215",
              "name": "t3-fep",
              "apiVersion": "fep.fujitsu.io/v2"
            },
            "metricName": "pg_capacity_connection_average",
            "timestamp": "2022-02-18T06:23:43Z",
            "value": "3666m",
            "selector": null
          }
        ]
      }
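
The value 3666m is a Kubernetes quantity in milli-units: the average of the three series shown earlier (5, 3, and 3 connections) is 3.666…, reported as 3666m. If the metric is not found, you can list every metric the adapter currently exposes (output abbreviated):

$ oc get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq '.resources[].name'
"fepclusters.fep.fujitsu.io/pg_capacity_connection_average"
…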

3.2. For Rancher Kubernetes Engine (RKE)

      3.2.1. Confirming Rancher Monitoring Chart operation

• When you install the Rancher Monitoring Chart, the Prometheus adapter is installed by default in addition to Prometheus. Verify that the pods are running in the namespace where the chart was installed. In this case, the namespace cattle-monitoring-system has both Prometheus and the Prometheus adapter installed.

      Example:

      $ kubectl get pod -n cattle-monitoring-system | grep prometheus-adapter
      rancher-monitoring-prometheus-adapter-8846d4757-9l84s     1/1     Running       0          2d2h
      • Check the name of the ConfigMap for the Prometheus Adapter in the namespace where you installed the Rancher Monitoring Chart.

      Example:

      $ kubectl get cm -n cattle-monitoring-system | grep prometheus-adapter
      rancher-monitoring-prometheus-adapter                  1      2d2h

3.2.2. Setting custom metrics

      Modify the ConfigMap for the Prometheus adapter to add custom metric definitions.

Add the definition by, for example, exporting the contents of the existing ConfigMap in YAML format and editing it: add the custom metric definitions to the rules entry under data.config.yaml in the ConfigMap.

      Example:

      $ kubectl get cm -n cattle-monitoring-system rancher-monitoring-prometheus-adapter -o yaml > prometheus-adapter.yaml
      $ vi prometheus-adapter.yaml

      Example prometheus-adapter.yaml:

      …
      data:
        config.yaml: |
          rules:
      …
      ### Add from here
    - seriesQuery: 'pg_capacity_connection_total{namespace!="", fepcluster!=""}' # A: metric with the number of connections
      resources:
        overrides:
          namespace: {resource: "namespace"}
          fepcluster: {group: "fep.fujitsu.io", resource: "fepclusters"}
      name:
        matches: "pg_capacity_connection_total" # A
        as: "pg_capacity_connection_average" # B: metric name exposed by the custom metrics API
      metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
### Add up to here
…

      A Specifies the name of the metric that represents the number of connections stored in Prometheus. Use this metric to calculate the average number of connections.

      B Specifies the name of the metric exposed by the metric API. This is the name you specify in the metricName parameter when using the automatic scale-out feature.

3.2.3. Applying the added custom metric definition

      • Apply the YAML file created in the previous section.

      Example:

      $ kubectl apply -f prometheus-adapter.yaml
      configmap/rancher-monitoring-prometheus-adapter configured
• After updating the ConfigMap, restart the Prometheus adapter pod for the configuration to take effect.
  If you delete the pod that you identified in the first step, the ReplicaSet recreates a new pod that reflects the new rules (an alternative restart command is shown after the example).

      Example:

      $ kubectl delete pod -n cattle-monitoring-system rancher-monitoring-prometheus-adapter-8846d4757-9l84s
      pod "rancher-monitoring-prometheus-adapter-8846d4757-9l84s" deleted
      • Confirm that the new pod is running.

      Example:

      $ kubectl get pod -n cattle-monitoring-system | grep prometheus-adapter
      rancher-monitoring-prometheus-adapter-8846d4757-cjj8v     0/1     Running       0          40s
      rancher-monitoring-prometheus-adapter-8846d4757-qf4q8     1/1     Terminating   0          3d
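
Once the new pod is ready, you can verify the metric through the custom metrics API, as in the other environments. A sketch, assuming a FEP cluster named my-fep in the namespace my-namespace (substitute your own names):

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/my-namespace/fepclusters.fep.fujitsu.io/my-fep/pg_capacity_connection_average | jq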

3.3. For other Kubernetes environments

      3.3.1. Confirming Prometheus operation

Check the service name and namespace of the Prometheus instance that is already installed. In this example, the namespace is monitoring and the service name is prometheus-stack-kube-prom-prometheus.

Next, check that metrics can be retrieved with the Prometheus HTTP API. We use kubectl port forwarding to get the number of connections for the FEP cluster in the namespace test-monitoring from Prometheus. The port forwarding destination depends on the namespace where Prometheus is installed and the Prometheus service name. Use an available local port number.

      In terminal 1:

      $ kubectl port-forward service/prometheus-stack-kube-prom-prometheus -n monitoring 9090
      Forwarding from [::1]:9090 -> 9090

      In terminal 2:

      $ curl -X GET -kG "http:/localhost:9090/api/v1/query?" 
      --data-urlencode "query=pg_capacity_connection_total{namespace='test-monitoring', fepcluster!=''}" | jq % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 496 100 496 0 0 23619 0 --:--:-- --:--:-- --:--:-- 23619 { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "pg_capacity_connection_total", "container": "prometheus-fep-exporter", "endpoint": "new-fep-fepexporter-http", "fepcluster": "new-fep", "instance": "10.0.14.69:9187", "job": "new-fep-fepexporter-service", "namespace": "test-monitoring", "pod": "new-fep-fepexporter-deployment-565766d477-k6kgq", "server": "new-fep-sts-0.new-fep-headless-svc:27500", "service": "new-fep-fepexporter-service" }, "value": [ 1647239826.127, "3" ] } ] } } # End Port Forwarding with Ctrl-C in Terminal #1

      3.3.2. Preparing to deploy the Prometheus adapter

      • Adding the Helm Repository for the Prometheus Adapter

      This example registers the Helm repository published by the Prometheus Community as prometheus-community.

      $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      "Prometheus-community" has been added to your repositories
      $ helm repo update
      Hang tight while we grab the latest from your chart repositories...
      ...Successfully got an update from the "metrics-server" chart repository
      ...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈

      Note that information from other Helm repositories already registered will also be displayed.

      • Creating a YAML file for configuration of the Prometheus adapter

      Define the Prometheus information to which the Prometheus adapter will connect and the custom metric settings in YAML format, as follows.

prometheus:
  url: http://prometheus-stack-kube-prom-prometheus.monitoring.svc.cluster.local # A: in-cluster address of the Prometheus service
  port: 9090
  path: ""
rules:
  custom:
  - seriesQuery: 'pg_capacity_connection_total{namespace!="", fepcluster!=""}' # B: metric with the number of connections
    resources:
      overrides:
        namespace: {resource: "namespace"}
        fepcluster: {group: "fep.fujitsu.io", resource: "fepclusters"}
    name:
      matches: "pg_capacity_connection_total" # B
      as: "pg_capacity_connection_average" # C: metric name exposed by the custom metrics API
    metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'

      A Specifies the local address in the cluster of the Prometheus service to which you want to connect. In this example, the URL is for the namespace monitoring and the service name prometheus-stack-kube-prom-prometheus.

      B Specifies the name of the metric that represents the number of connections stored in Prometheus. Use this metric to calculate the average number of connections.

      C Specifies the name of the metric exposed by the metric API. This is the name you specify in the metricName parameter when using the automatic scale-out feature.

      Use the following command to check other parameters that can be changed:

      $ helm show values prometheus-community/prometheus-adapter
      …

      3.3.3. Deploying the Prometheus adapter

      Execute the following command to deploy the Prometheus Adapter. In this example, the YAML file you created in the previous section is named deploy.yaml, the release name of the Prometheus adapter you want to install is prom-adapter-custom, and the target namespace is monitoring.

      Example:

$ helm install -f deploy.yaml prom-adapter-custom prometheus-community/prometheus-adapter -n monitoring
      NAME: prom-adapter-custom
      LAST DEPLOYED: Thu Mar 10 05:15:50 2022
      NAMESPACE: monitoring
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      prom-adapter-custom-prometheus-adapter has been deployed.
      In a few minutes you should be able to list metrics using the following command(s):
      
        kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
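
If you later adjust deploy.yaml (for example, to change the metric rules), the same values file can be re-applied to the existing release with helm upgrade:

$ helm upgrade -f deploy.yaml prom-adapter-custom prometheus-community/prometheus-adapter -n monitoring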

3.3.4. Verifying that the Prometheus adapter is working

      Confirm that the pods for the deployed Prometheus adapter are in the Running state.

      Example:

      $ kubectl get pod -n monitoring | grep prom-adapter-custom
      prom-adapter-custom-prometheus-adapter-7fdb545bc-4jdp9   1/1     Running   0          3d23h

      Verify that the metrics you have configured the Prometheus adapter to publish are available through the custom metrics API. The example below gets the average number of connections for the FEP cluster new-fep in the namespace test-monitoring.

      Example:

      $ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/test-monitoring/fepclusters.fep.fujitsu.io/new-fep/pg_capacity_connection_average | jq
      {
        "kind": "MetricValueList",
        "apiVersion": "custom.metrics.k8s.io/v1beta1",
        "metadata": {
          "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test-monitoring/fepclusters.fep.fujitsu.io/new-fep/pg_capacity_connection_average"
        },
        "items": [
          {
            "describedObject": {
              "kind": "FEPCluster",
              "namespace": "test-monitoring",
              "name": "new-fep",
              "apiVersion": "fep.fujitsu.io/v2"
            },
            "metricName": "pg_capacity_connection_average",
            "timestamp": "2022-03-14T07:52:50Z",
            "value": "3",
            "selector": null
          }
        ]
      }
      

       

