
      Background

      The default Grafana instance installed on an OpenShift platform is read-only and cannot be used to create custom dashboards. Hence, users have to install a separate Grafana instance to build customized dashboards that display and analyze their application metrics.

      On amd64-based OpenShift, a community operator is available on OperatorHub, provided without a support guarantee. The same operator is not available on OperatorHub for s390x-based OpenShift Container Platform (OCP).

      FUJITSU Enterprise Postgres (FEP) runs on OCP on both amd64 and s390x. FEPExporter exports data from FEPCluster to Prometheus. We needed a solution that works for both platforms: using Prometheus as the data source to display statistics on Grafana dashboards.

      The procedure below shows how to deploy Grafana on Red Hat OpenShift Container Platform using a Helm chart. It includes configuring the connection from Grafana to the OpenShift platform's Prometheus data source.

      Required components

      To install Grafana using this procedure, the following are required:

      • An OpenShift Container Platform (OCP) cluster, amd64- or s390x-based
      • A bastion server matching the cluster architecture, with the oc client installed
      • Helm 3 on the bastion server (installation covered below)
      • The Grafana Helm chart and its values.yaml
      • The OpenShift monitoring stack (Prometheus / Thanos Querier) as the data source

      Note: This procedure provides guidelines and tips for the installation process, and is not intended to replace product manuals. For updates and changes, please refer to the product manuals.

      Setup

      You will need a bastion server that is able to communicate with your OpenShift environment. If your OCP is on s390x, you must have an s390x-based bastion server as well. Similarly, for amd64-based OCP, you will need an amd64-based bastion server.

      All commands will be run on the bastion server to install the required components on OCP.

      A similar process can be followed on any other platform.
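
      Before starting, you can confirm that the bastion server can reach the cluster with the oc client. A minimal check (the API URL and user below are placeholders for your own environment) might look like this:

        oc login https://api.<cluster-domain>:6443 -u <admin-user>
        oc version
        oc get nodes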

      Helm 3 installation

      Skip this part if Helm 3 is already installed on your bastion server.

      • Download the helm binary and add it to the system executable path (the command below is for an s390x bastion; an amd64 variant is noted after this list).
        $ curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm
      • Mark the downloaded binary as executable.
        $ chmod +x /usr/local/bin/helm
      • Check the helm utility version.
        $ helm version
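
      The download URL above is for an s390x bastion server. For an amd64 bastion, the same OpenShift mirror also publishes an amd64 build of helm; assuming the mirror's usual naming convention, the equivalent steps would be:

        $ curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm
        $ chmod +x /usr/local/bin/helm
        $ helm version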

      Prepare values.yaml for helm chart

      Obtain a copy of values.yaml from https://github.com/grafana/helm-charts/tree/main/charts/grafana and modify the following parts of the file.
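
      The install step later in this article references the chart as grafana/grafana, which assumes the Grafana chart repository has been added to Helm under the name grafana. With the repository added, you can also extract the chart's default values.yaml locally instead of copying it from GitHub:

        helm repo add grafana https://grafana.github.io/helm-charts
        helm repo update
        helm show values grafana/grafana > values.yaml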

      • Grafana image to use:
        • For s390x
          repository: ibmcom/grafana-s390x
          tag: 7.5.7
        • For amd64
          repository: grafana/grafana
          tag: 7.5.11
      • Username/password for the Grafana admin login
        • When not using a secret for the login details.
          adminUser: admin
          adminPassword: admin-pwd
        • When using a secret that has keys defined for the username and the password.
          In this example, the keys are admin-user and admin-pwd, respectively. Comment out adminUser and adminPassword, and define the following under the admin section (a sketch for creating such a secret follows this list):
          admin:
            existingSecret: name-of-secret
            userKey: admin-user
            passwordKey: admin-pwd
      • Prometheus data source.
        We need to find the service URL for Prometheus, create a project (namespace) for the Grafana installation, create a service account for it, grant that service account the view cluster role, and obtain a token to connect with. In the commands below, replace <my-grafana-ns> and <my-grafana-sa> with your namespace and service account names, respectively.
        • Use the URL of the service whose name starts with thanos in the openshift-monitoring project (namespace), together with the web port defined for that service.
          In our case it is https://thanos-querier.openshift-monitoring.svc.cluster.local:9091.
        • Create namespace for Grafana.
          oc create namespace <my-grafana-ns>
        • Create service account for this Grafana instance.
          oc -n <my-grafana-ns> create sa <my-grafana-sa>
        • Assign the view cluster role to the Grafana service account.
          oc -n <my-grafana-ns> adm policy add-cluster-role-to-user view -z <my-grafana-sa>
        • Obtain a token.
          oc -n <my-grafana-ns> serviceaccounts get-token <my-grafana-sa>
        • Update the datasources section in values.yaml with the obtained token.
          datasources:
            datasources.yaml:
              apiVersion: 1
              datasources:
              - name: Prometheus-ds
                type: prometheus
                url: "https://thanos-querier.openshift-monitoring.svc.cluster.local:9091"
                basicAuth: false
                withCredentials: false
                isDefault: true
                version: 1
                editable: true
                jsonData:
                  tlsSkipVerify: true
                  timeInterval: "5s"
                  httpHeaderName1: "Authorization"
                secureJsonData:
                  httpHeaderValue1: "Bearer <value-of-obtained-token>"
      • securityContext (applies to both s390x- and amd64-based OpenShift environments).
        • On the bastion server, run the following command to retrieve the relevant annotations from the namespace where Grafana will be installed.
           oc get ns <my-grafana-ns> -o yaml | grep " annotations" -A6
            annotations:
              openshift.io/description: Here grafana is installed using helm
              openshift.io/display-name: Grafana Using Helm
              openshift.io/requester: kube:admin
              openshift.io/sa.scc.mcs: s0:c30,c0
              openshift.io/sa.scc.supplemental-groups: 1000870000/10000
              openshift.io/sa.scc.uid-range: 1000870000/10000
        • Update the securityContext section with the value that appears before the slash in the sa.scc.supplemental-groups and sa.scc.uid-range annotations shown above (1000870000 in this example).
          securityContext:
            runAsUser: 1000870000
            runAsGroup: 1000870000
            fsGroup: 1000870000
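
      If you opted for the secret-based admin login described above, the secret must exist in the Grafana namespace with the key names referenced in values.yaml before installing the chart. A minimal sketch, using the example secret name name-of-secret and the keys admin-user and admin-pwd:

        oc -n <my-grafana-ns> create secret generic name-of-secret \
          --from-literal=admin-user=admin \
          --from-literal=admin-pwd=<choose-a-password>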

      Install Grafana

      After updating values.yaml, follow the steps below on the bastion server to install and check Grafana in the namespace.
      The updated values.yaml must be stored in the current directory:

      • Install (or upgrade) Grafana with Helm, using the grafana repository added earlier.
        helm upgrade --install -n <my-grafana-ns> my-grafana grafana/grafana -f values.yaml
      • Once up and running, you should have Grafana pods in <my-grafana-ns>.
        oc get pods -n <my-grafana-ns>
        NAME                          READY   STATUS    RESTARTS   AGE
        my-grafana-556bf6f7b4-9vztq   1/1     Running   0          24h

      Route for Grafana

      Users can create a route entry for the Grafana link from the OpenShift console. You may have to add an entry to your hosts file pointing to the OpenShift console IP for this URL to reach the route.

      (Screenshot: creating the route for Grafana in the OpenShift console)
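
      If you prefer the command line to the console, a route can typically be created by exposing the Grafana service. The service name my-grafana below assumes the Helm release name used in the install step; check the actual name with oc get svc if unsure:

        oc -n <my-grafana-ns> get svc
        oc -n <my-grafana-ns> expose service my-grafana
        oc -n <my-grafana-ns> get route my-grafana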

      Using the URL specified in the route, you can open the Grafana login screen. Log in with the username and password you specified in values.yaml. On the initial login, you will be prompted to change the admin password.

      Once logged in, click the gear (Configuration) icon on the left side of the Grafana screen. On this page, you will see the data source you specified in values.yaml. Click the Save & Test button at the bottom of the screen to confirm that the data source is working.

      (Screenshot: the Prometheus data source configured in Grafana)

      Now you can import or create your dashboard in this Grafana. You can download a sample copy from here.

       

