Kubernetes Deployment
Deploying GridGain Web Console with Docker on Kubernetes.
Overview
Kubernetes is an open source platform for deploying, scaling, and managing containerized applications. GridGain Web Console Docker images can be deployed to Kubernetes. For general information on using Web Console Docker images, please see Deploying Web Console with Docker.
Create a Kubernetes Project
In this guide, a Kubernetes project is a Kubernetes namespace with additional configuration properties; it isolates the GridGain Web Console from other applications and users. Additional information on Kubernetes namespaces can be found here.
- Create a new Kubernetes project:

kubectl create namespace gridgain-web-console
kubectl config set-context --current --namespace=gridgain-web-console

- Once the project is created, you can perform regular operations such as kubectl get namespace to view the full list of namespaces and kubectl describe namespace gridgain-web-console to view details of the newly created gridgain-web-console project.
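If you want to confirm the namespace is ready before continuing, a quick check (exact output depends on your cluster, but it should look roughly like this):

$ kubectl get namespace gridgain-web-console
NAME                   STATUS   AGE
gridgain-web-console   Active   10s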
Create PersistentVolumeClaim for Web Console Backend
A PersistentVolume object is a storage resource in a Kubernetes cluster. Use PersistentVolumeClaim objects to request storage resources for the gridgain-web-console
project. See the Kubernetes documentation on Persistent Volumes for additional configuration details and concepts.
- Create a PersistentVolumeClaim configuration file web-console-backend-pvc-pds.yaml. Specify a volume name (depending on your Kubernetes installation) in the file.

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "backend-pds"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
  # uncomment this property if you want to use an existing volume with a given name
  # volumeName: "pv0006"
- Create the PersistentVolumeClaim by running the following command:

kubectl apply -f web-console-backend-pvc-pds.yaml
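Optionally, verify that the claim was created; a minimal check, assuming the gridgain-web-console namespace is your current context or passed explicitly as shown:

kubectl get pvc backend-pds -n gridgain-web-console

The claim should eventually report a STATUS of Bound; with some storage classes it may stay Pending until the backend pod first mounts it.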
Create a ConfigMap for Web Console Backend
A ConfigMap is a key-value store that lives as a Kubernetes object. ConfigMaps allow you to modify application behavior without having to recreate the Docker image.
Before creating the web console backend container, you must first create the ConfigMap.
- Create the ConfigMap configuration file web-console-backend-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-console-backend-cm
data:
  ignite-config.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:util="http://www.springframework.org/schema/util"
           xsi:schemaLocation="
            http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/util
            http://www.springframework.org/schema/util/spring-util.xsd">
      <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="consistentId" value="web-console-data"/>
        <property name="metricsLogFrequency" value="0"/>
        <property name="queryThreadPoolSize" value="16"/>
        <property name="failureDetectionTimeout" value="10000"/>
        <property name="networkTimeout" value="10000"/>

        <!-- Disable all clients. -->
        <property name="connectorConfiguration"><null/></property>
        <property name="clientConnectorConfiguration"><null/></property>

        <property name="transactionConfiguration">
          <bean class="org.apache.ignite.configuration.TransactionConfiguration">
            <property name="txTimeoutOnPartitionMapExchange" value="#{60L * 1000L}"/>
          </bean>
        </property>

        <!-- Logging configuration. -->
        <property name="gridLogger">
          <bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
            <constructor-arg type="java.lang.String" value="log4j2.xml"/>
          </bean>
        </property>

        <property name="failureHandler">
          <bean class="org.apache.ignite.failure.NoOpFailureHandler"/>
        </property>

        <property name="communicationSpi">
          <bean class="org.apache.ignite.console.discovery.IsolatedCommunicationSpi"/>
        </property>

        <property name="discoverySpi">
          <bean class="org.apache.ignite.console.discovery.IsolatedDiscoverySpi"/>
        </property>

        <property name="dataStorageConfiguration">
          <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="metricsEnabled" value="true"/>
            <property name="storagePath" value="/opt/gridgain-web-console-server/work"/>
            <!--property name="walMode" value="FSYNC"/-->
            <property name="walPath" value="/opt/gridgain-web-console-server/work"/>
            <property name="walArchivePath" value="/opt/gridgain-web-console-server/work"/>
            <property name="walSegmentSize" value="#{512L * 1024 * 1024}"/>

            <!-- Enable write throttling. -->
            <property name="writeThrottlingEnabled" value="true"/>

            <property name="defaultDataRegionConfiguration">
              <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="initialSize" value="#{1024L * 1024L * 1024L}"/>
                <property name="maxSize" value="#{1024L * 1024L * 1024L}"/>
                <property name="checkpointPageBufferSize" value="#{128L * 1024L * 1024L}"/>
                <property name="metricsEnabled" value="true"/>
                <property name="persistenceEnabled" value="true"/>
              </bean>
            </property>
          </bean>
        </property>
      </bean>
    </beans>
- Apply the web console backend ConfigMap to the gridgain-web-console project using the following command:

kubectl apply -f web-console-backend-configmap.yaml
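If you want to double-check the result, you can inspect the stored configuration; a quick check:

kubectl describe configmap web-console-backend-cm -n gridgain-web-console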
Create the Web Console Backend Container
After the PersistentVolumeClaim and ConfigMap have been applied to the project, create the web console backend container. Remember to specify the correct image tag for your version of GridGain Web Console.
- Create the backend container configuration file web-console-backend-deployment.yaml:

# An example of a Kubernetes configuration for Web Console pod deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-console-backend
  name: web-console-backend
  namespace: gridgain-web-console
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: web-console-backend
  template:
    metadata:
      labels:
        app: web-console-backend
    spec:
      containers:
        - image: gridgain/gridgain-web-console-backend:2021.04.00
          imagePullPolicy: IfNotPresent
          name: web-console
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/gridgain/default-config.xml
              name: web-console-default-config
              subPath: default-config.xml
            - mountPath: /opt/gridgain-web-console-server/work
              name: web-console-persistence
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 2000
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: web-console-backend-cm
          name: web-console-default-config
        - name: web-console-persistence
          persistentVolumeClaim:
            claimName: backend-pds
- Run the following command to create the container:

kubectl apply -f web-console-backend-deployment.yaml
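To confirm the backend started cleanly, you can watch the pod and inspect its logs; a minimal check, assuming the labels and namespace from the deployment above:

kubectl get pods -n gridgain-web-console -l app=web-console-backend
kubectl logs -n gridgain-web-console deployment/web-console-backend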
Create Web Console Backend Service
Create a Kubernetes service to route network traffic to the Web Console backend. The service acts as an internal load balancer for the web console backend pods.
- Create the backend service configuration file web-console-backend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: gridgain-web-console
spec:
  ports:
    - name: backend
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: web-console-backend
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status: {}
- Run the following command to create the service:

kubectl apply -f web-console-backend-service.yaml
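If you want to verify that the service resolves to the backend pod, check the service and its endpoints; a quick sketch:

kubectl get svc backend -n gridgain-web-console
kubectl get endpoints backend -n gridgain-web-console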
Create a ConfigMap for Web Console Frontend
Similar to the steps for setting up the web console backend, create a ConfigMap for the web console frontend service.
- Create the ConfigMap configuration file web-console-frontend-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-console-frontend-cm
data:
  web-console.conf: |-
    upstream backend-endpoint {
      server backend:3000;
    }

    server {
      listen 8008;
      server_name _;

      set $ignite_console_dir /data/www;
      root $ignite_console_dir;

      error_page 500 502 503 504 /50x.html;

      location / {
        try_files $uri /index.html =404;
      }

      location /api/v1 {
        proxy_pass http://backend-endpoint;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header X-XSRF-TOKEN;
      }

      location /agents {
        proxy_pass http://backend-endpoint;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Use this setting if you plan to run Web Agent in a container
        proxy_set_header Origin http://backend-endpoint;
        # Use this setting if you plan to run Web Agent as a standalone application
        # proxy_set_header Origin $scheme://$http_host;
      }

      location /browsers {
        proxy_pass http://backend-endpoint;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Origin http://backend-endpoint;
        proxy_pass_header X-XSRF-TOKEN;
      }

      location = /50x.html {
        root $ignite_console_dir/error_page;
      }
    }
- Run the following command to create the ConfigMap for the web console frontend:

kubectl apply -f web-console-frontend-configmap.yaml
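As with the backend, you can review the rendered nginx configuration before deploying the container; one way to do it:

kubectl get configmap web-console-frontend-cm -n gridgain-web-console -o yaml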
Create the Web Console Frontend Container
After the ConfigMap has been applied to the project, create the web console frontend container. Remember to specify the correct image tag for your version of GridGain Web Console.
- Create the container configuration file web-console-frontend-deployment.yaml:

# An example of a Kubernetes configuration for Web Console deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-console-frontend
  namespace: gridgain-web-console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-console-frontend
  template:
    metadata:
      labels:
        app: web-console-frontend
    spec:
      containers:
        # Custom Web Console pod name.
        - name: web-console
          image: gridgain/gridgain-web-console-frontend:2021.04.00
          volumeMounts:
            - mountPath: /etc/nginx/web-console.conf
              name: web-console-config
              subPath: web-console.conf
          ports:
            # Ports to open.
            # Might be optional depending on your Kubernetes environment.
            - containerPort: 8008
      volumes:
        - configMap:
            defaultMode: 420
            name: web-console-frontend-cm
          name: web-console-config
- Run the following command to create the container:

kubectl apply -f web-console-frontend-deployment.yaml
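A quick way to confirm the frontend pod is up and that nginx started with the mounted configuration:

kubectl get pods -n gridgain-web-console -l app=web-console-frontend
kubectl logs -n gridgain-web-console deployment/web-console-frontend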
Create Web Console Frontend Service
Create a Kubernetes service to route network traffic to the Web Console frontend service.
- Create the service configuration file web-console-frontend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName
  name: web-console-frontend
  # The namespace must be equal to TcpDiscoveryKubernetesIpFinder.namespaceName
  namespace: gridgain-web-console
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8008
  sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for Web Console Frontend pods.
    app: web-console-frontend
- Run the following command to create the service, which will act as a LoadBalancer for the web console frontend:

kubectl apply -f web-console-frontend-service.yaml
Expose the Web Console Frontend Service and Get Routes to the Application
- Get the deployed Web Console frontend LoadBalancer service IP using the following command:

$ kubectl get svc web-console-frontend
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
web-console-frontend   LoadBalancer   10.0.247.81   52.149.203.159   80:30925/TCP   17s
- Finally, access the Web Console frontend UI using one of the returned external IPs. The URL could look like the following:

http://52.149.203.159
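If you prefer a command-line smoke test before opening a browser, a curl against your external IP (the one above is only an example) should return the Web Console landing page:

curl -I http://52.149.203.159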
Configure and Launch Web Agent
Web Agent is an application that connects a GridGain or Ignite cluster to the Web Console. For more information about Web Agent, please see the Getting Started Guide. When working with Web Agent in Kubernetes, you can either run Web Agent as a standalone application or in a container. Do not mix container-based and standalone Web Agents against a single Web Console. If you choose to run Web Agent as a standalone application, you will also need to modify web-console-frontend-configmap.yaml with the correct /agents configuration as described above. The following instructions describe setting up Web Agent as a container in Kubernetes.
When launching Web Agent in a container, you need to provide three configuration values in order to connect successfully (example values are shown after the list below):
- NODE_URI: The internal service name of the GridGain/Ignite node or cluster.
- SERVER_URI: The internal service name of the Web Console frontend container. In the above examples, the service name would be http://web-console-frontend.
- TOKENS: A security token generated from the User Profile page of Web Console. In order to generate a security token, Web Console must be running.
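For reference only, the env section of the deployment below might look like this once filled in; the ignite-cluster service name, the default REST port 8080, and the token value are hypothetical placeholders for your own values:

env:
  - name: NODE_URI
    # Hypothetical service name of the GridGain/Ignite cluster; assumes the REST port is the default 8080.
    value: "http://ignite-cluster:8080"
  - name: SERVER_URI
    # Service name of the Web Console frontend created above.
    value: "http://web-console-frontend"
  - name: TOKENS
    # Hypothetical token copied from the Web Console User Profile page.
    value: "12345678-90ab-cdef-1234-567890abcdef"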
- Create the deployment configuration file web-agent-deployment.yaml for the Web Agent:

# An example of a Kubernetes configuration for Web Agent deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  # Custom cluster name.
  name: web-agent
spec:
  # A number of pods to be started by Kubernetes.
  replicas: 1
  selector:
    matchLabels:
      app: web-agent
  template:
    metadata:
      labels:
        app: web-agent
    spec:
      containers:
        # Custom pod name.
        - name: web-console
          image: gridgain/gridgain-web-agent:2021.04.00
          env:
            - name: NODE_URI
              # Replace "NODE_URI" value with service name for GridGain/Ignite cluster
              value: "NODE_URI"
            - name: SERVER_URI
              # Replace "SERVER_URI" value with Web Console service name
              value: "SERVER_URI"
            - name: TOKENS
              # Replace "TOKENS" value with Security Token provided by Web Console
              value: "TOKENS"
- Launch Web Agent as a container in Kubernetes:

kubectl apply -f web-agent-deployment.yaml
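Once the pod starts, the agent log should show it connecting to the Web Console backend and to the cluster; a quick check, assuming the deployment was created in the gridgain-web-console namespace (the current context set earlier):

kubectl logs -n gridgain-web-console deployment/web-agent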