Quickstart with AKS

This quickstart guide shows how to deploy TigerGraph single servers and clusters in Kubernetes on Azure Kubernetes Service (AKS).

1. Before you begin

  • Provision Kubernetes cluster on AKS with nodes that meet the hardware and software requirements to run TigerGraph.

  • Install kubectl on your machine, and make sure your local kubectl version is within one minor version of the kubectl version on your cluster.

  • Configure kubectl for AKS cluster access.

  • Ensure you have the following permissions in your Kubernetes context:

    • Create and delete Pods, Services, StatefulSets, and ConfigMaps

    • Create and delete Jobs, CronJobs

    • Create and delete Service Accounts, Roles, and Role Bindings

Each of the commands below uses kubectl in the default namespace, default. If you deployed your cluster in a different namespace, you must explicitly provide that namespace with each command.
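For example, if your cluster lives in a namespace named tigergraph-ns (a hypothetical name used here only for illustration), each kubectl command would carry the -n flag:

$ kubectl get pods -n tigergraph-ns
$ kubectl get services -n tigergraph-ns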

2. Single-server deployment

This section describes the steps to deploy, verify, and remove a single-server deployment of TigerGraph on AKS.

2.1. Deploy single server

2.1.1. Generate deployment manifest

Clone the TigerGraph ecosys repository and change into the k8s directory:

$ git clone https://github.com/tigergraph/ecosys.git
$ cd ecosys/k8s

You can edit the kustomization.yaml file in the aks folder to change the namespace and image name for your deployment. The default namespace is default. If no changes are needed, you can leave the files as they are.

Next, run the ./tg script in the k8s directory to generate the deployment manifest for a single-server deployment. You can use the --prefix option to specify a prefix for your pods; the default prefix is tigergraph. A deploy directory is created automatically, and the manifest, named tigergraph-aks-default.yaml, is written to the <namespace>-aks subdirectory inside it.

$ ./tg aks kustomize -s 1 -v <version>

2.1.2. Deploy manifest

Run kubectl apply to create the deployment using the manifest you generated in the previous step:

$ kubectl apply -f deploy/<namespace>-aks/tigergraph-aks-default.yaml

2.2. Verify single server

Run kubectl get pods to confirm that the pods were created successfully:

$ kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
installer-zsnb4   1/1     Running   0          4m11s
tigergraph-0      1/1     Running   0          4m10s

Run kubectl get services to find the external IP addresses of the RESTPP service and the GUI service. You can then send curl requests to the IP address of tg-rest-service on port 9000 to make sure that RESTPP is running:

$ curl <restpp_ip>:9000/echo | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    39  100    39    0     0    120      0 --:--:-- --:--:-- --:--:--   120
{
  "error": false,
  "message": "Hello GSQL"
}

You can also visit the IP address of the GUI service on port 14240 in your browser to make sure that GraphStudio is working.
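If you prefer staying on the command line, the external IP of a service can also be read with kubectl's jsonpath output. This is a sketch: tg-rest-service is the service name used earlier in this guide, and the jsonpath expression assumes a LoadBalancer-type service that has already been assigned an ingress IP.

$ kubectl get service tg-rest-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

You can then reuse the printed address in the curl call above.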

2.3. Connect to single server

You can use kubectl to get a shell in the container, or log in via SSH:

# Via kubectl
kubectl exec -it tigergraph-0 -- /bin/bash

# Via ssh
ip_m1=$(kubectl get pod -o wide | grep tigergraph-0 | awk '{print $6}')
ssh tigergraph@ip_m1
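The SSH variant above pulls the pod IP out of the sixth column of the wide output. Here is a minimal sketch of that extraction against a captured sample line (the IP and node name below are made up for illustration):

```shell
# Simulate one line of `kubectl get pod -o wide` output; the pod IP sits in
# column 6. The values here are hypothetical.
sample='tigergraph-0   1/1   Running   0   4m10s   10.244.1.5   aks-nodepool1-0   <none>   <none>'
pod_ip=$(printf '%s\n' "$sample" | awk '{print $6}')
echo "$pod_ip"
```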

2.4. Remove single server resources

Use the tg script in the k8s directory of the repo to delete all deployment resources. Replace <namespace> with the name of the namespace from which you want to delete the resources. If you don't specify a namespace, the command deletes the resources in the namespace default:

$ ./tg aks delete -n <namespace>

3. Cluster deployment

Once your AKS cluster is ready, follow the steps below to deploy a TigerGraph cluster on Kubernetes.

3.1. Deploy TigerGraph cluster

3.1.1. Generate Kubernetes manifest

Clone the TigerGraph ecosys repository and change into the k8s directory:

$ git clone https://github.com/tigergraph/ecosys.git
$ cd ecosys/k8s

You can customize your deployment by editing the kustomization.yaml file in the aks directory. The tg script in the k8s folder offers a convenient way to make common customizations such as the namespace, TigerGraph version, and cluster size. Run ./tg -h to view usage help for the script.

Use the tg script in the k8s directory of the repo to create a Kubernetes manifest. Use -s or --size to set the number of nodes in the cluster. Use the --ha option to set the replication factor of the cluster; the partitioning factor is the number of nodes divided by the replication factor.

For example, the following command creates a manifest that deploys a six-node (3×2) cluster with a replication factor of 2 and a partitioning factor of 3:

$ ./tg aks kustomize -s 6 --ha 2 -v <version> -n <namespace>

The command will create a directory named deploy with the manifest inside.
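As a quick sanity check on the sizing arithmetic described above, here is the rule worked in shell, using the values from the -s 6 --ha 2 example:

```shell
# Partitioning factor = cluster size / replication factor
# (the size is assumed to divide evenly by the replication factor).
SIZE=6  # number of nodes, from -s
HA=2    # replication factor, from --ha
PART=$((SIZE / HA))
echo "replication=$HA partitioning=$PART"
```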

3.1.2. Deploy the cluster

Run kubectl apply to create the deployment:

$ kubectl apply -f ./deploy/<namespace>-aks/tigergraph-aks-default.yaml

3.2. Verify cluster

Run kubectl get pods to verify that the pods were created successfully:

$ kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
installer-zsnb4   1/1     Running   0          4m11s
tigergraph-0      1/1     Running   0          4m10s
tigergraph-1      1/1     Running   0          75s

Run kubectl get services to find the external IP addresses of the RESTPP service and the GUI service. You can then send curl requests to the IP address of tg-rest-service on port 9000 to make sure that RESTPP is running:

$ curl <restpp_ip>:9000/echo | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    39  100    39    0     0    120      0 --:--:-- --:--:-- --:--:--   120
{
  "error": false,
  "message": "Hello GSQL"
}

You can also visit the IP address of the GUI service on port 14240 in your browser to make sure that GraphStudio is working.

3.3. Connect to instances

You can use kubectl to get a shell in the container, or log in via SSH:

# Via kubectl
kubectl exec -it tigergraph-0 -- /bin/bash

# Via ssh
ip_m1=$(kubectl get pod -o wide | grep tigergraph-0 | awk '{print $6}')
ssh tigergraph@ip_m1

3.4. Delete cluster resources

Use the tg script in the k8s directory of the repo to delete all cluster resources. Replace <namespace_name> with the name of the namespace from which you want to delete the resources. If you don't specify a namespace, the command deletes the resources in the namespace default:

$ ./tg aks delete -n <namespace_name>