Creating a Civo Kubernetes cluster through Pulumi


The next option for provisioning your Civo Kubernetes cluster is to use Pulumi. Pulumi is a modern infrastructure as code platform that leverages existing programming language ecosystems to interact with cloud resources through the Pulumi SDK.

In this tutorial we are going to show you how to create a Civo Kubernetes cluster with Pulumi. As the programming language, we chose Go.

If you would like further information on using the Civo Pulumi provider, please refer to the full guide.

Before getting started, please make sure you have the Pulumi CLI and kubectl installed. For details on how to install and use the Pulumi CLI, see this documentation.
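For example, on macOS and Linux you can install the Pulumi CLI with the official install script:

curl -fsSL https://get.pulumi.com | sh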

Pulumi stores metadata about your infrastructure so that it can manage your cloud resources. This metadata is called state. The default experience is to use the hosted Pulumi Service, which I use in this example. See Pulumi SaaS for details. There are plenty of different backends available, so follow this link to decide on the right Pulumi backend for you.
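Logging in to the hosted Pulumi Service is the default behaviour of pulumi login; a self-managed backend can be selected with the same command, for example the local filesystem:

pulumi login           # hosted Pulumi Service (the default)
pulumi login --local   # store state on the local filesystem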

To create a Pulumi project, use the pulumi new command. Follow the interactive steps; you can even choose from a template.
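Since we are writing Go, the go template is the natural starting point; the prompts ask for a project name, description, and stack name:

pulumi new go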

After the creation process, you should have the following files in your folder (I chose dev as the stack name):

Pulumi.yaml
Pulumi.dev.yaml 
README.md
go.mod
go.sum
main.go
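
Pulumi.yaml describes the project and Pulumi.dev.yaml holds the configuration of the dev stack. Assuming the project is named civo-go (as in the outputs later in this post), Pulumi.yaml will look roughly like this:

name: civo-go
runtime: go
description: A Civo Kubernetes cluster deployed with Pulumi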

Let us add the Civo Pulumi provider Go package to our go.mod:

go get github.com/pulumi/pulumi-civo/sdk

To avoid hard-coding the cluster information in our program, we can use the config feature of the Pulumi CLI:

pulumi config set region LON1
pulumi config set nodeSize g3.k3s.medium
...
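
These values end up in Pulumi.dev.yaml, namespaced with the project name. Assuming the project is called civo-go, the file now contains:

config:
  civo-go:nodeSize: g3.k3s.medium
  civo-go:region: LON1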

Finally, we can create our Civo Kubernetes cluster. Enter this snippet into the main function of main.go (the full program, including all imports, follows further below). Note that ctx.GetConfig expects the fully qualified key, namespaced with the project name:

...
pulumi.Run(func(ctx *pulumi.Context) error {

    // Config values are stored namespaced with the project name (civo-go),
    // so look them up fully qualified and fall back to defaults.
    region := "LON1"
    if configRegion, ok := ctx.GetConfig("civo-go:region"); ok {
        region = configRegion
    }

    nodeSize := "g3.k3s.medium"
    if configNodeSize, ok := ctx.GetConfig("civo-go:nodeSize"); ok {
        nodeSize = configNodeSize
    }

    // Create a three-node Civo Kubernetes cluster and export its name.
    cluster, err := civo.NewKubernetesCluster(ctx, "cluster", &civo.KubernetesClusterArgs{
        Name:            pulumi.StringPtr("cluster_name"),
        Applications:    pulumi.StringPtr(""),
        NumTargetNodes:  pulumi.IntPtr(3),
        TargetNodesSize: pulumi.StringPtr(nodeSize),
        Region:          pulumi.StringPtr(region),
    })
    if err != nil {
        return err
    }
    ctx.Export("name", cluster.Name)
    return nil
})
...

In this example I set my CIVO_TOKEN as an environment variable so the Civo provider can authenticate.
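
A minimal example, with a placeholder for your actual Civo API key:

export CIVO_TOKEN=<your-civo-api-key>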

Now we can run the preview command to check that everything is set up correctly.

pulumi preview
Previewing update (dev)

View Live: https://app.pulumi.com/dirien/civo-go/dev/previews/b7154519-f3e9-4d0f-93c6-b36663d73e2b

     Type                             Name         Plan       
 +   pulumi:pulumi:Stack              civo-go-dev  create     
 +   └─ civo:index:KubernetesCluster  cluster      create     

Resources:
    + 2 to create

Perfect, no errors. Let us also add the Kubernetes provider to create a Kubernetes resource, in this case a namespace called hello-civo.

So the final Pulumi deployment looks like this:

package main

import (
    "github.com/pulumi/pulumi-civo/sdk/go/civo"
    "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes"
    v1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
    metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {

        // Config values are stored namespaced with the project name (civo-go),
        // so look them up fully qualified and fall back to defaults.
        region := "LON1"
        if configRegion, ok := ctx.GetConfig("civo-go:region"); ok {
            region = configRegion
        }

        nodeSize := "g3.k3s.medium"
        if configNodeSize, ok := ctx.GetConfig("civo-go:nodeSize"); ok {
            nodeSize = configNodeSize
        }

        // Create the Civo Kubernetes cluster.
        cluster, err := civo.NewKubernetesCluster(ctx, "cluster", &civo.KubernetesClusterArgs{
            Name:            pulumi.StringPtr("cluster_name"),
            Applications:    pulumi.StringPtr(""),
            NumTargetNodes:  pulumi.IntPtr(3),
            TargetNodesSize: pulumi.StringPtr(nodeSize),
            Region:          pulumi.StringPtr(region),
        })
        if err != nil {
            return err
        }

        // Point an explicit Kubernetes provider at the new cluster's kubeconfig.
        provider, err := kubernetes.NewProvider(ctx, "kubernetes", &kubernetes.ProviderArgs{
            Kubeconfig: cluster.Kubeconfig,
            Cluster:    cluster.Name,
        })
        if err != nil {
            return err
        }

        // Create the hello-civo namespace on the freshly created cluster.
        namespace, err := v1.NewNamespace(ctx, "ns", &v1.NamespaceArgs{
            Metadata: &metav1.ObjectMetaArgs{
                Name: pulumi.StringPtr("hello-civo"),
            },
        }, pulumi.Providers(provider))
        if err != nil {
            return err
        }

        ctx.Export("name", cluster.Name)
        ctx.Export("namespace", namespace.Metadata.Name())
        return nil
    })
}
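
As an optional extension (not part of the program above), you could also export the generated kubeconfig as a secret output, so Pulumi encrypts it in the state and you can retrieve it later from the CLI. A sketch of the extra lines:

// Hypothetical extra export: mark the kubeconfig as a secret so it is
// encrypted in the Pulumi state.
ctx.Export("kubeconfig", pulumi.ToSecret(cluster.Kubeconfig))

Afterwards, pulumi stack output kubeconfig --show-secrets would print it.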

Now we can apply the stack with pulumi up:

 pulumi up   
Previewing update (dev)

View Live: https://app.pulumi.com/dirien/civo-go/dev/previews/03ed5a6a-7e8d-4a19-a8d2-195637f11651

     Type                             Name         Plan       
 +   pulumi:pulumi:Stack              civo-go-dev  create     
 +   ├─ civo:index:KubernetesCluster  cluster      create     
 +   ├─ pulumi:providers:kubernetes   kubernetes   create     
 +   └─ kubernetes:core/v1:Namespace  ns           create     

Resources:
    + 4 to create

Do you want to perform this update? yes
Updating (dev)

View Live: https://app.pulumi.com/dirien/civo-go/dev/updates/1

     Type                             Name         Status      
 +   pulumi:pulumi:Stack              civo-go-dev  created     
 +   ├─ civo:index:KubernetesCluster  cluster      created     
 +   ├─ pulumi:providers:kubernetes   kubernetes   created     
 +   └─ kubernetes:core/v1:Namespace  ns           created     

Outputs:
    name     : "cluster_name"
    namespace: "hello-civo"

Resources:
    + 4 created

Duration: 1m26s

Done :)

Check the namespaces (k is an alias for kubectl):

k get ns                                      
NAME              STATUS   AGE
default           Active   2m33s
kube-system       Active   2m33s
kube-public       Active   2m33s
kube-node-lease   Active   2m33s
hello-civo        Active   2m31s
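
When you are done experimenting, you can tear down the cluster and the namespace again with a single command:

pulumi destroy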