Yet Another Kubernetes Intro - Part 6 - Configuration

I have reached the 6th part of my introduction to Kubernetes, and so far we have covered a LOT of things. But there are still a few things left to cover.

In this part, I had planned to focus on configuration and storage. But after having written what I wanted to write about configuration using ConfigMaps and Secrets, I ended up with a pretty long post already. So I decided to leave storage for the next post…

Configuration

In the Docker universe, configuration is added to containers using either environment variables or data mapped as volumes. And if you have worked with this before, you will feel very much at home in Kubernetes. Why? Well, because K8s uses the same paradigms when it comes to configuration. Either you set up environment variables, or you map volumes. However, there is a huge difference, and that is where you store your config.

In Docker, we define our environment variables when running docker run, or in our docker-compose.yml files. And when mapping config volumes, once again, we define them using either the Docker CLI or docker-compose. Either way, this means that our deployment scripts need to be changed for different environments. We can't just run the same deployment in different environments, which is a bit limiting.
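Just for comparison, a docker-compose.yml with a fixed environment variable and a mapped config volume could look something like this (the service name and paths are just made-up examples)

version: "3"
services:
  web:
    image: zerokoll/helloworld
    environment:
      - MY_CONFIG_KEY=my-config-value
    volumes:
      - ./config:/etc/config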

Note: It’s obviously possible to define environment variables in a “fixed” way like this in K8s as well, using the Pod spec. And in some cases that is good enough. But K8s gives us other options as you will see.
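For reference, a minimal sketch of what that "fixed" approach looks like in a Pod spec (the variable name and value are just placeholders)

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    env:
      - name: my-config-key
        value: my-config-value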

In Kubernetes, our configuration can be stored as resources in the cluster instead. This means that we can have several different clusters with different configuration, and still use the same deployments in all of them. The configuration is picked up from the cluster we are deploying to, instead of being "hardcoded" in the deployment.

ConfigMaps

To store our config in the cluster, we create ConfigMap resources. These are basically named string dictionaries that we can reference when we define environment variables and volume mappings.

In its most basic form, a ConfigMap looks like this

kind: ConfigMap 
apiVersion: v1 
metadata:
  name: my-config
data:
  my-config-key: my-config-value
  my-other-config-key: my-config-other-value

This creates a ConfigMap resource called my-config, containing 2 items with the keys my-config-key and my-other-config-key.
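Assuming the YAML above is saved in a file called my-config.yaml (the file name is just an example), it is added to the cluster like any other resource

kubectl apply -f my-config.yaml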

ConfigMaps and environment variables

Once this ConfigMap has been added to the cluster, the values can be used as environment variables in our pods like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    envFrom:
      - configMapRef:
          name: my-config

As you might have figured out, this spec adds environment variables from a config map called my-config. This means that each one of the items in the ConfigMap will be added to the container as an environment variable. Nice and simple!

In the above code, the pod gets all the entries from the ConfigMap added as environment variables, using the defined key names. But what if you only want some of them? Or if you need to rename them to get them to work with your container? Well, in that case you can change the spec like this

…
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    env:
      - name: my-renamed-config
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: my-config-key

In this case, an individual environment variable called my-renamed-config is set up, and the value is set to the my-config-key entry in the my-config ConfigMap. And you can obviously add as many individual environment variables like this as you want. You can even combine the two approaches, and use multiple ConfigMaps, like this

…
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    env:
      - name: my-renamed-config
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: my-config-key
      - name: my-second-renamed-config
        valueFrom:
          configMapKeyRef:
            name: my-second-config
            key: my-second-config-key
    envFrom:
      - configMapRef:
          name: my-config
      - configMapRef:
          name: my-second-config

In this example, there are 2 ConfigMaps that are being referenced in the envFrom entry. Values from these 2 maps are then added to 2 different environment variables using “custom” names. And in the end, the container doesn’t really care. All it knows is that there are 2 environment variables with the names it expects. Where the values come from is completely irrelevant…
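If you want to verify that the variables actually end up in the container, one way is to run printenv inside the running pod, assuming the image has it available

kubectl exec my-pod -- printenv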

ConfigMaps and volumes

But you are not limited to using environment variables when referencing ConfigMaps. You can also map their values inside the container using volumes. This allows you to map values in ConfigMaps as files in your container. All you have to do is configure a volume using your ConfigMap, and then mount that inside the container. Like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: my-config

This spec adds a directory at /etc/config inside the container, containing one file per key in the ConfigMap. And each one of the files will contain the corresponding value from the ConfigMap. This allows us to easily add configuration files inside our containers, with data pulled from ConfigMaps. And of course, you can map as many volumes as you want.
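A quick way to confirm that the mapping works is to list the directory, and read one of the files, inside the running container (again assuming the usual shell tools are available in the image)

kubectl exec my-pod -- ls /etc/config
kubectl exec my-pod -- cat /etc/config/my-config-key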

However, just mapping simple little string values, like in the previous example, is often not enough. Instead, we often want to map bigger things like for example JSON files.

For example, when using ASP.NET Core, we might want to add an appsettings.Production.json file to allow us to overwrite the default config for our application. Luckily, that is not a problem at all! All we have to do is define a ConfigMap that looks something like this

kind: ConfigMap 
apiVersion: v1 
metadata:
  name: web-app-config
data:
  appsettings: |
    {
      "connectionstrings": {
        "db": "MY PROD CONNECTIONSTRING"
      }
    }

This appsettings entry can then be mapped into the container, with the required file name, by using the following config

…
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    volumeMounts:
      - name: config-volume
        mountPath: /my-app
  volumes:
    - name: config-volume
      configMap:
        name: web-app-config
        items:
          - key: appsettings
            path: appsettings.Production.json

This maps the JSON data from the ConfigMap as a file called appsettings.Production.json in the /my-app directory. And as long as our app is located at that path, and the application has set the environment to Production, it picks up the connection string value from the ConfigMap.
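ASP.NET Core actually defaults to the Production environment when nothing else is configured, but if you want to set it explicitly, a plain environment variable in the same container spec does the trick. Something like this

…
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    env:
      - name: ASPNETCORE_ENVIRONMENT
        value: Production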

Note: When mapping ConfigMap values to volumes in containers, any change to the ConfigMap will be reflected inside the container. However, there is a bit of caching in place, which can cause the update to be delayed up to about 2 minutes by default. But hey, it’s better than nothing!

Creating ConfigMaps

Besides defining ConfigMaps using YAML files, you can also create them using the kubectl create configmap command. This command expects the name of the ConfigMap to create, and a data source in the form of a file or directory.

Imagine a directory called config-dir, containing 2 files, config1.txt and config2.json. Running

kubectl create configmap my-config-map --from-file ./config-dir/config1.txt

will create a ConfigMap called my-config-map, containing a single item with the key config1.txt and the value set to the content of the file. Like this

kubectl describe configmap my-config-map

Name:         my-config-map
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
config1.txt:
----
here is my config
Events:  <none>

Passing in the directory instead of a file, like this

kubectl create configmap my-config-map --from-file ./config-dir/

you get a ConfigMap with the same name, but containing 2 entries. One for config1.txt and one for config2.json

kubectl describe configmap my-config-map

Name:         my-config-map
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
config1.txt:
----
here is my config
config2.json:
----
{
    "key": "value"
}
Events:  <none>

Note: If you are trying out these commands, remember that you have to delete the created ConfigMap between the two commands since they both create a ConfigMap with the same name. And you can’t really create a ConfigMap with a name that already exists.
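Deleting the first ConfigMap is a single command

kubectl delete configmap my-config-map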

Note 2: You can also use something called "env-files", which are basically files with key=value entries on each line. Using one of these files, you can use `--from-env-file` instead of `--from-file` to get the entries in the file added as multiple items in your ConfigMap. This allows you to have multiple config values in a single file, and easily generate multiple entries in a single ConfigMap.
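As a sketch, an env-file (let's call it config.env, and the keys are just made up) could look like this

my-config-key=my-config-value
my-other-config-key=my-other-config-value

and is then turned into a ConfigMap like this

kubectl create configmap my-env-config --from-env-file ./config.env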

Secrets

There is also a second resource type that can be used for configuration. It is called a Secret. A secret is a lot like a ConfigMap. A whole lot! However, it is intended to be used for “secrets” like credentials for example.

Having said that, it is good to know that the values aren't actually that secret. They are just Base64 encoded. So they aren't secure in any way, but having them in a Secret resource shows the intent of the data.

Creating Secrets

You can create secrets using YAML as always. You just need to remember to Base64 encode the values, like this

apiVersion: v1
kind: Secret
metadata:
  name: my-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: Y2hyaXM=

If you want to base64 encode a string in PowerShell, you can do it like this

$Bytes = [System.Text.Encoding]::UTF8.GetBytes('password')
[Convert]::ToBase64String($Bytes)

and if you are using bash, you can do it like this

echo -n 'password' | base64

However, I'm not sure you really want to keep your Base64 encoded secrets in source control. I know that I have said that I think all your resource specs should be stored in source control. But in this case, I'm not sure that is advisable. Instead, I would actually suggest using the CLI to create your secrets, so they don't end up in the wrong hands.

kubectl create secret generic my-secret --from-literal=key='value' --from-literal=key2='value2'

This creates a Secret called my-secret with 2 items called key and key2 in it. And no, in this case you do not need to Base64 encode the values first.

Note: As your shell probably requires you to escape certain characters, it is recommended that you put your strings in quotes…

Another way to do it, is to add the values to files, and then use the files to create the entries. Like this

'chris' > username
'password' > password
kubectl create secret generic my-credentials --from-file=./username --from-file=./password

This creates 2 files, username and password, and then creates a secret called my-credentials containing those items.

kubectl get secret my-credentials -o yaml

apiVersion: v1
data:
  password: //5wAGEAcwBzAHcAbwByAGQADQAKAA==
  username: //5jAGgAcgBpAHMADQAKAA==
kind: Secret
metadata:
  creationTimestamp: "2020-02-12T03:32:05Z"
  name: my-credentials
  namespace: default
  resourceVersion: "356877"
  selfLink: /api/v1/namespaces/default/secrets/my-credentials
  uid: bff500a1-e186-401b-bbe6-18512b87d961
type: Opaque

As you can see from the above output, accessing the values in the secret isn’t that hard. So it is important to make sure that you set up proper access control to make sure that access is limited as much as possible. More about this in a later post…
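For example, anyone with read access to the Secret can pull out and decode a value like this (shown using bash, the decode step will look slightly different in PowerShell)

kubectl get secret my-credentials -o jsonpath='{.data.password}' | base64 --decode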

Note: In the above example, remember to delete the files that were created as well…

Using Secrets as environment variables

Secrets, just like ConfigMaps, can be mapped into your pods as environment variables. And the syntax is almost identical to the one used for ConfigMaps. The only change that is needed is to change the configMapKeyRef to secretKeyRef, like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    env:
      - name: my-username
        valueFrom:
          secretKeyRef:
            name: my-credentials
            key: username
      - name: my-password
        valueFrom:
          secretKeyRef:
            name: my-credentials
            key: password

This spec will add 2 environment variables, called my-username and my-password, with the values mapped from the Secret called my-credentials.

Using Secrets as volumes

And just as with ConfigMaps, you can also mount Secrets as volumes. Once again, the only difference is that you use an entry called secret instead of configMap. So, to mount the my-credentials Secret in a container, we use the following YAML

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    volumeMounts:
      - name: my-secret-volume
        mountPath: /secret-stuff
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-credentials

As you can see, it is pretty much identical except for the change from configMap to secret. And just as with ConfigMaps, you can define specific items to map, like this

…
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    volumeMounts:
      - name: my-secret-volume
        mountPath: /secret-stuff
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-credentials
        items:
        - key: username
          path: my-username

imagePullSecrets

In Kubernetes, there is also a very special type of secret called docker-registry. This is a type of secret that is used to hold credentials for any private container registries that you pull images from.

A docker-registry secret can be created in a couple of ways. One way is to use your existing Docker credentials, creating the secret from the Docker config file on your machine. Like this

kubectl create secret generic pull-secret \
             --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
             --type=kubernetes.io/dockerconfigjson

This will create a docker-registry secret called pull-secret, containing the credentials used by Docker.

However, you can also create one by passing in the credentials in the terminal like this

kubectl create secret docker-registry pull-secret \
              --docker-server=<CONTAINER REGISTRY ADDRESS> \
              --docker-username=<USERNAME> \
              --docker-password=<PASSWORD> \
              --docker-email=<EMAIL>

This will also create a secret called pull-secret.

Whichever you choose, the outcome is the same, a docker-registry secret that you can use for authentication when pulling images.

And when you want to use one of these secrets when pulling an image, you can define it in your pod definition, using the imagePullSecrets entry like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: k8s4devs.azurecr.io/zerokoll/helloworld
  imagePullSecrets:
    - name: pull-secret

This spec will pull the image from the container registry located at k8s4devs.azurecr.io, using the credentials in the secret called pull-secret to authenticate.

I think that covers most of what you need to know to get your applications configured. So I think I will just call it a day for this time. I hope you got some of the information that you needed!

The next part, covering storage, is available here!
