Securing the Kubernetes API


You have a functioning Kubernetes cluster that is running on a non-secure port with the API server exposed to everyone in your organization. You have been tasked with securing the Kubernetes API such that only the Kubernetes nodes and other defined users can call the API.

To secure your Kubernetes API, you must generate server and client certificates, specify a service account context using a token, configure the apiserver to listen on a secure port, and update the Kubernetes master and node configurations. Below are detailed instructions to accomplish this. Please note that while this is a working example, your mileage may vary depending on setup and configuration. I hope these instructions give you a head start on securing your Kubernetes cluster.

For additional information regarding authenticating to and accessing the Kubernetes API, please see the Kubernetes authentication documentation.

Kubernetes Master

Create a directory to store the certificates.

# mkdir /srv/kubernetes

Create a Certificate Authority (CA) key and certificate on the Kubernetes master.

# openssl genrsa -out /srv/kubernetes/ca.key 4096
# openssl req -x509 -new -nodes -key /srv/kubernetes/ca.key -subj "/CN=${HOSTNAME}" -days 10000 -out /srv/kubernetes/ca.crt

Create a server key on the Kubernetes master.

# openssl genrsa -out /srv/kubernetes/server.key 2048

Generate a Certificate Signing Request (CSR) for a server certificate on the Kubernetes master.

# openssl req -new -key /srv/kubernetes/server.key -subj "/CN=${HOSTNAME}" -out /srv/kubernetes/server.csr

Generate the server certificate from the CSR and sign it with the newly generated CA key and certificate.

# openssl x509 -req -in /srv/kubernetes/server.csr -CA /srv/kubernetes/ca.crt -CAkey /srv/kubernetes/ca.key -CAcreateserial -out /srv/kubernetes/server.crt -days 10000

View your server certificate (optional).

# openssl x509 -noout -text -in /srv/kubernetes/server.crt

Generate a random token that will be used by the "kubelet" service account and store it in a variable named "TOKEN".

# TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)

Create a "known tokens" file named /srv/kube-apiserver/known_tokens.csv (this will be referenced in the /etc/kubernetes/apiserver config).

# mkdir /srv/kube-apiserver
# echo "${TOKEN},kubelet,kubelet" > /srv/kube-apiserver/known_tokens.csv
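You can sanity-check the token and the known-tokens file format without touching /srv/kube-apiserver by running the same commands against a temp directory (paths here are placeholders):

```shell
# Generate a 32-character random token, exactly as above.
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)

# Write the known-tokens file into a scratch directory.
DIR=$(mktemp -d)
echo "${TOKEN},kubelet,kubelet" > "$DIR/known_tokens.csv"

# The file should contain one line with exactly three comma-separated
# fields: token, user name, user ID.
awk -F, '{print NF}' "$DIR/known_tokens.csv"
```

The last command should print 3; the token itself is 32 characters of base64 with "=", "+", and "/" stripped, so it never contains a comma.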

Create a backup of your Kubernetes apiserver and controller-manager config files.

# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.`date "+%Y%m%d"`.bak
# cp /etc/kubernetes/controller-manager /etc/kubernetes/controller-manager.`date "+%Y%m%d"`.bak

Edit the /etc/kubernetes/apiserver file.

# vi /etc/kubernetes/apiserver

Add the following flags to the KUBE_API_ARGS parameter.

KUBE_API_ARGS="--cluster-name=mykubecluster --insecure-bind-address= --kubelet-https=true --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.crt --tls-private-key-file=/srv/kubernetes/server.key --token_auth_file=/srv/kube-apiserver/known_tokens.csv"

Edit the /etc/kubernetes/controller-manager file.

# vi /etc/kubernetes/controller-manager

Add the following flags to the KUBE_CONTROLLER_MANAGER_ARGS parameter.

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --node-monitor-grace-period=20s --pod-eviction-timeout=20s"

Restart the kube-apiserver and kube-controller-manager service.

# systemctl restart kube-apiserver.service kube-controller-manager.service

Check the status to ensure things look good.

# systemctl status -l kube-apiserver.service 
# systemctl status -l kube-controller-manager.service

Create a variable called "NODES" that lists all of your Kubernetes nodes (the hostnames below are examples; substitute your own).

# NODES="kubenode1 kubenode2 kubenode3"
Generate a client certificate for each of your Kubernetes nodes.

# for NODE in $NODES; do
    openssl req -newkey rsa:2048 -nodes -keyout /srv/kubernetes/${NODE}.key -subj "/CN=${NODE}" -out /srv/kubernetes/${NODE}.csr
    openssl x509 -req -days 10000 -in /srv/kubernetes/${NODE}.csr -CA /srv/kubernetes/ca.crt -CAkey /srv/kubernetes/ca.key -CAcreateserial -out /srv/kubernetes/${NODE}.crt
  done

Copy the CA certificate as well as the client certificates to their respective nodes.

# for NODE in $NODES; do
    ssh root@${NODE} mkdir -p /srv/kubernetes
    scp /srv/kubernetes/ca.crt /srv/kubernetes/${NODE}.crt /srv/kubernetes/${NODE}.key root@${NODE}:/srv/kubernetes/
  done

Kubernetes Nodes

On each Kubernetes node, run the following kubectl commands to configure the kubelet service to communicate securely with the kube-apiserver. You will need the token you generated earlier.

# kubectl config set-cluster mykubecluster --server= --insecure-skip-tls-verify=true
# kubectl config unset clusters
# kubectl config set-cluster mykubecluster --certificate-authority=/srv/kubernetes/ca.crt --embed-certs=true --server=
# kubectl config set-credentials kubelet --client-certificate=/srv/kubernetes/${HOSTNAME}.crt --client-key=/srv/kubernetes/${HOSTNAME}.key --embed-certs=true --token=${TOKEN}
# kubectl config set-context service-account-context --cluster=mykubecluster --user=kubelet
# kubectl config use-context service-account-context
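If you'd like to check the result, the generated config (under /root/.kube/config) should look roughly like the fragment below. The server URL and the base64 data fields are placeholders here, since those values come from your own cluster:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: mykubecluster
  cluster:
    certificate-authority-data: <base64-encoded ca.crt>
    server: https://your-master:6443    # placeholder
users:
- name: kubelet
  user:
    client-certificate-data: <base64-encoded node cert>
    client-key-data: <base64-encoded node key>
    token: <your token>
contexts:
- name: service-account-context
  context:
    cluster: mykubecluster
    user: kubelet
current-context: service-account-context
```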

Copy the kubelet config file that was auto-generated by the commands above (you can create/edit this file directly if you prefer; the commands above just simplify the configuration).

# cp /root/.kube/config /var/lib/kubelet/kubeconfig

Create a backup of your kubelet and kube-proxy config files.

# cp /etc/kubernetes/kubelet /etc/kubernetes/kubelet.`date "+%Y%m%d"`.bak
# cp /etc/kubernetes/proxy /etc/kubernetes/proxy.`date "+%Y%m%d"`.bak

Update the /etc/kubernetes/kubelet file to point to the new kubeconfig file.

# vi /etc/kubernetes/kubelet

Change the KUBELET_API_SERVER parameter to reflect the secure port (6443).


Add the "--kubeconfig" flag to the KUBELET_ARGS parameter.

KUBELET_ARGS="--enable_server=true --register-node=true --kubeconfig=/var/lib/kubelet/kubeconfig --node-status-update-frequency=5s"

Additionally, you will want to do the same thing for the kube-proxy configuration.

# mkdir /var/lib/kube-proxy
# cp /root/.kube/config /var/lib/kube-proxy/kubeconfig
# vi /etc/kubernetes/proxy

Add the "--kubeconfig" flag to the KUBE_PROXY_ARGS parameter.

KUBE_PROXY_ARGS="--kubeconfig=/var/lib/kube-proxy/kubeconfig"
Restart the kubelet and kube-proxy service.

# systemctl restart kubelet.service kube-proxy.service

Check the status to ensure things look good.

# systemctl status -l kubelet.service 
# systemctl status -l kube-proxy.service


Verify you can securely access the Kubernetes API using the client certificates. Run the following on one of the Kubernetes nodes.

# curl -k --key /srv/kubernetes/${HOSTNAME}.key  --cert /srv/kubernetes/${HOSTNAME}.crt  --cacert /srv/kubernetes/ca.crt

Verify you can securely access the Kubernetes API using the token. Again, run the following on one of the Kubernetes nodes.

# curl -k -H "Authorization: Bearer ${TOKEN}"

Docker syslog driver to local facility


You have Docker installed and want to send your Docker logs (default is json-file) to a local syslog facility (i.e. local6).

You can use the --log-driver=VALUE with the docker run command to configure the container’s logging driver or you can set the parameter globally in the docker daemon configuration file. This is useful when using container orchestration such as Kubernetes or Apache Mesos.

Step #1

Edit the docker configuration file (/etc/sysconfig/docker on RHEL/CentOS based systems).

# vi /etc/sysconfig/docker

Add the log driver parameter (--log-driver=syslog --log-opt syslog-facility=local6 --log-level=warn) to the OPTIONS line.

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=syslog --log-opt syslog-facility=local6 --log-level=warn'


# If you want to add your own registry to be used for docker search and docker
# pull use the ADD_REGISTRY option to list a set of registries, each prepended
# with --add-registry flag. The first registry added will be the first registry
# searched.

# If you want to block registries from being used, uncomment the BLOCK_REGISTRY
# option and give it a set of registries, each prepended with --block-registry
# flag. For example adding will stop users from downloading images
# from
# BLOCK_REGISTRY='--block-registry'

# If you have a registry secured with https but do not have proper certs
# distributed, you can tell docker to not look for full authorization by
# adding the registry to the INSECURE_REGISTRY line and uncommenting it.

# On an SELinux system, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined 1

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.

Restart the docker daemon.

# systemctl restart docker

Step #2

Configure the syslog daemon to match the local6 facility and write its logs to a specified location.

Create a new file in /etc/rsyslog.d called docker.conf.

# vi /etc/rsyslog.d/docker.conf

Add the following line to the /etc/rsyslog.d/docker.conf file.

local6.*    -/var/log/docker/docker.log
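The leading "-" tells rsyslog not to sync the file after every write. If you also want to keep these messages out of /var/log/messages, you can stop processing after the write; the fragment below uses rsyslog v7+ syntax (older versions use "& ~" instead of "& stop"):

```
local6.*    -/var/log/docker/docker.log
& stop
```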

Make sure that /var/log/docker exists.

# mkdir /var/log/docker

Restart the rsyslog daemon

# systemctl restart rsyslog

Start a Docker container or two. You will now be able to view all Docker logs in /var/log/docker/docker.log.

For additional information on configuring Docker's logging drivers, please see the Docker logging documentation.

How to create a Kubernetes namespace


You have a functioning Kubernetes cluster and want to deploy pods or replication controllers in a separate namespace (other than default).

The first thing you need to do is create a manifest file with Namespace as the "kind" parameter.

Below is an example manifest (foobar.namespace.yaml) defining a namespace named foobar. It is written in JSON, which is also valid YAML.

# cat foobar.namespace.yaml
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "foobar",
    "labels": {
      "name": "foobar"
    }
  }
}

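Since JSON is a subset of YAML, the same manifest can also be written in the more common YAML style:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: foobar
  labels:
    name: foobar
```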
Run the kubectl command to create the namespace.

# kubectl create -f foobar.namespace.yaml

Verify the namespace was created:

# kubectl get namespaces
NAME              LABELS                 STATUS
default           <none>                 Active
foobar            name=foobar            Active

To delete the namespace:

# kubectl delete namespace foobar

For additional information on Kubernetes namespaces, visit the Kubernetes documentation.

Run redis-cli in a container from kubectl command


You have a Kubernetes cluster running a Redis pod or replication controller, and you need to run a redis-cli command in the container via the kubectl command. The grep/awk commands below may not be needed, depending on how you have Redis configured. In this scenario, the Redis container binds to the container IP only (not localhost), so we must grep for the host. There are plenty of other ways to accomplish the same thing, but I wanted to pass this along in case anyone else runs into a similar situation.


# kubectl --namespace=default exec -it `kubectl --namespace=default get pod | grep mypodname | awk '{print $1}'` -- /usr/bin/redis-cli -h `kubectl --namespace=default get pod | grep mypodname | awk '{print $1}'` -p 6379 -a mysecretpassword info server


# Server
os:Linux 3.10.0-327.el7.x86_64 x86_64
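The pod-name extraction inside the backticks can be sanity-checked offline without a cluster; the sample `kubectl get pod` output below is hypothetical:

```shell
# Hypothetical output from `kubectl get pod` (header plus rows).
SAMPLE='NAME                  READY     STATUS    RESTARTS   AGE
mypodname-1a2b3c      1/1       Running   0          2d
otherpod-9z8y7x       1/1       Running   0          5d'

# Same grep/awk pipeline as in the command above: match the pod by
# name prefix and print the first column (the full pod name).
POD=$(echo "$SAMPLE" | grep mypodname | awk '{print $1}')
echo "$POD"
```

This prints `mypodname-1a2b3c`, the value that gets substituted into the kubectl exec command.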

Advanced Command

If you need to perform an administrative task such as purging keys from the Redis cache, pipe the output to an xargs command:

kubectl --namespace=default exec -i `kubectl --namespace=default get pod | grep my-redis-pod | awk '{print $1}'` -- \
    /usr/bin/redis-cli -h `kubectl --namespace=default get pod | grep my-redis-pod | awk '{print $1}'` -p 6379 -a mysecretpassword KEYS "mykey:*" \
  | xargs kubectl --namespace=default exec -i `kubectl --namespace=default get pod | grep my-redis-pod | awk '{print $1}'` -- \
    /usr/bin/redis-cli -h `kubectl --namespace=default get pod | grep my-redis-pod | awk '{print $1}'` -p 6379 -a mysecretpassword DEL

This command can come in handy to allow developers to purge the Redis cache from a Jenkins Continuous Integration workflow.