Generate random string from Linux command line


Command

Use the following command to generate a random 16-character string containing digits, upper- and lower-case letters, and "special" characters.

</dev/urandom tr -dc '0123456789!@#$%^&*abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' | head -c16; echo ""

Output

2kQjZ&iHSavVPj&U

If you prefer only eight alphanumeric characters, use:

</dev/urandom tr -dc '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' | head -c8; echo ""

TCxvA40s
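
If you only need hexadecimal characters, openssl rand offers a compact alternative (assuming OpenSSL is installed; "-hex 8" prints 8 random bytes as a 16-character hex string):

# openssl rand -hex 8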

Securing the Kubernetes API


Scenario

You have a functioning Kubernetes cluster that is running on a non-secure port with the API server exposed to everyone in your organization. You have been tasked with securing the Kubernetes API such that only the Kubernetes nodes and other defined users can call the API.

To secure your Kubernetes API, you must generate server and client certificates, specify a service account context using a token, configure the apiserver to listen on a secure port, and update the Kubernetes master and node configurations. Below are detailed instructions to accomplish this. Please note that while this is a working example, your mileage may vary depending on setup and configuration. I hope these instructions give you a head start on securing your Kubernetes cluster.

For additional information regarding authenticating/accessing the Kubernetes API, please visit http://kubernetes.io/docs/admin/authentication.


Kubernetes Master

Create a directory to store the certificates.

# mkdir /srv/kubernetes

Create a Certificate Authority (CA) key and certificate on the Kubernetes master.

# openssl genrsa -out /srv/kubernetes/ca.key 4096
# openssl req -x509 -new -nodes -key /srv/kubernetes/ca.key -subj "/CN=${HOSTNAME}" -days 10000 -out /srv/kubernetes/ca.crt
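
Optionally, inspect the CA certificate's subject and validity dates before moving on:

# openssl x509 -noout -subject -dates -in /srv/kubernetes/ca.crt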

Create a server key on the Kubernetes master.

# openssl genrsa -out /srv/kubernetes/server.key 2048

Generate a Certificate Signing Request (CSR) for a server certificate on the Kubernetes master.

# openssl req -new -key /srv/kubernetes/server.key -subj "/CN=${HOSTNAME}" -out /srv/kubernetes/server.csr

Generate the server certificate from the CSR and sign it with the newly generated CA key and certificate.

# openssl x509 -req -in /srv/kubernetes/server.csr -CA /srv/kubernetes/ca.crt -CAkey /srv/kubernetes/ca.key -CAcreateserial -out /srv/kubernetes/server.crt -days 10000

View your server certificate (optional).

# openssl x509 -noout -text -in /srv/kubernetes/server.crt

Generate a random token that will be used by the "kubelet" service account and store it in a variable named "TOKEN".

# TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
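
Make a note of the token value; it only exists as a shell variable on the master, and you will need it again when configuring each node.

# echo ${TOKEN}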

Create a "known tokens" file named /srv/kube-apiserver/known_tokens.csv (this will be referenced in the /etc/kubernetes/apiserver config).

# mkdir /srv/kube-apiserver
# echo "${TOKEN},kubelet,kubelet" > /srv/kube-apiserver/known_tokens.csv

Create a backup of your Kubernetes apiserver and controller-manager config files.

# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.`date "+%Y%m%d"`.bak
# cp /etc/kubernetes/controller-manager /etc/kubernetes/controller-manager.`date "+%Y%m%d"`.bak

Edit the /etc/kubernetes/apiserver file.

# vi /etc/kubernetes/apiserver

Add the following flags to the KUBE_API_ARGS parameter.

KUBE_API_ARGS="--cluster-name=mykubecluster --insecure-bind-address=127.0.0.1 --kubelet-https=true --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.crt --tls-private-key-file=/srv/kubernetes/server.key --token_auth_file=/srv/kube-apiserver/known_tokens.csv"

Edit the /etc/kubernetes/controller-manager file.

# vi /etc/kubernetes/controller-manager

Add the following flags to the KUBE_CONTROLLER_MANAGER_ARGS parameter.

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --node-monitor-grace-period=20s --pod-eviction-timeout=20s"

Restart the kube-apiserver and kube-controller-manager service.

# systemctl restart kube-apiserver.service kube-controller-manager.service

Check the status to ensure things look good.

# systemctl status -l kube-apiserver.service 
# systemctl status -l kube-controller-manager.service
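
As an additional sanity check, confirm the apiserver is now listening on the secure port (6443 is the default; adjust if you changed it):

# ss -tlnp | grep 6443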

Create a variable called "NODES" that lists all of your Kubernetes nodes.

NODES="kubenode1.mydomain.com kubenode2.mydomain.com kubenode3.mydomain.com"

Generate a client certificate for each of your Kubernetes nodes.

# for NODE in $NODES; do
    openssl req -newkey rsa:2048 -nodes -keyout /srv/kubernetes/${NODE}.key -subj "/CN=${NODE}" -out /srv/kubernetes/${NODE}.csr
    openssl x509 -req -days 10000 -in /srv/kubernetes/${NODE}.csr -CA /srv/kubernetes/ca.crt -CAkey /srv/kubernetes/ca.key -CAcreateserial -out /srv/kubernetes/${NODE}.crt
  done
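
Optionally, verify that each client certificate chains back to your CA before copying it out:

# for NODE in $NODES; do
    openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/${NODE}.crt
  done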

Copy the CA certificate as well as the client certificates to their respective nodes.

# for NODE in $NODES; do
    ssh root@${NODE} mkdir /srv/kubernetes
    scp /srv/kubernetes/ca.crt /srv/kubernetes/${NODE}.crt /srv/kubernetes/${NODE}.key root@${NODE}:/srv/kubernetes/
  done

Kubernetes Nodes

On each Kubernetes node, run the following kubectl commands to configure the kubelet service to communicate securely with the kube-apiserver. You will need the token you generated earlier on the master (the ${TOKEN} reference below assumes you have set that value in a variable on the node).

# kubectl config set-cluster mykubecluster --server=https://kubemaster.domain.com:6443 --insecure-skip-tls-verify=true
# kubectl config unset clusters
# kubectl config set-cluster mykubecluster --certificate-authority=/srv/kubernetes/ca.crt --embed-certs=true --server=https://kubemaster.domain.com:6443
# kubectl config set-credentials kubelet --client-certificate=/srv/kubernetes/${HOSTNAME}.crt --client-key=/srv/kubernetes/${HOSTNAME}.key --embed-certs=true --token=${TOKEN}
# kubectl config set-context service-account-context --cluster=mykubecluster --user=kubelet
# kubectl config use-context service-account-context

Copy the kubeconfig file that was auto-generated by the commands above (you can create/edit this file directly if you prefer; the commands above just simplify the configuration).

# cp /root/.kube/config /var/lib/kubelet/kubeconfig
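
For reference, the generated kubeconfig will look roughly like the sketch below (certificate data and the token are elided; the exact key ordering may differ):

apiVersion: v1
kind: Config
clusters:
- name: mykubecluster
  cluster:
    certificate-authority-data: <base64-encoded ca.crt>
    server: https://kubemaster.domain.com:6443
users:
- name: kubelet
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
    token: <the token generated on the master>
contexts:
- name: service-account-context
  context:
    cluster: mykubecluster
    user: kubelet
current-context: service-account-context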

Create a backup of your kubelet and kube-proxy config files.

# cp /etc/kubernetes/kubelet /etc/kubernetes/kubelet.`date "+%Y%m%d"`.bak
# cp /etc/kubernetes/proxy /etc/kubernetes/proxy.`date "+%Y%m%d"`.bak

Update the /etc/kubernetes/kubelet file to point to the new kubeconfig file.

# vi /etc/kubernetes/kubelet

Change the KUBELET_API_SERVER parameter to reflect the secure port (6443).

KUBELET_API_SERVER="--api_servers=https://kubemaster.domain.com:6443"

Add the "--kubeconfig" flag to the KUBELET_ARGS parameter.

KUBELET_ARGS="--enable_server=true --register-node=true --kubeconfig=/var/lib/kubelet/kubeconfig --node-status-update-frequency=5s"

Additionally, you will want to do the same thing for the kube-proxy configuration.

# mkdir /var/lib/kube-proxy
# cp /root/.kube/config /var/lib/kube-proxy/kubeconfig
# vi /etc/kubernetes/proxy

Add the "--kubeconfig" flag to the KUBELET_PROXY _ARGS parameter.

KUBE_PROXY_ARGS="--kubeconfig=/var/lib/kube-proxy/kubeconfig"

Restart the kubelet and kube-proxy service.

# systemctl restart kubelet.service kube-proxy.service

Check the status to ensure things look good.

# systemctl status -l kubelet.service 
# systemctl status -l kube-proxy.service

Testing

Verify you can securely access the Kubernetes API using the client certificates. Run the following on one of the Kubernetes nodes.

# curl -k --key /srv/kubernetes/${HOSTNAME}.key  --cert /srv/kubernetes/${HOSTNAME}.crt  --cacert /srv/kubernetes/ca.crt https://kubemaster.domain.com:6443/version

Verify you can securely access the Kubernetes API using the token. Again, run the following on one of the Kubernetes nodes.

# curl -k -H "Authorization: Bearer ${TOKEN}" https://kubemaster.domain.com:6443/version

Security scan a RHEL7 Docker image & container


Scenario

You have a running Docker environment with a RHEL7 base image downloaded and running. The security folks are breathing down your neck for proof that the Docker images and containers are safe. Your challenge...prove it.

We will utilize the open-source Security Content Automation Protocol (OpenSCAP) tooling, specifically its Docker front end (oscap-docker).

We will install the packages provided through the Red Hat/CentOS channels, but they are also available at the link below if you prefer to download them directly.
https://github.com/OpenSCAP/container-compliance

Prerequisites

Install the openscap-utils package which contains the oscap-docker command.

# yum install openscap-utils -y

Additionally, install the SCAP Security Guide, which provides predefined security policies (e.g., PCI DSS). You can also create custom security policies if you wish.

# yum install scap-security-guide -y
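
The SCAP Security Guide content is installed under /usr/share/xml/scap/ssg/content; you can list the available data streams (exact file names may vary slightly between releases):

# ls /usr/share/xml/scap/ssg/content/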

Create a directory in which to store your scan results.

# mkdir /oscap

CVE Scans

Perform a Common Vulnerabilities and Exposures (CVE) scan of a Docker image.

# oscap-docker image-cve myprivatedockerregistry:5000/mydockerimage --results /oscap/mydockerimage-results-cve.xml --report /oscap/mydockerimage-report-cve.html

Perform the same CVE scan of a container.

# oscap-docker container-cve mycontainer --results /oscap/mycontainer-results-cve.xml --report /oscap/mycontainer-report-cve.html

PCI DSS Scans

Perform a Payment Card Industry Data Security Standard (PCI DSS) scan of a Docker image.

# oscap-docker image myprivatedockerregistry:5000/mydockerimage xccdf eval --results /oscap/mydockerimage-results-pci-dss.xml --report /oscap/mydockerimage-report-pci-dss.html --profile xccdf_org.ssgproject.content_profile_pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

Perform the same PCI DSS scan of a container.

# oscap-docker container mycontainer xccdf eval --results /oscap/mycontainer-results-pci-dss.xml --report /oscap/mycontainer-report-pci-dss.html --profile xccdf_org.ssgproject.content_profile_pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

The oscap-docker command uses the same switches and parameters as the oscap command.
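
For example, if you are unsure which profiles a data stream provides, the plain oscap command can list them:

# oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml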

For additional information, check the man page.

# man oscap-docker

I highly recommend patching your Docker image before running the scans (primarily the CVE scan). An all "green" scan equals a happy security department. To learn how to patch RHEL7 Docker images, see "Experiences patching a RHEL 7.1 base Docker image" below.

Inserting certificates into Java keystore via Dockerfile


Scenario

You have a Root CA and Issuing CA certificate that you need to import into the Java keystore of a Docker image to allow your application to make trusted calls to another secured site signed by your Issuing CA.


Create the Dockerfile below to install Java, copy your certificates from your host system (relative path ./certs) into the Docker image, and use the keytool command to import the certificates into the default Java keystore ($JAVA_HOME/lib/security/cacerts).

You will obviously want to customize this to suit your needs, adding your Java application server (e.g., Apache Tomcat, WildFly) and copying your code into the Docker image via the Dockerfile as well.

Dockerfile

FROM registry.access.redhat.com/rhel7.1

ENV JAVA_HOME=/usr/lib/jvm/jre

COPY ./certs/My_Root_CA.cer /etc/ssl/certs/
COPY ./certs/My_Issuing_CA.cer /etc/ssl/certs/


RUN yum clean all && \
    yum --releasever=7.1 swap -y -- remove systemd-container\* -- install systemd systemd-libs && \
    yum update -y --releasever=7.1 && \
    yum install java-1.8.0-openjdk --releasever=7.1 -y && \
    yum clean all && \
    $JAVA_HOME/bin/keytool -storepasswd -new mysecretpassword -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit && \
    echo "yes" | $JAVA_HOME/bin/keytool -import -trustcacerts -file /etc/ssl/certs/My_Root_CA.cer -alias my-root-ca -keystore $JAVA_HOME/lib/security/cacerts -storepass mysecretpassword && \
    echo "yes" | $JAVA_HOME/bin/keytool -import -trustcacerts -file /etc/ssl/certs/My_Issuing_CA.cer -alias my-issuing-ca -keystore $JAVA_HOME/lib/security/cacerts -storepass mysecretpassword && \
    rm -f /etc/ssl/certs/My_Root_CA.cer && \
    rm -f /etc/ssl/certs/My_Issuing_CA.cer

Command

Next, build your Docker image using the docker build command.

# docker build -t myprivatedockerregistry:5000/rhel7.1-with-my-certs .

Your containerized application will now trust certificates signed by your Issuing CA.
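
To confirm the import worked, you can list the two aliases from a throwaway container built from the new image (a quick check; the grep pattern assumes the aliases and keystore password used above):

# docker run --rm myprivatedockerregistry:5000/rhel7.1-with-my-certs \
    sh -c '$JAVA_HOME/bin/keytool -list -keystore $JAVA_HOME/lib/security/cacerts -storepass mysecretpassword | grep -i my-'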

Experiences patching a RHEL 7.1 base Docker image


Scenario

You have Docker installed on a host system and you want to deploy a patched Red Hat Enterprise Linux base Docker image. Below are two options, along with several challenges I had to overcome along the way.


Option #1

Start an interactive shell into a RHEL 7.1 docker container.

# docker run -it registry.access.redhat.com/rhel7.1 bash

Run yum update inside the container.

 [root@0e439b6c0ec6 /]# yum update -y
Loaded plugins: product-id, subscription-manager

https://mysatelliteserver/pulp/repos/myorganization/myenvironment/mycontentview/content/dist/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.


 One of the configured repositories failed (Red Hat Enterprise Linux 7 Server (RPMs)),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Disable the repository, so yum won't use it by default. Yum will then
        just ignore the repository until you permanently enable it again or use
        --enablerepo for temporary usage:

            yum-config-manager --disable rhel-7-server-rpms

     4. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=rhel-7-server-rpms.skip_if_unavailable=true

failure: repodata/repomd.xml from rhel-7-server-rpms: [Errno 256] No more mirrors to try.

https://mysatelliteserver/pulp/repos/myorganization/myenvironment/mycontentview/content/dist/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found

Ugh! We hit an error. If you encounter the same error, then you need to specify the Red Hat release version using the "--releasever" yum parameter. See below.

[root@0e439b6c0ec6 /]# yum update -y --releasever=7.1
 Loaded plugins: product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package bash.x86_64 0:4.2.46-12.el7 will be updated
 ...
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Importing GPG key 0xFD431D51:
 Userid     : "Red Hat, Inc. (release key 2) <security@redhat.com>"
 Fingerprint: 567e 347a d004 4ade 55ba 8a5f 199e 2f91 fd43 1d51
 Package    : redhat-release-server-7.1-1.el7.x86_64 (@koji-override-1/7.0)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Importing GPG key 0x2FA658E0:
 Userid     : "Red Hat, Inc. (auxiliary key) <security@redhat.com>"
 Fingerprint: 43a6 e49c 4a38 f4be 9abf 2a53 4568 9c88 2fa6 58e0
 Package    : redhat-release-server-7.1-1.el7.x86_64 (@koji-override-1/7.0)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Running transaction check
Running transaction test


Transaction check error:
  file /usr/lib64/libsystemd-daemon.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64
  file /usr/lib64/libsystemd-id128.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64
  file /usr/lib64/libsystemd-journal.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64
  file /usr/lib64/libsystemd-login.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64
  file /usr/lib64/libudev.so.1 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64
  file /usr/lib64/security/pam_systemd.so from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.x86_64

Error Summary
-------------

Again, we hit another error. This one is a known Red Hat bug (1284056). The good news is that there is a workaround.

[root@0e439b6c0ec6 /]# yum --releasever=7.1 swap -y -- remove systemd-container\* -- install systemd systemd-libs
 Loaded plugins: product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package systemd.x86_64 0:219-19.el7 will be installed
...
 Complete!
 [root@0e439b6c0ec6 /]# yum update -y --releasever=7.1
 Loaded plugins: product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package bash.x86_64 0:4.2.46-12.el7 will be updated
...
 Complete!

Bingo! A completely patched RHEL 7.1 Docker image. The final step is to commit the changes. Exit your running container by typing "exit" and then run a docker commit command.

# docker commit 0e439b6c0ec6 myprivatedockerregistry:5000/rhel7.1-patched
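
You can verify the new image exists locally before pushing it to your registry:

# docker images | grep rhel7.1-patched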

Golden! And the security department is happy!!


Option #2

Now for the easy way. Create a Dockerfile with the below contents.

FROM registry.access.redhat.com/rhel7.1

RUN yum clean all && \
    yum --releasever=7.1 swap -y -- remove systemd-container\* -- install systemd systemd-libs && \
    yum update -y --releasever=7.1 && \
    yum clean all

Run the docker build command and you're done!

# docker build -t myprivatedockerregistry:5000/rhel7.1-patched .
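
If you want to confirm the image is fully patched, you can run a quick check in a throwaway container (assuming the same repositories are reachable; yum check-update exits 0 when nothing is pending and 100 when updates are available):

# docker run --rm myprivatedockerregistry:5000/rhel7.1-patched yum check-update --releasever=7.1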