In Development Inside Kubernetes Part 4: Private Container Registry, I set up a private container registry, which is an important part of a local development environment. Typically, people just use the local image cache inside Docker, but a private registry is the better solution: you are always working against a registry, which is exactly what you will do in a live environment.
The next step is to be able to develop and build containers from within the cluster itself. There are three basic use cases we want to attempt.
Setup
Initially, I need to give Kubernetes pods access to files like container descriptors, which means any Dockerfiles (or paths to Dockerfiles) must be accessible from my kaniko pod. Here, I set up a PersistentVolume and point it to a path in my home directory where I intend to keep the Dockerfiles.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /Users/<user-name>/kaniko # replace with local directory, such as "/home/<user-name>/kaniko"
A claim needs to be created to bind it. For both the PersistentVolume and the PersistentVolumeClaim, a local-storage storageClassName is used.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dockerfile-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
Besides the PersistentVolume and PersistentVolumeClaim, we will need a secret. This is already covered in [Development Inside Kubernetes Part 4: Private Container Registry](https://hive.blog/blog/@timesheets/5kztcs-development-inside-kubernetes-part-4-private-container-registry). I am going to reuse the docker secret I created there.
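If you are starting fresh instead, a registry secret like the regcred the manifests below reference can be created along these lines (the server and credentials here are placeholders; use whatever your registry from Part 4 expects):

# Hypothetical values; point this at your own registry and credentials.
kubectl create secret docker-registry regcred \
  --docker-server=registry.local.thelastpri.me \
  --docker-username=<user> \
  --docker-password=<password>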
Usage
One of the great things about kaniko is that nothing needs to be installed. It can run in a Pod by itself. I am using the gcr.io/kaniko-project/executor:debug container image.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh","-c","cat $STACKLOG_PATH"]
    env:
    - name: STACKLOG_PATH
      value: /workspace/kaniko.slog
    args: ["--dockerfile=/workspace/dockerfile",
           "--context=dir://workspace",
           "--destination=registry.local.thelastpri.me/test/cloudtools"] # replace with your dockerhub account
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
    - name: dockerfile-storage
      mountPath: /workspace
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: config.json
  - name: dockerfile-storage
    persistentVolumeClaim:
      claimName: dockerfile-claim
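Running this is just a matter of applying the manifest and following the pod's logs (kaniko-pod.yaml is just what I am assuming the file was saved as):

kubectl apply -f kaniko-pod.yaml
# Stream the build output; the image is pushed to the registry when the build finishes.
kubectl logs -f kaniko
# Clean up the completed pod before the next run.
kubectl delete pod kaniko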
For more information, you can check out the kaniko getting started tutorial. At this point, I have not really said anything more than what is already in that tutorial.
How I am Using Kaniko
What I am really trying to do is develop in a way that replaces my docker command. Kaniko supports stdin input, but I decided against using it because of context. I like to build from a context; I have assets and dependencies that I need to build with. It is possible to gzip the context and pass it in through stdin, but I would rather mount it as a volume instead.
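For reference, the stdin approach I decided against looks roughly like the sketch below. This is only a sketch: the tar://stdin context value is from the kaniko docs as I understand them, and an ad-hoc pod like this has no registry secret mounted, which is one more reason I prefer the mounted volume.

# Sketch of the stdin approach (not what I use): gzip the build context and
# stream it into a one-off kaniko pod. Note there are no registry credentials here.
tar -czf - -C workspace . | kubectl run kaniko-stdin -i --rm --restart=Never \
  --image=gcr.io/kaniko-project/executor:debug -- \
  --context=tar://stdin \
  --destination=registry.local.thelastpri.me/test/cloudtools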
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko
spec:
  template:
    spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:debug
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","cat $STACKLOG_PATH"]
        env:
        - name: STACKLOG_PATH
          value: /workspace/kaniko.slog
        args: ["--dockerfile=/workspace/dockerfile",
               "--context=dir://workspace",
               "--destination=registry.local.thelastpri.me/test/cloudtools"] # replace with your dockerhub account
        volumeMounts:
        - name: kaniko-secret
          mountPath: /kaniko/.docker
        - name: dockerfile-storage
          mountPath: /workspace
      restartPolicy: Never
      volumes:
      - name: kaniko-secret
        secret:
          secretName: regcred
          items:
          - key: .dockerconfigjson
            path: config.json
      - name: dockerfile-storage
        persistentVolumeClaim:
          claimName: dockerfile-claim
Instead of a Pod, I use a Job.
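Since a finished Job cannot simply be re-applied with new contents, my rebuild loop deletes and recreates it (assuming the manifest above is saved as kaniko.yaml, matching the command further down):

kubectl delete job kaniko --ignore-not-found
kubectl apply -f kaniko.yaml
# Wait for the build to finish, then print the build log.
kubectl wait --for=condition=complete job/kaniko --timeout=10m
kubectl logs job/kaniko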
Developing on WSL
Using Rancher Desktop on WSL will create additional WSL distributions, rancher-desktop and rancher-desktop-data, alongside your own environment.
PS C:\Users\timesheets> wsl -l
Windows Subsystem for Linux Distributions:
Debian (Default)
rancher-desktop-data
rancher-desktop
All WSL environments have access to /mnt/wsl/rancher-desktop:
❯ ls -l /mnt/wsl/rancher-desktop/run/data
total 64
drwxr-xr-x 2 root root 4096 Sep 20 11:24 bin/
drwxr-xr-x 2 root root 4096 Sep 20 11:24 dev/
drwxr-xr-x 3 root root 4096 Nov 5 19:56 etc/
-rwxr-xr-x 1 root root 0 Nov 5 19:56 init*
drwxr-xr-x 4 root root 4096 Aug 2 15:13 lib/
drwx------ 2 root root 16384 Apr 10 2019 lost+found/
drwxr-xr-x 6 root root 4096 Sep 27 11:40 mnt/
drwxr-xr-x 2 root root 4096 Sep 20 11:24 proc/
drwxr-xr-x 2 root root 4096 Sep 20 11:24 run/
drwxr-xr-x 2 root root 4096 Sep 20 11:24 sbin/
drwxr-xr-x 2 root root 4096 Sep 20 11:24 sys/
drwxrwxrwt 3 root root 4096 Oct 12 05:01 tmp/
drwxr-xr-x 4 root root 4096 Oct 3 21:15 usr/
drwxr-xr-x 3 root root 4096 Sep 20 11:24 var/
This commonly shared volume allows me to sync my workspace with a volume that kaniko will have access to. I just set the shared volume as my PersistentVolume.
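As a sketch, the PersistentVolume from the setup section just gets its hostPath pointed at the shared location instead of my home directory. I am assuming here that the path I rsync into below (/mnt/wsl/rancher-desktop/usr/src/workspace) resolves the same way from inside the rancher-desktop node; adjust it to wherever you actually sync your workspace.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /mnt/wsl/rancher-desktop/usr/src/workspace # shared WSL path; adjust to your sync target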
Then, I can create my builds by running:
rsync -av workspace/* /mnt/wsl/rancher-desktop/usr/src/workspace
kubectl apply -f kaniko.yaml
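To confirm the image actually landed, the registry's tags can be listed through the standard registry API (URL as above; add -k, or -u with credentials, depending on how the registry from Part 4 was set up):

# List the tags pushed for the test/cloudtools repository.
curl https://registry.local.thelastpri.me/v2/test/cloudtools/tags/list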
In the end, kaniko was adequate, but I needed something more consistent. Kaniko is still under active development, and things like multi-stage builds are not well supported.