You might be asking, "Why? What is the use case?" With skaffold and okteto around, I ask myself the same. The short answer is that it is a stopgap: it helps me develop and work on changes with a manual sync. Normally, the best answer is an automated sync like you get with skaffold or okteto, but we can't get that running immediately. This is what I did to alleviate my problems in the interim.
Problem
Normally, automated build systems present a way for you to clone a git repository and build what is there, immutably. This is fine, except that in development you build before committing. You should not have to commit something, before you even know the build outcome, just so you can build it. Further, there are cases where the build requires secrets or special variables that you definitely do not want pushed into your repo. Again, the best solution is a tool that syncs, but I want to test build automation that does not sync.
Solution
Enter sshd and rsync. I build a PersistentVolume that is used by any of my pods. I create a pod that has a claim on that volume. The pod is also running sshd. I then create a LoadBalancer to expose sshd. This allows me to rsync my local changes to the PersistentVolume, where another pod can then build.
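The end state is a single command from my laptop, along the lines of this sketch (the port, user, and mount path are the ones used in the manifests below; the IP comes from the LoadBalancer once it exists):

# Sync the local working tree into the cluster-side volume over ssh.
rsync -avz -e 'ssh -p 4444' ./ node@<loadbalancer-ip>:/workspace/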
The Container
The Dockerfile I used for the sshd distribution:
ARG VARIANT=18-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT}

# [Optional] Uncomment this section to install additional OS packages.
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
        python3 \
        python3-pip \
        rsync \
        openssh-server \
        emacs \
        fish \
    && pip3 install awscli \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# systemd is not running during a container build, so systemctl cannot
# enable or start sshd here; just make sure its runtime directory exists.
RUN mkdir -p /run/sshd

USER node
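Then I build and push the image so the pod manifest below can pull it. The tag matches the image referenced in the pod spec; building from the current directory is an assumption:

docker build -t registry.local.thelastpri.me/cloudtools:ssh .
docker push registry.local.thelastpri.me/cloudtools:ssh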
Setup PersistentVolumes
These are the manifests for the volumes I mentioned, which will be shared across different pods for building and running.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cloudtools-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 60Gi
  hostPath:
    path: /var/lib/rancher/k3s/storage/homelab
    type: ""
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3os-23996
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloudtools-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: cloudtools-pv
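Both manifests get applied together, and the claim should come up Bound (the file name here is an assumption):

kubectl apply -f cloudtools-pv.yaml
# cloudtools-pvc should report Bound against cloudtools-pv
kubectl get pv,pvc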
Kubernetes Pod
Next, I need a pod to deploy this container to:
apiVersion: v1
kind: Pod
metadata:
  name: cloudtools
  annotations:
    container.apparmor.security.beta.kubernetes.io/cloudtools: unconfined
    container.seccomp.security.alpha.kubernetes.io/cloudtools: unconfined
  labels:
    app: cloudtools
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: cloudtools
      image: registry.local.thelastpri.me/cloudtools:ssh
      command: [ "tail", "-f", "/dev/null" ]
      securityContext:
        seccompProfile:
          type: Unconfined
        runAsUser: 1000
        runAsGroup: 1000
      ports:
        - containerPort: 4444
      volumeMounts:
        - name: docker-secret
          mountPath: /home/user/.docker
        - name: dockerfile-storage
          mountPath: /workspace
  volumes:
    - name: docker-secret
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json
    - name: dockerfile-storage
      persistentVolumeClaim:
        claimName: cloudtools-pvc
You can see the pod is configured to mount the appropriate volume, and sshd is set to run on port 4444.
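One caveat: since the container's command is tail -f /dev/null, nothing starts sshd for me. After the pod is up, I start it by hand, roughly like this (the absolute path and the node user's sudo access are assumptions about this Debian-based image):

# Start sshd inside the pod on the port the service will target.
kubectl exec -ti cloudtools -- sudo /usr/sbin/sshd -p 4444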
LoadBalancer
This is what makes everything work. The LoadBalancer exposes port 4444 so it can be accessed from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: cloudtools-ssh
spec:
  selector:
    app: cloudtools
  type: LoadBalancer
  ports:
    - port: 4444
      targetPort: 4444
      name: ssh
      protocol: TCP
The result is:
NAME READY STATUS RESTARTS AGE
pod/cloudtools 1/1 Running 0 11h
pod/svclb-cloudtools-ssh-fdl5c 1/1 Running 0 11h
pod/svclb-cloudtools-ssh-xjtnr 1/1 Running 0 11h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cloudtools-ssh LoadBalancer 10.43.108.66 192.168.222.19,192.168.222.69 4444:30372/TCP 12h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/svclb-cloudtools-ssh 2 2 2 2 2 <none> 12h
It shows that ssh is available and running.
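With one of the external IPs from the output above, the sync from my laptop looks like this (node is the user from the Dockerfile, /workspace the mount path from the pod spec):

# Sanity-check the connection, then push local changes to the volume.
ssh -p 4444 node@192.168.222.19 true
rsync -avz -e 'ssh -p 4444' ./ node@192.168.222.19:/workspace/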
Next
I think I will build this as a chart. I have some things that I want to parameterize and make more accessible, like the port number. I also want to be able to add authorized_keys. Right now, after the pod is deployed, I have to use kubectl exec -ti to manually add my keys. It is not much of a hassle because it only needs to be done once. However, I am likely to forget this step in the future. I need to automate it so I do not need to worry about it.
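For reference, the one-time manual step looks roughly like this (the key file name is an assumption; substitute your own public key):

# Append the local public key to the pod user's authorized_keys.
kubectl exec -i cloudtools -- bash -c \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
  < ~/.ssh/id_ed25519.pub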
Update
I want to give a quick recap on my experiences with this.
What We All Really Want Is an Internal Development Platform
What are Internal Development Platforms (IDPs)? Well, this explains it: https://internaldeveloperplatform.org/
In my own words, it's what I'm trying to do. I'm trying to build reproducible, idempotent developer environments for any use case. We used to all develop on mainframes, then we moved back to developing on our local machines because of VCS. Now we're back to developing on huge behemoths while thinking we're on our local machines. It's great. Anyway, with an IDP, we do not SSH in. We tunnel. I get that it's essentially the same thing. The important part is that it's not manual. It's seamless. We don't know we're not on our local machine. Even our files sync.
SSH Is Not the Answer
Now that we know what I am trying to do, we can see SSH is not the right approach. What is the right approach? Coder. What is Coder? https://github.com/coder/coder
It's essentially what we've all been using. Be it VS Code, IntelliJ, or another JetBrains IDE, the IDE we've been working with uses the same basic stuff under the hood. The aesthetics are different, but they're basically the same. Coder is what lives under all the fluffy stuff.
The premise here is to run Coder as a server platform and remote into it with whatever IDE we use on our local machines (even our tablets).
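As a rough sketch of that premise, going off the coder/coder README (the server URL is a placeholder, and the exact commands may drift between releases):

# On the machine that will host the workspaces: install and start the server.
curl -fsSL https://coder.com/install.sh | sh
coder server

# From the local machine (or tablet): authenticate, then attach an IDE.
coder login https://coder.example.com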