You’ve got a K3s cluster and a GitLab instance. You push code and, ideally, your app rolls out automatically. In practice, you might hit a few snags: pipelines not triggering, service ports clashing, or images refusing to pull with an ominous ImagePullBackOff (401 Unauthorized). This guide shows a clean, repeatable setup and the exact fixes that work in real life, with anonymized URLs/IPs so you can adapt everything directly to your own environment.
## Environment used in this guide (genericized)

- K3s cluster: accessible at `<CLUSTER_NODE_IP>`
- GitLab: `https://<GITLAB_HOST>` with a private registry at `https://<REGISTRY_HOST>`
- Runner host: separate Linux box (Podman or Docker installed; `kubectl` access to K3s)
- Projects: `my-backend` (API) and `my-frontend` (UI)
## Architecture at a glance

- GitLab CI/CD builds the image → pushes it to the GitLab Container Registry → applies Kubernetes manifests to K3s.
- Services/ports in this example:
  - Backend `my-backend-service`: NodePort 30080 → `http://<CLUSTER_NODE_IP>:30080`
  - Frontend `my-frontend-service`: NodePort 30081 → `http://<CLUSTER_NODE_IP>:30081`
## Prerequisites

- Runner can reach the cluster (the job image has `kubectl` and a valid kubeconfig). In many self-hosted setups, you mount or inject the kubeconfig via a CI variable such as `KUBE_CONFIG`.
- GitLab Container Registry is enabled; projects push images to `https://<REGISTRY_HOST>/<group>/<project>` (private).
- Kubernetes manifests (Deployment + Service) exist in your repo.
- Optional GitLab variables often used in templates: `CI_REGISTRY_USER`, `CI_REGISTRY_PASSWORD`, `CI_REGISTRY`, `KUBE_CONFIG`.

Security tip: mask & protect secrets, and scope them per environment where possible.
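If you go the `KUBE_CONFIG` route, here is a minimal sketch of how a deploy job can consume it (assuming the variable holds a base64-encoded kubeconfig; the FAQ at the end shows the same decode step in isolation):

```yaml
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]  # override the kubectl entrypoint so job scripts can run
  before_script:
    # Materialize the kubeconfig from the masked CI variable
    - echo "$KUBE_CONFIG" | base64 -d > kubeconfig
    - export KUBECONFIG="$PWD/kubeconfig"
  script:
    - kubectl get nodes  # quick connectivity check
```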
## Kubernetes manifests (example)

### Deployment (frontend)

Use labels for easy selection and a container listening on port 80. Replace the image with your registry path.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      labels:
        app: my-frontend
    spec:
      containers:
        - name: my-frontend
          image: <REGISTRY_HOST>/<GROUP>/<PROJECT>:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
```
### Service (avoid port conflicts)

Pick a NodePort that doesn’t collide with other services (here, 30081 for the frontend).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30081
```
## GitLab CI/CD: two proven patterns

### A) Minimal Docker-in-Docker pipeline

This approach builds and pushes with `docker:dind`, then deploys with `kubectl`. Note the explicit `docker login` before pushing and the entrypoint override on the `kubectl` image, two small details that often bite in practice:
```yaml
stages:
  - build
  - deploy

variables:
  DOCKER_IMAGE_TAG: "$CI_COMMIT_SHORT_SHA"
  KUBE_NAMESPACE: "default"

build:
  stage: build
  image: docker:latest
  services: [docker:dind]
  script:
    # Log in before pushing; without this, pushes to a private registry fail
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$DOCKER_IMAGE_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$DOCKER_IMAGE_TAG"
    - docker tag "$CI_REGISTRY_IMAGE:$DOCKER_IMAGE_TAG" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]  # override the kubectl entrypoint so job scripts can run
  script:
    - sed "s|<IMAGE_NAME>|$CI_REGISTRY_IMAGE:$DOCKER_IMAGE_TAG|g" k8s/deployment-template.yaml > k8s/deployment.yaml
    - kubectl -n "$KUBE_NAMESPACE" apply -f k8s/deployment.yaml
    - kubectl -n "$KUBE_NAMESPACE" apply -f k8s/service.yaml
    - kubectl -n "$KUBE_NAMESPACE" rollout status deployment/my-frontend --timeout=180s
```
### B) Podman + explicit workflow rules (great when pushes don’t trigger)

Adding `workflow` rules that allow the `push` and `web` sources ensures pipelines start both on push and via manual triggers. This pattern also assumes the build and push jobs run on the same runner host (e.g., a shell executor), so the image Podman builds in one job is still in local storage when the next job pushes it:
```yaml
stages:
  - build
  - push
  - deploy

variables:
  IMAGE_NAME: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  KUBE_NAMESPACE: "default"

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
    - if: $CI_PIPELINE_SOURCE == "web"

build-frontend:
  stage: build
  script:
    - npm ci
    - npm run build
    # Auth for Podman to push to the GitLab registry (using CI_JOB_TOKEN)
    - mkdir -p ~/.docker
    - AUTH=$(echo -n "$CI_REGISTRY_USER:$CI_JOB_TOKEN" | base64 | tr -d '\n')
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$AUTH\"}}}" > ~/.docker/config.json
    - podman build -t "$IMAGE_NAME" .
  artifacts:
    paths: [build/]

push-image:
  stage: push
  needs: [build-frontend]
  script:
    - podman login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN"
    - podman push "$IMAGE_NAME"

deploy:
  stage: deploy
  needs: [push-image]
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]  # override the kubectl entrypoint so job scripts can run
  script:
    # Create or refresh the imagePull secret. Caveat: CI_JOB_TOKEN expires when the
    # job ends, so pulls that happen later will 401; prefer a deploy token with
    # read_registry (see the 401 section below).
    - |
      kubectl -n "$KUBE_NAMESPACE" create secret docker-registry regcred \
        --docker-server="$CI_REGISTRY" \
        --docker-username="$CI_REGISTRY_USER" \
        --docker-password="$CI_JOB_TOKEN" \
        --docker-email="ci@example.com" \
        --dry-run=client -o yaml | kubectl -n "$KUBE_NAMESPACE" apply -f -
    - kubectl -n "$KUBE_NAMESPACE" set image deployment/my-frontend my-frontend="$IMAGE_NAME"
    - kubectl -n "$KUBE_NAMESPACE" rollout restart deployment/my-frontend
```

Note: `set image` with a new tag already triggers a rollout; the extra `rollout restart` only matters when the tag didn’t change (e.g., reusing `latest`).
## Verifying a deployment after push

After the pipeline completes, confirm that the new Pods are up and the Service points to them:

```bash
# Watch the rollout
kubectl -n <NAMESPACE> rollout status deployment/my-frontend

# See Pods and their status
kubectl -n <NAMESPACE> get pods -l app=my-frontend

# Describe a failing Pod if needed
kubectl -n <NAMESPACE> describe pod <pod-name>

# Check Services and endpoints
kubectl -n <NAMESPACE> get svc
kubectl -n <NAMESPACE> get endpoints my-frontend-service
```

To verify connectivity end-to-end:

```bash
# From outside the cluster (NodePort)
curl -fsSI "http://<CLUSTER_NODE_IP>:30081/health"
```
## The big one: fixing ImagePullBackOff (401 Unauthorized)

A very common failure during rollout looks like this:

```
Failed to pull and unpack image ".../<group>/<project>:<sha>":
failed to authorize: failed to fetch oauth token: unexpected status ...
401 Unauthorized
```

This means the cluster failed to authenticate against the private GitLab registry. Typical root causes:

- The imagePullSecret (`regcred`) is missing, outdated, or created with credentials that can’t read the registry.
- The pipeline pushes images using `CI_JOB_TOKEN`, but that token expires when the job ends, so the cluster later pulls with stale (or no) credentials. Result: 401.
### How to fix it (three reliable options)

1. Use a Deploy Token or Personal Access Token with `read_registry`.
   - Create a token in GitLab with the `read_registry` scope.
   - Recreate `regcred` on the cluster with that token:

     ```bash
     kubectl -n <NAMESPACE> delete secret regcred || true
     kubectl -n <NAMESPACE> create secret docker-registry regcred \
       --docker-server=<REGISTRY_HOST> \
       --docker-username="<DEPLOY_TOKEN_USERNAME_OR_USER>" \
       --docker-password="<TOKEN_WITH_read_registry>"
     ```

   - Ensure your Deployment uses `imagePullSecrets: [{ name: regcred }]`.

2. Use the built-in `CI_REGISTRY_USER` + `CI_JOB_TOKEN` (if permitted).
   Some GitLab setups permit `CI_JOB_TOKEN` for pulls from the cluster; others don’t, and the token expires with the job anyway. If you see repeated 401s, prefer option 1.

3. Pre-configure registry auth at the cluster level.
   On self-hosted K3s, you can configure registry credentials once (via K3s’ registry config) so every node can pull from your GitLab registry without per-namespace secrets; see the sketch after this list.
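A minimal sketch of that cluster-level config (K3s reads `/etc/rancher/k3s/registries.yaml` on each node; the username and token below are placeholders for credentials with `read_registry`):

```yaml
# /etc/rancher/k3s/registries.yaml
configs:
  "<REGISTRY_HOST>":
    auth:
      username: "<DEPLOY_TOKEN_USERNAME>"
      password: "<TOKEN_WITH_read_registry>"
```

Restart K3s afterwards (e.g., `sudo systemctl restart k3s`) so its embedded containerd picks up the change.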
### Sanity checks after the fix

```bash
kubectl -n <NAMESPACE> get secret regcred -o yaml   # confirm it exists & is fresh
kubectl -n <NAMESPACE> rollout restart deploy/my-frontend
kubectl -n <NAMESPACE> get pods -l app=my-frontend -w
```
## Avoiding port collisions

If your backend already uses NodePort 30080, keep the frontend on a different port (e.g., 30081) to avoid clashes. Later, introduce an Ingress and TLS so you can expose multiple apps cleanly under `https://app.<your-domain>` paths or subdomains.
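A minimal sketch of such an Ingress, assuming the Traefik controller that ships with K3s and a pre-existing TLS secret named `app-tls` (the hostname and secret name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-frontend-ingress
spec:
  tls:
    - hosts: ["app.<your-domain>"]
      secretName: app-tls   # e.g., created by cert-manager or by hand
  rules:
    - host: app.<your-domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-frontend-service
                port:
                  number: 80
```

Once traffic flows through the Ingress, the Services behind it can switch from NodePort back to plain ClusterIP.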
## When a pipeline won’t trigger on push

If a push doesn’t start a pipeline but a manual trigger works, add workflow rules like:

```yaml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
    - if: $CI_PIPELINE_SOURCE == "web"
```

This ensures both automatic (push) and manual (web) triggers behave as expected.
## Post-deploy health checklist (copy/paste)

- `kubectl -n <NAMESPACE> rollout status deploy/<name>` — rollout completed?
- `kubectl -n <NAMESPACE> get pods -l app=<label>` — new Pods Ready 1/1?
- `kubectl -n <NAMESPACE> logs deploy/<name> --tail=100` — any runtime errors?
- `kubectl -n <NAMESPACE> get svc` — is the expected NodePort/LoadBalancer exposed?
- `curl http://<CLUSTER_NODE_IP>:<nodePort>/health` — basic health responder OK?
## Quick FAQ

**Do I need `KUBE_CONFIG` as a CI variable?**
Only if your runner job doesn’t already have a working kubeconfig. A common pattern is to store a base64-encoded kubeconfig in a masked variable `KUBE_CONFIG` and write it to disk during the job:

```bash
echo "$KUBE_CONFIG" | base64 -d > kubeconfig
export KUBECONFIG=$PWD/kubeconfig
```
**Why did the old Pod stay up while the new one failed?**
Kubernetes keeps the old, healthy ReplicaSet available while the new ReplicaSet is stuck in `ImagePullBackOff`. The rollout completes only after the new image pulls and its Pods become Ready.
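You can watch this mechanism directly while a rollout is stuck; the old ReplicaSet keeps its Pods while the new one sits at zero ready (the label is the one used in the manifests above):

```bash
# Two ReplicaSets coexist during the rollout
kubectl -n <NAMESPACE> get rs -l app=my-frontend
```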
**What’s the fastest way to test registry auth from the cluster?**
Recreate `regcred` with a token that has `read_registry`, reference it in the Deployment, then `rollout restart`. Watch for the image pull to succeed.
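You can also rule out a bad token before touching the cluster; a quick sketch with Podman from any machine (token and username are placeholders):

```bash
# Log in with the exact credentials the cluster will use
podman login <REGISTRY_HOST> -u "<DEPLOY_TOKEN_USERNAME>" -p "<TOKEN_WITH_read_registry>"

# If this pull succeeds, the 401 is a cluster-side secret problem, not a token problem
podman pull <REGISTRY_HOST>/<GROUP>/<PROJECT>:latest
```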
## Wrap-up

A smooth GitLab → K3s pipeline comes down to three things:

- A sane pipeline (Docker or Podman) with explicit workflow rules.
- Clear Kubernetes manifests (Deployment + Service) with unique, documented ports.
- Registry pull credentials that actually work for the cluster (fixing the 401 `ImagePullBackOff`).

With these locked in, every push becomes a predictable rollout on your K3s cluster, with no surprises.