Configure Annotations, Labels, and Taints on Nodes

Overview

Cluster API (CAPI) can propagate selected Machine metadata to the corresponding Node:

  • Labels: via controller manager sync
  • Annotations: via controller manager sync
  • Taints: via Machine templates (applied during node registration)

This guide shows how to configure each and how to verify they are applied on both Machine and Node resources.

Note: The examples assume your Cluster API controller manager Deployment is named capi-controller-manager, and that workload nodes are managed by MachineDeployment objects and control plane nodes by KubeadmControlPlane objects.

Prerequisites

  • Permissions to edit the capi-controller-manager deployment and cluster API resources
  • version 4.2.0 or later

1) Sync labels from Machines to Nodes

Step 1: Enable label sync on the controller manager

Add the following argument to the manager container of the capi-controller-manager Deployment to specify which Machine labels to sync to Nodes:

spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --additional-sync-machine-labels=env,role,topology.kubernetes.io/zone

Replace the comma-separated list with the labels you want to sync.

INFO

The --additional-sync-machine-labels flag supports regex matching.
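
For example, assuming the controller manager runs in the capi-system namespace (adjust to match your installation), you can open the Deployment and append the flag to the existing args of the manager container:

# Edit the controller manager Deployment; add the flag under the manager container's args
kubectl edit deployment capi-controller-manager -n capi-system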

Step 2: Add labels on Machine templates

  • For workload nodes (MachineDeployment): set labels on .spec.template.metadata.labels.

  • For control plane nodes (KubeadmControlPlane): set labels on .spec.machineTemplate.metadata.labels.

    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: <md-name>
      namespace: <namespace>
    spec:
      clusterName: <cluster-name>
      template:
        metadata:
          labels:
            env: prod
            role: app
            topology.kubernetes.io/zone: az1
        spec: {}
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: <kcp-name>
      namespace: <namespace>
    spec:
      version: <k8s-version>
      clusterName: <cluster-name>
      machineTemplate:
        metadata:
          labels:
            env: prod
            role: control-plane
            topology.kubernetes.io/zone: az1
        spec: {}
      kubeadmConfigSpec: {}
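
For an existing MachineDeployment you can also merge the labels in with a patch; the keys and values below mirror the example above and are placeholders:

# Merge example labels into the MachineDeployment's Machine template
kubectl patch machinedeployment <md-name> -n <namespace> --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"env":"prod","role":"app"}}}}}'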

Verify

# Check Machine has labels
kubectl get machine -n <namespace> -l cluster.x-k8s.io/cluster-name=<cluster-name> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'

# Check Node has labels
kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'
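
A quicker spot check is to print the synced label keys as columns (the keys below are the ones used in the example above):

# Show selected labels as columns for every Node
kubectl get nodes -L env,role,topology.kubernetes.io/zone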

2) Sync annotations from Machines to Nodes

Step 1: Enable annotation sync on the controller manager

Add the following argument to the manager container of the capi-controller-manager Deployment to specify which Machine annotations to sync to Nodes:

spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --additional-sync-machine-annotations=example.com/owner,example.com/runtime

Replace the comma-separated list with the annotations you want to sync.

INFO

The --additional-sync-machine-annotations flag supports regex matching.
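
Once the controller manager has rolled out, you can confirm it picked up both sync flags (capi-system is an assumed namespace, and the container index assumes manager is the only container):

# Print the args of the controller manager container
kubectl get deployment capi-controller-manager -n capi-system \
  -o jsonpath='{.spec.template.spec.containers[0].args}'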

Step 2: Add annotations on Machine templates

  • For workload nodes (MachineDeployment): set annotations on .spec.template.metadata.annotations.

  • For control plane nodes (KubeadmControlPlane): set annotations on .spec.machineTemplate.metadata.annotations.

    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: <md-name>
      namespace: <namespace>
    spec:
      clusterName: <cluster-name>
      template:
        metadata:
          annotations:
            example.com/owner: team-a
            example.com/runtime: containerd
        spec: {}
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: <kcp-name>
      namespace: <namespace>
    spec:
      version: <k8s-version>
      clusterName: <cluster-name>
      machineTemplate:
        metadata:
          annotations:
            example.com/owner: platform
            example.com/runtime: containerd
        spec: {}
      kubeadmConfigSpec: {}
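
For an existing KubeadmControlPlane you can also merge the annotations in with a patch; the keys and values below mirror the example above and are placeholders:

# Merge example annotations into the control plane's Machine template
kubectl patch kubeadmcontrolplane <kcp-name> -n <namespace> --type merge \
  -p '{"spec":{"machineTemplate":{"metadata":{"annotations":{"example.com/owner":"platform"}}}}}'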

Verify

# Check Machine has annotations
kubectl get machine -n <namespace> -l cluster.x-k8s.io/cluster-name=<cluster-name> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'

# Check Node has annotations
kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'
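
For a single Node, kubectl describe gives a more readable view of the synced annotations:

# Inspect the Annotations section of one Node
kubectl describe node <node-name>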

3) Apply taints from Machines to Nodes

Taints are configured directly on Machine templates so they are applied to the Node during registration.

  • For workload nodes (MachineDeployment): set taints on .spec.template.taints.

  • For control plane nodes (KubeadmControlPlane): set taints on .spec.machineTemplate.taints.

    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: <md-name>
      namespace: <namespace>
    spec:
      clusterName: <cluster-name>
      template:
        metadata: {}
        taints:
        - key: dedicated
          value: db
          effect: NoSchedule
        spec: {}
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: <kcp-name>
      namespace: <namespace>
    spec:
      version: <k8s-version>
      clusterName: <cluster-name>
      machineTemplate:
        metadata: {}
        taints:
        - key: custom-taint-key
          effect: NoSchedule
        spec: {}
      kubeadmConfigSpec: {}
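
Workloads that should still schedule onto the tainted nodes need a matching toleration. A minimal sketch for the dedicated=db:NoSchedule taint from the MachineDeployment example above (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: db-client   # placeholder name
spec:
  tolerations:
  # Matches the dedicated=db:NoSchedule taint set on the Machine template
  - key: dedicated
    operator: Equal
    value: db
    effect: NoSchedule
  containers:
  - name: app
    image: <your-image>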

Verify

# Check Machine has taints
kubectl get machine -n <namespace> -l cluster.x-k8s.io/cluster-name=<cluster-name> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Check Node has taints
kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
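
kubectl describe prints taints in a more readable form:

# The Taints: field lists taints applied during registration
kubectl describe node <node-name> | grep -i taints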

Notes

  • Ensure field names are spelled correctly: metadata.labels, metadata.annotations.
  • Whenever you add new label or annotation keys to Machines, update the controller manager's sync lists, unless an existing regex pattern already matches them.
  • After changes, allow reconciliation to complete; Node updates may take a short time to appear.