
drain_nodes

Drain nodes matching the given label or name, so that no pods are scheduled on them any longer and running pods are evicted.

Type: action
Module: chaosk8s.node.actions
Name: drain_nodes
Return: boolean

Usage

JSON

{
  "name": "drain-nodes",
  "type": "action",
  "provider": {
    "type": "python",
    "module": "chaosk8s.node.actions",
    "func": "drain_nodes"
  }
}

YAML

name: drain-nodes
provider:
  func: drain_nodes
  module: chaosk8s.node.actions
  type: python
type: action

Arguments

| Name | Type | Default | Required | Title | Description |
|------|------|---------|----------|-------|-------------|
| name | string | | No | Name | Specify a node name or a label selector below |
| label_selector | string | | No | Label Selector | Selectors to target the appropriate nodes |
| delete_pods_with_local_storage | boolean | false | No | Delete Pods with a Local Storage | Whether to also drain nodes where pods have a local storage attached |
| timeout | integer | 120 | No | Timeout | Timeout for the operation. Make sure to give plenty of time based on the nodes' workload |
| count | integer | 1 | No | Nodes Amount | The number of nodes to drain |
| pod_label_selector | string | | No | Per Pod Selection | Select nodes running the matching pods selection |
| pod_namespace | string | | No | Pod Namespace | Pods selection namespace |
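For example, a sketch of the same action with a few arguments filled in might look like the YAML below. The label selector, count, and timeout values are illustrative assumptions, not values mandated by the action; adjust them to your cluster.

name: drain-worker-nodes
type: action
provider:
  type: python
  module: chaosk8s.node.actions
  func: drain_nodes
  arguments:
    # illustrative label; use whatever label identifies your target nodes
    label_selector: node-role.kubernetes.io/worker
    # drain two matching nodes instead of the default of one
    count: 2
    # allow extra time for busy nodes to finish evicting their pods
    timeout: 300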

This action does a similar job to kubectl drain --ignore-daemonsets, or to kubectl drain --delete-local-data --ignore-daemonsets when delete_pods_with_local_storage is set to True. There is no equivalent to the kubectl drain --force flag.

You probably want to call uncordon from your experiment’s rollbacks.
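As a minimal sketch, a rollbacks block could revert the drain with the companion uncordon_node action from the same module; the label selector below is the same illustrative value used above and is an assumption about your cluster.

rollbacks:
- name: uncordon-nodes
  type: action
  provider:
    type: python
    module: chaosk8s.node.actions
    func: uncordon_node
    arguments:
      # match the nodes drained earlier; illustrative label
      label_selector: node-role.kubernetes.io/worker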

Signature

def drain_nodes(name: str = None,
                label_selector: str = None,
                delete_pods_with_local_storage: bool = False,
                timeout: int = 120,
                secrets: Dict[str, Dict[str, str]] = None,
                count: int = None,
                pod_label_selector: str = None,
                pod_namespace: str = None) -> bool:
    pass