All Pods Across Namespaces:
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE   IP            NODE                          NOMINATED NODE   READINESS GATES
kube-system   aws-node-fdhhs                                            2/2     Running   0          31m   10.0.30.86    ip-10-0-30-86.ec2.internal
kube-system   aws-node-kbzjs                                            2/2     Running   0          31m   10.0.12.89    ip-10-0-12-89.ec2.internal
kube-system   aws-node-pj7p4                                            2/2     Running   0          31m   10.0.32.237   ip-10-0-32-237.ec2.internal
kube-system   aws-node-qbm74                                            2/2     Running   0          15m   10.0.19.14    ip-10-0-19-14.ec2.internal
kube-system   aws-node-wr7wq                                            2/2     Running   0          16m   10.0.32.224   ip-10-0-32-224.ec2.internal
kube-system   coredns-54d6f577c6-6m57j                                  1/1     Running   0          36m   10.0.20.185   ip-10-0-30-86.ec2.internal
kube-system   coredns-54d6f577c6-bbr2m                                  1/1     Running   0          36m   10.0.5.122    ip-10-0-12-89.ec2.internal
kube-system   eks-pod-identity-agent-27jgc                              1/1     Running   0          31m   10.0.12.89    ip-10-0-12-89.ec2.internal
kube-system   eks-pod-identity-agent-gmjzf                              1/1     Running   0          15m   10.0.19.14    ip-10-0-19-14.ec2.internal
kube-system   eks-pod-identity-agent-l65v5                              1/1     Running   0          31m   10.0.30.86    ip-10-0-30-86.ec2.internal
kube-system   eks-pod-identity-agent-qllvp                              1/1     Running   0          16m   10.0.32.224   ip-10-0-32-224.ec2.internal
kube-system   eks-pod-identity-agent-zrkfh                              1/1     Running   0          31m   10.0.32.237   ip-10-0-32-237.ec2.internal
kube-system   karpenter-556d8dc5d5-48pqh                                1/1     Running   0          30m   10.0.32.133   ip-10-0-32-237.ec2.internal
kube-system   kube-proxy-6qr48                                          1/1     Running   0          32m   10.0.32.237   ip-10-0-32-237.ec2.internal
kube-system   kube-proxy-gd792                                          1/1     Running   0          15m   10.0.19.14    ip-10-0-19-14.ec2.internal
kube-system   kube-proxy-jwrjc                                          1/1     Running   0          16m   10.0.32.224   ip-10-0-32-224.ec2.internal
kube-system   kube-proxy-n7sc4                                          1/1     Running   0          32m   10.0.30.86    ip-10-0-30-86.ec2.internal
kube-system   kube-proxy-shk5k                                          1/1     Running   0          32m   10.0.12.89    ip-10-0-12-89.ec2.internal
monitoring    alertmanager-prometheus-operator-kube-p-alertmanager-0    2/2     Running   0          26m   10.0.30.23    ip-10-0-30-86.ec2.internal
monitoring    prometheus-operator-kube-p-operator-6b795b97b6-zschl      1/1     Running   0          26m   10.0.7.0      ip-10-0-12-89.ec2.internal
monitoring    prometheus-operator-kube-state-metrics-7d7756cc6-bbssw    1/1     Running   0          26m   10.0.18.35    ip-10-0-30-86.ec2.internal
monitoring    prometheus-operator-prometheus-node-exporter-8wncw        1/1     Running   0          16m   10.0.32.224   ip-10-0-32-224.ec2.internal
monitoring    prometheus-operator-prometheus-node-exporter-ktksd        1/1     Running   0          26m   10.0.32.237   ip-10-0-32-237.ec2.internal
monitoring    prometheus-operator-prometheus-node-exporter-sqxhq        1/1     Running   0          26m   10.0.12.89    ip-10-0-12-89.ec2.internal
monitoring    prometheus-operator-prometheus-node-exporter-t9gbc        1/1     Running   0          15m   10.0.19.14    ip-10-0-19-14.ec2.internal
monitoring    prometheus-operator-prometheus-node-exporter-vc9zt        1/1     Running   0          26m   10.0.30.86    ip-10-0-30-86.ec2.internal
monitoring    prometheus-prometheus-operator-kube-p-prometheus-0        2/2     Running   0          26m   10.0.11.183   ip-10-0-12-89.ec2.internal

Karpenter Pods Status:
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-fdhhs                                            2/2     Running   0          30m
kube-system   aws-node-kbzjs                                            2/2     Running   0          30m
kube-system   aws-node-pj7p4                                            2/2     Running   0          30m
kube-system   aws-node-qbm74                                            2/2     Running   0          14m
kube-system   aws-node-wr7wq                                            2/2     Running   0          15m
kube-system   aws-node-xpsfs                                            2/2     Running   0          17m
kube-system   coredns-54d6f577c6-6m57j                                  1/1     Running   0          34m
kube-system   coredns-54d6f577c6-bbr2m                                  1/1     Running   0          34m
kube-system   eks-pod-identity-agent-27jgc                              1/1     Running   0          30m
kube-system   eks-pod-identity-agent-6pc2v                              1/1     Running   0          17m
kube-system   eks-pod-identity-agent-gmjzf                              1/1     Running   0          14m
kube-system   eks-pod-identity-agent-l65v5                              1/1     Running   0          30m
kube-system   eks-pod-identity-agent-qllvp                              1/1     Running   0          15m
kube-system   eks-pod-identity-agent-zrkfh                              1/1     Running   0          30m
kube-system   karpenter-556d8dc5d5-48pqh                                1/1     Running   0          29m
kube-system   kube-proxy-6qr48                                          1/1     Running   0          31m
kube-system   kube-proxy-gd792                                          1/1     Running   0          14m
kube-system   kube-proxy-jwrjc                                          1/1     Running   0          15m
kube-system   kube-proxy-n7sc4                                          1/1     Running   0          31m
kube-system   kube-proxy-shk5k                                          1/1     Running   0          31m
kube-system   kube-proxy-vlvjp                                          1/1     Running   0          17m
monitoring    alertmanager-prometheus-operator-kube-p-alertmanager-0    2/2     Running   0          25m
monitoring    prometheus-operator-kube-p-operator-6b795b97b6-zschl      1/1     Running   0          25m
monitoring    prometheus-operator-kube-state-metrics-7d7756cc6-bbssw    1/1     Running   0          25m
monitoring    prometheus-operator-prometheus-node-exporter-8wncw        1/1     Running   0          15m
monitoring    prometheus-operator-prometheus-node-exporter-f6zls        1/1     Running   0          17m
monitoring    prometheus-operator-prometheus-node-exporter-ktksd        1/1     Running   0          25m
monitoring    prometheus-operator-prometheus-node-exporter-sqxhq        1/1     Running   0          25m
monitoring    prometheus-operator-prometheus-node-exporter-t9gbc        1/1     Running   0          14m
monitoring    prometheus-operator-prometheus-node-exporter-vc9zt        1/1     Running   0          25m
monitoring    prometheus-prometheus-operator-kube-p-prometheus-0        2/2     Running   0          25m

CAS Pods Status:
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE
cas           cas-aws-cluster-autoscaler-5bd47d57d6-bb5vk               1/1     Running   0          24m
kube-system   aws-node-8nxv6                                            2/2     Running   0          28m
kube-system   aws-node-l86bz                                            2/2     Running   0          28m
kube-system   coredns-54d6f577c6-5s585                                  1/1     Running   0          33m
kube-system   coredns-54d6f577c6-gr4nt                                  1/1     Running   0          33m
kube-system   eks-pod-identity-agent-9v52v                              1/1     Running   0          28m
kube-system   eks-pod-identity-agent-xc4d4                              1/1     Running   0          28m
kube-system   kube-proxy-pphxl                                          1/1     Running   0          29m
kube-system   kube-proxy-t9vhz                                          1/1     Running   0          29m
monitoring    alertmanager-prometheus-operator-kube-p-alertmanager-0    2/2     Running   0          9m42s
monitoring    prometheus-operator-kube-p-operator-6b795b97b6-hc27b      1/1     Running   0          24m
monitoring    prometheus-operator-kube-state-metrics-7d7756cc6-7srsq    1/1     Running   0          9m44s
monitoring    prometheus-operator-prometheus-node-exporter-gszb6        1/1     Running   0          24m
monitoring    prometheus-operator-prometheus-node-exporter-vxgtq        1/1     Running   0          24m
monitoring    prometheus-prometheus-operator-kube-p-prometheus-0        2/2     Running   0          24m

Karpenter Pods in Namespace:
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   karpenter-556d8dc5d5-48pqh   1/1     Running   0          30m

CAS Pods in Namespace:

Karpenter Logs:
{"level":"INFO","time":"2024-12-13T16:47:03.609Z","logger":"controller","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-10-0-37-133.ec2.internal/m6a.2xlarge/spot","commit":"5bdf9c3","controller":"disruption","namespace":"","name":"","reconcileID":"f3ca927b-9933-43b5-aea7-f0dfd36070dd","command-id":"3b2639c3-3e43-4077-a168-a7dd2b8557bc","reason":"empty"} {"level":"INFO","time":"2024-12-13T16:47:04.396Z","logger":"controller","message":"tainted node","commit":"5bdf9c3","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-0-37-133.ec2.internal"},"namespace":"","name":"ip-10-0-37-133.ec2.internal","reconcileID":"bffac6a3-88bd-4b16-8baa-6c318f9d298a","taint.Key":"karpenter.sh/disrupted","taint.Value":"","taint.Effect":"NoSchedule"} {"level":"INFO","time":"2024-12-13T16:47:46.351Z","logger":"controller","message":"deleted node","commit":"5bdf9c3","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-0-37-133.ec2.internal"},"namespace":"","name":"ip-10-0-37-133.ec2.internal","reconcileID":"192dc4cf-9a70-4722-8a3d-68460ad70599"} {"level":"INFO","time":"2024-12-13T16:47:46.600Z","logger":"controller","message":"deleted nodeclaim","commit":"5bdf9c3","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-fd98d"},"namespace":"","name":"default-fd98d","reconcileID":"53babd7d-084a-4928-bf4d-543865da1dda","Node":{"name":"ip-10-0-37-133.ec2.internal"},"provider-id":"aws:///us-east-1c/i-09ba7b26c3aa8ad77"} {"level":"INFO","time":"2024-12-13T16:48:08.700Z","logger":"controller","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-10-0-36-241.ec2.internal/t3.small/spot","commit":"5bdf9c3","controller":"disruption","namespace":"","name":"","reconcileID":"d71b7f73-2934-4aa2-b551-6c2c3c8ecdb5","command-id":"f3c810f8-f2ce-462e-99a5-07de74c24c17","reason":"empty"} 
{"level":"INFO","time":"2024-12-13T16:48:09.453Z","logger":"controller","message":"tainted node","commit":"5bdf9c3","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-0-36-241.ec2.internal"},"namespace":"","name":"ip-10-0-36-241.ec2.internal","reconcileID":"5b4f092f-ebe2-4fb0-8cb1-00d1d884c20b","taint.Key":"karpenter.sh/disrupted","taint.Value":"","taint.Effect":"NoSchedule"} {"level":"INFO","time":"2024-12-13T16:49:17.278Z","logger":"controller","message":"deleted node","commit":"5bdf9c3","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-0-36-241.ec2.internal"},"namespace":"","name":"ip-10-0-36-241.ec2.internal","reconcileID":"d5793650-4fe4-4988-9d05-813739b3e1dd"} {"level":"INFO","time":"2024-12-13T16:49:17.513Z","logger":"controller","message":"deleted nodeclaim","commit":"5bdf9c3","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-krlhj"},"namespace":"","name":"default-krlhj","reconcileID":"dc1aed50-5782-403f-b0b3-27dc462e3202","Node":{"name":"ip-10-0-36-241.ec2.internal"},"provider-id":"aws:///us-east-1c/i-0187c64a60a544111"} {"level":"INFO","time":"2024-12-13T16:49:33.765Z","logger":"controller","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-10-0-32-224.ec2.internal/m6idn.4xlarge/spot","commit":"5bdf9c3","controller":"disruption","namespace":"","name":"","reconcileID":"d6e0747c-6868-42c3-a208-bb0d690ee830","command-id":"62bc9b1c-3846-4db7-bbf0-9a4b6a272331","reason":"empty"} {"level":"INFO","time":"2024-12-13T16:49:34.524Z","logger":"controller","message":"tainted 
node","commit":"5bdf9c3","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-0-32-224.ec2.internal"},"namespace":"","name":"ip-10-0-32-224.ec2.internal","reconcileID":"e2266e27-518e-4728-bc62-90eac217424e","taint.Key":"karpenter.sh/disrupted","taint.Value":"","taint.Effect":"NoSchedule"} CAS Logs: Failed to get logs Karpenter Pod Details: Name: karpenter Namespace: kube-system CreationTimestamp: Fri, 13 Dec 2024 17:19:42 +0100 Labels: app.kubernetes.io/instance=karpenter app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=karpenter app.kubernetes.io/version=1.0.0 helm.sh/chart=karpenter-1.0.0 Annotations: deployment.kubernetes.io/revision: 1 meta.helm.sh/release-name: karpenter meta.helm.sh/release-namespace: kube-system Selector: app.kubernetes.io/instance=karpenter,app.kubernetes.io/name=karpenter Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 1 max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/instance=karpenter app.kubernetes.io/name=karpenter Service Account: karpenter Containers: controller: Image: public.ecr.aws/karpenter/controller:1.0.0@sha256:1eb1073b9f4ed804634aabf320e4d6e822bb61c0f5ecfd9c3a88f05f1ca4c5c5 Ports: 8080/TCP, 8001/TCP, 8443/TCP, 8081/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP SeccompProfile: RuntimeDefault Limits: cpu: 1 memory: 1Gi Requests: cpu: 1 memory: 1Gi Liveness: http-get http://:http/healthz delay=30s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:http/readyz delay=5s timeout=30s period=10s #success=1 #failure=3 Environment: KUBERNETES_MIN_VERSION: 1.19.0-0 KARPENTER_SERVICE: karpenter WEBHOOK_PORT: 8443 WEBHOOK_METRICS_PORT: 8001 DISABLE_WEBHOOK: false LOG_LEVEL: info METRICS_PORT: 8080 HEALTH_PROBE_PORT: 8081 SYSTEM_NAMESPACE: (v1:metadata.namespace) MEMORY_LIMIT: 1073741824 (limits.memory) FEATURE_GATES: SpotToSpotConsolidation=false 
BATCH_MAX_DURATION: 10s BATCH_IDLE_DURATION: 1s CLUSTER_NAME: karpenter-eks CLUSTER_ENDPOINT: https://2CAEFC22B3C8956DEFBC09835E5E602D.gr7.us-east-1.eks.amazonaws.com VM_MEMORY_OVERHEAD_PERCENT: 0.075 INTERRUPTION_QUEUE: Karpenter-karpenter-eks RESERVED_ENIS: 0 Mounts: Volumes: Topology Spread Constraints: topology.kubernetes.io/zone:DoNotSchedule when max skew 1 is exceeded for selector app.kubernetes.io/instance=karpenter,app.kubernetes.io/name=karpenter Priority Class Name: system-cluster-critical Node-Selectors: kubernetes.io/os=linux Tolerations: CriticalAddonsOnly op=Exists Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: NewReplicaSet: karpenter-556d8dc5d5 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 30m deployment-controller Scaled up replica set karpenter-556d8dc5d5 to 1 CAS Pod Details: Failed to get pod details Karpenter Resources:
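As an aside, the structured JSON lines in the Karpenter Logs section above can be summarized without cluster access. Below is a minimal sketch in plain Python (no Kubernetes dependencies); the field names ("controller", "message", "reason") are taken from the log lines above, and the sample lines are trimmed copies of two of them:

```python
import json
import re

# The disruption message embeds a node descriptor of the form
# "<node>/<instance-type>/<capacity-type>", e.g.
# "... terminating 1 nodes (0 pods) ip-10-0-37-133.ec2.internal/m6a.2xlarge/spot"
NODE_RE = re.compile(r"(\S+\.ec2\.internal)/(\S+)/(\S+)$")

def summarize_disruptions(log_lines):
    """Return (node, instance_type, capacity_type, reason) tuples for each
    'disrupting nodeclaim(s)' event found in the given JSON log lines."""
    events = []
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        # Only the disruption controller emits the termination decisions.
        if entry.get("controller") != "disruption":
            continue
        m = NODE_RE.search(entry.get("message", ""))
        if m:
            events.append((m.group(1), m.group(2), m.group(3), entry.get("reason")))
    return events

# Trimmed copies of log lines from the section above.
sample = '''
{"level":"INFO","controller":"disruption","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-10-0-37-133.ec2.internal/m6a.2xlarge/spot","reason":"empty"}
{"level":"INFO","controller":"node.termination","message":"tainted node"}
{"level":"INFO","controller":"disruption","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-10-0-36-241.ec2.internal/t3.small/spot","reason":"empty"}
'''

for node, itype, ctype, reason in summarize_disruptions(sample.splitlines()):
    print(f"{node}  {itype}/{ctype}  reason={reason}")
```

Fed the full log above, this would show that all three terminations were "empty"-node consolidations of spot instances, which matches the expected Karpenter behavior once pods drain off a node.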