d: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #5: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #6: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #7: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #8: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #9: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #10: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #11: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #12: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #13: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #14: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #15: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
waiting.go:203: Check #16: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #17: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #18: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #19: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #20: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
waiting.go:203: Check #21: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #22: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #23: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #24: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #25: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
waiting.go:203: Check #26: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #27: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #28: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #29: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #30: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
waiting.go:203: Check #31: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #32: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #33: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #34: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #35: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l
waiting.go:203: Check #36: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #37: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #38: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #39: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #40: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l
waiting.go:203: Check #41: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #42: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #43: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #44: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #45: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l waiting.go:203: Check #46: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #47: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #48: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #49: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #50: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 
3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload: diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2 diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l waiting.go:203: Check #51: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #52: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #53: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #54: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: 
NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #55: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload: diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2 diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l waiting.go:203: Check #56: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #57: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #58: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #59: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #60: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 
3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload: diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2 diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l waiting.go:203: Check #61: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #62: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #63: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #64: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #65: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 
3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload: diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-jbmgn diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-bbtxn diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-r5kj5 waiting.go:203: Check #66: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #67: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #68: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #69: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: 
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #70: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload: diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-jbmgn diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2 diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-bbtxn diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason= diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-r5kj5
waiting.go:203: Check #71: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #72: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #73: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #74: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #75: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool node-affinity-1758889659-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment node-affinity-1758889659-workload:
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-6zwfv
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-gspvv
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-9w55f event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-jbmgn
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-tv988
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-8znj2
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-jznch event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-bbtxn
diagnostics.go:50: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq status: Phase=Pending, Reason=
diagnostics.go:55: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-r5kj5
waiting.go:203: Check #76: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #77: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #78: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #79: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #80: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #81: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #82: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #83: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #84: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #85: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #86: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #87: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #88: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #89: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #90: Deployment node-affinity-1758889659-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-kk86p
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-zd92l
diagnostics.go:72: Pod node-affinity-1758889659-workload-7c8cb864d8-pczrq event: Nominated - Pod should schedule on: nodeclaim/node-affinity-1758889659-nodepool-r5kj5
waiting.go:225:
    Error Trace:    /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/waiting.go:225
                    /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/scheduling_test.go:372
    Error:          Received unexpected error: context deadline exceeded
    Test:           TestE2ENodeAffinity
    Messages:       Deployment pods should be scheduled and running within timeout
--- FAIL: TestE2ENodeAffinity (934.15s)
FAIL
FAIL    github.com/pfeifferj/karpenter-provider-ibm-cloud/test/e2e    934.178s
FAIL