bility-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #5: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #6: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #7: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #8: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #9: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #10: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #11: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #12: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #13: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #14: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #15: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 waiting.go:203: Check #16: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #17: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #18: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #19: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #20: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 waiting.go:203: Check #21: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #22: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #23: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #24: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #25: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 waiting.go:203: Check #26: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #27: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #28: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #29: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #30: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 waiting.go:203: Check #31: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #32: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #33: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 1 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #34: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #35: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 waiting.go:203: Check #36: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 1 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #37: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #38: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #39: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #40: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #41: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #42: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #43: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #44: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #45: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #46: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #47: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #48: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #49: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #50: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #51: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #52: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #53: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #54: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #55: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #56: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #57: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #58: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #59: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #60: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #61: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #62: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #63: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #64: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #65: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 waiting.go:203: Check #66: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #67: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #68: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #69: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 1 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #70: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hskqg diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-nfht9 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-m22zp waiting.go:203: Check #71: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #72: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #73: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #74: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #75: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hskqg diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-nfht9 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-m22zp waiting.go:203: Check #76: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #77: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #78: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #79: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #80: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hskqg diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-nfht9 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-m22zp waiting.go:203: Check #81: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #82: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #83: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #84: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #85: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hskqg diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-nfht9 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-m22zp waiting.go:203: Check #86: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #87: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #88: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #89: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #90: Deployment drift-stability-1758894511-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool drift-stability-1758894511-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment drift-stability-1758894511-workload: diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-2f552 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-9rnmp diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-5sng8 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hskqg diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-vcn5w diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-jvq4z diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-m8qx4 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-nfht9 diagnostics.go:50: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 status: Phase=Pending, Reason= diagnostics.go:55: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-hmkn7 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-rcxw8 diagnostics.go:72: Pod drift-stability-1758894511-workload-5f5d96744f-xdlq7 event: Nominated - Pod should schedule on: nodeclaim/drift-stability-1758894511-nodepool-m22zp waiting.go:225: Error Trace: /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/waiting.go:225 /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/basic_workflow_test.go:171 Error: Received unexpected error: context deadline exceeded Test: TestE2EDriftStability Messages: Deployment pods should be scheduled and running within timeout --- FAIL: TestE2EDriftStability (934.19s) FAIL FAIL github.com/pfeifferj/karpenter-provider-ibm-cloud/test/e2e 934.228s FAIL