743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #6: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #7: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #8: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #9: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #10: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #11: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #12: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #13: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #14: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #15: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
waiting.go:203: Check #16: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #17: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #18: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #19: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #20: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
waiting.go:203: Check #21: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #22: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #23: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #24: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #25: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
waiting.go:203: Check #26: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #27: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #28: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #29: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #30: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
waiting.go:203: Check #31: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #32: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #33: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #34: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #35: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
waiting.go:203: Check #36: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #37: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #38: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #39: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #40: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
waiting.go:203: Check #41: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #42: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #43: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #44: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #45: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329:   Conditions:
diagnostics.go:331:   - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331:   - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331:   - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331:   - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336:   Resources:
diagnostics.go:338:   - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339:   - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340:   - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k waiting.go:203: Check #46: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #47: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #48: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #49: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #50: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 
node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k waiting.go:203: Check #51: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #52: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #53: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #54: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation 
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #55: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k waiting.go:203: Check #56: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #57: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #58: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #59: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #60: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: 
NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k waiting.go:203: Check #61: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #62: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #63: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #64: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #65: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 
node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8 diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv waiting.go:203: Check #66: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #67: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #68: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #69: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: 
NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #70: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8 diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv waiting.go:203: Check #71: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #72: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} waiting.go:203: Check #73: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #74: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable waiting.go:203: Check #75: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable diagnostics.go:355: 📊 NodeClaims in cluster: 3 diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status: diagnostics.go:329: Conditions: diagnostics.go:331: - Type: ValidationSucceeded, Status: True, 
Reason: ValidationSucceeded, Message: diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message: diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message: diagnostics.go:336: Resources: diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI} diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI} diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI} diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload: diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason= diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv
waiting.go:203: Check #76: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #77: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #78: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #79: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #80: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv
waiting.go:203: Check #81: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #82: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #83: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #84: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #85: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv
waiting.go:203: Check #86: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #87: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
waiting.go:203: Check #88: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #89: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
waiting.go:203: Check #90: Deployment pdb-test-1758886743-workload: 0/3 replicas ready, 0 available, 3 unavailable
diagnostics.go:355: 📊 NodeClaims in cluster: 3
diagnostics.go:328: 📊 NodePool pdb-test-1758886743-nodepool status:
diagnostics.go:329: Conditions:
diagnostics.go:331: - Type: ValidationSucceeded, Status: True, Reason: ValidationSucceeded, Message:
diagnostics.go:331: - Type: NodeClassReady, Status: True, Reason: NodeClassReady, Message:
diagnostics.go:331: - Type: NodeRegistrationHealthy, Status: Unknown, Reason: AwaitingReconciliation, Message: object is awaiting reconciliation
diagnostics.go:331: - Type: Ready, Status: True, Reason: Ready, Message:
diagnostics.go:336: Resources:
diagnostics.go:338: - CPU: {{0 0} {} 0 DecimalSI}
diagnostics.go:339: - Memory: {{0 0} {} 0 DecimalSI}
diagnostics.go:340: - Pods: {{0 0} {} 0 DecimalSI}
diagnostics.go:47: 🔍 Diagnostic information for deployment pdb-test-1758886743-workload:
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-5rvnk
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-zpkmt
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-fs9p8 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fsrt8
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-k26bq
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-mfdpj
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-k5p6v event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-v7d6x
diagnostics.go:50: Pod pdb-test-1758886743-workload-789b6d679f-zm927 status: Phase=Pending, Reason=
diagnostics.go:55: Pod pdb-test-1758886743-workload-789b6d679f-zm927 condition PodScheduled: False - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: FailedScheduling - 0/2 nodes are available: 1 node(s) had untolerated taint {evict-test: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-sfc4s
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-fj49k
diagnostics.go:72: Pod pdb-test-1758886743-workload-789b6d679f-zm927 event: Nominated - Pod should schedule on: nodeclaim/pdb-test-1758886743-nodepool-wj4pv
waiting.go:225:
    Error Trace: /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/waiting.go:225
                 /__w/karpenter-provider-ibm-cloud/karpenter-provider-ibm-cloud/test/e2e/scheduling_test.go:196
    Error:       Received unexpected error: context deadline exceeded
    Test:        TestE2EPodDisruptionBudget
    Messages:    Deployment pods should be scheduled and running within timeout
scheduling_test.go:184: Running deferred cleanup for test: pdb-test-1758886743
cleanup.go:109: Starting cleanup for test: pdb-test-1758886743
cleanup.go:158: Cleaning up PodDisruptionBudget resources for test pdb-test-1758886743
cleanup.go:158: Cleaning up Deployment resources for test pdb-test-1758886743
cleanup.go:182: Deleting Deployment: pdb-test-1758886743-workload
cleanup.go:69: Waiting for Deployment deletion (max 2m0s)...
cleanup.go:94: ✅ All Deployment resources deleted
cleanup.go:158: Cleaning up NodeClaim resources for test pdb-test-1758886743
cleanup.go:158: Cleaning up NodePool resources for test pdb-test-1758886743
cleanup.go:204: Deleting NodePool: pdb-test-1758886743-nodepool
cleanup.go:69: Waiting for NodePool deletion (max 8m0s)...
cleanup.go:94: ✅ All NodePool resources deleted
cleanup.go:158: Cleaning up IBMNodeClass resources for test pdb-test-1758886743
cleanup.go:250: ✅ Cleanup completed for test pdb-test-1758886743
--- FAIL: TestE2EPodDisruptionBudget (934.42s)
FAIL
FAIL	github.com/pfeifferj/karpenter-provider-ibm-cloud/test/e2e	934.446s
FAIL