[Cross-reference residue from kernel-open/nvidia-uvm/uvm_channel_test.c. Recoverable references: the tracking-semaphore assertion completed_value <= channel->tracking_sem.queued_value ("...completed_value 0x%llx > queued_value 0x%llx"); channel stress entry points uvm_test_channel_stress_stream, uvm_test_channel_stress_update_channels, uvm_test_channel_noop_push, uvm_test_channel_stress_key_rotation; Confidential Computing key-rotation helpers force_key_rotation, channel_stress_key_rotation_cpu_encryption, channel_stress_key_rotation_cpu_decryption, channel_stress_key_rotation_rotate, test_channel_key_rotation_cpu_decryption, and uvm_conf_computing_is_key_rotation_enabled_in_pool(cpu_to_gpu_pool); and the CPU-to-GPU staging calls uvm_mem_alloc_vidmem(size, gpu, &plain_gpu), uvm_mem_map_gpu_kernel(plain_gpu, gpu), uvm_conf_computing_util_memcopy_cpu_to_gpu(gpu, plain_gpu_address, initial_plain_cpu, size, NULL, "CPU > GPU").]
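The residue preserves a complete staging sequence, so a minimal sketch of how those calls plausibly fit together follows. Only the three calls and their argument lists appear verbatim above; everything else, including the function name stage_plaintext_on_gpu, the header names, the error handling, uvm_mem_gpu_address_virtual_kernel(), and uvm_mem_free(), is assumed from common UVM driver conventions, not taken from this document.

```c
// Hedged reconstruction: only the three staging calls and their arguments
// appear verbatim in the residue; names marked "assumed" come from general
// UVM driver conventions and may not match the real test.
#include "uvm_gpu.h"             // assumed header for uvm_gpu_t
#include "uvm_mem.h"             // assumed header for uvm_mem_* helpers
#include "uvm_conf_computing.h"  // assumed header for the CC memcopy helper

static NV_STATUS stage_plaintext_on_gpu(uvm_gpu_t *gpu,  // assumed name
                                        void *initial_plain_cpu,
                                        size_t size)
{
    NV_STATUS status;
    uvm_mem_t *plain_gpu = NULL;
    uvm_gpu_address_t plain_gpu_address;

    // Back the plaintext with a vidmem allocation (verbatim call).
    status = uvm_mem_alloc_vidmem(size, gpu, &plain_gpu);
    if (status != NV_OK)
        return status;

    // Map the allocation into the GPU's kernel address space (verbatim call).
    status = uvm_mem_map_gpu_kernel(plain_gpu, gpu);
    if (status != NV_OK)
        goto out;

    // Assumed helper to obtain plain_gpu_address; the residue names the
    // variable but not how it is produced.
    plain_gpu_address = uvm_mem_gpu_address_virtual_kernel(plain_gpu, gpu);

    // Staging copy from the CPU buffer into vidmem under Confidential
    // Computing; the NULL tracker and the "CPU > GPU" push label are verbatim.
    status = uvm_conf_computing_util_memcopy_cpu_to_gpu(gpu,
                                                        plain_gpu_address,
                                                        initial_plain_cpu,
                                                        size,
                                                        NULL,
                                                        "CPU > GPU");

out:
    uvm_mem_free(plain_gpu);  // assumed cleanup
    return status;
}
```

The surrounding fragments (cpu_to_gpu_pool, force_key_rotation, test_channel_key_rotation_cpu_decryption) suggest this staging feeds a key-rotation decryption test, but that wiring is not recoverable here.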