String constants, log messages, and call references recovered from the NVIDIA open kernel modules (kernel-open/), grouped by source file.

kernel-open/nvidia/nv-acpi.c: ACPI method support. _DSM evaluation goes through nv_acpi_evaluate_dsm_method (pAcpiDsmGuid), with handles resolved via acpi_walk_namespace and acpi_get_handle. nv_acpi_find_battery_info locates the _BIX/_BIF battery methods (bix_method_handle, bif_method_handle) and calls acpi_evaluate_object, logging "NVRM: Failed to evaluate battery's object", "NVRM: Battery object output buffer is null", and "NVRM: Battery method output is not package" (object_package, battery_technology_offset). The display-mux path handles the MXDS and MXDM methods (pMethodName; mux_dev_handle found via acpi_get_next_object plus acpi_evaluate_integer on _ADR; mux_arg) and logs "NVRM: %s: Unsupported ACPI method %s", "NVRM: %s: Call for %s ACPI method", "NVRM: %s Mux device handle not found", "NVRM: %s: Failed to evaluate %s method!", and "NVRM: %s: Invalid MUX data". _PR3 power-resource parsing reports "NVRM: Failed to evaluate _PR3 object", a null output buffer, a non-package object, and "NVRM: _PR3 object does not contain POWER Reference" (object_reference). nv_acpi_wmmx_method (mmx_params, field "outData" at nv-acpi.c:1119) logs "failed to get WMMX data, status 0x%x!" and "WMMX data invalid.". _DOD enumeration reports failed evaluation, invalid entries, and "_DOD data too large!" (dod, pOutData); _ROM reads (rom_arg, pInData, field "pOutData" at nv-acpi.c:972) report failed evaluation and "Invalid _ROM data". EDID retrieval walks to an LCD device (lcd_dev_handle; "NVRM: %s Found LCD: %llx" / "NVRM: %s LCD not found") and evaluates _DDC (largestEdidSize, ddc_arg0, pEdidBuffer at nv-acpi.c:906), logging "NVRM: %s: failed status: %08x" and "NVRM: %s: invalid argument(s)!" (pInParams).
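The flow behind these battery and display-object strings is the standard ACPICA evaluation pattern. A minimal sketch, assuming an already-resolved handle; the function name and the _DOD choice are illustrative, not the driver's own code:

```c
#include <linux/acpi.h>

/* Evaluate an ACPI method that returns a package, in the style of the
 * _BIX/_BIF/_DOD paths above. ACPICA allocates the output buffer. */
static int example_eval_package(acpi_handle dev_handle)
{
    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
    union acpi_object *obj;
    acpi_status status;

    status = acpi_evaluate_object(dev_handle, "_DOD", NULL, &output);
    if (ACPI_FAILURE(status))
        return -ENODEV;        /* "Failed to evaluate ... object" case */

    obj = output.pointer;
    if (!obj || obj->type != ACPI_TYPE_PACKAGE) {
        kfree(output.pointer); /* "output is not package" case */
        return -EINVAL;
    }

    pr_info("method returned %u package entries\n", obj->package.count);
    kfree(output.pointer);
    return 0;
}
```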
The _DSM call path allocates argument buffers with os_alloc_mem/os_free_mem (argument3 at nv-acpi.c:783) and evaluates the pathname \_SB.NPCF._DSM (dev_handle, dsm_params), logging "NVRM: %s: failed to evaluate _DSM method!" and, when nv_acpi_extract_object rejects the result, "NVRM: %s: DSM data invalid!". nv_acpi_nvif_method (nvif_params, localInParams, localOutDataSize, field "outData" at nv-acpi.c:631) logs "failed to get NVIF data, status 0x%x, function 0x%x, subFunction 0x%x!" and "NVIF data invalid, function 0x%x, subFunction 0x%x!". Result unpacking is shared between nv_acpi_extract_object, nv_acpi_extract_integer, nv_acpi_extract_buffer, and nv_acpi_extract_package (field "buffer" at nv-acpi.c:416 and :436). The file tracks method handles for NVIF, WMMX, _PSR, and NPCF (method_handle, handlesPresent) and wires up notifications two ways: register_acpi_notifier/unregister_acpi_notifier with an acpi_nb notifier_call (nv_acpi_object, nvl; "NVRM: nv_acpi_register_notifier: failed to install notifier") alongside nv_install_notifier/nv_uninstall_notifier, plus acpi_install_notify_handler/acpi_remove_notify_handler (notify_handler_installed; "NVRM: nv_acpi_methods_uninit: failed to remove event notification handler (%d)!"). Events are forwarded to rm_acpi_notify and rm_acpi_nvpcf_notify ("NVRM: %s: NVPCF event 0x%x is not supported"); nv_acpi_get_powersource feeds rm_power_source_change_event (ac_plugged), filtering on the "video" device_class. Stack frames come from nv_kmem_cache_alloc_stack/nv_kmem_cache_free_stack (pNvAcpiObject, sp).
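The register_acpi_notifier plumbing above follows the kernel's standard notifier-block pattern. A hedged sketch; the handler body and the "video" filter are illustrative:

```c
#include <linux/acpi.h>
#include <linux/notifier.h>

static int example_acpi_event(struct notifier_block *nb,
                              unsigned long val, void *data)
{
    struct acpi_bus_event *event = data;

    /* Filter on the ACPI device class, as the "video" string suggests. */
    if (!strcmp(event->device_class, "video"))
        pr_info("ACPI video event: type=0x%x data=0x%x\n",
                event->type, event->data);
    return NOTIFY_OK;
}

static struct notifier_block example_nb = {
    .notifier_call = example_acpi_event,
};

/* Pair register_acpi_notifier(&example_nb) at init with
 * unregister_acpi_notifier(&example_nb) at teardown, as in
 * nv_acpi_register_notifier(). */
```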
kernel-open/nvidia-caps-imex: a character device exposing IMEX channels. Setup: alloc_chrdev_region for "nvidia-caps-imex-channels" (dev_channel0), cdev_init/cdev_add, class registration, and device_create of "nvidia-caps-imex-channels!channel%d" via nv_caps_imex_add_channel0, announcing "nv-caps-imex channel0 created. Make sure you are aware of the IMEX security model.". Failure strings: "nv-caps-imex is already initialized.", "nv-caps-imex is disabled.", "nv-caps-imex failed to create cdev.", "nv-caps-imex failed to add cdev.", "nv-caps-imex failed to register class.", "nv-caps-imex failed to create channel0."; teardown uses nv_caps_imex_remove_channel0, device_destroy, class_destroy, cdev_del, and unregister_chrdev_region.

Memory/VM bookkeeping referenced from nv-linux.h: per-allocation page tracking ("NVRM: VM: %s:%d: 0x%p, %d page(s), count = %lld, page_table = 0x%p"; atomic64_read, atomic64_inc, atomic64_dec_and_test; free_list), kmem_cache_create with unique names (name_unique), completion helpers (completion_done, wait_for_completion_interruptible), swiotlb_in_use and rm_is_altstack_in_use checks, kmem_cache_free, and kernel-mapping helpers nv_remap_page_range, nv_adjust_pgprot (vm_page_prot), set_memory_encrypted/set_memory_decrypted, ioremap_wc/iounmap, and vfree.

Tegra SOC clock handling (soc_clk_handles, handles, clk_count; clkName via __clk_get_name): clk_prepare_enable/clk_disable_unprepare, clk_round_rate, clk_set_rate, and clk_get_rate (currFreqHz), with errors "NVRM: clk_prepare_enable failed with error: %d", "NVRM: clk_round_rate failed with error: %ld", "NVRM: clk_set_rate failed with error: %d", "NVRM: No clk handles for the dev", and "NVRM: nv_clk_get_handles, failed to find TEGRA_SOC_WHICH_CLK for %s". The clock-name table spans the nvdisplay hub/pclk tree (nvdisplayhub_clk, nvdisplay_disp_clk, nvdisplay_p0_clk through p7_clk), dpaux0 and fuse clocks, the DSI and SPPLL PLL outputs (dsipll_*, sppll0_*, sppll1_*), vpll0 through vpll7, raster generators rg0 through rg7, SOR link/pad/ref clocks (sor*_clk, sor_link*_clk, pre_sor*, sf0 through sf7), DSI lp/core/pixel/pad clocks, audio clocks (maud, aza_2xbit, aza_bit), mipi_cal and uart_fst_mipi_cal, plla/pllhub/disppll variants, and the core emc, sysclk, nvdclk, uprocclk, and gpc0clk through gpc2clk entries (numa_info, nv_platform_supports_numa).
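The round-then-set pattern suggested by the clk_round_rate()/clk_set_rate() error strings is the common-clock-framework idiom. A minimal sketch, assuming the caller already holds a valid struct clk; names are illustrative:

```c
#include <linux/clk.h>

static int example_set_clock(struct clk *clk, unsigned long target_hz)
{
    long rounded;
    int ret;

    ret = clk_prepare_enable(clk);
    if (ret) {
        pr_err("clk_prepare_enable failed with error: %d\n", ret);
        return ret;
    }

    /* Ask the framework what rate it can actually deliver first. */
    rounded = clk_round_rate(clk, target_hz);
    if (rounded < 0) {
        pr_err("clk_round_rate failed with error: %ld\n", rounded);
        clk_disable_unprepare(clk);
        return (int)rounded;
    }

    ret = clk_set_rate(clk, rounded);
    if (ret)
        pr_err("clk_set_rate failed with error: %d\n", ret);

    pr_info("current rate: %lu Hz\n", clk_get_rate(clk));
    return ret;
}
```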
kernel-open/nvidia/nv-dma.c: DMA mapping services. GEM-object import pins the exporter module (try_module_get/module_put; "NVRM: %s: Couldn't reference the GEM object's owner!") and takes drm_gem_object_get/drm_gem_object_put references (nv_dma_gem_object_put_unlocked); null arguments log "NVRM: %s: Import arguments are NULL!". MMIO ranges map through nv_dma_map_mmio/nv_dma_unmap_mmio using dma_map_resource/dma_unmap_resource and dma_mapping_error ("NVRM: %s: Failed to DMA map MMIO range [0x%llx-0x%llx]"), gated by nv_dma_use_map_resource and bounds-checked against the device resource (peer_pci_dev, nv_bar_index_to_os_bar_index, bar_index, pci_bus_address; "NVRM: %s: Not a PCI device", "NVRM: %s: Resource %u not valid", "NVRM: %s: Mapping requested (start = 0x%llx, page_count = 0x%llx) outside of resource bounds (start = 0x%llx, end = 0x%llx)"). Page mappings (nv_dma_map_pages/nv_dma_unmap_pages, va_array, user_pages; nv_dma_map_sgt/nv_dma_unmap_sgt, import_sgt) validate sizes against get_num_physpages ("NVRM: %s: DMA mapping request too large!", "NVRM: %s: DMA unmapping request too large!", "NVRM: %s: Requested to DMA unmap %llu pages, but there are %llu in the mapping") and split into contiguous and discontiguous paths (nv_dma_map_contig/nv_dma_unmap_contig, nv_dma_map_scatterlist/nv_dma_unmap_scatterlist, discontig, submap_count, dma_map). Every address is checked with nv_dma_is_addressable against the addressable_range ("NVRM: %s: DMA address not in addressable range of device (0x%llx, 0x%llx-0x%llx)" and the range-pair variant). Scatterlists are built with nv_create_dma_map_scatterlist (sg_alloc_table_from_pages, submaps, num_submaps, imported), mapped with nv_map_dma_map_scatterlist (dma_map_sg_attrs, sg_map_count), walked with nv_load_dma_map_scatterlist (sg_next, sg_addr, submap), and torn down with nv_unmap_dma_map_scatterlist/nv_destroy_dma_map_scatterlist (sg_free_table); allocation failures log "Failed to allocate nv_dma_map_t!", "Failed to allocate page array for DMA mapping!", "Failed to allocate DMA mapping scatterlist!", and "Failed to create a DMA mapping!".
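A hedged sketch of the scatterlist construction and mapping the strings above point at, using the same kernel APIs (sg_alloc_table_from_pages, dma_map_sg_attrs); the wrapper and its arguments are illustrative, not the driver's nv_create_dma_map_scatterlist():

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_pages(struct device *dev, struct page **pages,
                             unsigned int npages, struct sg_table *sgt)
{
    int ret;

    ret = sg_alloc_table_from_pages(sgt, pages, npages, 0,
                                    (unsigned long)npages << PAGE_SHIFT,
                                    GFP_KERNEL);
    if (ret)
        return ret;

    ret = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
                           DMA_BIDIRECTIONAL, 0);
    if (ret <= 0) {
        sg_free_table(sgt);
        return -EIO;
    }
    sgt->nents = ret; /* mapped (possibly merged) entry count */
    return 0;
}
```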
dma-buf import (nv_dma_import_dma_buf): dma_buf_get on the caller's fd ("Can't get dma_buf from fd!"), get_dma_buf, dma_buf_attach ("Can't attach dma_buf!"), and dma_buf_map_attachment (map_sgt; "Can't map dma attachment!"), with a fallback retry logged as "NVRM: nv_dma_import_dma_buf -Try RO [DMA_TO_DEVICE] only mapping"; teardown uses dma_buf_unmap_attachment, dma_buf_detach, and dma_buf_put (dma_attach, import_priv; "Import arguments are NULL!", "Can't allocate mem for nv_buf!").

kernel-open/nvidia/nv-dmabuf.c: dma-buf export. nv_dma_buf_create/nv_dma_buf_reuse validate the fd ("NVRM: failed to get dma-buf", "NVRM: Invalid dma-buf fd"), allocate per-file state via nv_dma_buf_alloc_file_private ("NVRM: failed to allocate dma-buf private"; total_objects, total_size, mapping_type, skip_iommu, allow_mmap, can_mmap, mig_info, map_attrs), duplicate memory handles (nv_dma_buf_dup_mem_handles/nv_dma_buf_undup_mem_handles, nv_dma_buf_get_phys_addresses/nv_dma_buf_put_phys_addresses), and export with dma_buf_export (exp_name "nvidia", nv_dmabuf, priv) and dma_buf_fd ("NVRM: failed to create dma-buf", "NVRM: failed to get dma-buf file descriptor"). Device references go through nvidia_dev_get/nvidia_dev_put and rm_dma_buf_get_client_and_device/rm_dma_buf_put_client_and_device, with "NVRM: mmap is not allowed for the specific handles" when mmap is disallowed. nv_dma_buf_mmap validates priv and can_mmap ("NVRM: nv_dma_buf_mmap: priv == NULL.", "mmap is not allowed can_mmap[%d]"), prints Vaddr_start/Vaddr_end/os_page_size/vm_pgoff/page_offset/page_prot/total_size/total_map_len diagnostics, locates the first page (off_in_range_array; "[nv_dma_buf_mmap-failed] Could not find first map page"), applies nv_encode_caching ("[nv_dma_buf_mmap-failed] i[%u] cache_type[%llx] memory_type[%d] page_prot[%x]") and nv_vm_flags_set/nv_vm_flags_clear, and maps each range by physical address and length (phy_addr, map_len; "remap_pfn_range - failed", per-range index/range_count trace). Attach-time checks reject unsupported topologies: nv_pci_is_valid_topology_for_direct_pci ("dma-buf attach failed: topology not supported for mapping type FORCE_PCIE"), nv_grdma_pci_topology_supported ("dma-buf attach failed: PCI topology not supported for dma-buf"), and importers that cannot handle MMIO without struct page.
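The importer-side flow at the top of this section is the stock dma-buf sequence. A minimal sketch under that assumption; the wrapper, its parameters, and the error handling shape are illustrative:

```c
#include <linux/dma-buf.h>
#include <linux/err.h>

static struct sg_table *example_import(struct device *dev, int fd,
                                       struct dma_buf **out_buf,
                                       struct dma_buf_attachment **out_att)
{
    struct dma_buf *buf = dma_buf_get(fd); /* "Can't get dma_buf from fd!" */
    struct dma_buf_attachment *att;
    struct sg_table *sgt;

    if (IS_ERR(buf))
        return ERR_CAST(buf);

    att = dma_buf_attach(buf, dev);        /* "Can't attach dma_buf!" */
    if (IS_ERR(att)) {
        dma_buf_put(buf);
        return ERR_CAST(att);
    }

    sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
    if (IS_ERR(sgt)) {                     /* "Can't map dma attachment!" */
        dma_buf_detach(buf, att);
        dma_buf_put(buf);
        return sgt;
    }

    *out_buf = buf;
    *out_att = att;
    return sgt; /* teardown: dma_buf_unmap_attachment, detach, put */
}
```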
Mapping proceeds per attachment through nv_dma_buf_map_pages/nv_dma_buf_map_pfns and their unmap counterparts, counting segments with nv_dma_buf_get_sg_count, building tables with sg_alloc_table, honoring dma_get_max_seg_size (dma_max_seg_size, dma_length), and dispatching to nv_dma_map_peer or nv_dma_map_non_pci_peer (nv_dma_unmap_peer). Physical-address refcounting uses nv_inc_and_check_one_phys_refcount, nv_dec_and_check_zero_phys_refcount, and nv_reset_phys_refcount (phys_refcount, is_one) under &priv->lock, with RM locks taken as needed (rm_acquire_api_lock/rm_release_api_lock, api_lock_taken; rm_acquire_gpu_lock/rm_release_gpu_lock, gpu_lock_taken; rm_acquire_all_gpus_lock/rm_release_all_gpus_lock; nv_dma_buf_acquire_gpu_lock/nv_dma_buf_release_gpu_lock). Handles are mapped with rm_dma_buf_map_mem_handle/rm_dma_buf_unmap_mem_handle (mem_info, mrangeMake/mrangeLimit, nv_put_phys_addresses) and duplicated with rm_dma_buf_dup_mem_handle/rm_dma_buf_undup_mem_handle (offsets, read_only_mem, h_memory, nv_dma_buf_undup_mem_handles_unlocked; uAddr, destByte, srcByte). A GPU-architecture table lists tu10x, tu11x, ga100, ga10x, ad10x, gh100, gb10x, gb10y, gb20x, and gb20y.

The sweep also touches generic kernel helpers pulled in from headers: scheduling and locking inlines (__might_resched, _cond_resched, mmap_read_lock; nv-lock.h), delay and time helpers (__udelay/__ndelay and their __bad_* guards, __const_udelay, usleep_range, usleep_range_state, msleep, schedule_timeout, timespec64 add/sub/to_ns, tm_end/tm_aux, nv_timer_less_than, nv_timer_callback, nv_in_hardirq, mdelay_safe_msec), namespace and console internals (nsproxy, uts_ns, free_uts_ns, console_list_lock and friends), KCSAN stubs, CPU-hotplug freeze/thaw and play_idle_precise, runtime-PM internals (__pm_runtime_suspend/resume/idle, __pm_runtime_use_autosuspend, __pm_runtime_set_status, pm_runtime_put_noidle, ktime_get_mono_fast_ns, ignore_children), highmem copy helpers (field "to" at include/linux/highmem.h:423 through 606, needs_masking), time conversion (ms_to_ktime, rtc_tm_to_time64, __hw_protection_shutdown), EFI (SecureBoot/SetupMode variables, efi_mem_attributes, __efi_soft_reserve_enabled, "%pUl" formatting), screen_info (__screen_info_video_type, ext_lfb_base), and the gpiod_* and clk-divider inline families (gpiod_to_irq, gpiod_direction_output_raw, divider_round_rate_parent, clk_hw_get_parent).
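Since the runtime-PM autosuspend internals show up in this sweep, here is a hedged sketch of the usual driver-side setup; the device, the 5 s delay, and the placement in probe are all illustrative assumptions, not anything this dump confirms:

```c
#include <linux/pm_runtime.h>

static void example_enable_rpm(struct device *dev)
{
    /* Let the device idle out 5 s after last use. */
    pm_runtime_set_autosuspend_delay(dev, 5000);
    pm_runtime_use_autosuspend(dev);
    pm_runtime_set_active(dev);
    pm_runtime_enable(dev);
}

/* In an I/O path: pm_runtime_get_sync(dev); ... do work ...;
 * pm_runtime_mark_last_busy(dev); pm_runtime_put_autosuspend(dev); */
```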
kernel-open DSI panel support (Tegra): device-tree parsing (gpio-controller, device_get_next_child_node, girq, gpio_device_put). The driver finds the "dsi" node with of_get_child_by_name (np_dsi), checks of_device_is_available ("NVRM: dsi node not enabled in DT"), resolves the active panel via the nvidia,active-panel phandle (np_dsi_panel; "NVRM: None of the dsi panel nodes enabled in DT!"), and fills dsiPanelInfo/panelInfo in parse_dsi_properties. Command sequences live in dsi_init_cmd_array, dsi_postvideo_cmd_array, dsi_suspend_cmd_array, dsi_early_suspend_cmd_array, dsi_late_resume_cmd_array, and pktSeq_array; panel GPIOs (panel_gpio, panel_gpio_populated, en_panel_rst, bl_name) are validated with gpio_is_valid, driven with gpio_direction_output, and released with gpio_free, with reset-sequencing errors "DSI Panel reset gpio invalid", "Deasserting DSI panel reset gpio failed", "Asserting DSI panel reset gpio failed", and "Deasserting Dsi panel reset gpio after asserting failed".

Parsed properties (DT name → field): nvidia,enable-hs-clk-in-lp-mode → enable_hs_clock_on_lp_cmd_mode; nvidia,set-max-dsi-timeout → set_max_timeout; nvidia,use-legacy-dphy-core → use_legacy_dphy_core; nvidia,dsi-refresh-rate-adj → refresh_rate_adj; nvidia,dsi-n-data-lanes → n_data_lanes; nvidia,swap-data-lane-polarity, nvidia,swap-clock-lane-polarity, and nvidia,reverse-clock-polarity; nvidia,lane-xbar-ctrl → lane_xbar_ctrl (lane_xbar_exists); nvidia,dsi-phy-type → dsiPhyType ("NVRM: invalid dsi phy type 0x%x"); nvidia,cphy-data-scrambling → en_data_scrambling; nvidia,dsi-video-burst-mode → video_burst_mode; nvidia,dsi-pixel-format → pixel_format; nvidia,dsi-refresh-rate → refresh_rate and nvidia,dsi-rated-refresh-rate → rated_refresh_rate; nvidia,dsi-virtual-channel → virtual_channel; nvidia,dsi-instance → dsi_instance; nvidia,dsi-panel-reset → panel_reset; nvidia,dsi-te-polarity-low → te_polarity_low; nvidia,dsi-lp00-pre-panel-wakeup → lp00_pre_panel_wakeup; nvidia,dsi-bl-name → bl_name via of_property_read_string ("NVRM: dsi error parsing bl name"); nvidia,dsi-ganged-type → ganged_type with nvidia,dsi-even-odd-pixel-width → even_odd_split_width, nvidia,dsi-ganged-overlap → ganged_overlap, nvidia,dsi-ganged-swap-links → ganged_swap_links, and nvidia,dsi-ganged-write-to-all-links → ganged_write_to_all_links, each rejected without a ganged type ("NVRM: specified ganged overlap, but no ganged type", "specified ganged swapped links, but no ganged type", "specified ganged write to all links, but no ganged type"); nvidia,dsi-split-link-type → split_link_type; nvidia,dsi-suspend-aggr → suspend_aggr; nvidia,dsi-edp-bridge → dsi2edp_bridge_enable and nvidia,dsi-lvds-bridge → dsi2lvds_bridge_enable; nvidia,dsi-dpd-pads; nvidia,dsi-power-saving-suspend → power_saving_suspend; nvidia,dsi-ulpm-not-support → ulpm_not_supported; nvidia,dsi-video-data-type → video_data_type; nvidia,dsi-video-clock-mode → video_clock_mode; nvidia,enable-vrr → dsiEnVRR; nvidia,vrr-force-set-te-pin → dsiForceSetTePin; nvidia,send-init-cmds-early → sendInitCmdsEarly; nvidia,dsi-suspend-stop-stream-late → suspend_stop_stream_late. Invalid enum values log "NVRM: invalid dsi video burst mode", "invalid dsi pixel format", "invalid dsi virtual channel", "invalid dsi video data type", and "invalid dsi video clk mode". Command arrays are counted (nvidia,dsi-n-init-cmd → n_init_cmd, -n-postvideo-cmd, -n-suspend-cmd, -n-early-suspend-cmd, -n-late-resume-cmd) and read with dsi_read_prop_array (nvidia,dsi-init-cmd, -postvideo-cmd, -suspend-cmd, -early-suspend-cmd, -late-resume-cmd, plus nvidia,dsi-pkt-seq), each with its own "parsing from DT failed" message; a parsing sketch follows after this list.
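A minimal sketch of the parse_dsi_properties() pattern implied above: resolve the panel node through the nvidia,active-panel phandle, then read representative u32 and boolean properties. The structure fields it would fill are left out; everything here is illustrative:

```c
#include <linux/of.h>

static int example_parse_dsi(struct device_node *np_dsi)
{
    struct device_node *panel;
    u32 n_data_lanes = 0;

    if (!of_device_is_available(np_dsi))
        return -ENODEV; /* "dsi node not enabled in DT" */

    panel = of_parse_phandle(np_dsi, "nvidia,active-panel", 0);
    if (!panel)
        return -ENODEV; /* "None of the dsi panel nodes enabled in DT!" */

    of_property_read_u32(panel, "nvidia,dsi-n-data-lanes", &n_data_lanes);
    if (of_property_read_bool(panel, "nvidia,dsi-te-polarity-low"))
        pr_info("TE polarity low\n");

    of_node_put(panel); /* balance the phandle reference */
    return 0;
}
```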
D-PHY/C-PHY timing overrides fill phyTimingNs: nvidia,dsi-phy-hsdexit → t_hsdexit_ns, -hstrail → t_hstrail_ns, -datzero → t_datzero_ns, -hsprepare → t_hsprepare_ns, -hsprebegin → t_hsprebegin_ns, -hspost → t_hspost_ns, -clktrail → t_clktrail_ns, -clkpost → t_clkpost_ns, -clkzero → t_clkzero_ns, -tlpx → t_tlpx_ns, -clkprepare → t_clkprepare_ns, -clkpre → t_clkpre_ns, -wakeup → t_wakeup_ns, -taget → t_taget_ns, -tasure → t_tasure_ns, -tago → t_tago_ns. DSC properties: nvidia,enable-link-compression → dsiDscEnable, nvidia,enable-dual-dsc → dsiDscEnDualDsc, nvidia,enable-block-pred → dsiDscEnBlockPrediction, nvidia,slice-height → dsiDscSliceHeight, nvidia,num-of-slices → dsiDscNumSlices, nvidia,comp-rate → dsiDscBpp, nvidia,version-major/-minor → dsiDscDecoderMajorVersion/dsiDscDecoderMinorVersion, and nvidia,use-custom-pps → dsiDscUseCustomPPS with dsi_parse_pps_data reading nvidia,custom-pps-data into dsiDscCustomPPSData ("NVRM: Parsing DSI Panel Custom PPS data failed"). Also parsed: nvidia,dsi-csi-loopback → dsi_csi_loopback and the clock rates nvidia,vpll0-rate-hz → vpll0_rate_hz, nvidia,dsipll-vco-rate-hz → dsipll_vco_rate_hz, nvidia,dsipll-clkouta-rate-hz → dsipll_clkouta_rate_hz, and nvidia,dsipll-clkoutpn-rate-hz → dsipll_clkoutpn_rate_hz. dsi_get_panel_timings and dsi_get_panel_gpio wrap the remaining parsing ("NVRM: Parsing DSI Panel Timings failed", "NVRM: Parsing DSI Panel GPIOs failed", prop_val_ptr, "NVRM: DSI Panel node not available"). Panel GPIOs come from of_get_named_gpio on nvidia,panel-rst-gpio, nvidia,panel-en-gpio, nvidia,panel-en-1-gpio, nvidia,panel-bl-en-gpio, nvidia,panel-bl-pwm-gpio, nvidia,te-gpio (dsiVrrPanelSupportsTe), nvidia,avdd-avee-en-gpio, nvidia,vdd-1v8-lcd-en-gpio, nvidia,panel-bridge-en-0-gpio, nvidia,panel-bridge-en-1-gpio, and nvidia,panel-bridge-refclk-en-gpio, requested under the labels dsi-panel-reset, dsi-panel-en, dsi-panel-en-1, dsi-panel-bl-enable, dsi-panel-pwm, dsi-panel-te, dsi-panel-avdd-avee-en, dsi-panel-vdd-1v8-lcd-en, dsi-panel-bridge-en-0, dsi-panel-bridge-en-1, and dsi-panel-bridge-refclk-en; bad entries log "NVRM: DSI Panel invalid gpio entry at index %d".
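A hedged sketch of the dsi_get_panel_gpio() lookup using the legacy of_get_named_gpio()/gpio_request() APIs this catalog names; the wrapper, property choice, and label are taken from the listing but the code shape is an assumption:

```c
#include <linux/of_gpio.h>
#include <linux/gpio.h>

static int example_get_panel_reset(struct device_node *np_panel)
{
    int gpio = of_get_named_gpio(np_panel, "nvidia,panel-rst-gpio", 0);

    if (!gpio_is_valid(gpio))
        return -EINVAL; /* "DSI Panel invalid gpio entry at index %d" */

    if (gpio_request(gpio, "dsi-panel-reset"))
        return -EBUSY;

    return gpio; /* caller sequences it via gpio_direction_output() */
}
```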
Valid entries are claimed with gpio_request. Mode timings come from the nvidia,panel-timings child node (np_panel; "NVRM: could not find panel timings node for DSI Panel"): clock-frequency → pixelClkRate, hsync-len, vsync-len, hback-porch → hBackPorch, vback-porch → vBackPorch, hactive, vactive, hfront-porch → hFrontPorch, and vfront-porch → vFrontPorch, with "NVRM: One of the mode timings is missing in DSI Panel mode-timings!" on any gap. dsi_read_prop_array allocates val_array and reports "failed to allocate memory for values of DSI property %s", "failed to get elements count in property %s", and "failed to read property %s". I2C adapters register as "NVIDIA i2c adapter %u at %x:%02x.%u" (pci_info) or "NVIDIA SOC i2c adapter %u", backed by rm_i2c_is_smbus_capable and rm_i2c_transfer (osstatus), rejecting unsupported flags with "NVRM: Unsupported I2C flags used. (flags:0x%08x)". Hotplug GPIOs are converted with gpio_to_irq (irq_num) and registered through nv_request_soc_irq under the name "hdmi-hotplug" ("NVRM: IRQ registration failed for gpio - %d, rc - %d"; nv_get_current_irq_type, nv_get_current_irq_priv_data, gpio_get_value); named GPIOs os_gpio_hotplug_a through os_gpio_hotplug_d are looked up ("NVRM: of_get_name_gpio failed for gpio - %s, rc - %d"), claimed with devm_gpio_request_one ("NVRM: request gpio failed for gpio - %s, rc - %d"), and driven via gpio_direction_input and gpio_set_value ("NVRM: %s: failed with err: %d").
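The gpio_to_irq() hotplug path above maps to the standard GPIO-to-IRQ idiom. A minimal sketch; the handler, trigger flags, and private-data argument are illustrative, and the "hdmi-hotplug" name is taken from the catalog:

```c
#include <linux/gpio.h>
#include <linux/interrupt.h>

static irqreturn_t example_hpd_isr(int irq, void *data)
{
    /* real handler would schedule bottom-half hotplug processing */
    return IRQ_HANDLED;
}

static int example_request_hpd(int gpio, void *priv)
{
    int irq = gpio_to_irq(gpio);

    if (irq < 0)
        return irq; /* "IRQ registration failed for gpio - %d" */

    return request_irq(irq, example_hpd_isr,
                       IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
                       "hdmi-hotplug", priv);
}
```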
kernel-open/nvidia/nv-memdbg.c: allocation leak tracking in an rbtree (nv_memdbg_insert_node/nv_memdbg_remove_node, rb_node, rb_parent, rb_first, nv_memdbg_node_entry). At unload it prints "NVRM: list of leaked memory allocations:", per-node "NVRM: %llu bytes, 0x%p @ %s:%d" (or "NVRM: %llu bytes, 0x%p" without file and line), and the totals "NVRM: total leaked memory: %llu bytes in %llu allocations" and "NVRM: %llu bytes in %llu allocations untracked"; frees are checked with "NVRM: size mismatch on free: %llu != %llu" and "NVRM: allocation: 0x%p @ %s:%d", escalating to os_dbg_breakpoint.

kernel-open/nvidia/nv-mmap.c: nvidia_mmap_helper validates the mmap_context ("NVRM: VM: invalid mmap") and traces "NVRM: VM: %s:%d: 0x%lx - 0x%lx, 0x%08lx bytes @ 0x%016llx, 0x%p, 0x%p", waits for open completion (nv_wait_open_complete_interruptible, nv_is_control_device), allocates an altstack ("NVRM: Unable to allocate altstack for mmap"), and bails if the GPU is lost (nv_check_gpu_state; "NVRM: GPU %04x:%02x:%02x.%x: GPU is lost, skipping nvidia_mmap_helper"). Offsets are classified with IS_REG_OFFSET/IS_FB_OFFSET/IS_UD_OFFSET (vm_priv in vm_private_data, memareaSize, curOffs, alloc, mmap_size) and dispatched to nv_io_remap_page_range, nvidia_mmap_numa (nv_get_numa_status), nvidia_mmap_peer_io, or nvidia_mmap_sysmem (nv_array_index_no_speculate, atomic64_dec), honoring rm_disable_iomap_wc; revocation uses nv_revoke_gpu_mappings_locked (all_mappings_revoked, gpu_wakeup_callback_needed, down), and failures log "NVRM: Userspace mapping creation failed [%d]!". nv_encode_caching builds prot from pgprot_noncached_weak, cachemode2protval, and pgprot_modify_writecombine, rejecting bad combinations with "NVRM: VM: memory type %d does not allow caching!" and "NVRM: VM: cache type %d not supported for memory type %d!".

kernel-open/nvidia/nv-kthread-q.c: a small work queue built on a kernel thread. Items (q_item) are initialized with nv_kthread_q_item_init, queued by _raw_q_schedule plus up(), and drained by a main loop that sleeps in down_interruptible ("nv_kthread_q: task: %s: Interrupted during semaphore wait", with an "[in interrupt]" variant), pops entries with list_del_init, logs "_main_loop: Empty queue: q: 0x%p" on spurious wakeups, and exits on kthread_should_stop/kthread_stop (completion, thread).
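A hedged sketch of the nv_kthread_q pattern the strings describe: a worker thread blocks on a semaphore, then drains a spinlock-protected list of callback items. All types and names here are illustrative stand-ins, not the driver's real structures:

```c
#include <linux/kthread.h>
#include <linux/semaphore.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct example_q {
    struct semaphore sem;
    spinlock_t lock;
    struct list_head items;
};

struct example_q_item {
    struct list_head node;
    void (*fn)(void *arg);
    void *arg;
};

static int example_q_main(void *data)
{
    struct example_q *q = data;

    while (!kthread_should_stop()) {
        if (down_interruptible(&q->sem))
            continue; /* "Interrupted during semaphore wait" */

        spin_lock(&q->lock);
        while (!list_empty(&q->items)) {
            struct example_q_item *it =
                list_first_entry(&q->items, struct example_q_item, node);

            list_del_init(&it->node);
            spin_unlock(&q->lock);
            it->fn(it->arg);        /* run outside the lock */
            spin_lock(&q->lock);
        }
        spin_unlock(&q->lock);
    }
    return 0;
}
```

Producers would list_add_tail() under the lock and up(&q->sem); stopping the thread requires a final up() so kthread_stop() can complete.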
The nv-mmap.c fault path can schedule a wakeup via rm_schedule_gpu_wakeup ("NVRM: VM: rm_schedule_gpu_wakeup failed: %x", sp) and inserts PFNs with nv_insert_pfn, checking bRevoked and logging "NVRM: VM: invalid mmap context"; kernel mappings are created with os_map_kernel_space (kernel_mapping, fields at nv-mmap.c:194 and "buffer" at nv-mmap.c:196) and released with os_unmap_kernel_space, and late unmaps from nv_alloc_release log "NVRM: VM: %s: late unmap, comm: %s, 0x%p" (os_get_current_process).

Modeset interface: nvidia_modeset_get_gpu_info fills system_info (version_string, rm_ops, allow_write_combining, needs_numa_setup, os_device_ptr), capping at "NVRM: More than %d GPUs found.".

Interrupt setup: nv_default_irq_flags, free_irq, nv_get_max_irq, and nv_pci_enable_msix (num_intr, msix_entries, irq_count, current_num_irq_tracked; msix_bh_mutex via os_alloc_mutex/os_free_mutex) with the messages "NVRM: GPU %04x:%02x:%02x.%x: Reducing MSI-X count from %d to the driver-supported maximum %d.", "Failed to allocate MSI-X entries.", "Failed to allocate counter for MSI-X entries.", and "Failed to enable MSI-X."; the MSI path (pci_enable_msi, interrupt_line) falls back with "Failed to allocate counter for MSI entry; falling back to PCIe virtual-wire interrupts." and "Failed to enable MSI; falling back to PCIe virtual-wire interrupts.".
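The MSI-X, then MSI, then virtual-wire fallback described by those messages can be expressed with the kernel's consolidated helper rather than the driver's internal nv_pci_enable_msix(); a hedged sketch under that substitution:

```c
#include <linux/pci.h>

static int example_setup_irqs(struct pci_dev *pdev, int want)
{
    /* PCI_IRQ_ALL_TYPES = MSI-X | MSI | legacy INTx; the core tries
     * them in that order, mirroring the fallback chain above. */
    int n = pci_alloc_irq_vectors(pdev, 1, want, PCI_IRQ_ALL_TYPES);

    if (n < 0)
        return n; /* no interrupt mechanism available at all */

    /* pci_irq_vector(pdev, i) yields the Linux IRQ for vector i,
     * suitable for request_irq()/free_irq(). */
    return n;
}
```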
Nanosecond timers (nv_nstimer) wrap an hrtimer: nv_nano_timer_callback runs rm_run_nano_timer_callback ("NVRM: Error in service of callback") on a stack from nv_kmem_cache_alloc_stack_atomic ("NVRM: no cache memory"), armed with hrtimer_start and torn down by nv_cancel_nano_timer/hrtimer_cancel (hr_timer, pTmrEvent, nv_linux_state; "NVRM: Not able to create timer object").

kernel-open/nvidia/nv-p2p.c: peer-to-peer page sharing. nv_p2p_get_pages/nv_p2p_put_pages validate alignment ("NVRM: Invalid argument in nv_p2p_get_pages,address or length are not aligned address=0x%llx, length=0x%llx"), resolve the GPU via rm_p2p_get_gpu_info and nvidia_dev_get_uuid/nvidia_dev_put_uuid (gpu_uuid, bGetUuid), and fetch pages with rm_p2p_get_pages or rm_p2p_get_pages_persistent (physical_addresses, wreqmb_h, rreqmb_h, bGetPages, force_pcie, temp_length, registers, fermi, nvidia_p2p_map_page_size). DMA mappings are created by rm_p2p_dma_map_pages and tracked in dma_mapping_list (sema_init; nv_p2p_insert_dma_mapping, nv_p2p_remove_dma_mapping, nv_p2p_free_dma_mapping; nvidia_p2p_map_status, page_size_type, dma_addresses, os_pages_per_p2p_page, os_page_count, os_dma_addresses, nv_dma_unmap_alloc, ret_dma_mapping), with teardown through rm_p2p_put_pages and rm_p2p_put_pages_persistent, nv_p2p_free_page_table, nv_p2p_free_platform_data, and rm_p2p_register_callback; rsync drivers register via nv_register_rsync_driver/nv_unregister_rsync_driver.

PAT handling (x86): nv_enable_pat_support/nv_disable_pat_support and the builtin variants consult the UsePageAttributeTable registry key (rm_read_registry_dword, disable_pat) and nv_determine_pat_mode via cpuid_edx ("NVRM: CPU does not support the PAT.", "NVRM: PAT configuration unsupported.", PAT_WC_index, "NVRM: builtin PAT support disabled."), with CPU-hotplug notifiers via nvidia_register_cpu_hotplug_notifier/nvidia_unregister_cpu_hotplug_notifier. Header residue also covers the cpufreq table-walk inlines (cpufreq_table_find_index_l/h/c/al/ah/dl/dh/ac/dc and cpufreq_table_index_unsorted in include/linux/cpufreq.h, __cpufreq_driver_target, cpufreq_verify_within_limits, cpuinfo, cpumask_weight/cpumask_empty, dev_pm_opp_of_register_em, get_cpu_device, parse_perf_domain, related_cpus) and nv-msi.h capability probing (pci_find_capability, cap_ptr, pci_read_config_word, pci_devid_is_self_hosted_hopper, pci_devid_is_self_hosted_blackwell). Driver registration goes through __pci_register_driver/pci_unregister_driver; nv_pci_has_common_pci_switch walks pci_get_class pairs (pci_dev0/pci_dev1, pdev0/pdev1, dma_peer), and probe screening uses rm_is_supported_pci_device plus check_for_bound_driver.
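The hrtimer usage named at the top of this section (hrtimer_start/hrtimer_cancel) follows the kernel's one-shot timer idiom. A hedged sketch in the pre-6.13 style where the callback is assigned to the function field; the callback body and relative mode are assumptions:

```c
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static enum hrtimer_restart example_cb(struct hrtimer *t)
{
    /* service the timer event; one-shot, so do not re-arm here */
    return HRTIMER_NORESTART;
}

static void example_arm(struct hrtimer *t, u64 ns)
{
    hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    t->function = example_cb;
    hrtimer_start(t, ns_to_ktime(ns), HRTIMER_MODE_REL);
}

/* teardown, as in nv_cancel_nano_timer(): hrtimer_cancel(t); */
```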
kernel-open/nvidia/nv-pci.c: device removal and probe. Removal logs "NVRM: GPU %04x:%02x:%02x.%x is already bound to %s." (falling back to "another driver"; pci_read_config_byte, find_uuid) and "NVRM: removing GPU %04x:%02x:%02x.%x", then runs nvidia_modeset_remove (is_shutdown, is_forced_shutdown), nv_pci_tegra_unregister_devfreq, pci_clear_master, iommu_dev_disable_feature ("Disabled SMMU SVA feature!" or "Disabling SMMU SVA feature failed! ret: %d"), nv_linux_stop_open_q, rm_notify_gpu_removal, a usage-count guard ("Attempting to remove device %04x:%02x:%02x.%x with non-zero usage count!", os_delay, "Failed removal of device %04x:%02x:%02x.%x!", "Continuing with GPU removal for device %04x:%02x:%02x.%x", rm_check_for_gpu_surprise_removal), then nv_linux_remove_device_locked, nv_procfs_remove_gpu, nv_clk_clear_handles, rm_cleanup_dynamic_power_management (removed), pm_vt_switch_unregister, nv_acpi_unregister_notifier, rm_disable_gpu_state_persistence, nv_shutdown_adapter, nv_dev_free_stacks, filp_close on sysfs_config_file, nv_lock_destroy_locks, pci_set_drvdata, rm_i2c_remove_adapters, rm_free_private_state, and pci_disable_device/pci_write_config_word (resource_size, __cmd).

Probe begins with "NVRM: probing 0x%x 0x%x, class 0x%x", skips virtual functions ("NVRM: Ignoring probe for VF %04x:%02x:%02x.%x"), waits on the BAR firewall (rm_wait_for_bar_firewall; "NVRM: failed to wait for bar firewall to lower"), rejects legacy GPUs ("NVRM: ignoring the legacy GPU %04x:%02x:%02x.%x"), and calls pci_enable_device ("NVRM: pci_enable_device failed, aborting"). A missing IRQ (nv_treat_missing_irq_as_error) produces the classic advice: "NVRM: Can't find an IRQ for your NVIDIA card!", "NVRM: Please check your BIOS settings.", "NVRM: [Plug & Play OS] should be set to NO", "NVRM: [Assign IRQ to VGA] should be set to YES". BARs are validated and claimed (nv_pci_validate_bars, bar0_requested), with the request_mem_region failure message explaining that a driver such as rivatv may have claimed ownership of the device's registers. Private state is then set up: "NVRM: failed to allocate memory", vbios_version defaulting to "??.??.??.??.??", cached_gpu_info (vendor_id, subsystem_id), os_state, dma_dev, nv_lock_init_locks, rm_is_supported_device, and rm_init_private_state ("NVRM: GPU %04x:%02x:%02x.%x: rm_init_private_state() failed!").
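A hedged skeleton of the probe-side checks cataloged above: enable the device, insist on an IRQ, and claim BAR0. The comments paraphrase the driver's messages; the function itself is illustrative:

```c
#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev,
                         const struct pci_device_id *id)
{
    resource_size_t bar0 = pci_resource_start(pdev, 0);

    if (pci_enable_device(pdev))
        return -EIO; /* "pci_enable_device failed, aborting" */

    if (pdev->irq == 0) {
        /* "Can't find an IRQ for your NVIDIA card!" */
        pci_disable_device(pdev);
        return -EIO;
    }

    if (!request_mem_region(bar0, pci_resource_len(pdev, 0), "example")) {
        /* another driver (e.g. rivatv) owns the registers */
        pci_disable_device(pdev);
        return -EBUSY;
    }
    return 0;
}
```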
Probe continues with pci_devid_is_self_hosted, nv_resize_pcie_bars ("NVRM: Fatal Error while attempting to resize PCIe BARs."), pci_ats_supported (ats_support; "NVRM: GPU %04x:%02x:%02x.%x: ATS supported by this GPU!"), and iommu_dev_enable_feature for SMMU SVA ("Enabled SMMU SVA feature!", "SMMU SVA feature already enabled!", "Enabling SMMU SVA feature failed! ret: %d"), then nv_init_coherent_link_info, nv_clk_get_handles, pci_set_master, and vga_set_legacy_decoding, aborting if the GPU is lost ("GPU is lost, skipping nv_pci_probe"). NUMA affinity comes from dev_to_node (cpu_numa_node_id), and nv_linux_init_open_q failure is fatal ("nv_linux_init_open_q() failed!"). BAR geometry is printed as "NVRM: PCI:%04x:%02x:%02x.%x (%04x:%04x): BAR0 @ 0x%llx (%lluMB)" with a BAR1 counterpart, and invalid regions report "NVRM: This PCI I/O region assigned to your NVIDIA device is invalid: BAR%d is %lluM @ 0x%llx (PCI:%04x:%02x:%02x.%x)" (last_bar_64bit). Registration then proceeds: nv_linux_add_device_locked, pm_vt_switch_required, nv_init_dynamic_power_management and rm_init_tegra_dynamic_power_management, nv_init_tegra_gpu_pg_mask (tegra_pci_igpu_pg_mask; "NVRM: nvidia,fuse-overrides parsed from device tree: 0x%x"; nv_set_gpu_pg_mask), rm_get_gpu_uuid_raw, nv_procfs_add_gpu, nv_parse_per_device_option_string, rm_set_rm_firmware_requested, nv_check_and_exclude_gpu, dev_pm_set_driver_flags, nv_pci_tegra_register_devfreq ("NVRM: GPU %04x:%02x:%02x.%x: Failed to register linux devfreq"), rm_enable_dynamic_power_management, rm_notify_gpu_addition, and nvidia_modeset_probe. Dynamic power management consults nv_get_pci_sysfs_config, nv_get_hypervisor_type, and nv_acpi_power_resource_method_present (pr3_acpi_method_present, rm_init_dynamic_power_management).

Tegra devfreq state (nv_pci_get_tegra_igpu_data, tegra_data): devfreq_table/devfreq_table_size, suspend/resume and boost hooks (devfreq_suspend, devfreq_resume, devfreq_enable_boost, devfreq_disable_boost, boost_enabled; cancel_delayed_work_sync, __init_work, schedule_delayed_work, msecs_to_jiffies; devfreq_suspend_device/devfreq_resume_device), devfreq devices gpc_devfreq_dev and nvd_devfreq_dev (gpc_master/gpc_cluster, nvd_master/nvd_cluster, tdev, tptr), thermal binding via thermal_zone_get_zone_by_name (tz_name, tzdev, passive_trip, zones), and teardown through nv_pci_tegra_devfreq_remove, devm_devfreq_remove_device, nv_pci_tegra_devfreq_remove_opps, devm_clk_put, and device_unregister (devfreq, icc_path, pbus).
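The SMMU SVA enable messages above map onto the kernel's iommu_dev_enable_feature() API. A hedged sketch; the wrapper and the -EBUSY interpretation (already enabled) are assumptions drawn from the log strings:

```c
#include <linux/iommu.h>

static int example_enable_sva(struct device *dev)
{
    int ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);

    if (ret == -EBUSY)
        return 0; /* "SMMU SVA feature already enabled!" */
    if (ret)
        dev_err(dev, "Enabling SMMU SVA feature failed! ret: %d\n", ret);
    return ret;
}
```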
Devfreq registration (gpu_pg_mask; "NVRM: devfreq register receives gpu_pg_mask = %u"): per-cluster devices are allocated with devm_kzalloc (tdata), named with dev_set_name using "%s-%d" patterns (gpc → "gpu-gpc-%d", nvd → "gpu-nvd-%d", sys → "gpu-sys-%d" for sys_devfreq_dev, uproc → "gpu-pwr-%d" for pwr_devfreq_dev), registered with device_register (nv_pci_gb10b_add_devfreq_device), and given clocks via devm_clk_get (clk_name, devfreq_clk, icc_name/icc_path). populate_opp_table fills the OPP table (dev_pm_opp_add/dev_pm_opp_remove), and the devfreq profile (get_cur_freq, get_dev_status, initial_freq, polling_ms, is_cooling_device, suspend_freq) is registered via devm_devfreq_add_device with the "performance" governor. Cooling-device binding (nv_pci_tegra_init_cooling_device; "NVRM: devfreq cooling cannot be found", "NVRM: associated OF node cannot be found") parses nvidia,thermal-zones (of_property_count_strings, n_strings; "NVRM: nvidia,thermal-zones DT property format error") and nvidia,cooling-device (of_property_count_u32_elems, n_elems; "NVRM: nvidia,cooling-device DT property format error"), requiring the string count to be exactly half the element count and the element count to be even, then resolves each zone with of_property_read_string_index and of_property_read_u32_index, reporting "fail to get %s thermal_zone_device", "fail to find passive_trip in %s thermal_zone_device", and "fail to bind devfreq cooling device with %s thermal_zone_device". Load sampling uses rm_pmu_perfmon_get_load (load, kBps; busy_time, total_time, current_frequency; pm_runtime_suspended; "fail to nv_kmem_cache_alloc_stack: %d").
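A hedged sketch of the devfreq registration implied above: populate a small OPP table, fill a devfreq_dev_profile, and register with the "performance" governor. All callbacks, frequencies, and the zero-voltage OPPs are illustrative placeholders:

```c
#include <linux/devfreq.h>
#include <linux/pm_opp.h>

static int example_target(struct device *dev, unsigned long *freq, u32 flags)
{
    /* a real driver would program clocks to *freq here */
    return 0;
}

static int example_get_dev_status(struct device *dev,
                                  struct devfreq_dev_status *stat)
{
    stat->busy_time = 50;                  /* e.g. from a PMU load counter */
    stat->total_time = 100;
    stat->current_frequency = 1000000000UL;
    return 0;
}

static struct devfreq_dev_profile example_profile = {
    .initial_freq   = 1000000000UL,
    .polling_ms     = 100,
    .target         = example_target,
    .get_dev_status = example_get_dev_status,
};

static int example_register_devfreq(struct device *dev)
{
    dev_pm_opp_add(dev, 500000000UL, 0);   /* freq (Hz), voltage (uV) */
    dev_pm_opp_add(dev, 1000000000UL, 0);

    return PTR_ERR_OR_ZERO(devm_devfreq_add_device(dev, &example_profile,
                                                   "performance", NULL));
}
```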
Coherent-link NUMA discovery parses the ACPI SRAT Generic Initiator entries: acpi_get_table("SRAT") ("NVRM: Failed to parse the SRAT table."), refusing non-zero device functions ("NVRM: Failing to parse SRAT GI for %04x:%02x:%02x.%x since non-zero device function is not supported."), walking subtable headers (table_end, subtable_header, subtable_header_length) and matching dev_dbdf against gi_dbdf (gi, busAtByte2, busAtByte3), recording nodes in coherent_link_info/free_node_bitmap ("NVRM: Invalid node-id found.", pxm_count, "NVRM: matching SRAT GI entry: 0x%x 0x%x 0x%x 0x%x PXM: %d"). A byte-3 bus match triggers the hypervisor workaround notice ("NVRM: PCIe bus value picked from byte 3 offset in SRAT GI entry ... Hypervisor stack is old and not following ACPI spec defined offset. Please consider upgrading the Hypervisor stack as this workaround will be removed in future release."); the table is released with acpi_put_table.

BAR1 resizing (nv_resize_pcie_bars): skipped when disabled by regkey ("NVRM: resizable BAR disabled by regkey, skipping"); the target is computed with pci_rebar_bytes_to_size (requested_size, old_size; "NVRM: %04x:%02x:%02x.%x: BAR1 already at requested size."), pci_find_host_bridge is consulted ("NVRM: Not resizing BAR because the firmware forbids moving windows."), then "NVRM: %04x:%02x:%02x.%x: Attempting to resize BAR1." leads to pci_release_resource and pci_resize_resource ("NVRM: No address space to allocate resized BAR1.", "NVRM: BAR resize resource not supported.", "NVRM: BAR resizing failed with error `%d`."), finishing with pci_assign_unassigned_bus_resources ("NVRM: FATAL: Failed to re-allocate BAR1."). The sysfs config file is opened by path ("/sys/bus/pci/devices/%04x:%02x:%02x.0/config", filename, "sysfs", seq_file). GPU exclusion reads the UUID (uuid_str; "NVRM: GPU %04x:%02x:%02x.%x: Unable to read UUID"), checks nv_is_uuid_in_gpu_exclusion_list, and calls rm_exclude_adapter ("Failed to exclude GPU %s (0x%x)" or "Excluded GPU %s successfully"). Header residue covers the cpuhp state machinery (__cpuhp_setup_state and friends, cpus_read_lock/cpus_read_unlock) and reset/pinctrl inlines (__devm_reset_control_bulk_get, __reset_control_bulk_get, __device_reset, pinconf_generic_dt_node_to_map).

kernel-open/nvidia/nv-platform.c: the Tegra display platform driver (__platform_driver_register/platform_driver_unregister, of_find_matching_node, platform_get_drvdata, nv_platform_device_remove, plat_dev, hdcp_enabled). It reads nvidia,window-head-mask with of_property_read_u64 (window_head_mask; "NVRM: Wrong input arguments", "NVRM: failed to read device node window-head-mask ret=%d") and the nvidia,dcb-image blob (soc_dcb_size, soc_dcb_blob; "failed to allocate dcb array", "failed to read dcb blob"); the "nvdisplay-niso" child (niso_np, niso_dma_dev) is populated with devm_of_platform_populate and resolved with of_find_device_by_node (niso_plat_dev), with errors "no nvdisplay-niso child node", "devm_of_platform_populate failed", "no nvdisplay-niso platform devices", and "nv_of_dma_configure failed for niso". Stream IDs come from the iommus property: iso_sid must be present and valid under the display node,
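The BAR1 resize flow maps directly onto kernel resizable-BAR helpers. A hedged sketch; `size` uses pci_resize_resource()'s log2-based encoding (see pci_rebar_bytes_to_size()), and the preserve_config check is an assumption about how "firmware forbids moving windows" is detected:

```c
#include <linux/pci.h>

static int example_resize_bar1(struct pci_dev *pdev, int size)
{
    struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus);
    int ret;

    if (host->preserve_config)
        return -ENOTSUPP; /* firmware forbids moving windows */

    pci_release_resource(pdev, 1);
    ret = pci_resize_resource(pdev, 1, size);
    /* -ENOSPC: "No address space to allocate resized BAR1."
     * -ENOTSUPP: "BAR resize resource not supported." */

    /* Re-run assignment whether or not the resize succeeded, so the
     * device ends up with a usable BAR1 either way. */
    pci_assign_unassigned_bus_resources(pdev->bus);
    return ret;
}
```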
as must niso_sid ("NVRM: nv_platform_get_iso_niso_stream_ids, iso_sid not specified under display node", "iso_sid has invalid value", and the niso_sid equivalents; iso_iommu_present, niso_iommu_present, iso_np, niso_np_with_iommus). dpaux instances are counted from nvidia,num-dpaux-instance ("NVRM: Found %d dpAux instances in device tree.", clamped with "Number of dpAux instances [%d] in device tree are more than that of allowed [%d]. Initilizing %d dpAux instances."), their registers claimed via platform_get_resource_byname("dpaux0") (res_addr, res_size; "failed to get IO memory resource", "request_mem_region failed for %pa", "failed to allocate nv->dpaux[%d] memory", sdpaux, dpaux_devname) and IRQs via platform_get_irq_byname ("failed to get IO irq resource", dpaux_irqs, dpauxindex, nv_platform_free_device_dpaux). SOC interrupts are requested per type (nv_request_soc_irq, nv_soc_free_irq_by_type, current_soc_irq, soc_irq_info, irq_type, bh_pending, ref_count, irq_index, gpio_num, dpaux_instance, device_name): "nvdisplay" ("failed to request display irq (%d)"), "hdacodec" ("failed to request hdacodec irq (%d)"), "tcpc2disp" ("failed to request Tcpc2disp irq (%d)"), and dpaux ("failed to request dpaux irq (%d)"), with the guards "%s:No SOC interrupt in progress", "Exceeds Maximum SOC interrupts", and "nv_request_soc_irq for irq %d failed"; the ISR defers to nvidia_isr and nvidia_isr_kthread_bh under soc_bh_mutex (os_acquire_mutex/os_release_mutex).

procfs layout (nv_platform; proc_remove, proc_mkdir_mode): a per-GPU directory named "%04x:%02x:%02x.%1x" under proc_nvidia_gpu holds proc_create_data files "information", "registry", "power", "unbindLock" (when os_is_vgx_hyper), "numa_status", and "offline_pages"; the driver root is "driver/%s" (nv_dir_name) with "suspend", "suspend_depth", "warnings", "README", "patches", and "version" (nv_procfs_add_text_file). File operations use single_open/single_release, pde_data, seq_read/seq_puts, and nv_down_read_interruptible (nv_procfs_open_file/nv_procfs_close_file, nvpp). NUMA state transitions go through numa_is_change_allowed, rm_gpu_numa_offline/rm_gpu_numa_online, and nv_set_numa_status (rm_status), and numa_status_read prints "Node: %d", "Status: %s" (numa_status_describe), "Address: %llx", and "Size: %llx" via seq_printf.
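The per-GPU procfs files above follow the single_open()/seq_file idiom. A minimal sketch; the show body and file name are illustrative, while the ops wiring matches the calls the catalog names (proc_create_data, single_open, pde_data, seq_read, single_release):

```c
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_show(struct seq_file *s, void *v)
{
    seq_printf(s, "Node: %d\n", 0);
    seq_puts(s, "Status: online\n");
    return 0;
}

static int example_open(struct inode *inode, struct file *file)
{
    /* pde_data() recovers the pointer passed to proc_create_data(). */
    return single_open(file, example_show, pde_data(inode));
}

static const struct proc_ops example_proc_ops = {
    .proc_open    = example_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

/* proc_create_data("numa_status", 0444, per_gpu_dir,
 *                  &example_proc_ops, per_gpu_private); */
```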
procfs read/write handlers (continued)
- Calls: rm_get_gpu_numa_info, sscanf, rm_unbind_lock, exercise_error_forwarding_va, nv_log_error, strcasecmp, nv_set_system_power_state
- Variables: offline_addresses, addresses, numa_info, invalid, proc_buffer, arguments, kbuf, bytes_left
- Formats / tokens: "%p", "%u", "1", "0", "hibernate", "suspend hibernate resume", "uvm", "modeset", "default", "default modeset uvm"
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: UnbindLock acquired"
  "NVRM: GPU %04x:%02x:%02x.%x: Could not acquire UnbindLock"
  "NVRM: GPU %04x:%02x:%02x.%x: UnbindLock released"
  "NVRM: failed to copy in proc data!"
  "NVRM: failed to allocate procfs private!"

Registry key dump (registry_keys)
- Calls: strsep (on "="), rm_write_registry_binary, strcat
- Variables: key_value, key_name, key_len
- Formats: Binary: "%s" / %s: %u / CoherentGPUMemoryMode: "%s" / RegistryDwords: "%s" / RegistryDwordsPerDevice: "%s" / RmMsg: "%s" / GpuBlacklist: "%s" / TemporaryFilePath: "%s" / ExcludedGpus: "%s"

Version, power, and device information
- Calls: rm_get_power_info, nv_platform_supports_s0ix, rm_get_device_name, nv_find_pci_capability, nv_count_bits
- Variables: dynamic_power_status, power_info, vidmem_power_status, gc6_support, gcoff_support, s0ix_status, db_support, type, firmware_version
- Formats:
  "NVRM version: %s", "GCC version: %s" (e.g. "gcc version 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)")
  "Runtime D3 status: %s", "Tegra iGPU Rail-Gating: %s" ("Enabled"/"Disabled"), "Video Memory: %s"
  "GPU Hardware Support:", " Video Memory Self Refresh: %s", " Video Memory Off: %s"
  "S0ix Power Management:", " Platform Support: %s" ("Supported"/"Not Supported"), " Status: %s"
  "Notebook Dynamic Boost: %s"
  "Model: %s", "IRQ: %d", "GPU UUID: %s", "GPU UUID cache not valid!", "Video BIOS: %s"
  "Bus Type: %s" ("PCIe"/"PCI"), "DMA Size: %d bits", "DMA Mask: 0x%llx", "Bus Location: %04x:%02x:%02x.%x", "Device Minor: %u"
  "GPU Firmware: N/A", "GPU Firmware: %s", "GPU Excluded: %s"
- NUMA states: "offline", "online_in_progress", "online", "online_failed", "offline_in_progress", "offline_failed"

procfs README text:
  "The NVIDIA graphics driver's kernel interface files can be patched to improve compatibility with new Linux kernels or to fix bugs in these files. When applied, each official patch provides a short text file with a short description of itself in this directory."
  "The NVIDIA graphics driver tries to detect potential problems with the host system and warns about them using the system's logging mechanisms. Important warning message are also logged to dedicated text files in this directory."
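The numa_status entry pairs proc_create_data with the seq_file single_open helpers named above. A minimal sketch of such a read-only handler, assuming an illustrative struct nv_numa_info (not a driver type):

    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>
    #include <linux/types.h>

    struct nv_numa_info {         /* illustrative placeholder */
        int node_id;
        const char *status;
        u64 base, size;
    };

    static int numa_status_show(struct seq_file *s, void *v)
    {
        struct nv_numa_info *info = s->private;

        seq_printf(s, "Node: %d\n", info->node_id);
        seq_printf(s, "Status: %s\n", info->status);
        seq_printf(s, "Address: %llx\n", info->base);
        seq_printf(s, "Size: %llx\n", info->size);
        return 0;
    }

    static int numa_status_open(struct inode *inode, struct file *file)
    {
        /* pde_data() recovers the pointer passed to proc_create_data(). */
        return single_open(file, numa_status_show, pde_data(inode));
    }

    static const struct proc_ops numa_status_fops = {
        .proc_open    = numa_status_open,
        .proc_read    = seq_read,
        .proc_release = single_release,
    };

A caller would then register it with proc_create_data("numa_status", 0444, parent_dir, &numa_status_fops, info).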
Relaxed ordering and mmap helpers (kernel-open/nvidia/nv-rsync.c)
- Calls: rm_gpu_need_4k_page_isolation, nv_get_file_private, nv_put_file_private
- Variables: relaxed_ordering_mode, usage_count, nvuap, pAllocPrivate, nvamc, alloc, pAllocPriv, access_start, access_size, caching

Kernel virtual mappings and page allocation (kernel-open/nvidia/nv-vm.c)
- Calls: nv_vunmap, nv_vmap, vunmap, nv_free_coherent_pages, nv_free_system_pages, os_is_xen_dom0, nv_alloc_coherent_pages, nv_alloc_system_pages, nv_compute_gfp_mask
- Messages:
  "NVRM: %s: can't unmap %d pages at 0x%0llx, invalid context!"
  "NVRM: %s: can't map %d pages, invalid context!"
  "NVRM: VM: %s: %u pages"
  "NVRM: VM: %s: failed to allocate memory, trying coherent memory"
  "NVRM: VM: %s: failed to allocate memory"
  "NVRM: VM: %s: %u order0 pages, %u order"

Memory pools, scrubber worker, and shrinker
- Calls: nv_mem_pool_alloc_pages, get_free_pages_noprof, nv_alloc_set_page, nv_set_memory_decrypted_zeroed, nv_set_memory_type, nv_set_memory_encrypted, nv_mem_pool_free_pages, free_pages, nv_mem_pool_destroy, nv_mem_pool_init, list_splice_init, nv_mem_pool_queue_worker, nv_kthread_q_init_on_node, nv_mem_pool_shrinker_alloc, nv_mem_pool_shrinker_register, nv_mem_pool_free_page_list, nv_kthread_q_stop, nv_mem_pool_shrinker_free, nv_mem_pool_clear_page, nv_kthread_q_schedule_q_item, nv_mem_pool_move_pages, shrinker_register, shrinker_free
- Variables: page_pool, num_pool_allocated_pages, page_ptr, pool_entry, queue_worker, pages_owned, mem_pool, shrinker (count_objects, scan_objects, seeks, private_data), head__, pages_remaining, pages_allocated_clean, pages_allocated, pages_freed, dst_list
- Names: "nv_mem_pool_scrubber_queue", "nv-sysmem-alloc-node-%d-order-%u"
- Messages:
  "NVRM: VM: %s: %u/%u order0 pages"
  "NVRM: VM: %s: node=%d order=%u: %lu/%lu pages added to pool (%lu now in pool)"
  "NVRM: %s: failed allocating memory"
  "NVRM: %s: failed allocating mutex for worker thread"
  "NVRM: %s: failed allocating worker thread"
  "NVRM: %s: failed allocating shrinker"
  "NVRM: VM: %s: node=%d order=%u: %lu/%lu pages allocated (%lu already cleared, %lu left in pool)"
  "NVRM: VM: %s: node=%d order=%u: %lu/%lu pages freed"
  "NVRM: VM: %s: node=%d order=%u: %lu pages in pool"

Coherent / DMA memory
- Calls: dma_free_coherent, dma_alloc_coherent, nv_get_kern_phys_address, nv_requires_dma_remap, nv_is_dma_direct, nv_get_max_sysmem_address
- Messages:
  "NVRM: VM: %s: coherent page alloc on nvidiactl not supported"
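The shrinker strings (count_objects, scan_objects, seeks, private_data, shrinker_register, and the "nv-sysmem-alloc-node-%d-order-%u" name) match the dynamically allocated shrinker API added in Linux 6.7. A sketch under that assumption; struct nv_mem_pool and both callbacks are illustrative placeholders.

    #include <linux/shrinker.h>

    struct nv_mem_pool {            /* illustrative placeholder */
        unsigned long pages_in_pool;
        int nid;
        unsigned int order;
    };

    static unsigned long pool_count_objects(struct shrinker *sh,
                                            struct shrink_control *sc)
    {
        struct nv_mem_pool *pool = sh->private_data;

        /* Returning 0 tells the VM there is nothing to reclaim here. */
        return pool->pages_in_pool;
    }

    static unsigned long pool_scan_objects(struct shrinker *sh,
                                           struct shrink_control *sc)
    {
        /* A real implementation would free up to sc->nr_to_scan pages
         * back to the page allocator and return the number freed. */
        return SHRINK_STOP;
    }

    static int pool_shrinker_register(struct nv_mem_pool *pool)
    {
        struct shrinker *sh;

        sh = shrinker_alloc(0, "nv-sysmem-alloc-node-%d-order-%u",
                            pool->nid, pool->order);
        if (!sh)
            return -ENOMEM;     /* "failed allocating shrinker" */

        sh->count_objects = pool_count_objects;
        sh->scan_objects  = pool_scan_objects;
        sh->seeks         = DEFAULT_SEEKS;
        sh->private_data  = pool;
        shrinker_register(sh);
        return 0;
    }

Teardown would pair this with shrinker_free(sh), matching the call listed above.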
Memory type / caching attributes
- Calls: pgdat_end_pfn, nv_set_contig_memory_type, nv_set_memory_array_type_present, nv_set_pages_array_type_present, nv_set_pages_array_type, nv_set_contig_memory_uc, nv_set_contig_memory_wb, set_pages_array_wb, set_memory_wb
- Variables: global_max_pfn
- Messages:
  "NVRM: %s(): type %d unimplemented"
  "NVRM: can't translate address in %s()!"

GSP firmware images
- Paths: "nvidia/590.48.01/gsp_ga10x.bin", "nvidia/590.48.01/gsp_tu10x.bin", "nvidia/590.48.01/gsp_log_ga10x.bin", "nvidia/590.48.01/gsp_log_tu10x.bin"
- Calls: release_firmware, init_wait_entry, prepare_to_wait_event, finish_wait, __wake_up

HDA codec / ALSA PCM glue
- Calls: snd_dma_alloc_dir_pages, snd_sgbuf_get_chunk_size, snd_sgbuf_get_addr, _snd_pcm_lib_alloc_vmalloc_buffer, snd_pcm_set_managed_buffer_all, snd_pcm_set_managed_buffer, ktime_get_ts64, ktime_get_real_ts64, snd_pcm_hw_limit_rates, __snd_pcm_lib_xfer, snd_pcm_hw_constraint_minmax, hw_param_interval_c, snd_pcm_capture_avail, snd_pcm_playback_avail, frames_to_bytes, _snd_pcm_stream_lock_irqsave, snd_pcm_stream_unlock_irqrestore, snd_pcm_stream_lock_irq, snd_pcm_stream_unlock_irq, snd_pcm_stream_lock, snd_pcm_stream_unlock, readw, _snd_hdac_read_parm, snd_card_rw_proc_new, _snd_ctl_add_follower, snd_ctl_get_ioffnum, snd_ctl_get_ioffidx, regmap_fields_update_bits_base, regmap_field_update_bits_base, regmap_update_bits_base, regcache_mark_dirty, regcache_sync_region, snd_hdac_regmap_update_raw, snd_hdac_regmap_read_raw, snd_hdac_regmap_write_raw, snd_hdac_regmap_write, snd_hda_get_connections, snd_hdac_codec_write, snd_hdac_codec_read
- Variables: streams ("Playback", "Capture"), dma_buffer_p, dma_area, dma_bytes, bufs, std_sync_id, intervals, masks, trigger_master, report, type_requested, report_delay, constrs, alloc_align, src_kctl, jacktbl, patch_ops

Tegra GPU power-gating mask (BPMP)
- Messages:
  "NVRM: gpu_pg_mask is not supported."
  "NVRM: overlay gpu_pg_mask with module parameter."
  "NVRM: Using default gpu_pg_mask. There's no need to send BPMP MRQ."
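The GSP firmware paths above are loaded through the kernel's firmware loader. A minimal sketch using request_firmware(); the chip-family selection and the load_gsp_firmware() name are illustrative.

    #include <linux/firmware.h>
    #include <linux/device.h>

    /* Sketch: pick a GSP image path by chip family and fetch it. The
     * caller later frees it with release_firmware(*fw). */
    static int load_gsp_firmware(struct device *dev, bool is_ga10x,
                                 const struct firmware **fw)
    {
        const char *path = is_ga10x ? "nvidia/590.48.01/gsp_ga10x.bin"
                                    : "nvidia/590.48.01/gsp_tu10x.bin";
        int ret;

        ret = request_firmware(fw, path, dev);
        if (ret)
            return ret;     /* firmware file missing or unreadable */

        /* ... hand (*fw)->data / (*fw)->size to the GSP bootloader ... */
        return 0;
    }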
- Calls: nv_bpmp_send_mrq
- Messages:
  "NVRM: failed to call bpmp_send_mrq"
  "NVRM: BPMP call for gpu_pg_mask %d failed, rv = %d"
  "NVRM: set gpu_pg_mask %d success"

EGM DSD properties
- Calls: device_property_read_u64
- Variables: gpu_bar_res, egm_node_id
- Properties: "nvidia,egm-pxm", "nvidia,egm-base-pa", "nvidia,egm-size"
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: DSD properties:"
  "NVRM: GPU %04x:%02x:%02x.%x: EGM base PA: 0x%llx"
  "NVRM: GPU %04x:%02x:%02x.%x: EGM size: 0x%llx"
  "NVRM: GPU %04x:%02x:%02x.%x: EGM _PXM: 0x%llx"
  "EGM node id: %d", "EGM base addr: 0x%llx", "EGM size: 0x%llx"
  "NVRM: GPU %04x:%02x:%02x.%x: Cannot get EGM info"

Runtime PM and power domains
- Calls: nv_next_resource, pm_runtime_dont_use_autosuspend, pm_runtime_disable, nv_pci_tegra_register_power_domain, pm_runtime_enable, pm_runtime_set_autosuspend_delay, pm_runtime_use_autosuspend, dev_pm_domain_attach, dev_pm_domain_detach
- Variables: runtime_auto ("9", "10"), pm_domain, bus, ctrl
- DT property: "power-domains"
- Messages:
  "NVRM: Disable runtime PM for PCIe Controller"
  "NVRM: Enable runtime PM for PCIe Controller"
  "NVRM: No dt node associated with this device"
  "NVRM: No power-domains is defined in the dt node"
  "NVRM: Attaching device to GPU power domain"
  "NVRM: Detaching device to GPU power domain"

System sleep state probing
- Calls: os_open_readonly_file, iocb_flags, iov_iter_kvec, os_close_file, iterate_fd, nv_match_dev_state, snd_hdac_is_power_on
- Variables: ki_filp, ki_flags, ki_ioprio, ki_pos, num_read, os_info, audio_pci_dev, card, codec
- Paths / tokens: "/sys/power/mem_sleep", "[s2idle]"

Device open/close and dynamic power (kernel-open/nvidia/nv.c)
- Calls: nvlink_cap_acquire, nv_kthread_q_init, pm_runtime_get_noresume, rm_unref_dynamic_power, rm_ref_dynamic_power, nv_close_device, rm_set_external_kernel_client_count, find_uuid_candidate, nv_open_device, nv_get_cached_uuid, find_gpu_id, nv_dma_maps_swiotlb, nv_report_error
- Variables: should_stop, is_accepting_opens, "nv_open_q", tnvl, minor_num, kn, dev_uuid, dma_remap, primary_vga

Suspend / resume paths
- Calls: nvidia_transition_dynamic_power, rm_transition_dynamic_power, nvidia_resume, nvidia_suspend, nv_resume_devices, nv_suspend_devices, nvidia_modeset_suspend, nv_preempt_user_channels, nv_uvm_suspend, nv_restore_user_channels, nv_uvm_resume, nvidia_modeset_resume, dev_pm_genpd_resume, pm_runtime_allow, nv_power_management, pm_runtime_forbid
- Variables: resume_devices, pm_domain
- Messages:
  "NVRM: GPU suspend through procfs is forbidden with Tegra iGPU"
  "NVRM: restore GPU pm_domain after suspend"
  "NVRM: set GPU pm_domain to NULL before suspend"
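The runtime-PM enable/disable messages above map directly onto the standard pm_runtime helpers. A minimal sketch of the two paths; the 500 ms autosuspend delay is an illustrative value, not the driver's.

    #include <linux/pm_runtime.h>

    /* Sketch of "Enable runtime PM for PCIe Controller". */
    static void enable_controller_runtime_pm(struct device *ctrl)
    {
        pm_runtime_set_autosuspend_delay(ctrl, 500);
        pm_runtime_use_autosuspend(ctrl);
        pm_runtime_enable(ctrl);
    }

    /* Sketch of the inverse, "Disable runtime PM for PCIe Controller". */
    static void disable_controller_runtime_pm(struct device *ctrl)
    {
        pm_runtime_disable(ctrl);
        pm_runtime_dont_use_autosuspend(ctrl);
    }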
- Calls: dev_pm_domain_set, pci_save_state, nv_set_safe_to_mmap_locked, rm_stop_user_channels, rm_restart_user_channels, nv_kthread_q_flush, rm_power_management, nv_pci_count_devices, nv_platform_count_devices
- Variables: nplatform, num_pci_devices, num_platform_devices
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: PreserveVideoMemoryAllocations module parameter is set. System Power Management attempted without driver procfs suspend interface. Please refer to the 'Configuring Power Management Support' section in the driver README."
  "NVRM: GPU %04x:%02x:%02x.%x: GPU is lost, skipping PM event"

RC timer and event handling
- Calls: nv_timer_delete_sync, nv_timer_setup, mod_timer, request_firmware, nv_firmware_for_chip_family, rm_destroy_event_locks, atomic64_set, rm_init_event_locks
- Variables: profiler_context, timer_active, snapshot_timer, rc_timer_enabled, rc_timer, event_data_head, eflags, nvet, event_data_tail, os_private, filp, fw_handle, "all", dataless_event_pending
- Messages:
  "NVRM: stopping rc timer", "NVRM: rc timer stopped", "NVRM: initializing rc timer", "NVRM: rc timer initialized"

Page allocation and user-page registration
- Calls: nv_free_contig_pages, nvos_free_alloc, nvos_create_alloc, nv_alloc_contig_pages, nv_phys_to_dma, nv_iounmap, nv_vm_unmap_pages, nv_map_guest_pages, nv_vm_map_pages, sg_phys
- Variables: will_remap, pte_array, pPrivate, isUserAllocatedMem, page_count, peer_io, import_priv, physical, user_pages, user, guest
- Messages:
  "NVRM: VM: nv_free_pages: 0x%x"
  "NVRM: VM: nv_alloc_pages: %d pages, nodeid %d"
  "NVRM: VM: contig %d cache_type %d"
  "NVRM: failed to allocate vmap() page descriptor table!"
  "NVRM: failed to map pages!"
  "NVRM: VM: nv_unregister_sgt"
  "NVRM: RM is not supporting sg->offset != 0 use case now.!"
  "NVRM: VM: nv_unregister_user_pages: 0x%llx"
  "NVRM: VM: nv_register_user_pages: 0x%llx"

Control device and RC callback
- Calls: rm_cleanup_file_private, nv_free_pages, nv_free_file_private, rm_run_rc_callback
- Variables: attached_gpus, num_attached_gpus, nvptr
- Messages:
  "NVRM: nvidia_ctl_close", "NVRM: nvidia_ctl_open"
  "NVRM: GPU is lost, skipping device timer callbacks"
  "NVRM: %s: Unable to take bottom_half mutex!"
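The rc-timer strings (nv_timer_setup, mod_timer, nv_timer_delete_sync) follow the usual kernel periodic-timer pattern. A self-contained sketch of that pattern; struct nv_rc_timer and the 1-second period are illustrative.

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    struct nv_rc_timer {            /* illustrative placeholder */
        struct timer_list timer;
        bool enabled;
    };

    static void rc_timer_callback(struct timer_list *t)
    {
        struct nv_rc_timer *rc = from_timer(rc, t, timer);

        /* ... run the RC callback; skip work if the GPU is lost ... */
        if (rc->enabled)
            mod_timer(&rc->timer, jiffies + HZ);    /* re-arm */
    }

    static void rc_timer_start(struct nv_rc_timer *rc)
    {
        timer_setup(&rc->timer, rc_timer_callback, 0);
        rc->enabled = true;
        mod_timer(&rc->timer, jiffies + HZ);
    }

    static void rc_timer_stop(struct nv_rc_timer *rc)
    {
        rc->enabled = false;
        timer_delete_sync(&rc->timer);  /* del_timer_sync() pre-6.2 */
    }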
Interrupt service routines
- Calls: rm_isr_bh_unlocked, rm_isr_bh, nvidia_isr_common_bh, rm_gpu_handle_mmu_faults, nv_uvm_event_interrupt, rm_isr, os_get_system_time
- Variables: rm_fault_handling_needed, uvm_handled, rm_handled, found_irq, unhandled, last_unhandled
- Messages:
  "NVRM: GPU is lost, skipping unlocked ISR bottom half"
  "NVRM: GPU is lost, skipping ISR bottom half"
  "NVRM: Going over RM unhandled interrupt threshold for irq %d"
  "NVRM: IRQ number out of valid range"

ioctl handling (nvidia_ioctl)
- Calls: nvidia_ioctl, nvidia_read_card_info, rm_perform_version_check, nv_platform_use_auto_online, nv_dma_buf_export, rm_ioctl
- Variables: arg_size, arg_cmd, arg_ptr, ioc_xfer, arg_copy, adapterStatus, query_intr, numa_memblock_size, use_auto_online, memblock_size, ci, reg_address, reg_size, minor_number, fb_address, fb_size
- Field reference: "nvlfp->attached_gpus" at kernel-open/nvidia/nv.c:2561
- Messages:
  "NVRM: ioctl(0x%x, 0x%x, 0x%x)"
  "NVRM: invalid ioctl XFER structure size!"
  "NVRM: failed to copy in ioctl XFER data!"
  "NVRM: invalid ioctl XFER size!"
  "NVRM: failed to allocate ioctl memory"
  "NVRM: failed to copy in ioctl data!"
  "NVRM: Unable to allocate altstack for ioctl"
  "NVRM: GPU is lost, skipping nvidia_ioctl"
  "NVRM: failed to copy out ioctl data"

poll / close / device teardown
- Calls: nv_is_open_complete, poll_wait, nvidia_ctl_close, nvidia_close_callback, nv_wait_open_complete, rm_get_device_remove_flag, pci_stop_and_remove_bus_device, nv_stop_device, rm_disable_adapter, nv_put_rsync_info, pci_disable_msi, nv_soc_free_irqs, nv_free_msix_irq, pci_disable_msix, rm_shutdown_adapter, os_pci_trigger_flr, nv_alloc_file_private
- Variables: queue, isr_bh_unlocked_mutex
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: GPU is lost, skipping nvidia_poll"
  "NVRM: nvidia_close on GPU with minor number %d"
  "NVRM: Attempting to close unopened minor device %u!"
  "NVRM: Persistence mode is deprecated and will be removed in a future release. Please use nvidia-persistenced instead."
  "NVRM: Trigger FLR!"
  "NVRM: FLR not supported by the device!"
  "NVRM: nvidia_open..."
  "NVRM: failed to allocate file private!"
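The ioctl messages above describe the canonical copy-in pattern: validate the user-supplied size, allocate a kernel buffer, copy the payload. A minimal sketch; copy_ioctl_args() and max_size are illustrative, and a real handler would copy results back with copy_to_user() ("failed to copy out ioctl data").

    #include <linux/uaccess.h>
    #include <linux/slab.h>
    #include <linux/err.h>

    static void *copy_ioctl_args(void __user *uptr, size_t arg_size,
                                 size_t max_size)
    {
        void *arg_copy;

        if (arg_size == 0 || arg_size > max_size)
            return ERR_PTR(-EINVAL);  /* "invalid ioctl XFER size!" */

        arg_copy = kmalloc(arg_size, GFP_KERNEL);
        if (!arg_copy)
            return ERR_PTR(-ENOMEM);  /* "failed to allocate ioctl memory" */

        if (copy_from_user(arg_copy, uptr, arg_size)) {
            kfree(arg_copy);          /* "failed to copy in ioctl data!" */
            return ERR_PTR(-EFAULT);
        }
        return arg_copy;
    }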
Device open path
- Calls: nvidia_ctl_open, find_minor_locked, nv_init_mapping_revocation, nv_try_lock_foreground_open, nv_open_device_for_nvlfp, nv_add_open_file, complete_all, down_trylock, rm_get_adapter_status_external, address_space_init_once, nv_start_device, rm_is_device_sequestered, nv_assert_not_in_gpu_exclusion_list, nv_get_rsync_info, validate_numa_start_state, nv_dev_alloc_stacks
- Variables: deferred_open_nvl, item_scheduled, nvlfp_raw, open_rc, adapter_status, a_ops, f_mapping, power_ref, EnableMSI
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: open() not permitted for excluded %s" ("GPU")
  "NVRM: GPU %04x:%02x:%02x.%x: Open failed as GPU is locked for unbind operation"
  "NVRM: GPU %04x:%02x:%02x.%x: Opening GPU with minor number %d"
  "NVRM: GPU %04x:%02x:%02x.%x: Device in removal process"
  "NVRM: GPU %04x:%02x:%02x.%x: Minor device %u is referenced without being open!"
  "NVRM: GPU %04x:%02x:%02x.%x: Device is currently unavailable"
  "NVRM: open of non-existent GPU with minor number %d"

Interrupt setup (MSI / MSI-X / legacy)
- Calls: rm_is_msix_allowed, nv_init_msix, nv_init_msi, nv_soc_register_irqs, nv_request_msix_irq
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: No interrupts of any type are available. Cannot use this GPU."
  "NVRM: GPU %04x:%02x:%02x.%x: Tried to get IRQ %d, but another driver NVRM: has it and is not sharing it. NVRM: You may want to verify that no audio driver is using the IRQ."
  "NVRM: GPU %04x:%02x:%02x.%x: request_irq() failed (%d)"

Adapter init and worker threads
- Calls: rm_init_adapter, nv_acpi_register_notifier, rm_request_dnotifier_state, nv_uvm_resume_P2P, nv_uvm_drain_P2P, __init_waitqueue_head ("&nvlfp->waitqueue")
- Variables: kthread_init ("nv_queue"), remove_numa_memory_kthread_init ("nv_remove_numa_memory"), has_missing, use_missing
- Messages:
  "NVRM: GPU %04x:%02x:%02x.%x: rm_init_adapter failed, device minor number %d"
  "NVRM: Trigger FLR on Failure!"
  "NVRM: numa memblock size of zero found during device start"

Character device and capabilities
- Calls: register_chrdev_region, os_nv_cap_destroy_entry, os_nv_cap_init, nv_module_resources_init, nv_cap_drv_init
- Names: "driver/nvidia"
- Messages:
  "NVRM: register_chrdev_region() failed for %s!"
  "NVRM: cdev_add() failed for %s!"
  "NVRM: GPU %04x:%02x:%02x.%x: Could not exclude GPU %s because PBI is not supported"
  "NVRM: nv-cap-drv init failed."
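The interrupt-setup strings imply the usual preference order: MSI-X, then MSI, then a shared legacy line (the "another driver has it and is not sharing it" case). A sketch under those assumptions; setup_gpu_interrupts(), gpu_isr, and the vector count of 8 are illustrative.

    #include <linux/pci.h>
    #include <linux/interrupt.h>

    static int setup_gpu_interrupts(struct pci_dev *pdev,
                                    irq_handler_t gpu_isr, void *data)
    {
        int nvec;

        nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSIX);
        if (nvec < 0)
            nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
        if (nvec < 0)   /* PCI_IRQ_INTX is PCI_IRQ_LEGACY pre-6.10 */
            nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
        if (nvec < 0)
            return nvec;    /* "No interrupts of any type are available." */

        /* A legacy INTx line may be shared with e.g. the audio
         * function, hence IRQF_SHARED; an -EBUSY here matches the
         * "another driver ... not sharing it" message above. */
        return request_irq(pci_irq_vector(pdev, 0), gpu_isr,
                           (pdev->msix_enabled || pdev->msi_enabled) ?
                           0 : IRQF_SHARED,
                           "nvidia", data);
    }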
Module init/exit
- Calls: nvlink_drivers_init, nv_init_rsync_info, nv_detect_conf_compute_platform, rm_init_rm, nv_module_state_init, rm_shutdown_rm, nv_destroy_rsync_info, nvlink_drivers_exit, nv_cap_drv_exit, nv_module_resources_exit, nv_module_state_exit, nv_pci_register_driver, nv_platform_register_driver, nv_pci_unregister_driver, nv_platform_unregister_driver, nv_init_page_pools, nv_init_pat_support, nv_destroy_page_pools, nv_teardown_pat_support, nvlink_core_init, nvswitch_init, nvlink_core_exit, nvswitch_exit, nv_kmem_cache_create, kmem_cache_destroy, nv_is_sme_supported, nv_unregister_chrdev, nv_uvm_exit, nv_drivers_exit, nv_module_exit, nv_caps_imex_exit, nv_caps_root_exit, nv_procfs_exit, nv_memdbg_exit, os_is_nvswitch_present, nv_memdbg_init, nv_procfs_init, nv_caps_root_init, nv_caps_imex_init, nv_module_init, nvos_count_devices, nv_drivers_init
- Variables: DmaRemapPeerMmio, "&nv_system_pm_lock", warn_unprobed
- Messages:
  "NVRM: rm_init_rm() failed!"
  "NVRM: No NVIDIA PCI devices found."
  "NVRM: SOC driver registration failed!"
  "NVRM: Applied patches:", "NVRM: Patch #%d: %s"
  "NVRM: NVLink core init failed."
  "NVRM: NVSwitch init failed."
  "NVRM: nvidia_stack_t cache allocation failed."
  "NVRM: nvidia_p2p_page_t cache allocation failed."
  "NVRM: Invalid page table allocation - Number of pages exceeds max value."
  "NVRM: failed to allocate alloc info"
  "NVRM: Invalid page table allocation - requested size overflows."
  "NVRM: failed to allocate page table"
  "NVRM: failed to initialize procfs."
  "NVRM: failed to initialize capabilities."
  "NVRM: failed to initialize IMEX channels."
  "NVRM: failed to initialize module."
  "NVRM: No NVIDIA GPU found."
  "NVRM: Failed to probe Tegra Display platform device."
  "NVRM: This kernel is not compatible with Tegra Display."
  "NVRM: The NVIDIA probe routine was not called for %d device(s)."
  "NVRM: This can occur when another driver was loaded and NVRM: obtained ownership of the NVIDIA device(s)."
  "NVRM: Try unloading the conflicting kernel module (and/or NVRM: reconfigure your kernel without the conflicting NVRM: driver(s)), then try loading the NVIDIA kernel module NVRM: again."
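The nvidia_stack_cache / nvidia_p2p_page_cache strings point at ordinary slab caches. A minimal sketch of that setup; the stack_size parameter stands in for sizeof(nvidia_stack_t), which is opaque here.

    #include <linux/slab.h>

    static struct kmem_cache *nvidia_stack_cache;

    static int nv_stack_cache_init(size_t stack_size)
    {
        nvidia_stack_cache = kmem_cache_create("nvidia_stack_cache",
                                               stack_size, 0, 0, NULL);
        if (!nvidia_stack_cache)
            return -ENOMEM;  /* "nvidia_stack_t cache allocation failed." */
        return 0;
    }

    static void nv_stack_cache_exit(void)
    {
        kmem_cache_destroy(nvidia_stack_cache);
    }

Objects would then come from kmem_cache_alloc(nvidia_stack_cache, GFP_KERNEL) and return via kmem_cache_free().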
  "NVRM: No NVIDIA devices probed."
  "NVRM: The NVIDIA probe routine failed for %d device(s)."
  "NVRM: None of the NVIDIA devices were initialized."
- Calls: nv_registry_keys_init, nv_report_applied_patches, nv_uvm_init, nv_register_chrdev
- Names: "nvidiactl", nvidia_p2p_page_cache, nvidia_stack_cache
- Message: "NVRM: loading %s"

UVM interface (kernel-open/nvidia/nv_uvm_interface.c)
- CCSL calls: rm_gpu_ops_ccsl_log_encryption, rm_gpu_ops_ccsl_increment_iv, rm_gpu_ops_ccsl_query_message_pool, rm_gpu_ops_ccsl_sign, rm_gpu_ops_ccsl_decrypt, rm_gpu_ops_ccsl_encrypt_with_iv, rm_gpu_ops_ccsl_encrypt, rm_gpu_ops_ccsl_rotate_iv, rm_gpu_ops_ccsl_rotate_key, rm_gpu_ops_ccsl_context_clear, nvUvmFreeSafeStack, rm_gpu_ops_ccsl_context_init
- Channel / paging calls: rm_gpu_ops_report_fatal_error, rm_gpu_ops_paging_channel_push_stream, rm_gpu_ops_paging_channels_unmap, rm_gpu_ops_paging_channels_map, rm_gpu_ops_paging_channel_destroy, rm_gpu_ops_paging_channel_allocate, rm_gpu_ops_report_non_replayable_fault, rm_gpu_ops_get_channel_resource_ptes, rm_gpu_ops_stop_channel, rm_gpu_ops_release_channel, rm_gpu_ops_bind_channel_resources, rm_gpu_ops_retain_channel, rm_gpu_ops_get_external_alloc_phys_addrs, rm_gpu_ops_get_external_alloc_ptes, rm_gpu_ops_p2p_object_destroy, rm_gpu_ops_p2p_object_create, rm_gpu_ops_get_nvlink_info, setUvmEvents, on_each_cpu
- Fault / access-counter calls: rm_gpu_ops_access_bits_dump, rm_gpu_ops_access_bits_buffer_free, rm_gpu_ops_access_bits_buffer_alloc, rm_gpu_ops_disable_access_cntr, rm_gpu_ops_destroy_access_cntr_info, rm_gpu_ops_toggle_prefetch_faults, rm_gpu_ops_flush_replayable_fault_buffer, rm_gpu_ops_get_non_replayable_faults, rm_gpu_ops_has_pending_non_replayable_faults, nvUvmDestroyFaultInfoAndStacks, rm_gpu_ops_enable_access_cntr, rm_gpu_ops_init_access_cntr_info, rm_gpu_ops_init_fault_info, rm_gpu_ops_own_page_fault_intr, rm_gpu_ops_get_ecc_info, rm_gpu_ops_get_fb_info, rm_gpu_ops_free_duped_handle, rm_gpu_ops_dup_memory, rm_gpu_ops_dup_allocation, rm_gpu_ops_unset_page_directory, rm_gpu_ops_set_page_directory
- Variables: uvmCslContext, nvidia_stack, pushStreamSp, channel, externalMappingInfo, gpuExternalPhysAddrInfo, gpuExternalMappingInfo, uvmUuid, events, importedEvents, isr_bh_sp, pFaultBuffer, nonReplayable, isr_sp, replayable, cslCtx, dmaAddress
- Field references: "uvmUuid.uuid" at kernel-open/nvidia/nv_uvm_interface.c:1250 and :1274
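The "nvidiactl" name above accompanies the register_chrdev_region / cdev_add pair listed earlier. A minimal sketch of that registration; the static cdev and the wrapper name are illustrative.

    #include <linux/cdev.h>
    #include <linux/fs.h>

    static struct cdev nv_cdev;

    static int nv_register_chrdev_sketch(dev_t dev, unsigned int count,
                                         const struct file_operations *fops)
    {
        int ret;

        ret = register_chrdev_region(dev, count, "nvidiactl");
        if (ret)
            return ret;     /* "register_chrdev_region() failed for %s!" */

        cdev_init(&nv_cdev, fops);
        ret = cdev_add(&nv_cdev, dev, count);
        if (ret) {          /* "cdev_add() failed for %s!" */
            unregister_chrdev_region(dev, count);
            return ret;
        }
        return 0;
    }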
- Further rm_gpu_ops calls: rm_gpu_ops_service_device_interrupts_rm, rm_gpu_ops_get_gpu_info, rm_gpu_ops_query_ces_caps, rm_gpu_ops_query_caps, rm_gpu_ops_channel_destroy, rm_gpu_ops_channel_allocate, rm_gpu_ops_tsg_destroy, rm_gpu_ops_tsg_allocate, rm_gpu_ops_memory_cpu_ummap, rm_gpu_ops_memory_cpu_map, rm_gpu_ops_pma_free_pages, rm_gpu_ops_memory_free, rm_gpu_ops_pma_pin_pages, rm_gpu_ops_pma_alloc_pages, rm_gpu_ops_pma_unregister_callbacks, rm_gpu_ops_pma_register_callbacks, rm_gpu_ops_get_pma_object, rm_gpu_ops_get_p2p_caps, rm_gpu_ops_memory_alloc_sys, rm_gpu_ops_memory_alloc_fb, rm_gpu_ops_address_space_destroy, rm_gpu_ops_address_space_create, rm_gpu_ops_dup_address_space, rm_gpu_ops_device_destroy, rm_gpu_ops_device_create, rm_gpu_ops_destroy_session, rm_gpu_ops_create_session, nvidia_dev_unblock_gc6, nvidia_dev_get_pci_info, nvidia_dev_block_gc6, rm_gpu_ops_destroy_fault_info, forceGlobalStack
- Variables: tsg, callbackData, p2pCapsParams, platformInfo, atsSupported, confComputingEnabled, gpuInfo, newEvents

NVLink capabilities (kernel-open/nvidia/nvlink_caps.c)
- Calls: nvlink_print, nv_cap_init, nv_cap_create_file_entry, nvlink_cap_exit, nv_cap_destroy_entry, nv_cap_close_fd, nv_cap_validate_and_dup_fd, nv_ktime_get_raw_ns
- Variables: fabric_mgmt, dup_fd, capability_fds
- Names: "fabric-mgmt"
- Messages:
  "Invalid path: %s"
  "Failed to initialize capabilities"
  "Failed to create fabric-mgmt entry"
  "Failed to validate the fabric mgmt capability"
  "Unknown capability specified"

NVLink Linux glue (kernel-open/nvidia/nvlink_linux.c)
- Calls: nvlink_free, __printk_ratelimit, dbg_breakpoint, nv_sleep_ms
- Variables: hLock, arglist, chars_written
- Field reference: "dest" at kernel-open/nvidia/nvlink_linux.c:521
- Messages:
  "Failed to allocate sema!"
  "NVLink: Assertion failed!"
  "NVLink: requested sleep duration %d msec exceeded %d msec"
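The nv_sleep_ms warning above pairs a rate-limited printk with a sleep cap. A minimal sketch in that spirit; NV_MAX_SLEEP_MS and the helper name are illustrative, not the driver's actual limit or symbol.

    #include <linux/delay.h>
    #include <linux/printk.h>

    #define NV_MAX_SLEEP_MS 1000    /* assumed cap for illustration */

    static void nv_sleep_ms_sketch(int ms)
    {
        /* printk_ratelimit() keeps a chatty caller from flooding dmesg. */
        if (ms > NV_MAX_SLEEP_MS && printk_ratelimit())
            pr_warn("NVLink: requested sleep duration %d msec exceeded %d msec\n",
                    ms, NV_MAX_SLEEP_MS);
        msleep(ms);
    }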