[Binary string-table residue: concatenated diagnostic strings and symbol names extracted from a compiled NVIDIA DisplayPort/HDMI display-driver library (DP device detection, MST sideband messaging, EDID parsing, link training, and the HDMI packet library). No document text is recoverable from this section.]
length incorrect. Write will be capped to max allowable bytes**HdmiPacketLibrary: ERROR - input AVI packet length incorrect. Write will be capped to max allowable bytes*call to hdmiWriteAviPacket9171*call to hdmiPacketCtrl9171*HdmiPacketLibrary: Generic and VSI registers removed in C871 HW. Call NvHdmiPkt_SetupAdvancedInfoframe to use one of the generic registers!**HdmiPacketLibrary: Generic and VSI registers removed in C871 HW. Call NvHdmiPkt_SetupAdvancedInfoframe to use one of the generic registers!*call to disableInfoframeC871*regData*numOfInfoframes*pktBytes**pktBytes*remainingBufSize*ifNum*call to isInfoframeOffsetAvailable*HdmiPacketLibrary: MoreInfoframe: Client requested overwriting an active infoframe**HdmiPacketLibrary: MoreInfoframe: Client requested overwriting an active infoframe*dwordNum*HdmiPacketLibrary: MoreInfoframe: Sent infoframe of length %d bytes, transmit ctrl 0x%x at offset %d head=%x subdev=%d**HdmiPacketLibrary: MoreInfoframe: Sent infoframe of length %d bytes, transmit ctrl 0x%x at offset %d head=%x subdev=%d*bWaitForIdle*HdmiPacketLibrary: MoreInfoframe: timeout waiting for infoframe to get disabled**HdmiPacketLibrary: MoreInfoframe: timeout waiting for infoframe to get disabled*HdmiPacketLibrary: MoreInfoframe: Clients must ideally provide timer callbacks to wait for enable/disable infoframes**HdmiPacketLibrary: MoreInfoframe: Clients must ideally provide timer callbacks to wait for enable/disable infoframes*typeC871*HdmiPacketLibrary: ERROR - translatePacketType wrong packet type for class C871: %0x.**HdmiPacketLibrary: ERROR - translatePacketType wrong packet type for class C871: %0x.*reducedType*hblank**pT*adj_rr_x1M*act_v_blank_time*vbi*act_v_blank_lines*total_v_lines*v_back_porch_est*v_back_porch*total_pixels*act_pixel_freq_hz*act_pixel_freq_khz*HSyncPol*VSyncPol*HBorder*aspect*CVT-RB3:%dx%dx%dHz**CVT-RB3:%dx%dx%dHz*act_vbi_lines*CVT-RB2:%dx%dx%dHz**CVT-RB2:%dx%dx%dHz*dwXCells*call to 
getCVTVSync*dwVSyncWidth*dwVBILines*dwPClk*CVT-RB:%dx%dx%dHz**CVT-RB:%dx%dx%dHz*dwHPeriodEstimate_NUM*dwHPeroidEstimate_DEN*dwVSyncBP*dwIdealDutyCycle_DEN*dwIdealDutyCycle_NUM*dwHBlankCells*dwHSyncCells*CVT:%dx%dx%dHz**CVT:%dx%dx%dHz*bpc8*pDisplayId20Info*interface_features*pSection*p6bytesDescriptor**p6bytesDescriptor*horizontal_active_pixels**horizontal_active_pixels*vertical_active_lines**vertical_active_lines*p7bytesDescriptor**p7bytesDescriptor*descriptor_6_bytes*p8bytesDescriptor**p8bytesDescriptor*descriptor_7_bytes*call to NvTiming_CalcCVT*call to NvTiming_CalcCVT_RB*call to NvTiming_CalcCVT_RB2*call to NvTiming_CalcCVT_RB3*multiplier**pVoidDescriptor*DID20-Type9:#%3d:%dx%dx%3d.%03dHz/%s**DID20-Type9:#%3d:%dx%dx%3d.%03dHz/%s*I**I**P*DID20-Type9RB%d:#%3d:%dx%dx%3d.%03dHz/%s**DID20-Type9RB%d:#%3d:%dx%dx%3d.%03dHz/%s*call to NvTiming_EnumDMT*pTimingCode*call to NvTiming_EnumCEA861bTiming*call to NvTiming_EnumHdmiVsdbExtendedTiming*call to NvTiming_EnumStdTwoBytesCode*pTiming2ByteCode*pixel_clock**pixel_clock*VBorder*active_image_pixels**active_image_pixels*active_image_lines**active_image_lines*blank_pixels**blank_pixels*blank_lines**blank_lines*sync_width_pixels**sync_width_pixels*sync_width_lines**sync_width_lines*call to greatestCommonDenominator*gdc*call to NvTiming_CalcRR*call to NvTiming_CalcRRx1k*pSectionBytes*ctaBlock**ctaBlock*cta_data**cta_data*pcta_data**pcta_data*call to parseCta861DataBlockInfo*cta**p861Info*call to parseCta861VsdbBlocks**pDisplayIdInfo*call to parseCta861VsvdbBlocks*call to parseCta861HfScdb*call to parse861bShortTiming*call to parse861bShortYuv420Timing*cta861_info*call to parseCta861HdrStaticMetadataDataBlock*call to 
parseCta861NativeOrPreferredTiming*pVendorSpecific**pVendorSpecific**vendor_id*ieee_oui*vesaVsdb*data_struct_type*vendor_specific_data**vendor_specific_data*color_space_and_eotf*overlapping*pixels_overlapping_count*multi_sst*pass_through_integer*pass_through_integer_dsc*pass_through_fractional*pass_through_fraction_dsc*pBrightnessLuminanceRangeBlock**pBrightnessLuminanceRangeBlock*pluminanceRanges**pluminanceRanges*min_sdr_luminance*max_sdr_luminance*max_boost_sdr_luminance*pAdaptiveSyncBlock**pAdaptiveSyncBlock*descriptorCnt*descriptors**descriptors*minRR*max_refresh_rate*maxRR*total_adaptive_sync_descriptor*operation_range_info*adaptive_sync_range*duration_inc_flicker_perf*seamless_not_support*duration_dec_flicker_perf*max_duration_inc*min_rr*max_rr*max_duration_dec*pContainerIdBlock**pContainerIdBlock*pContainerId**pContainerId*container_id**container_id*data3*data4*data5**data5*pTiledDisplayBlock**pTiledDisplayBlock*pTileTopo**pTileTopo*bSingleEnclosure*bHasBezelInfo*multi_tile_behavior*single_tile_behavior*topo_loc_high*topo_low*loc_low*bottom*topo_id*product_id**product_id**serial_number*pInterfaceFeatures**pInterfaceFeatures*pInterfaceFeaturesBlock**pInterfaceFeaturesBlock*interface_color_depth_rgb*interface_color_depth_ycbcr444*interface_color_depth_ycbcr422*interface_color_depth_ycbcr420*yuv420_min_pclk*audio_capability*support_48khz*support_44_1khz*support_32khz*color_space_and_eotf_1*colorspace_eotf_combination**colorspace_eotf_combination*additional_color_space_and_eotf_count*additional_color_space_and_eotf**additional_color_space_and_eotf*pRangeLimitsBlock**pRangeLimitsBlock*rangeLimits*pixel_clock_min**pixel_clock_min*pclk_min*pixel_clock_max**pixel_clock_max*pclk_max*vfreq_min*dynamic_video_timing_range_support*vfreq_max*seamless_dynamic_video_timing_change*pTiming10Block**pTiming10Block*descriptorCount*call to getExistedTimingSeqNumber*startSeqNumber*call to 
parseDisplayId20Timing10Descriptor*newTiming*DID20-Type10:#%3d:%dx%dx%3d.%03dHz/%s**DID20-Type10:#%3d:%dx%dx%3d.%03dHz/%s*DID20-Type10RB%d:#%3d:%dx%dx%3d.%03dHz/%s**DID20-Type10RB%d:#%3d:%dx%dx%3d.%03dHz/%s*call to assignNextAvailableDisplayId20Timing*pTiming9Block**pTiming9Block*call to parseDisplayId20Timing9Descriptor*pTiming8Block**pTiming8Block*codeCount*call to parseDisplayId20Timing8Descriptor*DID20-Type8:#%3d:%dx%dx%3d.%03dHz/%s**DID20-Type8:#%3d:%dx%dx%3d.%03dHz/%s*pTiming7Block**pTiming7Block*call to parseDisplayId20Timing7Descriptor*DID20-Type7:#%2d:%dx%dx%3d.%03dHz/%s**DID20-Type7:#%2d:%dx%dx%3d.%03dHz/%s*pDisplayParamBlock**pDisplayParamBlock*pDisplayParam**pDisplayParam*horizontal_image_size**horizontal_image_size*h_image_size_micro_meter*vertical_image_size**vertical_image_size*v_image_size_micro_meter*horizontal_pixel_count**horizontal_pixel_count*h_pixels*vertical_pixel_count**vertical_pixel_count*v_pixels*scan_orientation*audio_speakers_integrated*color_map_standard*primaries**primaries*primary_color_1_chromaticity*color_bits_mid*primary_color_2_chromaticity*primary_color_3_chromaticity*white*white_point_chromaticity*max_luminance_full_coverage**max_luminance_full_coverage*native_max_luminance_full_coverage*max_luminance_10_percent_rectangular_coverage**max_luminance_10_percent_rectangular_coverage*native_max_luminance_10_percent_rect_coverage*min_luminance**min_luminance*native_min_luminance*native_luminance_info*color_depth_and_device_technology*native_color_depth*device_technology*device_theme_Preference*gamma_x100*pProductIdBlock**pProductIdBlock*pProductIdentity**pProductIdentity**vendor*product_code**product_code*week*call to NVMISC_STRNCPY*product_string*product_name_string**product_string**product_name_string*call to parseDisplayId20ProductIdentity*call to parseDisplayId20DisplayParam*call to parseDisplayId20Timing7*call to parseDisplayId20Timing8*call to parseDisplayId20Timing9*call to parseDisplayId20Timing10*call to 
parseDisplayId20RangeLimit*call to parseDisplayId20DisplayInterfaceFeatures*call to parseDisplayId20Stereo*call to parseDisplayId20TiledDisplay*call to parseDisplayId20ContainerId*call to parseDisplayId20AdaptiveSync*call to parseDisplayId20ARVRHMD*call to parseDisplayId20ARVRLayer*call to parseDisplayId20BrightnessLuminanceRange*call to parseDisplayId20VendorSpecific*call to parseDisplayId20CtaData**pDataBlock*call to parseDisplayId20DataBlock*valid_data_blocks*product_id_present*parameters_present*display_param*type7Timing_present*type8Timing_present*type9Timing_present*dynamic_range_limit_present*interface_feature_present*stereo_interface_present*tiled_display_present*container_id_present*type10Timing_present*adaptive_sync_present*arvr_hmd_present*arvr_layer_present*brightness_luminance_range_present*vendor_specific_present*cta_data_present*call to computeDisplayId20SectionCheckSum*call to parseDisplayId20SectionDataBlocks*call to getPrimaryUseCase*call to NvTiming_DisplayID2ValidationMask*pDisplayId**pSection*call to parseDisplayId20BaseSection*extension_count*call to parseDisplayId20ExtensionSection*extensionIndex*call to updateColorFormatForDisplayId20Timings*DMT-RB2:%dx%dx%dHz**DMT-RB2:%dx%dx%dHz*DMT-RB:%dx%dx%dHz**DMT-RB:%dx%dx%dHz*DMT:%dx%dx%dHz**DMT:%dx%dx%dHz*call to NvTiming_CalcDMT_RB*call to NvTiming_CalcDMT*call to NvTiming_CalcDMT_RB2*DMT:#%d:%dx%dx%dHz**DMT:#%d:%dx%dx%dHz*pOpaqueWorkarea*pBitsPerPixelX16*pWorkarea**pWorkarea*call to _validateInput*bits_per_component*linebuf_depth*block_pred_enable*multi_tile*dsc_version_minor*pic_width*pic_height*slice_height*slice_width*drop_mode*peak_throughput_mode0*peak_throughput_mode1*convert_rgb*native_422*simple_422*native_420*bits_per_pixel*minSliceCount*protocolOverhead*dscOverhead*eDP*call to DSC_AlignDownForBppPrecision*call to _calculateEffectiveBppForDSC*eff_bpp*call to DSC_PpsDataGen*LogicLaneCount*BytePerLogicLane*BitPerSymbol*slicewidth*call to 
DSC_GetSliceCountMask*gpu_slice_count_mask*common_slice_count_mask*call to DSC_GetPeakThroughputMps*peak_throughput_mps*call to DSC_GetMinSliceCountForMode*chunkSymbols*totalSymbolsPerLane*totalSymbols*sliceArrayCount*gpuSliceCountMask*peakThroughPutIndex*rejectSliceCountMask*localDscInfo*call to DSC_SliceCountMaskforSliceNum*validSliceNum**validSliceNum*minSliceCountOut*pPpsOut**pPpsOut*call to DSC_PpsCalc*call to DSC_PpsConstruct*call to DSC_PpsCalcBase*call to DSC_PpsCalcSliceParams*call to DSC_PpsCalcRcInitValue*call to Dsc_PpsCalcHeight*call to DSC_PpsCalcRcParam*call to DSC_PpsCheckSliceHeight*call to DSC_PpsCalcExtraBits*call to DSC_PpsCalcBpg*call to DSC_PpsCalcScaleInterval*slicew*groups_per_line*sliceMask*minSliceCountLocal*call to DSC_GetHigherSliceCount**pPps*call to DSC_GenerateDataFromPPS*dsc_version_major*pps_identifier*vbr_enable*initial_xmit_delay*initial_dec_delay*initial_scale_value*scale_increment_interval*scale_decrement_interval*first_line_bpg_offset*nfl_bpg_offset*slice_bpg_offset*initial_offset*final_offset*flatness_min_qp*flatness_max_qp*rc_model_size*rc_edge_factor*rc_quant_incr_limit0*rc_quant_incr_limit1*rc_tgt_offset_hi*rc_tgt_offset_lo*rc_buf_thresh**rc_buf_thresh*range_min_qp**range_min_qp*range_max_qp**range_max_qp*range_bpg_offset**range_bpg_offset*second_line_bpg_offset*nsl_bpg_offset*second_line_offset_adj*flatness_det_thresh*ofs_und6**ofs_und6*ofs_und7**ofs_und7*ofs_und10**ofs_und10*ofs_und8**ofs_und8*ofs_und12**ofs_und12*ofs_und15**ofs_und15*final_scale*uncompressedBpgRate*ub_BpgOfs*firstLineBpgOfs*secondLineBpgOfs*bitsPerPixel*groups_total*call to DSC_PpsCalcComputeOffset*maxOffset*rbsMin*hrdDelay*xmit_delay*muxWordSize*extra_bits*sliceBits*num_extra_mux_bits*allignDownForBppPrecision*monitor_name**monitor_name*vendor_name**vendor_name*vendorId**vendorId*product_name**product_name*prepend_vendor*call to RemoveTrailingWhiteSpace*call to 
RemoveNonPrintableCharacters**pLimit*max_pclk_10khz*h_rate_hz*pclk10khz*pRangeLimit*pEDIDBuffer*CommonEDIDBuffer**CommonEDIDBuffer**pEDIDBuffer*commonEDIDBufferSize*edidBufferIndex*call to NvTiming_CalculateEDIDCRC32*call to calculateCRC32*pProductName**svr_vfpdb*call to IsPrintable*call to IsWhiteSpace*call to parseEdidDetailedTimingDescriptor*call to assignNextAvailableTiming*call to parseEdidCvt3ByteDescriptor*call to parseEdidStandardTimingDescriptor*DetailedTimingDesc**DetailedTimingDesc*pLdd**pLdd*max_v_rate_offset*min_v_rate_offset*max_h_rate_offset*min_h_rate_offset**pExt*ctaDTD_Offset*call to get861ExtInfo*ctaBlockTag**pData_collection*ctaPayload*extnCount**pDTD*pDisplayid**pDisplayid*pDID2Header**pDID2Header*call to parseDisplayId20EDIDExtDataBlocks*bAllZero**pHeader*call to parseDisplayIdBlock*pEI*call to NvTiming_GetEdidTiming*dwStatus*dwNativeIndex*call to RRx1kToPclk*call to RRx1kToPclk1khz*Timing*call to NvTiming_GetHDMIStereoTimingFrom2DTiming*call to NvTiming_GetEdidTimingEx*call to NvTiming_GetEdidTimingExWithPclk*pEdidTiming**pEdidTiming*native_cta*call to getHighestPrioritySVRIdx*kth*map0*ceaIndex*preferred_cta*preferred_displayid_dtd*preferred_dtd1*dtd1*map1*map2*map3*map4*minHeight*maxHeight*pCVT**pCVT*tempHeight*cvt*cvtTiming*call to NvTiming_IsRoundedRREqual*pDtdIndex*dtdIndex*bpc6*bpc10*bpc12*bpc16*call to updateHDMILLCDeepColorForTiming*call to NvTiming_IsTimingExactEqual*call to NvTiming_IsTimingRelaxedEqual*call to updateColorFormatForDisplayId20ExtnTimings*call to updateBpcForTiming*call to updateColorFormatForDisplayIdExtnTimings*call to NvTiming_GetCEA861TimingIndex*call to isMatchedCTA861Timing*call to 
getCEA861TimingAspectRatio*manuf_id*video_interface*analog_data*screen_size_x*screen_size_y*screen_aspect_x*screen_aspect_y*gamma*Chromaticity**Chromaticity*cc_red_x*cc_red_y*cc_green_x*cc_green_y*cc_blue_x*cc_blue_y*cc_white_x*cc_white_y*established_timings_1_2*manufReservedTimings*standard_timings**standard_timings*wStandardTimingID**wStandardTimingID*total_extensions*checksum_ok*call to parseEdidDetailedTiming*call to parseEdidLongDisplayDescriptor*call to parseCta861HfEeodb*call to parse861ExtDetailedTiming*call to parseCta861VideoFormatDataBlock*call to parseCta861DIDType7VideoTimingDataBlock*call to parseCta861DIDType8VideoTimingDataBlock*call to parseCta861DIDType10VideoTimingDataBlock*call to parseVTBExtension*call to getDisplayId20EDIDExtInfo*call to getDisplayIdEDIDExtInfo*u4*display_interface*ycbcr444_depth*support_8b*display_interface_features*ycbcr422_depth*call to parseEdidCvtTiming*call to parseEdidStandardTiming*call to parseEdidEstablishedTiming*call to isMatchedStandardTiming*call to isMatchedEstablishedTiming*call to getEdidHDM1_4bVsdbTiming*call to prioritizeEdidHDMIExtTiming*call to updateColorFormatAndBpcTiming*min_v_rate*max_v_rate*min_h_rate*max_h_rate*max_pclk_MHz*timing_support*gtf2*C*K*J*M*pixel_clock_adjustment*max_active_pixels_per_line*aspect_supported*aspect_preferred*blanking_support*scaling_support*preferred_refresh_rate*color_point*pColorPoint*wp1_index*wp1_x*wp1_y*wp1_gamma*wp2_index*wp2_x*wp2_y*wp2_gamma*std_timing*pStdTiming**std_timing*color_man*pColorMan*red_a3*red_a2*green_a3*green_a2*blue_a3*blue_a2*pCVT_3byte*addressable_lines*aspect_ratio*preferred_vert_rates*supported_vert_rates*est3*pEstTiming*timing_byte**timing_byte*hvisible*vvisible*dwTotalPixels*EDID-Detailed:%dx%dx%d.%03dHz%s**EDID-Detailed:%dx%dx%d.%03dHz%s*/i**/i*pSTI*EDID-STD(DMT):%dx%dx%dHz**EDID-STD(DMT):%dx%dx%dHz*EDID-STD(CVT):%dx%dx%dHz**EDID-STD(CVT):%dx%dx%dHz*call to 
NvTiming_CalcGTF*EDID-STD(GTF):%dx%dx%dHz**EDID-STD(GTF):%dx%dx%dHz*EDID-EST(VESA):%dx%dx%dHz**EDID-EST(VESA):%dx%dx%dHz*pEST*EDID-EST(III):%dx%dx%dHz**EDID-EST(III):%dx%dx%dHz*bHeader**bHeader*rawData*pVer**pRawInfo*call to getExistedCTATimingSeqNumber*startSeqNum*did_type10_data_block**did_type10_data_block*CTA861-T10:#%3d:%dx%dx%3d.%03dHz/%s**CTA861-T10:#%3d:%dx%dx%3d.%03dHz/%s*CTA861-T10RB%d:#%3d:%dx%dx%3d.%03dHz/%s**CTA861-T10RB%d:#%3d:%dx%dx%3d.%03dHz/%s*t10db_idx*did_type8_data_block**did_type8_data_block*CTA861-T8:#%3d:%dx%dx%3d.%03dHz/%s**CTA861-T8:#%3d:%dx%dx%3d.%03dHz/%s*t8db_idx*did_type7_data_block**did_type7_data_block*pT7Descriptor**pT7Descriptor*bpcs*CTA861-T7:#%3d:%dx%dx%3d.%03dHz/%s**CTA861-T7:#%3d:%dx%dx%3d.%03dHz/%s*t7db_idx*pVsvdb*pHdr10PlusInfo**pHdr10PlusInfo*pHdmiForum**pHdmiForum*remainingSize*max_TMDS_char_rate*threeD_Osd_Disparity*dual_view*independent_View*lte_340Mcsc_scramble*ccbpci*cable_status*rr_capable*scdc_present*dc_30bit_420*dc_36bit_420*dc_48bit_420*uhd_vic*max_FRL_Rate*fapa_start_location*allm*fva*cnmvrr*cinemaVrr*m_delta*qms*fapa_end_extended*vrr_min*vrr_max*dsc_10bpc*dsc_12bpc*dsc_16bpc*dsc_All_bpp*dsc_Native_420*dsc_1p2*qms_tfr_min*qms_tfr_max*dsc_MaxSlices*dsc_MaxPclkPerSliceMHz*dsc_Max_FRL_Rate*dsc_totalChunkKBytes*pVsdbInfo**pVsdbInfo*pMsftVsdbPayload*containerId**containerId*desktopUsage*thirdPartyUsage*primaryUseCase*vsdbInfo*pNvda**pNvda*vsdbVersion*supportsVrr*minRefreshRate*pStereoStructureMask*pSideBySideHalfDetail*pM**pM*Supports50Hz*Supports60Hz*pMapSz*vendorDataSize*pHdmiLLC**pHdmiLLC*DataSz**Data*pHDMIVideo**pHDMIVideo*call to AddModeToSupportMap*pVicList**HDMI_VIC*AllVicStructMask*AllVicIdxMask*AllVicDetail*StereoStructureMask*pMultiListEntry*pHdmiLlc*addrA*addrB*addrC*addrD*supports_AI*dc_48_bit*dc_36_bit*dc_30_bit*dc_y444*dual_dvi*max_tmds_clock*latency_field_present*i_latency_field_present*hdmi_video_present*cnc3*cnc2*cnc1*cnc0*pExtTiming*call to isHdmi3DStereoType**pExtTiming*HDMI3D*VBlank*call to 
SetActiveSpaceForHDMI3DStereo*VActiveSpace**VActiveSpace*HBlank**pSdp**pInfoFrame*firstLast*sequenceIndex*metadataBytes**metadataBytes*optionalBytes**optionalBytes*Metadata**Metadata*RetCode*rsvd_byte6*rsvd_byte7*rsvd_byte8*rsvd_byte9*rsvd_byte10*top_bar_low*top_bar_high*bottom_bar_low*bottom_bar_high*left_bar_low*left_bar_high*right_bar_low*right_bar_high*byte14*byte15*video_format_id*ridIdx*rid*frame_rate*pic_aspect_ratio*it_content*it_content_type*pixelRepeat*pixel_repeat*call to NvTiming_MaxFrameWidth*CTA-861G:#%3d:%dx%dx%3d.%03dHz/%s**CTA-861G:#%3d:%dx%dx%3d.%03dHz/%s*aspect_x*aspect_y*ext_tag*total_svd*total_sad*total_ssd*ieee_id*vendor_data_size*vfdb**vfdb*vfd_len*ntsc*y420*total_vfd*video_format_desc**video_format_desc*total_vfdb*video_capability*VCDB*total_svr**svd_y420vdb*total_y420vdb*map_y420cmdb**map_y420cmdb*total_y420cmdb*y420cmdb*hdr_static_metadata*byte4*byte5*vsvdb**vsvdb*total_vsvdb*native_video_resolution_db*native_svr*NVRDB*img_size*sz_prec*image_size**image_size*dsc_pt*t7_m*total_descriptors*total_did_type7db*tcs*t8y420*code_type*total_did_type8db*t10_m*total_did_type10db*hfscdb**hfscdb*hfscdbSize*SCDB*hfeeodb*HF_EEODB*total_vsdb**p861info*basic_caps*dtd_offset*call to parseEdidHDMILLCTiming*HDMI3DSupported*vsdbData*call to parseEdidHdmiForumVSDB*pTotalEdidExtensions*pDvInfo**pDvInfo*pDisplayID20**pDisplayID20*call to parseCta861DvStaticMetadataDataBlock*dv_static_metadata*call to parseCta861Hdr10PlusDataBlock*hdr10Plus**pHdmiLlc*pHfvs**pHfvs*pNvVsdb**pNvVsdb*pMsftVsdb**pMsftVsdb*vendor_specific*call to parseEdidHdmiLlcBasicInfo*H14B_VSDB*H20_HF_VSDB*call to parseEdidNvidiaVSDBBlock*nvda_vsdb*call to 
parseEdidMsftVsdbBlock*msft_vsdb*effective_tmds_clock*vsvdbVersion*pDvType0**pDvType0*VSVDB_version*supports_2160p60hz*supports_YUV422_12bit*supports_global_dimming*dm_version*target_min_luminance*target_max_luminance*supports_backlight_control*backlt_min_luma*interface_supported_by_sink*supports_10b_12b_444*pDvType1**pDvType1*pvDvType1_1**pvDvType1_1*pDvType2**pDvType2*parity**pHdrInfo*trad_gamma_sdr_eotf*trad_gamma_hdr_eotf*smpte_st_2084_eotf*future_eotf*static_metadata_type*max_cll*max_fall*min_cll*nativeSvr*totalSvr*svr*pSvr*extKth*preferTiming*isMatch*pYuv420Vic*pVdb*bFound*CTA-861G:#%3d:%5dx%4dx%3d.%03dHz/%s**CTA-861G:#%3d:%5dx%4dx%3d.%03dHz/%s*eachOfDescSize*pVFDOneByte**pVFDOneByte*call to isVFDRefreshRate*call to NvTiming_CalcOVT*CTA861-OVT%d:#%3d:%dx%dx%3d.%03dHz/%s**CTA861-OVT%d:#%3d:%dx%dx%3d.%03dHz/%s*vfdb_idx*pVic*bytePos*bitPos*pEIA861*CTA-861Long:%5dx%4dx%3d.%03dHz/%s**CTA-861Long:%5dx%4dx%3d.%03dHz/%s*factor*vfd*bBFR50*bBFR60*bFRFactor*bFR24*bFR48*bFR144*frame_rate_factors**frame_rate_factors*blk*supported_displayId2_0*rgb_depth*support_16b*support_14b*support_12b*support_10b*support_6b*ycbcr420_depth*minimum_pixel_rate_ycbcr420*colorspace_eotf_combination_1*support_colorspace_bt2020_eotf_smpte_st2084*support_colorspace_bt2020_eotf_bt2020*support_colorspace_dci_p3_eotf_dci_p3*support_colorspace_adobe_rgb_eotf_adobe_rgb*support_colorspace_bt709_eotf_bt1886*support_colorspace_bt601_eotf_bt601*support_colorspace_srgb_eotf_srgb*total_additional_colorspace_eotf*additional_colorspace_eotf**additional_colorspace_eotf*additional_supported_colorspace_eotf**additional_supported_colorspace_eotf*support_colorspace*support_eotf*cea_data_block_present*tiled_display_revision*topology_low*location_low*pixel_density*topology_id*timing_sub_block**timing_sub_block**sub*stereo_code*u3*field_sequential*stereo_polarity*side_by_side*view_identity*interleave_pattern**interleave_pattern*pixel_interleaved*left_right_separate*mirroring*multiview*num_views*interface_type*digit
al_num_links*interface_version*content_protection*content_protection_version*spread_spectrum*spread_percent*lvds*color_map*support_2_8v*support_12v*support_5v*support_3_3v*DE_mode*data_strobe*proprietary*t1_min*t1_max*t2_max*t3_max*t4_min*t5_min*t6_min*tech_type*device_op_mode*support_backlight*support_intensity*horiz_pixel_count*vert_pixel_count*orientation*zero_pixel*scan_direction*subpixel_info*horiz_pitch*vert_pitch*color_bit_depth*white_to_black*response_time*minPclk*maxPclk*rl**rl*cvt_reduced*hfreq_min*hfreq_max*hblank_min*vblank_min*timing_modes**timing_modes*call to parseDisplayIdTiming5Descriptor*timing_codes**timing_codes*call to parseDisplayIdTiming3Descriptor*formula*interlace*horiz*type2*call to parseDisplayIdTiming2Descriptor*type1*call to parseDisplayIdTiming1Descriptor*horiz_size*vert_size*horiz_pixels*vert_pixels*support_audio*separate_audio*audio_override*power_management*fixed_timing*fixed_pixel_format*deinterlace*depth_overall*depth_native*productid_string**productid_string*points**points*x_p*y_p*white_points**white_points*total_primaries*call to parseDisplayIdProdIdentityBlock*call to parseDisplayIdParam*call to parseDisplayIdColorChar*call to parseDisplayIdTiming1*call to parseDisplayIdTiming2*call to parseDisplayIdTiming3*call to parseDisplayIdTiming4*call to parseDisplayIdTiming5*call to parseDisplayIdTimingVesa*call to parseDisplayIdTimingEIA*call to parseDisplayIdRangeLimits*call to parseDisplayIdSerialNumber*call to parseDisplayIdAsciiString*call to parseDisplayIdDeviceData*call to parseDisplayIdInterfacePower*call to parseDisplayIdTransferChar*call to parseDisplayIdDisplayInterface*call to parseDisplayIdStereo*call to parseDisplayIdTiledDisplay*call to parseDisplayIdCtaData*call to parseDisplayIdDisplayInterfaceFeatures*remaining_length*section_length**section*call to 
parseDisplayIdSection*block_header**pDisplayId20Info*hdr_static_metadata_info*ext861_2*extSection*primary_use_case*as_edid_extension*dbHeader*datablock_length**extSection*call to parseDisplayId20EDIDExtSection*dwRefreshRate*call to a_div_b*dwVTotal*dwIdD*dwIdN*dwHBlank*dwHTCells*dwHSync*dwHFrontPorch*GTF:%dx%dx%dHz**GTF:%dx%dx%dHz*call to computeGCD*maxVRate*vTotalGranularity*maxActiveTime*minLineTime*minVBlank*minVTotal*minLineRate*maxAudioPacketsPerLine*minHTotal*minPixelClockRate*call to nvNextPow2_U32*hTotalGranularityChunk*hTotalGranularity*resolutionGranularity*minResolution*V*H*R*call to nvFloorPow2_U32*pixelClockRate*vBlank*vSyncPosition*call to calculate_aspect_ratio*CTA861-OVT:%dx%dx%dHz**CTA861-OVT:%dx%dx%dHz*minPixelRepeat*pT1*pT2*CUST:%dx%dx%d.%03dHz%s**CUST:%dx%dx%d.%03dHz%s*blankPixels*activeLines*blankLines*temp1*temp2*AhxBl*AlxBh*AxB_high*AxB_low*AxB_div_C_low*conn_info*pciDeviceId*call to nvlink_memcpy*devUuid**devUuid*chipSid*call to nvlink_core_link_state_supported*end0*call to nvlink_core_check_link_state*call to nvlink_core_print_intranode_conn*call to nvlink_core_check_tx_sublink_state*call to nvlink_core_check_rx_sublink_state*localLink*call to nvlink_core_get_internode_conn*connectedEndpoints*notConnectedEndpoints*remoteEndPoint*call to nvlink_memset*local_end**local_end**remoteEndPoint*nv_interconn_head*call to nvlink_core_get_intranode_conn*call to nvlink_assert**end0**end1*nv_intraconn_head*tmpConn**tmpConn*endpointsInFail*endpointsInSafe*endpointsInActive*nv_devicelist_head*safe_retries*packet_injection_retries*bRxDetected*bNewEndpoints*dev0*isTokenFound*bInitnegotiateConfigGood*call to nvlink_core_add_intranode_conn**dev1**dev0**remote_end*pLinks**pLinks*call to _nvlink_core_all_links_initialized*call to nvlink_core_init_links_from_off_to_swcfg_non_ALI*call to nvlink_core_init_links_from_off_to_swcfg*call to _nvlink_core_discover_topology*bSafeTransitionFail*call to nvlink_core_poll_link_state*call to 
nvlink_core_poll_sublink_state*call to nvlink_sleep*bTxCommonModeFail*bInitphase5Fails*call to nvlink_core_initphase1*call to nvlink_core_set_rx_detect*call to nvlink_core_get_rx_detect*call to nvlink_core_enable_common_mode*call to nvlink_core_initphase5*call to nvlink_core_wait_for_link_init*powerStateTransitionStatus*call to nvlink_core_initnegotiate*call to nvlink_core_rx_init_term*call to nvlink_core_calibrate_links*call to nvlink_core_disable_common_mode*call to nvlink_core_enable_data*srcLink*dstLink*readToken*call to nvlink_core_read_link_discovery_token**dstLink*call to _nvlink_core_is_link_initialized*tmpDev*devInfo*call to _nvlink_core_map_device_type*call to _nvlink_core_get_enabled_link_mask*enabledLinkMask*bEnableAli*call to nvlink_strlen*copyLen**deviceName*connLink*endPointInfo*linkIndex*endPoint*tmpLink**tmpLink**tmpDev*call to _nvlink_core_map_link_state*call to _nvlink_core_map_tx_sublink_state*txSubLinkMode*call to _nvlink_core_map_rx_sublink_state*rxSubLinkMode*call to nvlink_core_poll_tx_sublink_state*call to nvlink_core_poll_rx_sublink_state*call to _nvlink_core_print_link*connsToShutdown**connsToShutdown***connsToShutdown*visitedConns**visitedConns***visitedConns*numConnsToShutdown*call to _nvlink_core_check_if_conn_in_array*call to nvlink_core_check_intranode_conn_state*call to nvlink_core_powerdown_intranode_conns_from_active_to_off*call to nvlink_core_reset_intranode_conns*call to nvlink_core_remove_intranode_conn*connArray**connArray*tx_sublink_state*rx_sublink_state*conns**conns*call to _nvlink_core_clear_link_state*inSWCFG*ppLinks**ppLinks*call to nvlink_core_link_states_symmetric*call to nvlink_core_train_intranode_conns_from_swcfg_to_active_ALT*call to _nvlink_core_set_sublink_pre_hs_settings*pollStatus*call to _nvlink_core_set_link_pre_active_settings*skipConn**skipConn*call to nvlink_core_train_intranode_conns_from_swcfg_to_active_legacy*call to nvlink_core_print_link_state*isMasterEnd*call to 
_nvlink_core_set_link_post_active_settings*call to nvlink_get_platform_time*endStates**endStates*call to nvlink_lib_top_lock_acquire*call to nvlink_core_get_device_by_devinfo*call to nvlink_lib_top_lock_release**endpoint*call to nvlink_lib_link_locks_acquire*call to nvlink_core_get_endpoint_state*endStatesCount*call to nvlink_lib_link_locks_release*linkParams*call to nvlink_core_get_link_by_endpoint*endPoints**endPoints*endState**endState*capParams*call to nvlink_acquire_fabric_mgmt_cap*ctrlParams*infoParams*numDevice*call to nvlink_core_copy_device_info**devInfo***links*interConns**interConns***interConns*localEndPoints**localEndPoints*call to nvlink_core_train_internode_conns_from_swcfg_to_active**isMasterEnd*localEndStates**localEndStates*postinitoptimizeParams*initoptimizeParams*subLinkParams*interConn*call to nvlink_core_train_internode_conn_sublink_from_safe_to_hs*trainParams*numConns*endPointPairs**endPointPairs***conns*initLinks**initLinks***initLinks*trainLinks**trainLinks***trainLinks**srcLink*call to nvlink_core_train_intranode_conns_from_swcfg_to_active_non_ALI*call to nvlink_core_powerdown_intranode_conns_from_active_to_swcfg*call to nvlink_core_train_intranode_conns_from_off_to_active_ALI*call to nvlink_core_train_check_link_ready_ALI*endpointPairsStates**endpointPairsStates*srcEndPoint*dstEndPoint*removeParams*call to nvlink_core_remove_internode_conn*addParams*localEndPoint*call to nvlink_core_is_supported_device_type*intraConn*call to nvlink_core_add_internode_conn*getParams*numConnections*call to nvlink_core_copy_endpoint_info*call to _nvlink_lib_ctrl_device_discover_peer_link*call to nvlink_core_get_link_discovery_token*call to nvlink_core_write_link_discovery_token*call to nvlink_core_correlate_conn_by_token*readParams*sidInfo**sidInfo*localLinkSid*remoteLinkSid*localLinkNum*remoteLinkNum*call to nvlink_core_discover_and_get_remote_end*remoteLink*numTokens*tokenInfo**tokenInfo*tokenValue*writeParams**linkStatus*initStatus*iocReq**iocReq*call to 
nvlink_core_link_init_async*versionParams*call to nvlink_strcpy*call to nvlink_strcmp*call to nvlink_is_admin*call to nvlink_is_fabric_manager*call to nvlink_lib_ctrl_prologue*call to nvlink_lib_ctrl_check_version*call to nvlink_lib_ctrl_set_node_id*call to nvlink_lib_ctrl_all_links*call to nvlink_lib_ctrl_device_link_init_status*call to nvlink_lib_ctrl_device_write_discovery_tokens*call to nvlink_lib_ctrl_device_read_discovery_tokens*call to nvlink_lib_ctrl_device_read_sids*call to nvlink_lib_ctrl_discover_intranode_conns*call to nvlink_lib_ctrl_device_get_intranode_conns*call to nvlink_lib_ctrl_add_internode_conn*call to nvlink_lib_ctrl_remove_internode_conn*call to nvlink_lib_ctrl_train_intranode_conn*call to nvlink_lib_ctrl_train_intranode_conns_parallel*call to nvlink_lib_ctrl_train_internode_conn_link*call to nvlink_lib_ctrl_train_internode_conn_sublink*call to nvlink_lib_ctrl_train_internode_links_initoptimize*call to nvlink_lib_ctrl_train_internode_links_post_initoptimize*call to nvlink_lib_ctrl_train_internode_conns_parallel*call to nvlink_lib_ctrl_get_devices_info*call to nvlink_lib_ctrl_acquire_capability*call to nvlink_lib_ctrl_get_link_state*call to nvlink_lib_ctrl_get_device_link_states*call to nvlink_lib_ioctl_ctrl_helper*lock_status*bConnected*call to nvlink_core_copy_intranode_conn_info*remoteEnd**remoteEnd*call to nvlink_core_copy_internode_conn_info*intra_conn*inter_conn*call to nvlink_lib_link_lock_free*call to nvlink_lib_link_lock_alloc*call to _nvlink_lib_is_link_registered*curLink**intra_conn**inter_conn**curLink*nextLink*call to _nvlink_lib_is_device_registered*lockLinks**lockLinks***lockLinks*call to nvlink_core_powerdown_floorswept_conns_to_off*call to _nvlink_lib_unilateral_powerdown_links_from_active_to_off*bIsAlreadyPresent*ppTargetLinks**ppTargetLinks***ppTargetLinks*numTargetLinks*call to nvlink_core_unilateral_powerdown_links_from_active_to_off*call to 
nvlink_core_powerdown_intranode_conns_from_active_to_L2*seedDataCopy**seedData**seedDataCopy*call to nvlink_core_train_intranode_conns_from_from_L2_to_active*call to nvlink_lib_is_initialized*bIsReducedConfig*call to nvlink_lib_is_device_list_empty*call to nvlink_lib_top_lock_free*call to nvlink_lib_top_lock_alloc*registeredEndpoints**link1**link2*l1*l2*call to _compare*topLevelLock**topLevelLock*call to nvlink_releaseLock*call to nvlink_acquireLock*call to nvlink_freeLock***topLevelLock*call to nvlink_allocLock*top_lock**top_lock***top_lock**call to nvlink_allocLock*biosImage*pImage*pBiosRawBuffer**pBiosRawBuffer*call to nvswitch_os_memset*call to nvswitch_bios_read**pImage*call to _nvswitch_core_bios_read*call to nvswitch_os_alloc_contig_memory*call to nvswitch_os_map_dma_region*pReadBuffer**pReadBuffer*call to nvswitch_os_free_contig_memory**pFlcn*bios*cmdType*cmdSeqDesc*call to nvswitch_os_sync_dma_region_for_device*call to nvswitch_os_unmap_dma_region*call to nvswitch_timeout_create*call to flcnQueueCmdPostBlocking*call to nvswitch_os_sync_dma_region_for_cpu*moduleState**moduleState*pOnboardState**pOnboardState*call to cciModuleOnboardPerformPhaseTransitionAsync*onboardError*bOnboardFailure*failedOnboardState*sleepState*call to nvswitch_os_get_platform_time*wakeUpTimestamp*onboardPhase*prevOnboardState*currOnboardState*call to cciSetNextXcvrLedState*call to cciSetXcvrLedState*call to cciGetModuleId*bModeContinuousALI**bModeContinuousALI*call to _cci_reset_module_state_sw*call to nvswitch_is_soe_supported*pPerformOnboardPhaseCmd**pPerformOnboardPhaseCmd*call to _cciSetupCmdModulesOnboardSOE*performOnboardPhase*pCmd*onboardSubPhase*rxDetEnable**rxDetEnable*call to cciGetXcvrMask*optical*call to cciModulesOnboardSOE*call to nvswitch_translate_hw_error*call to nvswitch_assert_log*call to nvswitch_os_print*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to re-enable ALI for module %d links. 
**nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to re-enable ALI for module %d links. *nvlink_device*call to nvswitch_os_report_error*Failed to re-enable ALI for module %d links. **Failed to re-enable ALI for module %d links. *call to nvswitch_lib_smbpbi_log_sxid*call to nvswitch_inforom_bbx_add_sxid*regkeys*bLinkTrainIdle*call to cciModulePresent*call to cciDetectXcvrsPresent*call to cciGetModulePresenceChange*call to cciCmisAccessReleaseLock*call to cciCmisAccessTryLock*call to _cci_module_onboard_async*call to _cciCmisAccessSafe*call to cciCheckLPMode*linkMaskActive*call to _cci_get_enabled_link_mask*call to _cci_get_active_fault_link_masks*bModuleOnboarded*linkMaskActiveSaved*bLinkTrainComplete*call to _cci_module_identify*call to _cci_module_check_onboard_condition_async*call to _cci_module_identify_async*call to cciCablesInitializeCopperAsync*call to cciCablesInitializeDirectAsync*call to cciCablesInitializeOpticalAsync*call to _cci_launch_ALI_async*call to _cci_module_onboard_sleep_async*call to _cci_module_onboard_monitor_async*call to _cci_module_non_continuous_ALI_async*linkMaskFault*bRetryOnboard*call to cciSetLPMode*call to cciModuleOnboardCheckErrors*bErrorsChecked*linkTrainMask*bLinkTrainDeferred*call to nvswitch_get_num_links*call to nvswitch_get_link_eng_inst*call to nvswitch_is_link_valid*call to nvswitch_is_link_in_reset*call to nvswitch_request_tl_link_state_ls10**cableType*isFaulty**isFaulty*bDoOnboard*call to _cci_setup_onboard*bPartialLinkTrainComplete*linkMaskReset*linkMaskResetForced*call to _cci_reset_module_state_hw*call to cciGetCageMapping*pLinkMask*call to cciResetModule*call to cciModuleHWGood*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d faulty **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d faulty *Module %d faulty **Module %d faulty *call to _cci_module_detect*call to _cci_module_cable_detect*call to _cci_module_validate*bModuleIdentified*call to cciCmisRead*checksumBuf**checksumBuf*call to 
_cci_cmis_checksum*isFlatMemory**isFlatMemory*siControls**siControls*linkMaskActivePending*pLinkMaskActive*pLinkMaskActivePending*pLinkMaskFault*call to cciModuleOnboardPerformPhaseAsync*call to cciSetLedsInitialize*loopStatus*call to cciIsLinkManaged*call to cciGetLinkMode*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d failed %s *call to _cci_onboard_phase_to_text**nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d failed %s *Module %d failed %s **Module %d failed %s *call to _cci_check_module_boot_failure*call to cciGetXcvrFWInfo*fwInfo**fwInfo*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d boot failure **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d boot failure *Module %d boot failure **Module %d boot failure *fwStatusFlags*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image A boot failure **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image A boot failure *Module %d Image A boot failure **Module %d Image A boot failure *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image A recovery failure **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image A recovery failure *Module %d Image A recovery failure **Module %d Image A recovery failure *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image B boot failure **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image B boot failure *Module %d Image B boot failure **Module %d Image B boot failure *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image B recovery failure **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d Image B recovery failure *Module %d Image B recovery failure **Module %d Image B recovery failure **CCI Onboard Phase Check Condition**CCI Onboard Phase Identify**CCI Onboard Phase Init Copper**CCI Onboard Phase Init Direct**CCI Onboard Subphase Init Optical Start**CCI Onboard Subphase Init Optical CMIS Select Application**CCI Onboard Subphase Init Optical Configure Links**CCI Onboard 
Subphase Init Optical Disable ALI**CCI Onboard Subphase Init Optical Pretrain Setup**CCI Onboard Subphase Init Optical Pretrain Send CDB**CCI Onboard Subphase Init Optical Pretrain Poll**CCI Onboard Subphase Init Optical Go Transparent**CCI Onboard Subphase Init Optical Reset Links**CCI Onboard Subphase Init Optical Enable ALI**CCI Onboard Phase Launch ALI**CCI Onboard Phase Sleep**CCI Onboard Phase Monitor*call to _cci_init_optical_start_async*call to _cci_cmis_select_application_async*call to _cci_configure_links_async*call to _cci_disable_ALI_async*call to _cci_pretrain_setup_async*call to _cci_pretrain_send_cdb_async*call to _cci_pretrain_poll_async*call to _cci_go_transparant_async*call to _cci_reset_links_async*call to _cci_enable_ALI_async*call to _cci_reset_links*call to cciModuleOnboardSleepAsync*call to _cci_go_transparant*linkTrainMaskDone*call to cciCheckForPreTraining*bPreTrainDone*preTrainCounter*call to nvswitch_cci_deinitialization_sequence_ls10*call to nvswitch_cci_enable_iobist_ls10*call to cciConfigureNvlinkModeModule*call to cciGetLaneMask*call to _cci_cmis_deactivate_lanes*call to nvswitch_cci_initialization_sequence_ls10*call to cciCmisWrite*nvl4AppSel**nvl4AppSel*call to _cci_cmis_check_config_errors*nvl4AppSelTemp**nvl4AppSelTemp*bLanesAccepted*errorCodes**errorCodes*call to nvswitch_timeout_check*call to nvswitch_os_sleep*laneMaskTemp*dataPathState**dataPathState*laneState*laneNum*cdbState**cdbState*pCdbState**pCdbState*laneMasksPending**laneMasksPending*laneMasksIndex*cdbPhase*moduleMaskPriority*call to _cci_cdb_perform_phases*call to _cci_check_cdb_ready*call to _cci_send_cdb_command*call to _cci_get_cdb_response*call to _cci_check_cdb_done*bContinue*call to _cci_check_for_cdb_complete*call to cciGetCDBStatus*call to cciGetCDBResponse**response*call to cciSendCDBCommand*call to cciRead*call to _cciCheckModuleFault*call to cciSetModulePower*call to _cciClearModuleFault*call to nvswitch_cci_ports_cpld_read*call to 
nvswitch_cci_ports_cpld_write*module_map*pCounterParams**pCounterParams*call to nvswitch_ctrl_get_throughput_counters*pCounterValues**pCounterValues*tpCounterPreviousSum**tpCounterPreviousSum*tpCounterCurrentSum*bTraffic*cciModuleMask*linkMaskAll*pLinkMaskAll**pCci*presentMask*call to nvswitch_cci_setup_module_path*i2c_params*osfp_i2c_info*messageLength*acquirer*call to nvswitch_ctrl_i2c_indexed*idx_i2cdevice*pMaskPresent*valReg1*valReg2*call to cciSendCDBCommandAndGetResponse*regByte*call to nvswitch_cci_get_xcvrs_present_change*pModuleMask*call to _nvswitch_cci_module_present*call to _nvswitch_cci_get_module_id*call to nvswitch_cci_cmis_cage_bezel_marking*pBezelMarking*call to _cciCmisAccessAllowed*call to _cciCmisAccessSetup*call to cciWrite*call to _cciCmisAccessRestore*call to nvswitch_os_get_pid*cmisAccessLock**cmisAccessLock*bLocked*timestampCurr*timestampSaved*call to cciSetBankAndPage*call to cciGetBankAndPage*pSavedBank*pSavedPage*encodedValue*pEncodedByte**pEncodedByte*p_nvswitch_cci_osfp_map**p_nvswitch_cci_osfp_map*nvswitch_cci_osfp_map_size*osfpLane*pEncodedValue*call to nvswitch_cci_get_grading_values*pGrading*call to nvswitch_cci_apply_control_set_values*train_mask*call to nvswitch_cci_get_xcvr_mask*pMaskAll*call to nvswitch_cci_get_xcvrs_present*call to nvswitch_cci_set_xcvr_present*call to nvswitch_cci_set_xcvr_led_state*xcvrNextLedState**xcvrNextLedState*call to nvswitch_cci_get_xcvr_led_state*pLedState*pRevisions**pRevisions*call to _cciGetXcvrFWRevisionsFlatMem*build*call to cciGetXcvrFWRevisions*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(0) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(0) *Failed to init CCI(0) **Failed to init CCI(0) *call to _nvswitch_cci_prepare_for_reset*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset CCI(0) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset CCI(0) *Failed to reset CCI(0) **Failed to reset CCI(0) *call to 
_nvswitch_reset_cci*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset CCI(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset CCI(1) *Failed to reset CCI(1) **Failed to reset CCI(1) *call to _nvswitch_identify_cci_devices*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(1) *Failed to init CCI(1) **Failed to init CCI(1) *call to nvswitch_cci_setup_onboard*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(2) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to init CCI(2) *Failed to init CCI(2) **Failed to init CCI(2) *call to cciLinkTrainIdle*osfpMaskPresent*osfpMaskAll*call to nvswitch_cci_setup_gpio_pins*call to nvswitch_cci_reset*call to cciRequestALI*xcvrCurrentLedState**xcvrCurrentLedState*call to cciGetGradingValues*call to cciGetFWRevisions*revisions**revisions*callbackList**callbackList*call to cciWaitForCDBComplete*resLength**header*rlpllen*rlplchkcode*bSkipChecksum*status_busy*status_fail*cdb_result*chkcode*pBank*pPage*call to nvswitch_cci_module_access_cmd*call to nvswitch_cci_destroy*call to nvswitch_is_tnvl_mode_enabled*call to nvswitch_task_create*call to nvswitch_cci_discovery*bDiscovered*pValArray*pMaskPresentChange*error_log**error_log*nextErrorIndex*call to nvswitch_get_error*error_value*error_data_size**error_data*error_description**error_description*errorIndex*call to _nvswitch_translate_arch_error*errors*call to nvswitch_discard_errors**error_entry*local_error_num*global_error_num*call to nvswitch_hw_counter_read_counter*timer_count*error_start*call to nvswitch_os_strlen*description_len*idx_error*call to _nvswitch_dump_error_entry*error_total*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Severity %d Engine instance %02d Sub-engine instance %02d **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Severity %d Engine instance %02d Sub-engine instance %02d *Severity %d Engine instance %02d Sub-engine instance %02d 
**Severity %d Engine instance %02d Sub-engine instance %02d *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *pEvtDesc*call to flcnQueueCmdPostNonBlocking*pSeqDesc*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to post command to SOE. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to post command to SOE. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *cmdGen*Failed to post command to SOE. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **Failed to post command to SOE. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *call to flcnQueueCmdWait*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Timed out while waiting for SOE command completion. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Timed out while waiting for SOE command completion. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *Timed out while waiting for SOE command completion. Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} **Timed out while waiting for SOE command completion. 
Data {0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x} *call to flcnQueueCmdCancel*pCallbackParams**pCallbackParams**pHal*call to flcnSetupHal*call to flcnDestroy*call to flcnableReadCoreRev*call to flcnSetupHal_v03_00*call to flcnSetupHal_v04_00*call to flcnSetupHal_v05_01*call to flcnSetupHal_v06_00*call to nvswitch_is_lr10_device_id*call to flcnSetupHal_LR10*call to nvswitch_is_ls10_device_id*call to flcnSetupHal_LS10*coreRevisionGet*markNotReady*cmdQueueHeadGet*msgQueueHeadGet*cmdQueueTailGet*msgQueueTailGet*cmdQueueHeadSet*msgQueueHeadSet*cmdQueueTailSet*msgQueueTailSet*dmemCopyFrom*dmemCopyTo*postDiscoveryInit*call to flcnQueueSetupHal*call to flcnRtosSetupHal*call to flcnQueueRdSetupHal*call to flcnableFetchEngines_HAL*call to flcnSetupIpHal*call to flcnDmemTransfer_HAL*call to flcnRegWrite_HAL*call to flcnRegRead_HAL*bOSReady*call to flcnGetCoreInfo_HAL*pEngDescUc*pEngDescBc*call to flcnAllocNew*call to flcnInit*call to flcnableSetupHal*call to flcnableDestroy*readCoreRev*getExternalConfig*ememCopyFrom*ememCopyTo*handleInitEvent*queueSeqInfoGet*queueSeqInfoClear*queueSeqInfoFree*queueCmdValidate*queueCmdPostExtension*fetchEngines*call to flcnPostDiscoveryInit*bResetInPmc*blkcgBase*fbifBase*call to flcnReadCoreRev_HAL*call to flcnableEmemCopyTo*call to flcnDmemCopyTo*call to flcnableEmemCopyFrom*call to flcnDmemCopyFrom*call to flcnCmdQueueTailSet*call to flcnMsgQueueTailSet*call to flcnCmdQueueTailGet*call to flcnMsgQueueTailGet*bRewind*pBRewind*call to _flcnQueueHasRoom_dmem*oflag*bOpened*call to flcnQueueConstruct_common_nvswitch*ppQueue**ppQueue**pQueue*openWrite*rewind*tailGet*tailSet*hasRoom*maxCmdSize*queueCmdWrite*queueSeqInfoFind*queueSeqInfoAcq*queueSeqInfoRel*queueSeqInfoStateInit*queueSeqInfoCancelAll*queueEventRegister*queueEventUnregister*queueEventHandle*queueResponseHandle*queueCmdStatus*queueCmdCancel*queueCmdPostNonBlocking*queueCmdWait*call to _flcnQueueCmdStatus_IMPL*bKeepPolling*call to soeService_HAL*call to 
flcnableQueueCmdValidate**pQueueInfo*pQueues*call to soeWaitForInitAck_HAL*call to _flcnQueueCmdValidate*call to flcnQueueSeqInfoAcq**pSeqInfo*seqNumId*pCmdQueue**pCmdQueue***pCallbackParams*call to flcnableQueueCmdPostExtension*call to flcnQueueSeqInfoRel*call to flcnQueueCmdWrite*seqState*call to flcnQueueSeqInfoFree*call to flcnQueueSeqInfoFind*seqStatus*pMsgGen*call to flcnableQueueSeqInfoGet*pEventInfo**pEventInfo*call to flcnableHandleInitEvent*pEventInfoNext**pEventInfoNext**pMsg*pEventInfoPrev**pEventInfoPrev**pNext*call to _flcnQueueAssignEventDesc*nextDesc*bAvailable*nextEvtDesc*call to flcnableQueueSeqInfoFree*seqNum*call to flcnableQueueSeqInfoClear*latestUsedSeqNum*call to soeIsCpuHalted_HAL**pCmd*pFlcnCmd*call to flcnCmdQueueHeadSet*call to flcnMsgQueueHeadSet*call to flcnCmdQueueHeadGet*call to flcnMsgQueueHeadGet*queueOffset*close*openRead*headGet*headSet*populateRewindCmd*queueReadData*msgGen*bufferGenHdr*readSize*call to _flcnQueueReaderReadHeader*call to _flcnQueueReaderGetNextHeader*retStatus*call to _flcnQueueReaderReadBody*dbgInfoDmemOffsetSet*call to flcnQueueReadData**pFlcnCmd*getCoreInfo*securityModel*coreRev*supportsDmemApertures*call to nvswitch_fsp_get_channel_size*fspEmemChannelSize*packetPayloadCapacity*bSinglePacket*call to nvswitch_fsp_nvdm_to_seid*call to nvswitch_fsp_create_mctp_header*call to nvswitch_fsp_create_nvdm_header*curPayloadSize*call to nvswitch_fsp_send_packet*dataSent*dataRemaining*call to _nvswitch_fsp_poll_for_response*call to nvswitch_fsp_read_message*pResponsePayload*call to _nvswitch_fsp_poll_for_queue_empty*paddedSize*call to nvswitch_fsp_write_to_emem*call to nvswitch_fsp_update_cmdq_head_tail*call to _nvswitch_fsp_is_msgq_empty*pPacketBuffer**pPacketBuffer*call to nvswitch_fsp_get_msgq_head_tail*call to nvswitch_fsp_read_from_emem*call to nvswitch_fsp_get_packet_info*curHeaderSize*pPayloadBuffer*call to nvswitch_fsp_update_msgq_head_tail*pMessagePayload**pMessagePayload*call to 
nvswitch_fsp_process_nvdm_msg*bMsgqEmpty*call to _nvswitch_fsp_is_queue_empty*bCmdqEmpty*call to nvswitch_fsp_get_cmdq_head_tail*call to nvswitch_os_get_os_version*call to nvswitch_os_get_platform_time_epoch*pEccState*err_event**pEccState*pEcc*call to nvswitch_inforom_write_object*ECC**pEcc**ECC*call to nvswitch_inforom_ecc_flush**pPackedObject*call to nvswitch_inforom_get_object_version_info*call to 
nvswitch_inforom_read_object*call to nvswitch_inforom_add_object*call to nvswitch_smbpbi_refresh_ecc_counts**pNvlinkState*pNvl**pNvl*call to nvswitch_inforom_nvlink_flush*NVL**NVL*bDisableFatalErrorLogging*bDisableCorrectableErrorLogging*call to _inforom_nvlink_start_correctable_error_recording*bCallbackPending*call to _inforom_nvlink_update_correctable_error_rates*call to nvswitch_get_enabled_link_mask*call to _inforom_nvlink_get_correctable_error_counts*nvlinkCounters**nvlinkCounters*flitCrc*txLinkReplay*rxLinkReplay*linkRecovery*laneCrc**laneCrc**pOmsState*pOms**pOms*OMS**OMS*3s2bwbd50w**3s2bwbd50w*call to nvswitch_inforom_load_object*OBD*OEM*3s2bwb504b*packedObject**OEM**3s2bwb504b**packedObject*IMG*3s2bwb16b4w32b**IMG**3s2bwb16b4w32b*pCounts*lastRead**lastRead*tempFlitCrc*errorsPerMinute**errorsPerMinute*tempLaneCrc**tempLaneCrc*tempRxLinkReplay*tempTxLinkReplay*tempLinkRecovery*pErrorLog*pNvlError*accum*totalCount*pErrorRate*flitCrcErrorsPerMinute*laneCrcErrorsPerMinute**laneCrcErrorsPerMinute*pNewLaneCrcRates**pNewLaneCrcRates*pLaneCrcRates*call to nvswitch_inforom_bbx_unload*call to nvswitch_inforom_ecc_unload*call to nvswitch_inforom_nvlink_unload*call to nvswitch_inforom_oms_unload*call to nvswitch_inforom_read_only_objects_load*call to nvswitch_inforom_nvlink_load*call to nvswitch_inforom_ecc_load*call to nvswitch_inforom_oms_load*call to nvswitch_inforom_bbx_load*pCacheEntry**pCacheEntry*pTmpCacheEntry**pTmpCacheEntry**pInforom*objectName*pObjectFormat**pObject*buildDate*call to nvswitch_inforom_string_copy*marketingName**marketingName*serialNumber*serialNum**serialNumber**serialNum*boardPartNum**boardPartNum*productPartNumber**productPartNumber*oemInfo**oemInfo*inforomVer**inforomVer*pFile**pFile*call to 
_nvswitch_inforom_get_cached_object*call to _nvswitch_inforom_read_file*packedHeader**packedHeader*call to _nvswitch_inforom_unpack_object*3s2bwb**3s2bwb*fileSize**pObjectCache*call to _nvswitch_inforom_calc_packed_object_size*call to _nvswitch_inforom_pack_object*call to _nvswitch_inforom_write_file*soeCmd*pIfrCmd*pDmaBuf**pDmaBuf*fileName**fileName**objectName*fsRet*call to _nvswitch_inforom_pack_uint_field*call to _nvswitch_inforom_unpack_uint_field*pRomImage*pEepromHeader**pEepromHeader*call to _nvswitch_calculate_checksum*pEepromBoardInfo**pEepromBoardInfo**pBoardInfo*pInfoSrc**pInfoSrc*mfgDateTime*call to _nvswitch_get_field_bytes*mfg**mfg*productName**productName*partNum**partNum*fileId**fileId*customMfgInfo**customMfgInfo*pFieldDest*call to nvswitch_reg_write_32*regval*call to nvswitch_reg_read_32*pllRegVal*pll_limits*ref_min_mhz*ref_max_mhz*vco_min_mhz*vco_max_mhz*update_min_mhz*update_max_mhz*m_min*m_max*n_min*n_max*pl_min*pl_max*pll*src_freq_khz*PL*dist_mode*refclk_div*call to 
nvswitch_validate_pll_config*switch_pll**engNPORT**engNVLTLC**engNVLDL**engNVLIPT_LNK**engNVLW**engMINION**engNVLIPT*link_enable_mask**engNPORT_PERFMON**engTX_PERFMON**engRX_PERFMON*eng_name**eng_name*eng_count*uc_addr**uc_addr*bc_addr*mc_addr**mc_addr*mc_addr_count**current**XVE**engXVE*bc**SAW**engSAW**SOE**engSOE**SMR**engSMR**NPG**engNPG**engNPG_BCAST**NPORT**engNPORT_MULTICAST_BCAST**NVLW**engNVLW_BCAST**MINION**engMINION_BCAST**NVLIPT**engNVLIPT_BCAST**NVLIPT_LNK**engNVLIPT_LNK_MULTICAST_BCAST**NVLTLC**engNVLTLC_MULTICAST_BCAST**NVLDL**engNVLDL_MULTICAST_BCAST**NXBAR**engNXBAR**engNXBAR_BCAST**TILE**engTILE**engTILE_MULTICAST_BCAST**NPG_PERFMON**engNPG_PERFMON**engNPG_PERFMON_BCAST**NPORT_PERFMON**engNPORT_PERFMON_MULTICAST_BCAST**NVLW_PERFMON**engNVLW_PERFMON**engNVLW_PERFMON_BCAST**PTOP**engPTOP**NPG_BCAST**CLKS**engCLKS**FUSE**engFUSE**JTAG**engJTAG**PMGR**engPMGR**XP3G**engXP3G**ROM**engROM**EXTDEV**engEXTDEV**PRIVMAIN**engPRIVMAIN**PRIVLOC**engPRIVLOC**PTIMER**engPTIMER**I2C**engI2C**SE**engSE**NVLW_BCAST**NXBAR_BCAST**THERM**engTHERM*call to 
_nvswitch_device_discovery_lr10*discovery_table_lr10**discovery_table_lr10**TX_PERFMON**RX_PERFMON**TX_PERFMON_MULTICAST**engTX_PERFMON_MULTICAST**RX_PERFMON_MULTICAST**engRX_PERFMON_MULTICAST**NVLTLC_MULTICAST**engNVLTLC_MULTICAST**NVLIPT_SYS_PERFMON**engNVLIPT_SYS_PERFMON**PLL**engPLL**NVLDL_MULTICAST**engNVLDL_MULTICAST**NVLIPT_LNK_MULTICAST**engNVLIPT_LNK_MULTICAST**SYS_PERFMON_MULTICAST**engSYS_PERFMON_MULTICAST**SYS_PERFMON**engSYS_PERFMON**MINION_BCAST**NVLIPT_BCAST**NVLTLC_BCAST**engNVLTLC_BCAST**NVLTLC_MULTICAST_BCAST**NVLIPT_SYS_PERFMON_BCAST**engNVLIPT_SYS_PERFMON_BCAST**TX_PERFMON_MULTICAST_BCAST**engTX_PERFMON_MULTICAST_BCAST**RX_PERFMON_MULTICAST_BCAST**engRX_PERFMON_MULTICAST_BCAST**TX_PERFMON_BCAST**engTX_PERFMON_BCAST**RX_PERFMON_BCAST**engRX_PERFMON_BCAST**PLL_BCAST**engPLL_BCAST**NVLW_PERFMON_BCAST**NVLDL_MULTICAST_BCAST**NVLIPT_LNK_MULTICAST_BCAST**SYS_PERFMON_MULTICAST_BCAST**engSYS_PERFMON_MULTICAST_BCAST**NVLDL_BCAST**engNVLDL_BCAST**NVLIPT_LNK_BCAST**engNVLIPT_LNK_BCAST**SYS_PERFMON_BCAST**engSYS_PERFMON_BCAST**NPORT_MULTICAST**engNPORT_MULTICAST**NPORT_PERFMON_MULTICAST**engNPORT_PERFMON_MULTICAST**NPORT_BCAST**engNPORT_BCAST**NPORT_MULTICAST_BCAST**NPG_PERFMON_BCAST**NPORT_PERFMON_BCAST**engNPORT_PERFMON_BCAST**NPORT_PERFMON_MULTICAST_BCAST**TILE_MULTICAST**engTILE_MULTICAST**NXBAR_PERFMON**engNXBAR_PERFMON**TILE_PERFMON**engTILE_PERFMON**TILE_PERFMON_MULTICAST**engTILE_PERFMON_MULTICAST**TILE_BCAST**engTILE_BCAST**TILE_MULTICAST_BCAST**NXBAR_PERFMON_BCAST**engNXBAR_PERFMON_BCAST**TILE_PERFMON_BCAST**engTILE_PERFMON_BCAST**TILE_PERFMON_MULTICAST_BCAST**engTILE_PERFMON_MULTICAST_BCAST*disc_type*entry_reserved*entry_type_nxbar*entry_type_npg*entry_type_nvlw*discovery*cluster*cluster_id*entry_type_lr10*discovery_handlers*discovery_table**engine*entry_id*entry_version*regRead*regWrite*construct*destruct*intrRetrigger*areEngDescsInitialized*waitForResetToFinish*dbgInfoCapturePcTrace*bConstructed**pQueues*engArch*call to 
flcnQueueSeqInfoStateInit*call to flcnableGetExternalConfig*call to flcnDestruct_HAL*regTraceIdx*maxIdx*dmaCtrl*engDescUc*engDescBc*3s2bwbd116b**OBD**3s2bwbd116b**v1*omsData*pVerData*call to _oms_update_entry_checksum*v1s**pIter*call to _oms_parse*call to _oms_is_content_dirty*call to _oms_reset_entry_iter*call to _oms_entry_available*call to _oms_entry_valid*call to _oms_set_current_entry*bCurrentValid*call to _oms_entry_iter_prev*bIterValid*call to _oms_entry_iter_next*call to _oms_set_update_entry*call to _oms_refresh*settings**settings*... --*call to _oms_dword_byte_sum*sum*errorLog**errorLog*v6s*errorEntries**errorEntries*call to _nvswitch_inforom_map_ecc_error_to_userspace_error*errIndx*pEccError*lastErrorTimestamp*correctedCount*uncorrectedCount*pEccObj**pEccObj*call to _inforom_ecc_find_useable_entry_index*call to _inforom_ecc_record_entry*pInforomTotalCount**pInforomTotalCount*tmpCount*pErrorEntry*bNewEntry*pErrCnt**pErrCnt*errId*sublocation*averageEventDelta*call to _inforom_ecc_calc_timestamp_delta*tmpCnt*totCnt*
wb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3db
InfoROM NVLink v3 error-rate tracking
  Identifiers: pNvlErrorCounts, pNvlObject, currentFlitCrcRate, pCurrentLaneCrcRates, pCorrErrorRates, dailyMaxCorrectableErrorRates, monthlyMaxCorrectableErrorRates, maxCorrectableErrorRates, pErrorRate, pOldestErrorRate, pErrorEntry, bUpdated, errorEvent, pNvlErrorEvent, pErrorEvent, v3s, errorReadCount, metadata, errorSubtype, accumTotalCount, avgEventDelta, lastError
  Calls: inforom_nvl_v3_seconds_to_day_and_month, inforom_nvl_v3_update_correctable_error_rates, inforom_nvl_v3_should_replace_error_rate_entry, inforom_nvl_v3_update_error_rate_entry, inforom_nvl_v3_map_error, inforom_nvl_v3_map_error_to_userspace_error, inforom_nvl_v3_encode_nvlipt_error_subtype, nvswitch_inforom_nvl_log_error_event_lr10

Top-level interrupt service (LR10)
  Calls: _nvswitch_service_legacy_lr10, _nvswitch_rearm_msi_lr10, nvswitch_test_flags, nvswitch_clear_flags, _nvswitch_service_saw_lr10, _nvswitch_service_saw_legacy_lr10, _nvswitch_service_saw_fatal_lr10, _nvswitch_service_saw_nonfatal_lr10, _nvswitch_service_priv_ring_lr10, _nvswitch_service_pbus_lr10, _nvswitch_service_npg_fatal_lr10, _nvswitch_service_nxbar_fatal_lr10, _nvswitch_service_nvlipt_fatal_lr10, _nvswitch_service_soe_fatal_lr10, _nvswitch_service_nvldl_fatal_lr10, _nvswitch_service_nvltlc_fatal_lr10, _nvswitch_service_minion_fatal_lr10, _nvswitch_service_nvlipt_common_fatal_lr10, _nvswitch_service_nvlipt_link_fatal_lr10, _nvswitch_service_nvlipt_lnk_fatal_lr10, nvswitch_record_error, nvswitch_set_fatal_error, nvswitch_lib_notify_client_events, nvswitch_inforom_nvlink_log_error_event
  Identifiers: localLinkMask, localEnabledLinkMask, intrLink, raw_pending, raw_enable, raw_first, injected, local_link_idx, intr_mask
  SXid log formats:
    "nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, unhandled interrupt in %s(%d) "
    "nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, Link %02d %s%s "
    "nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, Link %02d %s%s "
    "nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, %s, instance=%d, chiplet=%d "
    "nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, Unexpected PRI error "
    "No non-empty link is detected", " (First)"

Reset sequencer / clock control strings
  "Reset sequencer timed out waiting for a handshake from PHYCTL"
  "Reset sequencer timed out waiting for a handshake from CLKCTL"
  "CLKCTL_ILLEGAL_REQUEST", "RSTSEQ_PLL_TIMEOUT", "RSTSEQ_PHYARB_TIMEOUT"

NVLTLC service
  Calls: _nvswitch_service_nvltlc_tx_sys_fatal_lr10, _nvswitch_service_nvltlc_rx_sys_fatal_lr10, _nvswitch_service_nvltlc_tx_lnk_fatal_0_lr10, _nvswitch_service_nvltlc_rx_lnk_fatal_0_lr10, _nvswitch_service_nvltlc_rx_lnk_fatal_1_lr10, _nvswitch_service_nvltlc_rx_lnk_nonfatal_0_lr10, _nvswitch_service_nvltlc_tx_lnk_nonfatal_0_lr10, _nvswitch_service_nvltlc_rx_lnk_nonfatal_1_lr10, _nvswitch_service_nvltlc_tx_lnk_nonfatal_1_lr10
  Error strings: RX HDR OVF Error, RX Data OVF Error, Stomp Det Error, RX Poison Error, RX DL HDR Parity Error, RX DL Data Parity Error, RX DL Ctrl Parity Error, RX Invalid DAE Error, RX Invalid BE Error, RX Invalid Addr Align Error, RX Packet Length Error, RSV Cmd Encoding Error, RSV Data Length Encoding Error, RSV Packet Status Error, RSV CacheAttr Probe Rsp Error, Data Length RMW Req Max Error, Data Len Lt ATR RSP Min Error, Invalid Cache Attr PO Error, Invalid CR Error, RX Rsp Status HW Error, RX Rsp Status UR Error, RX Rsp Status PRIV Error, Invalid Collapsed Response Error, TX DL Credit Parity Error, CREQ RAM HDR ECC DBE Error, CREQ RAM DAT ECC DBE Error, CREQ RAM DAT ECC Limit Error, Response RAM HDR ECC DBE Error, Response RAM DAT ECC DBE Error, Response RAM ECC Limit Error, COM RAM HDR ECC DBE Error, COM RAM DAT ECC DBE Error, COM RAM ECC Limit Error, RSP1 RAM HDR ECC DBE Error, RSP1 RAM DAT ECC DBE Error, RSP1 RAM ECC Limit Error, NCISOC Parity Error, HDR RAM ECC DBE Error, HDR RAM ECC Limit Error, DAT0 RAM ECC DBE Error, DAT0 RAM ECC Limit Error, DAT1 RAM ECC DBE Error, DAT1 RAM ECC Limit Error, NCISOC HDR ECC DBE Error, NCISOC DAT ECC DBE Error, NCISOC ECC Limit Error, Poison Error, TX Response Status HW Error, TX Response Status UR Error, TX Response Status PRIV Error, AN1 Timeout VC0, AN1 Timeout VC1, AN1 Timeout VC2, AN1 Timeout VC3, AN1 Timeout VC4, AN1 Timeout VC5, AN1 Timeout VC6, AN1 Timeout VC7, AN1 Heartbeat Timeout Error

NVLDL service
  Calls: _nvswitch_service_nvldl_nonfatal_link_lr10, nvswitch_minion_service_falcon_interrupts_lr10, nvswitch_smbpbi_set_link_error_info
  Identifiers: linkTrainingErrorInfo, linkRuntimeErrorInfo, mask0
  Error strings: TX Fault Ram, TX Fault Interface, TX Fault Sublink Change, RX Fault Sublink Change, RX Fault DL Protocol, LTSSM Fault Down, LTSSM Fault Up, LTSSM Protocol Error, TX Replay Error, TX Recovery Short, RX Short Error Rate, RX Long Error Rate, RX ILA Trigger, RX CRC Counter

NXBAR service
  Calls: _nvswitch_service_nxbar_tile_lr10, _nvswitch_service_nxbar_tileout_lr10
  Error strings: ingress SRC-VC buffer overflow, ingress SRC-VC buffer underflow, egress DST-VC credit overflow, egress DST-VC credit underflow, ingress packet burst error, ingress packet sticky error, possible bubbles at ingress, ingress credit parity error, ingress packet invalid dst error, ingress packet parity error

NVLIPT link service
  Calls: _nvswitch_service_npg_nonfatal_lr10, _nvswitch_service_nvlipt_nonfatal_lr10, _nvswitch_service_nvldl_nonfatal_lr10, _nvswitch_service_nvltlc_nonfatal_lr10, _nvswitch_service_nvlipt_link_nonfatal_lr10, _nvswitch_service_nvlipt_lnk_nonfatal_lr10
  Error strings: _HW_NVLIPT_LNK_ILLEGALLINKSTATEREQUEST, _FAILEDMINIONREQUEST, _RESERVEDREQUESTVALUE, _LINKSTATEWRITEWHILEBUSY, _LINK_STATE_REQUEST_TIMEOUT, _WRITE_TO_LOCKED_SYSTEM_REG_ERR

MINION service
  Identifiers: minionIntr, enabledLinks, linkIntr
  Error strings: Minion Link NA interrupt, Minion Link DLREQ interrupt, Minion Link PMDISABLED interrupt, Minion Link DLCMDFAULT interrupt, Minion Link TLREQ interrupt, Minion Link NOINIT interrupt, Minion Link Local-Config-Error interrupt, Minion Link Negotiation Config Err Interrupt, Minion Link BADINIT interrupt, Minion Link PMFAIL interrupt, Minion Interrupt code unknown, MINION Watchdog timer ran out, MINION HALT, MINION EXTERR

NPORT service
  Calls: _nvswitch_service_nport_fatal_lr10, _nvswitch_service_nport_nonfatal_lr10, _nvswitch_service_route_fatal_lr10, _nvswitch_service_route_nonfatal_lr10, _nvswitch_service_ingress_fatal_lr10, _nvswitch_service_ingress_nonfatal_lr10, _nvswitch_service_egress_fatal_lr10, _nvswitch_service_egress_nonfatal_lr10, _nvswitch_service_tstate_fatal_lr10, _nvswitch_service_tstate_nonfatal_lr10, _nvswitch_service_sourcetrack_fatal_lr10, _nvswitch_service_sourcetrack_nonfatal_lr10, _nvswitch_construct_ecc_error_event, nvswitch_inforom_ecc_log_err_event, _nvswitch_collect_error_info_lr10, _nvswitch_collect_nport_error_info_lr10
  Identifiers: sourcetrack, egress, tstate, ingress, route, buffer_data, credit_data, data_collect_error, register_block_size
  Sourcetrack errors: sourcetrack TCEN0 crumbstore DBE, sourcetrack TCEN1 crumbstore DBE, sourcetrack timeout error, sourcetrack TCEN0 crumbstore ECC limit err, sourcetrack TCEN1 crumbstore ECC limit err
  Egress errors: egress crossbar overflow, egress packet route, egress sequence ID error, egress input ECC DBE error, egress output ECC DBE error, egress credit overflow, egress destination request ID error, egress destination response ID error, egress control parity error, egress credit parity error, egress flit type mismatch, egress credit timeout, egress input ECC error limit, egress output ECC error limit, egress non-posted UR, egress non-posted PRIV error, egress non-posted HW error
  Tstate errors: TS pointer crossover, TS tag store fatal ECC, TS crumbstore, TS crumbstore fatal ECC, TS ATO timeout, Rsp Tag value out of range, TS tag store single-bit threshold, TS crumbstore single-bit threshold
  Ingress errors: ingress request context mismatch, ingress invalid ACL, ingress header ECC, ingress address bounds, ingress RID packet, ingress RLAN packet, ingress illegal address, ingress invalid command, ingress header DBE, ingress invalid VCSet, ingress Remap DBE, ingress RID DBE, ingress RLAN DBE, ingress control parity
  Route errors: route undefined route, route invalid policy, route incoming ECC limit, route buffer over/underflow, route GLT DBE, route transdone over/underflow, route parity, route incoming DBE, route credit parity

PRI / PBUS service
  Calls: nvswitch_ring_master_cmd_lr10
  Identifiers: pri_error, save0, save1, save3, errCode, pri_timeout, subId, raw_data, saw_legacy_intr_enable
  Error strings: PRI WRITE SYS error, PRI WRITE PRT error, PBUS PRI SQUASH error, PBUS PRI FECSERR error, PBUS PRI TIMEOUT error

Interrupt initialization
  Calls: _nvswitch_build_top_interrupt_mask_lr10, _nvswitch_initialize_saw_interrupts, _nvswitch_initialize_nport_interrupts, _nvswitch_initialize_nvlipt_interrupts_lr10, _nvswitch_initialize_nxbar_interrupts, _nvswitch_initialize_route_interrupts, _nvswitch_initialize_ingress_interrupts, _nvswitch_initialize_egress_interrupts, _nvswitch_initialize_tstate_interrupts, _nvswitch_initialize_sourcetrack_interrupts, _nvswitch_initialize_nxbar_tile_interrupts, _nvswitch_initialize_nxbar_tileout_interrupts, _nvswitch_initialize_minion_interrupts
  Identifiers: report_fatal, report_nonfatal, localDiscoveredLinks, globalLink, intr_enable_legacy, intr_enable_fatal, intr_enable_nonfatal, intr_enable_corr, intr_bit

Link setup and training
  Calls: nvswitch_minion_send_command_lr10, nvswitch_minion_send_command, nvswitch_read_physical_id, nvlink_lib_is_registerd_device_with_reduced_config, nvlink_lib_return_device_count_by_type, nvswitch_get_bios_nvlink_config, nvswitch_link_termination_setup, _nvswitch_get_nvlink_linerate_lr10, nvswitch_apply_recal_settings, nvswitch_corelib_set_dl_link_mode_lr10, nvswitch_is_tnvl_mode_locked, nvswitch_wait_for_tl_request_ready_lr10, nvswitch_init_dlpl_interrupts, _nvswitch_configure_reserved_throughput_counters, nvswitch_record_port_event, nvswitch_minion_get_rxdet_status_lr10, nvswitch_does_link_need_termination_enabled, nvswitch_poll_sublink_state, nvswitch_minion_set_rx_term_lr10, _nvswitch_init_dl_pll, nvswitch_request_tl_link_state_lr10, _nvswitch_power_down_link_plls, _nvswitch_init_link_post_active, _nvswitch_disable_dlpl_interrupts, nvswitch_setup_link_loopback_mode, nvswitch_minion_get_initoptimize_status_lr10, nvswitch_minion_get_initnegotiate_status_lr10, nvswitch_store_topology_information, nvswitch_init_lpwr_regs, nvswitch_init_buffer_ready, _nvswitch_ioctrl_setup_link_plls_lr10
  Identifiers: physicalId, disabledRemoteEndLinkMask, bDisabledRemoteEndLinkMaskCached, bios_config, link_vbios_entry, vbios_link_entry, base_entry, lineRate, fldval, pDevInfo, keepPolling, linkRequest, delay_ns, link_state, icLimit, tempRegVal, fbIcInc, lpIcInc, fbIcDec, lpIcDec, lpExitThreshold, bLpEnable, softwareDesired, hardwareDisable, remoteSid, tempval, remoteLinkId, localSid, remoteDeviceType, intrRegVal, crcRegVal, shortRateMask, longRateMask, chip_arch, chip_impl

HAL entry points
  nvswitch_initialize_device_state, nvswitch_destroy_device_state, nvswitch_determine_platform, nvswitch_get_num_links_per_nvlipt, nvswitch_set_fatal_error, nvswitch_internal_latency_bin_log, nvswitch_ecc_writeback_task, nvswitch_monitor_thermal_alert, nvswitch_hw_counter_shutdown, nvswitch_lib_enable_interrupts, nvswitch_lib_disable_interrupts, nvswitch_is_link_in_use, nvswitch_reset_and_drain_links, nvswitch_ctrl_get_info, nvswitch_ctrl_get_nvlink_status, nvswitch_ctrl_get_counters, nvswitch_ctrl_set_switch_port_config, nvswitch_ctrl_get_ingress_request_table, nvswitch_ctrl_set_ingress_request_table, nvswitch_ctrl_set_ingress_request_valid, nvswitch_ctrl_get_ingress_response_table, nvswitch_ctrl_set_ingress_response_table, nvswitch_ctrl_set_ganged_link_table, nvswitch_init_npg_multicast, nvswitch_init_warm_reset, nvswitch_ctrl_set_remap_policy, nvswitch_ctrl_get_remap_policy, nvswitch_ctrl_set_remap_policy_valid, nvswitch_ctrl_set_routing_id, nvswitch_ctrl_get_routing_id, nvswitch_ctrl_set_routing_id_valid, nvswitch_ctrl_set_routing_lan, nvswitch_ctrl_get_routing_lan, nvswitch_ctrl_set_routing_lan_valid, nvswitch_ctrl_get_internal_latency, nvswitch_ctrl_get_ingress_reqlinkid, nvswitch_ctrl_register_read, nvswitch_ctrl_register_write, nvswitch_ctrl_therm_read_temperature, nvswitch_ctrl_therm_get_temperature_limit, nvswitch_corelib_add_link, nvswitch_corelib_remove_link, nvswitch_corelib_set_dl_link_mode, nvswitch_corelib_get_dl_link_mode, nvswitch_corelib_set_tl_link_mode, nvswitch_corelib_get_tl_link_mode, nvswitch_corelib_set_tx_mode, nvswitch_corelib_get_tx_mode, nvswitch_corelib_set_rx_mode, nvswitch_corelib_get_rx_mode, nvswitch_corelib_set_rx_detect, nvswitch_corelib_get_rx_detect, nvswitch_corelib_training_complete, nvswitch_get_device_dma_width, nvswitch_ctrl_get_fom_values, nvswitch_get_bios_size, nvswitch_post_init_device_setup, nvswitch_post_init_blacklist_device_setup, nvswitch_setup_system_registers, nvswitch_setup_link_system_registers, nvswitch_load_link_disable_settings, nvswitch_read_vbios_link_entries, nvswitch_vbios_read_structure, nvswitch_get_nvlink_ecc_errors, nvswitch_inforom_ecc_log_error_event, nvswitch_oms_set_device_disable, nvswitch_oms_get_device_disable, nvswitch_inforom_nvl_log_error_event, nvswitch_inforom_nvl_update_link_correctable_error_info, nvswitch_inforom_nvl_get_max_correctable_error_rate, nvswitch_inforom_nvl_get_errors, nvswitch_inforom_nvl_setL1Threshold, nvswitch_inforom_nvl_getL1Threshold, nvswitch_inforom_nvl_setup_nvlink_state, nvswitch_load_uuid, nvswitch_i2c_set_hw_speed_mode, nvswitch_ctrl_get_bios_info, nvswitch_ctrl_get_inforom_version, nvswitch_read_oob_blacklist_state, nvswitch_write_fabric_state, nvswitch_initialize_oms_state, nvswitch_oms_inforom_flush, nvswitch_inforom_ecc_get_total_errors, nvswitch_inforom_load_obd, nvswitch_bbx_add_sxid, nvswitch_bbx_unload, nvswitch_bbx_load, nvswitch_bbx_get_sxid, nvswitch_bbx_get_data, nvswitch_smbpbi_alloc, nvswitch_smbpbi_post_init_hal, nvswitch_smbpbi_destroy_hal, nvswitch_smbpbi_send_unload, nvswitch_smbpbi_dem_load, nvswitch_smbpbi_dem_flush, nvswitch_smbpbi_get_dem_num_messages, nvswitch_smbpbi_log_message, nvswitch_smbpbi_send_init_data, nvswitch_get_link_public_id, nvswitch_get_link_local_idx, nvswitch_set_training_error_info, nvswitch_ctrl_get_fatal_error_scope, nvswitch_init_scratch, nvswitch_filter_discovery, nvswitch_get_eng_base, nvswitch_get_eng_count, nvswitch_eng_rd, nvswitch_eng_wr, nvswitch_reg_write_32, nvswitch_init_clock_gating, nvswitch_initialize_interrupt_tree, nvswitch_init_dlpl_interrupts, nvswitch_soe_unregister_events, nvswitch_corelib_get_uphy_load, nvswitch_setup_link_loopback_mode, nvswitch_reset_persistent_link_hw_state, nvswitch_store_topology_information, nvswitch_init_lpwr_regs, nvswitch_program_l1_scratch_reg, nvswitch_init_buffer_ready, nvswitch_ctrl_get_nvlink_lp_counters, nvswitch_apply_recal_settings, nvswitch_service_nvldl_fatal_link, nvswitch_service_minion_link, nvswitch_ctrl_get_sw_info, nvswitch_ctrl_get_err_info, nvswitch_ctrl_clear_counters, nvswitch_ctrl_set_nvlink_error_threshold, nvswitch_ctrl_get_nvlink_error_threshold, nvswitch_ctrl_get_board_part_number, nvswitch_ctrl_therm_read_voltage, nvswitch_soe_init_l2_state, nvswitch_ctrl_therm_read_power, nvswitch_ctrl_get_link_l1_capability, nvswitch_ctrl_get_link_l1_threshold, nvswitch_ctrl_set_link_l1_threshold, nvswitch_fsp_update_cmdq_head_tail, nvswitch_fsp_get_cmdq_head_tail, nvswitch_fsp_update_msgq_head_tail, nvswitch_fsp_get_msgq_head_tail, nvswitch_fsprpc_get_caps, nvswitch_tnvl_get_attestation_certificate_chain, nvswitch_tnvl_get_attestation_report, nvswitch_tnvl_get_status, nvswitch_tnvl_disable_interrupts, nvswitch_cci_setup_gpio_pins, nvswitch_cci_get_cci_link_mode, nvswitch_cci_get_xcvrs_present, nvswitch_cci_get_xcvrs_present_change, nvswitch_cci_update_link_state_led, nvswitch_cci_reset_and_drain_links, nvswitch_cci_set_xcvr_present, nvswitch_cci_destroy, nvswitch_ctrl_get_soe_heartbeat, nvswitch_update_link_state_led, nvswitch_led_shutdown, nvswitch_ctrl_inband_send_data, nvswitch_ctrl_inband_read_data, nvswitch_send_inband_nack, nvswitch_get_max_persistent_message_count, nvswitch_ctrl_set_mc_rid_ta
ble*nvswitch_ctrl_get_mc_rid_table*nvswitch_ctrl_set_residency_bins*nvswitch_ctrl_get_residency_bins*nvswitch_ctrl_get_rb_stall_busy*pBar**pBar*call to nvswitch_os_mem_write32*call to nvswitch_ctrl_clear_throughput_counters_lr10*call to nvswitch_ctrl_clear_dl_error_counters_lr10*linkErrInfo**linkErrInfo*TLErrlog*TLIntrEn*DLSpeedStatusTx*DLSpeedStatusRx*bExcessErrorDL**index*call to _nvswitch_inforom_bbx_supported*call to nvswitch_is_bios_supported*call to _nvswitch_setup_link_vbios_overrides*attemptedTrainingMask0*trainingErrorMask0*pOBDObj**pOBDObj*byteIdx*call to nvswitch_get_eng_base_lr10*call to _nvswitch_get_eng_descriptor_lr10*is_oob_blacklist*call to nvswitch_inforom_oms_set_device_disable**regData*call to nvswitch_is_inforom_supported*call to nvswitch_inforom_post_init*call to nvswitch_smbpbi_post_init*call to nvswitch_test_soe_dma_lr10*call to nvswitch_soe_init_l2_state*call to nvswitch_bios_read_size*call to nvswitch_is_spi_supported*call to _nvswitch_get_reserved_throughput_counters*call to nvswitch_read_64bit_counter*counter_values*call to soeTestDma_HAL*nvldl_instance*latency_stats*fatal_error_occurred*call to nvswitch_minion_get_dl_status*figureOfMeritValues**figureOfMeritValues*call to nvswitch_get_sublink_width*errorLink**errorLink*sublinkWidth*call to nvswitch_is_minion_initialized*minion_enabled*call to nvswitch_link_lane_reversed_lr10*bLaneReversed*errorLane**errorLane*statData*eccErrorValue*overflowed*nport_reg_val**nport_reg_val**link_info*call to nvlink_lib_unregister_link*call to nvswitch_corelib_get_dl_link_mode_lr10*call to nvswitch_corelib_get_tx_mode_lr10*call to nvswitch_corelib_get_rx_mode_lr10*call to nvswitch_execute_unilateral_link_shutdown_lr10*call to nvswitch_corelib_clear_link_state_lr10*call to _nvswitch_link_disable_interrupts_lr10*idx_nport*call to nvswitch_set_ganged_link_table_lr10*ganged_link_table*call to nvswitch_init_scratch_lr10*call to _nvswitch_link_reset_interrupts_lr10*call to nvlink_lib_register_link*call to 
nvswitch_destroy_link*call to nvswitch_launch_ALI*biosVersionBytes*biosOemVersionBytes*call to _nvswitch_get_engine_base_lr10*reqRid*reqRlan*requesterLinkID*pLatency*bin**bin*lowThreshold*medThreshold*hiThreshold*vc_selector**pLatency*egressHistogram**egressHistogram*latency**latency*accum_latency**accum_latency*medium*elapsed_time_msec*start_time_nsec*call to nvswitch_set_nport_port_config*firmware*nvlink*ac_coupled*call to _nvswitch_get_info_chip_id*call to _nvswitch_get_info_revision_major*call to _nvswitch_get_info_revision_minor*call to _nvswitch_get_info_revision_minor_ext*call to _nvswitch_get_num_vcs_lr10*call to nvswitch_get_remap_table_selector*call to nvswitch_get_ingress_ram_size*call to nvswitch_minion_clear_dl_error_counters_lr10*tx0TlCount*bTx0TlCounterOverflow*tx1TlCount*bTx1TlCounterOverflow*rx0TlCount*bRx0TlCounterOverflow*rx1TlCount*bRx1TlCounterOverflow*laneId*call to nvlink_lib_get_remote_conn_info*call to nvlink_lib_discover_and_get_remote_conn_info*call to _nvswitch_set_nvlink_caps_lr10*linkInfo**linkInfo*phyType*subLinkWidth*txSublinkStatus*rxSublinkStatus*bLaneReversal*call to nvswitch_get_caps_nvlink_version*nvlink_caps_version*nciVersion*phyVersion*remoteDeviceLinkNumber*remoteDeviceInfo*deviceIdFlags*loopProperty*localDeviceInfo*localDeviceLinkNumber*laneRxdetStatusMask*call to nvswitch_minion_get_line_rate_Mbps_lr10*nvlinkLineRateMbps*call to nvswitch_minion_get_data_rate_KiBps_lr10*nvlinkLinkDataRateKiBps*nvlinkLinkClockMhz*nvlinkRefClkSpeedMhz*nvlinkRefClkType*tempCaps**tempCaps**pCaps*call to nvswitch_soe_unregister_events**latency_stats**ganged_link_table*call to nvswitch_free_chipdevice*call to nvswitch_i2c_destroy*call to nvswitch_alloc_chipdevice***chip_device**call to nvswitch_alloc_chipdevice*call to nvswitch_check_io_sanity*call to nvswitch_device_discovery*call to nvswitch_filter_discovery*call to nvswitch_process_discovery*call to flcnablePostDiscoveryInit*call to nvswitch_lib_disable_interrupts*call to 
nvswitch_pri_ring_init*call to nvswitch_detect_tnvl_mode*call to nvswitch_init_soe*call to nvswitch_read_rom_tables*call to _nvswitch_process_firmware_info_lr10*call to nvswitch_initialize_pmgr*call to nvswitch_init_pll_config*call to nvswitch_init_pll*call to nvswitch_initialize_ip_wrappers*call to nvswitch_init_warm_reset*call to nvswitch_init_npg_multicast*call to nvswitch_clear_nport_rams*call to nvswitch_init_nport*call to nvswitch_init_nxbar*call to nvswitch_init_minion*call to _nvswitch_setup_chiplib_forced_config_lr10*call to nvswitch_initialize_route*call to nvswitch_init_clock_gating*call to nvswitch_spi_init*call to nvswitch_is_smbpbi_supported*call to nvswitch_smbpbi_init*call to nvswitch_initialize_interrupt_tree*call to nvswitch_init_thermal*call to nvswitch_destroy_device_state*enumerated*call to _nvswitch_init_ganged_link_routing*call to _nvswitch_init_cmd_routing*call to _nvswitch_init_portstat_counters*call to nvswitch_init_pmgr_lr10*call to nvswitch_init_pmgr_devices_lr10*chip_id*num_nports*call to _nvswitch_init_nport_ecc_control_lr10*default_latency_bins*idx_channel*sample_interval_msec*call to nvswitch_ctrl_set_latency_bins*zero_init_mask*nport_mask*idx_npg*idx_link*ram_size*call to nvswitch_soe_issue_ingress_stop*rlan_ctrl*rlan_tab_data**rlan_tab_data*entryValid**entryValid*rlan_entries**rlan_entries*rlan_count*portList**portList*groupSelect*groupSize*routingLan**routingLan*call to _nvswitch_set_routing_lan_lr10*routing_lan*routingId**routingId*call to 
_nvswitch_set_routing_id_lr10*rid_ctrl*rid_tab_data0*rid_tab_data1*rid_tab_data2*rid_tab_data3*rid_tab_data4*rid_tab_data5*rid_entries**rid_entries*rid_count*rid_tab_data**rid_tab_data*destPortNum*vcMap*rmod*useRoutingLan*enableIrlErrResponse*gsize*routing_id*remap_ram*remap_policy_data**remap_policy_data*remap_policy**remap_policy*remap_count*irlSelect*remap_address*reqCtxMask*reqCtxChk*reqCtxRep*address_offset*addressOffset*address_base*addressBase*address_limit*addressLimit*targetId*remapPolicy**remapPolicy*rfunc*call to _nvswitch_set_remap_policy_lr10*ram_sel*engine_enable_mask*engine_disable_mask*gang_size*gang_entry*gang_index*block_index*time_nsec*last_visited_time_nsec*call to _nvswitch_portstat_reset_latency_counters*vc_valid*last_read_time_nsec*idx_vc*is_emulation*is_fmodel*is_rtlsim*call to nvswitch_os_override_platform*call to nvswitch_setup_link_system_registers*call to nvswitch_load_link_disable_settings*vbios_disabled_link_mask*pci_image_address*nvlink_config_table_address*call to _nvswitch_vbios_identify_pci_image_loc*call to _nvswitch_vbios_update_bit_Offset*call to _nvswitch_vbios_fetch_nvlink_entries*call to _nvswitch_vbios_assign_base_entry*physical_id*link_vbios_base_entry**link_vbios_base_entry*link_base_entry_assigned*call to _nvswitch_get_nvlink_config_address*call to _nvswitch_vbios_read8*call to _nvswitch_vbios_read_structure*6b1w**6b1w*ver_20*bIsNvlinkVbiosTableVersion2*expected_base_entry_count*call to _nvswitch_read_vbios_link_base_entry*identified_Link_entries**identified_Link_entries*base_entry_index*7b**7b*nvLinkparam0*nvLinkparam1*nvLinkparam2*nvLinkparam3*nvLinkparam4*nvLinkparam5*nvLinkparam6**1b*link_base_entry*vbios_link_base_entry*positionId*call to _nvswitch_vbios_read16*call to _nvswitch_vbios_read32*1d4w4b2w2b3w**1d4w4b2w2b3w*pciData*call to nvswitch_verify_header*call to _nvswitch_perform_BIT_offset_update*call to 
_nvswitch_validate_BIT_header*chkSum*1w1d1w4b**1w1d1w4b*bitHeader*2b2w**2b2w*bitToken*dataPointerOffset*15w**15w*nvInitTablePtrs**w*call to _nvswitch_devinit_calculate_sizes*packed_data**packed_data*call to _nvswitch_devinit_unpack_structure*packedSize*unpackedSize*fieldsCount*call to _nvswitch_deassert_link_resets_lr10*call to _nvswitch_train_forced_config_link_lr10*lane_rxdet_status_mask*localLinkNumber*call to _nvswitch_check_running_minions*call to _nvswitch_minion_pre_init*call to _nvswitch_load_minion_ucode_image*call to _nvswitch_minion_bootstrap*call to _nvswitch_load_minion_ucode_image_from_regkeys*bDebugMode*call to _nvswitch_minion_copy_ucode_bc*ingressEccRegVal*egressEccRegVal*call to _nvswitch_minion_print_ucode*call to _nvswitch_print_minion_info*call to nvswitch_set_minion_initialized*link_num*call to _nvswitch_minion_test_dlcmd*minion_ucode_header*pUcodeHeader*minion_ucode_data*appCodeOffset*appCodeSize*appDataOffset*appDataSize*app*falconIntrMask*falconIntrDest*intr_minion_dest*bMinionRunning*device_allow_list**device_allow_list*i2c_allow_list*device_allow_list_size*i2c_device*gpio_pin*input_inv*function_offset*idx_gpio**gpio_pin*gpio_pin_size*device_list**device_list*device_list_size*call to nvswitch_i2c_init**pI2c**i2c_allow_list*i2c_allow_list_size*call to _nvswitch_i2c_set_port_pmgr*PortInfo**PortInfo*smbpbiCmd*inforomObjects*DEM**pFifo*3s2bwbd2w4096x8d**DEM**3s2bwbd2w4096x8d*call to _makeNewRecord**pNewRec*call to _addNewRecord*bytesSeen*fifoBuffer**fifoBuffer*_recPtr*pLastRec**pLastRec*_nextPtr*_curPtr*curPtr*srcPtr**srcPtr*copySz*writeOffset*xidId*seqNumber*textMessage**textMessage**osErrorString*msgLeft*readOffset*bytesOccupied*call to _smbpbiDemInit**sharedSurface*call to 
nvswitch_inforom_read_static_data*pInitCmd*driverPollingPeriodUs*pParentHal*service*serviceHalt*getEmemSize*ememTransfer*getEmemStartOffset*ememPortToRegAddr*serviceExterr*getExtErrRegAddrs*ememPortSizeGet*isCpuHalted*testDma*setPexEOM*getUphyDlnCfgSpace*forceThermalSlowdown*setPcieLinkSpeed*getPexEomStatus*processMessages*waitForInitAck*call to _soeQMgrCreateQueuesFromInitMsg*call to flcnDbgInfoDmemOffsetSet*call to _soeGetInitMessage*call to flcnQueueEventHandle*soeMessage*call to flcnQueueResponseHandle*pInit**pInit*qInfo**qInfo*call to flcnQueueConstruct_dmem_nvswitch*bif*pBifCmd**pBifCmd*therm*pEomStatus**pEomStatus*call to _soeDmaSelfTest*call to _soeDmaStartTest*call to _soeValidateDmaTestResult*pDmaCmd**pDmaCmd*dataPattern*subCmdType*call to soeGetExtErrRegAddrs_HAL*exterrStatVal*call to soeEmemPortSizeGet_HAL*call to soeEmemPortToRegAddr_HAL*call to soeGetEmemStartOffset_HAL*startEmem*call to soeGetEmemSize_HAL*endEmem*reg32*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE Watchdog error **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE Watchdog error *SOE Watchdog error **SOE Watchdog error *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE EXTERR **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE EXTERR *SOE EXTERR **SOE EXTERR *call to soeServiceExterr_HAL*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE HALTED **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE HALTED *SOE HALTED **SOE HALTED *call to soeServiceHalt_HAL*call to soeProcessMessages_HAL*bRecheckMsgQ*pMsgQ**pMsgQ*call to flcnIntrRetrigger_HAL*cmdQHeadSize*cmdQTailSize*msgQHeadSize*msgQTailSize*cmdQHeadBaseAddress*cmdQHeadStride*cmdQTailBaseAddress*cmdQTailStride*msgQHeadBaseAddress*msgQTailBaseAddress*maxCmdQueueIndex**pFlcnable*bQueuesEnabled*numQueues*numSequences*bEmemEnabled*engineTag*seqInfo**seqInfo*call to flcnConstruct_HAL*maxUnitId*maxMsgSize*initEventUnitId*call to _nvswitch_soe_prepare_for_reset*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset SOE(0) 
**nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset SOE(0) *Failed to reset SOE(0) **Failed to reset SOE(0) *call to _nvswitch_reset_soe*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset SOE(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to reset SOE(1) *Failed to reset SOE(1) **Failed to reset SOE(1) *call to _nvswitch_load_soe_ucode_image*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(0) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(0) *Failed to boot SOE(0) **Failed to boot SOE(0) *call to _nvswitch_soe_bootstrap*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(1) *Failed to boot SOE(1) **Failed to boot SOE(1) *call to _nvswitch_soe_send_test_cmd*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(2) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to boot SOE(2) *Failed to boot SOE(2) **Failed to boot SOE(2) *call to nvswitch_soe_register_event_callbacks*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to register SOE events **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Failed to register SOE events *Failed to register SOE events **Failed to register SOE events *call to _nvswitch_soe_request_gfw_image_halt*call to _nvswitch_soe_request_reset_permissions*reset_plm*engctl_plm*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE reset timeout error(2) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE reset timeout error(2) *SOE reset timeout error(2) **SOE reset timeout error(2) *call to flcnQueueEventRegister*call to flcnQueueEventUnregister*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE reset timeout error(0) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE reset timeout error(0) *SOE reset timeout error(0) **SOE reset timeout error(0) *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE reset timeout error(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE 
reset timeout error(1) *SOE reset timeout error(1) **SOE reset timeout error(1) *call to _nvswitch_get_soe_ucode_binaries*call to _nvswitch_soe_copy_ucode_cpubitbang*soe_ucode_data*soe_ucode_header**soe_ucode_data**soe_ucode_header*debug_mode*apps**apps*appCodeStartOffset*appCodeImemOffset*appCodeIsSecure*appDataStartOffset*appDataDmemOffset*appCount*call to flcnWaitForResetToFinish_HAL*soeTherm*slowdown_status*temperature*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, NVSWITCH Temperature %dC | TSENSE WARN Threshold %dC **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, NVSWITCH Temperature %dC | TSENSE WARN Threshold %dC *NVSWITCH Temperature %dC | TSENSE WARN Threshold %dC **NVSWITCH Temperature %dC | TSENSE WARN Threshold %dC *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal Slowdown Engaged | Temp higher than WARN Threshold **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal Slowdown Engaged | Temp higher than WARN Threshold *Thermal Slowdown Engaged | Temp higher than WARN Threshold **Thermal Slowdown Engaged | Temp higher than WARN Threshold *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal Slowdown Engaged | PMGR WARN Threshold reached **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal Slowdown Engaged | PMGR WARN Threshold reached *Thermal Slowdown Engaged | PMGR WARN Threshold reached **Thermal Slowdown Engaged | PMGR WARN Threshold reached *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal slowdown Disengaged **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Thermal slowdown Disengaged *Thermal slowdown Disengaged **Thermal slowdown Disengaged *shutdown_status*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, NVSWITCH Temperature %dC | OVERT Threshold %dC **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, NVSWITCH Temperature %dC | OVERT Threshold %dC *NVSWITCH Temperature %dC | OVERT Threshold %dC **NVSWITCH Temperature %dC | OVERT Threshold %dC *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, TSENSE OVERT Threshold reached. 
Shutting Down **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, TSENSE OVERT Threshold reached. Shutting Down *TSENSE OVERT Threshold reached. Shutting Down **TSENSE OVERT Threshold reached. Shutting Down *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, PMGR OVERT Threshold reached. Shutting Down **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, PMGR OVERT Threshold reached. Shutting Down *PMGR OVERT Threshold reached. Shutting Down **PMGR OVERT Threshold reached. Shutting Down *temperatureLimit**temperature*call to _nvswitch_read_max_tsense_temperature*call to _nvswitch_read_external_tdiode_temperature*tdiode*modulesMask*call to _cci_process_cmd_ls10*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d access error **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Module %d access error *Module %d access error **Module %d access error *i2cIndexed**message**pValArray*call to _nvswitch_cci_reset_mux_ls10*call to _nvswitch_cci_setup_link_mask_ls10*call to cciModulesOnboardInit*call to cciRegisterCallback*call to cciProcessCDBCallback*call to nvswitch_cci_update_link_state_led*call to cciModulesOnboardCallback*call to cciGetAllLinks*linkTrainDisableMask*bCciManaged*call to _cciSetXcvrLedState_ls10*ledAState*ledBState*call to _cciGetXcvrLedStateRegVal_ls10*call to _nvswitch_cci_update_link_state_led_ls10*nextLedsState*currentLedsState*currentLedAState*currentLedBState*call to _cciGetXcvrNextLedsState_ls10*ledsNextState*call to cciModuleOnboardFailed*linkMaskA*linkMaskB*call to _cciGetXcvrNextLedStateLinks_ls10*call to _cciGetXcvrNextLedStateLink_ls10*call to _cciResolveXcvrLedStates_ls10*call to cciCheckXcvrForLinkTraffic*call to _nvswitch_cci_reset_ls10*call to nvswitch_cci_ports_cpld_write_ls10*call to nvswitch_cci_ports_cpld_read_ls10*maskPresentChange*call to _nvswitch_detect_presence_change_cci_devices_ls10*call to cciModuleOnboardShutdown*call to _nvswitch_detect_presence_cci_devices_ls10*call to _nvswitch_cci_detect_board_ls10*call to 
nvswitch_lib_get_bios_version*call to nvswitch_get_board_id**osfp_i2c_info*osfp_num**osfp_map*osfp_map_size*call to _nvswitch_update_cages_mask_ls10*call to _nvswitch_cci_setup_vulcan_config_ls10*nvswitchNum*new_cages*cci*aliLinkInfo*call to _nvswitch_cci_optimize_link_ls10*call to cciConfigureNvlinkModeAsync*intrCtrl*call to cciSupported*GIN**GIN*engGIN**engGIN*XAL**XAL*engXAL**engXAL*XPL**XPL*engXPL**engXPL*XTL**XTL*engXTL**engXTL*XTL_CONFIG**XTL_CONFIG*engXTL_CONFIG**engXTL_CONFIG*PRT_PRI_HUB**PRT_PRI_HUB*engPRT_PRI_HUB**engPRT_PRI_HUB*engPRT_PRI_HUB_BCAST**engPRT_PRI_HUB_BCAST*PRT_PRI_RS_CTRL**PRT_PRI_RS_CTRL*engPRT_PRI_RS_CTRL**engPRT_PRI_RS_CTRL*engPRT_PRI_RS_CTRL_BCAST**engPRT_PRI_RS_CTRL_BCAST*SYS_PRI_HUB**SYS_PRI_HUB*engSYS_PRI_HUB**engSYS_PRI_HUB*SYS_PRI_RS_CTRL**SYS_PRI_RS_CTRL*engSYS_PRI_RS_CTRL**engSYS_PRI_RS_CTRL*SYSB_PRI_HUB**SYSB_PRI_HUB*engSYSB_PRI_HUB**engSYSB_PRI_HUB*SYSB_PRI_RS_CTRL**SYSB_PRI_RS_CTRL*engSYSB_PRI_RS_CTRL**engSYSB_PRI_RS_CTRL*PRI_MASTER_RS**PRI_MASTER_RS*engPRI_MASTER_RS**engPRI_MASTER_RS*CLKS_SYS**CLKS_SYS*engCLKS_SYS**engCLKS_SYS*CLKS_SYSB**CLKS_SYSB*engCLKS_SYSB**engCLKS_SYSB*CLKS_P0**CLKS_P0*engCLKS_P0**engCLKS_P0*engCLKS_P0_BCAST**engCLKS_P0_BCAST*CPR**CPR*engCPR**engCPR*engCPR_BCAST**engCPR_BCAST*TILEOUT**TILEOUT*engTILEOUT**engTILEOUT*engTILEOUT_MULTICAST_BCAST**engTILEOUT_MULTICAST_BCAST*localMinionLinkMask*call to _nvswitch_device_discovery_ls10*entry_type_ls10*entry_base*riscvRegRead*riscvRegWrite*dmemTransfer*setDmemAddr*imemCopyTo*setImemAddr*dmemSize*dbgInfoCaptureRiscvPcTrace*debugBufferInit*debugBufferDestroy*debugBufferDisplay*debugBufferIsEmpty*call to flcnRiscvRegRead_HAL*bWasFull*call to flcnRiscvRegWrite_HAL*widx*ridx*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE HALT data[%d] = 0x%16llx **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE HALT data[%d] = 0x%16llx *SOE HALT data[%d] = 0x%16llx **SOE HALT data[%d] = 0x%16llx *call to flcnSetDmemAddr_HAL*subMessageId*call to 
nvswitch_fsp_send_and_read_message*responsePayload*responseNvdmType*cmdResponse*commandNvdmType*call to nvswitch_fsp_config_ememc*ememOffsetEnd*offsetBlks*offsetDwords*pCmdResponse**pCmdResponse*call to nvswitch_fsp_error_code_to_nvlstatus_map*call to nvswitch_fsp_process_cmd_response*mctpPayloadHeader*mctpMessageType*mctpVendorId*mctpHeader*call to nvswitch_fsp_validate_mctp_payload_header*heartbeat*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Warn Activated **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Warn Activated *OSFP Thermal Warn Activated **OSFP Thermal Warn Activated *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Warn Deactivated **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Warn Deactivated *OSFP Thermal Warn Deactivated **OSFP Thermal Warn Deactivated *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Overt Activated **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Overt Activated *OSFP Thermal Overt Activated **OSFP Thermal Overt Activated *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Overt Deactivated **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal Overt Deactivated *OSFP Thermal Overt Deactivated **OSFP Thermal Overt Deactivated *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal SOE Heartbeat Shutdown **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, OSFP Thermal SOE Heartbeat Shutdown *OSFP Thermal SOE Heartbeat Shutdown **OSFP Thermal SOE Heartbeat Shutdown *call to 
nvswitch_is_inforom_supported_ls10*transferSize*bbxCmd*ifr*bbxDataGet*pBbxTempData**pBbxTempData*pBbxTempSamples**pBbxTempSamples*bbxSxidGet*bbxSxidData*sxidCount*sxidFirst**sxidFirst*sxidLast**sxidLast*sxidIdx*bbxInit*bbxSxidAdd*3s2bwbd111b3d89b*v2**3s2bwbd111b3d89b**v2*3s2bwbdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2bdq2b2d2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w
2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4w2d4wbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3db
wb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3db
wb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3dbwb3db
*l1ThresholdData*call to inforom_nvl_v4_update_correctable_error_rates*call to nvswitch_inforom_nvl_log_error_event_ls10*v4s*minionLinkIntr*bPending*deferredLinkErrors**deferredLinkErrors*fatalIntrMask*call to 
_nvswitch_create_deferred_link_errors_task_ls10*nonFatalIntrMask*call to nvswitch_minion_receive_inband_data_ls10*PHY_A Error**PHY_A Error*"PHY_A Error"**"PHY_A Error"*TX_PL Error**TX_PL Error*"TX_PL Error"**"TX_PL Error"*RX_PL Error**RX_PL Error*"RX_PL Error"**"RX_PL Error"*call to nvswitch_link_disable_interrupts_ls10*bRequireResetAndDrain*call to nvswitch_create_deferred_link_state_check_task_ls10*call to _nvswitch_initialize_nport_interrupts_ls10*call to _nvswitch_initialize_nxbar_interrupts_ls10*call to _nvswitch_initialize_nvlipt_interrupts_ls10*call to nvswitch_tnvl_disable_interrupts*topEnable*topIntr*call to _nvswitch_service_nvlw_nonfatal_ls10*return_status*call to _nvswitch_service_nvlw_fatal_ls10*call to _nvswitch_service_npg_fatal_ls10*call to _nvswitch_service_npg_nonfatal_ls10*call to _nvswitch_service_nxbar_fatal_ls10*call to _nvswitch_service_priv_ring_ls10*call to _nvswitch_service_soe_fatal_ls10*call to _nvswitch_retrigger_engine_intr_ls10*call to _nvswitch_service_minion_fatal_ls10*call to _nvswitch_service_nvlipt_common_fatal_ls10*call to _nvswitch_service_nvldl_fatal_ls10*call to _nvswitch_service_nvltlc_fatal_ls10*call to _nvswitch_service_nvlipt_link_fatal_ls10*localIntrLinkMask*call to _nvswitch_service_nvlipt_lnk_fatal_ls10*call to _nvswitch_service_nvlipt_link_nonfatal_ls10*call to _nvswitch_service_nvldl_nonfatal_ls10*call to _nvswitch_service_nvltlc_nonfatal_ls10*call to nvswitch_minion_service_falcon_interrupts_ls10*call to _nvswitch_service_nvlipt_lnk_status_ls10*call to _nvswitch_service_nvlipt_lnk_nonfatal_ls10*call to nvswitch_corelib_get_dl_link_mode_ls10*call to nvswitch_corelib_training_complete_ls10*call to _nvswitch_clear_deferred_link_errors_ls10*lastLinkUpTime*call to nvswitch_set_dlpl_interrupts_ls10*call to nvswitch_are_link_clocks_on_ls10*call to _nvswitch_service_nvltlc_rx_lnk_nonfatal_0_ls10*call to _nvswitch_service_nvltlc_tx_lnk_nonfatal_0_ls10*call to _nvswitch_service_nvltlc_rx_lnk_nonfatal_1_ls10*call to 
_nvswitch_service_nvltlc_tx_lnk_nonfatal_1_ls10*call to _nvswitch_service_nvltlc_tx_sys_nonfatal_ls10*call to _nvswitch_service_nvltlc_rx_sys_nonfatal_ls10*call to nvswitch_soe_clear_engine_interrupt_counter_ls10*call to _nvswitch_service_nvldl_nonfatal_link_ls10*deferredLinkErrorsArgs**deferredLinkErrorsArgs*pErrorReportParams**pErrorReportParams*call to nvswitch_task_create_args*bLinkErrorsCallBackEnabled*call to _nvswitch_emit_deferred_link_errors_ls10**fn_args*bLinkStateCallBackEnabled*lastRetrainTime*bRedeferLinkStateCheck*call to cciReportLinkErrors*pLinkErrorsData**pLinkErrorsData*call to _nvswitch_emit_link_errors_nvldl_fatal_link_ls10*call to _nvswitch_emit_link_errors_nvldl_nonfatal_link_ls10*call to _nvswitch_emit_link_errors_nvltlc_rx_lnk_nonfatal_1_ls10*call to _nvswitch_emit_link_errors_nvlipt_lnk_nonfatal_ls10*call to _nvswitch_emit_link_errors_minion_fatal_ls10*call to _nvswitch_emit_link_errors_minion_nonfatal_ls10*errorThreshold*bInterruptTrigerred*call to nvswitch_configure_error_rate_threshold_interrupt_ls10*RX CRC Error Rate**RX CRC Error Rate*"RX CRC Error Rate"**"RX CRC Error Rate"*call to _nvswitch_dump_minion_ali_debug_registers_ls10*call to nvswitch_minion_get_ali_debug_registers_ls10*tile_idx*call to _nvswitch_service_nxbar_tile_ls10*call to _nvswitch_service_nxbar_tileout_ls10*call to nvswitch_soe_update_intr_report_en_ls10*call to _nvswitch_service_nvltlc_tx_sys_fatal_ls10*call to _nvswitch_service_nvltlc_rx_sys_fatal_ls10*call to _nvswitch_service_nvltlc_tx_lnk_fatal_0_ls10*call to _nvswitch_service_nvltlc_rx_lnk_fatal_0_ls10*call to _nvswitch_service_nvltlc_rx_lnk_fatal_1_ls10*call to _nvswitch_service_nport_nonfatal_ls10*call to _nvswitch_service_route_nonfatal_ls10*call to _nvswitch_service_ingress_nonfatal_ls10*call to _nvswitch_service_egress_nonfatal_ls10*call to _nvswitch_service_tstate_nonfatal_ls10*call to _nvswitch_service_sourcetrack_nonfatal_ls10*call to _nvswitch_service_multicast_nonfatal_ls10*call to 
_nvswitch_service_reduction_nonfatal_ls10*call to _nvswitch_service_nport_fatal_ls10*call to _nvswitch_service_route_fatal_ls10*call to _nvswitch_service_ingress_fatal_ls10*call to _nvswitch_service_egress_fatal_ls10*call to _nvswitch_service_tstate_fatal_ls10*call to _nvswitch_service_sourcetrack_fatal_ls10*call to _nvswitch_service_multicast_fatal_ls10*call to _nvswitch_service_reduction_fatal_ls10*mc_tstate*call to _nvswitch_collect_error_info_ls10*Red TS tag store fatal ECC**Red TS tag store fatal ECC*"Red TS tag store fatal ECC"**"Red TS tag store fatal ECC"*call to _nvswitch_construct_ecc_error_event_ls10*Red TS crumbstore fatal ECC**Red TS crumbstore fatal ECC*"Red TS crumbstore fatal ECC"**"Red TS crumbstore fatal ECC"*Red crumbstore overwrite**Red crumbstore overwrite*"Red crumbstore overwrite"**"Red crumbstore overwrite"*call to nvswitch_soe_disable_nport_fatal_interrupts_ls10*Red TS tag store single-bit threshold**Red TS tag store single-bit threshold*"Red TS tag store single-bit threshold"**"Red TS tag store single-bit threshold"*Red TS crumbstore single-bit threshold**Red TS crumbstore single-bit threshold*"Red TS crumbstore single-bit threshold"**"Red TS crumbstore single-bit threshold"*Red TS crumbstore RTO**Red TS crumbstore RTO*"Red TS crumbstore RTO"**"Red TS crumbstore RTO"*MC TS tag store fatal ECC**MC TS tag store fatal ECC*"MC TS tag store fatal ECC"**"MC TS tag store fatal ECC"*MC TS crumbstore fatal ECC**MC TS crumbstore fatal ECC*"MC TS crumbstore fatal ECC"**"MC TS crumbstore fatal ECC"*MC crumbstore overwrite**MC crumbstore overwrite*"MC crumbstore overwrite"**"MC crumbstore overwrite"*MC TS tag store single-bit threshold**MC TS tag store single-bit threshold*"MC TS tag store single-bit threshold"**"MC TS tag store single-bit threshold"*MC TS crumbstore single-bit threshold**MC TS crumbstore single-bit threshold*"MC TS crumbstore single-bit threshold"**"MC TS crumbstore single-bit threshold"*MC TS crumbstore MCTO**MC TS crumbstore 
MCTO*"MC TS crumbstore MCTO"**"MC TS crumbstore MCTO"*sourcetrack duplicate CREQ**sourcetrack duplicate CREQ*"sourcetrack duplicate CREQ"**"sourcetrack duplicate CREQ"*sourcetrack invalid TCEN0 CREQ**sourcetrack invalid TCEN0 CREQ*"sourcetrack invalid TCEN0 CREQ"**"sourcetrack invalid TCEN0 CREQ"*sourcetrack invalid TCEN1 CREQ**sourcetrack invalid TCEN1 CREQ*"sourcetrack invalid TCEN1 CREQ"**"sourcetrack invalid TCEN1 CREQ"**egress*pending_0*egress non-posted UR error**egress non-posted UR error*"egress non-posted UR error"**"egress non-posted UR error"*egress crossbar SB parity**egress crossbar SB parity*"egress crossbar SB parity"**"egress crossbar SB parity"*egress invalid VC set**egress invalid VC set*"egress invalid VC set"**"egress invalid VC set"*pending_1*egress MC response ECC DBE error**egress MC response ECC DBE error*"egress MC response ECC DBE error"**"egress MC response ECC DBE error"*egress reduction ECC DBE error**egress reduction ECC DBE error*"egress reduction ECC DBE error"**"egress reduction ECC DBE error"*egress MC SG ECC DBE error**egress MC SG ECC DBE error*"egress MC SG ECC DBE error"**"egress MC SG ECC DBE error"*egress MC ram ECC DBE error**egress MC ram ECC DBE error*"egress MC ram ECC DBE error"**"egress MC ram ECC DBE error"*egress reduction header ECC error limit**egress reduction header ECC error limit*"egress reduction header ECC error limit"**"egress reduction header ECC error limit"*egress MC response ECC error limit**egress MC response ECC error limit*"egress MC response ECC error limit"**"egress MC response ECC error limit"*egress RB ECC error limit**egress RB ECC error limit*"egress RB ECC error limit"**"egress RB ECC error limit"*egress RSG ECC error limit**egress RSG ECC error limit*"egress RSG ECC error limit"**"egress RSG ECC error limit"*egress MCRB ECC error limit**egress MCRB ECC error limit*"egress MCRB ECC error limit"**"egress MCRB ECC error limit"*egress MC header ECC error limit**egress MC header ECC error 
limit*"egress MC header ECC error limit"**"egress MC header ECC error limit"*egress reduction header ECC DBE error**egress reduction header ECC DBE error*"egress reduction header ECC DBE error"**"egress reduction header ECC DBE error"*egress reduction header parity error**egress reduction header parity error*"egress reduction header parity error"**"egress reduction header parity error"*egress reduction flit mismatch error**egress reduction flit mismatch error*"egress reduction flit mismatch error"**"egress reduction flit mismatch error"*egress reduction buffer ECC DBE error**egress reduction buffer ECC DBE error*"egress reduction buffer ECC DBE error"**"egress reduction buffer ECC DBE error"*egress MC response count error**egress MC response count error*"egress MC response count error"**"egress MC response count error"*egress reduction response count error**egress reduction response count error*"egress reduction response count error"**"egress reduction response count error"**ingress*raw_pending_0*ingress remap ECC**ingress remap ECC*"ingress remap ECC"**"ingress remap ECC"*ingress RID ECC**ingress RID ECC*"ingress RID ECC"**"ingress RID ECC"*ingress RLAN ECC**ingress RLAN ECC*"ingress RLAN ECC"**"ingress RLAN ECC"*ingress ExtA remap index**ingress ExtA remap index*"ingress ExtA remap index"**"ingress ExtA remap index"*ingress ExtB remap index**ingress ExtB remap index*"ingress ExtB remap index"**"ingress ExtB remap index"*ingress MC remap index**ingress MC remap index*"ingress MC remap index"**"ingress MC remap index"*ingress ExtA request context mismatch**ingress ExtA request context mismatch*"ingress ExtA request context mismatch"**"ingress ExtA request context mismatch"*ingress ExtB request context mismatch**ingress ExtB request context mismatch*"ingress ExtB request context mismatch"**"ingress ExtB request context mismatch"*ingress MC request context mismatch**ingress MC request context mismatch*"ingress MC request context mismatch"**"ingress MC request context 
mismatch"*ingress invalid ExtA ACL**ingress invalid ExtA ACL*"ingress invalid ExtA ACL"**"ingress invalid ExtA ACL"*ingress invalid ExtB ACL**ingress invalid ExtB ACL*"ingress invalid ExtB ACL"**"ingress invalid ExtB ACL"*ingress invalid MC ACL**ingress invalid MC ACL*"ingress invalid MC ACL"**"ingress invalid MC ACL"*ingress ExtA address bounds**ingress ExtA address bounds*"ingress ExtA address bounds"**"ingress ExtA address bounds"*ingress ExtB address bounds**ingress ExtB address bounds*"ingress ExtB address bounds"**"ingress ExtB address bounds"*ingress MC address bounds**ingress MC address bounds*"ingress MC address bounds"**"ingress MC address bounds"*ingress ExtA remap ECC**ingress ExtA remap ECC*"ingress ExtA remap ECC"**"ingress ExtA remap ECC"*ingress ExtB remap ECC**ingress ExtB remap ECC*"ingress ExtB remap ECC"**"ingress ExtB remap ECC"*ingress MC remap ECC**ingress MC remap ECC*"ingress MC remap ECC"**"ingress MC remap ECC"*ingress MC command to uc**ingress MC command to uc*"ingress MC command to uc"**"ingress MC command to uc"*ingress read reflective**ingress read reflective*"ingress read reflective"**"ingress read reflective"*ingress ExtA address type**ingress ExtA address type*"ingress ExtA address type"**"ingress ExtA address type"*ingress ExtB address type**ingress ExtB address type*"ingress ExtB address type"**"ingress ExtB address type"*ingress MC address type**ingress MC address type*"ingress MC address type"**"ingress MC address type"*ingress ExtA remap DBE**ingress ExtA remap DBE*"ingress ExtA remap DBE"**"ingress ExtA remap DBE"*ingress ExtB remap DBE**ingress ExtB remap DBE*"ingress ExtB remap DBE"**"ingress ExtB remap DBE"*ingress MC remap DBE**ingress MC remap DBE*"ingress MC remap DBE"**"ingress MC remap DBE"*GLT ECC limit**GLT ECC limit*"GLT ECC limit"**"GLT ECC limit"*MCRID ECC limit**MCRID ECC limit*"MCRID ECC limit"**"MCRID ECC limit"*EXTMCRID ECC limit**EXTMCRID ECC limit*"EXTMCRID ECC limit"**"EXTMCRID ECC limit"*RAM ECC 
limit**RAM ECC limit*"RAM ECC limit"**"RAM ECC limit"*invalid MC route**invalid MC route*"invalid MC route"**"invalid MC route"*MC route ECC**MC route ECC*"MC route ECC"**"MC route ECC"*Extd MC route ECC**Extd MC route ECC*"Extd MC route ECC"**"Extd MC route ECC"*route RAM ECC**route RAM ECC*"route RAM ECC"**"route RAM ECC"*call to _nvswitch_collect_nport_error_info_ls10*PRI WRITE SYSB error**PRI WRITE SYSB error*"PRI WRITE SYSB error"**"PRI WRITE SYSB error"*call to _nvswitch_ring_master_cmd_ls10*call to _nvswitch_initialize_multicast_tstate_interrupts*call to _nvswitch_initialize_reduction_tstate_interrupts*red_tstate*biosVersion*bL1Capable***pDevInfo*lnkErrStatus*call to _nvswitch_tl_request_get_timeout_value_ls10*clockStatus*bIsOff*call to nvswitch_execute_unilateral_link_shutdown_ls10*link_state_request*link_intr_subcode*call to nvswitch_launch_ALI_link_training*call to _nvswitch_get_nvlink_linerate_ls10*call to nvswitch_set_error_rate_threshold_ls10*bInterruptEn*bUserConfig*thresholdMan*thresholdExp*timescaleMan*timescaleExp*call to nvswitch_minion_set_sim_mode_ls10*call to nvswitch_minion_set_smf_settings_ls10*call to nvswitch_minion_select_uphy_tables_ls10*settingVal*forcedConfigLinkMask*call to nvswitch_init_buffer_ready_lr10*clkStatus*resetRequestStatus*call to nvswitch_wait_for_tl_request_ready_ls10*call to nvswitch_minion_get_rxdet_status_ls10*call to nvswitch_corelib_set_tx_mode_lr10*linkStateRequest*nvldlErrCntl*nvldlTopLinkState*nvldlTopIntr*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, ALI Training failure. Info 0x%x%x%x%x%x%x%x **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, ALI Training failure. Info 0x%x%x%x%x%x%x%x *ALI Training failure. Info 0x%x%x%x%x%x%x%x **ALI Training failure. 
Info 0x%x%x%x%x%x%x%x *call to nvswitch_ctrl_set_link_l1_threshold_ls10*scrRegVal*call to nvswitch_initialize_device_state_lr10*10b**10b*nvLinkparam7*nvLinkparam8*nvLinkparam9*call to _nvswitch_set_next_led_state_ls10*call to _nvswitch_set_led_state_ls10*call to _nvswitch_get_next_led_state_ls10*call to _nvswitch_get_next_led_state_links_ls10*ledNextState*call to _nvswitch_get_next_led_state_link_ls10*call to _nvswitch_resolve_led_state_ls10*call to _nvswitch_check_for_link_traffic*tp_counter_previous_sum**tp_counter_previous_sum*call to _nvswitch_get_led_state_regval_ls10*current_led_state*next_led_state*call to _nvswitch_reset_and_drain_links_ls10*call to nvswitch_is_cci_supported*gpioVal*timestampNs*Firmware recovery mode**Firmware recovery mode*sxid_desc**sxid_desc*IO failure**IO failure*Firmware initialization failure**Firmware initialization failure*report_saw*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, %s (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, %s (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x *Fatal, %s (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x **Fatal, %s (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x *pBoardId*call to nvswitch_get_error_rate_threshold_ls10**errorThreshold*call to nvswitch_ctrl_clear_throughput_counters_ls10*call to nvswitch_ctrl_clear_lp_counters_ls10*call to nvswitch_ctrl_clear_dl_error_counters_ls10*counterValidMaskOut*counterValidMask*cntIdx*counterValues**counterValues*call to nvswitch_inband_read_data*inBandData**sendBuffer*call to nvswitch_split_and_send_inband_data_ls10*call to nvswitch_minion_send_inband_data_ls10*call to nvswitch_parse_bios_image_lr10*call to nvswitch_ctrl_get_nvlink_status_lr10*call to _nvswitch_get_nvlink_power_state_ls10*linkPowerState*forcedConfgLinkMask*error_vector**error_vector*val_hi*vc0*busy*stall*vc1**residency*max_threshold*call to _nvswitch_init_nport_ecc_control_ls10*call to 
nvswitch_tnvl_reg_wr_32_ls10*call to nvswitch_get_eng_base_ls10*call to nvswitch_tnvl_eng_wr_32_ls10*call to _nvswitch_get_eng_descriptor_ls10*call to nvswitch_mc_read_mc_rid_entry_ls10*extendedTable*call to nvswitch_mc_unwind_directives_ls10*directives*ports*vcHop*portsPerSprayGroup*replicaOffset*replicaValid**directives**ports**vcHop**portsPerSprayGroup**replicaOffset**replicaValid*mcSize*numSprayGroups*extendedPtr*noDynRsp*extendedValid*call to nvswitch_mc_invalidate_mc_rid_entry_ls10*mcpl_size*num_spray_groups*ext_ptr*no_dyn_rsp*ext_ptr_valid*call to nvswitch_mc_build_mcp_list_ls10*call to nvswitch_mc_program_mc_rid_entry_ls10*call to _nvswitch_set_mc_remap_policy_ls10*call to _nvswitch_set_remap_policy_ls10*reflective*call to nvswitch_is_smbpbi_supported_lr10*call to _nvswitch_get_bios_version*pVersion*eccDecFailed*eccDecFailedOverflowed*call to _nvswitch_get_engine_base_ls10*engTILEOUT_BCAST**engTILEOUT_BCAST*engTILEOUT_MULTICAST**engTILEOUT_MULTICAST*call to nvswitch_internal_latency_bin_log*call to nvswitch_set_nport_tprod_state_ls10*bIsLinkInEmergencyShutdown*call to nvswitch_soe_issue_nport_reset_ls10*call to nvswitch_soe_restore_nport_state_ls10*call to _nvswitch_link_reset_interrupts_ls10*bAreDlClocksOn*call to _nvswitch_are_dl_clocks_on*call to _nvswitch_portstat_reset_latency_counters_ls10*call to nvswitch_soe_set_nport_interrupts_ls10*call to nvswitch_minion_clear_dl_error_counters_ls10*call to _nvswitch_init_ganged_link_routing_ls10*call to _nvswitch_init_cmd_routing_ls10*call to _nvswitch_init_portstat_counters_ls10*glt_index*call to nvswitch_set_ganged_link_table_ls10*call to nvswitch_nvs_top_prod_ls10*call to nvswitch_apply_prod_nvlw_ls10*call to nvswitch_apply_prod_nxbar_ls10*call to nvswitch_init_pmgr_ls10*call to nvswitch_init_pmgr_devices_ls10*call to nvswitch_unload_soe_ls10*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Fatal, Firmware initialization failure (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x **nvidia-%s: SXid 
(PCI:%04x:%02x:%02x.%x): %05d, Fatal, Firmware initialization failure (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x *Fatal, Firmware initialization failure (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x **Fatal, Firmware initialization failure (0x%x/0x%x, 0x%x, 0x%x, 0x%x/0x%x, 0x%x, 0x%x, 0x%x, 0x%x *checked_data*dlstatLinkIntr*remainingBuffer*inbandData*receiveBuffer**receiveBuffer*bytesToXfer*call to nvswitch_filter_messages*tempStatus*call to nvswitch_service_minion_all_links_ls10*dlcmd*minion_error*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, Link %02d DLCMD FAULT: cmd=0x%x DL_CMD=0x%x **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, Link %02d DLCMD FAULT: cmd=0x%x DL_CMD=0x%x *Non-fatal, Link %02d DLCMD FAULT: cmd=0x%x DL_CMD=0x%x **Non-fatal, Link %02d DLCMD FAULT: cmd=0x%x DL_CMD=0x%x *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, Link %02d DLCMD TIMEOUT: cmd=0x%x DL_CMD=0x%0x **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, Non-fatal, Link %02d DLCMD TIMEOUT: cmd=0x%x DL_CMD=0x%0x *Non-fatal, Link %02d DLCMD TIMEOUT: cmd=0x%x DL_CMD=0x%0x **Non-fatal, Link %02d DLCMD TIMEOUT: cmd=0x%x DL_CMD=0x%0x *tcpEPort*tcpEAltPath*tcpEVCHop*tcpOPort*tcpOAltPath*tcpOVCHop*tcp*portFlag*continueRound*lastRound*spray_group_ptrs**spray_group_ptrs*cur_dir**cur_dir*cur_sg*port_idx*ports_in_cur_sg*call to _nvswitch_col_offset_to_port_ls10*vc_hop*replica_offset*replica_valid*ports_per_spray_group*cpo*port_list*pri_replica_offsets*replica_valid_array*vchop_array*entries_used*mcp_list**mcp_list*call to nvswitch_init_portlist_ls10*spray_group_size*tmp_mcp_list**tmp_mcp_list*call to _nvswitch_mc_build_ports_array*vchop_map**vchop_map***vchop_map*primary_replica_port*mcplist_offset*call to _nvswitch_mc_build_portlist*call to _nvswitch_mc_set_round_flags*call to _nvswitch_mc_set_port_flags*call to _nvswitch_mc_copy_valid_entries_ls10*spray_group_idx*primaryReplica*call to 
_is_primary_replica*last_portlist_pos*next_dir**next_dir*cur_portlist_pos*round_start*round_end*round_size*roundSize*round_tcp_mask*call to _nvswitch_get_column_port_offset_ls10*spray_group*vchop_array_sg*call to nvswitch_i2c_get_port_info_lr10*call to nvswitch_ctrl_i2c_indexed_lr10*call to soeI2CAccess_HAL*kernelI2CSupported*soeI2CSupported*call to _nvswitch_i2c_ports_priv_locked_ls10*call to _nvswitch_i2c_init_soe_ls10*call to _nvswitch_i2c_set_port_pmgr_ls10*Ports**Ports*defaultSpeedMode**pCpuAddr***pCpuAddr*call to nvswitch_smbpbi_send_unload_lr10*call to nvswitch_os_strncpy*driverVersionString*pInitDataCmd**driverVersionString*pLogCmd*sxidId*msgLen*segSize*errorString**errorString*msgOffset*logMessageNesting*call to soeSetupHal_LR10*i2cAccess*pTnvlPreLock**pTnvlPreLock*call to _soeI2CAccessSend*flcnRet*call to _soeI2cFlcnStatusToNvlStatus*pI2cCmd**pI2cCmd*call to _soeUpdateInitMsgQueuesInfo*call to _nvswitch_soe_attach_detach_driver_ls10*call to _nvswitch_is_soe_attached_ls10*call to flcnDbgInfoCaptureRiscvPcTrace_HAL*call to _soeIntrStatus_LS10*nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init Failed(0) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init Failed(0) *SOE init Failed(0) **SOE init Failed(0) *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(1) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(1) *SOE init failed(1) **SOE init failed(1) *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(2) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(2) *SOE init failed(2) **SOE init failed(2) *nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(4) **nvidia-%s: SXid (PCI:%04x:%02x:%02x.%x): %05d, SOE init failed(4) *SOE init failed(4) **SOE init failed(4) *call to 
[Binary string-table residue — no recoverable prose. The extracted strings belong to the NVIDIA NVSwitch LS10 driver and associated tooling, covering: engine/NPORT register access and interrupt handling; SXid thermal slowdown and TNVL-mode error messages; TNVL lock, FSP attestation report, and certificate-chain retrieval (DER/PEM, including two embedded NVIDIA device-identity certificates, each duplicated); the `_nvswitch_ctrl_*` ioctl dispatch table; I2C/SMBus, InfoROM/BBX, and inband-data queries; SOE/CCI construction and message-queue (`msgq`) plumbing; SoftFloat arithmetic routines; SHA-256; xz/LZMA2 decompression; DWARF `.debug_line`/`.debug_aranges` and ELF section parsing; LIBOS log-buffer decoding and NvLog merging; and nvidia-push/nv3d push-buffer and channel allocation paths.]
incorrectly! **libosMergeNvlog: Invalid logEntrySize: %d, merged buffer configured incorrectly! *(pLog->hNvLogWrap != 0) && (pNvLogBuffer != NULL)**(pLog->hNvLogWrap != 0) && (pNvLogBuffer != NULL)*nvlogPos*(pLog->hNvLogNoWrap != 0) && (pNvLogBuffer != NULL)**(pLog->hNvLogNoWrap != 0) && (pNvLogBuffer != NULL)*logDecode->physicLogBuffer is NULL **logDecode->physicLogBuffer is NULL *call to libosExtractLog_ReadRecord*meta*pPrevRec*call to portMemCmp***** scratch buffer overflow. lost entries **** ****** scratch buffer overflow. lost entries **** **pPrevRec*call to libosPrintLogRecords*call to _getLoggingMetadata**meta*taskId***** Bad metadata. Lost %lld entries from %s-%s **** ****** Bad metadata. Lost %lld entries from %s-%s **** *logSymbolResolver**logSymbolResolver*call to _findLogBufferFromSectionName*pLogSymbolResolver**pLogSymbolResolver*argCount***** Buffer wrapped. Lost %lld entries from %s-%s **** ****** Buffer wrapped. Lost %lld entries from %s-%s **** *metadataVA**pMetadata*call to LibosElfMapVirtualString*pMetadataEx**pMetadataEx*pRec***** Bad metadata. Lost %lld entries from %s **** ****** Bad metadata. 
Lost %lld entries from %s **** *lineLogLevel**format*call to portStringLength*call to libos_printf_a*call to nvDbgSnprintf*NVRM: GPU%u %s-%s: %s(%u): **NVRM: GPU%u %s-%s: %s(%u): *remain*line_buffer_end*Assertion failed: **Assertion failed: *bFixedString*Assertion failed: %s (0x%08X) returned from **Assertion failed: %s (0x%08X) returned from *Check failed: **Check failed: *Check failed: %s (0x%08X) returned from **Check failed: %s (0x%08X) returned from *call to flush_line_buffer*l10n*argpos*nl_arg**nl_arg*call to getint*st*-+ 0X0x**-+ 0X0x*bResolvePtrVal*call to fmt_x*call to fmt_o*call to fmt_u*call to s_getSymbolDataStr*(bad-pointer)**(bad-pointer)*call to portStringLengthSafe*wc**wc*call to pad*call to emit_string*symDecodedLineLen*decodedLine*call to LibosDebugResolveSymbolToName*resolver*call to LibosDwarfResolveLine*bResolved*%s+%lld (%s%s%s:%lld)**%s+%lld (%s%s%s:%lld)*/**/*??? (%s:%lld)**??? (%s:%lld)*%s+%lld**%s+%lld*logLevel*call to osSanityTestIsr*call to osIsr*__nvoc_metadata_ptr*vtable*PageCount*ActualSize*_addressSpace*_hwResId*call to memdescSetPteKindForGpu*call to memdescGetPteKindForGpu*addressTranslation*pReference*pDuplicate*pRmResource**ppMemDesc*pSharePolicy*ppNotifShare**ppNotifShare*pNotifShare*named**pKernelGsplite***pKernelGsplite**pKCe***pKCe**pKernelGraphics***pKernelGraphics**pChild***pChild*chipInfo*call to gpuGetChipMajRev*call to gpuGetChipMinRev*engineOrder*call to nvAssertFailed*generated/g_gpu_nvoc.h**generated/g_gpu_nvoc.h*pIsEngineRequired*pTableSize*pSliLinkCircular*pVidLinkCount*pMsgAddr**pMsgAddr*pMsgSize*pErrorString**pErrorString*pbDrainRecommended*pbResetRequired*call to 
gpuEncodeDomainBusDevice**OOR_ARCH_X86_64**OOR_ARCH_PPC64LE**OOR_ARCH_ARM**OOR_ARCH_AARCH64**OOR_ARCH_NONE**OOR_ARCH__UNKNOWN*bClientPageTablesPmaManaged*bPmaForcePersistence*bPmaInitialized*generated/g_mem_mgr_nvoc.h**generated/g_mem_mgr_nvoc.h*pAllocData*pVidHeapAlloc*pHeapFlag*pMemoryRange*pBlAddrs*phMemory*pAddrRange**ppMemoryPartitionHeap*pbIsValid*pMmuLockLo*pMmuLockHi*pFbAllocInfo*bar1Info*pMemSize*pBankPlacementLowData*pIntrService*pRecords*generated/g_intr_nvoc.h**generated/g_intr_nvoc.h*ppIntrTable**ppIntrTable*pInterruptVectors**pInterruptVectors*pIntrmode*pPending*pKernelGraphicsContext_PRIVATE**pVgpuGfxpBuffers*engineInfo*engineInfoList*generated/g_kernel_fifo_nvoc.h**generated/g_kernel_fifo_nvoc.h**UNKNOWN_ACCESS_TYPE*call to kfifoGetMaxSubcontext_DISPATCH**arg5*pChidMgr*pEngineFifoList*pVChid*pPresent*pUserdAperture*pUserdAttribute*pAddrShift*bar1MapSize*ppPbdmaIds**ppPbdmaIds*pNumPbdmas*pOutVal*pFlags**ppMemdesc*pWorkSubmitToken*pShift*pAlignment*pbInstProtectedMem*pInstAttr*ppInstAllocList**ppInstAllocList*pKernelMIGManager_PRIVATE*bIsA100ReducedConfig*generated/g_kernel_mig_manager_nvoc.h**generated/g_kernel_mig_manager_nvoc.h*pProfile*pCIIds*generated/g_kern_mem_sys_nvoc.h**generated/g_kern_mem_sys_nvoc.h*KernelMemorySystem*rsvdPhysAddr*generated/g_subdevice_nvoc.h**generated/g_subdevice_nvoc.h*call to subdeviceInternalControlForward_DISPATCH*pGspFeaturesParams*pRcRecovery*pChannelInfo*pDisableChannelParams*pBlackListParams*pVideoEventParams*powerInfoParams*pLevelInfoParams*pPerfmonParams*pBiosPostTime*pBiosGetSKUInfoParams*pBiosInfoParams*generated/g_objtmr_nvoc.h**generated/g_objtmr_nvoc.h*call to osGetTimestamp*call to osGetMonotonicTimeNs***fp*call to threadStateInit*call to rmapiLockAcquire*call to rmGpuGroupLockAcquire*Off**Off*Active**Active*pVidmemPowerStatus**pVidmemPowerStatus*call to RmGetDynamicPowerManagementStatus*pNvp*pDynamicPowerStatus**pDynamicPowerStatus*call to 
RmGetGpuGcxSupport*pGc6Supported**pGc6Supported*pGcoffSupported**pGcoffSupported*pS0ixStatus**pS0ixStatus*call to RmGetDynamicBoostSupport*pDbStatus**pDbStatus*call to rmGpuGroupLockRelease*call to rmapiLockRelease*call to threadStateFree*powerInfo**vidmem_power_status**dynamic_power_status**gc6_support**gcoff_support**s0ix_status**db_support*nvp*call to RmConfigureUpstreamPortForRTD3*call to RmDestroyDeferredDynamicPowerManagement*call to rmGpuLocksAcquire*call to RmCheckRtd3GcxSupport*call to RmInitDeferredDynamicPowerManagement*call to RmInitS0ixPowerManagement*dynamic_power*gc6_upstream_port_configured*is_pm_unsupported*call to rmGpuLocksRelease*call to RmAcpiD3ColdDsm*gc6State*call to nvDbg_Printf*arch/nvalloc/unix/src/dynamic-power.c*NVRM: Aux Power and Pex delay settings %s successfully. **arch/nvalloc/unix/src/dynamic-power.c**NVRM: Aux Power and Pex delay settings %s successfully. *applied**applied*cleared**cleared*call to nv_acpi_d3cold_dsm_for_upstream_port*NVRM: %s: PEX _DSM subfunction: 0x%X failed. **NVRM: %s: PEX _DSM subfunction: 0x%X failed. *call to osReadRegistryDword*s0ix_pm_enabled*s0ix_gcoff_max_fb_size*call to osQueueWorkItem*NVRM: Failed to queue RmHandleIdleSustained() workitem. **NVRM: Failed to queue RmHandleIdleSustained() workitem. 
*call to RmScheduleCallbackForIdlePreConditionsUnderGpuLock*b_idle_sustained_workitem_queued*call to nv_revoke_gpu_mappings*call to RmScheduleCallbackToIndicateIdle**Enabled (fine-grained)**Enabled (coarse-grained)**Disabled by default**Not supported*call to os_get_dynamic_boost_support*db_supported**DbStatus*call to RmNotifyClientAboutHotplug*NVRM: Failed to increment dynamic power refcount **NVRM: Failed to increment dynamic power refcount *call to gpuNotifySubDeviceEvent_IMPL*call to rmapiEnterRtd3PmPath*call to RmTransitionDynamicPower*bTryAgain*call to rmapiLeaveRtd3PmPath*call to RmGcxPowerManagement*call to nv_idle_holdoff*call to RmScheduleCallbackToRemoveIdleHoldoff*call to os_flush_work_queue*rm_pvt_status*call to nvAssertOkFailedNoLog*os_flush_work_queue(pNv->queue, pmAction != NV_PM_ACTION_RESUME)**os_flush_work_queue(pNv->queue, pmAction != NV_PM_ACTION_RESUME)*call to os_ref_dynamic_power*call to RmPowerManagementTegra*call to os_disable_console_access*bConsoleDisabled*call to nv_indicate_idle*call to RmCancelCallbackToRemoveIdleHoldoff*b_idle_holdoff*pm_state*call to RmPowerManagement*call to os_enable_console_access*call to os_unref_dynamic_power*acpiMethodData*jtMethodData*NVRM: AML overrides present in Desktop**NVRM: AML overrides present in Desktop*d0_state_in_suspend*call to nv_s2idle_pm_configured*bCanUseGc6*call to memmgrGetUsedRamSize_IMPL*call to RmCheckForGcOffPM*call to fbsrReserveSysMemoryForPowerMgmt_IMPL*pFbsr**pFbsr***pFbsr*PDB_PROP_GPU_GCOFF_STATE_ENTERING*bPreserveComptagBackingStoreOnSuspend*PDB_PROP_GPU_GCOFF_STATE_ENTERED*call to fbsrFreeReservedSysMemoryForPowerMgmt_IMPL*entryParams*flavorId*stepMask*bIsRTD3Transition*bSkipPstateSanity*call to gpuGc6Entry_IMPL*NVRM: System suspend failed with current system suspend configuration. Please change the system suspend configuration to s2idle in /sys/power/mem_sleep. **NVRM: System suspend failed with current system suspend configuration. 
Please change the system suspend configuration to s2idle in /sys/power/mem_sleep. *exitParams*call to gpuGc6Exit_IMPL*call to RmUpdateFixedFbsrModes*InHibernate*call to intrGetIntrEn_IMPL*IntrEn*call to intrSetIntrEn_IMPL*call to gpuEnterHibernate_IMPL*call to gpuEnterStandby_IMPL*call to gpuResumeFromHibernate_IMPL*call to gpuResumeFromStandby_IMPL*call to RmPowerSourceChangeEvent*call to RmRequestDNotifierState*fixedFbsrModesMask*gcoff_max_fb_size*b_fine_not_supported*NVRM: RTD3 is not supported. **NVRM: RTD3 is not supported. *call to AddGpuDynamicPowerSupported*call to gpuGetInstance*call to CreateDynamicPowerCallbacks*deferred_idle_enabled*clients_gcoff_disallow_refcount*NVRM: Failed to register for dynamic power callbacks **NVRM: Failed to register for dynamic power callbacks *NVRM: RTD3 is not supported for this arch **NVRM: RTD3 is not supported for this arch *NVRM: Disabling RTD3. [GC6 support=%d GCOFF support=%d] **NVRM: Disabling RTD3. [GC6 support=%d GCOFF support=%d] *rmapi*NVRM: Failed to get Virtualization mode, status=0x%x **NVRM: Failed to get Virtualization mode, status=0x%x *virtModeParams*NVRM: RTD3 is not supported on VM **NVRM: RTD3 is not supported on VM *NVRM: RTD3 ACPI support is not available. **NVRM: RTD3 ACPI support is not available. 
*call to rmDeviceGpuLockIsOwner*rmDeviceGpuLockIsOwner(pGpu->gpuInstance)**rmDeviceGpuLockIsOwner(pGpu->gpuInstance)*remove_idle_holdoff**remove_idle_holdoff*scheduleEventParams**pEvent***pEvent*bUseTimeAbs*call to tmrCtrlCmdEventSchedule*NVRM: Error scheduling kernel refcount decrement callback **NVRM: Error scheduling kernel refcount decrement callback *indicate_idle_event**indicate_idle_event*NVRM: Error scheduling indicate idle callback **NVRM: Error scheduling indicate idle callback *idle_precondition_check_event**idle_precondition_check_event*idle_precondition_check_callback_scheduled*NVRM: Error scheduling precondition callback **NVRM: Error scheduling precondition callback *createEventParams*ppEvent**ppEvent***ppEvent*pTimeProc**pTimeProc***pTimeProc**timerCallbackForIdlePreConditions***pCallbackData**pGpu*call to tmrCtrlCmdEventCreate*NVRM: Error creating dynamic power precondition check callback **NVRM: Error creating dynamic power precondition check callback ***idle_precondition_check_event***indicate_idle_event***remove_idle_holdoff*timerCallbackToIndicateIdle**timerCallbackToIndicateIdle*NVRM: Error creating callback to indicate GPU idle **NVRM: Error creating callback to indicate GPU idle **timerCallbackToRemoveIdleHoldoff*NVRM: Error creating callback to decrease kernel refcount **NVRM: Error creating callback to decrease kernel refcount *call to RmCancelDynamicPowerCallbacks*call to RmDestroyDynamicPowerCallbacks*call to RemoveGpuDynamicPowerSupported*destroyParams*call to tmrCtrlCmdEventDestroy*cancelParams*call to tmrCtrlCmdEventCancel*call to RmCanEnterGcxUnderGpuLock*NVRM: unexpected dynamic power state 0x%x **NVRM: unexpected dynamic power state 0x%x *call to nv_dynamic_power_state_transition*call to RmQueueIdleSustainedWorkitem*call to nv_acquire_mmap_lock*call to nv_get_all_mappings_revoked_locked*call to nv_release_mmap_lock*NVRM: Queuing of remove idle holdoff work item failed with status : 0x%x **NVRM: Queuing of remove idle holdoff work 
item failed with status : 0x%x *call to RmCheckForGcxSupportOnCurrentState*NVRM: NVRM, Failed to get GCx pre-requisite, status=0x%x **NVRM: NVRM, Failed to get GCx pre-requisite, status=0x%x *entryPrerequisiteParams*nv->removed**nv->removed*call to acquireDynamicPowerMutex*ref >= 0**ref >= 0*nvp->dynamic_power.state == NV_DYNAMIC_POWER_STATE_IN_USE**nvp->dynamic_power.state == NV_DYNAMIC_POWER_STATE_IN_USE*call to releaseDynamicPowerMutex*NVRM: Failed to allocate memory **NVRM: Failed to allocate memory *pNvpcfParams**pNvpcfParams*NVRM: Unexpected dynamic power state 0x%x **NVRM: Unexpected dynamic power state 0x%x *call to nv_indicate_not_idle*call to RmScheduleCallbackForIdlePreConditions*old_state == NV_DYNAMIC_POWER_STATE_IDLE_INSTANT || old_state == NV_DYNAMIC_POWER_STATE_IDLE_SUSTAINED**old_state == NV_DYNAMIC_POWER_STATE_IDLE_INSTANT || old_state == NV_DYNAMIC_POWER_STATE_IDLE_SUSTAINED*call to nv_disallow_runtime_suspend*call to nv_pci_tegra_pm_deinit*is_tegra_pci_igpu_rg_enabled*NVRM: Tegra PCI iGPU railgating is disabled **NVRM: Tegra PCI iGPU railgating is disabled *call to portSyncMutexDestroy**mutex*dynamic_power_regkey*call to rmReadAndParseDynamicPowerRegkey*call to nv_dynamic_power_available*NVRM: %s: Disabling dynamic power management either due to lack of system support or due to error (%d) in reading regkey. **NVRM: %s: Disabling dynamic power management either due to lack of system support or due to error (%d) in reading regkey. *gcOffMaxFbSizeMb*call to portSyncMutexCreate*NVRM: %s: failed to create power mutex **NVRM: %s: failed to create power mutex *NVRM: Unknown DynamicPowerManagement value '%u' specified; disabling dynamic power management. **NVRM: Unknown DynamicPowerManagement value '%u' specified; disabling dynamic power management. 
*bIsTegraPciIgpuRgSupported*call to nv_pci_tegra_pm_init*NVRM: Tegra PCI iGPU railgating is enabled **NVRM: Tegra PCI iGPU railgating is enabled *call to nv_allow_runtime_suspend*call to nv_set_primary_vga_status*call to rm_get_uefi_console_status*bUefiConsole*console_device*refcount*PDB_PROP_SYS_SUPPORTS_S0IX*NVRM: NVRM: Tegra PCI iGPU detected, Rail-Gating is supported. **NVRM: NVRM: Tegra PCI iGPU detected, Rail-Gating is supported. *pRegkeyValue*call to decodePmcBoot42ChipId*call to rm_is_system_notebook*__nvoc_pbase_GpuResource*bGcoffDisallowed*call to osClientGcoffDisallowRefcount*call to nv_audio_dynamic_power*call to osRefGpuAccessNeeded*call to osUnrefGpuAccessNeeded*NVRM: %s: Unexpected dynamic power refcount value **NVRM: %s: Unexpected dynamic power refcount value *call to RmCanEnterGcx*nvp->dynamic_power.deferred_idle_enabled**nvp->dynamic_power.deferred_idle_enabled*old_state != new_state**old_state != new_state*call to portAtomicCompareAndSwapS32*NVRM: %s: state transition %s -> %s *call to nv_dynamic_power_state_string**NVRM: %s: state transition %s -> %s *NVRM: %s: FAILED state transition %s -> %s **NVRM: %s: FAILED state transition %s -> %s **IN_USE**IDLE_INSTANT**IDLE_SUSTAINED**IDLE_INDICATED**UNEXPECTED*call to portSyncMutexRelease*call to portSyncMutexAcquire*ppCpuMapping**ppCpuMapping*pClientEntry*call to nv_is_chassis_notebook*call to nv_acpi_is_battery_present*call to osIsAdministrator*paramLocation***pProcessToken*gpuOsInfo**gpuOsInfo***gpuOsInfo*ctl_nvfp*clientOSInfo**clientOSInfo***clientOSInfo**ctl_nvfp*pApi**pApi*pParms**pParms*call to RmAllocOsDescriptor*call to Nv01AllocMemoryWithSecInfo*call to rm_create_mmap_context*arch/nvalloc/unix/src/escape.c*NVRM: could not create mmap context for %p **arch/nvalloc/unix/src/escape.c**NVRM: could not create mmap context for %p *call to Nv01AllocObjectWithSecInfo*call to Nv04AllocWithSecInfo*call to Nv04AllocWithAccessSecInfo*pApiAccess*call to Nv01FreeWithSecInfo*call to 
rm_client_free_os_events*call to RmCreateOsDescriptor*call to Nv04VidHeapControlWithSecInfo*call to Nv04I2CAccessWithSecInfo*call to Nv04IdleChannelsWithSecInfo*call to Nv04MapMemoryWithSecInfo*call to Nv04UnmapMemoryWithSecInfo*call to rm_access_registry*pDevNode**pDevNode*pParmStr**pParmStr*pBinaryData**pBinaryData*call to Nv04AllocContextDmaWithSecInfo*call to Nv04BindContextDmaWithSecInfo*call to Nv04MapMemoryDmaWithSecInfo*call to Nv04UnmapMemoryDmaWithSecInfo*call to Nv04DupObjectWithSecInfo*call to Nv04ShareWithSecInfo*call to nv_get_adapter_state**pNv*call to rm_get_adapter_status*call to RmGetDeviceFd*dev_nvfp**dev_nvfp*call to portAtomicCompareAndSwapU32*call to Nv04ControlWithSecInfo*pOldCpuAddress**pOldCpuAddress***pOldCpuAddress*pNewCpuAddress**pNewCpuAddress***pNewCpuAddress*call to rm_update_device_mapping_info*call to cliresCtrlCmdNvdGetNvlogInfo_IMPL*call to cliresCtrlCmdNvdGetNvlogBufferInfo_IMPL*call to cliresCtrlCmdNvdGetNvlog_IMPL*call to portAtomicExSetS64*ctl_nvfp_priv**ctl_nvfp_priv***ctl_nvfp_priv*NVRM: unknown NVRM ioctl command: 0x%x **NVRM: unknown NVRM ioctl command: 0x%x *pVidHeapParams**pVidHeapParams*AllocOsDesc***pDescriptor*call to os_lock_user_pages*call to os_lookup_user_io_memory*pPteArray**pPteArray*call to os_unlock_user_pages*paramCopy*msgTag**msgTag*ppKernelParams**ppKernelParams***ppKernelParams***pUserParams*call to portSafeMulU32*bSizeValid*call to rmapiParamsAcquire**pKernelParams*pAttachGpuParams*pExportMemParams*call to rmapiParamsRelease*rmapiParamsRelease(¶mCopy) == NV_OK**rmapiParamsRelease(¶mCopy) == 
NV_OK*NV_FALSE*generated/g_vaspace_nvoc.h**NV_FALSE**generated/g_vaspace_nvoc.h*pPhysAddr*pPasid*pMemBlock*pAllocInfo*pKernelGraphicsManager_PRIVATE*veidInUseMask*grIdxVeidMask**grIdxVeidMask*pVeidCount*generated/g_kernel_bif_nvoc.h**generated/g_kernel_bif_nvoc.h*pBandwidth*pKernelBif0*addrReg*pMirrorBase*pMirrorSize*bifAtomicsmask*pNumAreas*pOffsets*pSizes*pBif*pciStart*pcieStart*pRegmapRef*bFlaSupported*generated/g_kern_bus_nvoc.h**generated/g_kern_bus_nvoc.h*pSpaValue**pPriv*ppPriv**ppPriv*pPDB*pInstBlkMemDesc*memDescIn*pKernelBus0*pKernelBus1*pMemArea*pPeerKernelBus*dma_size*pRemoteGpu*ppP2PDomMemDesc**ppP2PDomMemDesc*pLocalKernelBus*pLocalGpu*ppWMBoxMemDesc**ppWMBoxMemDesc*pMailboxAreaSize*pMailboxAlignmentSize*pMailboxMaxOffset64KB*pCpuPtr**pCpuPtr*call to os_device_vm_present*call to os_is_grid_supported*call to gpumgrIsVgxRmFirmwareCapableChip*call to osWriteRegistryDword*RMPowerFeature**RMPowerFeature*RmDisableInforomBBX**RmDisableInforomBBX*RMProcessNonStallIntrInLocklessIsr**RMProcessNonStallIntrInLocklessIsr*RMDumpNvLog**RMDumpNvLog*RmRcWatchdog**RmRcWatchdog*ForceP2P**ForceP2P*call to os_get_grid_csp_support*vgpu_info*call to os_call_vgpu_vfio*call to CliAddSystemEvent*isEventNotified*call to is_bar_64bit*isBar064bit*call to kvgpumgrGetHostVgpuDeviceFromVgpuDevName*pVgpuDevName*call to gpuIsSriovEnabled*barSizes*call to _nv_vgpu_get_bar_size*configParams*arch/nvalloc/unix/src/os-hypervisor.c*NVRM: Failed to query BAR size for index %u 0x%x **arch/nvalloc/unix/src/os-hypervisor.c**NVRM: Failed to query BAR size for index %u 0x%x *call to _nv_vgpu_get_sparse_mmap*sparseOffsets*sparseSizes*sparseCount*call to kbifGetVFSparseMmapRegions_DISPATCH*NVRM: Not enough space for sparse mmap region info **NVRM: Not enough space for sparse mmap region info *call to nv_parse_config_params*direct_gpu_timer_access**direct_gpu_timer_access*call to tmrGetTimerBar0MapInfo_DISPATCH*call to kfifoGetUsermodeMapInfo_DISPATCH*config_params*call to rm_string_token*call to 
os_string_length*call to os_strtoul*call to os_string_compare**configParams*hbmAddr*NVRM: %s GPU handle is not valid **NVRM: %s GPU handle is not valid *NVRM: non contiguous HBM region is not supported **NVRM: non contiguous HBM region is not supported *hbmRegionList*pKernelVgpuMgr*pRequestVgpu*vgpuDevName**vgpuDevName**pVgpuDevName*real**call to listHead_IMPL**pRequestVgpu**call to listNext_IMPL*call to kvgpumgrSetGpuInstanceId*call to kvgpumgrSetPlacementId*call to nv_vgpu_rm_get_bar_info**pKernelBus*call to kbusGetPciBarSize_IMPL*call to kvgpumgrGetVgpuTypeInfo*kvgpumgrGetVgpuTypeInfo(pKernelHostVgpuDevice->vgpuType, &vgpuTypeInfo)**kvgpumgrGetVgpuTypeInfo(pKernelHostVgpuDevice->vgpuType, &vgpuTypeInfo)*vgpuTypeInfo*call to kvgpumgrGetPgpuIndex*kvgpumgrGetPgpuIndex(pKernelVgpuMgr, pGpu->gpuId, &pgpuIndex)**kvgpumgrGetPgpuIndex(pKernelVgpuMgr, pGpu->gpuId, &pgpuIndex)*vgpuExtraParams*override_bar1_size**vgpuExtraParams**override_bar1_size*bOverrideBar1Size*call to gpuIsVfResizableBAR1Supported*sriovState*vfBarSize**vfBarSize*Compute*vgpuClass**Compute**vgpuClass*bar1SizeInBytes*pgpuInfo**pgpuInfo*guestBar1*call to nvPrevPow2_U64*address64**address64*NVRM: BAR%d region doesn't exist! **NVRM: BAR%d region doesn't exist! 
*NVRM: BAR%d region is_64bit: %d **NVRM: BAR%d region is_64bit: %d *call to kvgpumgrCreateRequestVgpu*gpu_instance_id*placement_id*call to kvgpumgrProcessVfInfo*vf_pci_info**vf_pci_info*call to kvgpumgrDeleteRequestVgpu*vgpuTypeIds*numSupportedVgpuTypes*vgpuTypes**vgpuTypes***vgpuTypes**vgpuTypeInfo*call to kvgpumgrGetAvailableInstances*NVRM: Failed to get available instances for vGPU ID: %d, status: 0x%x **NVRM: Failed to get available instances for vGPU ID: %d, status: 0x%x *call to os_snprintf*%d **%d *call to hypervisorGetHypervisorType_IMPL*call to osIsVgpuVfioPresent*pVgpuNsIntr*call to os_inject_vgx_msi*bIsHypervVgpuSupported*generated/g_device_nvoc.h**generated/g_device_nvoc.h*pNvjpgCapsParams*pBspCapParams*pMsencCapsParams*pHostCapsParamsV2*pGetLatencyBufferSizeParams*flushParams*pPrbEnc*client_managed_console**pOsInfo*bCleanupRmapi*call to os_offline_page_at_address*call to nv_get_egm_info*pNodeId*call to os_numa_remove_gpu_memory*arch/nvalloc/unix/src/os.c**arch/nvalloc/unix/src/os.c*call to os_numa_add_gpu_memory*pNumaNodeId*call to os_get_numa_node_memory_usage*free_memory_bytes*total_memory_bytes*call to nv_match_gpu_os_info*call to nv_is_gpu_accessible*call to os_enable_pci_req_atomics*call to nv_get_syncpoint_aperture*call to nv_set_tegra_brightness_level*call to nv_get_tegra_brightness_level*pPrivData*call to nv_get_num_dpaux_instances*pNumIntances*call to nv_destroy_nano_timer**pTimer*call to nv_start_nano_timer*call to nv_create_nano_timer*call to nv_imp_icc_set_bw*call to nv_imp_enable_disable_rfl*call to nv_imp_get_import_data*pTegraImpImportData*pRequestData**pRequestData*pResponseData**pResponseData*pRet*pApiRet*call to nv_tegra_dce_client_ipc_send_recv*call to nv_tegra_dce_unregister_ipc_client*call to nv_tegra_dce_register_ipc_client*call to nv_tegra_get_rm_interface_type*call to rmLocksAcquireAll*call to dceclientHandleAsyncRpcCallback*call to rmLocksReleaseAll*call to os_wake_up*pWq*call to os_wait_interruptible*call to 
os_wait_uninterruptible*call to os_free_wait_queue*call to os_alloc_wait_queue*ppWq**ppWq*call to os_get_current_process_flags*call to os_get_random_bytes*call to os_imex_channel_get*call to os_imex_channel_count*call to nv_acquire_fabric_mgmt_cap*pOsRmCaps*call to os_nv_cap_validate_and_dup_fd*duped_fd*call to _allocOsRmCaps*ppCaps**ppCaps***ppCaps*call to os_nv_cap_create_dir_entry*NVRM: Failed to create mig directory **NVRM: Failed to create mig directory *call to os_nv_cap_create_file_entry*NVRM: Failed to create mig config file **NVRM: Failed to create mig config file **monitor*NVRM: Failed to create mig monitor file **NVRM: Failed to create mig monitor file *fabric-imex-mgmt**fabric-imex-mgmt*NVRM: Failed to create imex file **NVRM: Failed to create imex file *call to osRmCapUnregister*ppOsRmCaps**ppOsRmCaps*call to os_nv_cap_close_fd*pPartitionOsRmCaps*ci%u**ci%u*NVRM: Failed to setup ci%u directory **NVRM: Failed to setup ci%u directory **access*NVRM: Failed to setup access file for ID:%u **NVRM: Failed to setup access file for ID:%u *ppExecPartitionOsRmCaps**ppExecPartitionOsRmCaps*pGpuOsRmCaps*gi%u**gi%u*NVRM: Failed to setup gi%u directory **NVRM: Failed to setup gi%u directory *ppPartitionOsRmCaps**ppPartitionOsRmCaps*call to nv_get_dev_minor*gpu%u**gpu%u*NVRM: Failed to setup gpu%u directory **NVRM: Failed to setup gpu%u directory *NVRM: Failed to setup mig directory **NVRM: Failed to setup mig directory **pOsRmCaps*call to os_read_file*call to os_write_file*call to os_open_temporary_file*ppFile**ppFile*call to os_count_tail_pages*call to os_get_page_refcount*call to os_put_page*call to os_get_page*call to os_alloc_pages_node*call to nv_get_device_memory_config*call to os_numa_memblock_size*call to os_delete_record_for_crashLog*call to os_add_record_for_crashLog*call to os_get_acpi_rsdp_from_uefi*pRsdpAddr*call to osUnmapKernelSpace*pBaseVAddr**pBaseVAddr*call to os_get_smbios_header*NVRM: %s: Failed query SMBIOS table with error: %x **NVRM: %s: Failed 
query SMBIOS table with error: %x *physSmbiosAddr != ~0ull**physSmbiosAddr != ~0ull*pMappedAddr**pMappedAddr***pMappedAddr*call to osGetSmbiosTableInfo*pNumSubTypes*_SM3_**_SM3_**pLength*pBaseAddr**pBaseAddr*_SM_**_SM_*_DMI_**_DMI_**pNumSubTypes*(pLinkConnection != NULL)**(pLinkConnection != NULL)*(maxLinks > 0)**(maxLinks > 0)*(pGpu != NULL)**(pGpu != NULL)*CPU_MODEL|CM_ATS_ADDRESS|C2C%u**path**CPU_MODEL|CM_ATS_ADDRESS|C2C%u*(ret > 0) && (ret < (sizeof(path) - 1))**(ret > 0) && (ret < (sizeof(path) - 1))*call to gpuSimEscapeRead*NVRM: %s: %s=0x%X **NVRM: %s: %s=0x%X *NVRM: %s: gpuSimEscapeRead for '%s' failed (%u) **NVRM: %s: gpuSimEscapeRead for '%s' failed (%u) *call to os_pci_remove_supported**pUidToken1**pUidToken2*pTokenUser1*(pTokenUser1 != NULL)**(pTokenUser1 != NULL)*pTokenUser2*(pTokenUser2 != NULL)**(pTokenUser2 != NULL)**pClientSecurityToken**pCurrentSecurityToken*pClientTokenUser*pCurrentTokenUser*NVRM: NVRM: %s: Current security token doesn't match the one in the client database. Current EUID: %d, PID: %d; Client DB EUID: %d, PID: %d **NVRM: NVRM: %s: Current security token doesn't match the one in the client database. 
Current EUID: %d, PID: %d; Client DB EUID: %d, PID: %d *call to os_get_euid*call to os_dump_stack*call to os_bug_check*call to memdescIsCarveoutMemory*call to memdescGetFlag*call to skipIovaMappingForTegra**iovaArray*pOsData***pPriv**pOsData*call to memdescGetContiguity*call to RmDeflateRmToOsPageArray*NVRM: %s: failed to unmap allocation (status = 0x%x) **NVRM: %s: failed to unmap allocation (status = 0x%x) *call to RmInflateOsToRmPageArray*pRootMemDesc**pRootMemDesc*call to memdescGetAddressSpace*call to gpumgrCheckIndirectPeer_IMPL*bIsIndirectPeerMapping*NVRM: %s passed memory descriptor in an unsupported address space (%s) *call to memdescGetApertureString**NVRM: %s passed memory descriptor in an unsupported address space (%s) *memdescGetContiguity(pIovaMapping->pPhysMemDesc, AT_CPU)**memdescGetContiguity(pIovaMapping->pPhysMemDesc, AT_CPU)*NVRM: %s: failed to map peer IO mem (status = 0x%x) **NVRM: %s: failed to map peer IO mem (status = 0x%x) **peer*bIsContig*call to memdescGetPhysAddr*bIsBar0*bIsFbOffset*osPageCount*!bIsBar0 && bIsFbOffset**!bIsBar0 && bIsFbOffset*call to memdescGetNvLinkGpa*NVRM: %s Failed to get SPA **NVRM: %s Failed to get SPA *call to nv_dma_map_alloc*call to osGetDmaDeviceForMemDesc*NVRM: %s: failed to map allocation (status = 0x%x) **NVRM: %s: failed to map allocation (status = 0x%x) *NVRM: %s: failed to map peer (base = 0x%llx, status = 0x%x) **NVRM: %s: failed to map peer (base = 0x%llx, status = 0x%x) *NVRM: cannot map a GPU's BAR to itself **NVRM: cannot map a GPU's BAR to itself *nv != NULL**nv != NULL*NVRM: %s: Skip memdescMapIommu mapping **NVRM: %s: Skip memdescMapIommu mapping **pKernelBif*call to gpuGetBusIntfType_DISPATCH*call to kbifGetPcieConfigAccessTestRegisters_DISPATCH*call to os_pci_read_dword*call to kbifVerifyPcieConfigAccessTestRegisters_DISPATCH*call to osCallACPI_DSM*call to i2cSwPortMapping*call to nv_i2c_bus_status*scl*sda*call to nv_i2c_transfer*nv_i2c_msgs*call to nv_i2c_unregister_clients*call to 
os_get_is_openrm*bOpenRm*call to os_get_version_info*osVersionInfo*call to os_tegra_igpu_perf_boost**pOsGpuInfo*os_alloc_mem((void**)&pOsVersionInfo, sizeof(os_version_info))**os_alloc_mem((void**)&pOsVersionInfo, sizeof(os_version_info))*pOsVersionInfo**pOsVersionInfo*call to prbEncAddUInt32*os_build_version_str*NV_UNKNOWN_BUILD_VERSION**NV_UNKNOWN_BUILD_VERSION**os_build_version_str*call to prbEncAddString*os_build_date_plus_str*NV_UNKNOWN_BUILD_DATE**NV_UNKNOWN_BUILD_DATE**os_build_date_plus_str*pInOut*call to nv_acpi_mux_method*bIsNotebook*bOsCCEnabled*bOsCCSevSnpEnabled*bOsCCSmeEnabled*bOsCCSnpVtomEnabled*bOsCCTdxEnabled*call to nv_acpi_rom_method*pOutSize*call to nv_acpi_ddc_method*call to nv_acpi_dod_method*call to checkDsmCall**pAcpiDsmGuid*acpiDsmInArgSize*call to rmcfg_IsTU10X*call to gpuIsACPIPatchRequiredForBug2473619_3dd2c9*call to nv_acpi_dsm_method**pInOut*call to cacheDsmSupportedFunction*NVRM: osCallACPI_DSM: Error during 0x%x DSM subfunction 0x%x! status=0x%x **NVRM: osCallACPI_DSM: Error during 0x%x DSM subfunction 0x%x! 
status=0x%x *priv == NULL**priv == NULL**metadata*__nvoc_pbase_RsResource*call to portSafeAddU32*hImportHandle*objectTypes**objectTypes*call to RmImportObject*objects**objects*NVRM: %s: Unable to import handle (%x, %x, %x) **NVRM: %s: Unable to import handle (%x, %x, %x) **pRmApi*call to _initializeExportObjectFd*bFdSetup*exportHandles*call to RmExportObject*exportHandles[i] != 0**exportHandles[i] != 0*pExportHandle**pExportHandle*call to RmFreeObjExportHandle**exportHandles*hExportHandle != 0**hExportHandle != 0*call to serverutilGetResourceRef*call to kmigmgrIsMIGGpuInstancingEnabled_IMPL*call to kmigmgrGetInstanceRefFromDevice_IMPL*os_alloc_mem((void **)&nvfp->handles, sizeof(nvfp->handles[0]) * maxObjects)**os_alloc_mem((void **)&nvfp->handles, sizeof(nvfp->handles[0]) * maxObjects)*call to memGetByHandle_IMPL*pAddressSpaceParams**pMemDesc*NVRM: %s: wrong address space %d **NVRM: %s: wrong address space %d *call to memdescGetCpuCacheAttrib*NVRM: %s: wrong caching type %d **NVRM: %s: wrong caching type %d *bInvalidateOnly*NVRM: %s: cacheOps not specified **NVRM: %s: cacheOps not specified *NVRM: %s: end address 0x%llx exceeded buffer length: 0x%llx **NVRM: %s: end address 0x%llx exceeded buffer length: 0x%llx *call to nv_dma_cache_invalidate*call to os_flush_user_cache*call to _osGetTegraPlatform*call to os_get_tegra_platform*call to gpuGetMode*(NV_GPU_MODE_GRAPHICS_MODE == gpuMode) || (NV_GPU_MODE_COMPUTE_MODE == gpuMode)**(NV_GPU_MODE_GRAPHICS_MODE == gpuMode) || (NV_GPU_MODE_COMPUTE_MODE == gpuMode)*call to hypervisorIsVgxHyper_IMPL*call to os_get_cpu_number*call to os_get_cpu_count*call to RmPackageRegistry*call to RmReadRegistryString*regParmStr*pBufferLength*call to RmWriteRegistryBinary*call to RmReadRegistryBinary*call to RmWriteRegistryDword*call to RmReadRegistryDword*call to vgpuDevReadReg032*thisAddress < pMapping->gpuNvLength**thisAddress < pMapping->gpuNvLength*Reg032**Reg032*Reg016**Reg016*Reg008**Reg008*call to vgpuDevWriteReg032*call to 
osErrorLogV*rmStatus == NV_OK**rmStatus == NV_OK*call to nv_flush_coherent_cpu_cache_range*call to os_flush_cpu_write_combine_buffer*call to os_flush_cpu_cache_all**hEvent*call to portSyncSpinlockAcquire*call to portSyncSpinlockRelease**pOSEvent*event->refcount > 0**event->refcount > 0*NVRM: %s() **NVRM: %s() *call to postEvent**pEventData**pNotifyEvent*Next*call to osEventNotificationWithInfo**eventID*call to osReferenceObjectCount*call to nv_post_event*call to osDereferenceObjectCount*call to osUnmapPciMemoryUser*call to portSafeAddU64*deviceMappings**deviceMappings*call to osMapPciMemoryUser*call to osUnmapPciMemoryKernelOld**virtualAddress*kern_mappings**kern_mappings*call to osMapPciMemoryKernelOld*tmppVirtualAddress**tmppVirtualAddress*call to osMapPciMemoryAreaUser*origStart*diffStart*tNvuap*call to nv_check_usermap_access_params*nv_check_usermap_access_params(pOsGpuInfo, &tNvuap)**nv_check_usermap_access_params(pOsGpuInfo, &tNvuap)*ppNvuap**ppNvuap***ppNvuap*ppNvuap != NULL**ppNvuap != NULL*os_alloc_mem((void**) ppNvuap, sizeof(nv_alloc_mapping_context_t))**os_alloc_mem((void**) ppNvuap, sizeof(nv_alloc_mapping_context_t))*os_alloc_mem((void**) &((*ppNvuap)->memArea.pRanges), totalRangeSize)**os_alloc_mem((void**) &((*ppNvuap)->memArea.pRanges), totalRangeSize)*call to tlsEntryRelease*call to vgpuUpdateGuestSysmemPfnBitMap*vgpuUpdateGuestSysmemPfnBitMap(pGpu, pMemDesc, NV_FALSE) == NV_OK**vgpuUpdateGuestSysmemPfnBitMap(pGpu, pMemDesc, NV_FALSE) == NV_OK*call to memdescSetAddress*call to memdescSetMemData*call to osGetPagesInfo*call to nv_alias_pages*call to memdescGetGuestId*force_dma32_alloc*NVRM: Forcing physically contiguous flags for ISO **NVRM: Forcing physically contiguous flags for ISO *call to memdescSetContiguity*NVRM: Forcing physically contiguous flags for NISO **NVRM: Forcing physically contiguous flags for NISO *call to memdescGetNumaNode*call to nv_alloc_pages*call to memdescSetAllocSizeFields*memdescSetAllocSizeFields(pMemDesc, rmPageCount * 
RM_PAGE_SIZE, RM_PAGE_SIZE)**memdescSetAllocSizeFields(pMemDesc, rmPageCount * RM_PAGE_SIZE, RM_PAGE_SIZE)*pMemData**pMemData*vgpuUpdateGuestSysmemPfnBitMap(pGpu, pMemDesc, NV_TRUE)**vgpuUpdateGuestSysmemPfnBitMap(pGpu, pMemDesc, NV_TRUE)*call to osGetPageSize*call to memdescGetAdjustedPageSize**rmPageCount <= pMemDesc->pageArraySize***rmPageCount <= pMemDesc->pageArraySize*call to nv_set_dma_address_size*call to nv_schedule_uvm_resume_p2p*call to nv_schedule_uvm_drain_p2p*call to nv_schedule_uvm_isr*pWi**pWi*pSystemFunction***pData*call to os_queue_work_item*bDontFreeParams*bLockSema*apiLock*bLockGpus*bLockGpuGroupDevice*bLockGpuGroupSubdevice*bFullGpuSanity*bDropOnUnloadQueueFlush*bRequiresGpu*pGpuFunction*call to os_schedule*call to os_get_max_user_va*call to os_is_init_ns*call to os_find_ns_pid*pOsPidInfo**pOsPidInfo*pNsPid*call to os_put_pid_info*pThreadId*call to os_get_current_thread*call to os_get_current_process_name*ProcName*call to os_check_access*call to os_is_administrator*call to os_io_read_byte*call to os_io_write_word*call to os_io_read_word*call to os_io_write_byte***pAllocPrivate*call to nv_free_kernel_mapping*call to nv_free_user_mapping*call to nv_unregister_phys_pages*call to setNumaPrivData*2*NVRM: pAllocPrivate is NULL! **NVRM: pAllocPrivate is NULL! *call to nv_alloc_kernel_mapping***pAddress**call to nv_alloc_kernel_mapping*NVRM: failed to create system memory kernel mapping! **NVRM: failed to create system memory kernel mapping! *call to nv_alloc_user_mapping*NVRM: failed to create system memory user mapping! **NVRM: failed to create system memory user mapping! 
**userAddress*call to nv_get_phys_pages*call to nv_get_num_phys_pages*addrArray**addrArray*numOsPages*pteArray**pteArray*call to nv_register_phys_pages*Size != 0**Size != 0*call to os_pci_write_dword*call to os_pci_write_word*call to os_pci_write_byte*call to os_pci_read_word*call to os_pci_read_byte*call to os_get_cpu_frequency*call to os_delay_us*call to osGetSystemTime*pSeconds*pMicroSeconds*call to os_get_monotonic_time_ns_hr*call to os_get_monotonic_tick_resolution_ns*call to os_get_monotonic_time_ns*call to os_is_isr*call to os_semaphore_may_sleep*call to portThreadGetCurrentThreadId*ppMethods**ppMethods*pNumMethods*call to devfreq_clk_to_domain*clkDomain*call to gpumgrWaitForBarFirewall*call to rm_notify_gpu_addition_removal_helper*arch/nvalloc/unix/src/osapi.c*NVRM: Fail to acquire rmApi lock. Skip notification.**arch/nvalloc/unix/src/osapi.c**NVRM: Fail to acquire rmApi lock. Skip notification.**pSys**pHypervisor*PDB_PROP_HYPERVISOR_DRIVERVM_ENABLED*call to rmDeviceGpuLocksAcquire*call to RmDmabufPutClientAndDevice*pGpuInstanceInfo**pGpuInstanceInfo*call to rmDeviceGpuLocksRelease**rmStatus*call to RmDmabufGetClientAndDevice*phClient*phSubdevice*ppGpuInstanceInfo**ppGpuInstanceInfo*call to kbusIsStaticBar1Enabled_IMPL**pMemInfo*call to kbusGetGpuFbPhysAddressForRdma_IMPL*kbusGetGpuFbPhysAddressForRdma(pGpu, pKernelBus, bForcePcie, &barOffset)**kbusGetGpuFbPhysAddressForRdma(pGpu, pKernelBus, bForcePcie, &barOffset)*call to rmapiLockIsOwner*rmapiLockIsOwner()**rmapiLockIsOwner()*rmDeviceGpuLockIsOwner(gpuGetInstance(pGpu))**rmDeviceGpuLockIsOwner(gpuGetInstance(pGpu))*call to kbusUnmapFbAperture_GM107*kbusUnmapFbAperture_HAL(pGpu, pKernelBus, pMemDesc, memArea, BUS_MAP_FB_FLAGS_MAP_UNICAST)**kbusUnmapFbAperture_HAL(pGpu, pKernelBus, pMemDesc, memArea, BUS_MAP_FB_FLAGS_MAP_UNICAST)*((pMemArea != NULL) && (pMemInfo != NULL)) && (memRange.size != 0)**((pMemArea != NULL) && (pMemInfo != NULL)) && (memRange.size != 0)*call to memdescCheckContiguity*NVRM: RDMA is 
not supported for localized memory over coherent mappings **NVRM: RDMA is not supported for localized memory over coherent mappings *os_alloc_mem((void **) &pMemArea->pRanges, sizeof(MemoryRange))**os_alloc_mem((void **) &pMemArea->pRanges, sizeof(MemoryRange))*call to memdescGetPageSize*os_alloc_mem((void **) &pMemArea->pRanges, pageCount * sizeof(MemoryRange))**os_alloc_mem((void **) &pMemArea->pRanges, pageCount * sizeof(MemoryRange))*call to serverGetClientUnderLock*serverGetClientUnderLock(&g_resServ, hClient, &pClient)**serverGetClientUnderLock(&g_resServ, hClient, &pClient)*call to deviceGetByGpu_IMPL*deviceGetByGpu(pClient, pGpu, NV_TRUE, &pDevice)**deviceGetByGpu(pClient, pGpu, NV_TRUE, &pDevice)*call to nvCheckFailedNoLog*memdescGetAddressSpace(pMemDesc) == ADDR_FBMEM**memdescGetAddressSpace(pMemDesc) == ADDR_FBMEM*call to kbusMapFbAperture_GM107*kbusMapFbAperture_HAL(pGpu, pKernelBus, pMemDesc, memRange, pMemArea, (BUS_MAP_FB_FLAGS_MAP_UNICAST | BUS_MAP_FB_FLAGS_ALLOW_DISCONTIG), pDevice)**kbusMapFbAperture_HAL(pGpu, pKernelBus, pMemDesc, memRange, pMemArea, (BUS_MAP_FB_FLAGS_MAP_UNICAST | BUS_MAP_FB_FLAGS_ALLOW_DISCONTIG), pDevice)*call to RmDmabufVerifyMemHandle*hMemoryDuped*call to nv_get_screen_info*pFbBaseAddress*bConsoleDevice*call to gpuSetExternalKernelClientCount_IMPL*call to osHandleGpuLost*call to gpumgrQueryGpuDrainState*call to RmUnixRmApiPrologue*bOnline*call to RmUnixRmApiEpilogue*numa_mem_addr*numa_mem_size*numaOfflineAddressesCount*numaOfflineAddresses**numaOfflineAddresses*osPageIdx*osPageOffset*PDB_PROP_SYS_NVIF_INIT_DONE*call to nv_acpi_methods_uninit*call to nv_acpi_methods_init*call to RmCheckNvpcfDsmScope*call to acpiDsmInit*call to Nv01FreeKernel*call to Nv01AllocMemoryKernel*call to Nv04AllocWithAccessKernel*call to Nv04VidHeapControlKernel*call to Nv04MapMemoryKernel*pNvuap**pNvuap*tlsEntryRelease(TLS_ENTRY_ID_MAPPING_CONTEXT) == 0**tlsEntryRelease(TLS_ENTRY_ID_MAPPING_CONTEXT) == 0*call to Nv04UnmapMemoryKernel*call to 
Nv04AllocContextDmaKernel*call to Nv04MapMemoryDmaKernel*call to Nv04UnmapMemoryDmaKernel*call to Nv04BindContextDmaKernel*call to Nv04ControlKernel*call to Nv04DupObjectKernel*call to Nv04ShareKernel*call to Nv04AddVblankCallbackKernel*PDB_PROP_GPU_PERSISTENT_SW_STATE*call to osModifyGpuSwStatePersistence*request_firmware*allow_fallback_to_monolithic_rm*call to rm_get_is_gsp_capable_vgpu*call to rm_set_firmware_logs*reg_mapping**reg_mapping*isVgpu*call to gpumgrGetRmFirmwareLogsEnabled*enable_firmware_logs*call to RmGetGpuUuidRaw**pGid**pGidString*call to RmGpuUuidRawToString**GPU-????????-????-????-????-????????????*pTmpString*call to RmP2PPutPages*pKey**pKey*call to RmP2PPutPagesPersistent*p2pObject**p2pObject*pMigInfo**pMigInfo*call to RmP2PRegisterCallback*pPlatformData**pPlatformData*call to RmP2PGetPagesWithoutCallbackRegistration*pPhysicalAddresses*pWreqMbH*pRreqMbH*pEntries*pMemCpuCacheable*RmP2PPutPages(p2pToken, vaSpaceToken, gpuVirtualAddress, pPlatformData)**RmP2PPutPages(p2pToken, vaSpaceToken, gpuVirtualAddress, pPlatformData)*ppGpuUuid**ppGpuUuid*call to RmP2PGetPagesPersistent**pGpuInfo*ppMigInfo**ppMigInfo*call to RmP2PGetGpuByAddress*pGpuUuid*call to RmP2PDmaMapPages*pDmaAddresses*NVRM: %s: Failed to handle Power Source change event, status=0x%x **NVRM: %s: Failed to handle Power Source change event, status=0x%x *call to RmPerformVersionCheck*unlock*i2c_adapters**i2c_adapters*pOsAdapter**pOsAdapter**displayId*numDispId*call to nv_i2c_del_adapter***pOsAdapter*i2cPortInfoParams*call to rm_i2c_add_adapter*systemGetSupportedParams*orInfoParams*i2cPortIdParams*NVRM: %s: adapter already exists (port=0x%x, displayId=0x%x) **NVRM: %s: adapter already exists (port=0x%x, displayId=0x%x) *NVRM: %s: no more free display Id entries in adapter **NVRM: %s: no more free display Id entries in adapter *NVRM: %s: no more free adapter entries exist **NVRM: %s: no more free adapter entries exist *call to nv_i2c_add_adapter**call to 
nv_i2c_add_adapter*unlockApi*unlockGpu*call to RmNonDPAuxI2CTransfer*call to RmDpAuxI2CTransfer*transData*i2cBlockData***pMessage*registerAddress*smbusBlockData*smbusQuickData*i2cBufferData*NVRM: %s: requested I2C transfer length %u is greater than maximum supported length %u **NVRM: %s: requested I2C transfer length %u is greater than maximum supported length %u *call to rm_is_legacy_device**pHalMgr*NVRM: failed to map registers! **NVRM: failed to map registers! *NVRM: The NVIDIA GPU %04x:%02x:%02x.%x NVRM: (PCI ID: %04x:%04x) installed in this system has NVRM: fallen off the bus and is not responding to commands. **NVRM: The NVIDIA GPU %04x:%02x:%02x.%x NVRM: (PCI ID: %04x:%04x) installed in this system has NVRM: fallen off the bus and is not responding to commands. *call to rm_is_legacy_arch*call to halmgrGetHalForGpu_IMPL*call to gpumgrIsDeviceRmFirmwareCapable*... != ...*bIsFirmwareCapable*NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this vGPU host system is not supported by NVRM: open nvidia.ko. NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the NVIDIA Virtual GPU (vGPU) NVRM: Software documentation, available at docs.nvidia.com. **NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this vGPU host system is not supported by NVRM: open nvidia.ko. NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the NVIDIA Virtual GPU (vGPU) NVRM: Software documentation, available at docs.nvidia.com. *NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by open NVRM: nvidia.ko because it does not include the required GPU NVRM: System Processor (GSP). NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the driver README, available on NVRM: the Linux graphics driver download page at NVRM: www.nvidia.com. 
**NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by open NVRM: nvidia.ko because it does not include the required GPU NVRM: System Processor (GSP). NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the driver README, available on NVRM: the Linux graphics driver download page at NVRM: www.nvidia.com. *call to rm_is_vgpu_supported_device*NVRM: The NVIDIA vGPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by open NVRM: nvidia.ko. NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the NVIDIA Virtual GPU (vGPU) NVRM: Software documentation, available at docs.nvidia.com. **NVRM: The NVIDIA vGPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by open NVRM: nvidia.ko. NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP NVRM: Firmware' sections in the NVIDIA Virtual GPU (vGPU) NVRM: Software documentation, available at docs.nvidia.com. *NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by the NVRM: NVIDIA %s driver release. NVRM: Please see 'Appendix A - Supported NVIDIA GPU Products' NVRM: in this release's README, available on the operating system NVRM: specific graphics driver download page at www.nvidia.com. **NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x) NVRM: installed in this system is not supported by the NVRM: NVIDIA %s driver release. NVRM: Please see 'Appendix A - Supported NVIDIA GPU Products' NVRM: in this release's README, available on the operating system NVRM: specific graphics driver download page at www.nvidia.com. 
*call to RmUpdateDeviceMappingInfo*RmStatus*call to RmAccessRegistry*clientDevNodeAddress**clientDevNodeAddress*clientParmStrAddress**clientParmStrAddress*clientBinaryDataAddress**clientBinaryDataAddress*Entry*tmpName**tmpName*call to RmExecuteWorkItem*pNvWorkItem**pNvWorkItem*call to gpuIsGpuFullPowerForPmResume_IMPL*call to RmRunNanoTimerCallback*call to threadStateInitISRAndDeferredIntHandler*NVRM: Queuing workitem for timer event failed with status :0x%x **NVRM: Queuing workitem for timer event failed with status :0x%x *call to threadStateFreeISRAndDeferredIntHandler*call to tmrEventServiceTimer_IMPL**pArgs*NVRM: Timer event failed from OS timer callback workitem with status :0x%x **NVRM: Timer event failed from OS timer callback workitem with status :0x%x *call to gpuIsGpuFullPower_IMPL*call to gpuCheckSysmemAccess_IMPL*call to osRun1HzCallbacksNow*nvRegistryDwords*strp**strp*in_ptr**in_ptr*out_ptr**out_ptr*call to rm_is_space*isApiLockTaken*call to RmWriteRegistryString*call to RmUnbindLock*call to threadStateSetTimeoutOverride*call to rmapiPrologue*call to RmFreeUnusedClients*call to serverFreeDisabledClients*call to _deferredClientListFreeCallback*call to rmapiEpilogue*call to osQueueSystemWorkItem*NVRM: Failed to schedule deferred free callback. Freeing immediately. **NVRM: Failed to schedule deferred free callback. Freeing immediately. 
*call to allocate_os_event*call to free_os_event*call to get_os_event_data*call to RmIoctl**pCl*NVRM: %s: no CL object found, setting io coherent by default **NVRM: %s: no CL object found, setting io coherent by default *bIoCoherent*call to portSyncSpinlockDestroy*call to portSyncSpinlockCreate***event_spinlock**call to portSyncSpinlockCreate*call to RmShutdownRm*call to RmInitRm*call to RmGetAdapterStatus*call to clientGetResourceRef_IMPL*__nvoc_pbase_RsClient**pRmResource*call to rmresGetMemoryMappingDescriptor_DISPATCH*bReadOnlyMem*bPeerIoMem*NVRM: Mmap is not allowed **NVRM: Mmap is not allowed ***pMemData*call to os_match_mmap_offset*pPageIndex*call to serverutilAcquireClient*call to osGetCurrentProcess*call to RmCreateMmapContextLocked*call to serverutilReleaseClient*call to CliSetGpuContext*call to subdeviceGetByHandle_IMPL**nvuap*os_alloc_mem((void**)&nvuap, sizeof(nv_usermap_access_params_t))**os_alloc_mem((void**)&nvuap, sizeof(nv_usermap_access_params_t))*os_alloc_mem((void**)&(nvuap->memArea.pRanges), sizeof(MemoryRange))**os_alloc_mem((void**)&(nvuap->memArea.pRanges), sizeof(MemoryRange))*call to nv_align_mmap_offset_length**pKernelMemorySystem*bCoherentAtsCpuOffset*bHostCoherentFbOffset*call to IS_IMEM_OFFSET*call to RmGetAllocPrivate*call to RmGetMmapPteArray*call to RmSetUserMapAccessRange*RmSetUserMapAccessRange(nvuap)**RmSetUserMapAccessRange(nvuap)*call to RmValidateMmapRequest*call to nv_add_mapping_context_to_file*pages != 0**pages != 0*call to RmGetArrayMinMax*addressStart*addressLength*call to RmHandleDisplayChange*call to RmHandleGPSStatusChange*call to RmHandleDNotifierEvent*NVRM: No support for 0x%x event **NVRM: No support for 0x%x event *gpsControl*NVRM: %s: Failed to handle ACPI GPS status change event, status=0x%x **NVRM: %s: Failed to handle ACPI GPS status change event, status=0x%x *call to RmExcludeAdapter*call to RmShutdownAdapter*os_flush_work_queue(pNv->queue, NV_TRUE)**os_flush_work_queue(pNv->queue, NV_TRUE)*call to 
RmPartiallyDisableAdapter*call to RmDisableAdapter*call to RmPartiallyInitAdapter*call to RmInitAdapter*call to RmFreePrivateState*call to RmInitPrivateState*(pageSize >= os_page_size)**(pageSize >= os_page_size)*osPagesPerP2PPage*pOsDmaAddresses*bDmaMapped**pOsDmaAddresses*nv_dma_unmap_alloc(peer, osPageCount, pOsDmaAddresses, ppPriv)**nv_dma_unmap_alloc(peer, osPageCount, pOsDmaAddresses, ppPriv)**pKernelMIGGpuInstance*call to kmigmgrDecRefCount_IMPL*kmigmgrDecRefCount(pKernelMIGGpuInstance->pShare)**kmigmgrDecRefCount(pKernelMIGGpuInstance->pShare)*call to kmigmgrIsMIGEnabled_IMPL*call to nvCheckOkFailedNoLog*memGetByHandle(pClient, hMemory, &pMemory)**memGetByHandle(pClient, hMemory, &pMemory)*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pMemory->pDevice, &ref)**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pMemory->pDevice, &ref)*call to kmigmgrIncRefCount_IMPL*kmigmgrIncRefCount(ref.pKernelMIGGpuInstance->pShare)**kmigmgrIncRefCount(ref.pKernelMIGGpuInstance->pShare)*instanceHandles*serverGetClientUnderLock(&g_resServ, hSrcClient, &pClient)**serverGetClientUnderLock(&g_resServ, hSrcClient, &pClient)*pSrcMemoryRef*pSrcMemory**pSrcMemory*call to memdescGetSize*NVRM: %s: Failed to handle ACPI D-Notifier event, status=0x%x **NVRM: %s: Failed to handle ACPI D-Notifier event, status=0x%x *NVRM: %s: Failed to request Dx event update, status 0x%x **NVRM: %s: Failed to request Dx event update, status 0x%x *powerStateInfo*nvpcf_dsm_in_gpu_scope*call to uncacheDsmFuncStatus*call to os_string_copy*rmStr*relaxed*clientCh*rmCh*procId*procName**procName*NVRM: API mismatch: the client '%s' (pid %u) NVRM: has the version %s, but this kernel module has NVRM: the version %s. Please make sure that this NVRM: kernel module and all NVIDIA driver components NVRM: have the same version. **NVRM: API mismatch: the client '%s' (pid %u) NVRM: has the version %s, but this kernel module has NVRM: the version %s. 
Please make sure that this NVRM: kernel module and all NVIDIA driver components NVRM: have the same version. *call to serverAcquireClient**pClient*call to deviceGetByHandle_IMPL*call to refFindCpuMappingWithFilter*call to serverReleaseClient*isDevice*BinaryDataLength*devNodeParamCopy*parmStrParamCopy*tmpParmStr*binaryDataParamCopy*call to osReadRegistryBinary*tmpBinaryData*copyOutBinaryDataLength*call to osWriteRegistryBinary*tmpDevNode**pWorkItem*call to workItemLocksAcquire*call to gpumgrSetCurrentGpuInstance*call to os_is_queue_flush_ongoing*NVRM: Invalid GPU instance for workitem **NVRM: Invalid GPU instance for workitem *call to workItemLocksRelease**event_list*call to free_os_event_under_lock*NVRM: freed OS event: **NVRM: freed OS event: *NVRM: hParent: 0x%x **NVRM: hParent: 0x%x *NVRM: fd: %d **NVRM: fd: %d *NVRM: failed to find OS event: **NVRM: failed to find OS event: *new_event*NVRM: allocated OS event: **NVRM: allocated OS event: *NVRM: failed to allocate OS event: 0x%08x **NVRM: failed to allocate OS event: 0x%08x **new_event*call to gpuGetUserClientCount_IMPL*call to rmapiGetClientHandlesFromOSInfo*NVRM: freeing abandoned client 0x%x **NVRM: freeing abandoned client 0x%x *pClientList**pClientList*call to free_os_events*call to nv_get_event*MoreEvents*nv_unix_event**nv_unix_event*nv_event*call to os_memcpy_to_user*bGpuIsLost*bGpuIsConnected*map_u*PDB_PROP_GPU_IS_CONNECTED*PDB_PROP_GPU_IS_LOST*call to rcdbAddRmGpuDump*NVRM: %s: failed to save GPU crash data **NVRM: %s: failed to save GPU crash data *NVRM: A GPU crash dump has been created. If possible, please run NVRM: nvidia-bug-report.sh as root to collect this data before NVRM: the NVIDIA kernel module is unloaded. **NVRM: A GPU crash dump has been created. If possible, please run NVRM: nvidia-bug-report.sh as root to collect this data before NVRM: the NVIDIA kernel module is unloaded. 
*NVRM: Dumping nvlogs buffers **NVRM: Dumping nvlogs buffers *call to nvlogDumpToKernelLog*call to transformGidToUserFriendlyString*call to pciPbiReadUuid*pci_uuid_read_attempted*pci_uuid_status*call to gpumgrSetUuid*NVRM: PBI is not supported for GPU %04x:%02x:%02x.%x **NVRM: PBI is not supported for GPU %04x:%02x:%02x.%x *call to gpuGetGidInfo_IMPL*has_io**RmOverrideSupportChipsetAspm*generated/g_kernel_head_nvoc.h**generated/g_kernel_head_nvoc.h*call to kheadResetPendingLastData_DISPATCH*deferredVblankHeadMask*pKernelDisplay != NULL*generated/g_kern_disp_nvoc.h**pKernelDisplay != NULL**generated/g_kern_disp_nvoc.h**pKernelHead***pKernelHead*pDpModesetData*pChildGpu*pMasterScanLockPin*pSlaveScanLockPin*pOrigLsrMinTime*pComputedLsrMinTime*pChannelNum*call to kgrctxCtrlHandle*generated/g_kernel_channel_nvoc.h**generated/g_kernel_channel_nvoc.h*classEngineID*rmEngineID*pRotateIvParams*pGetKmbParams*generated/g_kernel_rc_nvoc.h**generated/g_kernel_rc_nvoc.h*pKernelWatchdog*pKernelFlcn*pGenKernFlcn*call to kflcnSetRiscvMode*call to kflcnIsRiscvSelected_DISPATCH*riscvMode*generated/g_kernel_falcon_nvoc.h**generated/g_kernel_falcon_nvoc.h*pCore*pCode*pKerneFlcn*generated/g_kernel_gsp_nvoc.h**generated/g_kernel_gsp_nvoc.h**.fwsignature_*pPreserveLogBufferFull*pKernelGSp*pFlcnUcode*preparedCmd*pFwsecUcode*ppVbiosImg**ppVbiosImg*pPayLoad*pGspFw*ppBinStorageImage**ppBinStorageImage*ppBinStorageDesc**ppBinStorageDesc*call to REGISTER_TU10X_HALS*call to REGISTER_GA10X_HALS*call to REGISTER_AD10X_HALS*call to REGISTER_GH10X_HALS*call to REGISTER_GB10X_HALS*call to REGISTER_GB20X_HALS*call to REGISTER_T23XD_HALS*call to REGISTER_T26XD_HALS*call to registerHalModule_T264D*call to registerHalModule_T234D*call to registerHalModule_GB202*call to registerHalModule_GB203*call to registerHalModule_GB205*call to registerHalModule_GB206*call to registerHalModule_GB207*call to registerHalModule_GB20B*call to registerHalModule_GB20C*call to registerHalModule_GB100*call to 
registerHalModule_GB102*call to registerHalModule_GB10B*call to registerHalModule_GB110*call to registerHalModule_GB112*call to registerHalModule_GH100*call to registerHalModule_AD102*call to registerHalModule_AD103*call to registerHalModule_AD104*call to registerHalModule_AD106*call to registerHalModule_AD107*call to registerHalModule_GA100*call to registerHalModule_GA102*call to registerHalModule_GA103*call to registerHalModule_GA104*call to registerHalModule_GA106*call to registerHalModule_GA107*call to registerHalModule_TU102*call to registerHalModule_TU104*call to registerHalModule_TU106*call to registerHalModule_TU116*call to registerHalModule_TU117*call to RmIsExcludingAllowed*call to gpumgrExcludeGpuId*arch/nvalloc/unix/src/osinit.c*NVRM: failed to exclude GPU: 0x%x **arch/nvalloc/unix/src/osinit.c**NVRM: failed to exclude GPU: 0x%x *call to pciPbiGetFeature*brand*call to os_pci_remove*RMSecBusResetEnable**RMSecBusResetEnable*RMForcePcieConfigSave**RMForcePcieConfigSave*call to gpuReadBusConfigReg_DISPATCH*NVRM: %s: Cannot read NV_CONFIG_PCI_NV_11 **NVRM: %s: Cannot read NV_CONFIG_PCI_NV_11 *subsystem_vendor_id*subsystem_device_id**nvp*PDB_PROP_GPU_IN_TIMEOUT_RECOVERY*call to serverLockAllClients*call to gpumgrGetGpuMask*call to rmapiSetDelPendingClientResourcesFromGpuMask*call to rmapiDelPendingDevices*call to nv_stop_rc_timer*call to teardownCoreLogic*call to krcWatchdogShutdown_IMPL*call to gpuStateUnload_IMPL*call to serverUnlockAllClients*NVRM: %s: RM is in SW Persistence mode **NVRM: %s: RM is in SW Persistence mode *call to gpuGetDeviceInstance*call to RmUnixFreeRmApi*call to gpumgrThreadEnableExpandedGpuVisibility*gpumgrThreadEnableExpandedGpuVisibility()**gpumgrThreadEnableExpandedGpuVisibility()*call to RmDestroyPowerManagement*call to freeNbsiTable*call to osTeardownScalability*call to gpuStateDestroy_IMPL*call to dceclientDceRmInit_IMPL*NVRM: DCE firmware RM Shutdown failure **NVRM: DCE firmware RM Shutdown failure *NVRM: Disable Clocks **NVRM: 
Disable Clocks *call to RmDisableDeviceClks*call to RmFreeX86EmuState*call to gpumgrDetachGpu*call to gpumgrDestroyDevice*call to gpumgrThreadDisableExpandedGpuVisibility*call to RmTeardownDeviceDma*call to RmClearPrivateState*call to RmUnInitAcpiMethods*call to rmGpuLockFree*call to RmTeardownRegisters*NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter **NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter *call to RmSetupRegisters*call to RmInitDeviceDma*NVRM: Cannot configure the device for DMA **NVRM: Cannot configure the device for DMA *call to RmFetchGspRmImages*request_fw_client_rm*call to osInitNvMapping*NVRM: osInitNvMapping failed, bailing out of RmInitAdapter **NVRM: osInitNvMapping failed, bailing out of RmInitAdapter **pOS*call to osInitScalability*call to gpuBootGspRmProxy_IMPL*NVRM: GSP-RM proxy boot command failed. **NVRM: GSP-RM proxy boot command failed. *call to RmDeterminePrimaryDevice*call to RmSetConsolePreservationParams*call to RmInitAcpiMethods*consoleDisabled*NVRM: Enable Clocks to Max **NVRM: Enable Clocks to Max *call to RmEnableDeviceClks*call to kgspInitRm_IMPL*NVRM: Cannot initialize GSP firmware RM **NVRM: Cannot initialize GSP firmware RM *NVRM: Cannot initialize DCE firmware RM **NVRM: Cannot initialize DCE firmware RM *NVRM: Falling back to monolithic RM **NVRM: Falling back to monolithic RM **pKernelDisplay*call to kdispSetWarPurgeSatellitesOnCoreFree_IMPL*call to RmInitNvHal*NVRM: RmInitNvHal() failed, bailing out of RmInitAdapter! **NVRM: RmInitNvHal() failed, bailing out of RmInitAdapter! *call to RmInitX86Emu*NVRM: RmInitX86Emu failed, bailing out of RmInitAdapter **NVRM: RmInitX86Emu failed, bailing out of RmInitAdapter *call to initVendorSpecificRegistry*call to initNbsiTable*call to RmInitNvDevice*NVRM: RmInitNvDevice failed, bailing out of RmInitAdapter **NVRM: RmInitNvDevice failed, bailing out of RmInitAdapter *NVRM: GPU %04x:%02x:%02x.%x: GPU does not have the necessary power cables connected. 
**NVRM: GPU %04x:%02x:%02x.%x: GPU does not have the necessary power cables connected. *call to osVerifySystemEnvironment*NVRM: osVerifySystemEnvironment failed, bailing! **NVRM: osVerifySystemEnvironment failed, bailing! *call to krcWatchdogInit_DISPATCH*call to krcWatchdogDisable_IMPL*NVRM: krcWatchdogInit returned _NOT_SUPPORTED. For Kepler GPUs in PGOB mode, this is normal **NVRM: krcWatchdogInit returned _NOT_SUPPORTED. For Kepler GPUs in PGOB mode, this is normal *NVRM: krcWatchdogInit failed, bailing out of RmInitAdapter **NVRM: krcWatchdogInit failed, bailing out of RmInitAdapter *call to nv_start_rc_timer*call to RmUnixAllocRmApi*call to RmInitGpuInfoWithRmApi*call to RmI2cAddGpuPorts*call to kfifoGetUserdBar1MapInfo_DISPATCH*NVRM: kfifoGetUserdBar1MapInfo failed, bailing out of RmInitAdapter **NVRM: kfifoGetUserdBar1MapInfo failed, bailing out of RmInitAdapter *PDB_PROP_OS_SYSTEM_EVENTS_SUPPORTED*call to RmInitPowerManagement*call to RmRegisterGpudb*call to _checkP2pChipsetSupport*NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter succeeded! **NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter succeeded! *NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter failed! (0x%x:0x%x:%d) **NVRM: GPU %04x:%02x:%02x.%x: RmInitAdapter failed! 
(0x%x:0x%x:%d) *call to nv_put_firmware*gspFwHandle**gspFwHandle*gspFwLogHandle**gspFwLogHandle*call to decodePmcBoot42Architecture*call to nv_firmware_get_chip_family*chipFamily**pGspFw*call to nv_get_firmware**call to nv_get_firmware*NVRM: No firmware image found **NVRM: No firmware image found *NVRM: Failed to load gsp_log_*.bin, no GSP-RM logs will be printed (non-fatal) **NVRM: Failed to load gsp_log_*.bin, no GSP-RM logs will be printed (non-fatal) *deviceParams*subDeviceParams*hI2C*hDisp*NVRM: Failed to get UUID **NVRM: Failed to get UUID *call to gpudbRegisterGpu*gpuClData*upstreamPort*NVRM: Failed to register GPU with GPU data base **NVRM: Failed to register GPU with GPU data base *call to RmInitX86EmuState*NVRM: %s: %04x:%02x:%02x.0 **NVRM: %s: %04x:%02x:%02x.0 *call to gpumgrUnregisterGpuId*call to kvgpumgrDetachGpu*call to RmDestroyRegistry*pVbiosCopy**pVbiosCopy***pVbiosCopy*vbiosSize*pRegistry*pRegistryCopy**pRegistryCopy***pRegistryCopy**pRegistry*dynamicPowerCopy**map_u*NVRM: failed to map GPU registers (DISABLE_INTERRUPTS). **NVRM: failed to map GPU registers (DISABLE_INTERRUPTS). *NVRM: failed to allocate private device state. **NVRM: failed to allocate private device state. *call to nv_generate_id_from_pci_info*call to gpumgrRegisterGpuId*call to nv_encode_pci_info*NVRM: failed to register GPU with GPU manager. **NVRM: failed to register GPU with GPU manager. *call to nv_set_probed_gpu_flags*iovaspace_id*NVRM: failed to get GpuArch for 0x%x/0x%x. **NVRM: failed to get GpuArch for 0x%x/0x%x. 
*call to gpuarchGetDmaAddrWidth_DISPATCH*dmaAddrWidth*call to gpuarchGetSystemPhysAddrWidth_DISPATCH*is_tegra_pci_igpu*supports_tegra_igpu_rg*call to kvgpumgrAttachGpu*NVRM: GPU %04x:%02x:%02x.%x: RmSetupRegisters for 0x%x:0x%x **NVRM: GPU %04x:%02x:%02x.%x: RmSetupRegisters for 0x%x:0x%x *NVRM: GPU %04x:%02x:%02x.%x: pci config info: **NVRM: GPU %04x:%02x:%02x.%x: pci config info: *NVRM: GPU %04x:%02x:%02x.%x: registers look like: 0x%llx 0x%llx**NVRM: GPU %04x:%02x:%02x.%x: registers look like: 0x%llx 0x%llx*NVRM: GPU %04x:%02x:%02x.%x: fb looks like: 0x%llx 0x%llx **NVRM: GPU %04x:%02x:%02x.%x: fb looks like: 0x%llx 0x%llx *call to nv_os_map_kernel_space*NVRM: GPU %04x:%02x:%02x.%x: Failed to map regs registers!! **NVRM: GPU %04x:%02x:%02x.%x: Failed to map regs registers!! *NVRM: GPU %04x:%02x:%02x.%x: Successfully mapped framebuffer and registers **NVRM: GPU %04x:%02x:%02x.%x: Successfully mapped framebuffer and registers *NVRM: GPU %04x:%02x:%02x.%x: final mappings: **NVRM: GPU %04x:%02x:%02x.%x: final mappings: *NVRM: GPU %04x:%02x:%02x.%x: regs: 0x%llx 0x%llx 0x%p **NVRM: GPU %04x:%02x:%02x.%x: regs: 0x%llx 0x%llx 0x%p *call to RmSetupDpauxRegisters*call to RmSetupHdacodecRegisters*call to RmTeardownDpauxRegisters*call to RmSetupMipiCalRegisters*call to RmTeardownHdacodecRegisters*NVRM: GPU %04x:%02x:%02x.%x: Tearing down registers **NVRM: GPU %04x:%02x:%02x.%x: Tearing down registers *call to RmTeardownMipiCalRegisters*call to clTeardown_IMPL*call to os_pat_supported*PDB_PROP_OS_PAT_UNSUPPORTED*call to clInit_IMPL*NVRM: osInitMapping: **NVRM: osInitMapping: *call to gpuWriteBusConfigReg_DISPATCH*call to initCoreLogic*NVRM: RmInitNvDevice: **NVRM: RmInitNvDevice: *NVRM: device instance : 0x%08x **NVRM: device instance : 0x%08x *call to gpumgrStatePreInitGpu*NVRM: *** Cannot pre-initialize the device **NVRM: *** Cannot pre-initialize the device *call to RmCheckForExternalGpu*PDB_PROP_GPU_IS_EXTERNAL_GPU*is_external_gpu*call to gpumgrStateInitGpu*NVRM: *** 
RM initialization, GPU attach, VGA arbitration, and PCIe (unix RM layer):
  Messages:
    NVRM: *** Cannot initialize the device
    NVRM: *** Cannot load state into the device
    NVRM: NVRM: Max Freq fetch failed for Clk:%d
    NVRM: NVRM: Clk prepare enable failed for Clk:%d
    NVRM: NVRM: Set Freq failed for Clk:%d
    NVRM: NVRM: Set Freq:%d for Clk:%d
    NVRM: osInitNvMapping:
    NVRM: *** Cannot get valid gpu instance
    NVRM: *** cannot allocate GPU lock
    NVRM: *** Cannot attach bc gpu
    NVRM: *** Cannot allocate gpuAttachArg
    NVRM: *** Cannot attach gpu
    NVRM: device instance : %d
    NVRM: NV regs using linear address : 0x%p
    NVRM: NV fb using linear address : 0x%p
    NVRM: GPU %04x:%02x:%02x.%x: is %s VGA  ("primary" / "not primary")
    NVRM: GPU %04x:%02x:%02x.%x: is %s UEFI console device
    NVRM: GPU %04x:%02x:%02x.%x: %s reports GPU is %s VGA  ("PCI config space" / "OS")
    NVRM: %s: Failed to query vbios version, status=0x%x  (version format "%02x.%02x.%02x.%02x.%02x", fallback "N/A")
    NVRM: %s: Failed to query gpu firmware version, status=0x%x
    NVRM: Error 0x%08x on eGPU Approval for Bridge ID: 0x%08x
    NVRM: GPU %04x:%02x:%02x.%x: GPU has fallen off the bus.  (error string "GPU has fallen off the bus.")
    NVRM: GPU %04x:%02x:%02x.%x: GPU serial number is %s.
    NVRM: shutdown rm
  Call sites: kbifCheckAndRearmMSI_IMPL, gpumgrStateLoadGpu, RmInitScalability, nv_disable_clk, nv_get_max_freq, nv_enable_clk, nv_set_freq, vmmGetVaspaceFromId_IMPL, vmmDestroyVaspace_IMPL, vmmCreateVaspace_IMPL, gpuFuseSupportsDisplay_DISPATCH, rm_get_uefi_console_size, RmAssignPrimaryVga, kbifIsPciIoAccessEnabled_DISPATCH, kbifIs3dController_DISPATCH, clUpstreamVgaDecodeEnabled_IMPL, clTeardownPcie_IMPL, osInitScalabilityOptions, clInitPcie_IMPL, gpumgrAllocGpuInstance, rmGpuLockAlloc, gpumgrCreateDevice, RmSetSocDispDeviceMappings, RmSetSocDpauxDeviceMappings, RmSetSocHdacodecDeviceMappings, RmSetSocMipiCalDeviceMappings, gpumgrAttachGpu, sysInitRegistryOverrides_IMPL, sysApplyLockingPolicy_IMPL, gpumgrSetParentGPU, memmgrSetPmaForcePersistence, nv_get_disp_smmu_stream_ids, RmGetFirmwareVersion, RmGetVbiosVersion, gpuDecodeDomain, gpuDecodeBus, clFindP2PBrdg_IMPL, clSetPortPcieCapOffset_IMPL, osPciReadDword, nvErrorLog_va, gpuSetDisconnectedProperties_IMPL, krcRcAndNotifyAllChannels_IMPL, RmLogGpuCrash, initUnixSpecificRegistry, initVGXSpecificRegistry, RmDestroyRm, os_is_efi_enabled, RmInitRegistry, os_dbg_init, nvDbgInitRmMsg, nvlogUpdate, REGISTER_ALL_HALS, rm_check_s0ix_regkey_and_platform_support, threadStateInitSetupFlags, gpumgrSetProbedFlags, gpuGenerate32BitId
  Registry keys: RMPcieLinkSpeed, RmStreamMemOps, RMForceBarPath, RMNvLinkControl, RmSetPCIERelaxedOrdering, preserve_vidmem_allocations
  Properties: PDB_PROP_GPU_PRIMARY_DEVICE, PDB_PROP_GPU_ALTERNATE_TREE_ENABLED, PDB_PROP_GPU_ALTERNATE_TREE_HANDLE_LOCKLESS, PDB_PROP_GPU_ALLOW_PAGE_RETIREMENT, PDB_PROP_GPU_DISP_PB_REQUIRES_SMMU_BYPASS, PDB_PROP_SYS_IS_UEFI, PDB_PROP_SYS_INITIALIZE_SYSTEM_MEMORY_ALLOCATIONS
  Identifiers: fbConsoleSize, bPreserveBar1ConsoleEnabled, ReservedConsoleDispMemSize, gpuAttachArg, socDeviceArgs, fbPhysAddr, fbBaseAddr, devPhysAddr, regBaseAddr, intLine, instPhysAddr, instBaseAddr, regLength, fbLength, instLength, cpuNumaNodeId, pOsAttachArg, registerAccess, gpuFbAddr, gpuPhysFbAddr, deviceMapping, gpuNvPAddr, gpuNvLength, pGpuInfoParams, b_4k_page_isolation_required, b_mobile_config_enabled, dma_buf_supported, mem_has_struct_page, vbios_params, biosInfoList, biosInfoListSize, firmwareVersion, gsp_params, handleUp, bTb3Bridge, pciCaps, slotCaps, bSlotHotPlugSupport, iseGPUBridge, portCaps

Memory descriptor import (arch/nvalloc/unix/src/osmemdesc.c):
  Messages:
    NVRM: %s(): RO DMA Mapping - flags [%x]!
    NVRM: Error (%d) while trying to import dma_buf!
    NVRM: %s(): Error (%d) while trying to import sgt!
  Asserts: aperture->map == NULL; pPrivate != NULL
  Call sites: memdescUnmapIommu, nv_unregister_sgt, nv_dma_release_sgt, nv_dma_release_dma_buf, nv_unregister_user_pages, nv_unregister_peer_io_mem, GetDmaDeviceForImport, _createMemdescFromDmaBuf, nv_dma_import_sgt
  Identifiers: pImportSgt, pImportPrivGem, pImportPriv, pPrivate, bRoDeviceMap, dmaBuf, ppPrivate
Memory descriptor import (arch/nvalloc/unix/src/osmemdesc.c), continued:
  Messages:
    NVRM: %s(): fd must fit within a signed 32-bit integer!
    NVRM: %s(): Error (%d) while trying to import fd!
    NVRM: %s(): Error: Syncpoint memory region should be uncached!!!
    NVRM: %s(): Syncpoint type sgt!
    NVRM: %s(): error %d while creating memdesc for kernel memory
    NVRM: %s(): permission denied, allowPeermapping=%d
    NVRM: %s(): phys range 0x%016llx-0x%016llx overlaps with GPU BARs
    NVRM: %s(): error %d while attempting to create the MMIO mapping
  Asserts: (pMemDataReleaseCallback == osDestroyOsDescriptorFromDmaBuf) || (pMemDataReleaseCallback == osDestroyOsDescriptorFromSgt); memdescSetAllocSizeFields(pMemDesc, size, NV_RM_PAGE_SIZE); NV_IS_ALIGNED64(base, os_page_size)
  Call sites: _createMemdescFromSgt, nv_dma_import_from_fd, _createMemdescFromDmaBufSgtHelper, memdescCreate, memdescSetGpuCacheAttrib, memdescSetFlag, nv_register_sgt, memdescDestroy, memdescMapIommu, osCheckGpuBarsOverlapAddrRange, _doWarBug4040336, nv_register_peer_io_mem, gpuIsWarBug4040336Enabled, gpumgrGetGpuAttachInfo, gpumgrGetGpuPhysFbAddr, osCreateMemdescFromPages, nv_register_user_pages, rmclientGetCachedPrivilege_DISPATCH, osCreateOsDescriptorFromPhysAddr, osCreateOsDescriptorFromIoMemory, osCreateOsDescriptorFromPageArray, osCreateOsDescriptorFromFileHandle, osCreateOsDescriptorFromDmaBufPtr, osCreateOsDescriptorFromSgtPtr
  Identifiers: gpuCachedFlags, isPeerMmio, num_os_pages, pPhys_addrs, bAllowMmap, physAddrRange, gpuPhysFbAddrRange, gpuPhysAddrRange, gpuPhysInstAddrRange

NVLink core callbacks (arch/nvalloc/unix/src/osnvlink.c):
  Assert: bitVectorGetSlice(pSrc, rangeMake(0, 63), &localMask) == NV_OK  (generated/g_kernel_nvlink_nvoc.h)
  Format string: CPU_MODEL|CM_ATS_ADDRESS|NVLink%u
  Call sites: bitVectorGetSlice_IMPL, rangeMake, osNvlinkAllocAltStack/osNvlinkGetAltStack/osNvlinkPutAltStack/osNvlinkFreeAltStack, knvlinkCoreAliTrainingCallback, knvlinkCoreTrainingCompleteCallback, knvlinkCoreWrite/ReadDiscoveryTokenCallback, knvlinkCoreGetUphyLoadCallback, knvlinkCoreGet/SetRxSublinkDetectCallback, knvlinkCoreGet/SetRxSublinkModeCallback, knvlinkCoreGet/SetTxSublinkModeCallback, knvlinkCoreGet/SetTlLinkModeCallback, knvlinkCoreGet/SetDlLinkModeCallback, knvlinkCoreQueueLinkChangeCallback, knvlinkCoreLock/UnlockLinkCallback, knvlinkCoreAdd/RemoveLinkCallback
  Identifiers: pKernelNvlink_PRIVATE, nvlinkBwMode, nvlinkLinks, peerLinkMasks, pKernelNvlink0, link_change

Power management and state load:
  Properties: PDB_PROP_OS_ONDEMAND_VBLANK_CONTROL_ENABLE_DEFAULT, PDB_PROP_OS_CACHED_MEMORY_MAPPINGS_FOR_ACPI_TABLE, PDB_PROP_OS_LIMIT_GPU_RESET, PDB_PROP_GPU_IN_PM_CODEPATH, PDB_PROP_GPU_IN_HIBERNATE, PDB_PROP_GPU_IN_STANDBY, PDB_PROP_GPU_IN_PM_RESUME_CODEPATH; bInD3Cold
  Call site: gpuStateLoad_IMPL

Registry packing for GSP-RM (arch/nvalloc/unix/src/registry.c):
  Messages:
    NVRM: Registry entries overflow RPC record
    NVRM: First/second pass mismatch
    NVRM: Registry entry record is full
    NVRM: Registry Key not sent to GSP-RM because it has 0 DataLength
    NVRM: failed to initialize the OS registry!
    NVRM: buffer (length: %u) is too small (data length: %u)
    NVRM: failed to allocate a string registry entry!
    NVRM: failed to write a string registry entry!
  Call sites: regCountEntriesAndSize, regCopyEntriesToPackedBuffer, regFreeEntry, os_registry_init, regFindRegistryEntry, regCreateNewRegistryKey
  Identifiers: pLocalRegistry, nvStatus, pRegEntry, nameOffset, regParmStr, pByte
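The registry.c strings above (regCountEntriesAndSize, regCopyEntriesToPackedBuffer, "First/second pass mismatch", "Registry entries overflow RPC record") suggest a two-pass scheme: pass one sizes the packed RPC record, pass two copies the entries, and the caller checks that the two totals agree. A minimal sketch of that pattern follows; the entry layout (4-byte length prefixes, a leading count) is invented for illustration and is not the driver's actual record format.

```python
import struct

def count_entries_and_size(entries):
    """Pass 1: compute the packed size without writing anything."""
    total = 4  # leading entry-count word
    for name, data in entries:
        if len(data) == 0:
            continue  # mirrors "Registry Key not sent ... 0 DataLength"
        total += 4 + len(name.encode()) + 4 + len(data)
    return total

def copy_entries_to_packed_buffer(entries, limit):
    """Pass 2: actually pack; callers compare len(result) to pass 1."""
    out = bytearray(struct.pack("<I", len(entries)))
    for name, data in entries:
        if len(data) == 0:
            continue
        nb = name.encode()
        out += struct.pack("<I", len(nb)) + nb
        out += struct.pack("<I", len(data)) + data
    if len(out) > limit:
        raise OverflowError("Registry entries overflow RPC record")
    return bytes(out)

entries = [("RMPcieLinkSpeed", b"\x02"), ("RmStreamMemOps", b"\x01")]
size = count_entries_and_size(entries)
packed = copy_entries_to_packed_buffer(entries, limit=size)
assert len(packed) == size  # a mismatch here is the "First/second pass" error
```

The point of the consistency check is that both passes must apply the same skip rules (e.g. zero-length data); any drift between them shows up as the mismatch error rather than a silent buffer overrun.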
Registry entries, continued:
  Messages:
    NVRM: failed to create binary registry entry
    NVRM: failed to write binary registry entry
    NVRM: failed to grow registry
    NVRM: failed to allocate registry param string
    NVRM: failed to copy registry param string
  Assert: parm_size <= NVOS38_MAX_REGISTRY_STRING_LENGTH
  Call sites: stringCaseCompare, os_dbg_set_level
  Identifiers: new_reg, parm_size, new_ParmStr, c1, string1

nvGpuOps entry points (UVM interface):
  Call sites: nvGpuOpsLogEncryption, nvGpuOpsIncrementIv, nvGpuOpsQueryMessagePool, nvGpuOpsCcslSign, nvGpuOpsCcslDecrypt, nvGpuOpsCcslEncrypt, nvGpuOpsCcslEncryptWithIv, nvGpuOpsCcslRotateIv, nvGpuOpsCcslRotateKey, nvGpuOpsCcslContextClear, nvGpuOpsCcslContextInit, nvGpuOpsReportFatalError, nvGpuOpsPagingChannelPushStream, nvGpuOpsPagingChannelsUnmap/Map, nvGpuOpsPagingChannelDestroy/Allocate, nvGpuOpsReportNonReplayableFault, nvGpuOpsGetChannelResourcePtes, nvGpuOpsStopChannel, nvGpuOpsReleaseChannel, nvGpuOpsBindChannelResources, nvGpuOpsRetainChannel, nvGpuOpsGetExternalAllocPhysAddrs, nvGpuOpsGetExternalAllocPtes, nvGpuOpsP2pObjectDestroy/Create, nvGpuOpsGetNvlinkInfo, nvGpuOpsDisable/EnableAccessCntr, nvGpuOpsDestroy/InitAccessCntrInfo, nvGpuOpsAccessBitsDump, nvGpuOpsAccessBitsBufFree/Alloc, nvGpuOpsTogglePrefetchFaults, nvGpuOpsFlushReplayableFaultBuffer, nvGpuOpsGetNonReplayableFaults, nvGpuOpsHasPendingNonReplayableFaults, nvGpuOpsDestroy/InitFaultInfo, nvGpuOpsOwnPageFaultIntr, nvGpuOpsGetEccInfo, nvGpuOpsGetFbInfo, nvGpuOpsFreeDupedHandle, nvGpuOpsDupMemory, nvGpuOpsDupAllocation, nvGpuOpsUnset/SetPageDirectory, nvGpuOpsServiceDeviceInterruptsRM, nvGpuOpsGetGpuInfo, nvGpuOpsQueryCesCaps, nvGpuOpsQueryCaps, nvGpuOpsMemoryFree, nvGpuOpsPmaFreePages, nvGpuOpsChannelDestroy/Allocate, nvGpuOpsTsgDestroy/Allocate, nvGpuOpsMemoryCpuUnMap/Map, nvGpuOpsPmaPinPages, nvGpuOpsPmaAllocPages, nvGpuOpsGetPmaObject, pmaUnregister/RegisterEvictionCb, nvGpuOpsMemoryAllocSys, nvGpuOpsGetP2PCaps, nvGpuOpsMemoryAllocFb, nvGpuOpsAddressSpaceDestroy, nvGpuOpsDupAddressSpace, nvGpuOpsAddressSpaceCreate, nvGpuOpsDeviceDestroy/Create, nvGpuOpsDestroy/CreateSession
  Identifiers: authTagData, gpuExternalPhysAddrsInfo, accessCntrInfo, accessCntrConfig, accessBitsInfo, gpuMemoryInfo, vaspace, pP2pCapsParams, dupedVaspace, pGPUInstanceSubscription_PRIVATE

Object export/import (arch/nvalloc/unix/src/rmobjexportimport.c):
  Messages:
    NVRM: GET_ADDR_SPACE_TYPE failed with error code 0x%x in %s
    NVRM: pRmApi->DupObject(pRmApi, failed with error code 0x%x in %s
    NVRM: Invalid handle to exported object in %s
    NVRM: Failed to allocate object handles in %s
    NVRM: Unable to alloc device in %s
    NVRM: Unable to alloc subdevice in %s
    NVRM: Unable to alloc gpu instance subscription in %s
    NVRM: Failed to allocate object handle in %s
    NVRM: pRmApi->DupObject(Dev, failed due to invalid parent in %s. Now attempting DupObject with Subdev handle.
  Asserts: serverutilGetResourceRef(hSrcClient, hSrcObject, &pSrcResourceRef)
  Call sites: RmValidateHandleAgainstInternalHandles, RmUnrefObjExportHandle, RmUnrefObjExportImport, kmigmgrMakeNoMIGReference_IMPL, refFindAncestorOfType, RmRefObjExportImport, serverutilValidateNewResourceHandle, RmGenerateObjExportHandle, mapRemove_IMPL
  Identifiers: pObjectType, addrSpaceType, NV_ERR_INVALID_ARGUMENT, phDstObject, pDstObject, pSrcResourceRef, pDeviceRef, deviceMapIdx, pObjExportDevice, bClientAsDstParent, hRmDevice, hRmSubDevice, hGpuInstSub, subdevParams, giSubAllocParams, hDstObject
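The rmobjexportimport.c strings above (RmGenerateObjExportHandle, RmRefObjExportImport/RmUnrefObjExportHandle, "Invalid handle to exported object", "Exported object trying to free was zombie") describe a reference-counted table mapping export handles to objects. A toy sketch of that pattern; the class and method names here are invented for illustration, not the driver's API.

```python
class ExportTable:
    """Toy reference-counted export-handle table."""
    def __init__(self):
        self._next = 1
        self._map = {}  # handle -> [object, refcount]

    def export(self, obj):
        """Generate a fresh handle for obj with an initial reference."""
        h = self._next
        self._next += 1
        self._map[h] = [obj, 1]
        return h

    def ref(self, h):
        """Take a reference; unknown handles are rejected."""
        if h not in self._map:
            raise KeyError("Invalid handle to exported object")
        self._map[h][1] += 1
        return self._map[h][0]

    def unref(self, h):
        """Drop a reference; the last unref removes the export."""
        entry = self._map.get(h)
        if entry is None:
            raise KeyError("Invalid handle to exported object")
        entry[1] -= 1
        if entry[1] == 0:
            del self._map[h]

t = ExportTable()
h = t.export("memory-object")
assert t.ref(h) == "memory-object"
t.unref(h)
t.unref(h)          # last reference: export disappears
assert h not in t._map
```

Handing out opaque handles rather than pointers lets the importer validate every lookup, which is exactly what the "Invalid handle" diagnostic guards.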
Object export/import, continued:
  Messages:
    NVRM: pRmApi->DupObject(Subdev, failed with error code 0x%x in %s
    NVRM: pRmApi->DupObject(Dev, failed with error code 0x%x in %s
    NVRM: Unable to alloc root in %s
    NVRM: Failed to alloc memory allocator in %s
    NVRM: Exported object trying to free was zombie in %s
  Asserts: hObjExportRmClient != 0; pMemAllocator != NULL
  Call sites: mapDestroy_IMPL, portMemAllocatorRelease, mapInit_IMPL, contId, mapIterNext_IMPL, mapKey_IMPL
  Identifiers: pDeviceInstance, pRmObjExportDevice, pHandleRef, bUpdateTGP, bVidmemPersistent, __nvoc_pbase_Object, pRpcStructureCopy

Console save/restore (arch/nvalloc/unix/src/unix_console.c):
  Messages:
    NVRM: %s: Failed to acquire GPU lock
    NVRM: RM fallback doesn't support efifb console restore
    NVRM: RM fallback doesn't support saving of efifb console
    NVRM: unixCallVideoBIOS: 0x%x 0x%x, vga_satus = %d
    NVRM: int10h(%04x, %04x) vesa call failed! (%04x, %04x)
  Asserts: pParams->width == 0; pRmApi->Control(pRmApi, nv->rmapi.hClient, nv->rmapi.hSubDevice, NV2080_CTRL_CMD_INTERNAL_DISPLAY_PRE_UNIX_CONSOLE, &preUnixConsoleParams, sizeof(preUnixConsoleParams)); pRmApi->Control(pRmApi, nv->rmapi.hClient, nv->rmapi.hSubDevice, NV2080_CTRL_CMD_INTERNAL_DISPLAY_POST_UNIX_CONSOLE, &postUnixConsoleParams, sizeof(postUnixConsoleParams))
  Call sites: RmGpuHasIOSpaceEnabled, gpumgrGetSubDeviceInstanceFromGpu, RmUpdateGc6ConsoleRefCount, RmChangeResMode, RmSaveDisplayState, RmRestoreDisplayState, unixCallVideoBIOS, nv_vbios_call
  Identifiers: vga, memTarget, workspaceBase, bChangeResMode, _hal, preUnixConsoleParams, bUseVbios, bSave, postUnixConsoleParams, bVbiosCallSuccessful, vesaMode

GMMU faults and interrupt servicing (arch/nvalloc/unix/src/unix_intr.c):
  Asserts: intrGetIntrEnFromHw_HAL(pGpu, pIntr, NULL) == INTERRUPT_TYPE_DISABLED; intrTriggerPrivDoorbell_HAL(pGpu, pIntr, NV_DOORBELL_NOTIFY_LEAF_SERVICE_TMR_HANDLE); !pending; kdispOptimizePerFrameOsCallbacks(pGpu, pKernelDisplay, NV_TRUE, pThreadState, &vblankIntrServicedHeadMask, &intrPending)
  Call sites: gpumgrIsDeviceMsixAllowed, tlsIsrInit/tlsIsrDestroy, threadStateInitISRLockless/threadStateFreeISRLockless, kgmmuIsNonReplayableFaultPending_DISPATCH, kgmmuClearNonReplayableFaultIntr_DISPATCH, intrTriggerPrivDoorbell_DISPATCH, _rm_gpu_copy_mmu_faults_unlocked, gpuIsVoltaHubIntrSupported, kgmmuCopyMmuFaults_DISPATCH, RmIsrBottomHalfUnlocked, RmIsrBottomHalf, isrWrapper, intrGetPendingStall_DISPATCH, intrServiceNonStallBottomHalf_IMPL, osGetCurrentThread, rmDeviceGpuLockSetOwner, GPU_GET_DISP, intrServiceStall_DISPATCH, intrServiceNonStall_DISPATCH, rmDeviceGpuLocksReleaseAndThreadStateFreeDeferredIntHandlerOptimized, nv_control_soc_irqs, intrSetIntrEnInHw_DISPATCH, intrSetStall_DISPATCH, intrRestoreNonStall_DISPATCH, intrGetIntrEnFromHw_DISPATCH, gpuIsStateLoaded, osInterruptPending, intrGetPendingNonStall_TU102, intrCheckFecsEventbufferPending_IMPL, bitVectorTestAllCleared_IMPL, bitVectorTest_IMPL, bitVectorClr_IMPL, bitVectorClrAll_IMPL, bitVectorSet_IMPL, kdispAcquireLowLatencyLockConditional, kdispHandleAggressiveVblank_IMPL, kdispServiceLowLatencyIntrs_KERNEL, kdispReleaseLowLatencyLock, intrGetPendingLowLatencyHwDisplayIntr_DISPATCH, kdispGetDeferredVblankHeadMask, bitVectorOr_IMPL, intrClearLeafVector_DISPATCH, intrGetVectorFromEngineId_IMPL, _osIsrIntrMask_GpusUnlocked, kdispOptimizePerFrameOsCallbacks_IMPL, kdispSetDeferredVblankHeadMask, rmIntrMaskLockAcquire/rmIntrMaskLockRelease, intrGetIntrMaskFlags_IMPL, intrGetIntrMask_GP100, intrSetDisplayInterruptEnable_DISPATCH, intrSetIntrMask_DISPATCH
  Identifiers: pKernelGmmu_PRIVATE, overrideBigPageSize (generated/g_kern_gmmu_nvoc.h), pParsedFaultInfo, pCancelInfo, pMmuFaultType, pMmuFaultAddress, pClientFaultBuf, pFaultsCopied, pParsedFaultEntry, entriesCopied, pPutOffset, pGetOffset, pFaultBufferGet/pFaultBufferPut, pFaultBufferInfo, faultIntr, faultIntrSet, faultIntrClear, faultMask, pPrefetchCtrl, pPdeApertures, pLevels, pPteApertures, pTimeOut, pRootPageDir, pOffsetLo/pOffsetHi, pDataLo/pDataHi, stackAllocator, pIsrAllocator, pKernelGmmu, faultsCopied, pIntr, pDeviceLockGpu, pDpcThreadState, pDisp, sema_release, bIsAnyStallIntrPending, bIsAnyBottomHalfStallPending, bIsLowLatencyIntrPending, bFailedLockAcquire, intrMaskFlags

VBIOS calls via x86 emulation (arch/nvalloc/unix/src/vbioscall.c):
  Messages:
    NVRM: cannot call the VBIOS. INT10 vector not in ROM: %04x:%04x
    NVRM: x86emu can't map phys addr 0x%05x
    NVRM: x86emu: int $%d (eax = %08x)
  Assert: !x86emuReady
  Call sites: get_int_seg, get_int_off, X86EMU_trace_on, Mem_wb, pushw, X86EMU_exec, nv_get_updated_emu_seg, X86EMU_setupMemFuncs, X86EMU_setupPioFuncs, X86EMU_setupIntrFuncs, X86EMU_halt_sys, Mem_ww, Mem_rw, Mem_addr_xlat, os_io_write_dword, os_io_read_dword
  Registers/identifiers: pseg, x86, SS, spc, SP, I32_reg, e_reg, CS, IP, FLAGS, gen, A, D, ES, intFuncs, mem_base, mem_size, I16_reg, x_reg

x86emu instruction decode and ALU helpers:
  Call sites: fetch_word_imm, fetch_long_imm, fetch_byte_imm, decode_sib_address, get_data_segment, x86emu_intr_handle, x86emu_intr_raise, push_word, push_long, pop_word, pop_long, mem_access_word, fetch_decode_modrm, decode_rm00/rm01/rm10_address, decode_rm_byte_register, decode_rm_seg_register, fetch_data_long/word/byte (and _abs variants), store_data_long/word/byte (and _abs variants), inc_long/word/byte, dec_long/word/byte, test/not/neg/mul/imul/div/idiv_long/word/byte, imul_long_direct, cmp_long/word/byte, aad_word, aam_word, aas_word, aaa_word, das_byte, daa_byte, xor/sub/and/sbb/adc/or/add_long/word/byte, outs, ins, shrd_long/word, shld_long/word, __builtin_abs, __sync_and/or/xor/sub/add_and_fetch_8, __sync_bool_compare_and_swap_8, __sync_and/or/xor_and_fetch_4
  Mnemonic strings: SETO, SETNO, SETB, SETNB, SETZ, SETNZ, SETBE, SETNBE, SETS, SETNS, SETP, SETNP, SETL, SETNL, SETLE, SETNLE; JO, JNO, JB, JNB, JZ, JNZ, JBE, JNBE, JS, JNS, JP, JNP, JL, JNL, JLE, JNLE
  Identifiers: BP, SI, DI, scale, I8_reg, fetched, intno, intnum, nesting, destoffset, stkelem, destval, destval2, destreg, dstreg, srcoffset, srcval, DS, GS, FS, h_reg, l_reg, inc, val1, val2, faroff, farseg, shiftreg, dvd, cf, ocf, lb, ip32, ip16
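The SETcc/Jcc mnemonic pairs listed above (SETO/SETNO through JLE/JNLE) follow the standard x86 rule that condition codes come in even/odd pairs, where the odd code is the negation of the even one. A self-contained sketch of that evaluation table over the FLAGS bits; the flag positions are per the x86 architecture, while the emulator's own tables may be laid out differently.

```python
# x86 FLAGS bit masks (architectural positions)
CF, PF, ZF, SF, OF = 0x001, 0x004, 0x040, 0x080, 0x800

def cc_eval(cc, flags):
    """Evaluate x86 condition code 0..15 (O, NO, B, NB, Z, NZ, BE, NBE,
    S, NS, P, NP, L, NL, LE, NLE) against a FLAGS value."""
    base = [
        bool(flags & OF),                       # 0:  O
        bool(flags & CF),                       # 2:  B
        bool(flags & ZF),                       # 4:  Z
        bool(flags & (CF | ZF)),                # 6:  BE (unsigned <=)
        bool(flags & SF),                       # 8:  S
        bool(flags & PF),                       # 10: P
        bool(flags & SF) != bool(flags & OF),   # 12: L  (signed <, SF != OF)
        bool(flags & ZF)
            or (bool(flags & SF) != bool(flags & OF)),  # 14: LE (signed <=)
    ][cc >> 1]
    return base != bool(cc & 1)  # odd codes negate the even ones

assert cc_eval(4, ZF)            # JZ taken when ZF set
assert cc_eval(5, 0)             # JNZ taken when ZF clear
assert cc_eval(12, SF)           # JL: SF != OF
assert not cc_eval(12, SF | OF)  # SF == OF: JL not taken
```

Encoding the sixteen conditions as eight predicates plus a parity bit is why instruction tables for SETcc and Jcc in emulators are typically half the size you might expect.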
Portable runtime and range utilities:
  Call sites: __sync_bool_compare_and_swap_4, out_string, portAtomicMemoryFenceLoad, portUtilExReadTimestampCounter, portUtilIsPowerOfTwo, portSyncRwLockReleaseWrite/Read, portSyncSemaphoreRelease, rangeIsEmpty, rangeContains, rangeSplit, rangeOverlaps
  Identifiers: pData1, pData0, carveouts, baseRanges, swap, pSecondPartAfterSplit, pBigRange, merged, intersect

NVOC generated object code (constructor/destructor/dispatch chains):
  AccessCounterBuffer (generated/g_access_cntr_buffer_nvoc.c): __nvoc_objCreate_/__nvoc_init__/__nvoc_ctor_/__nvoc_init_funcTable_/__nvoc_init_dataField_AccessCounterBuffer, accesscntrConstruct_IMPL, accesscntrDestruct_IMPL, accesscntrGetMapAddrSpace/Unmap/Map_DISPATCH; asserts "ppThis != NULL && *ppThis != NULL", "pThis != NULL"; base chain GpuResource -> RmResource -> RsResource -> Object plus Notifier/INotifier; objAddChild_IMPL, objRemoveChild_IMPL
  Common dispatch methods: notifyGetOrAllocNotifShare/UnregisterEvent/SetNotificationShare; resAddAdditionalDependants, resGetRefCount, resUnmapFrom, resMapTo, resIsPartialUnmapSupported, resControlFilter, resPreDestruct, resIsDuplicate, resCanCopy, resUnmap, resMap, resControl, resControlSerialization_Epilogue/Prologue; rmresControl_Epilogue/Prologue, rmresControlSerialization_Epilogue/Prologue, rmresCheckMemInterUnmap, rmresGetMemInterMapParams, rmresAccessCallback, rmresShareCallback; gpuresGetInternalObjectHandle, gpuresInternalControlForward, gpuresGetRegBaseOffsetAndSize, gpuresShareCallback, gpuresControl, gpuresGetMapAddrSpace, gpuresUnmap, gpuresMap (all _DISPATCH)
  BinaryApi / BinaryApiPrivileged (generated/g_binary_api_nvoc.c): __nvoc_objCreate_BinaryApi(Privileged), binapiprivConstruct_IMPL, binapiConstruct_IMPL, binapiprivControl_DISPATCH, binapiControl_DISPATCH
  SPDM/attestation identifiers: pTD, pMsgHdr, magicId, pCertSize, pCert, pCertCount, pEncapCertChain(Size), pResponse(Size), pNonce, pAttestationReport(Size), pbIsCecAttestationReportPresent, pCecAttestationReport(Size), pKeyExCertChain(Size), pAttestationCertChainSize
  MMU format helpers: nvFieldGet32/Set32, nvFieldGet64/Set64, nvFieldGetEnum/SetEnum, mmuFmtLevelEntryCount, mmuFmtEntryVirtAddrMask, mmuFmtEntryIndexVirtAddrLo, mmuFmtLevelVirtAddrLo, mmuFmtLevelVirtAddrMask, mmuFmtEntryIndexVirtAddrMask; pField, pEnum, decoded, regions
  GR context buffer names: GR_CTX_BUFFER_MAIN, _ZCULL, _PM, _PREEMPT, _SPILL, _BETA_CB, _PAGEPOOL, _RTV_CB, _PATCH, _SETUP, __UNKNOWN; GR_GLOBALCTX_BUFFER_BUNDLE_CB, _PAGEPOOL, _ATTRIBUTE_CB, _RTV_CB, _GFXP_POOL, _GFXP_CTRL_BLK, _FECS_EVENT, _PRIV_ACCESS_MAP, _UNRESTRICTED_PRIV_ACCESS_MAP, GR_GLOBAL_BUFFER_GLOBAL_PRIV_ACCESS_MAP, GR_GLOBALCTX_BUFFER__UNKNOWN
  Ccsl: __nvoc_init_funcTable_Ccsl(_1), __nvoc_init_dataField_Ccsl; ivMask, pH2DKey, pD2HKey, pKeyId
  FBSR (generated/g_fbsr_nvoc.h): pVidMemDesc, pGrceMask, pCe, rd, wr, pKCeCaps
  CeUtils / CeUtilsApi (generated/g_ce_utils_nvoc.c): __nvoc_objCreate_CeUtils(Api), ceutilsapiConstruct/Destruct_IMPL, ceutilsConstruct/Destruct_IMPL; arg_pGpu, arg_pKernelMIGGPUInstance, arg_pAllocParams
  ChannelDescendant (generated/g_channel_descendant_nvoc.c): chandesConstruct/Destruct_IMPL, chandesCheckMemInterUnmap_DISPATCH; asserts "pParent != NULL", "pRmhalspecowner != NULL"; HAL specs pDpuIpHal, pDispIpHal, pRmVariantHal, pTegraChipHal, pChipHal, rmVariantHal, __nvoc_HalVarIdx
  OBJCL (generated/g_chipset_nvoc.c): clConstruct_IMPL, clDestruct_IMPL; PDB_PROP_CL_HAS_RESIZABLE_BAR_ISSUE, PDB_PROP_CL_BUG_3751839_GEN_SPEED_WAR
  RmClient / UserInfo (generated/g_client_nvoc.c): rmclientConstruct/Destruct_IMPL; clientShareResource, clientValidateNewResourceHandle, clientUnmapMemory, clientDestructResourceRef, rmclientIsAdmin, rmclientPostProcessPendingFreeList, rmclientInterUnmap/InterMap, rmclientFreeResource, rmclientValidateLocks, rmclientValidate (all _DISPATCH); userinfoConstruct/Destruct_IMPL over RsShared; arg_pAllocator
  RmClientResource (generated/g_client_resource_nvoc.c): cliresConstruct/Destruct_IMPL; cliresControl_Epilogue/Prologue, cliresShareCallback, cliresAccessCallback (all _DISPATCH)
  ComputeInstanceSubscription (generated/g_compute_instance_subscription_nvoc.c): cisubscriptionConstruct/Destruct_IMPL, cisubscriptionCanCopy_DISPATCH; pComputeInstanceSubscription_PRIVATE
  ConfidentialComputeApi (generated/g_conf_compute_api_nvoc.c): confComputeApiConstruct/Destruct_IMPL, rmresShareCallback_DISPATCH
  ConfidentialCompute (generated/g_conf_compute_nvoc.c, OBJENGSTATE-based; asserts "pGpuhalspecowner != NULL"): vtable entries __confComputeDestruct__, __confComputeStatePostLoad__, __confComputeStatePreUnload__, __confComputeSetErrorState__, __confComputeKeyStoreDeriveViaChannel__, __confComputeKeyStoreRetrieveViaChannel__, __confComputeKeyStoreRetrieveViaKeyId__, __confComputeDeriveSecretsForCEKeySpace__, __confComputeDeriveInitialKeySeed__, __confComputeGetAndUpdateCurrentKeySeed__, __confComputeDeriveSecrets__, __confComputeUpdateSecrets__, __confComputeIsSpdmEnabled__, __confComputeGetEngineIdFromKeySpace__, __confComputeGetKeySpaceFromKChannel__, __confComputeGetLceKeyIdFromKChannel__, __confComputeGetMaxCeKeySpaceIdx__, __confComputeGlobalKeyIsKernelPriv__, __confComputeGlobalKeyIsUvmKey__, __confComputeGetKeyPairByChannel__, __confComputeTriggerKeyRotation__, __confComputeGetKeyPairForKeySpace__, __confComputeEnableKeyRotationCallback__, __confComputeEnableKeyRotationSupport__, __confComputeEnableInternalKeyRotationSupport__, __confComputeIsDebugModeEnabled__, __confComputeIsGpuCcCapable__, __confComputeTestPlatformSupport__, __confComputeDeriveSessionKeys__, __confComputeKeyStoreDepositIvMask__, __confComputeKeyStoreUpdateKey__, __confComputeKeyStoreIsValidGlobalKeyId__, __confComputeKeyStoreInit__, __confComputeKeyStoreDeinit__, __confComputeKeyStoreGetExportMasterKey__, __confComputeGetCurrentKeySeed__, __confComputeKeyStoreDeriveKey__, __confComputeKeyStoreClearExportMasterKey__; properties PDB_PROP_ENGSTATE_IS_MISSING, PDB_PROP_CONFCOMPUTE_ENABLED, PDB_PROP_CONFCOMPUTE_CC_FEATURE_ENABLED, PDB_PROP_CONFCOMPUTE_APM_FEATURE_ENABLED, PDB_PROP_CONFCOMPUTE_DEVTOOLS_MODE_ENABLED, PDB_PROP_CONFCOMPUTE_ENABLE_EARLY_INIT, PDB_PROP_CONFCOMPUTE_GPUS_READY_CHECK_ENABLED, PDB_PROP_CONFCOMPUTE_MULTI_GPU_PROTECTED_PCIE_MODE_ENABLED, PDB_PROP_CONFCOMPUTE_MULTI_GPU_NVLE_MODE_ENABLED, PDB_PROP_CONFCOMPUTE_KEY_ROTATION_SUPPORTED, PDB_PROP_CONFCOMPUTE_KEY_ROTATION_ENABLED, PDB_PROP_CONFCOMPUTE_INTERNAL_KEY_ROTATION_ENABLED, PDB_PROP_CONFCOMPUTE_WAR_5107790_SYSMEM_FLUSH_ADDR; engstate dispatch engstateIsPresent, engstateStateDestroy, engstateStatePostUnload, engstateStateUnload, engstateStateLoad, engstateStatePreLoad, engstateStateInitUnlocked, engstateStatePreInitUnlocked, engstateInitMissing, confComputeStatePreUnload, confComputeStatePostLoad, confComputeStateInitLocked, confComputeStatePreInitLocked, confComputeConstructEngine (all _DISPATCH)
  ConsoleMemory (generated/g_console_mem_nvoc.c, Memory-based): conmemConstruct_IMPL; memIsExportAllowed, memIsGpuMapAllowed, memIsReady, memCheckCopyPermissions, memGetMemoryMappingDescriptor (all _DISPATCH)
memCheckMemInterUnmap_DISPATCH*call to memGetMemInterMapParams_DISPATCH*call to memUnmap_DISPATCH*call to memMap_DISPATCH*call to memControl_DISPATCH*call to memGetMapAddrSpace_DISPATCH*call to memIsDuplicate_DISPATCH*call to conmemCanCopy_DISPATCH*call to __nvoc_objCreate_ContextDma*generated/g_context_dma_nvoc.c**generated/g_context_dma_nvoc.c*call to __nvoc_init__ContextDma*call to __nvoc_ctor_ContextDma*__nvoc_pbase_ContextDma**__nvoc_pbase_ContextDma*call to __nvoc_init_funcTable_ContextDma*call to __nvoc_init_funcTable_ContextDma_1*call to __nvoc_init_dataField_ContextDma*call to ctxdmaConstruct_IMPL*call to ctxdmaDestruct_IMPL*call to ctxdmaUnmapFrom_DISPATCH*call to ctxdmaMapTo_DISPATCH*__nvoc_pbase_CrashCatEngine**__nvoc_pbase_CrashCatEngine*call to __nvoc_init_funcTable_CrashCatEngine*call to __nvoc_init_funcTable_CrashCatEngine_1*call to __nvoc_init_dataField_CrashCatEngine*call to crashcatEngineConstruct_IMPL*call to crashcatEngineDestruct_IMPL*call to __nvoc_objCreate_CrashCatQueue*arg_pQueueConfig*generated/g_crashcat_queue_nvoc.c**generated/g_crashcat_queue_nvoc.c*pCrashcatWayfinder**pCrashcatWayfinder*pCrashcatWayfinder != NULL**pCrashcatWayfinder != NULL*call to __nvoc_init__CrashCatQueue*call to __nvoc_ctor_CrashCatQueue*__nvoc_pbase_CrashCatQueue**__nvoc_pbase_CrashCatQueue*call to __nvoc_init_funcTable_CrashCatQueue*call to __nvoc_init_funcTable_CrashCatQueue_1*wayfinderHal*call to __nvoc_init_dataField_CrashCatQueue*call to crashcatQueueConstruct_IMPL*call to crashcatQueueDestruct_IMPL*call to __nvoc_objCreate_CrashCatReport*arg_ppReportBytes**arg_ppReportBytes*generated/g_crashcat_report_nvoc.c**generated/g_crashcat_report_nvoc.c*call to __nvoc_init__CrashCatReport*call to __nvoc_ctor_CrashCatReport*__nvoc_pbase_CrashCatReport**__nvoc_pbase_CrashCatReport*call to __nvoc_init_halspec_CrashCatReportHal*call to __nvoc_init_funcTable_CrashCatReport*call to 
__nvoc_init_funcTable_CrashCatReport_1*reportHal*__crashcatReportSourceContainment__*__crashcatReportLogReporter__*__crashcatReportLogSource__*__crashcatReportLogVersionProtobuf__*call to __nvoc_init_dataField_CrashCatReport*call to crashcatReportConstruct_IMPL*call to crashcatReportDestruct_V1*pCrashCatReportHal*call to __nvoc_objCreate_CrashCatWayfinder*generated/g_crashcat_wayfinder_nvoc.c**generated/g_crashcat_wayfinder_nvoc.c*call to __nvoc_init__CrashCatWayfinder*call to __nvoc_ctor_CrashCatWayfinder*__nvoc_pbase_CrashCatWayfinder**__nvoc_pbase_CrashCatWayfinder*call to __nvoc_init_halspec_CrashCatWayfinderHal*call to __nvoc_init_funcTable_CrashCatWayfinder*call to __nvoc_init_funcTable_CrashCatWayfinder_1*call to __nvoc_init_dataField_CrashCatWayfinder*call to crashcatWayfinderConstruct_IMPL*call to crashcatWayfinderDestruct_IMPL*pCrashCatWayfinderHal*call to __nvoc_objCreate_DebugBufferApi*generated/g_dbgbuffer_nvoc.c**generated/g_dbgbuffer_nvoc.c*call to __nvoc_init__DebugBufferApi*call to __nvoc_ctor_DebugBufferApi*__nvoc_pbase_DebugBufferApi**__nvoc_pbase_DebugBufferApi*call to __nvoc_init_funcTable_DebugBufferApi*call to __nvoc_init_funcTable_DebugBufferApi_1*call to __nvoc_init_dataField_DebugBufferApi*call to dbgbufConstruct_IMPL*call to dbgbufDestruct_IMPL*call to dbgbufGetMemoryMappingDescriptor_DISPATCH*call to dbgbufGetMapAddrSpace_DISPATCH*call to dbgbufUnmap_DISPATCH*call to dbgbufMap_DISPATCH*call to __nvoc_objCreate_OBJDCECLIENTRM*generated/g_dce_client_nvoc.c**generated/g_dce_client_nvoc.c*call to __nvoc_init__OBJDCECLIENTRM*call to __nvoc_ctor_OBJDCECLIENTRM*__nvoc_pbase_OBJDCECLIENTRM**__nvoc_pbase_OBJDCECLIENTRM*call to __nvoc_init_funcTable_OBJDCECLIENTRM*call to __nvoc_init_funcTable_OBJDCECLIENTRM_1*call to __nvoc_init_dataField_OBJDCECLIENTRM*call to dceclientDestruct_IMPL*call to engstateStatePreUnload_DISPATCH*call to engstateStatePostLoad_DISPATCH*call to engstateStateInitLocked_DISPATCH*call to 
engstateStatePreInitLocked_DISPATCH*call to dceclientStateUnload_DISPATCH*call to dceclientStateLoad_DISPATCH*call to dceclientStateDestroy_DISPATCH*call to dceclientConstructEngine_DISPATCH*call to __nvoc_objCreate_DeferredApiObject*generated/g_deferred_api_nvoc.c**generated/g_deferred_api_nvoc.c*__nvoc_base_ChannelDescendant*call to __nvoc_init__DeferredApiObject*call to __nvoc_ctor_DeferredApiObject*__nvoc_pbase_DeferredApiObject**__nvoc_pbase_DeferredApiObject*metadata__ChannelDescendant*call to __nvoc_init_funcTable_DeferredApiObject*call to __nvoc_init_funcTable_DeferredApiObject_1*call to __nvoc_init_dataField_DeferredApiObject*call to defapiConstruct_IMPL*call to __nvoc_dtor_ChannelDescendant*call to defapiDestruct_IMPL*call to defapiIsSwMethodStalling_DISPATCH*call to defapiGetSwMethods_DISPATCH*call to __nvoc_objCreate_Device*generated/g_device_nvoc.c**generated/g_device_nvoc.c*call to __nvoc_init__Device*call to __nvoc_ctor_Device*__nvoc_pbase_Device**__nvoc_pbase_Device*call to __nvoc_init_funcTable_Device*call to __nvoc_init_funcTable_Device_1*__deviceCtrlCmdDmaFlush__*__deviceCtrlCmdFifoGetEngineContextProperties__*__deviceCtrlCmdFifoGetLatencyBufferSize__*__deviceCtrlCmdFifoIdleChannels__*__deviceCtrlCmdHostGetCapsV2__*__deviceCtrlCmdGpuGetBrandCaps__*__deviceCtrlCmdMsencGetCapsV2__*__deviceCtrlCmdBspGetCapsV2__*__deviceCtrlCmdNvjpgGetCapsV2__*call to __nvoc_init_dataField_Device*call to deviceConstruct_IMPL*call to deviceDestruct_IMPL*call to deviceInternalControlForward_DISPATCH*call to deviceControl_DISPATCH*pDispCapabilities*call to __nvoc_objCreate_DispCapabilities*generated/g_disp_capabilities_nvoc.c**generated/g_disp_capabilities_nvoc.c*call to __nvoc_init__DispCapabilities*call to __nvoc_ctor_DispCapabilities*__nvoc_pbase_DispCapabilities**__nvoc_pbase_DispCapabilities*call to __nvoc_init_funcTable_DispCapabilities*call to __nvoc_init_funcTable_DispCapabilities_1*call to __nvoc_init_dataField_DispCapabilities*call to 
dispcapConstruct_IMPL*call to dispcapGetRegBaseOffsetAndSize_DISPATCH*call to __nvoc_objCreate_DispChannelDma*generated/g_disp_channel_nvoc.c**generated/g_disp_channel_nvoc.c*__nvoc_base_DispChannel*call to __nvoc_init__DispChannelDma*call to __nvoc_ctor_DispChannelDma*__nvoc_pbase_DispChannel**__nvoc_pbase_DispChannel*__nvoc_pbase_DispChannelDma**__nvoc_pbase_DispChannelDma*call to __nvoc_init__DispChannel*metadata__DispChannel*call to __nvoc_init_funcTable_DispChannelDma*call to __nvoc_init_funcTable_DispChannelDma_1*call to __nvoc_ctor_DispChannel*call to __nvoc_init_dataField_DispChannelDma*call to dispchndmaConstruct_IMPL*call to __nvoc_dtor_DispChannel*call to dispchnGetRegBaseOffsetAndSize_DISPATCH*call to __nvoc_objCreate_DispChannelPio*call to __nvoc_init__DispChannelPio*call to __nvoc_ctor_DispChannelPio*__nvoc_pbase_DispChannelPio**__nvoc_pbase_DispChannelPio*call to __nvoc_init_funcTable_DispChannelPio*call to __nvoc_init_funcTable_DispChannelPio_1*call to __nvoc_init_dataField_DispChannelPio*call to dispchnpioConstruct_IMPL*call to __nvoc_objCreate_DispChannel*call to __nvoc_init_funcTable_DispChannel*call to __nvoc_init_funcTable_DispChannel_1*call to __nvoc_init_dataField_DispChannel*call to dispchnConstruct_IMPL*call to dispchnDestruct_IMPL*generated/g_disp_inst_mem_nvoc.h**generated/g_disp_inst_mem_nvoc.h*pInstMem*pNewAddress*pNewLimit*pTotalInstMemSize*pHashTableSize*call to __nvoc_objCreate_DisplayInstanceMemory*generated/g_disp_inst_mem_nvoc.c**generated/g_disp_inst_mem_nvoc.c*call to __nvoc_init__DisplayInstanceMemory*call to __nvoc_ctor_DisplayInstanceMemory*__nvoc_pbase_DisplayInstanceMemory**__nvoc_pbase_DisplayInstanceMemory*call to __nvoc_init_funcTable_DisplayInstanceMemory*call to __nvoc_init_funcTable_DisplayInstanceMemory_1*dispIpHal*__instmemGetSize__*__instmemGetHashTableBaseAddr__*__instmemIsValid__*__instmemGenerateHashTableData__*__instmemHashFunc__*__instmemCommitContextDma__*__instmemUpdateContextDma__*call to 
__nvoc_init_dataField_DisplayInstanceMemory*call to instmemConstruct_IMPL*call to instmemDestruct_IMPL*pRsParams*call to __nvoc_objCreate_DispCommon*generated/g_disp_objs_nvoc.c**generated/g_disp_objs_nvoc.c*__nvoc_base_DisplayApi*call to __nvoc_init__DispCommon*call to __nvoc_ctor_DispCommon*__nvoc_pbase_DisplayApi**__nvoc_pbase_DisplayApi*__nvoc_pbase_DispCommon**__nvoc_pbase_DispCommon*call to __nvoc_init__DisplayApi*metadata__DisplayApi*call to __nvoc_init_funcTable_DispCommon*call to __nvoc_init_funcTable_DispCommon_1*call to __nvoc_ctor_DisplayApi*call to __nvoc_init_dataField_DispCommon*call to dispcmnConstruct_IMPL*call to __nvoc_dtor_DisplayApi*call to dispapiControl_Epilogue_DISPATCH*call to dispapiControl_Prologue_DISPATCH*call to dispapiControl_DISPATCH*call to __nvoc_objCreate_DispSwObj*call to __nvoc_init__DispSwObj*call to __nvoc_ctor_DispSwObj*__nvoc_pbase_DispSwObj**__nvoc_pbase_DispSwObj*call to __nvoc_init_funcTable_DispSwObj*call to __nvoc_init_funcTable_DispSwObj_1*call to __nvoc_init_dataField_DispSwObj*call to dispswobjConstruct_IMPL*call to __nvoc_objCreate_NvDispApi*__nvoc_base_DispObject*call to __nvoc_init__NvDispApi*call to __nvoc_ctor_NvDispApi*__nvoc_pbase_DispObject**__nvoc_pbase_DispObject*__nvoc_pbase_NvDispApi**__nvoc_pbase_NvDispApi*call to __nvoc_init__DispObject*metadata__DispObject*call to __nvoc_init_funcTable_NvDispApi*call to __nvoc_init_funcTable_NvDispApi_1*call to __nvoc_ctor_DispObject*call to __nvoc_init_dataField_NvDispApi*call to nvdispapiConstruct_IMPL*call to __nvoc_dtor_DispObject*call to __nvoc_objCreate_DispObject*call to __nvoc_init_funcTable_DispObject*call to __nvoc_init_funcTable_DispObject_1*call to __nvoc_init_dataField_DispObject*call to dispobjConstruct_IMPL*call to __nvoc_objCreate_DisplayApi*call to __nvoc_init_funcTable_DisplayApi*call to __nvoc_init_funcTable_DisplayApi_1*call to __nvoc_init_dataField_DisplayApi*call to dispapiConstruct_IMPL*call to dispapiDestruct_IMPL*pDispSfUser*call to 
__nvoc_objCreate_DispSfUser*generated/g_disp_sf_user_nvoc.c**generated/g_disp_sf_user_nvoc.c*call to __nvoc_init__DispSfUser*call to __nvoc_ctor_DispSfUser*__nvoc_pbase_DispSfUser**__nvoc_pbase_DispSfUser*call to __nvoc_init_funcTable_DispSfUser*call to __nvoc_init_funcTable_DispSfUser_1*call to __nvoc_init_dataField_DispSfUser*call to dispsfConstruct_IMPL*call to dispsfGetRegBaseOffsetAndSize_DISPATCH*call to __nvoc_objCreate_DispSwObject*generated/g_dispsw_nvoc.c**generated/g_dispsw_nvoc.c*call to __nvoc_init__DispSwObject*call to __nvoc_ctor_DispSwObject*__nvoc_pbase_DispSwObject**__nvoc_pbase_DispSwObject*call to __nvoc_init_funcTable_DispSwObject*call to __nvoc_init_funcTable_DispSwObject_1*call to __nvoc_init_dataField_DispSwObject*call to dispswConstruct_IMPL*call to dispswDestruct_IMPL*call to chandesIsSwMethodStalling_DISPATCH*call to dispswGetSwMethods_DISPATCH*pStandardMemory*pAllocRequest*call to __nvoc_objCreate_ExtendedGpuMemory*generated/g_egm_mem_nvoc.c**generated/g_egm_mem_nvoc.c*__nvoc_base_StandardMemory*call to __nvoc_init__ExtendedGpuMemory*call to __nvoc_ctor_ExtendedGpuMemory*__nvoc_pbase_StandardMemory**__nvoc_pbase_StandardMemory*__nvoc_pbase_ExtendedGpuMemory**__nvoc_pbase_ExtendedGpuMemory*call to __nvoc_init__StandardMemory*metadata__StandardMemory*call to __nvoc_init_funcTable_ExtendedGpuMemory*call to __nvoc_init_funcTable_ExtendedGpuMemory_1*call to __nvoc_ctor_StandardMemory*call to __nvoc_init_dataField_ExtendedGpuMemory*call to egmmemConstruct_IMPL*call to __nvoc_dtor_StandardMemory*call to stdmemCanCopy_DISPATCH*call to __nvoc_objCreate_OBJENGSTATE*generated/g_eng_state_nvoc.c**generated/g_eng_state_nvoc.c*call to __nvoc_init_funcTable_OBJENGSTATE*call to __nvoc_init_funcTable_OBJENGSTATE_1*call to __nvoc_init_dataField_OBJENGSTATE*call to engstateDestruct_IMPL*call to __nvoc_objCreate_EventBuffer*generated/g_event_buffer_nvoc.c**generated/g_event_buffer_nvoc.c*call to __nvoc_init__EventBuffer*call to 
__nvoc_ctor_EventBuffer*__nvoc_pbase_EventBuffer**__nvoc_pbase_EventBuffer*call to __nvoc_init_funcTable_EventBuffer*call to __nvoc_init_funcTable_EventBuffer_1*call to __nvoc_init_dataField_EventBuffer*call to eventbufferConstruct_IMPL*call to eventbufferDestruct_IMPL*call to __nvoc_init__INotifier*call to __nvoc_init_funcTable_Notifier*call to __nvoc_init_funcTable_Notifier_1*call to __nvoc_ctor_INotifier*call to __nvoc_init_dataField_Notifier*call to notifyConstruct_IMPL*call to __nvoc_dtor_INotifier*call to notifyDestruct_IMPL*call to __nvoc_init_funcTable_INotifier*call to __nvoc_init_funcTable_INotifier_1*call to __nvoc_init_dataField_INotifier*call to inotifyConstruct_IMPL*call to inotifyDestruct_IMPL*call to __nvoc_objCreate_Event*generated/g_event_nvoc.c**generated/g_event_nvoc.c*call to __nvoc_init__Event*call to __nvoc_ctor_Event*__nvoc_pbase_Event**__nvoc_pbase_Event*call to __nvoc_init_funcTable_Event*call to __nvoc_init_funcTable_Event_1*call to __nvoc_init_dataField_Event*call to eventConstruct_IMPL*call to eventDestruct_IMPL*call to __nvoc_objCreate_NotifShare*call to __nvoc_init__NotifShare*call to __nvoc_ctor_NotifShare*__nvoc_pbase_NotifShare**__nvoc_pbase_NotifShare*call to __nvoc_init_funcTable_NotifShare*call to __nvoc_init_funcTable_NotifShare_1*call to __nvoc_init_dataField_NotifShare*call to shrnotifConstruct_IMPL*call to shrnotifDestruct_IMPL*call to __nvoc_objCreate_Fabric*generated/g_fabric_nvoc.c**generated/g_fabric_nvoc.c*call to __nvoc_init__Fabric*call to __nvoc_ctor_Fabric*__nvoc_pbase_Fabric**__nvoc_pbase_Fabric*call to __nvoc_init_funcTable_Fabric*call to __nvoc_init_funcTable_Fabric_1*call to __nvoc_init_dataField_Fabric*call to fabricConstruct_IMPL*call to fabricDestruct_IMPL*generated/g_virt_mem_allocator_nvoc.h**generated/g_virt_mem_allocator_nvoc.h*memType*tgtPteMem*pComprInfo*pFabricVAS_PRIVATE*pUnused*call to 
__nvoc_objCreate_FABRIC_VASPACE*generated/g_fabric_vaspace_nvoc.c**generated/g_fabric_vaspace_nvoc.c*__nvoc_base_OBJVASPACE*call to __nvoc_init__FABRIC_VASPACE*call to __nvoc_ctor_FABRIC_VASPACE*__nvoc_pbase_OBJVASPACE**__nvoc_pbase_OBJVASPACE*__nvoc_pbase_FABRIC_VASPACE**__nvoc_pbase_FABRIC_VASPACE*call to __nvoc_init__OBJVASPACE*metadata__OBJVASPACE*call to __nvoc_init_funcTable_FABRIC_VASPACE*call to __nvoc_init_funcTable_FABRIC_VASPACE_1*call to __nvoc_ctor_OBJVASPACE*call to __nvoc_init_dataField_FABRIC_VASPACE*call to fabricvaspaceDestruct_IMPL*call to __nvoc_dtor_OBJVASPACE*call to vaspaceFreeV2_DISPATCH*call to vaspaceSetPteInfo_DISPATCH*call to vaspaceGetPteInfo_DISPATCH*call to vaspaceGetPageTableInfo_DISPATCH*call to vaspaceGetPasid_DISPATCH*call to vaspaceIsAtsEnabled_DISPATCH*call to vaspaceIsExternallyOwned_DISPATCH*call to vaspaceIsFaultCapable_DISPATCH*call to vaspaceGetBigPageSize_DISPATCH*call to vaspaceGetMapPageSize_DISPATCH*call to vaspaceReserveMempool_DISPATCH*call to vaspaceGetFlags_DISPATCH*call to vaspaceGetVaLimit_DISPATCH*call to vaspaceGetVaStart_DISPATCH*call to vaspaceIncAllocRefCnt_DISPATCH*call to fabricvaspaceInvalidateTlb_DISPATCH*call to fabricvaspaceUnpinRootPageDir_DISPATCH*call to fabricvaspacePinRootPageDir_DISPATCH*call to fabricvaspaceGetVasInfo_DISPATCH*call to fabricvaspaceApplyDefaultAlignment_DISPATCH*call to fabricvaspaceUnmap_DISPATCH*call to fabricvaspaceMap_DISPATCH*call to fabricvaspaceFree_DISPATCH*call to fabricvaspaceAlloc_DISPATCH*call to fabricvaspaceConstruct__DISPATCH*call to __nvoc_objCreate_OBJFBSR*generated/g_fbsr_nvoc.c**generated/g_fbsr_nvoc.c*call to __nvoc_init__OBJFBSR*call to __nvoc_ctor_OBJFBSR*__nvoc_pbase_OBJFBSR**__nvoc_pbase_OBJFBSR*call to __nvoc_init_funcTable_OBJFBSR*call to __nvoc_init_funcTable_OBJFBSR_1*__fbsrInit__*__fbsrDestroy__*__fbsrBegin__*__fbsrEnd__*__fbsrCopyMemoryMemDesc__*__fbsrSendMemsysProgramRawCompressionMode__*call to __nvoc_init_dataField_OBJFBSR*call to 
__nvoc_objCreate_FmSessionApi*generated/g_fm_session_api_nvoc.c**generated/g_fm_session_api_nvoc.c*call to __nvoc_init__FmSessionApi*call to __nvoc_ctor_FmSessionApi*__nvoc_pbase_FmSessionApi**__nvoc_pbase_FmSessionApi*call to __nvoc_init_funcTable_FmSessionApi*call to __nvoc_init_funcTable_FmSessionApi_1*call to __nvoc_init_dataField_FmSessionApi*call to fmsessionapiConstruct_IMPL*call to fmsessionapiDestruct_IMPL*call to __nvoc_objCreate_GenericEngineApi*generated/g_generic_engine_nvoc.c**generated/g_generic_engine_nvoc.c*call to __nvoc_init__GenericEngineApi*call to __nvoc_ctor_GenericEngineApi*__nvoc_pbase_GenericEngineApi**__nvoc_pbase_GenericEngineApi*call to __nvoc_init_funcTable_GenericEngineApi*call to __nvoc_init_funcTable_GenericEngineApi_1*call to __nvoc_init_dataField_GenericEngineApi*call to genapiConstruct_IMPL*call to genapiDestruct_IMPL*call to genapiControl_DISPATCH*call to genapiGetMapAddrSpace_DISPATCH*call to genapiMap_DISPATCH*call to __nvoc_objCreate_SwBcAperture*arg_pApertures*generated/g_gpu_access_nvoc.c**generated/g_gpu_access_nvoc.c*call to __nvoc_init__SwBcAperture*call to __nvoc_ctor_SwBcAperture*__nvoc_pbase_RegisterAperture**__nvoc_pbase_RegisterAperture*__nvoc_pbase_SwBcAperture**__nvoc_pbase_SwBcAperture*call to __nvoc_init__RegisterAperture*__nvoc_base_RegisterAperture*call to __nvoc_init_funcTable_SwBcAperture*call to __nvoc_init_funcTable_SwBcAperture_1*call to __nvoc_ctor_RegisterAperture*call to __nvoc_init_dataField_SwBcAperture*call to swbcaprtConstruct_IMPL*call to __nvoc_dtor_RegisterAperture*call to swbcaprtIsRegValid_DISPATCH*call to swbcaprtWriteReg32Uc_DISPATCH*call to swbcaprtWriteReg32_DISPATCH*call to swbcaprtWriteReg16_DISPATCH*call to swbcaprtWriteReg08_DISPATCH*call to swbcaprtReadReg32_DISPATCH*call to swbcaprtReadReg16_DISPATCH*call to swbcaprtReadReg08_DISPATCH*call to __nvoc_objCreate_IoAperture*arg_pParentAperture*arg_pMapping*call to __nvoc_init__IoAperture*call to 
__nvoc_ctor_IoAperture*__nvoc_pbase_IoAperture**__nvoc_pbase_IoAperture*call to __nvoc_init_funcTable_IoAperture*call to __nvoc_init_funcTable_IoAperture_1*call to __nvoc_init_dataField_IoAperture*call to ioaprtConstruct_IMPL*call to ioaprtDestruct_IMPL*call to ioaprtIsRegValid_DISPATCH*call to ioaprtWriteReg32Uc_DISPATCH*call to ioaprtWriteReg32_DISPATCH*call to ioaprtWriteReg16_DISPATCH*call to ioaprtWriteReg08_DISPATCH*call to ioaprtReadReg32_DISPATCH*call to ioaprtReadReg16_DISPATCH*call to ioaprtReadReg08_DISPATCH*call to __nvoc_objCreate_GpuAccounting*generated/g_gpu_acct_nvoc.c**generated/g_gpu_acct_nvoc.c*call to __nvoc_init__GpuAccounting*call to __nvoc_ctor_GpuAccounting*__nvoc_pbase_GpuAccounting**__nvoc_pbase_GpuAccounting*call to __nvoc_init_funcTable_GpuAccounting*call to __nvoc_init_funcTable_GpuAccounting_1*call to __nvoc_init_dataField_GpuAccounting*call to gpuacctConstruct_IMPL*call to gpuacctDestruct_IMPL*call to __nvoc_objCreate_GpuArch*generated/g_gpu_arch_nvoc.c**generated/g_gpu_arch_nvoc.c*call to __nvoc_init__GpuArch*call to __nvoc_ctor_GpuArch*__nvoc_pbase_GpuHalspecOwner**__nvoc_pbase_GpuHalspecOwner*__nvoc_pbase_GpuArch**__nvoc_pbase_GpuArch*call to __nvoc_init__GpuHalspecOwner*__nvoc_base_GpuHalspecOwner*call to __nvoc_init_funcTable_GpuArch*call to __nvoc_init_funcTable_GpuArch_1*__gpuarchGetSystemPhysAddrWidth__*__gpuarchGetDmaAddrWidth__*call to __nvoc_ctor_GpuHalspecOwner*call to __nvoc_init_dataField_GpuArch*call to gpuarchConstruct_IMPL*call to __nvoc_dtor_GpuHalspecOwner*bGpuArchIsZeroFb*bGpuarchSupportsIgpuRg*call to __nvoc_objCreate_OBJGPUBOOSTMGR*generated/g_gpu_boost_mgr_nvoc.c**generated/g_gpu_boost_mgr_nvoc.c*call to __nvoc_init__OBJGPUBOOSTMGR*call to __nvoc_ctor_OBJGPUBOOSTMGR*__nvoc_pbase_OBJGPUBOOSTMGR**__nvoc_pbase_OBJGPUBOOSTMGR*call to __nvoc_init_funcTable_OBJGPUBOOSTMGR*call to __nvoc_init_funcTable_OBJGPUBOOSTMGR_1*call to __nvoc_init_dataField_OBJGPUBOOSTMGR*call to gpuboostmgrConstruct_IMPL*call to 
gpuboostmgrDestruct_IMPL*call to __nvoc_objCreate_GpuDb*generated/g_gpu_db_nvoc.c**generated/g_gpu_db_nvoc.c*call to __nvoc_init__GpuDb*call to __nvoc_ctor_GpuDb*__nvoc_pbase_GpuDb**__nvoc_pbase_GpuDb*call to __nvoc_init_funcTable_GpuDb*call to __nvoc_init_funcTable_GpuDb_1*call to __nvoc_init_dataField_GpuDb*call to gpudbConstruct_IMPL*call to gpudbDestruct_IMPL*call to __nvoc_objCreate_OBJGPUGRP*generated/g_gpu_group_nvoc.c**generated/g_gpu_group_nvoc.c*call to __nvoc_init__OBJGPUGRP*call to __nvoc_ctor_OBJGPUGRP*__nvoc_pbase_OBJGPUGRP**__nvoc_pbase_OBJGPUGRP*call to __nvoc_init_funcTable_OBJGPUGRP*call to __nvoc_init_funcTable_OBJGPUGRP_1*call to __nvoc_init_dataField_OBJGPUGRP*call to __nvoc_init_halspec_ChipHal*call to __nvoc_init_halspec_TegraChipHal*call to __nvoc_init_funcTable_GpuHalspecOwner*call to __nvoc_init_funcTable_GpuHalspecOwner_1*call to __nvoc_init_dataField_GpuHalspecOwner*__nvoc_pbase_RmHalspecOwner**__nvoc_pbase_RmHalspecOwner*call to __nvoc_init_halspec_RmVariantHal*call to __nvoc_init_halspec_DispIpHal*call to __nvoc_init_funcTable_RmHalspecOwner*call to __nvoc_init_funcTable_RmHalspecOwner_1*call to __nvoc_init_dataField_RmHalspecOwner*call to __nvoc_objCreate_GPUInstanceSubscription*generated/g_gpu_instance_subscription_nvoc.c**generated/g_gpu_instance_subscription_nvoc.c*call to __nvoc_init__GPUInstanceSubscription*call to __nvoc_ctor_GPUInstanceSubscription*__nvoc_pbase_GPUInstanceSubscription**__nvoc_pbase_GPUInstanceSubscription*call to __nvoc_init_funcTable_GPUInstanceSubscription*call to __nvoc_init_funcTable_GPUInstanceSubscription_1*call to __nvoc_init_dataField_GPUInstanceSubscription*call to gisubscriptionConstruct_IMPL*call to gisubscriptionDestruct_IMPL*call to gisubscriptionCanCopy_DISPATCH*call to __nvoc_objCreate_GpuManagementApi*generated/g_gpu_mgmt_api_nvoc.c**generated/g_gpu_mgmt_api_nvoc.c*call to __nvoc_init__GpuManagementApi*call to 
__nvoc_ctor_GpuManagementApi*__nvoc_pbase_GpuManagementApi**__nvoc_pbase_GpuManagementApi*call to __nvoc_init_funcTable_GpuManagementApi*call to __nvoc_init_funcTable_GpuManagementApi_1*call to __nvoc_init_dataField_GpuManagementApi*call to gpumgmtapiConstruct_IMPL*call to gpumgmtapiDestruct_IMPL*call to __nvoc_objCreate_OBJGPUMGR*generated/g_gpu_mgr_nvoc.c**generated/g_gpu_mgr_nvoc.c*call to __nvoc_init__OBJGPUMGR*call to __nvoc_ctor_OBJGPUMGR*__nvoc_pbase_OBJGPUMGR**__nvoc_pbase_OBJGPUMGR*call to __nvoc_init_funcTable_OBJGPUMGR*call to __nvoc_init_funcTable_OBJGPUMGR_1*call to __nvoc_init_dataField_OBJGPUMGR*call to gpumgrConstruct_IMPL*call to gpumgrDestruct_IMPL*call to __nvoc_objCreate_OBJGPU*arg_pUuid*arg_pGpuArch*generated/g_gpu_nvoc.c**generated/g_gpu_nvoc.c*call to __nvoc_init__OBJGPU*call to __nvoc_ctor_OBJGPU*__nvoc_pbase_OBJTRACEABLE**__nvoc_pbase_OBJTRACEABLE*__nvoc_pbase_OBJGPU**__nvoc_pbase_OBJGPU*call to __nvoc_init__RmHalspecOwner*call to __nvoc_init__OBJTRACEABLE*__nvoc_base_RmHalspecOwner*__nvoc_base_OBJTRACEABLE*call to __nvoc_init_funcTable_OBJGPU*call to __nvoc_init_funcTable_OBJGPU_1*call to 
__nvoc_init_funcTable_OBJGPU_2*__gpuDetermineSelfHostedSocType__*__gpuValidateMIGSupport__*__gpuInitOptimusSettings__*__gpuDeinitOptimusSettings__*__gpuIsSliCapableWithoutDisplay__*__gpuIsCCEnabledInHw__*__gpuIsDevModeEnabledInHw__*__gpuIsProtectedPcieEnabledInHw__*__gpuIsProtectedPcieSupportedInFirmware__*__gpuIsMultiGpuNvleEnabledInHw__*__gpuIsNvleModeEnabledInHw__*__gpuIsCtxBufAllocInPmaSupported__*__gpuGetErrorContStateTableAndSize__*__gpuUpdateErrorContainmentState__*__gpuSetPartitionErrorAttribution__*__gpuCreateRusdMemory__*__gpuCheckEccCounts__*__gpuWaitForGfwBootComplete__*__gpuGetFirstAsyncLce__*__gpuIsInternalSkuFuseEnabled__*__gpuRequireGrCePresence__*__gpuGetIsCmpSku__*__gpuConstructDeviceInfoTable__*__gpuGetNameString__*__gpuGetShortNameString__*__gpuInitBranding__*__gpuGetRtd3GC6Data__*__gpuCheckEngine__*__gpuIsSocSdmEnabled__*__gpuReadPBusScratch__*__gpuWritePBusScratch__*__gpuSetResetScratchBit__*__gpuGetResetScratchBit__*__gpuResetRequiredStateChanged__*__gpuMarkDeviceForReset__*__gpuUnmarkDeviceForReset__*__gpuIsDeviceMarkedForReset__*__gpuSetDrainAndResetScratchBit__*__gpuGetDrainAndResetScratchBit__*__gpuMarkDeviceForDrainAndReset__*__gpuUnmarkDeviceForDrainAndReset__*__gpuIsDeviceMarkedForDrainAndReset__*__gpuRefreshRecoveryAction__*__gpuPowerOff__*__gpuPowerOn__*__gpuPowerOffHda__*__gpuPowerOnHda__*__gpuGetBusIntfType__*__gpuWriteBusConfigReg__*__gpuReadBusConfigReg__*__gpuReadBusConfigRegEx__*__gpuReadFunctionConfigReg__*__gpuWriteFunctionConfigReg__*__gpuWriteFunctionConfigRegEx__*__gpuReadPassThruConfigReg__*__gpuConfigAccessSanityCheck__*__gpuReadBusConfigCycle__*__gpuWriteBusConfigCycle__*__gpuReadPcieConfigCycle__*__gpuWritePcieConfigCycle__*__gpuGetIdInfo__*__gpuGenGidData__*__gpuGenUgidData__*__gpuGetChipSubRev__*__gpuGetSkuInfo__*__gpuGetVirtRegPhysOffset__*__gpuGetRegBaseOffset__*__gpuHandleSanityCheckRegReadError__*__gpuHandleSecFault__*__gpuGetSanityCheckRegReadError__*__gpuSanityCheckVirtRegAccess__*__gpuGetChildrenOrder__*__gpuGe
tChildrenPresent__*__gpuGetEngClassDescriptorList__*__gpuGetNoEngClassList__*__gpuInitSriov__*__gpuDeinitSriov__*__gpuMnocMboxSyncRecv__*__gpuMnocMboxSend__*__gpuMnocMboxRecv__*__gpuMnocMboxIsMsgAvailable__*__gpuMnocMboxInterruptEnable__*__gpuMnocMboxInterruptDisable__*__gpuMnocMboxInterruptRaised__*__gpuMnocMboxInterruptClear__*__gpuMnocMboxMinMessageSize__*__gpuMnocMboxMaxMessageSize__*__gpuCreateDefaultClientShare__*__gpuDestroyDefaultClientShare__*__gpuFuseSupportsDisplay__*__gpuJtVersionSanityCheck__*__gpuValidateRmctrlCmd__*__gpuValidateBusInfoIndex__*__gpuGetActiveFBIOs__*__gpuIsDebuggerActive__*__gpuExtdevConstruct__*__gpuIsGspToBootInInstInSysMode__*__gpuCheckPageRetirementSupport__*__gpuIsInternalSku__*__gpuClearFbhubPoisonIntrForBug2924523__*__gpuCheckIfFbhubPoisonIntrPending__*__gpuGetSriovCaps__*__gpuCheckIsP2PAllocated__*__gpuPrePowerOff__*__gpuVerifyExistence__*__gpuGetNvlinkLinkDetectionHalFlag__*__gpuDetectNvlinkLinkFromGpus__*__gpuGetFlaVasSize__*__gpuIsAtsSupportedWithSmcMemPartitioning__*__gpuIsGlobalPoisonFuseEnabled__*__gpuIsSystemRebootRequired__*__gpuDetermineSelfHostedMode__*call to __nvoc_ctor_RmHalspecOwner*call to __nvoc_ctor_OBJTRACEABLE*call to __nvoc_init_dataField_OBJGPU*call to gpuConstruct_IMPL*call to __nvoc_dtor_OBJTRACEABLE*call to 
[NVOC-generated string tables, NVIDIA Resource Manager]

Each class in this extract contributes the same family of strings: its lifecycle symbols (call to __nvoc_objCreate_<Class>, __nvoc_init__<Class>, __nvoc_ctor_<Class>, __nvoc_init_funcTable_<Class>, __nvoc_init_dataField_<Class>, plus <class>Construct_IMPL / <class>Destruct_IMPL and the per-method *_DISPATCH entry points), its generated source path, its virtual-method names (e.g. __kbusInitBar1__, __kgmmuInvalidateTlb__, __kbifTriggerFlr__), and its PDB_PROP_* property and data-field names (e.g. PDB_PROP_GPU_MIG_SUPPORTED, PDB_PROP_KBUS_SUPPORT_BAR1_P2P_BY_DEFAULT, bBar1Disabled, isVirtual, isGspClient).

The extract opens with the RmHalspecOwner destructor and the OBJGPU property/field table (PDB_PROP_GPU_* entries such as PDB_PROP_GPU_ATS_SUPPORTED, PDB_PROP_GPU_MIG_SUPPORTED, PDB_PROP_GPU_CC_FEATURE_CAPABLE, and fields such as isVirtual, isGspClient, isDceClient, bSriovCapable), followed by gpuDestruct_IMPL.

Classes covered, with their generated sources:

- GpuResource — generated/g_gpu_resource_nvoc.c
- GpuUserSharedData — generated/g_gpu_user_shared_data_nvoc.c
- OBJGVASPACE — generated/g_gpu_vaspace_nvoc.c
- GSyncApi — generated/g_gsync_api_nvoc.c
- OBJGSYNCMGR — generated/g_gsync_nvoc.c
- OBJHALMGR — generated/g_hal_mgr_nvoc.c
- OBJHAL — generated/g_hal_nvoc.c
- Hdacodec — generated/g_hda_codec_api_nvoc.c
- Heap — generated/g_heap_nvoc.c
- OBJHOSTENG (no source path in extract)
- MemoryHwResources — generated/g_hw_resources_nvoc.c
- OBJHYPERVISOR — generated/g_hypervisor_nvoc.c
- I2cApi — generated/g_i2c_api_nvoc.c
- ImexSessionApi — generated/g_imex_session_api_nvoc.c
- InstrumentationManager — generated/g_instrumentation_manager_nvoc.c
- Intr — generated/g_intr_nvoc.c
- IntrService (no source path in extract)
- OBJIOVASPACE — generated/g_io_vaspace_nvoc.c
- RegisterAperture (no source path in extract)
- OBJRCDB — generated/g_journal_nvoc.c
- KernelBus — generated/g_kern_bus_nvoc.c
- KernelDisplay — generated/g_kern_disp_nvoc.c
- KernelFsp — generated/g_kern_fsp_nvoc.c
- KernelGmmu — generated/g_kern_gmmu_nvoc.c
- KernelHwpm — generated/g_kern_hwpm_nvoc.c, generated/g_kern_hwpm_nvoc.h
- KernelMemorySystem — generated/g_kern_mem_sys_nvoc.c
- KernelPerf — generated/g_kern_perf_nvoc.c
- PerfBuffer — generated/g_kern_perfbuffer_nvoc.c
- KernelPmu — generated/g_kern_pmu_nvoc.c
- KernelBif — generated/g_kernel_bif_nvoc.c
- KernelCcuApi — generated/g_kernel_ccu_api_nvoc.c
- KernelCcu — generated/g_kernel_ccu_nvoc.c
- KernelCeContext — generated/g_kernel_ce_context_nvoc.c
- KernelCE — generated/g_kernel_ce_nvoc.c
- KernelChannelGroupApi — generated/g_kernel_channel_group_api_nvoc.c
- KernelChannelGroup — generated/g_kernel_channel_group_nvoc.c, generated/g_kernel_channel_group_nvoc.h
- KernelChannel — generated/g_kernel_channel_nvoc.c
- KernelCrashCatEngine / CrashCatEngine — generated/g_kernel_crashcat_engine_nvoc.h
- KernelCtxShareApi — generated/g_kernel_ctxshare_nvoc.c
__nvoc_ctor_KernelCtxShareApi*__nvoc_pbase_KernelCtxShareApi**__nvoc_pbase_KernelCtxShareApi*call to __nvoc_init_funcTable_KernelCtxShareApi*call to __nvoc_init_funcTable_KernelCtxShareApi_1*call to __nvoc_init_dataField_KernelCtxShareApi*call to kctxshareapiConstruct_IMPL*call to kctxshareapiDestruct_IMPL*call to kctxshareapiCanCopy_DISPATCH*call to __nvoc_objCreate_KernelCtxShare*call to __nvoc_init__KernelCtxShare*call to __nvoc_ctor_KernelCtxShare*__nvoc_pbase_KernelCtxShare**__nvoc_pbase_KernelCtxShare*call to __nvoc_init_funcTable_KernelCtxShare*call to __nvoc_init_funcTable_KernelCtxShare_1*call to __nvoc_init_dataField_KernelCtxShare*call to kctxshareConstruct_IMPL*call to kctxshareDestruct_IMPL*call to __nvoc_objCreate_GenericKernelFalcon*arg_pFalconConfig*generated/g_kernel_falcon_nvoc.c**generated/g_kernel_falcon_nvoc.c*call to __nvoc_init__GenericKernelFalcon*call to __nvoc_ctor_GenericKernelFalcon*__nvoc_base_KernelFalcon*__nvoc_base_KernelCrashCatEngine*__nvoc_pbase_KernelFalcon**__nvoc_pbase_KernelFalcon*__nvoc_pbase_GenericKernelFalcon**__nvoc_pbase_GenericKernelFalcon*call to __nvoc_init__KernelFalcon*metadata__KernelFalcon*metadata__KernelCrashCatEngine*call to __nvoc_init_funcTable_GenericKernelFalcon*call to __nvoc_init_funcTable_GenericKernelFalcon_1*__gkflcnRegRead__*__gkflcnRegWrite__*__gkflcnMaskDmemAddr__*__gkflcnReadDmem__*__gkflcnGetScratchOffsets__*__gkflcnGetWFL0Offset__*call to __nvoc_ctor_KernelFalcon*call to __nvoc_init_dataField_GenericKernelFalcon*call to gkflcnConstruct_IMPL*call to __nvoc_dtor_KernelFalcon*call to kcrashcatEngineReadEmem_DISPATCH*call to kcrashcatEngineReadDmem_DISPATCH*call to kflcnMaskDmemAddr_DISPATCH*call to kflcnRegWrite_DISPATCH*call to kflcnRegRead_DISPATCH*call to gkflcnServiceNotificationInterrupt_DISPATCH*call to gkflcnRegisterIntrService_DISPATCH*call to gkflcnResetHw_DISPATCH*call to __nvoc_init__KernelCrashCatEngine*call to __nvoc_init_funcTable_KernelFalcon*call to 
__nvoc_init_funcTable_KernelFalcon_1*__kflcnRegRead__*__kflcnRegWrite__*__kflcnRiscvRegRead__*__kflcnRiscvRegWrite__*__kflcnIsRiscvCpuEnabled__*__kflcnIsRiscvActive__*__kflcnIsRiscvSelected__*__kflcnRiscvProgramBcr__*__kflcnSwitchToFalcon__*__kflcnReset__*__kflcnResetIntoRiscv__*__kflcnStartCpu__*__kflcnDisableCtxReq__*__kflcnPreResetWait__*__kflcnWaitForResetToFinish__*__kflcnWaitForHalt__*__kflcnWaitForHaltRiscv__*__kflcnReadIntrStatus__*__kflcnRiscvReadIntrStatus__*__kflcnIntrRetrigger__*__kflcnMaskImemAddr__*__kflcnMaskDmemAddr__*__kflcnRiscvIcdWaitForIdle__*__kflcnRiscvIcdReadMem__*__kflcnRiscvIcdReadReg__*__kflcnRiscvIcdRcsr__*__kflcnRiscvIcdRstat__*__kflcnRiscvIcdRpc__*__kflcnRiscvIcdHalt__*__kflcnIcdReadCmdReg__*__kflcnRiscvIcdReadRdata__*__kflcnRiscvIcdWriteAddress__*__kflcnIcdWriteCmdReg__*__kflcnCoreDumpPc__*__kflcnDumpCoreRegs__*__kflcnDumpTracepc__*__kflcnDumpPeripheralRegs__*__kflcnGetEccInterruptMask__*__kflcnGetFatalHwErrorStatus__*__kflcnFatalHwErrorCodeToString__*__kflcnReadDmem__*__kflcnGetScratchOffsets__*__kflcnGetWFL0Offset__*call to __nvoc_ctor_KernelCrashCatEngine*call to __nvoc_init_dataField_KernelFalcon*call to __nvoc_dtor_KernelCrashCatEngine*call to __nvoc_objCreate_KernelFifo*generated/g_kernel_fifo_nvoc.c**generated/g_kernel_fifo_nvoc.c*call to __nvoc_init__KernelFifo*call to __nvoc_ctor_KernelFifo*__nvoc_pbase_KernelFifo**__nvoc_pbase_KernelFifo*call to __nvoc_init_funcTable_KernelFifo*call to 
__nvoc_init_funcTable_KernelFifo_1*__kfifoConstructHal__*__kfifoStatePostLoad__*__kfifoStatePreUnload__*__kfifoChannelGroupGetDefaultTimeslice__*__kfifoGetInstMemInfo__*__kfifoGetInstBlkSizeAlign__*__kfifoGetDefaultRunlist__*__kfifoValidateSCGTypeAndRunqueue__*__kfifoValidateEngineAndRunqueue__*__kfifoValidateEngineAndSubctxType__*__kfifoRmctrlGetWorkSubmitToken__*__kfifoChannelGetFifoContextMemDesc__*__kfifoCheckChannelAllocAddrSpaces__*__kfifoConvertInstToKernelChannel__*__kfifoConstructUsermodeMemdescs__*__kfifoGetUsermodeMapInfo__*__kfifoGetMaxSubcontext__*__kfifoChannelGroupGetLocalMaxSubcontext__*__kfifoGetMaxLowerSubcontext__*__kfifoGetNumRunqueues__*__kfifoGetMaxChannelGroupSize__*__kfifoGetCtxBufferMapFlags__*__kfifoEngineInfoXlate__*__kfifoGetSubctxType__*__kfifoGenerateWorkSubmitTokenHal__*__kfifoRingChannelDoorBell__*__kfifoUpdateUsermodeDoorbell__*__kfifoGetNumEngines__*__kfifoGetEngineName__*__kfifoGetMaxNumRunlists__*__kfifoGetEnginePbdmaIds__*__kfifoReservePbdmaFaultIds__*__kfifoGetEnginePartnerList__*__kfifoRunlistIsTsgHeaderSupported__*__kfifoRunlistGetEntrySize__*__kfifoRunlistGetBaseShift__*__kfifoPreAllocUserD__*__kfifoFreePreAllocUserD__*__kfifoGetUserdBar1MapStartOffset__*__kfifoGetUserdBar1MapInfo__*__kfifoGetUserdSizeAlign__*__kfifoGetUserdLocation__*__kfifoCalcTotalSizeOfFaultMethodBuffers__*__kfifoGetMaxCeChannelGroups__*__kfifoCheckEngine__*__kfifoGetVChIdForSChId__*__kfifoProgramChIdTable__*__kfifoRecoverAllChannels__*__kfifoStartChannelHalt__*__kfifoCompleteChannelHalt__*__kfifoRunlistSetId__*__kfifoRunlistSetIdByEngine__*__kfifoSetupUserD__*__kfifoGetEnginePbdmaFaultIds__*__kfifoGetNumPBDMAs__*__kfifoPrintPbdmaId__*__kfifoPrintInternalEngine__*__kfifoPrintInternalEngineCheck__*__kfifoGetClientIdStringCommon__*__kfifoGetClientIdString__*__kfifoGetClientIdStringCheck__*__kfifoGetFaultAccessTypeString__*call to 
__nvoc_init_dataField_KernelFifo*bUsePerRunlistChram*bIsPerRunlistChramSupportedInHw*bHostEngineExpansion*bHostHasLbOverflow*bSubcontextSupported*bIsZombieSubctxWarEnabled*bGuestGenenratesWorkSubmitToken*bIsPbdmaMmuEngineIdContiguous*bDoorbellsSupported*pBar1VF**pBar1VF*pBar1PrivVF**pBar1PrivVF**pRegVF*call to kfifoDestruct_IMPL*call to kfifoStatePreUnload_DISPATCH*call to kfifoStatePostLoad_DISPATCH*call to kfifoStateDestroy_DISPATCH*call to kfifoStateInitLocked_DISPATCH*call to kfifoStateUnload_DISPATCH*call to kfifoStateLoad_DISPATCH*call to kfifoConstructEngine_DISPATCH*call to __nvoc_objCreate_KernelGraphicsContextShared*generated/g_kernel_graphics_context_nvoc.c**generated/g_kernel_graphics_context_nvoc.c*call to __nvoc_init__KernelGraphicsContextShared*call to __nvoc_ctor_KernelGraphicsContextShared*__nvoc_pbase_KernelGraphicsContextShared**__nvoc_pbase_KernelGraphicsContextShared*call to __nvoc_init_funcTable_KernelGraphicsContextShared*call to __nvoc_init_funcTable_KernelGraphicsContextShared_1*call to __nvoc_init_dataField_KernelGraphicsContextShared*call to shrkgrctxConstruct_IMPL*call to shrkgrctxDestruct_IMPL*call to __nvoc_objCreate_KernelGraphicsContext*call to __nvoc_init__KernelGraphicsContext*call to __nvoc_ctor_KernelGraphicsContext*__nvoc_pbase_KernelGraphicsContext**__nvoc_pbase_KernelGraphicsContext*call to __nvoc_init_funcTable_KernelGraphicsContext*call to __nvoc_init_funcTable_KernelGraphicsContext_1*__kgrctxShouldPreAllocPmBuffer__*__kgrctxGetRegisterAccessMapId__*call to __nvoc_init_dataField_KernelGraphicsContext*call to kgrctxConstruct_IMPL*call to kgrctxDestruct_IMPL*call to kgrctxGetInternalObjectHandle_DISPATCH*call to kgrctxCanCopy_DISPATCH*call to __nvoc_objCreate_KernelGraphicsManager*generated/g_kernel_graphics_manager_nvoc.c**generated/g_kernel_graphics_manager_nvoc.c*call to __nvoc_init__KernelGraphicsManager*call to __nvoc_ctor_KernelGraphicsManager*__nvoc_pbase_KernelGraphicsManager**__nvoc_pbase_KernelGraphicsManager*call to 
__nvoc_init_funcTable_KernelGraphicsManager*call to __nvoc_init_funcTable_KernelGraphicsManager_1*__kgrmgrGetVeidsFromGpcCount__*call to __nvoc_init_dataField_KernelGraphicsManager*call to kgrmgrDestruct_IMPL*call to kgrmgrStateDestroy_DISPATCH*call to kgrmgrConstructEngine_DISPATCH*pKernelGraphicsObject_PRIVATE*pPromoteIds*pbPromote*pKernelGraphics_PRIVATE*bBug4208224WAREnabled*bPerSubcontextContextHeaderSupported*bOverrideContextBuffersPteKind*bPeFiroBufferEnabled*bOverrideContextBuffersToGpuCached*bBottomHalfCtxswLoggingEnabled*bIntrDrivenCtxswLoggingEnabled*bCtxswLoggingSupported*generated/g_kernel_graphics_nvoc.h**generated/g_kernel_graphics_nvoc.h*bCtxswLoggingEnabled*call to __nvoc_objCreate_KernelGraphics*generated/g_kernel_graphics_nvoc.c**generated/g_kernel_graphics_nvoc.c*call to __nvoc_init__KernelGraphics*call to __nvoc_ctor_KernelGraphics*__nvoc_pbase_KernelGraphics**__nvoc_pbase_KernelGraphics*call to __nvoc_init_funcTable_KernelGraphics*call to __nvoc_init_funcTable_KernelGraphics_1*__kgraphicsAllocGrGlobalCtxBuffers__*__kgraphicsTeardownBug4208224State__*__kgraphicsCreateBug4208224Channel__*__kgraphicsInitializeBug4208224WAR__*__kgraphicsIsBug4208224WARNeeded__*__kgraphicsAllocGlobalCtxBuffers__*__kgraphicsLoadStaticInfo__*__kgraphicsClearInterrupt__*__kgraphicsServiceInterrupt__*__kgraphicsIsUnrestrictedAccessMapSupported__*__kgraphicsGetFecsTraceRdOffset__*__kgraphicsSetFecsTraceRdOffset__*__kgraphicsSetFecsTraceWrOffset__*__kgraphicsSetFecsTraceHwEnable__*__kgraphicsIsCtxswLoggingEnabled__*call to __nvoc_init_dataField_KernelGraphics*bDeferContextInit*bSetContextBuffersGPUPrivileged*bUcodeSupportsPrivAccessMap*bRtvCbSupported*bFecsRecordUcodeSeqnoSupported*call to kgraphicsDestruct_IMPL*call to kgraphicsServiceInterrupt_DISPATCH*call to kgraphicsClearInterrupt_DISPATCH*call to kgraphicsServiceNotificationInterrupt_DISPATCH*call to kgraphicsRegisterIntrService_DISPATCH*call to kgraphicsStatePostLoad_DISPATCH*call to 
kgraphicsIsPresent_DISPATCH*call to kgraphicsStateDestroy_DISPATCH*call to kgraphicsStateUnload_DISPATCH*call to kgraphicsStatePreUnload_DISPATCH*call to kgraphicsStateLoad_DISPATCH*call to kgraphicsStateInitLocked_DISPATCH*call to kgraphicsConstructEngine_DISPATCH*call to __nvoc_objCreate_KernelGraphicsObject*generated/g_kernel_graphics_object_nvoc.c**generated/g_kernel_graphics_object_nvoc.c*call to __nvoc_init__KernelGraphicsObject*call to __nvoc_ctor_KernelGraphicsObject*__nvoc_pbase_KernelGraphicsObject**__nvoc_pbase_KernelGraphicsObject*call to __nvoc_init_funcTable_KernelGraphicsObject*call to __nvoc_init_funcTable_KernelGraphicsObject_1*__kgrobjGetPromoteIds__*__kgrobjSetComputeMmio__*__kgrobjFreeComputeMmio__*call to __nvoc_init_dataField_KernelGraphicsObject*call to kgrobjConstruct_IMPL*call to kgrobjDestruct_IMPL*call to kgrobjGetMemInterMapParams_DISPATCH*call to __nvoc_objCreate_KernelGsp*generated/g_kernel_gsp_nvoc.c**generated/g_kernel_gsp_nvoc.c*call to __nvoc_init__KernelGsp*call to __nvoc_ctor_KernelGsp*__nvoc_pbase_KernelGsp**__nvoc_pbase_KernelGsp*call to __nvoc_init_funcTable_KernelGsp*call to kgspHasLibosKernelLogging_72a2e1*call to kgspHasLibosKernelLogging_d69453*call to kgspHasLibosKernelLogging_e661f0*call to 
__nvoc_init_funcTable_KernelGsp_1*__kgspConfigureFalcon__*__kgspIsDebugModeEnabled__*__kgspAllocBootArgs__*__kgspFreeBootArgs__*__kgspProgramLibosBootArgsAddr__*__kgspSetCmdQueueHead__*__kgspPrepareForBootstrap__*__kgspBootstrap__*__kgspTeardown__*__kgspGetGspRmBootUcodeStorage__*__kgspGetBinArchiveGspRmBoot__*__kgspGetBinArchiveConcatenatedFMCDesc__*__kgspGetBinArchiveConcatenatedFMC__*__kgspGetBinArchiveGspRmFmcGfwDebugSigned__*__kgspGetBinArchiveGspRmFmcGfwProdSigned__*__kgspGetBinArchiveGspRmCcFmcGfwProdSigned__*__kgspPopulateWprMeta__*__kgspGetNonWprHeapSize__*__kgspExecuteSequencerCommand__*__kgspReadUcodeFuseVersion__*__kgspResetHw__*__kgspHealthCheck__*__kgspDumpMailbox__*__kgspService__*__kgspServiceFatalHwError__*__kgspEccServiceEvent__*__kgspEccServiceUncorrError__*__kgspIsWpr2Up__*__kgspGetFrtsSize__*__kgspGetPrescrubbedTopFbSize__*__kgspExtractVbiosFromRom__*__kgspPrepareForFwsecFrts__*__kgspPrepareForFwsecSb__*__kgspExecuteFwsec__*__kgspIsScrubberImageSupported__*__kgspExecuteScrubberIfNeeded__*__kgspExecuteBooterLoad__*__kgspExecuteBooterUnloadIfNeeded__*__kgspExecuteHsFalcon__*__kgspWaitForProcessorSuspend__*__kgspPrepareSuspendResumeData__*__kgspFreeSuspendResumeData__*__kgspWaitForGfwBootOk__*__kgspGetBinArchiveBooterLoadUcode__*__kgspGetBinArchiveBooterUnloadUcode__*__kgspGetLogCount__*__kgspGetMinWprHeapSizeMB__*__kgspGetMaxWprHeapSizeMB__*__kgspGetFwHeapParamOsCarveoutSize__*__kgspInitVgpuPartitionLogging__*__kgspPreserveVgpuPartitionLogging__*__kgspFreeVgpuPartitionLogging__*__kgspGetLibosVersion__*__kgspVgpuFwHeapSize__*__kgspVgpuNumVgpuPartitions__*__kgspGetSignatureSectionNamePrefix__*__kgspSetupGspFmcArgs__*__kgspReadEmem__*__kgspGetCrashcatSysmemBufferSize__*__kgspIssueNotifyOp__*__kgspCheckGspRmCcCleanup__*__kgspRegRead__*__kgspRegWrite__*__kgspMaskDmemAddr__*__kgspReadDmem__*__kgspGetScratchOffsets__*__kgspGetWFL0Offset__*call to 
__nvoc_init_dataField_KernelGsp*bPartitionedFmc*bScrubberUcodeSupported*fwHeapParamBaseSize*bBootGspRmWithBoostClocks*ememPort*call to kgspDestruct_IMPL*call to kgspReadEmem_DISPATCH*call to kgspResetHw_DISPATCH*call to kgspServiceInterrupt_DISPATCH*call to kgspRegisterIntrService_DISPATCH*call to kgspStateInitLocked_DISPATCH*call to kgspConstructEngine_DISPATCH*call to __nvoc_objCreate_KernelGsplite*generated/g_kernel_gsplite_nvoc.c**generated/g_kernel_gsplite_nvoc.c*call to __nvoc_init__KernelGsplite*call to __nvoc_ctor_KernelGsplite*__nvoc_pbase_KernelGsplite**__nvoc_pbase_KernelGsplite*call to __nvoc_init_funcTable_KernelGsplite*call to __nvoc_init_funcTable_KernelGsplite_1*call to __nvoc_init_dataField_KernelGsplite*PDB_PROP_KGSPLITE_ENABLE_CMC_NVLOG*call to kgspliteDestruct_IMPL*call to kgspliteStateInitLocked_DISPATCH*call to kgspliteStateInitUnlocked_DISPATCH*call to kgspliteConstructEngine_DISPATCH*call to __nvoc_objCreate_KernelHead*generated/g_kernel_head_nvoc.c**generated/g_kernel_head_nvoc.c*call to __nvoc_init__KernelHead*call to __nvoc_ctor_KernelHead*__nvoc_pbase_KernelHead**__nvoc_pbase_KernelHead*call to __nvoc_init_funcTable_KernelHead*call to __nvoc_init_funcTable_KernelHead_1*__kheadResetPendingLastData__*__kheadReadVblankIntrEnable__*__kheadGetDisplayInitialized__*__kheadWriteVblankIntrEnable__*__kheadProcessVblankCallbacks__*__kheadResetPendingVblank__*__kheadReadPendingVblank__*__kheadGetLoadVCounter__*__kheadGetCrashLockCounterV__*__kheadReadPendingRgLineIntr__*__kheadVsyncNotificationOverRgVblankIntr__*__kheadResetRgLineIntrMask__*__kheadProcessRgLineCallbacks__*__kheadReadPendingRgSemIntr__*__kheadHandleRgSemIntr__*call to __nvoc_init_dataField_KernelHead*call to kheadConstruct_IMPL*bIsPanelReplayEnabled*call to kheadDestruct_IMPL*call to __nvoc_objCreate_KernelHFRP*generated/g_kernel_hfrp_nvoc.c**generated/g_kernel_hfrp_nvoc.c*call to __nvoc_init__KernelHFRP*call to 
__nvoc_ctor_KernelHFRP*__nvoc_pbase_KernelHFRP**__nvoc_pbase_KernelHFRP*call to __nvoc_init_funcTable_KernelHFRP*call to __nvoc_init_funcTable_KernelHFRP_1*call to __nvoc_init_dataField_KernelHFRP*PDB_PROP_KHFRP_IS_ENABLED*PDB_PROP_KHFRP_HDA_IS_ENABLED*call to khfrpDestruct_IMPL*call to khfrpConstructEngine_DISPATCH*call to khfrpStatePreInitLocked_DISPATCH*call to __nvoc_objCreate_KernelHostVgpuDeviceApi*generated/g_kernel_hostvgpudeviceapi_nvoc.c**generated/g_kernel_hostvgpudeviceapi_nvoc.c*call to __nvoc_init__KernelHostVgpuDeviceApi*call to __nvoc_ctor_KernelHostVgpuDeviceApi*__nvoc_pbase_KernelHostVgpuDeviceApi**__nvoc_pbase_KernelHostVgpuDeviceApi*call to __nvoc_init_funcTable_KernelHostVgpuDeviceApi*call to __nvoc_init_funcTable_KernelHostVgpuDeviceApi_1*call to __nvoc_init_dataField_KernelHostVgpuDeviceApi*call to kernelhostvgpudeviceapiConstruct_IMPL*call to kernelhostvgpudeviceapiDestruct_IMPL*call to kernelhostvgpudeviceapiCanCopy_DISPATCH*call to __nvoc_objCreate_KernelHostVgpuDeviceShr*call to __nvoc_init__KernelHostVgpuDeviceShr*call to __nvoc_ctor_KernelHostVgpuDeviceShr*__nvoc_pbase_KernelHostVgpuDeviceShr**__nvoc_pbase_KernelHostVgpuDeviceShr*call to __nvoc_init_funcTable_KernelHostVgpuDeviceShr*call to __nvoc_init_funcTable_KernelHostVgpuDeviceShr_1*call to __nvoc_init_dataField_KernelHostVgpuDeviceShr*call to kernelhostvgpudeviceshrConstruct_IMPL*call to kernelhostvgpudeviceshrDestruct_IMPL*pKernelIoctrl_PRIVATE*call to __nvoc_objCreate_KernelIoctrl*generated/g_kernel_ioctrl_nvoc.c**generated/g_kernel_ioctrl_nvoc.c*call to __nvoc_init__KernelIoctrl*call to __nvoc_ctor_KernelIoctrl*__nvoc_pbase_KernelIoctrl**__nvoc_pbase_KernelIoctrl*call to __nvoc_init_funcTable_KernelIoctrl*call to __nvoc_init_funcTable_KernelIoctrl_1*__kioctrlGetMinionEnableDefault__*__kioctrlMinionConstruct__*call to __nvoc_init_dataField_KernelIoctrl*PDB_PROP_KIOCTRL_MINION_AVAILABLE*call to 
kioctrlConstructEngine_DISPATCH*generated/g_kernel_mc_nvoc.h**generated/g_kernel_mc_nvoc.h*call to __nvoc_objCreate_KernelMc*generated/g_kernel_mc_nvoc.c**generated/g_kernel_mc_nvoc.c*call to __nvoc_init__KernelMc*call to __nvoc_ctor_KernelMc*__nvoc_pbase_KernelMc**__nvoc_pbase_KernelMc*call to __nvoc_init_funcTable_KernelMc*call to __nvoc_init_funcTable_KernelMc_1*__kmcWritePmcEnableReg__*__kmcPrepareForXVEReset__*__kmcGetMcBar0MapInfo__*call to __nvoc_init_dataField_KernelMc*call to kmcStateLoad_DISPATCH*call to kmcStateInitLocked_DISPATCH*call to __nvoc_objCreate_KernelMIGManager*generated/g_kernel_mig_manager_nvoc.c**generated/g_kernel_mig_manager_nvoc.c*call to __nvoc_init__KernelMIGManager*call to __nvoc_ctor_KernelMIGManager*__nvoc_pbase_KernelMIGManager**__nvoc_pbase_KernelMIGManager*call to __nvoc_init_funcTable_KernelMIGManager*call to __nvoc_init_funcTable_KernelMIGManager_1*__kmigmgrGetGRCERange__*__kmigmgrGetAsyncCERange__*__kmigmgrLoadStaticInfo__*__kmigmgrSetStaticInfo__*__kmigmgrClearStaticInfo__*__kmigmgrSaveToPersistenceFromVgpuStaticInfo__*__kmigmgrDeleteGPUInstanceRunlists__*__kmigmgrCreateGPUInstanceRunlists__*__kmigmgrRestoreFromPersistence__*__kmigmgrApplyDefaultCeMappings__*__kmigmgrClearMIGGpuInstanceCeMapping__*__kmigmgrApplyMIGGpuInstanceCeMapping__*__kmigmgrCreateGPUInstanceCheck__*__kmigmgrIsDevinitMIGBitSet__*__kmigmgrDetectReducedConfig__*__kmigmgrIsGPUInstanceCombinationValid__*__kmigmgrIsGPUInstanceFlagValid__*__kmigmgrGpuInstanceSupportVgpuTimeslice__*__kmigmgrGenerateComputeInstanceUuid__*__kmigmgrGenerateGPUInstanceUuid__*__kmigmgrCreateComputeInstances__*__kmigmgrIsMemoryPartitioningRequested__*__kmigmgrIsMemoryPartitioningNeeded__*__kmigmgrIsGfxCapabilitesRequested__*__kmigmgrMemSizeFlagToSwizzIdRange__*__kmigmgrSwizzIdToSpan__*__kmigmgrSwizzIdToGrSpan__*__kmigmgrSetMIGState__*__kmigmgrGetComputeProfileFromGpcCount__*__kmigmgrIsCTSAlignmentRequired__*__kmigmgrRestoreFromBootConfig__*call to 
__nvoc_init_dataField_KernelMIGManager*call to kmigmgrDestruct_IMPL*call to kmigmgrStateUnload_DISPATCH*call to kmigmgrStateInitLocked_DISPATCH*call to kmigmgrConstructEngine_DISPATCH*call to nvdecctxDestructHal_KERNEL*call to nvdecctxConstructHal_KERNEL*arg_pNvdecContext*call to __nvoc_objCreate_NvdecContext*generated/g_kernel_nvdec_ctx_nvoc.c**generated/g_kernel_nvdec_ctx_nvoc.c*call to __nvoc_init__NvdecContext*call to __nvoc_ctor_NvdecContext*__nvoc_pbase_NvdecContext**__nvoc_pbase_NvdecContext*call to __nvoc_init_funcTable_NvdecContext*call to __nvoc_init_funcTable_NvdecContext_1*call to __nvoc_init_dataField_NvdecContext*call to __nvoc_nvdecctxConstruct*call to __nvoc_nvdecctxDestruct*call to msencctxDestructHal_KERNEL*call to msencctxConstructHal_KERNEL*arg_pMsencContext*call to __nvoc_objCreate_MsencContext*generated/g_kernel_nvenc_ctx_nvoc.c**generated/g_kernel_nvenc_ctx_nvoc.c*call to __nvoc_init__MsencContext*call to __nvoc_ctor_MsencContext*__nvoc_pbase_MsencContext**__nvoc_pbase_MsencContext*call to __nvoc_init_funcTable_MsencContext*call to __nvoc_init_funcTable_MsencContext_1*call to __nvoc_init_dataField_MsencContext*call to __nvoc_msencctxConstruct*call to __nvoc_msencctxDestruct*call to nvjpgctxDestructHal_KERNEL*call to nvjpgctxConstructHal_KERNEL*arg_pNvjpgContext*call to __nvoc_objCreate_NvjpgContext*generated/g_kernel_nvjpg_ctx_nvoc.c**generated/g_kernel_nvjpg_ctx_nvoc.c*call to __nvoc_init__NvjpgContext*call to __nvoc_ctor_NvjpgContext*__nvoc_pbase_NvjpgContext**__nvoc_pbase_NvjpgContext*call to __nvoc_init_funcTable_NvjpgContext*call to __nvoc_init_funcTable_NvjpgContext_1*call to __nvoc_init_dataField_NvjpgContext*call to __nvoc_nvjpgctxConstruct*call to __nvoc_nvjpgctxDestruct*call to __nvoc_objCreate_KernelNvlink*generated/g_kernel_nvlink_nvoc.c**generated/g_kernel_nvlink_nvoc.c*call to __nvoc_init__KernelNvlink*call to __nvoc_ctor_KernelNvlink*__nvoc_pbase_KernelNvlink**__nvoc_pbase_KernelNvlink*call to 
__nvoc_init_funcTable_KernelNvlink*call to __nvoc_init_funcTable_KernelNvlink_1*__knvlinkIsPresent__*__knvlinkSetDirectConnectBaseAddress__*__knvlinkSetUniqueFabricBaseAddress__*__knvlinkClearUniqueFabricBaseAddress__*__knvlinkSetUniqueFabricEgmBaseAddress__*__knvlinkClearUniqueFabricEgmBaseAddress__*__knvlinkHandleFaultUpInterrupt__*__knvlinkValidateFabricBaseAddress__*__knvlinkValidateFabricEgmBaseAddress__*__knvlinkGetConnectedLinksMask__*__knvlinkEnableLinksPostTopology__*__knvlinkOverrideConfig__*__knvlinkFilterBridgeLinks__*__knvlinkGetUniquePeerIdMask__*__knvlinkGetUniquePeerId__*__knvlinkRemoveMapping__*__knvlinkGetP2POptimalCEs__*__knvlinkConstructHal__*__knvlinkStatePostLoadHal__*__knvlinkSetupPeerMapping__*__knvlinkProgramLinkSpeed__*__knvlinkApplyNvswitchDegradedModeSettings__*__knvlinkPoweredUpForD3__*__knvlinkIsAliSupported__*__knvlinkPostSetupNvlinkPeer__*__knvlinkDiscoverPostRxDetLinks__*__knvlinkLogAliDebugMessages__*__knvlinkDumpCallbackRegister__*__knvlinkGetEffectivePeerLinkMask__*__knvlinkGetNumLinksToBeReducedPerIoctrl__*__knvlinkIsBandwidthModeOff__*__knvlinkIsBwModeSupported__*__knvlinkGetHshubSupportedRbmModes__*__knvlinkPostSchedulingEnableCallbackRegister__*__knvlinkTriggerProbeRequest__*__knvlinkPostSchedulingEnableCallbackUnregister__*__knvlinkGetSupportedBwMode__*__knvlinkDirectConnectCheck__*__knvlinkIsGpuReducedNvlinkConfig__*__knvlinkIsFloorSweepingNeeded__*__knvlinkCoreGetDevicePciInfo__*__knvlinkGetSupportedCounters__*__knvlinkGetSupportedCoreLinkStateMask__*__knvlinkGetEncryptionBits__*__knvlinkIsNvleEnabled__*__knvlinkEncryptionGetUpdateGpuIdentifiers__*__knvlinkEncryptionUpdateTopology__*call to 
__nvoc_init_dataField_KernelNvlink*PDB_PROP_KNVLINK_ENABLED*PDB_PROP_KNVLINK_RESET_HSHUBNVL_ON_TEARDOWN*PDB_PROP_KNVLINK_UNSET_NVLINK_PEER_SUPPORTED*PDB_PROP_KNVLINK_CONFIG_REQUIRE_INITIALIZED_LINKS_CHECK*PDB_PROP_KNVLINK_LANE_SHUTDOWN_ENABLED*PDB_PROP_KNVLINK_LANE_SHUTDOWN_ON_UNLOAD*PDB_PROP_KNVLINK_LINKRESET_AFTER_SHUTDOWN*PDB_PROP_KNVLINK_BUG2274645_RESET_FOR_RTD3_FGC6*PDB_PROP_KNVLINK_L2_POWER_STATE_FOR_LONG_IDLE*PDB_PROP_KNVLINK_WAR_BUG_3471679_PEERID_FILTERING*PDB_PROP_KNVLINK_MINION_GFW_BOOT*PDB_PROP_KNVLINK_SYSMEM_SUPPORT_ENABLED*PDB_PROP_KNVLINK_NVSWITCH_SUPPORTS_SLI*PDB_PROP_KNVLINK_UNCONTAINED_ERROR_RECOVERY_SUPPORTED*PDB_PROP_KNVLINK_ENCRYPTION_ENABLED*PDB_PROP_KNVLINK_RBM_LINK_COUNT_ENABLED*PDB_PROP_KNVLINK_UNILATERAL_LINK_STATE_CHANGE_SUPPORTED*vidmemDirectConnectBaseAddr*call to knvlinkDestruct_IMPL*call to knvlinkIsPresent_DISPATCH*call to knvlinkStatePostUnload_DISPATCH*call to knvlinkStateUnload_DISPATCH*call to knvlinkStatePostLoad_DISPATCH*call to knvlinkStateLoad_DISPATCH*call to knvlinkStatePreInitLocked_DISPATCH*call to knvlinkConstructEngine_DISPATCH*call to ofactxDestructHal_KERNEL*call to ofactxConstructHal_KERNEL*arg_pOfaContext*call to __nvoc_objCreate_OfaContext*generated/g_kernel_ofa_ctx_nvoc.c**generated/g_kernel_ofa_ctx_nvoc.c*call to __nvoc_init__OfaContext*call to __nvoc_ctor_OfaContext*__nvoc_pbase_OfaContext**__nvoc_pbase_OfaContext*call to __nvoc_init_funcTable_OfaContext*call to __nvoc_init_funcTable_OfaContext_1*call to __nvoc_init_dataField_OfaContext*call to __nvoc_ofactxConstruct*call to __nvoc_ofactxDestruct*call to __nvoc_objCreate_KernelRc*generated/g_kernel_rc_nvoc.c**generated/g_kernel_rc_nvoc.c*call to __nvoc_init__KernelRc*call to __nvoc_ctor_KernelRc*__nvoc_pbase_KernelRc**__nvoc_pbase_KernelRc*call to __nvoc_init_funcTable_KernelRc*call to __nvoc_init_funcTable_KernelRc_1*__krcWatchdogInit__*__krcWatchdogInitPushbuffer__*__krcWatchdog__*__krcWatchdogRecovery__*call to __nvoc_init_dataField_KernelRc*call to 
krcConstructEngine_DISPATCH*generated/g_kernel_sec2_nvoc.h**generated/g_kernel_sec2_nvoc.h*ppDesc**ppDesc*ppImg**ppImg*call to __nvoc_objCreate_KernelSec2*generated/g_kernel_sec2_nvoc.c**generated/g_kernel_sec2_nvoc.c*call to __nvoc_init__KernelSec2*call to __nvoc_ctor_KernelSec2*__nvoc_pbase_KernelSec2**__nvoc_pbase_KernelSec2*call to __nvoc_init_funcTable_KernelSec2*call to __nvoc_init_funcTable_KernelSec2_1*__ksec2ConfigureFalcon__*__ksec2ResetHw__*__ksec2StateLoad__*__ksec2StateDestroy__*__ksec2ReadUcodeFuseVersion__*__ksec2GetBinArchiveBlUcode__*__ksec2GetGenericBlUcode__*__ksec2GetBinArchiveSecurescrubUcode__*__ksec2SetupGspImages__*__ksec2PrepareBootCommands__*__ksec2SafeToSendBootCommands__*__ksec2SendBootCommands__*__ksec2PrepareAndSendBootCommands__*__ksec2CanSendPacket__*__ksec2GetMaxSendPacketSize__*__ksec2CreateNvdmHeader__*__ksec2CreateMctpHeader__*__ksec2SendPacket__*__ksec2WaitForGspTargetMaskReleased__*__ksec2ReadPacket__*__ksec2IsResponseAvailable__*__ksec2GspFmcIsEnforced__*__ksec2WaitForSecureBoot__*__ksec2GetMaxRecvPacketSize__*__ksec2NvdmToSeid__*__ksec2GetPacketInfo__*__ksec2ValidateMctpPayloadHeader__*__ksec2ProcessNvdmMessage__*__ksec2ProcessCommandResponse__*__ksec2DumpDebugState__*__ksec2ErrorCode2NvStatusMap__*__ksec2RegRead__*__ksec2RegWrite__*__ksec2MaskDmemAddr__*__ksec2ReadDmem__*__ksec2GetScratchOffsets__*__ksec2GetWFL0Offset__*call to __nvoc_init_dataField_KernelSec2*PDB_PROP_KSEC2_BOOT_GSPFMC*PDB_PROP_KSEC2_RM_BOOT_GSP*call to ksec2Destruct_IMPL*call to ksec2StateDestroy_DISPATCH*call to ksec2StateLoad_DISPATCH*call to ksec2ResetHw_DISPATCH*call to ksec2StateUnload_DISPATCH*call to ksec2ServiceNotificationInterrupt_DISPATCH*call to ksec2RegisterIntrService_DISPATCH*call to ksec2ConstructEngine_DISPATCH**SMDBG_EXCEPTION_TYPE_FATAL**SMDBG_EXCEPTION_TYPE_TRAP**SMDBG_EXCEPTION_TYPE_SINGLE_STEP**SMDBG_EXCEPTION_TYPE_INT**SMDBG_EXCEPTION_TYPE_CILP**SMDBG_EXCEPTION_TYPE_PREEMPTION_STARTED**SMDBG_EXCEPTION_TYPE__UNKNOWN*call to 
__nvoc_init_funcTable_OBJUVM_1*__uvmInitAccessCntrBuffer__*__uvmDestroyAccessCntrBuffer__*__uvmAccessCntrBufferUnregister__*__uvmAccessCntrBufferRegister__*__uvmUnloadAccessCntrBuffer__*__uvmSetupAccessCntrBuffer__*__uvmReadAccessCntrBufferPutPtr__*__uvmReadAccessCntrBufferGetPtr__*__uvmReadAccessCntrBufferFullPtr__*__uvmAccessCntrSetGranularity__*__uvmAccessCntrSetThreshold__*__uvmAccessCntrSetCounterLimit__*__uvmWriteAccessCntrBufferGetPtr__*__uvmEnableAccessCntr__*__uvmDisableAccessCntr__*__uvmEnableAccessCntrIntr__*__uvmDisableAccessCntrIntr__*__uvmGetAccessCntrRegisterMappings__*__uvmAccessCntrService__*__uvmGetAccessCounterBufferSize__*__uvmProgramWriteAccessCntrBufferAddress__*__uvmProgramAccessCntrBufferEnabled__*__uvmIsAccessCntrBufferEnabled__*__uvmIsAccessCntrBufferPushed__*__uvmGetRegOffsetAccessCntrBufferPut__*__uvmGetRegOffsetAccessCntrBufferGet__*__uvmGetRegOffsetAccessCntrBufferHi__*__uvmGetRegOffsetAccessCntrBufferLo__*__uvmGetRegOffsetAccessCntrBufferConfig__*__uvmGetRegOffsetAccessCntrBufferInfo__*__uvmGetRegOffsetAccessCntrBufferSize__*call to __nvoc_init_dataField_OBJUVM*accessCounterBufferCount*call to uvmServiceInterrupt_DISPATCH*call to uvmRegisterIntrService_DISPATCH*call to uvmStateInitUnlocked_DISPATCH*call to uvmStateDestroy_DISPATCH*call to __nvoc_objCreate_UvmSwObject*generated/g_uvm_sw_nvoc.c**generated/g_uvm_sw_nvoc.c*call to __nvoc_init__UvmSwObject*call to __nvoc_ctor_UvmSwObject*__nvoc_pbase_UvmSwObject**__nvoc_pbase_UvmSwObject*call to __nvoc_init_funcTable_UvmSwObject*call to __nvoc_init_funcTable_UvmSwObject_1*call to __nvoc_init_dataField_UvmSwObject*call to uvmswConstruct_IMPL*call to uvmswDestruct_IMPL*call to uvmswGetSwMethods_DISPATCH*call to __nvoc_objCreate_VaSpaceApi*generated/g_vaspace_api_nvoc.c**generated/g_vaspace_api_nvoc.c*call to __nvoc_init__VaSpaceApi*call to __nvoc_ctor_VaSpaceApi*__nvoc_pbase_VaSpaceApi**__nvoc_pbase_VaSpaceApi*call to __nvoc_init_funcTable_VaSpaceApi*call to 
__nvoc_init_funcTable_VaSpaceApi_1*call to __nvoc_init_dataField_VaSpaceApi*call to vaspaceapiConstruct_IMPL*call to vaspaceapiDestruct_IMPL*call to vaspaceapiCanCopy_DISPATCH*call to __nvoc_init_funcTable_OBJVASPACE*call to __nvoc_init_funcTable_OBJVASPACE_1*call to __nvoc_init_dataField_OBJVASPACE*call to __nvoc_objCreate_VblankCallback*generated/g_vblank_callback_nvoc.c**generated/g_vblank_callback_nvoc.c*call to __nvoc_init__VblankCallback*call to __nvoc_ctor_VblankCallback*__nvoc_pbase_VblankCallback**__nvoc_pbase_VblankCallback*call to __nvoc_init_funcTable_VblankCallback*call to __nvoc_init_funcTable_VblankCallback_1*call to __nvoc_init_dataField_VblankCallback*call to vblcbConstruct_IMPL*call to vblcbDestruct_IMPL*call to __nvoc_objCreate_VgpuApi*generated/g_vgpuapi_nvoc.c**generated/g_vgpuapi_nvoc.c*call to __nvoc_init__VgpuApi*call to __nvoc_ctor_VgpuApi*__nvoc_pbase_VgpuApi**__nvoc_pbase_VgpuApi*call to __nvoc_init_funcTable_VgpuApi*call to __nvoc_init_funcTable_VgpuApi_1*call to __nvoc_init_dataField_VgpuApi*call to vgpuapiConstruct_IMPL*call to vgpuapiDestruct_IMPL*call to __nvoc_objCreate_VgpuConfigApi*generated/g_vgpuconfigapi_nvoc.c**generated/g_vgpuconfigapi_nvoc.c*call to __nvoc_init__VgpuConfigApi*call to __nvoc_ctor_VgpuConfigApi*__nvoc_pbase_VgpuConfigApi**__nvoc_pbase_VgpuConfigApi*call to __nvoc_init_funcTable_VgpuConfigApi*call to __nvoc_init_funcTable_VgpuConfigApi_1*call to __nvoc_init_dataField_VgpuConfigApi*call to vgpuconfigapiConstruct_IMPL*call to vgpuconfigapiDestruct_IMPL*call to __nvoc_objCreate_VideoMemory*generated/g_video_mem_nvoc.c**generated/g_video_mem_nvoc.c*call to __nvoc_init__VideoMemory*call to __nvoc_ctor_VideoMemory*__nvoc_pbase_VideoMemory**__nvoc_pbase_VideoMemory*call to __nvoc_init_funcTable_VideoMemory*call to __nvoc_init_funcTable_VideoMemory_1*call to __nvoc_init_dataField_VideoMemory*call to vidmemConstruct_IMPL*call to vidmemDestruct_IMPL*call to vidmemCheckCopyPermissions_DISPATCH*call to 
vidmemAccessBitBufConstructHelper_56cd7a*arg_pVidmemAccessBitBuffer*call to __nvoc_objCreate_VidmemAccessBitBuffer*generated/g_vidmem_access_bit_buffer_nvoc.c**generated/g_vidmem_access_bit_buffer_nvoc.c*call to __nvoc_init__VidmemAccessBitBuffer*call to __nvoc_ctor_VidmemAccessBitBuffer*__nvoc_pbase_VidmemAccessBitBuffer**__nvoc_pbase_VidmemAccessBitBuffer*call to __nvoc_init_funcTable_VidmemAccessBitBuffer*call to __nvoc_init_funcTable_VidmemAccessBitBuffer_1*call to __nvoc_init_dataField_VidmemAccessBitBuffer*call to __nvoc_vidmemAccessBitBufConstruct*call to vidmemAccessBitBufDestruct_b3696a*call to __nvoc_objCreate_VirtMemAllocator*generated/g_virt_mem_allocator_nvoc.c**generated/g_virt_mem_allocator_nvoc.c*call to __nvoc_init__VirtMemAllocator*call to __nvoc_ctor_VirtMemAllocator*__nvoc_pbase_VirtMemAllocator**__nvoc_pbase_VirtMemAllocator*call to __nvoc_init_funcTable_VirtMemAllocator*call to __nvoc_init_funcTable_VirtMemAllocator_1*__dmaAllocMapping__*__dmaFreeMapping__*__dmaAllocBar1P2PMapping__*__dmaFreeBar1P2PMapping__*__dmaIsDefaultGpuUncached__*__dmaUpdateVASpace__*__dmaXlateVAtoPAforChannel__*__dmaGetPTESize__*__dmaMapBuffer__*__dmaUnmapBuffer__*call to __nvoc_init_dataField_VirtMemAllocator*bDmaShaderAccessSupported*bDmaIsSupportedSparseVirtual*bDmaEnforce32BitPointer*bDmaEnableFullCompTagLine*bDmaMultipleVaspaceSupported*call to dmaDestruct_GM107*call to dmaStatePostLoad_DISPATCH*call to dmaStateInitLocked_DISPATCH*call to dmaConstructEngine_DISPATCH*call to __nvoc_objCreate_OBJVMM*generated/g_virt_mem_mgr_nvoc.c**generated/g_virt_mem_mgr_nvoc.c*call to __nvoc_init__OBJVMM*call to __nvoc_ctor_OBJVMM*__nvoc_pbase_OBJVMM**__nvoc_pbase_OBJVMM*call to __nvoc_init_funcTable_OBJVMM*call to __nvoc_init_funcTable_OBJVMM_1*call to __nvoc_init_dataField_OBJVMM*call to __nvoc_objCreate_VirtualMemoryRange*generated/g_virt_mem_range_nvoc.c**generated/g_virt_mem_range_nvoc.c*__nvoc_base_VirtualMemory*call to __nvoc_init__VirtualMemoryRange*call to 
__nvoc_ctor_VirtualMemoryRange*__nvoc_pbase_VirtualMemory**__nvoc_pbase_VirtualMemory*__nvoc_pbase_VirtualMemoryRange**__nvoc_pbase_VirtualMemoryRange*call to __nvoc_init__VirtualMemory*metadata__VirtualMemory*call to __nvoc_init_funcTable_VirtualMemoryRange*call to __nvoc_init_funcTable_VirtualMemoryRange_1*call to __nvoc_ctor_VirtualMemory*call to __nvoc_init_dataField_VirtualMemoryRange*call to vmrangeConstruct_IMPL*call to __nvoc_dtor_VirtualMemory*call to virtmemIsPartialUnmapSupported_DISPATCH*call to virtmemUnmapFrom_DISPATCH*call to virtmemMapTo_DISPATCH*call to __nvoc_objCreate_VirtualMemory*generated/g_virtual_mem_nvoc.c**generated/g_virtual_mem_nvoc.c*call to __nvoc_init_funcTable_VirtualMemory*call to __nvoc_init_funcTable_VirtualMemory_1*call to __nvoc_init_dataField_VirtualMemory*call to virtmemConstruct_IMPL*call to virtmemDestruct_IMPL*call to zbcapiConstructHal_56cd7a*arg_pZbcApi*generated/g_zbc_api_nvoc.h**generated/g_zbc_api_nvoc.h*pGetZBCClearTableSizeParams*call to __nvoc_objCreate_ZbcApi*generated/g_zbc_api_nvoc.c**generated/g_zbc_api_nvoc.c*call to __nvoc_init__ZbcApi*call to __nvoc_ctor_ZbcApi*__nvoc_pbase_ZbcApi**__nvoc_pbase_ZbcApi*call to __nvoc_init_funcTable_ZbcApi*call to __nvoc_init_funcTable_ZbcApi_1*__zbcapiCtrlCmdGetZbcClearTableSize__*call to __nvoc_init_dataField_ZbcApi*call to __nvoc_zbcapiConstruct*call to zbcapiDestruct_b3696a*pageArrayBase**pageArrayBase***pageArrayBase*pageArraySize*pageNumberList**pageNumberList***pageNumberList*flagsOs02*hHwResClient*hHwResDevice*hHwResHandle*pteAdjust*comprcovg*zcullcovg*ctagOffset*virtAllocParams*call to _rmAllocGetHeapSize*call to RmDeprecatedFindOrCreateSubDeviceHandle*fbInfoParams*fbInfoListSize**fbInfoList*call to RmDeprecatedConvertOs02ToOs32Flags*call to _ctrlparamsTokenInit*call to _ctrlparamsTokenAddEmbeddedPtr*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_FB_GET_INFO_PARAMS, pArgs, fbInfoList, fbInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_FB_INFO, 
maxListSize)*interface/deprecated/rmapi_deprecated_control.c**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_FB_GET_INFO_PARAMS, pArgs, fbInfoList, fbInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_FB_INFO, maxListSize)**interface/deprecated/rmapi_deprecated_control.c*call to ctrlparamAcquire*pParams2**pParams2*NVRM: No memory for pParams2 **NVRM: No memory for pParams2 *call to ctrlparamRelease*NVRM: pParams2 static array too small **NVRM: pParams2 static array too small *pContextInternal*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV0080_CTRL_MSENC_GET_CAPS_PARAMS, pArgs, capsTbl, capsTblSize, ((NvBool)(0 == 0)), NvU8, maxListSize)**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV0080_CTRL_MSENC_GET_CAPS_PARAMS, pArgs, capsTbl, capsTblSize, ((NvBool)(0 == 0)), NvU8, maxListSize)*NVRM: pParams capsTblSize %d invalid **NVRM: pParams capsTblSize %d invalid *instanceId*NVRM: No memory for pParams **NVRM: No memory for pParams *call to RmCopyUserForDeprecatedApi*RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYIN, RMAPI_DEPRECATED_BUFFER_EMPLACE, pArgs->params, sizeof(*pParams), (void**)&pParams, pSecInfo->paramLocation == PARAM_LOCATION_USER)**RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYIN, RMAPI_DEPRECATED_BUFFER_EMPLACE, pArgs->params, sizeof(*pParams), (void**)&pParams, pSecInfo->paramLocation == PARAM_LOCATION_USER)*bDebugValues*RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYIN, RMAPI_DEPRECATED_BUFFER_EMPLACE, pParams->pFeatureDebugValues, sizeof(pParams2->featureDebugValues), (void**)&pParams2->featureDebugValues, pSecInfo->paramLocation != PARAM_LOCATION_KERNEL)**RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYIN, RMAPI_DEPRECATED_BUFFER_EMPLACE, pParams->pFeatureDebugValues, sizeof(pParams2->featureDebugValues), (void**)&pParams2->featureDebugValues, pSecInfo->paramLocation != PARAM_LOCATION_KERNEL)*RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYOUT, RMAPI_DEPRECATED_BUFFER_EMPLACE, pParams->pFeatureDebugValues, sizeof(pParams2->featureDebugValues), 
(void**)&pParams2->featureDebugValues, pSecInfo->paramLocation != PARAM_LOCATION_KERNEL)**RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYOUT, RMAPI_DEPRECATED_BUFFER_EMPLACE, pParams->pFeatureDebugValues, sizeof(pParams2->featureDebugValues), (void**)&pParams2->featureDebugValues, pSecInfo->paramLocation != PARAM_LOCATION_KERNEL)*RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYOUT, RMAPI_DEPRECATED_BUFFER_EMPLACE, pArgs->params, sizeof(*pParams), (void**)&pParams, pSecInfo->paramLocation == PARAM_LOCATION_USER)**RmCopyUserForDeprecatedApi(RMAPI_DEPRECATED_COPYOUT, RMAPI_DEPRECATED_BUFFER_EMPLACE, pArgs->params, sizeof(*pParams), (void**)&pParams, pSecInfo->paramLocation == PARAM_LOCATION_USER)*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_GPU_GET_INFO_PARAMS, pArgs, gpuInfoList, gpuInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_GPU_INFO, maxListSize)**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_GPU_GET_INFO_PARAMS, pArgs, gpuInfoList, gpuInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_GPU_INFO, maxListSize)*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV0080_CTRL_BSP_GET_CAPS_PARAMS, pArgs, capsTbl, capsTblSize, ((NvBool)(0 == 0)), NvU8, maxListSize)**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV0080_CTRL_BSP_GET_CAPS_PARAMS, pArgs, capsTbl, capsTblSize, ((NvBool)(0 == 0)), NvU8, maxListSize)*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_BUS_GET_INFO_PARAMS, pArgs, busInfoList, busInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_BUS_INFO, maxListSize)**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_BUS_GET_INFO_PARAMS, pArgs, busInfoList, busInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_BUS_INFO, maxListSize)*CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_BIOS_GET_INFO_PARAMS, pArgs, biosInfoList, biosInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_BIOS_INFO, maxListSize)**CTRL_PARAMS_TOKEN_ADD_EMBEDDED(token, NV2080_CTRL_BIOS_GET_INFO_PARAMS, pArgs, biosInfoList, biosInfoListSize, ((NvBool)(0 == 0)), NV2080_CTRL_BIOS_INFO, 
maxListSize)*driverVersionBuffer**driverVersionBuffer*driverVersionBufferLen*versionBuffer**versionBuffer*versionBufferLen*titleBuffer**titleBuffer*titleBufferLen*maxSizeOfStrings*pDriverVersionBuffer**pDriverVersionBuffer*pVersionBuffer**pVersionBuffer*pTitleBuffer**pTitleBuffer*sizeOfStrings*call to rmapiParamsCopyOut*NVRM: Unable to copy out build version info to User Space. **NVRM: Unable to copy out build version info to User Space. *changelistNumber*officialChangelistNumber**pToken*paramCount*serverGetClientUnderLock(&g_resServ, pArgs->hClient, &pClient)**serverGetClientUnderLock(&g_resServ, pArgs->hClient, &pClient)*call to gpuGetByHandle*IRQL and BYPASS_LOCK control calls currently unsupported for deprecated control calls**IRQL and BYPASS_LOCK control calls currently unsupported for deprecated control calls*gssLegacyVgpuCall*call to IsGssLegacyCall***phClients***phDevices***phChannels*paramStructPtr**paramStructPtr*orginalEmbeddedPtr**orginalEmbeddedPtr***orginalEmbeddedPtr*dataBuffSize**pControlParams*statusTmp*call to RmDeprecatedGetClassID*call to RmDeprecatedGetHandleParent*allocVirtualParams*vblankArgs***pProc*LogicalHead***pParm1***pParm2*pHObject*findParams*pHSubDevice*classIdParams*parentParams*HwFree*HwAlloc*allochMemory*comprCovg*zcullCovg*bindResultFunc**bindResultFunc***bindResultFunc***pHandle*osDeviceHandle*compPageShift*compressedKind*compTagLineMin*compPageIndexLo*compPageIndexHi*compTagLineMultiplier*uncompressedKind*allocAddr*hResourceHandle*allocType*allocInputFlags*allocHeight*allocWidth*allocPitch*allocMask*allocComprCovg*allocZcullCovg*AllocHintAlignment*alignType*alignAttr*alignInputFlags*alignSize*alignHeight*alignWidth*alignPitch*alignMask*alignKind*alignAdjust*alignAttr2*ReleaseCompr*updateParams*Info*Free*AllocTiledPitchHeight***address*partitionStride*call to _rmVidHeapControlAllocCommon*AllocSizeRange*(pArgs->cmd & RM_GSS_LEGACY_MASK)*interface/deprecated/rmapi_gss_legacy_control.c**(pArgs->cmd & 
RM_GSS_LEGACY_MASK)**interface/deprecated/rmapi_gss_legacy_control.c***pKernelParams*call to portMemExCopyFromUser*bApiLockTaken*gpuGetByHandle(pClient, pArgs->hObject, NULL, &pGpu)**gpuGetByHandle(pClient, pArgs->hObject, NULL, &pGpu)*pRpc != NULL**pRpc != NULL*call to portMemExCopyToUser*call to finnDeserializeInternal_FINN_RM_API**api*call to finnSerializeInternal_FINN_RM_API*call to finn_read_buffer*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_QUICK_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BYTE_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BLOCK_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BUFFER_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BYTE_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_WORD_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BLOCK_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_PROCESS_CALL*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BLOCK_PROCESS_CALL*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_MULTIBYTE_REGISTER_BLOCK_RW*call to finnDeserializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_READ_EDID_DDC*call to finn_write_buffer*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_QUICK_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BYTE_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BLOCK_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_I2C_BUFFER_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BYTE_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_WORD_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BLOCK_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_PROCESS_CALL*call to 
finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_BLOCK_PROCESS_CALL*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_SMBUS_MULTIBYTE_REGISTER_BLOCK_RW*call to finnSerializeRecord_NV402C_CTRL_I2C_TRANSACTION_DATA_READ_EDID_DDC*warFlags*writeMessage*readMessage*indexLength*writeMessageLength*readMessageLength**writeMessage**readMessage*segmentNumber***pSamples*engineCtxBuff**engineCtxBuff***pEngineCtxBuff*call to finnDeserializeMessage_NVB06F_CTRL_SAVE_ENGINE_CTX_DATA_PARAMS*call to finnSerializeMessage_NVB06F_CTRL_SAVE_ENGINE_CTX_DATA_PARAMS*call to finnBadEnum_NV402C_CTRL_I2C_TRANSACTION_TYPE*call to finnDeserializeUnion_NV402C_CTRL_I2C_TRANSACTION_DATA*call to finnSerializeUnion_NV402C_CTRL_I2C_TRANSACTION_DATA*bIsWrite*virtAddress***bufferPtr*encrClientID*startTimestamp*stopTimestamp***engineList*capsTblSize***capsTbl*globIndex*globSource*retBufOffset*retSize*totalObjSize***retBuf***pChannelHandleList***pChannelList*pdeIndex*call to finnDeserializeRecord_NV0080_CTRL_DMA_UPDATE_PDE_2_PAGE_TABLE_PARAMS*ptParams**ptParams***pPdeBuffer*call to finnSerializeRecord_NV0080_CTRL_DMA_UPDATE_PDE_2_PAGE_TABLE_PARAMS*call to finnDeserializeMessage_NV402C_CTRL_I2C_INDEXED_PARAMS*api_intf*call to finnDeserializeMessage_NV402C_CTRL_I2C_TRANSACTION_PARAMS*call to finnSerializeMessage_NV402C_CTRL_I2C_INDEXED_PARAMS*call to finnSerializeMessage_NV402C_CTRL_I2C_TRANSACTION_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_RC_READ_VIRTUAL_MEM_PARAMS*call to finnSerializeMessage_NV2080_CTRL_RC_READ_VIRTUAL_MEM_PARAMS*call to finnDeserializeRecord_NV2080_CTRL_GPUMON_SAMPLES*call to finnSerializeRecord_NV2080_CTRL_GPUMON_SAMPLES*call to finnDeserializeMessage_NV2080_CTRL_NVD_GET_DUMP_PARAMS*call to finnSerializeMessage_NV2080_CTRL_NVD_GET_DUMP_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_I2C_ACCESS_PARAMS*call to finnSerializeMessage_NV2080_CTRL_I2C_ACCESS_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_GPU_GET_ENGINES_PARAMS*call to 
finnDeserializeMessage_NV2080_CTRL_GPU_GET_ENGINE_CLASSLIST_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_GPU_RPC_GSP_TEST_PARAMS*call to finnSerializeMessage_NV2080_CTRL_GPU_GET_ENGINES_PARAMS*call to finnSerializeMessage_NV2080_CTRL_GPU_GET_ENGINE_CLASSLIST_PARAMS*call to finnSerializeMessage_NV2080_CTRL_GPU_RPC_GSP_TEST_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_CE_GET_CAPS_PARAMS*call to finnSerializeMessage_NV2080_CTRL_CE_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV2080_CTRL_BIOS_GET_NBSI_OBJ_PARAMS*call to finnSerializeMessage_NV2080_CTRL_BIOS_GET_NBSI_OBJ_PARAMS*call to finnDeserializeMessage_NV0000_CTRL_NVD_GET_DUMP_PARAMS*call to finnSerializeMessage_NV0000_CTRL_NVD_GET_DUMP_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_MSENC_GET_CAPS_PARAMS*call to finnSerializeMessage_NV0080_CTRL_MSENC_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_HOST_GET_CAPS_PARAMS*call to finnSerializeMessage_NV0080_CTRL_HOST_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_GR_GET_CAPS_PARAMS*call to finnSerializeMessage_NV0080_CTRL_GR_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_GPU_GET_CLASSLIST_PARAMS*call to finnSerializeMessage_NV0080_CTRL_GPU_GET_CLASSLIST_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_FIFO_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_FIFO_GET_CHANNELLIST_PARAMS*call to finnSerializeMessage_NV0080_CTRL_FIFO_GET_CAPS_PARAMS*call to finnSerializeMessage_NV0080_CTRL_FIFO_GET_CHANNELLIST_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_FB_GET_CAPS_PARAMS*call to finnSerializeMessage_NV0080_CTRL_FB_GET_CAPS_PARAMS*call to finnDeserializeMessage_NV0080_CTRL_DMA_UPDATE_PDE_2_PARAMS*call to finnSerializeMessage_NV0080_CTRL_DMA_UPDATE_PDE_2_PARAMS*call to finnDeserializeMessage_NVB06F_CTRL_GET_ENGINE_CTX_DATA_PARAMS*call to finnDeserializeMessage_NVB06F_CTRL_CMD_RESTORE_ENGINE_CTX_DATA_FINN_PARAMS*call to finnSerializeMessage_NVB06F_CTRL_GET_ENGINE_CTX_DATA_PARAMS*call to 
finnSerializeMessage_NVB06F_CTRL_CMD_RESTORE_ENGINE_CTX_DATA_FINN_PARAMS*call to finnDeserializeMessage_NV83DE_CTRL_DEBUG_READ_MEMORY_PARAMS*call to finnDeserializeMessage_NV83DE_CTRL_DEBUG_WRITE_MEMORY_PARAMS*call to finnSerializeMessage_NV83DE_CTRL_DEBUG_READ_MEMORY_PARAMS*call to finnSerializeMessage_NV83DE_CTRL_DEBUG_WRITE_MEMORY_PARAMS*call to finnUnserializedInterfaceSize_FINN_NV01_ROOT_NVD*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_DMA*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_FB*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_FIFO*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_GPU*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_GR*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_HOST*call to finnUnserializedInterfaceSize_FINN_NV01_DEVICE_0_MSENC*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_BIOS*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_CE*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_GPU*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_I2C*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_NVD*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_PERF*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_0_RC*call to finnUnserializedInterfaceSize_FINN_NV20_SUBDEVICE_DIAG_GPU*call to finnUnserializedInterfaceSize_FINN_NV40_I2C_I2C*call to finnUnserializedInterfaceSize_FINN_GT200_DEBUGGER_DEBUG*call to finnUnserializedInterfaceSize_FINN_MAXWELL_CHANNEL_GPFIFO_A_GPFIFO*call to finn_open_buffer_for_write*call to finnSerializeRoot_FINN_RM_API*call to finn_close_buffer_for_write*buffer_position**buffer_position*call to finnDeserializeInterface_FINN_NV01_ROOT_NVD*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_DMA*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_FB*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_FIFO*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_GPU*call to 
finnDeserializeInterface_FINN_NV01_DEVICE_0_GR*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_HOST*call to finnDeserializeInterface_FINN_NV01_DEVICE_0_MSENC*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_BIOS*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_CE*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_GPU*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_I2C*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_NVD*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_PERF*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_0_RC*call to finnDeserializeInterface_FINN_NV20_SUBDEVICE_DIAG_GPU*call to finnDeserializeInterface_FINN_NV40_I2C_I2C*call to finnDeserializeInterface_FINN_GT200_DEBUGGER_DEBUG*call to finnDeserializeInterface_FINN_MAXWELL_CHANNEL_GPFIFO_A_GPFIFO*src_max**src_max*call to finn_open_buffer_for_read*call to finnDeserializeRoot_FINN_RM_API*call to finnSerializeInterface_FINN_NV01_ROOT_NVD*call to finnSerializeInterface_FINN_NV01_DEVICE_0_DMA*call to finnSerializeInterface_FINN_NV01_DEVICE_0_FB*call to finnSerializeInterface_FINN_NV01_DEVICE_0_FIFO*call to finnSerializeInterface_FINN_NV01_DEVICE_0_GPU*call to finnSerializeInterface_FINN_NV01_DEVICE_0_GR*call to finnSerializeInterface_FINN_NV01_DEVICE_0_HOST*call to finnSerializeInterface_FINN_NV01_DEVICE_0_MSENC*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_BIOS*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_CE*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_GPU*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_I2C*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_NVD*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_PERF*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_0_RC*call to finnSerializeInterface_FINN_NV20_SUBDEVICE_DIAG_GPU*call to finnSerializeInterface_FINN_NV40_I2C_I2C*call to finnSerializeInterface_FINN_GT200_DEBUGGER_DEBUG*call to 
finnSerializeInterface_FINN_MAXWELL_CHANNEL_GPFIFO_A_GPFIFO*dst_end**dst_end*accumulator*empty_bit_count*sod**end_of_buffer*eob*end_of_data*remaining_bit_count**end_of_data*eod*pCbData**pCbData*call to fabricMulticastCleanupCacheGet_IMPL**call to fabricMulticastCleanupCacheGet_IMPL**pWq*call to osWaitInterruptible*call to fabricMulticastCleanupCacheDelete_IMPL*call to osFreeWaitQueue*multimapCountItems(&pFabric->fabricMulticastCache) == 0*src/kernel/compute/fabric.c**multimapCountItems(&pFabric->fabricMulticastCache) == 0**src/kernel/compute/fabric.c*call to multimapDestroy_IMPL*pMulticastFabricModuleLock*call to portSyncRwLockDestroy*pMulticastFabriCacheLock*call to listCount_IMPL*listCount(&pFabric->fabricEventListV2) == 0**listCount(&pFabric->fabricEventListV2) == 0*multimapCountItems(&pFabric->unimportCache) == 0**multimapCountItems(&pFabric->unimportCache) == 0*multimapCountItems(&pFabric->importCache) == 0**multimapCountItems(&pFabric->importCache) == 0*pFabric->nodeId == NV_FABRIC_INVALID_NODE_ID**pFabric->nodeId == NV_FABRIC_INVALID_NODE_ID*pFabricImportModuleLock*pUnimportCacheLock*pImportCacheLock*pListLock*call to listDestroy_IMPL*call to listInit_IMPL*call to multimapInit_IMPL*call to portSyncRwLockCreate**pListLock**pImportCacheLock**pUnimportCacheLock**pFabricImportModuleLock*bAllowFabricMemAlloc**pMulticastFabricModuleLock**pMulticastFabriCacheLock*call to portSyncRwLockAcquireWrite*call to _fabricCacheInvokeCallback*call to portSyncRwLockAcquireRead*call to _fabricCacheFind**pMapEntry*call to _fabricCacheDelete*call to _fabricCacheInsert*call to _fabricCacheIterateAll*pExportUuid*call to _fabricCacheClearAll**multimapItemIterRange_IMPL*call to multimapFirstItem_IMPL**call to multimapFirstItem_IMPL*call to multimapLastItem_IMPL**call to multimapLastItem_IMPL**pCache*mmIter*call to multimapItemIterNext_IMPL*call to multimapClear_IMPL*mIter*call to portAtomicExIncrementU64*call to listRemove_IMPL*pEventArray*bMoreEvents**pNode*call to 
_fabricSetUnimportCallback*pEvents*call to _fabricNotifyEvent*call to _fabricUnsetUnimportCallback*pOsEvent**pOsEvent***pOsImexEvent*NVRM: Unable to notify ImexSessionApi **NVRM: Unable to notify ImexSessionApi *call to osSetEvent*call to threadStateGetCurrent***pCbData*call to osAllocWaitQueue*call to fabricUnimportCacheInsert_IMPL*call to threadStateEnqueueCallbackOnFree*call to fabricUnimportCacheDelete_IMPL*call to threadStateRemoveCallbackOnFree*call to fabricUnimportCacheGet_IMPL**call to fabricUnimportCacheGet_IMPL*call to osWakeUp*call to mapRemoveByKey_IMPL*call to mapCount_IMPL*call to multimapRemoveItem_IMPL**call to multimapFindSubmap_IMPL*pSubmap != NULL**pSubmap != NULL*call to multimapRemoveSubmap_IMPL*call to multimapInsertSubmap_IMPL**call to multimapInsertSubmap_IMPL*pInsertedSubmap**pInsertedSubmap**call to multimapInsertItemNew_IMPL*pInsertedEntry**pInsertedEntry*call to _clearFmState*src/kernel/compute/fm_session_api.c*NVRM: Fabric manager state is already set. **src/kernel/compute/fm_session_api.c**NVRM: Fabric manager state is already set. *PDB_PROP_SYS_FABRIC_MANAGER_IS_INITIALIZED*NVRM: Fabric manager state is set. **NVRM: Fabric manager state is set. *call to osRmCapRelease*pFmSessionApi*PDB_PROP_SYS_FABRIC_MANAGER_IS_REGISTERED*pRmClient != NULL**pRmClient != NULL*call to osRmCapInitDescriptor*NVRM: only supported for usermode clients **NVRM: only supported for usermode clients *NVRM: duplicate object creation **NVRM: duplicate object creation *call to osRmCapAcquire*NVRM: insufficient permissions **NVRM: insufficient permissions *NVRM: Capability validation failed **NVRM: Capability validation failed *call to fabricSetFmSessionFlags_IMPL*call to fabricGetFmSessionFlags_IMPL*NVRM: Fabric manager state is already cleared. **NVRM: Fabric manager state is already cleared. *NVRM: Fabric manager state is cleared. **NVRM: Fabric manager state is cleared. 
Compute channel recovery and src/kernel/compute/imex_session_api.c:
  calls: _clearOutstandingComputeChannels, rmGpuLockIsOwner, rmGpuGroupLockIsOwner,
         rcAndDisableOutstandingClientsWithImportedMemory, fabricUnimportCacheInvokeCallback_IMPL,
         fabricExtractEventsV2_IMPL, _checkDanglingExports, fabricDisableMemAlloc_IMPL,
         memoryexportClearCache, fabricGetNodeId_IMPL, fabricSetImexEvent_IMPL, fabricSetNodeId_IMPL,
         fabricFlushUnhandledEvents_IMPL, fabricUnimportCacheIterateAll_IMPL, osUserHandleToKernelPtr,
         fabricEnableMemAlloc_IMPL, rmclientIsCapable_IMPL, _checkClientDesiredImporterAndClearCache,
         _performRcAndDisableChannels, serverutilGetNextClientUnderLock, clientRefIter, clientRefIterNext,
         memoryfabricimportv2RemoveFromCache_IMPL, memorymulticastfabricRemoveFromCache_IMPL
  assertions: rmGpuLockIsOwner(), rmGpuLockIsOwner() || rmGpuGroupLockIsOwner(0, GPU_LOCK_GRP_MASK, &gpuMask),
              *tokenArray, pTokenArray, eventArray, fabricSetImexEvent(pFabric, NULL), pImexOsEvent,
              clientHandles, channelRcInvoked, *ppClient, bMatch
  log: "NVRM: nvidia-fabricmanager daemon shutdown detected, robust channel recovery invoked!"
       "NVRM: nvidia-fabricmanager daemon has invoked robust channel recovery!"
       "NVRM: Failed to recover all compute channels for GPU %d"
       "NVRM: nvidia-imex daemon has invoked robust channel recovery for remote node: %u"
       "NVRM: Abrupt nvidia-imex daemon shutdown detected, disabled fabric allocations"
       "NVRM: Abrupt nvidia-imex daemon shutdown detected, robust channel recovery invoked"
       "NVRM: Invalid input value"
       "NVRM: Invalid event handle: 0x%x"
       "NVRM: Unable to set event: 0x%x"
       "NVRM: Failed to disable channels for GPU %d"

src/kernel/core/bin_data.c:
  calls: bindataGetBufferSize, bindataWriteToBuffer, osPagedSegmentAccessCheck, bindataMarkReferenced,
         _bindataGetBindataPtr, utilGzGetData, _bindataWriteStorageToBuffer
  assertions: pIdx, (pIdx != NULL), BINDATA_IS_MUTABLE, pBinArchive,
              pBinStorage != NULL, pBuffer != NULL, pBinStoragePvt, pBinStoragePvt->pData != NULL
  log: "NVRM: bindata memory alloc failed"
       "NVRM: bindataWriteToBuffer failed. Freeing alloced memory, return code %u"
       "NVRM: failed to get inflated data, got %u bytes, expecting %u"

src/kernel/core/hal/hal.c:
  calls: halmgrCreateHal_IMPL, halmgrGetHal_IMPL
  assertions: pMod, pHalSetIfaces, thisHal

RPC HAL interface setup (per architecture):
  calls: rpcHalIfacesSetup_TU102, rpcHalIfacesSetup_GA100, rpcHalIfacesSetup_GA102,
         rpcHalIfacesSetup_AD102, rpcHalIfacesSetup_GB100, rpcHalIfacesSetup_GB202
  assertions: pRpcHal, pTable, ifacesWrapupFn
  interfaces also named here: rpcVgpuGspRingDoorbell, rpcVgpuGspWriteScratchRegister

generated/g_rpc_private.h: after setup, each RPC interface pointer is validated against the
sentinel iGrp_ipVersions_UNASSIGNED; the dump repeats the check
"pRpcHal-><iface> != (void *) iGrp_ipVersions_UNASSIGNED" once per interface. Interfaces checked:
  rpcCtrlFifoSetupVfZombieSubctxPdb, rpcVgpuPfRegRead32, rpcCtrlBusUnsetP2pMapping,
  rpcDumpProtobufComponent, rpcEccNotifierWriteAck, rpcAllocMemory, rpcCtrlDbgReadSingleSmErrorState,
  rpcDisableChannels, rpcGpuExecRegOps, rpcCtrlGpuPromoteCtx, rpcCtrlDbgSetNextStopTriggerType,
  rpcAllocShareDevice, rpcCtrlPreempt, rpcCtrlGpuInitializeCtx, rpcCtrlReservePmAreaSmpc,
  rpcCtrlGpuMigratableOps, rpcCtrlDbgSetModeErrbarDebug, rpcCtrlPmaStreamUpdateGetPut,
  rpcCtrlFabricMemoryDescribe, rpcAllocChannelDma, rpcCtrlSetZbcDepthClear, rpcCtrlResetIsolatedChannel,
  rpcCtrlDmaSetDefaultVaspace, rpcAllocSubdevice, rpcCtrlExecPartitionsExport, rpcFree, rpcDmaControl,
  rpcCtrlDbgClearSingleSmErrorState, rpcUnsetPageDirectory, rpcCtrlReserveCcuProf, rpcGetGspStaticInfo,
  rpcSaveHibernationData, rpcDupObject, rpcGspSetSystemInfo, rpcCtrlPmAreaPcSampler,
  rpcCtrlSubdeviceGetLibosHeapStats, rpcCtrlDbgSetExceptionMask, rpcCtrlSetZbcStencilClear,
  rpcCtrlVaspaceCopyServerReservedPdes, rpcCtrlCmdGetChipletHsCreditPool, rpcCtrlGrCtxswPreemptionBind,
  rpcCtrlAllocPmaStream, rpcCtrlCmdInternalGpuCheckCtsIdValid, rpcCtrlCmdGetHsCreditsMapping,
  rpcCtrlReleaseHes, rpcCtrlReserveHwpmLegacy, rpcCtrlPerfRatedTdpGetStatus,
  rpcCtrlInternalQuiescePmaChannel, rpcCtrlSubdeviceGetVgpuHeapStats, rpcCtrlBusSetP2pMapping,
  rpcCtrlGpuGetInfoV2, rpcCtrlGetHsCredits, rpcCtrlGrSetCtxswPreemptionMode, rpcCtrlB0ccExecRegOps,
  rpcCtrlGrmgrGetGrFsInfo, rpcCtrlGetZbcClearTable, rpcCleanupSurface, rpcCtrlSetTimeslice,
  rpcCtrlGpuQueryEccStatus, rpcCtrlDbgGetModeMmuDebug, rpcCtrlDbgClearAllSmErrorStates,
  rpcCtrlGrSetTpcPartitionMode, rpcCtrlGetTotalHsCredits, rpcCtrlInternalPromoteFaultMethodBuffers,
  rpcCtrlFbGetInfoV2, rpcSetPageDirectory, rpcCtrlGetP2pCapsV2, rpcCtrlNvlinkGetInbandReceivedData,
  rpcCtrlGetCePceMask, rpcCtrlGpuEvictCtx, rpcCtrlGetMmuDebugMode, rpcInvalidateTlb,
  rpcCtrlDbgSetSingleSmSingleStep, rpcUnloadingGuestDriver, rpcGetConsolidatedGrStaticInfo,
  rpcSwitchToVga, rpcCtrlResetChannel, rpcCtrlGpfifoSchedule, rpcSetRegistry,
  rpcCtrlDbgSetModeMmuGccDebug, rpcCtrlGetNvlinkStatus, rpcGetStaticData, rpcCtrlGrGetTpcPartitionMode,
  rpcCtrlStopChannel, rpcCtrlCmdInternalControlGspTrace, rpcSetSurfaceProperties, rpcCtrlReleaseCcuProf,
  rpcCtrlTimerSetGrTickFreq, rpcCtrlGpfifoSetWorkSubmitTokenNotifIndex, rpcAllocEvent,
  rpcCtrlGrPcSamplingMode, rpcCtrlMcServiceInterrupts, rpcCtrlDbgReadAllSmErrorStates,
  rpcCtrlSetZbcColorClear, rpcGetEncoderCapacity, rpcCtrlGetP2pCaps, rpcPerfGetLevelInfo,
  rpcAllocObject, rpcCtrlGpuHandleVfPriFault, rpcRmApiControl, rpcCtrlFabricMemStats,
  rpcCtrlCmdNvlinkInbandSendData, rpcCtrlGrCtxswZcullBind, rpcCtrlInternalMemsysSetZbcReferenced,
  rpcSetupHibernationBuffer, rpcCtrlPerfRatedTdpSetControl, rpcCtrlExecPartitionsCreate,
  rpcCtrlGpfifoGetWorkSubmitToken, rpcIdleChannels, rpcCtrlCmdInternalGpuStartFabricProbe,
  rpcGetBrandCaps, rpcRestoreHibernationData, rpcCtrlFlaSetupInstanceMemBlock,
  rpcCtrlInternalSriovPromotePmaStream, rpcCtrlFbGetFsInfo, rpcCtrlSetChannelInterleaveLevel,
  rpcCtrlDbgResumeContext, rpcAllocRoot, rpcCtrlFifoDisableChannels, rpcCtrlSetHsCredits,
  rpcGetEngineUtilization, rpcCtrlGetZbcClearTableEntry, rpcCtrlNvencSwSessionUpdateInfo,
  rpcCtrlDbgSuspendContext, rpcCtrlGetP2pCapsMatrix, rpcCtrlDbgExecRegOps, rpcCtrlFreePmaStream,
  rpcCtrlSetTsgInterleaveLevel, rpcCtrlMasterGetVirtualFunctionErrorContIntrMask, rpcCtrlReserveHes,
  rpcLog, rpcCtrlDbgGetModeMmuGccDebug, rpcCtrlExecPartitionsDelete, rpcCtrlPerfBoost,
  rpcCtrlDbgSetModeMmuDebug, rpcCtrlFifoSetChannelProperties, rpcCtrlSubdeviceGetP2pCaps,
  rpcUpdateBarPde, rpcCtrlBindPmResources, rpcMapMemoryDma, rpcUpdateGpmGuestBufferInfo,
  rpcCtrlSetVgpuFbUsage, rpcUnmapMemoryDma, rpcSetGuestSystemInfoExt

RPC structure-copy HAL setup (per architecture):
  calls: rpcstructurecopyHalIfacesSetup_TU102, rpcstructurecopyHalIfacesSetup_GA100,
         rpcstructurecopyHalIfacesSetup_AD102, rpcstructurecopyHalIfacesSetup_GB100,
         rpcstructurecopyHalIfacesSetup_GB202
  assertions: pRpcstructurecopyHal

generated/g_rpcstructurecopy_private.h: the same sentinel check,
"pRpcstructurecopyHal-><fn> != (void *) iGrp_ipVersions_UNASSIGNED", is applied to each deserializer:
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_RATE_MODIFIER_V2_PARAMS,
  deserialize_NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_MEMSYS_GET_STATIC_CONFIG_PARAMS,
  deserialize_NV2080_CTRL_GPU_GET_COMPUTE_PROFILES_PARAMS, deserialize_VGPU_P2P_CAPABILITY_PARAMS,
  deserialize_NV2080_CTRL_CMD_BUS_GET_C2C_INFO_PARAMS,
  deserialize_NV2080_CTRL_MC_GET_INTR_CATEGORY_SUBTREE_MAP_PARAMS,
  deserialize_NVA080_CTRL_VGPU_GET_CONFIG_PARAMS, deserialize_NV2080_CTRL_FB_GET_LTC_INFO_FOR_FBP_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_GLOBAL_SM_ORDER_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_GET_DEVICE_INFO_TABLE_PARAMS,
  deserialize_NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, deserialize_NV2080_CTRL_GPU_GET_GID_INFO_PARAMS,
  deserialize_NV90E6_CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_CCU_SAMPLE_INFO_PARAMS,
  deserialize_NVC637_CTRL_EXEC_PARTITIONS_GET_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_INFO_PARAMS,
  deserialize_VGPU_FB_GET_DYNAMIC_BLACKLISTED_PAGES,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS,
  deserialize_VGPU_FB_GET_LTC_INFO_FOR_FBP, deserialize_VGPU_STATIC_DATA,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PDB_PROPERTIES_PARAMS,
  deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_REQ_ATOMICS_CAPS_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_TRACE_DEFINES_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PPC_MASKS_PARAMS,
  deserialize_NV0000_CTRL_SYSTEM_GET_VGX_SYSTEM_INFO_PARAMS,
  deserialize_NV2080_CTRL_GR_GET_ZCULL_INFO_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FLOORSWEEPING_MASKS_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ZCULL_INFO_PARAMS,
  deserialize_VGPU_FIFO_GET_DEVICE_INFO_TABLE, deserialize_NV2080_CTRL_CE_GET_ALL_CAPS_PARAMS,
  deserialize_GPU_EXEC_SYSPIPE_INFO, deserialize_VGPU_BSP_GET_CAPS,
  deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_SUPPORTED_GPU_ATOMICS_PARAMS, deserialize_GPU_PARTITION_INFO,
  deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS,
  deserialize_NV2080_CTRL_MC_GET_STATIC_INTR_TABLE_PARAMS,
  deserialize_NV2080_CTRL_MC_GET_ENGINE_NOTIFICATION_INTR_VECTORS_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ROP_INFO_PARAMS,
  deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_SIZE_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_RECORD_SIZE_PARAMS,
  deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_CAPS_PARAMS,
  deserialize_NV2080_CTRL_BUS_GET_INFO_V2_PARAMS, deserialize_VGPU_STATIC_PROPERTIES,
  deserialize_NV2080_CTRL_GPU_GET_CONSTRUCTED_FALCON_INFO_PARAMS,
  deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_V2_PARAMS,
  deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_CONTEXT_BUFFERS_INFO_PARAMS,
  deserialize_NV2080_CTRL_FLA_GET_RANGE_PARAMS,
  deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS,
  deserialize_NV2080_CTRL_GPU_QUERY_ECC_STATUS_PARAMS, deserialize_VGPU_CE_GET_CAPS_V2,
  deserialize_NV0080_CTRL_MSENC_GET_CAPS_V2_PARAMS, deserialize_VGPU_GET_LATENCY_BUFFER_SIZE
iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GPU_GET_GID_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV90E6_CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_PARAMS*pRpcstructurecopyHal->deserialize_NV90E6_CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV90E6_CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_CCU_SAMPLE_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_CCU_SAMPLE_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_CCU_SAMPLE_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NVC637_CTRL_EXEC_PARTITIONS_GET_PARAMS*pRpcstructurecopyHal->deserialize_NVC637_CTRL_EXEC_PARTITIONS_GET_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NVC637_CTRL_EXEC_PARTITIONS_GET_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_FB_GET_DYNAMIC_BLACKLISTED_PAGES*pRpcstructurecopyHal->deserialize_VGPU_FB_GET_DYNAMIC_BLACKLISTED_PAGES != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_FB_GET_DYNAMIC_BLACKLISTED_PAGES != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS != (void *) 
iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_FB_GET_LTC_INFO_FOR_FBP*pRpcstructurecopyHal->deserialize_VGPU_FB_GET_LTC_INFO_FOR_FBP != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_FB_GET_LTC_INFO_FOR_FBP != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_STATIC_DATA*pRpcstructurecopyHal->deserialize_VGPU_STATIC_DATA != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_STATIC_DATA != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PDB_PROPERTIES_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PDB_PROPERTIES_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PDB_PROPERTIES_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_REQ_ATOMICS_CAPS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_REQ_ATOMICS_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_REQ_ATOMICS_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_TRACE_DEFINES_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_TRACE_DEFINES_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_TRACE_DEFINES_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PPC_MASKS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PPC_MASKS_PARAMS != (void *) 
iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_PPC_MASKS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV0000_CTRL_SYSTEM_GET_VGX_SYSTEM_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV0000_CTRL_SYSTEM_GET_VGX_SYSTEM_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV0000_CTRL_SYSTEM_GET_VGX_SYSTEM_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GR_GET_ZCULL_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_ZCULL_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_ZCULL_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FLOORSWEEPING_MASKS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FLOORSWEEPING_MASKS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FLOORSWEEPING_MASKS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ZCULL_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ZCULL_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ZCULL_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_FIFO_GET_DEVICE_INFO_TABLE*pRpcstructurecopyHal->deserialize_VGPU_FIFO_GET_DEVICE_INFO_TABLE != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_FIFO_GET_DEVICE_INFO_TABLE != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_CE_GET_ALL_CAPS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_CE_GET_ALL_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_CE_GET_ALL_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_GPU_EXEC_SYSPIPE_INFO*pRpcstructurecopyHal->deserialize_GPU_EXEC_SYSPIPE_INFO 
!= (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_GPU_EXEC_SYSPIPE_INFO != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_BSP_GET_CAPS*pRpcstructurecopyHal->deserialize_VGPU_BSP_GET_CAPS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_BSP_GET_CAPS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_SUPPORTED_GPU_ATOMICS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_SUPPORTED_GPU_ATOMICS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_BUS_GET_PCIE_SUPPORTED_GPU_ATOMICS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_GPU_PARTITION_INFO*pRpcstructurecopyHal->deserialize_GPU_PARTITION_INFO != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_GPU_PARTITION_INFO != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_THROTTLE_CTRL_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_MC_GET_STATIC_INTR_TABLE_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_MC_GET_STATIC_INTR_TABLE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_MC_GET_STATIC_INTR_TABLE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_MC_GET_ENGINE_NOTIFICATION_INTR_VECTORS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_MC_GET_ENGINE_NOTIFICATION_INTR_VECTORS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_MC_GET_ENGINE_NOTIFICATION_INTR_VECTORS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ROP_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ROP_INFO_PARAMS != (void *) 
iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_ROP_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_SIZE_PARAMS*pRpcstructurecopyHal->deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_SIZE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_SIZE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_RECORD_SIZE_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_RECORD_SIZE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_FECS_RECORD_SIZE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_CAPS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_CAPS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_BUS_GET_INFO_V2_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_BUS_GET_INFO_V2_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_BUS_GET_INFO_V2_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_STATIC_PROPERTIES*pRpcstructurecopyHal->deserialize_VGPU_STATIC_PROPERTIES != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_STATIC_PROPERTIES != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GPU_GET_CONSTRUCTED_FALCON_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GPU_GET_CONSTRUCTED_FALCON_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GPU_GET_CONSTRUCTED_FALCON_INFO_PARAMS != (void *) 
iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_V2_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_V2_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_V2_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_CONTEXT_BUFFERS_INFO_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_CONTEXT_BUFFERS_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_INTERNAL_STATIC_GR_GET_CONTEXT_BUFFERS_INFO_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_FLA_GET_RANGE_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_FLA_GET_RANGE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_FLA_GET_RANGE_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GR_GET_SM_ISSUE_RATE_MODIFIER_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV2080_CTRL_GPU_QUERY_ECC_STATUS_PARAMS*pRpcstructurecopyHal->deserialize_NV2080_CTRL_GPU_QUERY_ECC_STATUS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV2080_CTRL_GPU_QUERY_ECC_STATUS_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_CE_GET_CAPS_V2*pRpcstructurecopyHal->deserialize_VGPU_CE_GET_CAPS_V2 != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_CE_GET_CAPS_V2 != (void *) iGrp_ipVersions_UNASSIGNED**deserialize_NV0080_CTRL_MSENC_GET_CAPS_V2_PARAMS*pRpcstructurecopyHal->deserialize_NV0080_CTRL_MSENC_GET_CAPS_V2_PARAMS != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_NV0080_CTRL_MSENC_GET_CAPS_V2_PARAMS != (void 
*) iGrp_ipVersions_UNASSIGNED**deserialize_VGPU_GET_LATENCY_BUFFER_SIZE*pRpcstructurecopyHal->deserialize_VGPU_GET_LATENCY_BUFFER_SIZE != (void *) iGrp_ipVersions_UNASSIGNED**pRpcstructurecopyHal->deserialize_VGPU_GET_LATENCY_BUFFER_SIZE != (void *) iGrp_ipVersions_UNASSIGNED*call to registerHalModule*0 && "iGrp_ipVersions_UNASSIGNED"*generated/g_hal_stubs.h**0 && "iGrp_ipVersions_UNASSIGNED"**generated/g_hal_stubs.h*curNode->infoBlock*src/kernel/core/hal/info_block.c**curNode->infoBlock**src/kernel/core/hal/info_block.c**curNode*delNode**delNode*delNode->infoBlock**delNode->infoBlock**newNode***infoBlock**chipName*pHalList**pHalList***pHalList*call to _halmgrIsChipSupported*src/kernel/core/hal_mgr.c*NVRM: Matching %s = 0x%x to HAL_IMPL_%s *call to _halmgrGetStringRepForHalImpl**src/kernel/core/hal_mgr.c**NVRM: Matching %s = 0x%x to HAL_IMPL_%s *PMC_BOOT_42**PMC_BOOT_42*PMC_BOOT_0**PMC_BOOT_0*call to _halmgrIsTegraSupported*chipid*majorRev*halImpl < HAL_IMPL_MAXIMUM**halImpl < HAL_IMPL_MAXIMUM*call to __nvoc_objDelete**pLock*gpuLocks**gpuLocks*src/kernel/core/locks.c*NVRM: Worker thread finished without releasing all locks. gpuMask=%x **src/kernel/core/locks.c**NVRM: Worker thread finished without releasing all locks. 
gpuMask=%x *call to _rmGpuLocksHandleDeferredWork*call to _rmGpuLocksRelease*call to _rmGpuAllocLockIsOwner*pAllocLock*bSetAllocLockOwner*maxLockableGpuInst*pGpuLock**pGpuLock*pGpuLock->threadId == GPUS_LOCK_OWNER_PENDING_DPC_REFRESH**pGpuLock->threadId == GPUS_LOCK_OWNER_PENDING_DPC_REFRESH*pAllocLock->threadId == GPUS_LOCK_OWNER_PENDING_DPC_REFRESH**pAllocLock->threadId == GPUS_LOCK_OWNER_PENDING_DPC_REFRESH*holdGpuLock*waitGpuLock*call to _rmGpuLockIsOwner*call to gpumgrGetGrpMaskFromGpuInst*lockableMask*call to portUtilCountLeadingZeros32*highestInstanceInGpuMask*lockedThreadId*call to portUtilCountTrailingZeros32*call to portAtomicAndU32*call to portAtomicOrU32*rmGpuLocksGpuMask*threadStateGpuMask*call to threadStateOnlyProcessWorkISRAndDeferredIntHandler*pGpuOrig*pDpcGpu**pCallContext*pCallContext->pLockInfo != NULL**pCallContext->pLockInfo != NULL*call to threadStateOnlyFreeISRAndDeferredIntHandler**pDpcGpu*NVRM: Attempting to release nonlockable GPUs. gpuMask = 0x%08x, gpusLockableMask = 0x%08x **NVRM: Attempting to release nonlockable GPUs. gpuMask = 0x%08x, gpusLockableMask = 0x%08x *call to portSyncExSafeToWake*call to portSyncExSafeToSleep*bReleaseAllocLock*(rmGpuLockInfo.gpusFreezeMask & gpuMask) == 0**(rmGpuLockInfo.gpusFreezeMask & gpuMask) == 0*_rmGpuLockIsOwner(gpuMask)**_rmGpuLockIsOwner(gpuMask)*call to threadPriorityBoost*NVRM: Releasing nonlockable GPU (already went through teardown). gpuMask = 0x%08x, gpusLockableMask = 0x%08x. **NVRM: Releasing nonlockable GPU (already went through teardown). gpuMask = 0x%08x, gpusLockableMask = 0x%08x. *NVRM: Attempting to release unlocked GPUs. gpuMask = 0x%08x, gpusLockedMask = 0x%08x. Will skip them. **NVRM: Attempting to release unlocked GPUs. gpuMask = 0x%08x, gpusLockedMask = 0x%08x. Will skip them. 
*NVRM: No more GPUs to release after skipping**NVRM: No more GPUs to release after skipping*NVRM: GPU mask for release (0x%08x) has higher instance that maxLockableGpuIns (%d) **NVRM: GPU mask for release (0x%08x) has higher instance that maxLockableGpuIns (%d) *call to _gpuInstLoopShouldContinue*startHoldTime*bAllocLockWakeup*bSignaled*call to _gpuInstLoopTail*call to _gpuInstLoopPrev*call to _gpuLocksReleaseEnableInterrupts*pGpuLock->threadId == threadId**pGpuLock->threadId == threadId*bRunning*call to portAtomicIncrementS32*pGpuLock->count <= 1**pGpuLock->count <= 1**ra*callerRA*data16*bHighIrql*extraWakeUp*allocLockWakeUp*NVRM: Releasing GPU locks (mask:0x%08x) at raised IRQL without a DPC GPU at %p. Attempting to recover.. **NVRM: Releasing GPU locks (mask:0x%08x) at raised IRQL without a DPC GPU at %p. Attempting to recover.. *bDpcReleaseAll*gpuMask == rmGpuLockInfo.gpusLockedMask**gpuMask == rmGpuLockInfo.gpusLockedMask*call to osGpuLocksQueueRelease*gpuMask == gpumgrGetGpuMask(pDpcGpu)**gpuMask == gpumgrGetGpuMask(pDpcGpu)*call to portAtomicExAddU64*call to threadPriorityRestore*call to _gpuLocksReleaseHandleDeferredWork*pRmCtrlDeferredCmd**pRmCtrlDeferredCmd*call to rmControl_Deferred*call to osHandleDeferredRecovery*call to osDeferredIsr*call to osRunQueued1HzCallbacksUnderLock*call to rmGpuLockIsHidden*call to osIsSwPreInitOnly*call to osLockShouldToggleInterrupts*call to intrSetIntrMaskFlags_IMPL*call to bitVectorSetAll_IMPL*intrGetIntrEnFromHw_HAL(pGpu, pIntr, NULL) != INTERRUPT_TYPE_HARDWARE**intrGetIntrEnFromHw_HAL(pGpu, pIntr, NULL) != INTERRUPT_TYPE_HARDWARE*call to osEnableInterrupts*call to tmrRmCallbackIntrEnable_IMPL*call to tmrGetIntrStatus_cb5ce8*call to _rmGpuLocksAcquire*call to gpumgrIsGpuPointerValid*call to rmGpuGroupLockGetMask*bReleaseSpinlock*bIsOwner*NVRM: Nothing to lock for gpuInst=%d, gpuGrpId=%d, gpuMask=0x%08x **NVRM: Nothing to lock for gpuInst=%d, gpuGrpId=%d, gpuMask=0x%08x *call to 
rmGpuLocksGetOwnedMask*bCondAcquireCheck*bLockAll*bAcquireAllocLock*NVRM: Attempting to lock GPUs (mask=%08x) that are not lockable (mask=%08x). Will skip non-lockables. **NVRM: Attempting to lock GPUs (mask=%08x) that are not lockable (mask=%08x). Will skip non-lockables. *call to _gpuInstLoopHead*call to _gpuInstLoopNext*startWaitTime*NVRM: GPU lock %d freed while we were waiting on a previous lock **NVRM: GPU lock %d freed while we were waiting on a previous lock *GPU alloc lock already acquired by this thread**GPU alloc lock already acquired by this thread*GPU lock already acquired by this thread**GPU lock already acquired by this thread*call to osDelayUs*NVRM: GPU lock %d freed while threads were still waiting. **NVRM: GPU lock %d freed while threads were still waiting. *call to portAtomicDecrementS32*call to portSyncSemaphoreAcquire*call to _gpuLocksAcquireDisableInterrupts*priorityPrev*NVRM: Max lockable instance changed from %d to %d **NVRM: Max lockable instance changed from %d to %d *NVRM: Locked a different GPU mask (0x%08x) than requested (0x%08x) @ %p. **NVRM: Locked a different GPU mask (0x%08x) than requested (0x%08x) @ %p. 
*gpuMaskLocked == rmGpuLockInfo.gpusLockedMask**gpuMaskLocked == rmGpuLockInfo.gpusLockedMask*gpuMaskLocked == rmGpuLockInfo.gpusLockableMask**gpuMaskLocked == rmGpuLockInfo.gpusLockableMask*_rmGpuAllocLockIsOwner()**_rmGpuAllocLockIsOwner()*call to tmrRmCallbackIntrDisable_IMPL*call to osDisableInterrupts*intrMask*(gpuInst < NV_MAX_DEVICES)**(gpuInst < NV_MAX_DEVICES)*call to _rmGpuLockDestroy*call to rmIntrMaskLockFree*((rmGpuLockInfo.gpusLockableMask & NVBIT(gpuInst)) == 0)**((rmGpuLockInfo.gpusLockableMask & NVBIT(gpuInst)) == 0)*call to rmIntrMaskLockAlloc*call to _rmGpuLockInit*failed to acquire GPU alloc lock**failed to acquire GPU alloc lock*call to portSyncSemaphoreDestroy**pWaitSema*call to osSchedule*call to portSyncSemaphoreCreate*rmGpuLockInfo.gpusLockableMask == 0**rmGpuLockInfo.gpusLockableMask == 0*Unexpected gpuGrpId in gpu lock get mask*src/kernel/core/locks_common.c**Unexpected gpuGrpId in gpu lock get mask**src/kernel/core/locks_common.c*call to osReleaseRmSema*call to osAcquireRmSema*pReleaseLocks*NVRM: GPU isn't full power! gpuInstance = 0x%x. **NVRM: GPU isn't full power! gpuInstance = 0x%x. *NVRM: GPU isn't full power and isn't in resume codepath! gpuInstance = 0x%x. **NVRM: GPU isn't full power and isn't in resume codepath! gpuInstance = 0x%x. *NVRM: Failed to acquire the RM lock! **NVRM: Failed to acquire the RM lock! *NVRM: Failed to acquire the API lock! **NVRM: Failed to acquire the API lock! *NVRM: Failed to acquire the GPU lock! **NVRM: Failed to acquire the GPU lock! 
*call to rmGpuLockInfoDestroy*call to rmGpuLockInfoInit*rmGpuGroupLockAcquire(0, GPU_LOCK_GRP_ALL, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_NONE, &gpuMask)*src/kernel/core/system.c**rmGpuGroupLockAcquire(0, GPU_LOCK_GRP_ALL, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_NONE, &gpuMask)**src/kernel/core/system.c*call to gpuRefreshRecoveryAction_DISPATCH*bExternalFabricMgmt*bRouteToPhysicalLockBypass*roTopLockApiMask*RMEnableEventTracer**RMEnableEventTracer*RmValidateClientData**RmValidateClientData*PDB_PROP_SYS_REGISTRY_OVERRIDES_INITIALIZED*PDB_PROP_SYS_ENABLE_STREAM_MEMOPS*RMPriorityBoost**RMPriorityBoost*RMPriorityThrottleDelay**RMPriorityThrottleDelay*call to _sysRegistryOverrideExternalFabricMgmt*call to _sysRegistryOverrideResourceServer*call to osBugCheckOnTimeoutEnabled*PDB_PROP_SYS_BUGCHECK_ON_TIMEOUT*RmRouteToPhyiscalLockBypass**RmRouteToPhyiscalLockBypass*RMGpuLockMidpath**RMGpuLockMidpath*PDB_PROP_SYS_GPU_LOCK_MIDPATH_ENABLED*EnableRmTestOnlyCode**EnableRmTestOnlyCode*RMEnableForceSharedLock**RMEnableForceSharedLock*RmAllowUnknown4PartIds**RmAllowUnknown4PartIds*PDB_PROP_SYS_ALLOW_UNKNOWN_4PART_IDS*call to gpumgrSetGpuNvlinkBwModeFromRegistry_IMPL*call to constructObjOS*call to osRmCapRegisterSys*NVRM: RM Access Sys Cap creation failed: 0x%x **NVRM: RM Access Sys Cap creation failed: 0x%x *call to coreShutdownRm*call to coreInitializeRm*call to nvAssertDestroy*call to nvDbgDestroy*call to portAtomicDecrementU32*call to nvlogDestroy*call to portShutdown*call to portInitialize*call to portAtomicIncrementU32*call to nvlogInit*call to nvDbgInit*call to nvAssertInit*call to osInitSystemStaticConfig*call to osIsNvswitchPresent*PDB_PROP_SYS_NVSWITCH_IS_PRESENT*NVRM: NvSwitch is found in the system **NVRM: NvSwitch is found in the system *call to sysEnableExternalFabricMgmt_IMPL*NVRM: Forcing fabric manager's state as initialized to unblock clients. **NVRM: Forcing fabric manager's state as initialized to unblock clients. 
*PDB_PROP_SYS_FABRIC_IS_EXTERNALLY_MANAGED*NVRM: Enabling external fabric management for Proxy NvSwitch systems. **NVRM: Enabling external fabric management for Proxy NvSwitch systems. *RMExternalFabricMgmt**RMExternalFabricMgmt*NVRM: Enabling external fabric management. **NVRM: Enabling external fabric management. *NVRM: Disabling external fabric management. **NVRM: Disabling external fabric management. *RmRoApiLock**RmRoApiLock*apiLockMask*RmRoApiLockModule**RmRoApiLockModule*apiLockModuleMask*RmLockTimeCollect**RmLockTimeCollect*RMClientListDeferredFree**RMClientListDeferredFree*RMClientListDeferredFreeLimit**RMClientListDeferredFreeLimit*call to __nvoc_objCreateDynamic*call to _sysCreateOs*PDB_PROP_SYS_DESTRUCTING*call to _sysDestroyMemExportCache*call to _sysDestroyMemExportClient*call to rmapiShutdown*call to osSyncWithRmDestroy*call to threadStateGlobalFree*call to rmLocksFree*call to _sysDeleteChildObjects*call to bindataDestroy*call to _sysInitStaticConfig*call to _sysCreateChildObjects*rmInstanceId*call to hypervisorDetection_IMPL*call to osRmInitRm*call to _sysNvSwitchDetection*call to RmInitCpuInfo*call to rmLocksAlloc*call to threadStateGlobalAlloc*call to rmapiInitialize*call to _sysInitMemExportCache*call to _sysInitMemExportClient*call to bindataInitialize*rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_DESTROY) == NV_OK**rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_DESTROY) == NV_OK*pRmApi->Free(pRmApi, pSys->hSysMemExportClient, pSys->hSysMemExportClient) == NV_OK**pRmApi->Free(pRmApi, pSys->hSysMemExportClient, pSys->hSysMemExportClient) == NV_OK*hSysMemExportClient*pSysMemExportModuleLock**pSysMemExportModuleLock*multimapCountItems(&pSys->sysMemExportCache) == 0**multimapCountItems(&pSys->sysMemExportCache) == 0*pThreadNode->flags & THREAD_STATE_FLAGS_STATE_FREE_CB_ENABLED*src/kernel/core/thread_state.c**pThreadNode->flags & THREAD_STATE_FLAGS_STATE_FREE_CB_ENABLED**src/kernel/core/thread_state.c*pCbListNode**pCbListNode*call to 
_threadStateSetTimeoutOverride*overrideTimeoutMsecs*call to _threadStateSetNextCpuYieldTime*nonComputeTime*computeTime*call to _threadNodeCheckTimeout*call to rcdbAddAssertJournalRecWithLine*call to _threadNodeInitTime*call to _threadStatePrintInfo*nonComputeTimeoutMsecs*call to osGetTimeoutParams*timeoutMsecs*call to timeoutApplyScale*computeTimeoutMsecs*call to threadStateGetCurrentUnchecked*ppThreadNode**ppThreadNode*call to _threadStateGet*NVRM: threadState[Init,Free] call may be missing from this RM entry point! **NVRM: threadState[Init,Free] call may be missing from this RM entry point! *call to osGetCurrentProcessorNumber*ppIsrThreadStateGpu**ppIsrThreadStateGpu**pIsrlocklessThreadNode*ppISRDeferredIntHandlerThreadNode**ppISRDeferredIntHandlerThreadNode**pISRDeferredIntHandlerNode*spinlock*(*ppThreadNode)->threadId == threadId**(*ppThreadNode)->threadId == threadId*flags & (THREAD_STATE_FLAGS_IS_ISR_LOCKLESS | THREAD_STATE_FLAGS_IS_ISR)**flags & (THREAD_STATE_FLAGS_IS_ISR_LOCKLESS | THREAD_STATE_FLAGS_IS_ISR)*pThreadNode->cpuNum == osGetCurrentProcessorNumber()**pThreadNode->cpuNum == osGetCurrentProcessorNumber()*call to _threadStateFreeProcessWork*pThreadStateIsrlockless**pThreadStateIsrlockless*pThreadStateIsrlockless->ppIsrThreadStateGpu[pGpu->gpuInstance] != NULL**pThreadStateIsrlockless->ppIsrThreadStateGpu[pGpu->gpuInstance] != NULL*(flags & (THREAD_STATE_FLAGS_IS_ISR_LOCKLESS | THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING)) == 0**(flags & (THREAD_STATE_FLAGS_IS_ISR_LOCKLESS | THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING)) == 0*call to _threadStateFreeInvokeCallbacks**pMap*call to mapRemoveIntrusive_IMPL*call to threadPriorityStateFree**pThreadNode*pGpu && (flags & (THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING))**pGpu && (flags & (THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING))*flags & THREAD_STATE_FLAGS_IS_ISR_LOCKLESS**flags & 
THREAD_STATE_FLAGS_IS_ISR_LOCKLESS*threadSeqId*cpuNum*pThreadNode->cpuNum < threadStateDatabase.maxCPUs**pThreadNode->cpuNum < threadStateDatabase.maxCPUs*pThreadStateIsrLockless**pThreadStateIsrLockless*pThreadStateIsrLockless->ppIsrThreadStateGpu[pGpu->gpuInstance] == NULL**pThreadStateIsrLockless->ppIsrThreadStateGpu[pGpu->gpuInstance] == NULL*flags & (THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING)**flags & (THREAD_STATE_FLAGS_IS_ISR | THREAD_STATE_FLAGS_DEFERRED_INT_HANDLER_RUNNING)**pHeapNode*call to _threadStateInitCommon*call to osGetCurrentProcessFlags*osFlags*call to mapInsertExisting_IMPL*call to _threadStateLogInitCaller*call to threadPriorityStateAlloc*traceInfo*NVRM: API_GPU_ATTACHED_SANITY_CHECK failed! **NVRM: API_GPU_ATTACHED_SANITY_CHECK failed! *call to _getTimeoutDataFromGpuMode*NVRM: threadStateDatabaseTimeoutMsecs or pThreadNodeTime was NULL! **NVRM: threadStateDatabaseTimeoutMsecs or pThreadNodeTime was NULL! *timeInNs*NVRM: _threadNodeCheckTimeout: currentTime: %llx >= %llx **NVRM: _threadNodeCheckTimeout: currentTime: %llx >= %llx *NVRM: _threadNodeCheckTimeout: Unsupported timeout.flags: 0x%x! **NVRM: _threadNodeCheckTimeout: Unsupported timeout.flags: 0x%x! *NVRM: _threadNodeCheckTimeout: Timeout was set to: %lld msecs! **NVRM: _threadNodeCheckTimeout: Timeout was set to: %lld msecs! *call to threadStateResetTimeout*firstInit*enterTime*NVRM: Bad threadStateDatabase.timeout.flags: 0x%x! **NVRM: Bad threadStateDatabase.timeout.flags: 0x%x! 
NVRM: Yielding
nextCpuYieldTime, computeGpuMask, setupFlags, RmThreadStateSetupFlags
NVRM: Overriding threadStateDatabase.setupFlags from 0x%x to 0x%x
call to _threadStateFreePerCpuPerGpu
*ppISRDeferredIntHandlerThreadNode, spinlock
call to mapDestroyIntrusive_IMPL
call to tlsShutdown
call to tlsInitialize
tlsInitialize() == NV_OK
threadSeqCntr, gspIsrThreadSeqCntr
call to _threadStateAllocPerCpuPerGpu
call to mapInitIntrusive_IMPL
call to osGetMaximumCoreCount
maxCPUs, *ppIsrThreadStateGpu
NVRM: Thread state:
NVRM: threadId: 0x%llx flags: 0x0%x
NVRM: enterTime: 0x%llx Limits: nonComputeTime: 0x%llx computeTime: 0x%llx

src/kernel/diagnostics/gpu_acct.c
pGpuAcct, gpuInstanceInfo, vmInstanceInfo
call to _vmAcctDestroyDataStore
vmPId, vmIndex, targetVMIndex, vmInstanceFound, isAccountingEnabled
call to vmAcctInitDataStore
NVRM: Failed to create process accounting data store for VM Pid : %d
call to gpuacctInitDataStore
call to gpuacctDestroyDataStore
pVMInstanceInfo, currTime
pGpu != NULL
pClearAcctDataParams, pClearAcctDataParams != NULL
bVgpuOnGspEnabled
call to gpuacctCleanupDataStore
pSetAcctModeParams, pSetAcctModeParams != NULL
call to gpuacctStopTimerCallbacks
pDS, pDS != NULL
call to gpuacctStartTimerCallbacks
bVMFound
call to tmrEventDestroy_IMPL
pSamplesParams, PDB_PROP_GPU_ACCOUNTING_ON
NVRM: NULL pTmr object found
NVRM: Failed to allocate memory for sample params
call to tmrEventCreate_IMPL
tmrEventCreate(pTmr, &pGpuInstanceInfo->pTmrEvent, gpuacctSampleGpuUtil, pGpuInstanceInfo, TMR_FLAGS_NONE)
call to tmrEventScheduleRel_IMPL
tmrEventScheduleRelSec(pTmr, pGpuInstanceInfo->pTmrEvent, 1)
pGetAcctModeParams, pGetAcctModeParams != NULL
call to osIsInitNs
deadProcAcctInfo, deadVMProcAcctInfo
pList != NULL
call to listIterNext_IMPL
pidTbl, liveProcAcctInfo, liveVMProcAcctInfo
pParams != NULL
isLiveProcess
call to gpuacctLookupProcEntry
maxFbUsage, gpuUtil, fbUtil, endTime, sampleCount, pLiveDS, pDeadDS, searchPid
call to gpuacctGetCurrTime
totalSampleCount
NVRM: pid=%d
call to gpuacctFreeProcEntry
call to gpuacctRemoveProcEntry
call to gpuacctAddProcEntry
call to gpuacctAllocProcEntry
pEntry != NULL
isGuestProcess, startSampleCount
NVRM: pid=%d startSampleCount=%u
pUtilSampleBuffer, maxTimeStamp, gr
call to gpuacctFindProcEntryFromPidSubpid
NVRM: pid=%d subPid=%d util=%4d sumUtil=%lld sampleCount=%u (total=%u)
lastUpdateTimestamp
!IS_GSP_CLIENT(pGpu)
NVRM: NULL objects found
NVRM: GET_GPUMON_PERFMON_UTIL_SAMPLES failed with status : %d
call to gpuacctProcessGpuUtil
*samples
NVRM: Error sheduling callback for util 0x%x
ppEntry, pidToSearch, maxProcLimit, *pOldEntry
call to listAppendExisting_IMPL
call to listRemoveIntrusive_IMPL
ppEntry != NULL
call to listDestroyIntrusive_IMPL
call to listInitIntrusive_IMPL
call to gpuacctInitState

pInstrumentationManager, pSysmemBuffer, dataBuffer
call to instrumentationmanagerDeregisterBuffer_IMPL
pBufferNode, pRcdb, pReasonData, nocatJournalDescriptor, nocatEventCounters
call to _rcdbGetNewestNocatJournalRecordForType
call to _rcdbSetTdrReason
*tdrReason
call to _rcdbReleaseNocatJournalRecord
newEntry, pSource
call to rcdbNocatInsertNocatError
pFaultingEngine, pRcdError, pDiagBuffer, pAssertRec
ASSERT
diagBuffer, callStack, lastRecordId, pDiagData, recordPosted, pNewEntry
RC ERROR
call to rcdbNocatInitEngineErrorEvent
bugcheck, postRecord, lockTimestamp
RC Error
call to _rcdbAllocNocatJournalRecord
*pNocatEntry
call to _rcdbNocatCollectContext
** unknown **
faultingEngine
call to _rcdbSendNocatJournalNotification
call to osGetDriverBlock
driverBlock, driverStart, loadAddress, nvDumpState, pReturnedNocatEntry, pDesc
call to _rcdbGetNocatJournalRecord
GPUTag, stateMask, nocatGpuState, nextReportedId
call to rcdbGetNocatOutstandingCount
ppReturnedCommon, ppReturnedNocatEntry
call to rcdbFindRingBufferForType
call to rcdbGetOcaRecordSizeWithHeader_IMPL
*pCommon
call to _rcdbAllocRecFromRingBuffer
ppCommon, pTdrReasonStr, pTmpStr
LEGACY, FULL CHIP RESET, BUS RESET, GC6 RESET, SURPRISE REMOVAL, UCODE RESET, GPU RC RESET
*pTag, prod, *pContext, pContextCache
idInfo, subsystemVendor, bMsHybrid, vbiosProject
call to osIsRaisedIRQL
capsMethodData, bOptimus, bFullPower, bInGc6Reset, bInFullchipReset, bInSecBusReset, pCrashedFlcn
call to rcdProbeGpuPresent
bFoundLostGpu
call to regCheckRead032
testValue
src/kernel/diagnostics/journal.c
NVRM: found GPU %d (0x%p) inaccessible After assert
pNvDumpState, prbEnc, bDumpInProcess, nvDumpType, bRMLock
call to rcdbDumpInitGpuAccessibleFlag_IMPL
call to prbEncStartAlloc
NVRM: deferred GPU dump encoder init failed (status = 0x%x)
NVRM: deferring GPU dump for normal context
pGpuInstance
call to nvdDumpAllEngines_IMPL
call to prbEncNestedEnd
call to prbEncFinish
bufferUsed, pPrbErrorInfo, RmPrbErrorData, cRecordGroup, cRecordType, wRecordSize
call to rcdbSetCommonJournalRecord
pErrorHeader, pErrorBlock, pNewErrorBlock, *pBlock, pSysErrorInfo, pErrorList, ErrorHeader, pNextError
call to _rcdbGetOcaRecordSize
recSz
recordSize == _rcdbGetOcaRecordSize(pRcDB, type)
pRecord, newItemIndex, pRingBufferColl, pCurrentRingBuffer, *pFirstEntry, pNextRingBuffer, *pRingBuffer
entrySize != 0
maxBufferSize, entryType
call to portSafeSubU32
ppRingBuffer, ppRingBuffer != NULL, pCurrentRingBuffer != NULL
NVRM: Ring Buffer not found for type %d
nvDumpBuffer
call to nvdDumpComponent_IMPL
call to rcdbAllocNextJournalRec_IMPL
*fieldDesc
call to _rcdbDbgBreakEx
NVRM: Breakpoint at 0x%llx.
call to portDbgPrintString
NVRM-RC: Nvidia Release Debug Break
call to nvDbgBreakpointEnabled
call to osDbgBugCheckOnAssert
call to _rcdbRmAssert
call to rcdProbeAllGpusPresent
NVRM-RC: Nvidia Release NV_ASSERT Break
NVRM: Dump triggered: gpuSelect=%u, component=%u, dumpStatus=%u
*pNvd, dumpStatus
NVRM: Invalid dumpStatus %u
call to rcdbDumpComponent_IMPL
NVRM: Invalid component %u
NVRM: Should never reach this point!
PDB_PROP_RCDB_IN_DEFERRED_DUMP_CODEPATH
NVRM: failed to acquire the GPU locks!
NVRM: failed to acquire the API lock!
NVRM: failed to acquire the OS semaphore!
call to prbEncNestingLevel
call to prbEncNestedStart
prbEncNestedStart(pPrbEnc, NVDEBUG_NVDUMP_DCL_MSG)
rcErrorCounterArray, rcErrTyp
prbEncNestedStart(pPrbEnc, DCL_DCLMSG_RCCOUNTER)
call to prbEncAddUInt64
prbEncNestedEnd(pPrbEnc)
call to prbEncUnwindNesting
prbEncUnwindNesting(pPrbEnc, startingDepth)
pJournal, List
call to timeoutSet
NVRM: timed out waiting for Rm journal ring buffer to be available
call to timeoutCheck
call to osSpinLoop
Ring Buffer unavailable for dump at high irql.
call to rcdbInsertRingBufferCollectionToList
call to _rcdbInsertErrorHistoryToList
pJournalBuff
pRecord->Header.cRecordGroup == RmGroup
call to _rcdbInsertJournalRecordToList
call to _rcdbDumpDclMsgRecord
*pFieldDesc, *pCurrentBuffer
pCurrentBuffer->maxEntries * rcdbGetOcaRecordSizeWithHeader(pRcDB, pCurrentBuffer->entryType) == pCurrentBuffer->bufferSize
call to rcdbInsertRingBufferToList
pCurrentBuffer == NULL
pNextRecord, pCurrentRecord
prbEncNestedStart(pPrbEnc, pFieldDesc)
call to _rcdbDumpCommonJournalRecord
prbEncNestedStart(pPrbEnc, DCL_DCLMSG_JOURNAL_ASSERT)
call to rcdbDumpCommonAssertRecord
prbEncNestedStart(pPrbEnc, DCL_DCLMSG_JOURNAL_BADREAD)
call to prbEncAddBytes
call to prbEncCatMsg
prbEncNestedStart(pPrbEnc, DCL_DCLMSG_JOURNAL_BUGCHECK)
prbEncNestedStart(pPrbEnc, DCL_DCLMSG_RC_DIAG_RECS)
call to prbEncGpuRegImm
NVRM: unknown Dcl Record entry type: %d
pPrbErrorElement
NVRM: only one error block expected!
NVRM: unknown error element type: %d
call to rcdbDumpJournal_IMPL
call to rcdbDumpErrorCounters_IMPL
NVRM: no GPU - won't dump ring buffers or journal
prbEncNestedStart(pPrbEnc, NVDEBUG_NVDUMP_SYSTEM_INFO)
call to _rcdbGetTimeInfo
_rcdbGetTimeInfo(pPrbEnc, pNvDumpState, NVDEBUG_SYSTEMINFO_TIME_INFO)
call to _rcdbGetResourceServerData
_rcdbGetResourceServerData(pPrbEnc, pNvDumpState, NVDEBUG_SYSTEMINFO_RESSERV_INFO)
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_NORTHBRIDGE_INFO)
FHBBusInfo
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_CPU_INFO)
cpuInfo
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_GPU_INFO)
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_OS_INFO)
call to osGetVersionDump
*pPrbEnc
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_DRIVER_INFO)
sizeStr
Private r591_47 rel/gpu_drv/r590/r591_47-174 unknown
RELEASE
call to prbEncAddBool
rel/gpu_drv/r590/r591_47-174
previousDriverVersion, previousDriverBranch, bGpuDone
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_GPU_CONFIG)
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_ERROR_STATE)
call to serverGetClientCount
call to serverGetResourceCount
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_RESOURCESERVER_CLIENT_INFO)
prbEncNestedStart(pPrbEnc, NVDEBUG_SYSTEMINFO_RESOURCESERVER_CLIENTINFO_ALLOCATIONS)
pRmRes
call to osGetTimestampFreq
timeSinceBoot, internalCode, bGpuAccessible, initialbufferSize, curNumBytes
call to prbEncStart
prbEncStartAlloc(&encoder, NVDEBUG_NVDUMP, pBuffer->size, pBufferCallback)
call to prbEncStartCount
startingDepth
call to rcdbDumpSystemFunc_IMPL
rcdbDumpSystemFunc(pRcDB, &encoder, pNvDumpState)
call to rcdbDumpSystemInfo_IMPL
rcdbDumpSystemInfo(pRcDB, &encoder, pNvDumpState)
NVRM: called with invalid component %u selected.
prbEncUnwindNesting(&encoder, startingDepth)
*pBuff
call to regRead032
pCurrentBuffer != NULL
pCurrentBuffer->pBuffer != NULL
pTempCurrentBuffer, *pRingBufferColl, *pDelete, *pOldErrorBlock, pFifoDelete
portSyncExSafeToSleep()
call to portUtilSpin
pFifoErrorInfo, pFreeErrorInfo
call to rcdbDeleteErrorElement_IMPL
ErrorCount, LogCount
call to portAtomicSetU32
ppRec, pFree
call to rcdbGetRcDiagRec_IMPL
recStatus, ppRmDiagWrapBuffRec
call to _rcdbInternalGetRcDiagRec
pRecord->idx == reqIdx
call to rcdbAddRcDiagRec_IMPL
pRmDiagGsp, *pCommonCpu, pCommonGsp
pCommonCpu->GPUTag == pCommonGsp->GPUTag
pRmDiagWrapBuffRec
Diag report to large for buffer
call to rcdbAddRecToRingBuffer_IMPL
logicalStartIdx, foundStart, foundEnd, newRmDiagWrapBuffRec, CPUTag
call to _getCommonJournalStateMask
pVoidGpu, pPossibleNULLGpu, *pRcDB
Journal
pAssertList, newAssertRec, breakpointAddrHint, ppList, *pAssertRec, lastTimeStamp
NVRM: failed to insert tracking for assert record
call to _rcdbNocatReportAssert
bPrevDriverCodeExecuted
call to osReadRegistryVolatileSize
RmRCPrevDriverVersion
*previousDriverVersion
call to osReadRegistryVolatile
RmRCPrevDriverBranch
*previousDriverBranch
RmRCPrevDriverChangelist
RmRCPrevDriverLoadCount
call to osWriteRegistryVolatile
call to _initJournal
BugcheckCount
NVRM: failed to allocate NVD debugger dump buffer
debuggerControlFuncAddr, *nvdDebuggerControlFunc
call to rcdbCreateRingBuffer_IMPL
NVRM: failed to allocate RC Diagnostic Ring Buffer
RcErrRptNextIdx, RcErrRptRecordsDropped, rcErrorType, rcErrorCount, rcLastCHID, rcLastTime, nocatLastRecordType, cacheFreshnessPeriodticks
NVRM: failed to allocate NOCAT Ring Buffer
timeStampFreq, systemTime, systemTimeReference
NULL == pJournal->pBuffer
AssertList
NULL == (NvU8*) pJournal->AssertList.ppList
BufferSize, *pFree, BufferRemaining, *pCurrCollection, RecordCount, **ppList, Count, QualifyingStackSize
NVRM: Failure to allocate RC assert tracking buffer
NVRM: Failure to allocate RC journal buffer
*pJournal
call to rcdbClearErrorHistory_IMPL
call to rcdbDestroyRingBufferCollection_IMPL
pCurrDebugBuffer, pPrevDebugBuffer, *pHeadDebugBuffer
call to memdescFree
src/kernel/diagnostics/nv_debug_dump.c
NVRM: nvdFreeDebugBuffer - Memory Descriptor not found in list!
NVRM: nvdAllocDebugBuffer - memdescCreate Failed: %x
call to memdescAlloc
NVRM: nvdAllocDebugBuffer - memdescAlloc Failed: %x
pNewDebugBuffer
NVRM: nvdAllocDebugBuffer - portMemAllocNonPaged Failed: %x
call to rcdbSavePreviousDriverVersion_IMPL
call to nvdEngineSignUp_IMPL
prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_NVD)
subAlloc
call to portSafeSubU16
call to prbEncStubbedAddBytes
*pCurrent, endStatus
call to memdescMap
pUmdBuffer
call to portAtomicMemoryFenceFull
call to prbAppendSubMsg
call to memdescUnmap
call to nvdDumpDebugBuffers_IMPL
call to nvdDoEngineDump_IMPL
prbEncNestedStart(pPrbEnc, NVDEBUG_NVDUMP_GPU_INFO)
call to prbEncBufLeft
pEngineCallback
call to nvdEngineDumpCallbackHelper
nvdEngineDumpCallbackHelper(pGpu, pPrbEnc, pNvDumpState, pEngineCallback)
call to nvdFindEngine_IMPL
pvData
pEngineCallback->pDumpEngineFunc(pGpu, pPrbEnc, pNvDumpState, pEngineCallback->pvData)
startingDepth == prbEncNestingLevel(pPrbEnc)
*pWalk, **pvData, pBack
pNvd->pHeadDebugBuffer == NULL
call to nvdEngineRelease_IMPL
flushCbsLock
nvlogFlushCbs[i].pCb != pCb || nvlogFlushCbs[i].pData != pData
src/kernel/diagnostics/nvlog.c
ring
call to _printBase64
base64_key
nvrm-nvlog: %s
call to portMemExSafeForNonPagedAlloc
call to portMemExSafeForPagedAlloc
call to portMemAllocPaged
pNewBuffer
mainLock, oldPos
NVLOG_IS_VALID_BUFFER_HANDLE(hBuffer)
pDest != NULL
destSize >= NVLOG_BUFFER_SIZE(pBuffer)
pBufferHandle, pBufferHandle != NULL
pFlags != NULL
pTag != NULL
pSize != NULL
*pChunkSize > 0
index <= pBuffer->size
size > 0
pData != NULL
buffersLock, nextFree
call to _deallocateNvlogBuffer
NvLogLogger.totalFree > 0
call to _allocateNvlogBuffer
call to osDeleteRecordForCrashLog
pushfunc
call to osAddRecordForCrashLog
*mainLock, *buffersLock, *flushCbsLock
src/kernel/diagnostics/nvlog_printf.c
NVRM: x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf
NVRM: %p %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x
NVRM: %p %02x .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
(one further format string per partial-row length from 2 through 15 bytes, each padding the remaining columns with "..")
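The hexdump format strings above describe a 16-bytes-per-row layout with an `x0 … xf` column guide, where bytes past the end of the buffer are rendered as `..`. A minimal Python sketch of a renderer producing the same row shapes (the address width is an assumption; the dump only shows `%p`):

```python
def hexdump_row(addr, data):
    """Render one 16-byte hexdump row in the style of the NVRM format
    strings above: bytes past the end of the buffer print as '..'."""
    chunk = data[:16]
    cells = [f"{b:02x}" for b in chunk] + [".."] * (16 - len(chunk))
    return f"NVRM: {addr:#010x} " + " ".join(cells)

# Column-guide header, matching the "x0 x1 ... xf" string in the dump.
HEADER = "NVRM: " + " ".join(f"x{i:x}" for i in range(16))
```

A two-byte buffer thus yields a row with 14 trailing `..` cells, matching the partial-row variants listed above.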
call to osReadRegistryString
NVRM
": "
call to gpumgrGetCurrentGpuInstance
%sGPU%u
*space
%s%s
%s%d.%06d
*noun, nounlen, startline, endline, tempPrefix
call to nv_strnstr
pPrefix, pat, pad_options, fillcount, fillchar, destLimit, hexadjust, nbuf, intdigp, digitcount, destcount, signchar, quotient
call to nvDbgVsnprintf
*destLimit, longlong, specptr, u32val, u64val
call to inttodecfmtstr
tmpBuf, tmpSize, s64val, s32val
call to uinttohexfmtstr
pval, **pval, strpval
call to strtofmtstr
NVRM: printBuf [BEGIN]
NVRM: printBuf 0x%p
%02x
%c
NVRM: printBuf [END]
call to nvDbg_PrintMsg
call to _nvDbgPrepareString
%.*s
call to _nvDbgForceLevel
call to RmMsgPrefix
call to nvDbg_vPrintf
debuglevel_min
call to nvDbgRmMsgCheck
call to osDbgBreakpointEnabled
src/kernel/diagnostics/profiler.c
pGroup != NULL
pLast, pGroup->pLast != NULL
call to osGetPerformanceCounter
call to _rmProfStopTime
pTotal, *pLast
pNext != NULL
call to rmProfStart
pFirst, pFirst != NULL
start_ns
pStats != NULL
min_ns, max_ns
NVRM: Stopping time measurement that is already stopped
call to rmProfRecord
NVRM: Starting time measurement that is already started
src/kernel/diagnostics/ucode_instrumentation_ctrl.c
call to rpcDmaControl_wrapper
call to diagapiCoverageGetData_KERNEL
call to diagapiCoverageSetState_KERNEL
call to diagapiCoverageGetState_KERNEL
taskRmCoverage, taskVgpuCoverage
backingRangeStore
call to CliGetDmaMappingInfo
call to gpumgrGetDeviceGpuMask
call to semaphoreFillGPUVA
call to notifyFillNotifierGPUVA
call to kdispGetHead
DispCommon
call to chandesIsolateOnDestruct_b3696a
call to gpumgrIsParentGPU
src/kernel/disp/disp_sw.c
gpumgrIsParentGPU(pGpu)
call to kheadDeleteVblankCallback_IMPL
NotifyOnVBlank, Semaphore
NVRM: Display is not enabled, can't create class
NVRM: RPC error, can't get the displaymask and number of heads
NVRM: invalid logical head number: %d
NVRM: Device not active: 0x%08x, RM display mask: 0x%08x
DispObject
src/kernel/disp/nvfbc_session.c
NVRM: Enter function
(pParams != NULL)
nvfbcSessionEntry, totalGrabCalls
pParams->timestampEntryCount <= NVA0BD_CTRL_CMD_NVFBC_MAX_TIMESTAMP_ENTRIES
timestampEntryCount, timestampEntry, pTimeStampBuffer
localAverageLatency
localAverageLatency < 0xFFFFFFFF
averageLatency, totalEntries, timeToCapture
localAverageFPS
localAverageFPS < 0xFFFFFFFF
averageFPS, rpcParams, captureCallFlags
call to resGetFreeParams_IMPL
*pRsClient, hNvfbcSessionHandle
NV_OK == status
pNvfbcSessionListItem, pNvfbcSessionListItemNext
NVRM: Creating NVFBC session above max session limit.
pNvA0BDAllocParams
vgpuInstanceId, sessionId, displayOrdinal, sessionType, sessionFlags, hMaxResolution, vMaxResolution, *sessionPtr
call to gpuIsGlobalPoisonFuseEnabled_DISPATCH
call to _gpuGetErrorContSettings
src/kernel/gpu/arch/ampere/kern_gpu_error_cont_ga100.c
_gpuGetErrorContSettings(pGpu, errorCode, bIsSmcEnabled, &errorContSmcSettings)
errorContSmcSettings, pRcErrorCode
call to gpuSetPartitionErrorAttribution_DISPATCH
call to gpuIsAmpereErrorContainmentXidEnabled_KERNEL
call to _gpuGenerateErrorLog
_gpuGenerateErrorLog(pGpu, errorCode, loc, &errorContSmcSettings)
call to _gpuNotifySubDeviceEventNotifier
_gpuNotifySubDeviceEventNotifier(pGpu, errorCode, loc, errorContSmcSettings.nv2080Notifier)
call to gpuMarkDeviceForReset_DISPATCH
NVRM: Failed to mark GPU for pending reset
call to gpuMarkDeviceForDrainAndReset_DISPATCH
NVRM: Failed to mark GPU for pending drain and reset
pTableSize != NULL
pErrorContSmcSettings, (pErrorContSmcSettings != NULL)
%s: %s (0x%x, 0x%x). physAddr: 0x%08llx RST: %s, D-RST: %s
Contained
Uncontained
locInfo, dramLoc
%s: %s (0x%x, 0x%x). RST: %s, D-RST: %s
ltcLoc, engineLoc
call to kmigmgrGetMIGReferenceFromEngineType_IMPL
kmigmgrGetMIGReferenceFromEngineType(pGpu, pKernelMIGManager, loc.locInfo.engineLoc.rmEngineId, &ref)
call to kmigmgrGetGlobalToLocalEngineType_IMPL
kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, loc.locInfo.engineLoc.rmEngineId, &localRmEngineType)
%s: %s (0x%x). RST: %s, D-RST: %s
call to gpuGetNv2080EngineType_IMPL
%s: %s. RST: %s, D-RST: %s
pErrorContSettings, pErrorContSettings != NULL
pErrContTable, pErrContTable != NULL
smcDisEnSetting
NVRM: Invalid errorCode: 0x%x
loc.locType == NV_ERROR_CONT_LOCATION_TYPE_ENGINE
kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, loc.locInfo.engineLoc.pDevice, &ref)
call to kmigmgrIsEngineInInstance_IMPL
NVRM: Notifier requested for an unsupported rm engine id (0x%x)
FB DED, DED CBC, LTC Data, LTC GPC, LTC TAG, LTC CBC, FBHUB, SM, CE User Channel, CE Kernel Channel, MMU, GCC, CTXSW, PCIE Egress, PCIE Ingress, PMU, FB Falcon, NVDEC, NVJPG, OFA
call to gpuReadPBusScratch_DISPATCH
call to gpuWritePBusScratch_DISPATCH
call to intrIsVectorPending_DISPATCH
src/kernel/gpu/arch/ampere/kern_gpu_ga100.c
NVRM: FBHUB Interrupt detected. Clearing it.
src/kernel/gpu/arch/blackwell/kern_gpu_error_cont_gb100.c
call to gpuGetChipId
pUgidData
call to gpuGetPdi_FWCLIENT
src/kernel/gpu/arch/blackwell/kern_gpu_gb100.c
gpuGetPdi_HAL(pGpu, &pdi64)
call to nvGenerateUgpuUuid
nvGenerateUgpuUuid(chipId, ugpuId, pdi64, (NvUuid*)(pUgidData))
call to timeoutCondWait
*pCondData
call to gpuMnocMboxIsMsgAvailable_DISPATCH
call to regaprtReadReg32_DISPATCH
call to regaprtWriteReg32_DISPATCH
pMsgBuf, recvMsgSize
call to gpuMnocMboxMinMessageSize_DISPATCH
recvMsgSize >= gpuMnocMboxMinMessageSize_HAL(pGpu)
call to gpuMnocMboxMaxMessageSize_DISPATCH
recvMsgSize <= gpuMnocMboxMaxMessageSize_HAL(pGpu)
copyDataSize
call to _gpuMnocMboxSendReceiverReady_GB100
call to _gpuMnocMboxSendCreditCheck_GB100
call to _gpuMnocMboxSendErrorCheck_GB100
call to _gpuMnocMboxPollForMsg_GB100
call to gpuMnocMboxRecv_DISPATCH
RmCCMultiGpuNvle
call to gpuIsNvleModeEnabledInHw_DISPATCH
call to gpuIsCCEnabledInHw_DISPATCH
call to gpuReadBusConfigCycle_DISPATCH
GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_PF0_DVSEC0_SEC_FAULT_REGISTER_1, &secDebug)
NVRM: SEC_FAULT lockdown detected. This is fatal. RM will now shut down. NV_PF0_DVSEC0_SEC_FAULT_REGISTER_1: 0x%x
SEC_FAULT type strings (each type appears twice, as "NVRM: SEC_FAULT type: X " and "SEC_FAULT: X"):
_FUSE_POD, _FUSE_SCPM, _FSP_SCPM, _SEC2_SCPM, _FSP_DCLS, _SEC2_DCLS, _GSP_DCLS, _PMU_DCLS, _IFF_SEQUENCE_TOO_BIG, _PRE_IFF_CRC_CHECK_FAILED, _POST_IFF_CRC_CHECK_FAILED, _IFF_ECC_UNCORRECTABLE_ERROR, _IFF_CMD_FORMAT_ERROR, _IFF_PRI_ERROR, _C2C_MISC_LINK_ERROR, _FSP_WDT, _GSP_WDT, _PMU_WDT, _SEC2_WDT, _C2C_HBI_LINK_ERROR, _FSP_EMP, _FSP_UNCORRECTABLE_ERRORS, _FUSE_POD_2ND, _FUSE_SCPM_2ND, _IFF_SEQUENCE_TOO_BIG_2ND, _PRE_IFF_CRC_CHECK_FAILED_2ND, _POST_IFF_CRC_CHECK_FAILED_2ND, _IFF_ECC_UNCORRECTABLE_ERROR_2ND, _IFF_CMD_FORMAT_ERROR_2ND, _IFF_PRI_ERROR_2ND, _DEVICE_LOCKDOWN, _FUNCTION_LOCKDOWN
call to pciReadBusConfigCycle_GB100
NVRM: Cannot find BAR firewall capability, falling back to wait for 4 seconds!
NVRM: unable to read NV_PF0_REVISION_ID_AND_CLASS_CODE
PCIRevisionID
NVRM: unable to read NV_PF0_SUBSYSTEM_ID_AND_VENDOR_ID
PCISubDeviceID
NVRM: unable to read NV_PF0_DEVICE_VENDOR_ID
PCIDeviceID
hwDefRegInfo
VendorId and Dvseclength fields not found
msgboxId
call to _gpuGetPcieExtCfgCapId_GB100
targetCapId
NVRM: capId for register 0x%x not found
call to _gpuGetPcieExtCfgDvsecInfo_GB100
NVRM: Register read failed : 0x%x
curCapId, venIdAddr, regVal2, curVendorId, curDvsecLen, capBaseAddr
NVRM: Register 0x%x not part of PCIe linked list
call to _gpuGetPcieCfgCapId_GB100
call to _gpuGetPcieCfgMsgboxId_GB100
targetMsgBoxId, curMsgBoxId
call to gpuDecodeDevice
call to gpuReadBusConfigCycle_GM107
call to gpuConfigAccessSanityCheck_DISPATCH
**hPci
call to _gpuFindPcieRegAddr_GB100
call to gpuWriteBusConfigCycle_GM107
call to osPciWriteDword
call to _gpuGetPciePartitionId_GB100
partitionId
call to _gpuGetPcieCfgCapBaseAddr_GB100
call to _gpuGetPcieCfgRegOffset_GB100
call to _gpuGetPcieExtCfgCapBaseAddr_GB100
call to _gpuGetPcieExtCfgRegOffset_GB100
src/kernel/gpu/arch/blackwell/kern_gpu_gb10b.c
GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_EP_PCFG_GPU_VSEC_DEBUG_SEC, &secDebug)
NVRM: SEC_FAULT lockdown detected. This is fatal. RM will now shut down. NV_EP_PCFG_GPU_VSEC_DEBUG_SEC: 0x%x
SEC_FAULT type strings (same paired pattern): _SEC2_L5_WDT, _GSP_L5_WDT, _PMU_L5_WDT
NVRM: SEC_2_FAULT type: _IFF_POS value: 0x%x
SEC_FAULT: _IFF_POS value: 0x%x
NVRM: pNumEntries[%u]
src/kernel/gpu/arch/blackwell/kern_gpu_gb202.c
pKernelFifo != NULL
call to kfifoCheckEngine_DISPATCH
call to ceIsCeGrce
call to ceutilsGetFirstAsyncCe_IMPL
GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_EP_PCFG_GPU_VSEC_DEBUG_SEC_1, &secDebug1)
GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_EP_PCFG_GPU_VSEC_DEBUG_SEC_2, &secDebug2)
NVRM: SEC_FAULT lockdown detected. This is fatal. RM will now shut down.
NV_EP_PCFG_GPU_VSEC_DEBUG_SEC_1: 0x%xNV_EP_PCFG_GPU_VSEC_DEBUG_SEC_2: 0x%x *NVRM: SEC_FAULT type: _IFF_PRE_IFF_CRC_CHECK_FAILED **NVRM: SEC_FAULT type: _IFF_PRE_IFF_CRC_CHECK_FAILED *SEC_FAULT: _IFF_PRE_IFF_CRC_CHECK_FAILED**SEC_FAULT: _IFF_PRE_IFF_CRC_CHECK_FAILED*NVRM: SEC_FAULT type: _IFF_POST_IFF_CRC_CHECK_FAILED **NVRM: SEC_FAULT type: _IFF_POST_IFF_CRC_CHECK_FAILED *SEC_FAULT: _IFF_POST_IFF_CRC_CHECK_FAILED**SEC_FAULT: _IFF_POST_IFF_CRC_CHECK_FAILED*NVRM: SEC_FAULT type: _FSP_UNCORRECTABLE_ERROR **NVRM: SEC_FAULT type: _FSP_UNCORRECTABLE_ERROR *SEC_FAULT: _FSP_UNCORRECTABLE_ERROR**SEC_FAULT: _FSP_UNCORRECTABLE_ERROR*NVRM: SEC_FAULT type: _FSP_L5_WDT **NVRM: SEC_FAULT type: _FSP_L5_WDT *SEC_FAULT: _FSP_L5_WDT**SEC_FAULT: _FSP_L5_WDT*NVRM: SEC_FAULT type: _XTAL_CTFDC **NVRM: SEC_FAULT type: _XTAL_CTFDC *SEC_FAULT: _XTAL_CTFDC**SEC_FAULT: _XTAL_CTFDC*NVRM: SEC_FAULT type: _CLOCK_XTAL_FMON **NVRM: SEC_FAULT type: _CLOCK_XTAL_FMON *SEC_FAULT: _CLOCK_XTAL_FMON**SEC_FAULT: _CLOCK_XTAL_FMON*NVRM: SEC_FAULT type: _CLOCK_GPC_FMON **NVRM: SEC_FAULT type: _CLOCK_GPC_FMON *SEC_FAULT: _CLOCK_GPC_FMON**SEC_FAULT: _CLOCK_GPC_FMON*NVRM: SEC_FAULT type: _INTERRUPT **NVRM: SEC_FAULT type: _INTERRUPT *SEC_FAULT: _INTERRUPT**SEC_FAULT: _INTERRUPT*NVRM: SEC_FAULT type: _BAR_FIREWALL_ENGAGE **NVRM: SEC_FAULT type: _BAR_FIREWALL_ENGAGE *SEC_FAULT: _BAR_FIREWALL_ENGAGE**SEC_FAULT: _BAR_FIREWALL_ENGAGE*SEC_2_FAULT: _IFF_POS value: 0x%x**SEC_2_FAULT: _IFF_POS value: 0x%x*pKernelHfrp != NULL*src/kernel/gpu/arch/blackwell/kern_gpu_gb20b.c**pKernelHfrp != NULL**src/kernel/gpu/arch/blackwell/kern_gpu_gb20b.c*call to khfrpPostCommandBlocking_IMPL*NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU HDA power with status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU HDA power with status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU HDA power with HFRP response status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU HDA power with HFRP response status = 0x%x *NVRM: ERROR: 
HFRP_CMD_fialed to turn on iGPU HDA power with status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU HDA power with status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU HDA power with HFRP response status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU HDA power with HFRP response status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU power with status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU power with status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU power with HFRP response status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn off iGPU power with HFRP response status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU power with status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU power with status = 0x%x *NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU power with HFRP response status = 0x%x **NVRM: ERROR: HFRP_CMD_fialed to turn on iGPU power with HFRP response status = 0x%x *call to gpuPowerOnHda_DISPATCH*NVRM: ERROR: HFRP_CMD_fialed to turn on HDA power **NVRM: ERROR: HFRP_CMD_fialed to turn on HDA power *NVRM: SEC_FAULT type: _GPMVDD_VMON **NVRM: SEC_FAULT type: _GPMVDD_VMON *SEC_FAULT: _GPMVDD_VMON**SEC_FAULT: _GPMVDD_VMON*NVRM: SEC_FAULT type: _GPCVDD_VMON **NVRM: SEC_FAULT type: _GPCVDD_VMON *SEC_FAULT: _GPCVDD_VMON**SEC_FAULT: _GPCVDD_VMON*NVRM: SEC_FAULT type: _SOC2GPU_SEC_FAULT_FUNCTION_LOCKDOWN_REQ **NVRM: SEC_FAULT type: _SOC2GPU_SEC_FAULT_FUNCTION_LOCKDOWN_REQ *SEC_FAULT: _SOC2GPU_SEC_FAULT_FUNCTION_LOCKDOWN_REQ**SEC_FAULT: _SOC2GPU_SEC_FAULT_FUNCTION_LOCKDOWN_REQ*prcKnobReadPayload*prcObjType*knobId*call to kfspSendAndReadMessage_IMPL*status == NV_OK || status == NV_ERR_INVALID_ARGUMENT*src/kernel/gpu/arch/hopper/kern_gpu_gh100.c**status == NV_OK || status == NV_ERR_INVALID_ARGUMENT**src/kernel/gpu/arch/hopper/kern_gpu_gh100.c*NVRM: kfspSendAndReadMessage status: 0x%x knobValue: 0x%x **NVRM: kfspSendAndReadMessage status: 0x%x knobValue: 0x%x *call to 
gpuIsProtectedPcieSupportedInFirmware_DISPATCH*call to gpuIsSelfHosted*bIsSelfHosted*NVRM: SELF HOSTED mode detected after reading VGPU static info. **NVRM: SELF HOSTED mode detected after reading VGPU static info. *pGSCI*NVRM: SELF HOSTED mode detected after reading GSP static info. **NVRM: SELF HOSTED mode detected after reading GSP static info. *call to osGpuReadReg032*call to gpuHandleSecFault_DISPATCH**Unknown SYS_PRI_ERROR_CODE*call to gpuGetSanityCheckRegReadError_DISPATCH*NVRM: Possible bad register read: addr: 0x%x, regvalue: 0x%x, error code: %s **NVRM: Possible bad register read: addr: 0x%x, regvalue: 0x%x, error code: %s *NVRM: SEC_FAULT type: _FAULT_FUSE_POD **NVRM: SEC_FAULT type: _FAULT_FUSE_POD *SEC_FAULT: _FAULT_FUSE_POD**SEC_FAULT: _FAULT_FUSE_POD*NVRM: SEC_FAULT type: _FAULT_FUSE_SCPM **NVRM: SEC_FAULT type: _FAULT_FUSE_SCPM *SEC_FAULT: _FAULT_FUSE_SCPM**SEC_FAULT: _FAULT_FUSE_SCPM*NVRM: SEC_FAULT type: _FAULT_FSP_SCPM **NVRM: SEC_FAULT type: _FAULT_FSP_SCPM *SEC_FAULT: _FAULT_FSP_SCPM**SEC_FAULT: _FAULT_FSP_SCPM*NVRM: SEC_FAULT type: _FAULT_SEC2_SCPM **NVRM: SEC_FAULT type: _FAULT_SEC2_SCPM *SEC_FAULT: _FAULT_SEC2_SCPM**SEC_FAULT: _FAULT_SEC2_SCPM*NVRM: SEC_FAULT type: _FAULT_FSP_DCLS **NVRM: SEC_FAULT type: _FAULT_FSP_DCLS *SEC_FAULT: _FAULT_FSP_DCLS**SEC_FAULT: _FAULT_FSP_DCLS*NVRM: SEC_FAULT type: _FAULT_SEC2_DCLS **NVRM: SEC_FAULT type: _FAULT_SEC2_DCLS *SEC_FAULT: _FAULT_SEC2_DCLS**SEC_FAULT: _FAULT_SEC2_DCLS*NVRM: SEC_FAULT type: _FAULT_GSP_DCLS **NVRM: SEC_FAULT type: _FAULT_GSP_DCLS *SEC_FAULT: _FAULT_GSP_DCLS**SEC_FAULT: _FAULT_GSP_DCLS*NVRM: SEC_FAULT type: _FAULT_PMU_DCLS **NVRM: SEC_FAULT type: _FAULT_PMU_DCLS *SEC_FAULT: _FAULT_PMU_DCLS**SEC_FAULT: _FAULT_PMU_DCLS*NVRM: SEC_FAULT type: _FAULT_SEQ_TOO_BIG **NVRM: SEC_FAULT type: _FAULT_SEQ_TOO_BIG *SEC_FAULT: _FAULT_SEQ_TOO_BIG**SEC_FAULT: _FAULT_SEQ_TOO_BIG*NVRM: SEC_FAULT type: _FAULT_PRE_IFF_CRC **NVRM: SEC_FAULT type: _FAULT_PRE_IFF_CRC *SEC_FAULT: _FAULT_PRE_IFF_CRC**SEC_FAULT: 
_FAULT_PRE_IFF_CRC*NVRM: SEC_FAULT type: _FAULT_POST_IFF_CRC **NVRM: SEC_FAULT type: _FAULT_POST_IFF_CRC *SEC_FAULT: _FAULT_POST_IFF_CRC**SEC_FAULT: _FAULT_POST_IFF_CRC*NVRM: SEC_FAULT type: _FAULT_ECC **NVRM: SEC_FAULT type: _FAULT_ECC *SEC_FAULT: _FAULT_ECC**SEC_FAULT: _FAULT_ECC*NVRM: SEC_FAULT type: _FAULT_CMD **NVRM: SEC_FAULT type: _FAULT_CMD *SEC_FAULT: _FAULT_CMD**SEC_FAULT: _FAULT_CMD*NVRM: SEC_FAULT type: _FAULT_PRI **NVRM: SEC_FAULT type: _FAULT_PRI *SEC_FAULT: _FAULT_PRI**SEC_FAULT: _FAULT_PRI*NVRM: SEC_FAULT type: _FAULT_WDG **NVRM: SEC_FAULT type: _FAULT_WDG *SEC_FAULT: _FAULT_WDG**SEC_FAULT: _FAULT_WDG*NVRM: SEC_FAULT type: _FAULT_BOOTFSM **NVRM: SEC_FAULT type: _FAULT_BOOTFSM *SEC_FAULT: _FAULT_BOOTFSM**SEC_FAULT: _FAULT_BOOTFSM*iffPos*NVRM: SEC_FAULT type: _IFF_POS value: 0x%x **NVRM: SEC_FAULT type: _IFF_POS value: 0x%x *NVRM: unable to read NV_EP_PCFG_GPU_REVISION_ID_AND_CLASSCODE **NVRM: unable to read NV_EP_PCFG_GPU_REVISION_ID_AND_CLASSCODE *NVRM: unable to read NV_EP_PCFG_GPU_SUBSYSTEM_ID **NVRM: unable to read NV_EP_PCFG_GPU_SUBSYSTEM_ID *NVRM: unable to read NV_EP_PCFG_GPU_ID **NVRM: unable to read NV_EP_PCFG_GPU_ID *call to osDevReadReg032*call to 
gpuWriteBusConfigCycle_DISPATCH*NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_FECS_ERR**NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_FECS_ERR*NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_PRI_TIMEOUT**NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_PRI_TIMEOUT*NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_FB_ACK_TIMEOUT**NV_PPRIV_SYS_PRI_ERROR_CODE_HOST_FB_ACK_TIMEOUT*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_TIMEOUT**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_TIMEOUT*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_DECODE**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_DECODE*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_RESET**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_RESET*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_FLOORSWEEP**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_FLOORSWEEP*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_STUCK_ACK**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_STUCK_ACK*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_0_EXPECTED_ACK**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_0_EXPECTED_ACK*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_FENCE_ERROR**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_FENCE_ERROR*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_SUBID_ERROR**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_SUBID_ERROR*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_ORPHAN**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_ORPHAN*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_DEAD_RING**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_DEAD_RING*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_TRAP**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_TRAP*NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_CLIENT_ERR**NV_PPRIV_SYS_PRI_ERROR_CODE_FECS_PRI_CLIENT_ERR*gpuBridgeType*call to gpuGetSliLinkDetectionHalFlag_539ab4*gpuGetSliLinkDetectionHalFlag_HAL(pGpu) == GPU_LINK_DETECTION_HAL_GK104*src/kernel/gpu/arch/maxwell/kern_gpu_gm107.c**gpuGetSliLinkDetectionHalFlag_HAL(pGpu) == GPU_LINK_DETECTION_HAL_GK104**src/kernel/gpu/arch/maxwell/kern_gpu_gm107.c*pGpuLoop**pGpuLoop*pGpuSaved**pGpuSaved*linkHalImpl**linkHalImpl*call to gpuGetNvlinkLinkDetectionHalFlag_DISPATCH*call to 
gpuDetectNvlinkLinkFromGpus_DISPATCH*sliLinkOutputMask**sliLinkOutputMask*bSliLinkCircular**bSliLinkCircular*sliLinkEndsMask**sliLinkEndsMask*vidLinkCount**vidLinkCount*NVRM: More than one type of SLI bridge detected! **NVRM: More than one type of SLI bridge detected! *bFoundBridge*NVRM: JT Version mismatch 0x%x **NVRM: JT Version mismatch 0x%x *NVRM: unable to read NV_XVE_REV_ID **NVRM: unable to read NV_XVE_REV_ID *NVRM: unable to read NV_XVE_SUBSYSTEM **NVRM: unable to read NV_XVE_SUBSYSTEM *NVRM: unable to read NV_XVE_ID **NVRM: unable to read NV_XVE_ID *NVRM: Offset 0x%08x exceeds range! **NVRM: Offset 0x%08x exceeds range! *call to regWrite032*NVRM: attempt to read cfg space of non-existant function %x **NVRM: attempt to read cfg space of non-existant function %x *call to gpuWriteFunctionConfigRegEx_DISPATCH*call to gpuReadBusConfigRegEx_DISPATCH*call to gpuIsCCFeatureEnabled_IMPL*bIsCCFeatureEnabled*call to gpuReadPassThruConfigReg_DISPATCH*bSriovEnabled*gpuGetNvlinkLinkDetectionHalFlag_HAL(pGpu) == GPU_LINK_DETECTION_HAL_GP100*src/kernel/gpu/arch/pascal/kern_gpu_gp100.c**gpuGetNvlinkLinkDetectionHalFlag_HAL(pGpu) == GPU_LINK_DETECTION_HAL_GP100**src/kernel/gpu/arch/pascal/kern_gpu_gp100.c**pKernelNvlink*gpuIndexChild*pGpuChild**pGpuChild*pKernelNvlinkChild**pKernelNvlinkChild*call to knvlinkIsNvlinkP2pSupported_IMPL*call to gpumgrUpdateSliLinkRouting*call to gpuCheckRmctrlAllowList*pAllowList*platformId*implementationId*revisionId*call to _gpuIsGfwBootCompleted_TU102*bGfwBootCompleted*src/kernel/gpu/arch/turing/kern_gpu_tu102.c*NVRM: failed to wait for GFW_BOOT: (progress 0x%x) **src/kernel/gpu/arch/turing/kern_gpu_tu102.c**NVRM: failed to wait for GFW_BOOT: (progress 0x%x) *call to kflcnWaitForHalt_DISPATCH*NVRM: GSP failed to halt with GFW_BOOT: (progress 0x%x) **NVRM: GSP failed to halt with GFW_BOOT: (progress 0x%x) *pgc6VirtAddr*call to kmemsysGetEccCounts_DISPATCH*call to kgmmuGetEccCounts_DISPATCH*call to kbifGetEccCounts_DISPATCH*call to 
kbusGetEccCounts_DISPATCH*call to gpuCheckIfFbhubPoisonIntrPending_DISPATCH*An uncorrectable ECC error detected (possible firmware handling failure) DRAM:%d, LTC:%d, MMU:%d, PCIE:%d**An uncorrectable ECC error detected (possible firmware handling failure) DRAM:%d, LTC:%d, MMU:%d, PCIE:%d*totalVFs*firstVfOffset*firstVFBarAddress**firstVFBarAddress*FirstVFBar0Address*FirstVFBar1Address*FirstVFBar2Address*bar0Size*bar1Size*bar2Size*b64bitBar0*b64bitBar1*b64bitBar2*call to gpuIsWarBug200577889SriovHeavyEnabled*bSriovHeavyEnabled*bEmulateVFBar0TlbInvalidationRegister*call to gpuIsClientRmAllocatedCtxBufferEnabled*call to gpuIsNonPowerOf2ChannelCountSupported*xveRegmapRef**xveRegmapRef*nFunc*numXveRegMapValid*numXveRegMapWrite*cacheData*gpuBootConfigSpace**gpuBootConfigSpace*bufBootConfigSpace**bufBootConfigSpace*call to kbifGetMSIXTableVectorControlSize_DISPATCH*controlSize*bufMsixTable**bufMsixTable*pKernelBif->xveRegmapRef[0].bufMsixTable != NULL*src/kernel/gpu/bif/arch/ada/kernel_bif_ad102.c**pKernelBif->xveRegmapRef[0].bufMsixTable != NULL**src/kernel/gpu/bif/arch/ada/kernel_bif_ad102.c*call to kbifInitXveRegMap_GM107*NVRM: Invalid argument, func: %d. **NVRM: Invalid argument, func: %d. *call to _kbifPreOsCheckErotGrantAllowed_AD102*NVRM: Timed out waiting for preOs to grant access to EEPROM **NVRM: Timed out waiting for preOs to grant access to EEPROM *call to clPcieWriteDword_IMPL*bIsFLRSupportedAndEnabled*call to kbifDoFunctionLevelReset_DISPATCH*status = kbifDoFunctionLevelReset_HAL(pGpu, pKernelBif)*src/kernel/gpu/bif/arch/ampere/kernel_bif_ga100.c**status = kbifDoFunctionLevelReset_HAL(pGpu, pKernelBif)**src/kernel/gpu/bif/arch/ampere/kernel_bif_ga100.c*NVRM: FLR is either not supported or is disabled. **NVRM: FLR is either not supported or is disabled. 
*PDB_PROP_GPU_IN_FULLCHIP_RESET*call to kfifoGetNumEngines_DISPATCH*call to kfifoEngineInfoXlate_DISPATCH*NVRM: Unable to get Reset index for engine ID (%u) **NVRM: Unable to get Reset index for engine ID (%u) *call to kbifGetValidEnginesToReset_DISPATCH*engineMask*call to kmcWritePmcEnableReg_DISPATCH*kmcWritePmcEnableReg_HAL(pGpu, pKernelMc, engineMask, NV_FALSE, NV_FALSE)**kmcWritePmcEnableReg_HAL(pGpu, pKernelMc, engineMask, NV_FALSE, NV_FALSE)*call to kbifGetValidDeviceEnginesToReset_DISPATCH*call to gpuIsUsePmcDeviceEnableForHostEngineEnabled*kmcWritePmcEnableReg_HAL(pGpu, pKernelMc, engineMask, NV_FALSE, gpuIsUsePmcDeviceEnableForHostEngineEnabled(pGpu))**kmcWritePmcEnableReg_HAL(pGpu, pKernelMc, engineMask, NV_FALSE, gpuIsUsePmcDeviceEnableForHostEngineEnabled(pGpu))*call to knvlinkPrepareForXVEReset_IMPL*NVRM: NVLINK prepare for fullchip reset failed. **NVRM: NVLINK prepare for fullchip reset failed. *PDB_PROP_GPU_PREPARING_FULLCHIP_RESET*call to kbusPrepareForXVEReset_GM107*NVRM: BUS prepare for devinit failed. **NVRM: BUS prepare for devinit failed. *call to kbifPrepareForXveReset_DISPATCH*NVRM: BIF prepare for devinit failed. **NVRM: BIF prepare for devinit failed. *oldPmc*oldPmcDevice*call to kbifResetHostEngines_DISPATCH**barRegOffsets*currOffset*pBarRegOffsets*NVRM: pBarRegOffsets is NULL! **NVRM: pBarRegOffsets is NULL! 
**pBarRegOffsets*bar0LoRegOffset*bar0HiRegOffset*bar1LoRegOffset*bar1HiRegOffset*bufConfigSpace*NVRM: Unable to read NV_XVE_DBG_CYA_0 **NVRM: Unable to read NV_XVE_DBG_CYA_0 *PDB_PROP_KBIF_64BIT_BAR0_SUPPORTED*PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE*src/kernel/gpu/bif/arch/ampere/kernel_bif_ga102.c**src/kernel/gpu/bif/arch/ampere/kernel_bif_ga102.c*pBifAtomicsmask != NULL*src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb100.c**pBifAtomicsmask != NULL**src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb100.c*NVRM: Unable to read NV_PF0_DEVICE_CAPABILITIES_2 **NVRM: Unable to read NV_PF0_DEVICE_CAPABILITIES_2 *call to kbifReadPcieCplCapsFromConfigSpace_DISPATCH*Timed out waiting for devinit to complete **Timed out waiting for devinit to complete *call to osDelay*Config space read of device control failed **Config space read of device control failed *FLR trigger failed **FLR trigger failed *GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_PF0_STATUS_COMMAND, ®Val)**GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_PF0_STATUS_COMMAND, ®Val)*GPU_BUS_CFG_CYCLE_WR32(pGpu, NV_PF0_STATUS_COMMAND, regVal)**GPU_BUS_CFG_CYCLE_WR32(pGpu, NV_PF0_STATUS_COMMAND, regVal)*NVRM: Unable to read NV_PF0_INITIAL_AND_TOTAL_VFS **NVRM: Unable to read NV_PF0_INITIAL_AND_TOTAL_VFS *NVRM: Unable to read NV_PF0_VF_STRIDE_AND_OFFSET **NVRM: Unable to read NV_PF0_VF_STRIDE_AND_OFFSET *firstVFOffset*call to _kbifGetBarInfo_GB100*b64bitVFBar0*b64bitVFBar1*b64bitVFBar2*barIs64Bit*pBarBaseAddress*barBaseAddr*(status == NV_OK)**(status == NV_OK)*pIs64BitBar*bMnocAvailable*NVRM: Unable to read NV_PF0_DESIGNATED_VENDOR_SPECIFIC_0_HEADER_1 **NVRM: Unable to read NV_PF0_DESIGNATED_VENDOR_SPECIFIC_0_HEADER_1 *NVRM: Unable to read NV_PF0_DESIGNATED_VENDOR_SPECIFIC_0_HEADER_2_AND_GENERAL **NVRM: Unable to read NV_PF0_DESIGNATED_VENDOR_SPECIFIC_0_HEADER_2_AND_GENERAL *NVRM: Unable to read NV_PF0_BASE_ADDRESS_REGISTERS_0 **NVRM: Unable to read NV_PF0_BASE_ADDRESS_REGISTERS_0 *NVRM: Unable to read NV_PF0_DEVICE_CAPABILITIES 
**NVRM: Unable to read NV_PF0_DEVICE_CAPABILITIES *PDB_PROP_KBIF_FLR_SUPPORTED*barOffsetEntry*NVRM: Read of NV_PF0_DEVICE_CONTROL_2 failed. **NVRM: Read of NV_PF0_DEVICE_CONTROL_2 failed. *NVRM: Write of NV_PF0_DEVICE_CONTROL_2 failed. **NVRM: Write of NV_PF0_DEVICE_CONTROL_2 failed. *NVRM: PCIe Requester atomics enabled. **NVRM: PCIe Requester atomics enabled. *NVRM: Invalid register type passed 0x%x **NVRM: Invalid register type passed 0x%x *xtlAerUncorr*xtlAerCorr*NVRM: Unable to read NV_PF0_UNCORRECTABLE_ERROR_STATUS **NVRM: Unable to read NV_PF0_UNCORRECTABLE_ERROR_STATUS *NVRM: Unable to read NV_PF0_CORRECTABLE_ERROR_STATUS **NVRM: Unable to read NV_PF0_CORRECTABLE_ERROR_STATUS *xtlDevCtrlStatus*NVRM: Unable to read NV_PF0_DEVICE_CONTROL_AND_STATUS! **NVRM: Unable to read NV_PF0_DEVICE_CONTROL_AND_STATUS! *EnteredRecoverySinceErrorsLastChecked*NVRM: Failed to read NV_PF0_DEVICE_CONTROL_AND_STATUS. **NVRM: Failed to read NV_PF0_DEVICE_CONTROL_AND_STATUS. *fieldVal*NVRM: Failed to write NV_PF0_DEVICE_CONTROL_AND_STATUS. **NVRM: Failed to write NV_PF0_DEVICE_CONTROL_AND_STATUS. *NVRM: Unable to read NV_PF0_DEVICE_CONTROL_AND_STATUS **NVRM: Unable to read NV_PF0_DEVICE_CONTROL_AND_STATUS *NVRM: Unable to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS **NVRM: Unable to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS *Unable to read NV_PF0_MSIX_CAPABILITY_HEADR_AND_CONTROL **Unable to read NV_PF0_MSIX_CAPABILITY_HEADR_AND_CONTROL *src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb10b.c**src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb10b.c*NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CAPABILITIES_2 **NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CAPABILITIES_2 *call to gpuIsTeslaBranded*timeoutData*src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb202.c*NVRM: Unable to read NV_EP_PCFG_GPU_VSEC_DEBUG_SEC_2. **src/kernel/gpu/bif/arch/blackwell/kernel_bif_gb202.c**NVRM: Unable to read NV_EP_PCFG_GPU_VSEC_DEBUG_SEC_2. *NVRM: Timeout polling CFG BAR firewall disengage. 
**NVRM: Timeout polling CFG BAR firewall disengage. *NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 **NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 *NVRM: Unable to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 **NVRM: Unable to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 *NVRM: LTR is disabled in the hierarchy **NVRM: LTR is disabled in the hierarchy *call to gpuWritePcieConfigCycle_DISPATCH*NVRM: Config write failed. **NVRM: Config write failed. *call to gpuReadPcieConfigCycle_DISPATCH*NVRM: Config read failed. **NVRM: Config read failed. *PDB_PROP_KBIF_SECONDARY_BUS_RESET_SUPPORTED*src/kernel/gpu/bif/arch/hopper/kernel_bif_gh100.c*NVRM: Unable to get number of GPU attached **src/kernel/gpu/bif/arch/hopper/kernel_bif_gh100.c**NVRM: Unable to get number of GPU attached *pKernelBif1*pRmApiGpu0*NVRM: GPU0 NV2080_CTRL_CMD_BUS_GET_C2C_INFO failed %s (0x%x) **NVRM: GPU0 NV2080_CTRL_CMD_BUS_GET_C2C_INFO failed %s (0x%x) *pRmApiGpu1*NVRM: GPU1 NV2080_CTRL_CMD_BUS_GET_C2C_INFO failed %s (0x%x) **NVRM: GPU1 NV2080_CTRL_CMD_BUS_GET_C2C_INFO failed %s (0x%x) *c2cInfoParamsGpu0*c2cInfoParamsGpu1*call to kbifDoSecondaryBusHotReset_GM107*bFLRSupportedAndEnabled*NVRM: FLR is NOT supported! Failing early in fullchip reset sequence **NVRM: FLR is NOT supported! Failing early in fullchip reset sequence *NVRM: FLR is force disabled using regkey/similar mechanism. Failing early. **NVRM: FLR is force disabled using regkey/similar mechanism. Failing early. *NVRM: NVLink prepare for fullchip reset failed. **NVRM: NVLink prepare for fullchip reset failed. *GPU_BUS_CFG_CYCLE_RD32( pGpu, NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS, ®Val)**GPU_BUS_CFG_CYCLE_RD32( pGpu, NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS, ®Val)*gpuCheckTimeout(pGpu, &timeout)**gpuCheckTimeout(pGpu, &timeout)*NVRM: Config space buffer is NULL! **NVRM: Config space buffer is NULL! *call to _kbifRestorePcieConfigRegisters_GH100*NVRM: Restoring PCIe config space failed for gpu. 
**NVRM: Restoring PCIe config space failed for gpu. *NVRM: GPU not back on the bus after %s, 0x%x != 0x%x! **NVRM: GPU not back on the bus after %s, 0x%x != 0x%x! *FLR**FLR*GC6 exit**GC6 exit*pmcBoot0*NVRM: Timeout GPU not back on the bus after %s, **NVRM: Timeout GPU not back on the bus after %s, *NVRM: Time spend on GPU back on bus is 0x%x ns, **NVRM: Time spend on GPU back on bus is 0x%x ns, *NVRM: Skipping PCIe Fn1 config space restore. **NVRM: Skipping PCIe Fn1 config space restore. *call to kbifRestorePcieConfigRegistersFn1_DISPATCH*NVRM: Restoring PCIe config space failed for azalia. **NVRM: Restoring PCIe config space failed for azalia. *NVRM: Config space save has been skipped. **NVRM: Config space save has been skipped. *call to _kbifSavePcieConfigRegisters_GH100*NVRM: Saving PCIe config space failed for gpu. **NVRM: Saving PCIe config space failed for gpu. *NVRM: Skipping PCIe Fn1 config space save. **NVRM: Skipping PCIe Fn1 config space save. *call to kbifSavePcieConfigRegistersFn1_DISPATCH*NVRM: Saving PCIe config space failed for azalia. **NVRM: Saving PCIe config space failed for azalia. *GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_EP_PCFG_GPU_CTRL_CMD_AND_STATUS, ®Val)**GPU_BUS_CFG_CYCLE_RD32(pGpu, NV_EP_PCFG_GPU_CTRL_CMD_AND_STATUS, ®Val)*GPU_BUS_CFG_CYCLE_WR32(pGpu, NV_EP_PCFG_GPU_CTRL_CMD_AND_STATUS, regVal)**GPU_BUS_CFG_CYCLE_WR32(pGpu, NV_EP_PCFG_GPU_CTRL_CMD_AND_STATUS, regVal)*NVRM: Unable to read NV_EP_PCFG_GPU_SRIOV_INIT_TOT_VF **NVRM: Unable to read NV_EP_PCFG_GPU_SRIOV_INIT_TOT_VF *NVRM: Unable to read NV_EP_PCFG_GPU_SRIOV_FIRST_VF_STRIDE **NVRM: Unable to read NV_EP_PCFG_GPU_SRIOV_FIRST_VF_STRIDE *call to _kbifGetBarInfo_GH100*NVRM: Unable to read NV_EP_PCFG_GPU_BARREG0 **NVRM: Unable to read NV_EP_PCFG_GPU_BARREG0 *NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CAPABILITIES **NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CAPABILITIES **pKernelFsp**pKernelSec2*bInFunctionLevelReset*call to osDoFunctionLevelReset*osDoFunctionLevelReset failed! 
**osDoFunctionLevelReset failed! *bPreparingFunctionLevelReset*call to osGpuWriteReg032*NVRM: Read of NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 failed. **NVRM: Read of NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 failed. *NVRM: Write of NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 failed. **NVRM: Write of NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS_2 failed. *call to kbifAllowGpuReqPcieAtomics_DISPATCH*NVRM: PCIe atomics not supported in this platform! **NVRM: PCIe atomics not supported in this platform! *call to osConfigurePcieReqAtomics*NVRM: PCIe requester atomics not enabled since completer is not capable! **NVRM: PCIe requester atomics not enabled since completer is not capable! *osPcieAtomicsOpMask*call to kbifEnablePcieAtomics_DISPATCH*NVRM: Unable to read NV_EP_PCFG_GPU_UNCORRECTABLE_ERROR_STATUS **NVRM: Unable to read NV_EP_PCFG_GPU_UNCORRECTABLE_ERROR_STATUS *NVRM: Unable to read NV_EP_PCFG_GPU_CORRECTABLE_ERROR_STATUS **NVRM: Unable to read NV_EP_PCFG_GPU_CORRECTABLE_ERROR_STATUS *NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS! **NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS! *NVRM: Failed to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS. **NVRM: Failed to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS. *NVRM: Failed to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS. **NVRM: Failed to write NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS. 
*NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS **NVRM: Unable to read NV_EP_PCFG_GPU_DEVICE_CONTROL_STATUS *Unable to read NV_EP_PCFG_GPU_MSIX_CAP_HEADER **Unable to read NV_EP_PCFG_GPU_MSIX_CAP_HEADER *NVRM: unable to read NV_EP_PCFG_GPU_MSI_64_HEADER **NVRM: unable to read NV_EP_PCFG_GPU_MSI_64_HEADER *pUpstreamPort*pUpstreamPortHandle**pUpstreamPortHandle***pUpstreamPortHandle*call to osPciReadWord*call to clPcieWriteWord_IMPL*call to kbifWaitForConfigAccessAfterReset_IMPL*PDB_PROP_GPU_IN_SECONDARY_BUS_RESET*call to kbifDoSecondaryBusHotReset_DISPATCH*src/kernel/gpu/bif/arch/maxwell/kernel_bif_gm107.c*NVRM: SBR not supported so saved BAR3 is not valid, skipping restore!!! **src/kernel/gpu/bif/arch/maxwell/kernel_bif_gm107.c**NVRM: SBR not supported so saved BAR3 is not valid, skipping restore!!! *newPmc*GPU_BUS_CFG_RD32(pGpu, NV_XVE_SW_RESET, &tempRegVal)**GPU_BUS_CFG_RD32(pGpu, NV_XVE_SW_RESET, &tempRegVal)*call to kbifApplyWarForBug1511451_56cd7a*NVRM: Failed while applying WAR for Bug 1511451 **NVRM: Failed while applying WAR for Bug 1511451 *call to kbusSendBusInfo_IMPL*kbusSendBusInfo(pGpu, GPU_GET_KERNEL_BUS(pGpu), &busInfo)**kbusSendBusInfo(pGpu, GPU_GET_KERNEL_BUS(pGpu), &busInfo)*call to gpuIsPciBusFamily_IMPL*call to kbifControlGetPCIEInfo_IMPL*kbifControlGetPCIEInfo(pGpu, pKernelBif, &busInfo)**kbifControlGetPCIEInfo(pGpu, pKernelBif, &busInfo)*call to kbifGetPciLinkMaxSpeedByPciGenInfo_IMPL*kbifGetPciLinkMaxSpeedByPciGenInfo(pGpu, pKernelBif, pciLinkGenInfo, &pciLinkMaxSpeed)**kbifGetPciLinkMaxSpeedByPciGenInfo(pGpu, pKernelBif, pciLinkGenInfo, &pciLinkMaxSpeed)*call to calculatePCIELinkRateMBps*calculatePCIELinkRateMBps(lanes, pciLinkMaxSpeed, &pcieLinkRate)**calculatePCIELinkRateMBps(lanes, pciLinkMaxSpeed, &pcieLinkRate)*call to kbifStoreBarRegOffsets_DISPATCH*call to kmcPrepareForXVEReset_DISPATCH*NVRM: MC prepare for XVE reset failed. **NVRM: MC prepare for XVE reset failed. 
String table (debug messages, assertion expressions, call-site markers, identifiers, and source file paths, in stream order):

call to kmemsysPrepareForXVEReset_56cd7a
NVRM: FB_IFACE disable for fullchip reset failed.
GPU_BUS_CFG_RD32(pGpu, NV_XVE_DEV_CTRL, &regVal)
GPU_BUS_CFG_WR32(pGpu, NV_XVE_DEV_CTRL, regVal)
azaliaBootConfigSpace, RMCFG_FEATURE_PLATFORM_WINDOWS, rootPort
NVRM: Cannot turn off L0s on C73 chipset, suspend/resume may fail (Bug 400044).
xveAerUncorr, xveAerCorr
NVRM: Unable to read NV_XVE_AER_UNCORR_ERR
NVRM: Unable to read NV_XVE_AER_CORR_ERR
xveDevCtrlStatus
NVRM: Unable to read NV_XVE_DEVICE_CONTROL_STATUS!
call to _kbifRestorePcieConfigRegisters_GM107
bGcxPmuCfgRestore
NVRM: Timeout waiting for PCIE Config Space Restore from PMU, RM takes over
*pReg
call to kbifRestoreBar0_DISPATCH
bufOffset
call to gpuWriteFunctionConfigReg_DISPATCH, _kbifSavePcieConfigRegisters_GM107
(pRegmapRef->xveRegMapWrite[index] & mask) == 0 || (pRegmapRef->xveRegMapValid[index] & mask) != 0
call to gpuReadFunctionConfigReg_DISPATCH
NVRM: unable to read NV_XVE_MSI_CTRL
call to gpuSanityCheckRegisterAccess_IMPL
GPU_BUS_CFG_RD32(pGpu, NV_XVE_ID, &data)
GPU_BUS_CFG_RD32(pGpu, NV_XVE_VCCAP_HDR, &data)

src/kernel/gpu/bif/arch/maxwell/kernel_bif_gm200.c
src/kernel/gpu/bif/arch/pascal/kernel_bif_gp10X.c
call to kbifPrepareForFullChipReset_DISPATCH

src/kernel/gpu/bif/arch/turing/kernel_bif_tu102.c
call to kbifSavePcieConfigRegisters_DISPATCH
NVRM: Config registers save failed!
call to kbifIsMSIXEnabledInHW_DISPATCH
bMSIXEnabled
call to _kbifSaveMSIXVectorControlMasks, kbifStopSysMemRequests_DISPATCH
NVRM: BIF Stop Sys Mem requests failed.
call to kbifWaitForTransactionsComplete_DISPATCH
NVRM: BIF Wait for Transactions complete failed.
call to kbifTriggerFlr_DISPATCH, kbifRestorePcieConfigRegisters_DISPATCH
NVRM: Config registers restore failed!
NVRM: Entering secure boot completion wait.
call to gpuWaitForGfwBootComplete_DISPATCH
NVRM: VBIOS boot failed!!
NVRM: Exited secure boot completion wait with status = NV_OK.
call to _kbifRestoreMSIXVectorControlMasks, kbifClearDownstreamReadCounter_DISPATCH
controlSize < 32
NVRM: Timeout waiting for transactions pending to go to 0
NVRM: Unable to read NV_XVE_DEVICE_CAPABILITY
pKernelHostVgpuDevice != NULL
pNumAreas != NULL
call to kvgpumgrGetMaxInstanceOfVgpu
kvgpumgrGetMaxInstanceOfVgpu(pKernelHostVgpuDevice->vgpuType, &maxInstance)
bDryRun, *pOffsets, *pSizes, offsetEnd, offsetStart
call to hypervisorIsType_IMPL
idx <= *pNumAreas
NVRM: VF Sparse Mmap Region[%u] range 0x%llx - 0x%llx, size 0x%llx
Unable to read NV_XVE_MSIX_CAP_HDR
pGpuHandle
call to kbifDoFullChipReset_DISPATCH
status = kbifDoFullChipReset_HAL(pGpu, pKernelBif)

src/kernel/gpu/bif/kernel_bif.c
NVRM: Unknown PCIe Gen Info
NVRM: Timeout polling GPU back on bus
pBusInfo
call to kbifGetGpuLinkCapabilities_IMPL, _doesBoardHaveMultipleGpusAndSwitch, clPcieReadPortConfigReg_IMPL
NVRM: Squashing rmStatus: %x
call to kbifGetGpuLinkControlStatus_IMPL, kbifGetXveStatusBits_DISPATCH, kbifClearXveStatus_DISPATCH, clPcieReadDevCtrlStatus_IMPL, clPcieClearDevCtrlStatus_IMPL, kbifGetXveAerBits_DISPATCH, kbifClearXveAer_DISPATCH, kbifIsMSIEnabledInHW_DISPATCH, gpuIsMultiGpuBoard, kbifGetBusOptionsAddr_DISPATCH
NVRM: Unable to read %x
pcieConfigReg, linkCap
call to gpuVerifyExistence_DISPATCH, osRemove1HzCallback
p2pOverride, forceP2PType, RMForceP2PType, pcieP2PType, RMPcieP2PType, peerMappingOverride, PeerMappingOverride
NVRM: allow peermapping reg key = %d
PDB_PROP_KBIF_FORCE_PCIE_CONFIG_SAVE
bForceDisableFLR, RMPcieFLRPolicy
NVRM: Pcie FLR Policy reg key = %d
RMPcieFlrDevinitTimeout, flrDevInitTimeoutScale
NVRM: PCI-E device status errors pending (%08X):
NVRM: Clearing these errors..
NVRM: PCI-E device AER errors pending (%08X):
PDB_PROP_KBIF_IS_MSIX_ENABLED, PDB_PROP_KBIF_IS_MSIX_CACHED, PDB_PROP_KBIF_IS_MSI_ENABLED
NVRM: MSI is enabled for vGPU, but no need to re-ARM
PDB_PROP_KBIF_IS_MSI_CACHED
call to kbifIsMSIEnabled_IMPL, kbifRearmMSI_DISPATCH, intrRetriggerTopLevel_DISPATCH, kbifIsMSIXEnabled_IMPL, kbifEnableExtendedTagSupport_DISPATCH, kbifPcieConfigEnableRelaxedOrdering_DISPATCH, kbifPcieConfigDisableRelaxedOrdering_DISPATCH
PDB_PROP_CL_ROOTPORT_NEEDS_NOSNOOP_WAR
call to kbifEnableNoSnoop_DISPATCH
NVRM: Could not allocate pStaticInfo for KernelBif
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_BIF_GET_STATIC_INFO, pStaticInfo, sizeof(*pStaticInfo))
PDB_PROP_KBIF_PCIE_GEN4_CAPABLE, PDB_PROP_KBIF_IS_C2C_LINK_UP, PDB_PROP_KBIF_DEVICE_IS_MULTIFUNCTION, PDB_PROP_KBIF_GCX_PMU_CFG_SPACE_RESTORE
dmaCaps
call to clPcieReadL1SsCapability_IMPL
L1SsCap, chipsetL1ssEnable, bChipsetPcipmL12Enabled, bChipsetPcipmL11Enabled, bChipsetAspmL12Enabled, bChipsetAspmL11Enabled
NVRM: Chipset supports L1 PM substates. L1 PM Capabilities Register 0x%x L1 PM Control 1 Register 0x%x L1 PM Control 2 Register 0x%x
NVRM: L1 PM susbstates is not enabled in RootPort. L1 PM Control 1 Register 0x%x
NVRM: Failed to get L1 PM substates capabilites from Root Port
call to kbifInitRelaxedOrderingFromEmulatedConfigSpace_DISPATCH, _kbifSetPcieRelaxedOrdering
pcieRo
NVRM: NV2080_CTRL_CMD_INTERNAL_BIF_SET_PCIE_RO failed %s (0x%x)
pBifPciePowerControlParams, pciePowerControlInfo, pciePowerControlMask, pciePowerControlIdentifiedKeyOrder, pciePowerControlIdentifiedKeyLocation
call to _kbifCreateChipsetPayloadStr
chipsetPayloadStr
call to _kbifCreateChipsetGpuPayloadStr
chipsetGpuPayloadStr
call to _kbifParsePciePowerControl
pPciePowerControlValue, PCIEPowerControl, pChipsetGpuPayloadStr, identifiedKeyOrder, identifiedKeyLocation
call to osGetNvGlobalRegistryDword, osGetUefiVariable, _kbifNbsiReadRTD3RegistryDword
pChipsetPayloadStr, wildcardVar
call to nbsiReadRegistryString
RMSbiosEnableASPMDT, aspmCookie, pRegParmStr
pRegParmStr != NULL
pNbsiObj
NVRM: osReadRegistryDword called in Sleep path can cause excessive delays!
maxOSndx
call to fnv1Hash20Array
elementHashArray
call to nvStringLen
*pRetBuf
call to getNbsiValue
errorCode != NV2080_CTRL_BIOS_GET_NBSI_INCOMPLETE
gpuFourPartIds
call to nvU64ToStr
gpuFourPartIdStr, *pChipsetGpuPayloadStr, *pChipsetPayloadStr, offsetFromBase, _, chipsetFourpartIds, chipsetFourPartIdStr
NVRM: BIF DMA Caps: %08x
call to kbifGetDmaCaps_IMPL, kbifExecC73War_DISPATCH, kbifClearConfigErrors_IMPL, kbifInitLtr_DISPATCH, osSchedule1HzCallback, kbifInitXveRegMap_DISPATCH, kbifInit_DISPATCH
kbifInit_HAL(pGpu, pKernelBif)
call to osInitMapping
osInitMapping(pGpu)
call to kbifStaticInfoInit_IMPL, kbifInitDmaCaps_DISPATCH
NVRM: BIF disabling noncoherent on OS w/o usable PAT support
call to _kbifInitRegistryOverrides, kbifApplyWARBug3208922_DISPATCH, kbifDisableP2PTransactions_DISPATCH, kbifCacheMnocSupport_DISPATCH, kbifCacheFlrSupport_DISPATCH, kbifCache64bBar0Support_DISPATCH, kbifCacheVFInfo_DISPATCH
atomicsCaps

src/kernel/gpu/bif/kernel_bif_vgpu.c
pVSI != NULL
bIsC2CLinkUp, bPcieGen4Capable, bIsDeviceMultiFunction, bGcxPmuCfgSpaceRestore

src/kernel/gpu/bus/arch/ampere/kern_bus_ga100.c
pKernelBus->bar1ResizeSizeIndex >= NV_XVE_RESIZE_BAR1_CTRL_BAR_SIZE_MIN
pKernelBus->bar1ResizeSizeIndex <= NV_XVE_RESIZE_BAR1_CTRL_BAR_SIZE_MAX
bar1ResizeSizeIndex
NVRM: BAR1 size mismatch: current: 0x%x, expected: 0x%x
NVRM: Most likely SBIOS did not restore the BAR1 size
NVRM: Please update your SBIOS!
pGpuPeer, p2p, busNvlinkPeerNumberMask
NVRM: NVLINK P2P not set up between GPU%u and GPU%u, checking for PCIe P2P...
flaInfo
call to gpuGetFlaVasSize_DISPATCH
flaAction, bFlaBind, bFlaEnabled, pInstblkMemDesc, imbPhysAddr, paramAddrSpace
NVRM: FLA bind failed, status: %x
call to knvlinkIsGpuConnectedToNvswitch_IMPL
NVRM: FLA base: %llx, size: %llx is verified
fbSizeBytes
call to knvlinkIsForcedConfig_IMPL, knvlinkAreLinksRegistryOverriden, kbusGetPeerIdFromTable_GM107, knvlinkGetPeerLinkMask, kbusGetEgmPeerId_DISPATCH, kbusGetPeerId_DISPATCH, kbusGetUnusedPeerId_GM107
NVRM: GPU%d: peerID not available for NVLink P2P
call to kbusReserveP2PPeerIds_GM200
*bEgmPeer
NVRM: BAR allocation trying to request reflected mapping, by passing the map flags, failing the request
NVRM: BAR allocation trying to request reflected mapping, by setting ENCRYPTED flag in memdesc, failing the request
*pInstblkMemDesc
call to memmgrMemDescMemSet_IMPL
NVRM: Nvlink is not supported in this GPU: %x
call to kmigmgrIsMIGNvlinkP2PSupported_IMPL
NVRM: FLA is not supported with MIG enabled, GPU: %x
call to kbusIsFlaSupported
NVRM: FLA is not supported, GPU: %x
call to kbusIsFlaEnabled
NVRM: FLA is not enabled, GPU: %x
NVRM: returning the vas: %p for GPU: %x start: 0x%llx, limit:0x%llx
NVRM: Freeing the FLA client: 0x%x FLAVASpace:%x, gpu:%x
hFlaVASpace, *pFlaVAS, *pFabricVAS
call to kbusDestructFlaInstBlk_DISPATCH
bFlaAllocated, bFlaRangeRegistered
call to kbusSetupUnbindFla_DISPATCH
NVRM: RPC to host failed with status: 0x%x
call to knvlinkIsNvswitchProxyPresent_IMPL, knvlinkExecGspRmRpc_IMPL
NVRM: Failed to get the NVSwitch FLA address
call to knvlinkSetUniqueFlaBaseAddress_IMPL
NVRM: Failed to enable FLA for GPU: %x
NVRM: Skipping the FLA initialization in Host vGPU
call to kbusAllocateFlaVaspace_DISPATCH
kbusAllocateFlaVaspace_HAL(pGpu, pKernelBus, base, size)
NVRM: FLA is disabled, gpu %x is in MIG/SLI mode
NVRM: Enabling FLA_SUPPORTED to TRUE, gpu: %x ...
call to kbusDetermineFlaRangeAndAllocate_DISPATCH
kbusDetermineFlaRangeAndAllocate_HAL(pGpu, pKernelBus, base, size)
size != 0
IS_GFID_VF(gfid)
hClient != NV01_NULL_OBJECT
hDevice != NV01_NULL_OBJECT
hSubdevice != NV01_NULL_OBJECT
hVASpace != NV01_NULL_OBJECT
!pKernelBus->flaInfo.bFlaAllocated
call to vaspaceGetByHandleOrDeviceDefault_IMPL
NVRM: failed allocating fabric vaspace, status=0x%x
NVRM: failed pinning down fabric vaspace, status=0x%x
NVRM: failed pinning down legacy vaspace, status=0x%x
call to kbusConstructFlaInstBlk_DISPATCH
NVRM: failed constructing instblk for FLA, status=0x%x
call to kgmmuInstBlkInit_IMPL
NVRM: failed instantiating instblk for FLA, status=0x%x
bToggleBindPoint
call to fabricvaspaceInitUCRange_IMPL
fabricvaspaceInitUCRange(dynamicCast(pGpu->pFabricVAS, FABRIC_VASPACE), pGpu, base, size)
call to kbusAllocateLegacyFlaVaspace_DISPATCH
kbusAllocateLegacyFlaVaspace_HAL(pGpu, pKernelBus, base, size)
NVRM: failed getting the vaspace from handle, status=0x%x
NVRM: failed pinning down FLAVASpace, status=0x%x
call to gpuCheckIsP2PAllocated_DISPATCH, kbusSetupBindFla_DISPATCH
NVRM: Skipping binding FLA, because no P2P GFID is validated yet
NVRM: failed binding instblk for FLA, status=0x%x
NVRM: failed allocating FLA VASpace status=0x%x
call to serverutilGenResourceHandle
nv0080AllocParams
NVRM: failed creating device, status=0x%x
nv2080AllocParams
NVRM: failed creating sub-device, status=0x%x
NVRM: failed generating vaspace handle, status=0x%x
bAcquireLock
rmGpuLocksAcquire(GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM_FLA)
NVRM: failed allocating vaspace, status=0x%x

src/kernel/gpu/bus/arch/blackwell/kern_bus_gb100.c
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_PF_BAR1_SPA, &params, sizeof(params))
pKernelBus->bar1ResizeSizeIndex >= NV_PF0_PF_RESIZABLE_BAR_CONTROL_BAR_SIZE_MIN
NVRM: Unable to read NV_PF0_PF_RESIZABLE_BAR_CONTROL
NVRM: Resizable bar capability is absent

src/kernel/gpu/bus/arch/blackwell/kern_bus_gb10b.c
baseType == XAL_BASE
xalApertures, xalApertureCount
pKernelBus->xalApertures != NULL
call to ioaprtInit
pIOApertures, *pIOApertures
call to kbusFlushSingle_DISPATCH

src/kernel/gpu/bus/arch/blackwell/kern_bus_gb202.c
pGpu->getProperty(pGpu, PDB_PROP_GPU_COHERENT_CPU_MAPPING) == NV_FALSE
call to rmcfg_IsTEGRA_TEGRA_NVDISP_GPUS, gpuIsCacheOnlyModeEnabled, kbusIsBar2TestSkipped
NVRM:
flagsClean
call to kmemsysIsL2CleanFbPull
NVRM: input offset 0x%llx size 0x%llx exceeds surface size 0x%llx
bIsStandaloneTest, *pOffset
call to memdescCreateExisting
NVRM: Could not allocate vidmem to test bar2 with
call to memdescMapInternal
testMemoryOffset, testMemorySize
call to kgmmuGetHwPteApertureFromMemdesc_GM107
testAddrSpace
NVRM: Test is not supported. NV_XAL_EP_BAR0_WINDOW only supports vidmem
NVRM: Testing BAR0 window...
call to kbusGetBAR0WindowVidOffset_DISPATCH
bar0Window, bar0TestAddr
call to kbusWriteBAR0WindowBase_DISPATCH
testData
NVRM: Pre-L2 invalidate evict: Address 0x%llx programmed through the bar0 window with value 0x%x did not read back the last write.
call to kmemsysSendL2InvalidateEvict_IMPL
NVRM: L2 evict failed
NVRM: Post-L2 invalidate evict: Address 0x%llx programmed through the bar0 window with value 0x%x did not read back the last write
NVRM: Setup a trigger on write with a 3 quarters post trigger capture
NVRM: and search for the last bar0 window write not returning the same value in a subsequent read
NVRM: Bar0 window tests successfully
virtualBar2, bar2VirtualAddr
NVRM: MMUTest Writing test data through virtual BAR2 starting at bar2 offset (%p - %p) = %p and of size 0x%x
NVRM: MMUTest The physical address being targetted is 0x%llx
call to osFlushCpuWriteCombineBuffer
bar2ReadbackData
NVRM: MMUTest BAR2 readback VA = 0x%llx returned garbage 0x%x
NVRM: bar0Window = 0x%llx, testMemoryOffset = 0x%llx, testAddrSpace = %d, _XAL_EP_BAR0_WINDOW = 0x%08x
NVRM: MMUTest BAR0 window offset 0x%x returned garbage 0x%x
NVRM: Setup a trigger for write and in the waves search the last few bar2 virtual writes mixed with bar0 window reads
call to kbusFlush_DISPATCH
NVRM: MMUTest BAR2 Read of virtual addr 0x%x returned garbage 0x%x
call to memdescUnmapInternal
NVRM: BAR2 virtual test passes

src/kernel/gpu/bus/arch/hopper/kern_bus_gh100.c
(NvU32)baseType < pKernelBus->xalApertureCount
startToken, completedToken
NVRM: - timeout error waiting for startToken = 0x%x cnt=%d
!API_GPU_IN_RESET_SANITY_CHECK(pGpu)
API_GPU_ATTACHED_SANITY_CHECK(pGpu)
timeoutStatus
call to vgpuGetCallingContextGfid
vgpuGetCallingContextGfid(pGpu, &gfid)
call to kgmmuGetMemAperture_IMPL
bar1, ptrLow, ptrHigh
call to kbusIsBar1PhysicalModeEnabled
blockMode
NVRM: timed out waiting for bar1 binding to complete
bar2, pInstBlkMemDescForBootstrap
call to kbusIsPhysicalBar2InitPagetableEnabled
bIsModePhysical, instBlkAperture, instBlkAddr, valueLowAddr, valueHighAddr
call to kbusWriteBar2BlockRegisters_DISPATCH
NVRM: timed out waiting for bar2 binding to complete
pKernelBus->bar1ResizeSizeIndex >= NV_EP_PCFG_GPU_PF_RESIZE_BAR_CTRL_BAR_SIZE_MIN
NVRM: Unable to read NV_EP_PCFG_GPU_PF_RESIZE_BAR_CTRL
NVRM: Resizable Bar capability is absent
NVRM: NVLINK P2P not set up between GPU%u and GPU%u
call to fabricvaspaceGetUCFlaLimit
ucFlaLimit
call to fabricvaspaceGetUCFlaStart, gpuFabricProbeIsSupported, kbusAllocateFlaVaspace_GA100, kbusDetermineFlaRangeAndAllocate_GA100
c2cPeerInfo, busC2CPeerNumberMask
call to _kbusRemoveC2CPeerMapping
_kbusRemoveC2CPeerMapping(pGpu0, pKernelBus0, pGpu1, peer0)
_kbusRemoveC2CPeerMapping(pGpu1, pKernelBus1, pGpu0, peer1)
c2cPeer0, c2cPeer1
call to _kbusGetC2CP2PPeerId
NVRM: Failed to create C2C P2P mapping between GPU%u and GPU%u
busC2CMappingRefcountPerPeerId
NVRM: - P2P: Peer mapping is already in use for gpu instances %x and %x with peer id's %d and %d. Increasing the mapping refcounts for the peer IDs to %d and %d respectively.
call to _kbusCreateC2CPeerMapping
_kbusCreateC2CPeerMapping(pGpu0, pKernelBus0, pGpu1, *peer0)
_kbusCreateC2CPeerMapping(pGpu1, pKernelBus1, pGpu0, *peer1)
NVRM: added C2C P2P mapping between GPU%u (peer %u) and GPU%u (peer %u)
call to kbifIsC2CP2PSupported_DISPATCH
NVRM: GPU%d: peerID not available for C2C P2P
NVRM: - P2P: Using Default RM mapping for P2P.
NVRM: - P2P: Incorrect PeerId: %d passed down to RM
gpuInst0, gpuInst1, p2pPcieBar1, busBar1PeerRefcount
call to _kbusRemoveStaticBar1IOMMUMappingForGpuPair
NVRM: removed PCIe BAR1 P2P mapping between GPU%u and GPU%u
call to kbusIsPcieBar1P2PMappingSupported_DISPATCH, _kbusCreateStaticBar1IOMMUMappingForGpuPair
_kbusCreateStaticBar1IOMMUMappingForGpuPair(pGpu0, pKernelBus0, pGpu1, pKernelBus1)
NVRM: added PCIe BAR1 P2P mapping between GPU%u and GPU%u
pDmaAddress, pDmaSize
(pDmaAddress != NULL) && (pDmaSize != NULL)
vgpuGetCallingContextGfid(pPeerGpu, &peerGfid)
staticBar1, pPeerDmaMemDesc, pDmaMemDesc
pPeerDmaMemDesc != NULL
call to memdescGetPtePhysAddrsForGpu, _kbusCreateStaticBar1IOMMUMapping
NVRM: IOMMU mapping failed from GPU%u to GPU%u
call to _kbusRemoveStaticBar1IOMMUMapping
vgpuGetCallingContextGfid(pPeerGpu, &peerGpuGfid)
memdescMapIommu(pPeerDmaMemDesc, pSrcGpu->busInfo.iovaspaceId)
NVRM: The peer DMA address 0x%llx is not aligned at 0x%llx
vgpuGetCallingContextGfid(pPeerGpu, &peerGfid) == NV_OK
pPeerKernelBus->bar1[peerGfid].staticBar1.pDmaMemDesc != NULL
p2pPcie, peerNumberMask
call to osNumaOnliningEnabled, _kbusMemoryIsInFbRegion, osUnmapSystemMemory
memdescGetContiguity(pMemDesc, AT_CPU)
coherentCpuMapping, refcnt
pKernelBus->coherentCpuMapping.refcnt[i] != 0
No mappings found
call to osMapSystemMemory
*pCpuMapping
pKernelBus->coherentCpuMapping.pCpuMapping[i] != NvP64_NULL
*physAddr
region < pKernelBus->coherentCpuMapping.nrMapping
NVRM: Skipping Coherent link test
NVRM: Could not allocate vidmem to test coherent link with
NVRM: Coherent link test buffer PA: 0x%llx
call to osFlushGpuCoherentCpuCacheRange
NVRM: Coherent Link test readback VA = 0x%llx returned garbage 0x%x
NVRM: Coherent link test passes
gpuIsSelfHosted(pGpu) && pKernelBif->getProperty(pKernelBif, PDB_PROP_KBIF_IS_C2C_LINK_UP)
kbusGetBar1VASpace_HAL(pGpu, pKernelBus) == NULL
listCount(&pKernelBus->virtualBar2[GPU_GFID_PF].usedMapList) == 0
call to osNumaMemblockSize
osNumaMemblockSize(&memblockSize)
fbRegion, cachingMode, nrMapping, numReservedRegions, reservedRegions, wprRegions
NVRM: wpr1 0x%llx->0x%llx, wpr2 0x%llx->0x%llx
call to rangesCarveout
numReservedRegions <= COHERENT_CPU_MAPPING_TOTAL_REGIONS - 1
call to rangeLength
busAddrStart, busAddrSize
call to osMapPciMemoryKernel64
NVRM: coherent link mapping. i: %d base: 0x%llx size: 0x%llx
bFlush == NV_FALSE
bCoherentCpuMapping
NVRM: Enabling CPU->C2C->FBMEM path
peerId < P2P_MAX_NUM_PEERS
call to kbusIsPeerIdValid_GP100
NVRM: C2C P2P not set up between GPU%u and GPU%u, checking for Nvlink...
call to kbusGetPeerId_GP100, kbusRemoveP2PMappingForC2C_DISPATCH, kbusRemoveP2PMappingForNvlink_GP100, kbusRemoveP2PMappingForBar1P2P_DISPATCH, kbusRemoveP2PMappingForMailbox_DISPATCH
NVRM: P2P type %d is not supported
call to kbusCreateP2PMappingForC2C_DISPATCH, kbusCreateP2PMappingForNvlink_GP100, kbusCreateP2PMappingForBar1P2P_DISPATCH, kbusCreateP2PMappingForMailbox_DISPATCH, kbusSetupPeerBarAccess_IMPL
pMailboxBar1MaxOffset64KB
call to kbusGetP2PWriteMailboxAddressSize_STATIC_DISPATCH
*pParentGpu, bFixedAddressAllocate, writeMailboxBar1Addr, writeMailboxTotalSize, vaAllocMax
call to vaspaceAlloc_DISPATCH
NVRM: cannot allocate vaspace for P2P write mailboxes (0x%x)
GPU_GET_KERNEL_BUS(pParentGpu)->p2pPcie.writeMailboxBar1Addr == pKernelBus->p2pPcie.writeMailboxBar1Addr
NVRM: [GPU%u] P2P write mailboxes allocated at BAR1 addr = 0x%llx
pPageLevelsMemDesc
call to memmgrMemDescEndTransfer_IMPL
*pPageLevels
call to kbusDestroyCpuPointerForBusFlush_DISPATCH, kbusFlushVirtualBar2_VBAR2, kbusReadBAR0WindowBase_DISPATCH
cachedBar0WindowVidOffset
kbusSetBAR0WindowVidOffset_HAL call in coherent path
(vidOffset & 0xffff) == 0
call to kbusValidateBAR0WindowBase_DISPATCH
kbusValidateBAR0WindowBase_HAL(pGpu, pKernelBus, vidOffset >> NV_XAL_EP_BAR0_WINDOW_BASE_SHIFT)
NVRM: mapping BAR0_WINDOW to VID:%x'%08x
bar1Block

src/kernel/gpu/bus/arch/maxwell/kern_bus_gm107.c
pKernelBus != NULL
call to kbusIsDirectMappingAllowed_DISPATCH
kbusIsDirectMappingAllowed_HAL(pGpu, pKernelBus, pMemDesc, mapFlags, &bDirectSysMappingAllowed)
call to kbusIsReflectedMappingAccessAllowed, kbusIsP2pMailboxClientAllocated, kbusUnlinkP2P_GM107, memdescMapOld
*pDstMem, *pSrcMem
call to memdescUnmapOld
*pSrcPriv, bar0Offset
call to kbusSetBAR0WindowVidOffset_DISPATCH
*pDstPriv
call to portUtilCheckOverlap
!portUtilCheckOverlap((const NvU8*)dest, size, (const NvU8*)source, size)
*pTmp
pTmp != NULL
srcPA
call to kbusMemoryCopyFromPtr_GM107, kbusRewritePTEsForExistingMapping_DISPATCH
kbusRewritePTEsForExistingMapping_HAL(pGpu, pKernelBus, pKernelBus->virtualBar2[gfid].pPageLevelsMemDesc)
pFlushMemDesc
kbusRewritePTEsForExistingMapping_HAL(pGpu, pKernelBus, pKernelBus->pFlushMemDesc)
call to kgmmuInvalidateTlb_DISPATCH
*pPDB, pPDEMemDesc
call to kbusBindBar2_DISPATCH
kbusBindBar2_HAL(pGpu, pKernelBus, BAR2_MODE_VIRTUAL)
call to kbusInitVirtualBar2_VBAR2
kbusInitVirtualBar2_HAL(pGpu, pKernelBus)
call to kbusSetupCpuPointerForBusFlush_DISPATCH
kbusSetupCpuPointerForBusFlush_HAL(pGpu, pKernelBus)
userCtx, *pTempWalk, pWalkForBootstrap, pBar2GmmuFmt, *pLevelFmt, bBootstrap
call to mmuWalkSetUserCtx, mmuWalkCommitPDEs, kbusGetVaLimitForBar2_DISPATCH
pPageLevelsForBootstrap
call to kbusPreInitVirtualBar2_VBAR2
bMigrating
call to mmuWalkSparsify
pPageLevelsMemDescForBootstrap
call to kbusReleaseRmAperture_VBAR2
*pPageLevelsForBootstrap, pWalkStagingBuffer
call to kbusCreateStagingMemdesc
*pWalkStagingBuffer
call to mmuWalkCreate
mmuWalkSetUserCtx(pWalk, &userCtx)
call to mmuWalkReserveEntries
mmuWalkSetUserCtx(pWalk, NULL)
NVRM: (BAR2 0x%llx, PDB 0x%llx): vaLimit = 0x%llx
call to memdescSetPageSize
*pWalkForBootstrap
call to mmuWalkLevelInstancesForceFree, mmuWalkDestroy
*pInstBlkMemDescForBootstrap
pKernelBus->bar2[gfid].bBootstrap
call to memdescDescribe, kgmmuGetBigPageSize_DISPATCH
NULL != pMap
call to kbusBar2InstBlkWrite_DISPATCH
mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalkForBootstrap, &userCtx)
mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalkForBootstrap, NULL)
*pPageLevelsMemDescForBootstrap
call to kbusGetSizeOfBar2PageDirs_GM107, kbusGetSizeOfBar2PageTables_GM107
pageLvlSize, pdeBaseForBootstrap, pteBaseForBootstrap
NVRM: Init memory size (0x%x) > BAR2 window mapped to CPU (0x%llx)
pKernelBus->PDEBAR2Aperture == pKernelBus->PTEBAR2Aperture
call to memdescGetVolatility, kbusSetupBar2InstBlkAtBottomOfFb_DISPATCH
kbusSetBAR0WindowVidOffset_HAL(pGpu, pKernelBus, pKernelBus->bar2[GPU_GFID_PF].instBlockBase & ~0xffffULL)
windowOffset
kbusSetBAR0WindowVidOffset_HAL(pGpu, pKernelBus, origVidOffset)
pKernelBus->InstBlkAperture == ADDR_FBMEM
instBlockBase, pBar1VAS
kgmmuInstBlkInit(pKernelGmmu, pKernelBus->bar1[gfid].pInstBlkMemDesc, pBar1VAS, FIFO_PDB_IDX_BASE, &params)
call to kbusSendSysmembar_IMPL, kbusBar1InstBlkBind_DISPATCH
kbusBar1InstBlkBind_HAL(pGpu, pKernelBus)
call to kbusIsBar2SysmemAccessEnabled, memdescGetGpuCacheAttrib, memdescGetPteKind, memmgrIsKind_DISPATCH
bAllowReflectedMapping
call to memdescGetCustomHeap, gpuIsUnifiedMemorySpaceEnabled
gpuIsUnifiedMemorySpaceEnabled(pGpu) || (addrSpace == ADDR_FBMEM)
kbusMemAccessBar0Window_HAL call in coherent path
bar0WindowOrig, bar0WindowOffset, bRestoreWindow
call to regRead008, regRead016, regWrite008, regWrite016
kbusSetBAR0WindowVidOffset_HAL(pGpu, pKernelBus, bar0WindowOrig)
(vidOffset >> 16) <= DRF_MASK(NV_PBUS_BAR0_WINDOW_BASE)
pciBars, totalPciBars
NVRM: Peer number table doesn't support >%u GPUs
vgpuGetCallingContextGfid(pGpu, &gfid) == NV_OK
busPeer
NULL != pFmt
*pLevel
call to _kbusGetSizeOfBar2PageDir_GM107
pageDirSize, numPageDirs, levelSize
NULL != pLevel
0 != entrySize
0 != vaPerEntry
vaBaseAligned
call to kbusDetermineBar1ApertureLength_IMPL
RMBar2ApertureSizeMB, rmApertureLimit, cpuVisibleLimit, oldAddrSpace
NVRM: bar0Window = 0x%llx, testMemoryOffset = 0x%llx, testAddrSpace = %d, _PBUS_BAR0_WINDOW = 0x%08x
call to kbusDestroyBar2_DISPATCH
offsetBar0, *pWriteCombinedBar0Window, *pDefaultBar0Pointer, pUncachedBar0Window
NVRM: FLA Supported: %x
NVRM: Trying to destroy FLA VAS
call to kbusDestroyFla_DISPATCH, kbusIsP2pInitialized, _kbusDestroyP2P_GM107
pFakeSparseBuffer, *pPgTbl
cpuVisibleApertureSize, cpuInisibleApertureSize, numPgTblsCeil, numPgTblsFloor, pgTblSize, pageTblSize
call to mmuFmtLevelSize
numPageTbls
call to kbusGetNvlinkPeerNumberMask_DISPATCH
*pRemoteGpu
NVRM: There is a P2P mapping involving an unloaded GPU
pRemoteKernelBus
nvPopCount32(pKernelBus->p2pPcie.peerNumberMask[i]) == 1
locPeerId < P2P_MAX_NUM_PEERS
remPeerId < P2P_MAX_NUM_PEERS
pKernelBus->p2pPcie.busPeer[locPeerId].remotePeerId == remPeerId
call to kbusDestroyMailbox_IMPL, kbusSendMemsysDisableNvlinkPeers
kbusSendMemsysDisableNvlinkPeers(pGpu)
kbusSendMemsysDisableNvlinkPeers(pRemoteGpu)
pRemoteGpu != NULL
call to knvlinkGetP2pConnectionStatus_IMPL
busNvlinkMappingRefcountPerGpu, busNvlinkMappingRefcountPerPeerId
call to knvlinkTrainP2pLinksToActive_IMPL, knvlinkSetupPeerMapping_DISPATCH, kbusSetupMailboxes_DISPATCH
programPciePeerMask
NVRM: Error in programming the local PEER_CONNECTION_CFG registers
NVRM: Error in programming the remote PEER_CONNECTION_CFG registers
locPeerId, remPeerId
pRemoteKernelBus->p2pPcie.busPeer[remPeerId].remotePeerId == locPeerId
pKernelBus->p2pPcie.peerNumberMask[i] == 0
pRemoteKernelBus->p2pPcie.peerNumberMask[gpuInst] == 0
NVRM: non-zero peer refcount(%d) on GPU 0x%x peer %d
bP2pInitialized
call to gpumgrGetSubDeviceCount
numSubdevices
NVRM: Fermi only supports P2P with up to 8 subdevices in SLI configuration.
*localGpuInstance*localPeerIndex*localPeerCount**pLocalKernelBus*remoteGpuInstance*remotePeerIndex*remotePeerCount*localPeerIndex != remotePeerIndex**localPeerIndex != remotePeerIndex*(localPeerCount < P2P_MAX_NUM_PEERS) && (remotePeerCount < P2P_MAX_NUM_PEERS)**(localPeerCount < P2P_MAX_NUM_PEERS) && (remotePeerCount < P2P_MAX_NUM_PEERS)*(locPeerId < P2P_MAX_NUM_PEERS) && (remPeerId < P2P_MAX_NUM_PEERS)**(locPeerId < P2P_MAX_NUM_PEERS) && (remPeerId < P2P_MAX_NUM_PEERS)*remotePeerId*call to kbusConvertBusMapFlagsToDmaFlags_IMPL*call to memdescCreateSubMem*call to dmaAllocMapping_DISPATCH*pAperOffset*call to kbusIsFbFlushDisabled*call to kbusSendSysmembarSingle_DISPATCH*call to _kbusGetCurrentGfid*gfid != INVALID_P2P_GFID**gfid != INVALID_P2P_GFID*pBar1VaInfo**pBar1VaInfo*call to kbusDecreaseStaticBar1Refcount_DISPATCH*NVRM: static BAR1 reported error **NVRM: static BAR1 reported error *call to memdescFlushCpuCaches*call to dmaFreeMapping_DISPATCH*ppMappingType**ppMappingType***ppMappingType*ppMappingType != NULL**ppMappingType != NULL*pMappingType**pMappingType*call to reusemappingdbUnmap*call to memdescRemoveDestroyCallback*call to kbusUpdateRusdStatistics_IMPL*(flags & BUS_MAP_FB_FLAGS_FERMI_INVALID) == 0**(flags & BUS_MAP_FB_FLAGS_FERMI_INVALID) == 0*!(flags & BUS_MAP_FB_FLAGS_MAP_OFFSET_FIXED) || pMemArea->numRanges == 1**!(flags & BUS_MAP_FB_FLAGS_MAP_OFFSET_FIXED) || pMemArea->numRanges == 1**pVAS*call to kbusGetStaticFbAperture_DISPATCH*pDevice != NULL**pDevice != NULL*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, &ref)**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, &ref)*call to kmigmgrGetMIGGpuInstanceSlot_IMPL*pCurrKernelMIGGPUInstance**pCurrKernelMIGGPUInstance*pKernelMIGGPUInstance**pKernelMIGGPUInstance*pKernelMIGGPUInstance != NULL**pKernelMIGGPUInstance != NULL*mappingFlags*call to _kbusMapAperture_GM107*destroyCallback***pObject*call to memdescAddDestroyCallback*bNewSubmap*pMappingType != NULL**pMappingType 
!= NULL*bNewType*call to reusemappingdbMap*reusemappingdbMap(&pBar1VaInfo->reuseDb, pMappingType, mapRange, pMemArea, cachingFlags)**reusemappingdbMap(&pBar1VaInfo->reuseDb, pMappingType, mapRange, pMemArea, cachingFlags)**pCtx**pGlobalCtx*pVaInfo*pFirstVaInfo*ppVaToType**ppVaToType***ppVaToType*curMappingSize*_kbusMapAperture_GM107(pGpu, pKernelBus, pType->pMemDesc, pVAS, physRange.start, &mapRange.start, &mapRange.size, mapFlags, swizzId)**_kbusMapAperture_GM107(pGpu, pKernelBus, pType->pMemDesc, pVAS, physRange.start, &mapRange.start, &mapRange.size, mapFlags, swizzId)*pageSize != curMappingSize**pageSize != curMappingSize*!(cachingFlags & REUSE_MAPPING_DB_MAP_FLAGS_SINGLE_RANGE)**!(cachingFlags & REUSE_MAPPING_DB_MAP_FLAGS_SINGLE_RANGE)*ppVaToType != NULL**ppVaToType != NULL*pCallback(pToken, physRange.start, mapRange.start, curMappingSize)**pCallback(pToken, physRange.start, mapRange.start, curMappingSize)*call to vgpuIsCallingContextPlugin*vgpuIsCallingContextPlugin(pGpu, &bCallingContextPlugin) == NV_OK**vgpuIsCallingContextPlugin(pGpu, &bCallingContextPlugin) == NV_OK*vgpuIsCallingContextPlugin(pGpu, &bCallingContextPlugin)**vgpuIsCallingContextPlugin(pGpu, &bCallingContextPlugin)*NVRM: unsupported VA size (0x%llx) **NVRM: unsupported VA size (0x%llx) **pSubDevMemDesc**addressTranslation*call to dmaPageArrayInitFromMemDesc*mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalk, &userCtx)**mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalk, &userCtx)*NVRM: mmuWalkSparsify pwalk=%p, vaLo=%llx, vaHi = %llx **NVRM: mmuWalkSparsify pwalk=%p, vaLo=%llx, vaHi = %llx *NVRM: mmuWalkSparsify status=%x pwalk=%p, vaLo=%llx, vaHi = %llx **NVRM: mmuWalkSparsify status=%x pwalk=%p, vaLo=%llx, vaHi = %llx *pFmt != NULL**pFmt != NULL*mapTarget*MapNextEntries*mapIter*call to nvFieldSetBool*v8*pteTemplate**v8*mapIter.aperture == GMMU_APERTURE_VIDEO**mapIter.aperture == GMMU_APERTURE_VIDEO*call to kgmmuTranslatePtePcfFromSw_DISPATCH*kgmmuTranslatePtePcfFromSw_HAL(pKernelGmmu, ptePcfSw, 
&ptePcfHw) == NV_OK**kgmmuTranslatePtePcfFromSw_HAL(pKernelGmmu, ptePcfSw, &ptePcfHw) == NV_OK*call to gmmuFieldSetAperture*call to gmmuFieldGetAperture*pAddrField**pAddrField*call to kbusSetupBar0WindowBeforeBar2Bootstrap_DISPATCH*call to mmuWalkMap*mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalk, NULL)**mmuWalkSetUserCtx(pKernelBus->bar2[gfid].pWalk, NULL)*call to kbusRestoreBar0WindowAfterBar2Bootstrap_DISPATCH*pReadToFlush*call to kbusGetFlushAperture_IMPL*pPTEMemDesc*call to mmuFmtLevelPageSize*NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X **NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X *call to kbusIsBarAccessBlocked*pKernelBus->virtualBar2[gfid].pPageLevels == NULL**pKernelBus->virtualBar2[gfid].pPageLevels == NULL*surf*sizeOfEntries*call to _busWalkCBMapNextEntries_UpdatePhysAddr*call to memmgrMemEndTransfer_IMPL*NULL != pKernelBus->virtualBar2[gfid].pPageLevelsForBootstrap**NULL != pKernelBus->virtualBar2[gfid].pPageLevelsForBootstrap*entryOffset*call to kbusMapCoherentCpuMapping_DISPATCH*pMapping != NULL**pMapping != NULL*call to kbusUnmapCoherentCpuMapping_DISPATCH*sizeInDWord*call to kbusMemAccessBar0Window_GM107*v32**v32*kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (entry + (sizeof(NvU32) * i)), &data, sizeof(NvU32), NV_TRUE, ADDR_FBMEM) == NV_OK**kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (entry + (sizeof(NvU32) * i)), &data, sizeof(NvU32), NV_TRUE, ADDR_FBMEM) == NV_OK*pKernelBus->bar2[gfid].bMigrating**pKernelBus->bar2[gfid].bMigrating*NULL == pKernelBus->virtualBar2[gfid].pPageLevels**NULL == pKernelBus->virtualBar2[gfid].pPageLevels*memdescGetAddressSpace(pMemDesc) == ADDR_SYSMEM && pKernelBus->virtualBar2[gfid].pPageLevels == NULL**memdescGetAddressSpace(pMemDesc) == ADDR_SYSMEM && pKernelBus->virtualBar2[gfid].pPageLevels == NULL*call to dmaPageArrayGetPhysAddr*(pIter->pPageArray->count == 1) && (pIter->currIdx > 0)**(pIter->pPageArray->count == 1) && (pIter->currIdx > 0)*call to gmmuFieldSetAddress*call to 
kgmmuEncodePhysAddr_IMPL*pEntryValue*!KBUS_BAR0_PRAMIN_DISABLED(pGpu)**!KBUS_BAR0_PRAMIN_DISABLED(pGpu)*pKernelBus->bar2[GPU_GFID_PF].bBootstrap**pKernelBus->bar2[GPU_GFID_PF].bBootstrap*bar2OffsetInBar0Window*ADDR_FBMEM == pKernelBus->PDEBAR2Aperture**ADDR_FBMEM == pKernelBus->PDEBAR2Aperture*call to mmuWalkUnmap*call to mmuWalkReleaseEntries*call to kbusTeardownBar2PageTablesAtBottomOfFb_DISPATCH*call to kbusTeardownBar2InstBlkAtBottomOfFb_DISPATCH**pPDEMemDesc*pPDEMemDescForBootstrap**pPDEMemDescForBootstrap**pPTEMemDesc**pPageLevelsMemDesc**pInstBlkMemDesc*call to memmgrGetRsvdMemoryBase*call to kbusSetupBar2PageTablesAtBottomOfFb_DISPATCH**pVASpaceHeap*pdeBase*pteBase*pKernelBus->virtualBar2[gfid].pPageLevels**pKernelBus->virtualBar2[gfid].pPageLevels*NULL != pKernelBus->bar2[gfid].pFmt**NULL != pKernelBus->bar2[gfid].pFmt*pWalk != NULL**pWalk != NULL*cpuVisiblePgTblSize*call to kbusPatchBar2Pdb_DISPATCH*call to kbusTeardownBar2GpuVaSpace_GM107*NVRM: BAR2 already initialized! **NVRM: BAR2 already initialized! *NVRM: BAR2 pteBase not initialized by fbPreInit_FERMI! **NVRM: BAR2 pteBase not initialized by fbPreInit_FERMI! *pKernelBus->bar2[gfid].physAddr != 0**pKernelBus->bar2[gfid].physAddr != 0*NVRM: - Unable to map bar2! **NVRM: - Unable to map bar2! 
*NVRM: BAR0 Base Cpu Mapping @ 0x%p and BAR2 Base Cpu Mapping @ 0x%p **NVRM: BAR0 Base Cpu Mapping @ 0x%p and BAR2 Base Cpu Mapping @ 0x%p *call to vgpuGspTeardownBuffers*call to kbusTeardownBar2CpuAperture_DISPATCH*bIsBar2Initialized*call to kbusBar2BootStrapInPhysicalMode_DISPATCH*IS_GFID_PF(gfid)**IS_GFID_PF(gfid)*call to vgpuGspSetupBuffers*call to kbusIsCpuVisibleBar2Disabled*call to kbusSetupBar2CpuAperture_GM107*call to kbusSetupBar2GpuVaSpace_GM107*call to kbusSetupBar2PageTablesAtTopOfFb_DISPATCH*call to kbusCommitBar2_DISPATCH*call to vaspaceFree_DISPATCH*call to kbusUnmapPreservedConsole_GM107*call to kbusDisableStaticBar1Mapping_DISPATCH*call to reusemappingdbDestruct*call to kbusUnmapFbApertureSingle_IMPL*pConsoleMemDesc != NULL**pConsoleMemDesc != NULL*bBar1ConsolePreserved*vaflags*bSmoothTransitionEnabled*uefiScanoutSurfaceSizeInMB*NVRM: Could not construct BAR1 VA space object. **NVRM: Could not construct BAR1 VA space object. *call to reusemappingdbInit*NVRM: Unable to set BAR1 alloc range to aperture size! **NVRM: Unable to set BAR1 alloc range to aperture size! 
*mappableLength*pKernelBus->bar1[gfid].apertureLength <= kbusGetPciBarSize(pKernelBus, 1)**pKernelBus->bar1[gfid].apertureLength <= kbusGetPciBarSize(pKernelBus, 1)*call to kbusIsPreserveBar1ConsoleEnabled**pConsoleMemDesc*NVRM: preserving console BAR1 mapping (0x%llx) **NVRM: preserving console BAR1 mapping (0x%llx) *call to kbusMapFbApertureSingle_IMPL*NVRM: cannot preserve console mapping in BAR1 (0x%llx, 0x%x) **NVRM: cannot preserve console mapping in BAR1 (0x%llx, 0x%x) *NVRM: expected console @ BAR1 offset 0 (0x%llx, 0x%x) **NVRM: expected console @ BAR1 offset 0 (0x%llx, 0x%x) *NVRM: no console memdesc available to preserve **NVRM: no console memdesc available to preserve *call to _kbusRequiresP2PMailboxBar1_GM107*call to kbusAllocP2PMailboxBar1_DISPATCH*call to kbusIsStaticBar1Supported_DISPATCH*bStaticBar1Supported*call to kbusEnableStaticBar1Mapping_DISPATCH*kbusEnableStaticBar1Mapping_HAL(pGpu, pKernelBus, gfid, bar1Offset)**kbusEnableStaticBar1Mapping_HAL(pGpu, pKernelBus, gfid, bar1Offset)*call to kbusBar1InstBlkVasUpdate_DISPATCH*call to kbusPatchBar1Pdb_DISPATCH*apertureVirtAddr*apertureVirtLength*call to kbusDestroyBar1_GM107*kbusTeardownBar2CpuAperture_HAL(pGpu, pKernelBus, GPU_GFID_PF)**kbusTeardownBar2CpuAperture_HAL(pGpu, pKernelBus, GPU_GFID_PF)*call to kbusTeardownMailbox_DISPATCH*call to kbusIsBar1Disabled*call to kbusInitBar1_DISPATCH*call to _kbusLinkP2P_GM107*call to kbusGetPFBar1Spa_DISPATCH*kbusGetPFBar1Spa_HAL(pGpu, pKernelBus, &pKernelBus->grdmaBar1Spa)**kbusGetPFBar1Spa_HAL(pGpu, pKernelBus, &pKernelBus->grdmaBar1Spa)*flags & GPU_STATE_FLAGS_PRESERVING**flags & GPU_STATE_FLAGS_PRESERVING*kbusSetupBar2CpuAperture_HAL(pGpu, pKernelBus, GPU_GFID_PF)**kbusSetupBar2CpuAperture_HAL(pGpu, pKernelBus, GPU_GFID_PF)*kbusBindBar2_HAL(pGpu, pKernelBus, BAR2_MODE_PHYSICAL)**kbusBindBar2_HAL(pGpu, pKernelBus, BAR2_MODE_PHYSICAL)*call to kbusCommitBar2PDEs_DISPATCH*kbusCommitBar2PDEs_HAL(pGpu, pKernelBus)**kbusCommitBar2PDEs_HAL(pGpu, 
pKernelBus)*kbusSetupBar0WindowBeforeBar2Bootstrap_HAL(pGpu, pKernelBus, &origVidOffset)**kbusSetupBar0WindowBeforeBar2Bootstrap_HAL(pGpu, pKernelBus, &origVidOffset)*pBar2Walk*mmuWalkSetUserCtx(pBar2Walk, &userCtx)**mmuWalkSetUserCtx(pBar2Walk, &userCtx)*mmuWalkCommitPDEs(pBar2Walk, pLevelFmt, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleLimit)**mmuWalkCommitPDEs(pBar2Walk, pLevelFmt, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleLimit)*mmuWalkCommitPDEs(pBar2Walk, pLevelFmt, pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleLimit)**mmuWalkCommitPDEs(pBar2Walk, pLevelFmt, pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleLimit)*mmuWalkSparsify(pBar2Walk, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleLimit, NV_FALSE)**mmuWalkSparsify(pBar2Walk, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase, pKernelBus->bar2[GPU_GFID_PF].cpuVisibleLimit, NV_FALSE)*kbusCommitBar2_HAL(pGpu, pKernelBus, flags)**kbusCommitBar2_HAL(pGpu, pKernelBus, flags)*call to kbusVerifyBar2_DISPATCH*NVRM: kbusVerifyBar2_HAL() keeps failing. **NVRM: kbusVerifyBar2_HAL() keeps failing. 
*kbusVerifyBar2_HAL(pGpu, pKernelBus, NULL, NULL, 0, 0)**kbusVerifyBar2_HAL(pGpu, pKernelBus, NULL, NULL, 0, 0)*call to kbusRestoreBAR1ResizeSize_WAR_BUG_3249028_DISPATCH*kbusRestoreBAR1ResizeSize_WAR_BUG_3249028_HAL(pGpu, pKernelBus)**kbusRestoreBAR1ResizeSize_WAR_BUG_3249028_HAL(pGpu, pKernelBus)*call to kbusCacheBAR1ResizeSize_WAR_BUG_3249028_DISPATCH*IsTEGRA(pGpu)**IsTEGRA(pGpu)*call to kbusRestoreBar2_DISPATCH*kbusRestoreBar2_HAL(pKernelBus, flags)**kbusRestoreBar2_HAL(pKernelBus, flags)*bBarAccessBlocked*bBar2TestSkipped*NVRM: BARs will be blocked for CC **NVRM: BARs will be blocked for CC *call to memmgrVerifyGspDmaOps_IMPL*memmgrVerifyGspDmaOps(pGpu, GPU_GET_MEMORY_MANAGER(pGpu))**memmgrVerifyGspDmaOps(pGpu, GPU_GET_MEMORY_MANAGER(pGpu))*pciBarSizes**pciBarSizes*apertureLength*NVRM: C2C is being used, so disable CPU visible BAR2 now before they are setup **NVRM: C2C is being used, so disable CPU visible BAR2 now before they are setup *pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase == 0**pKernelBus->bar2[GPU_GFID_PF].cpuVisibleBase == 0*pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleLimit >= pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleBase**pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleLimit >= pKernelBus->bar2[GPU_GFID_PF].cpuInvisibleBase*NVRM: Contiguous range, update BAR2 cpuInvisibleBase: 0x%llX to 0, and cpuInvisibleLimit: 0x%llX to 0x%llX. **NVRM: Contiguous range, update BAR2 cpuInvisibleBase: 0x%llX to 0, and cpuInvisibleLimit: 0x%llX to 0x%llX. *cpuInvisibleBase*NVRM: Discontiguous range, retaining BAR2 cpuInvisibleBase: 0x%llX, and cpuInvisibleLimit: 0x%llX. **NVRM: Discontiguous range, retaining BAR2 cpuInvisibleBase: 0x%llX, and cpuInvisibleLimit: 0x%llX. 
*NVRM: Setting cpuVisibleLimit: 0x%llX to 0 **NVRM: Setting cpuVisibleLimit: 0x%llX to 0 *bUsePhysicalBar2InitPagetable*call to kbusStateInitLockedKernel_DISPATCH*kbusStateInitLockedKernel_HAL(pGpu, pKernelBus)**kbusStateInitLockedKernel_HAL(pGpu, pKernelBus)*call to kbusStateInitLockedPhysical_56cd7a*kbusStateInitLockedPhysical_HAL(pGpu, pKernelBus)**kbusStateInitLockedPhysical_HAL(pGpu, pKernelBus)*memmgrMemDescMemSet(GPU_GET_MEMORY_MANAGER(pGpu), pKernelBus->bar1[GPU_GFID_PF].pInstBlkMemDesc, 0, TRANSFER_FLAGS_NONE)**memmgrMemDescMemSet(GPU_GET_MEMORY_MANAGER(pGpu), pKernelBus->bar1[GPU_GFID_PF].pInstBlkMemDesc, 0, TRANSFER_FLAGS_NONE)*RMP2PPeerId**RMP2PPeerId*p2pMapSpecifyId*p2pMapPeerId*call to kbusSetupDefaultBar0Window*call to kgmmuCreateFakeSparseTables_DISPATCH*kgmmuCreateFakeSparseTables_HAL(pGpu, GPU_GET_KERNEL_GMMU(pGpu))**kgmmuCreateFakeSparseTables_HAL(pGpu, GPU_GET_KERNEL_GMMU(pGpu))*bIsBar2SetupInPhysicalMode*NVRM: For GSP client with C2C enabled, skip BAR2 init **NVRM: For GSP client with C2C enabled, skip BAR2 init *call to kbusInitBar2_DISPATCH*kbusInitBar2_HAL(pGpu, pKernelBus, GPU_GFID_PF)**kbusInitBar2_HAL(pGpu, pKernelBus, GPU_GFID_PF)*NVRM: Enabling FLA Support in Guest RM: %x, flabase: %llx, flaSize: %llx **NVRM: Enabling FLA Support in Guest RM: %x, flabase: %llx, flaSize: %llx *call to kbusCheckFlaSupportedAndInit_DISPATCH*vAlignment*pMemorySystemConfig*pDeviceMapping**pUncachedBar0Window*physicalBar0WindowSize*NVRM: gpu:%d **NVRM: gpu:%d *bInstProtectedMem*call to kbusInitBarsSize_DISPATCH*kbusInitBarsSize_HAL(pGpu, pKernelBus)**kbusInitBarsSize_HAL(pGpu, pKernelBus)*call to kbusDetermineBar1Force64KBMapping_IMPL*call to kbusConstructVirtualBar2_VBAR2*NVRM: Entered **NVRM: Entered *cpuInvisibleLimit*pMapListMemory**pMapListMemory*bFbFlushDisabled*numPeers*PTEBAR2Aperture*PTEBAR2Attr*PDEBAR2Aperture*PDEBAR2Attr*src/kernel/gpu/bus/arch/maxwell/kern_bus_gm200.c**src/kernel/gpu/bus/arch/maxwell/kern_bus_gm200.c*NVRM: P2P mailbox area size is 
not set **NVRM: P2P mailbox area size is not set *call to kbusGetP2PMailboxAttributes_DISPATCH*mailboxTotalSize == mailboxAreaSizeReq**mailboxTotalSize == mailboxAreaSizeReq*(mailboxBar1Addr & (mailboxAlignmentSizeReq - 1)) == 0**(mailboxBar1Addr & (mailboxAlignmentSizeReq - 1)) == 0*(mailboxBar1Addr + mailboxTotalSize) < (((NvU64)mailboxBar1MaxOffset64KBReq) << 16)**(mailboxBar1Addr + mailboxTotalSize) < (((NvU64)mailboxBar1MaxOffset64KBReq) << 16)*mailboxBar1Addr == pKernelBus->p2pPcie.writeMailboxBar1Addr**mailboxBar1Addr == pKernelBus->p2pPcie.writeMailboxBar1Addr*mailboxTotalSize == pKernelBus->p2pPcie.writeMailboxTotalSize**mailboxTotalSize == pKernelBus->p2pPcie.writeMailboxTotalSize*NVRM: reserving peer ID %u on GPU%u for NVLINK/C2C use **NVRM: reserving peer ID %u on GPU%u for NVLINK/C2C use *NVRM: Removing mapping GPU %d Peer %d <-> GPU %d Peer %d **NVRM: Removing mapping GPU %d Peer %d <-> GPU %d Peer %d *NVRM: Decremented refCount for Mapping GPU %d Peer %d <-> GPU %d Peer %d New Count: %d **NVRM: Decremented refCount for Mapping GPU %d Peer %d <-> GPU %d Peer %d New Count: %d *call to clFindCommonDownstreamBR_IMPL**peer0 < P2P_MAX_NUM_PEERS***peer0 < P2P_MAX_NUM_PEERS**peer1 < P2P_MAX_NUM_PEERS***peer1 < P2P_MAX_NUM_PEERS*pKernelBus0->p2pPcie.busPeer[*peer0].remotePeerId == *peer1**pKernelBus0->p2pPcie.busPeer[*peer0].remotePeerId == *peer1*pKernelBus1->p2pPcie.busPeer[*peer1].remotePeerId == *peer0**pKernelBus1->p2pPcie.busPeer[*peer1].remotePeerId == *peer0*pRmApi->Control(pRmApi, pGpu0->hInternalClient, pGpu0->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_HSHUB_PEER_CONN_CONFIG, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu0->hInternalClient, pGpu0->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_HSHUB_PEER_CONN_CONFIG, ¶ms, sizeof(params))*pRmApi->Control(pRmApi, pGpu1->hInternalClient, pGpu1->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_HSHUB_PEER_CONN_CONFIG, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu1->hInternalClient, 
pGpu1->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_HSHUB_PEER_CONN_CONFIG, ¶ms, sizeof(params))*NVRM: explicit peer IDs %u and %u requested for GPU%u and GPU%u are not available, will assign dynamically **NVRM: explicit peer IDs %u and %u requested for GPU%u and GPU%u are not available, will assign dynamically *!pKernelBus0->p2pPcie.busPeer[*peer0].bReserved**!pKernelBus0->p2pPcie.busPeer[*peer0].bReserved*!pKernelBus1->p2pPcie.busPeer[*peer1].bReserved**!pKernelBus1->p2pPcie.busPeer[*peer1].bReserved*(pKernelBus0->p2pPcie.peerNumberMask[gpuInst1] == 0) && (pKernelBus1->p2pPcie.peerNumberMask[gpuInst0] == 0)**(pKernelBus0->p2pPcie.peerNumberMask[gpuInst1] == 0) && (pKernelBus1->p2pPcie.peerNumberMask[gpuInst0] == 0)*NVRM: - ERROR: Peer ID %d is already in use. Default RM P2P mapping will be used. **NVRM: - ERROR: Peer ID %d is already in use. Default RM P2P mapping will be used. *call to kbusGetUnusedPciePeerId_DISPATCH*NVRM: no peer IDs available **NVRM: no peer IDs available *pKernelBus0->p2pPcie.busPeer[*peer0].refCount == 0**pKernelBus0->p2pPcie.busPeer[*peer0].refCount == 0*pKernelBus1->p2pPcie.busPeer[*peer1].refCount == 0**pKernelBus1->p2pPcie.busPeer[*peer1].refCount == 0*NVRM: added PCIe P2P mapping between GPU%u (peer %u) and GPU%u (peer %u) **NVRM: added PCIe P2P mapping between GPU%u (peer %u) and GPU%u (peer %u) *pRemoteWMBoxMemDesc**pRemoteWMBoxMemDesc*pRemoteP2PDomMemDesc**pRemoteP2PDomMemDesc*call to kbusNeedWarForBug999673_DISPATCH*local2Remote < P2P_MAX_NUM_PEERS**local2Remote < P2P_MAX_NUM_PEERS*remote2Local < P2P_MAX_NUM_PEERS**remote2Local < P2P_MAX_NUM_PEERS*pKernelBus1->p2pPcie.busPeer[remote2Local].remotePeerId == local2Remote**pKernelBus1->p2pPcie.busPeer[remote2Local].remotePeerId == local2Remote*pKernelBus0->p2pPcie.busPeer[local2Remote].remotePeerId == remote2Local**pKernelBus0->p2pPcie.busPeer[local2Remote].remotePeerId == remote2Local***ppMemDesc*call to kbusSetupMailboxAccess_DISPATCH*remoteWMBoxLocalAddr*remoteWMBoxLocalAddr != 
~0ULL**remoteWMBoxLocalAddr != ~0ULL*call to kbusSetupP2PDomainAccess_DISPATCH*localP2PDomainRemoteAddr*localP2PDomainRemoteAddr != ~0ULL**localP2PDomainRemoteAddr != ~0ULL*remoteP2PDomainLocalAddr*remoteP2PDomainLocalAddr != ~0ULL**remoteP2PDomainLocalAddr != ~0ULL*remoteWMBoxAddrU64*(remoteWMBoxAddrU64 & 0xFFFF) == 0**(remoteWMBoxAddrU64 & 0xFFFF) == 0*params0*bNeedWarBug999673*pRmApi0*params1*pRmApi1*call to kbusWriteP2PWmbTag_DISPATCH*call to kbusCalcCpuInvisibleBar2ApertureSize_DISPATCH*cpuInvisibleSize*src/kernel/gpu/bus/arch/pascal/kern_bus_gp100.c*NVRM: base: 0x%llx size: 0x%x **src/kernel/gpu/bus/arch/pascal/kern_bus_gp100.c**NVRM: base: 0x%llx size: 0x%x *NULL != pKernelBus->bar2[GPU_GFID_PF].pFmt**NULL != pKernelBus->bar2[GPU_GFID_PF].pFmt*newPteFormat*call to kbusInstBlkWriteAddrLimit_DISPATCH*pKernelBus->bar2[gfid].pFmt != NULL**pKernelBus->bar2[gfid].pFmt != NULL*pageDirBaseHi*kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (pKernelBus->bar2[gfid].instBlockBase + SF_OFFSET(NV_RAMIN_PAGE_DIR_BASE_HI)), &pageDirBaseHi, sizeof(NvU32), NV_FALSE, pKernelBus->InstBlkAperture)**kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (pKernelBus->bar2[gfid].instBlockBase + SF_OFFSET(NV_RAMIN_PAGE_DIR_BASE_HI)), &pageDirBaseHi, sizeof(NvU32), NV_FALSE, pKernelBus->InstBlkAperture)*pageDirBaseTarget*kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (pKernelBus->bar2[gfid].instBlockBase + SF_OFFSET(NV_RAMIN_PAGE_DIR_BASE_TARGET)), &pageDirBaseTarget, sizeof(NvU32), NV_FALSE, pKernelBus->InstBlkAperture)**kbusMemAccessBar0Window_HAL(pGpu, pKernelBus, (pKernelBus->bar2[gfid].instBlockBase + SF_OFFSET(NV_RAMIN_PAGE_DIR_BASE_TARGET)), &pageDirBaseTarget, sizeof(NvU32), NV_FALSE, pKernelBus->InstBlkAperture)*NVRM: Invalid peerId value: %d **NVRM: Invalid peerId value: %d *NVRM: GPU%u: Cannot unreserve peerId %u. Nvlink refcount > 0 **NVRM: GPU%u: Cannot unreserve peerId %u. 
Nvlink refcount > 0 *NVRM: Unreserving peer ID %u on GPU%u reserved for NVLINK **NVRM: Unreserving peer ID %u on GPU%u reserved for NVLINK *call to knvlinkGetNumLinksToPeer_IMPL*call to knvlinkGetUniquePeerIdMask_DISPATCH*bNvlinkPeerIdsReserved*call to knvlinkGetUniquePeerId_DISPATCH*call to kbusIsPeerIdValid_GM107*call to kbusGetPeerId_GM107*call to _kbusRemoveNvlinkPeerMapping*_kbusRemoveNvlinkPeerMapping(pGpu0, pKernelBus0, pGpu1, peer0, attributes)**_kbusRemoveNvlinkPeerMapping(pGpu0, pKernelBus0, pGpu1, peer0, attributes)*_kbusRemoveNvlinkPeerMapping(pGpu1, pKernelBus1, pGpu0, peer1, attributes)**_kbusRemoveNvlinkPeerMapping(pGpu1, pKernelBus1, pGpu0, peer1, attributes)*call to kbusUnreserveP2PPeerIds_DISPATCH*NVRM: GPU%d: Failed to unreserve peer ID mask 0x%x **NVRM: GPU%d: Failed to unreserve peer ID mask 0x%x *NVRM: Removing mapping for GPU%u peer %u (GPU%u) **NVRM: Removing mapping for GPU%u peer %u (GPU%u) *pKernelBus0->p2p.busNvlinkMappingRefcountPerPeerId[peerId] == 0**pKernelBus0->p2p.busNvlinkMappingRefcountPerPeerId[peerId] == 0*NVRM: PeerID %u is not being used for P2P from GPU%d to any other remote GPU. Can be freed **NVRM: PeerID %u is not being used for P2P from GPU%d to any other remote GPU. 
Can be freed *osAcquireRmSema(pSys->pSema)**osAcquireRmSema(pSys->pSema)*connectionType*bUseUuid*remoteGpuId*call to _kbusExecGspRmRpcForNvlink*pKernelNvlink0 != NULL**pKernelNvlink0 != NULL*NVRM: GPU%d Failed to UNSET USE_NVLINK_PEER for peer%d **NVRM: GPU%d Failed to UNSET USE_NVLINK_PEER for peer%d *call to knvlinkRemoveMapping_DISPATCH*NVRM: GPU%d Failed to remove hshub mapping for peer%d **NVRM: GPU%d Failed to remove hshub mapping for peer%d *call to knvlinkSyncLinkMasksAndVbiosInfo_IMPL*status != NV_OK**status != NV_OK*call to knvlinkGetInitializedLinkMask*bBufferReady*call to knvlinkUpdateCurrentConfig_IMPL*kbusReserveP2PPeerIds_HAL(pGpu0, pKernelBus0, NVBIT(0))**kbusReserveP2PPeerIds_HAL(pGpu0, pKernelBus0, NVBIT(0))*call to kbusGetNvlinkP2PPeerId_DISPATCH*NVRM: EGM peer **NVRM: EGM peer *NVRM: - ERROR: Peer ID %d is already in use. Default RM P2P mapping will be used for loopback connection. **NVRM: - ERROR: Peer ID %d is already in use. Default RM P2P mapping will be used for loopback connection. 
*NVRM: added NVLink P2P mapping between GPU%u (peer %u) and GPU%u (peer %u) **NVRM: added NVLink P2P mapping between GPU%u (peer %u) and GPU%u (peer %u) *call to _kbusCreateNvlinkPeerMapping*_kbusCreateNvlinkPeerMapping(pGpu0, pKernelBus0, pGpu1, *peer0, attributes)**_kbusCreateNvlinkPeerMapping(pGpu0, pKernelBus0, pGpu1, *peer0, attributes)*_kbusCreateNvlinkPeerMapping(pGpu1, pKernelBus1, pGpu0, *peer1, attributes)**_kbusCreateNvlinkPeerMapping(pGpu1, pKernelBus1, pGpu0, *peer1, attributes)*NVRM: GPU%d Failed to ENABLE USE_NVLINK_PEER for peer%d **NVRM: GPU%d Failed to ENABLE USE_NVLINK_PEER for peer%d *NVRM: Failed to acquire locks for gpumask 0x%x **NVRM: Failed to acquire locks for gpumask 0x%x *gpuMaskRelease*paramAddr**paramAddr*src/kernel/gpu/bus/arch/turing/kern_bus_tu102.c**src/kernel/gpu/bus/arch/turing/kern_bus_tu102.c*gfid == 0**gfid == 0*pMemDesc != NULL**pMemDesc != NULL*bContigDesc*mapRangeEndPlus1*mapGranularity*mapRangeEndPlus1 <= memdescGetSize(pMemDesc)**mapRangeEndPlus1 <= memdescGetSize(pMemDesc)*testMapRange*lastTestMapRangeLimit*bInDynamicRegion*bInStaticRegion*(numRanges == 1) || (bDiscontigAllowed && !bUnmanagedRange)**(numRanges == 1) || (bDiscontigAllowed && !bUnmanagedRange)*NVRM: MemDesc spans both static and dynamic region,which is unsupported. **NVRM: MemDesc spans both static and dynamic region,which is unsupported. 
*NVRM: static Bar1 map [0, 0x%llx] **NVRM: static Bar1 map [0, 0x%llx] *NVRM: Requested map range 0x%llx to 0x%llx, mapGranularity 0x%llx **NVRM: Requested map range 0x%llx to 0x%llx, mapGranularity 0x%llx *call to memdescPrintMemdesc*Dumping memdesc:**Dumping memdesc:*call to kbusIncreaseStaticBar1Refcount_DISPATCH*pMemArea->pRanges != NULL**pMemArea->pRanges != NULL*call to mrangeContains*pRootMemDesc->staticBar1MappingRefCount != 0**pRootMemDesc->staticBar1MappingRefCount != 0*call to _kbusUpdateStaticBar1VAMapping_TU102*_kbusUpdateStaticBar1VAMapping_TU102(pGpu, pKernelBus, pRootMemDesc, BUS_MAP_FB_FLAGS_NONE, NV_TRUE)**_kbusUpdateStaticBar1VAMapping_TU102(pGpu, pKernelBus, pRootMemDesc, BUS_MAP_FB_FLAGS_NONE, NV_TRUE)*requestedKind*requestedDmaFlags*staticBar1MappingKind*staticBar1DmaFlags*mapSize*NV_IS_ALIGNED64( memdescGetPhysAddr(pMemDesc, addressTranslation, 0), pageSize)**NV_IS_ALIGNED64( memdescGetPhysAddr(pMemDesc, addressTranslation, 0), pageSize)*NV_IS_ALIGNED64(mapSize, pageSize)**NV_IS_ALIGNED64(mapSize, pageSize)*call to memmgrGetKindComprFromMemDesc_IMPL*memmgrGetKindComprFromMemDesc(pMemoryManager, pMemDesc, 0, &kind, &comprInfo)**memmgrGetKindComprFromMemDesc(pMemoryManager, pMemDesc, 0, &kind, &comprInfo)*pageArray*bLocalized*call to dmaUpdateVASpace_DISPATCH*NVRM: Failed to update static bar1 VA space, error 0x%x. **NVRM: Failed to update static bar1 VA space, error 0x%x. 
Deduplicated string-table excerpt (each entry originally appeared twice), grouped by the source-file markers embedded in the dump:

Static BAR1 setup (KernelBus, preceding file marker):
  NVRM: Static bar1 mapped offset 0x%llx size 0x%llx
  NVRM: Failed to create the static bar1 mapping offset 0x%llx size 0x%llx
  NVRM: BAR1 size %lld is not large enough to map FB size %lld to force static BAR1
  NVRM: Enabling static BAR1 automatically!
  Call sites: kbusMapFbApertureSingle / kbusUnmapFbApertureSingle (static BAR1 vidmem descriptor), memdescCreate (contiguous FBMEM and SYSMEM), memdescSetPteKind, memmgrGetClientFbAddrSpaceSize, kfifoGetMaxChannelsInSystem, kfifoGetUserdSizeAlign, kgmmuFmtGetLatestSupportedFormat, kgmmuSetAndGetDefaultFaultBufferSize, kfifoCalcTotalSizeOfFaultMethodBuffers

src/kernel/gpu/bus/arch/volta/kern_bus_gv100.c:
  Asserts: pKernelBus->pReadToFlush != NULL || virtualBar2[GPU_GFID_PF].pCpuMapping != NULL; coherentCpuMapping.refcnt[i] == 0 (including COHERENT_CPU_MAPPING_WPR); PDB_PROP_GPU_COHERENT_CPU_MAPPING; pMemDesc->_flags & MEMDESC_FLAGS_PHYSICALLY_CONTIGUOUS; memdescGetContiguity(pMemDesc, AT_GPU); !kbusIsBarAccessBlocked(pKernelBus)
  Call sites: osUnmapPciMemoryKernel64

src/kernel/gpu/bus/kern_bus.c:
  NVRM: Failed to modify CPU-RM's BAR1 PDB to GSP-RM's BAR1 PDB.
  NVRM: va limit: 0x%llx
  NVRM: bad index 0x%x
  NVRM: no IOVA mapping found for pre-existing P2P domain memdesc
  NVRM: BAR PTEs not supported in sysmem. Ignoring global override request.
  NVRM: BAR PDEs not supported in sysmem. Ignoring global override request.
  NVRM: Using aperture %d for BAR2 PTEs
  NVRM: Forcing static BAR1 to type %d.
  Registry keys: RMForceStaticBar1, RmGpuDirectRdmaForceSPA, P2PMailboxClientAllocated, RMBar1ApertureSizeMB, RM64KBBAR1Mappings, RMBar1RestoreSize, RmForceBarAccessOnHcc
  Asserts: bar2 vaLimit == 0 or == limit (per-gfid and GPU_GFID_PF), virtualBar2[GPU_GFID_PF].pCpuMapping != NULL, ADDR_FBMEM == pMemDesc->_addressSpace, pBar1VAS != NULL, pVGpu == NULL, (base & RM_PAGE_MASK) == 0, memdesc phys addr/size match, pBarInfoParams checks
  Call sites: kbusMapFbAperture_HAL (BUS_MAP_FB_FLAGS_UNMANAGED_MEM_AREA), memmgrMemRead, gvaspaceWalkUserCtxAcquire/Release, mmuWalkModifyLevelInstance, kbusCalcCpuInvisibleBar2Range, kbusDestroyPeerAccess, kgmmuGetPTEAttr, kmigmgr/memmgr MIG BAR1-range helpers (kmemsysSwizzIdToMIGMemRange), gpushareddataWriteStart/Finish, kbusConstructHal, kbusInitRegistryOverrides, kbusInitPciBars, kbusInitBarsBaseInfo_HAL, kbusSetBarsApertureSize_HAL, kbusConstructXalApertures_HAL, NV2080_CTRL_CMD_BUS_GET_PCI_BAR_INFO control, gpuIsCCDevToolsModeEnabled, rmcfg_IsAMPERE_CLASSIC_GPUSorBetter

src/kernel/gpu/bus/kern_bus_ctrl.c:
  NVRM: size mismatch: client 0x%x rm 0x%x
  Asserts: pBarInfoParams->pciBarCount <= NV2080_CTRL_BUS_MAX_PCI_BARS
  Call sites: getBusInfos, gpuValidateBusInfoIndex_HAL, kbusSendBusInfo, kbusControlGetCaps, kbifControlGetPCIEInfo, kbifDisableSysmemAccess, kgmmuGetMaxVASize, _kbusGetHostCaps, kbusGetDeviceCaps

src/kernel/gpu/bus/kern_bus_vbar2.c:
  NVRM: Cannot map/unmap CPR vidmem into/from BAR2
  NVRM: GPU %d: Warning: Reflected Mapping Found: MapType = BAR and AddressSpace = SYSMEM.
  NVRM: can't find mapping struct!
  NVRM: No free bar2 mapping struct left!
  NVRM: Not enough contiguous BAR2 VA space left allocSize %llx!
  NVRM: MapCount: %d Bar2 Hits: %d Evictions: %d
  NVRM: Unable to alloc hidden bar2 eheap!
  NVRM: Unable to alloc bar2 eheap!
  NVRM: Unable to alloc bar2 mapping list!
  Asserts: pMemDesc->pGpu == pGpu, nonzero Size and PageCount, pVASpaceHeap != NULL, used/cached map lists empty on destruct
  Call sites: kbusUpdateRmAperture_GM107, kbusUnmapBar2ApertureWithFlags/Cached, kbusBar2IsReady, listPrependExisting, listPrev, constructObjEHeap, _kbusConstructVirtualBar2Lists/Heaps, _kbusDestructVirtualBar2Lists/Heaps, kbusConstructVirtualBar2CpuVisibleHeap_HAL, kbusConstructVirtualBar2CpuInvisibleHeap_HAL

src/kernel/gpu/bus/kern_bus_vgpu.c:
  NVRM: Unable to bind BAR2 to physical mode.
  Asserts: pParams->bUseUuid == NV_FALSE, pParams->connectionType != NV2080_CTRL_CMD_BUS_SET_P2P_MAPPING_CONNECTION_TYPE_INVALID
  Call sites: kbusGetVfBar0SizeBytes

src/kernel/gpu/bus/p2p.c:
  NVRM: invalid argument(s) in RmP2PGetPages, pFreeCallback=%p pData=%p
  NVRM: Requesting Bar1 mappings for address: 0x%llx, length: 0x%llx, BAR1 base: 0x%llx
  NVRM: no space for BAR1 mappings, length: 0x%llx
  NVRM: Reuse allocation for address: 0x%llx
  NVRM: New allocation for address: 0x%llx
  Asserts: NV_IS_ALIGNED64(address/length/offset, NVRM_P2P_PAGESIZE_BIG_64K), plus non-NULL checks on pMappingInfo, pSubDevice, pThirdPartyP2PInfo, ppPhysicalAddresses, pEntries, pbMemCpuCacheable, ppExtentInfo, pList, pMappingStart, pMappingLength, pMemDesc, pDevice, length == 0 termination
  Call sites: RmP2PGetPages / RmP2PGetInfoWithoutToken / RmP2PValidateSubDevice / RmP2PValidateAddressRangeOrGetPages / RmP2PGetVASpaceInfoWithoutToken / RmP2PGet/PutMigInfo, Cli*ThirdPartyP2P* helpers (token, vidmem, mapping, callback registration), RmThirdPartyP2PNVLinkGetPages, RmThirdPartyP2PBAR1GetPages, RmThirdPartyP2PMappingFree, _isSpaceAvailableForBar1P2PMapping, _create/_reuseThirdPartyP2PMappingExtent, _thirdpartyp2pFillEntries, _constructMappingExtentInfo, _freeMappingExtentInfo, NV_RM_RPC_MAP_MEMORY / NV_RM_RPC_UNMAP_MEMORY, kbusGetGpuFbPhysAddressForRdma, memmgrGetBAR1InfoForDevice, rmDeviceGpuLocksAcquire (RM_LOCK_MODULES_P2P), gpuIsApmFeatureEnabled

src/kernel/gpu/bus/p2p_api.c:
  NVRM: Unknown connection type
  NVRM: ERROR: P2P is Disabled, cannot create mappings
  NVRM: link training between GPU%u and SWITCH failed with status %x
  NVRM: Mapping already exists between the two subdevices (0x%08x), (0x%08x). Multiple mappings not supported on pre-PASCAL GPUs
  NVRM: EGM P2P not setup because of SPA only P2P flag!
  NVRM: Unexpected state, either of the peer ID is invalid
  Asserts: totalP2pObjectsAliveRefCount > 0 and < NV_U32_MAX on both local and remote KernelBus
  Call sites: kbusCreateP2PMapping_HAL / kbusRemoveP2PMapping_HAL (including the _P2PAPI _ATTRIBUTES _REMOTE_EGM _YES variant), kbusGetBar1P2PDmaInfo_HAL (both directions), kbusReserveP2PPeerIds_HAL (both GPUs), kbusSetupBindFla_HAL (local/remote gfid), kbusSet/UnsetP2PMailboxBar1Area, _p2papiReservePeerID (non-EGM and EGM), knvlinkTrainFabricLinksToActive, knvlinkGetUniqueFabricBaseAddress, memmgrIsLocalEgmEnabled, gpuIsSplitVasManagementServerClientRmEnabled, clientRefOrderedIter(Next), refAddDependant (both subdevice refs), NV0000_CTRL_CMD_SYSTEM_GET_P2P_CAPS_V2 control

src/kernel/gpu/bus/third_party_p2p.c:
  NVRM: Freeing P2P mapping for gpu VA: 0x%llx, length: 0x%llx
  Asserts: non-NULL pKey, ppMappingInfo, ppVidmemInfo, ppVASpaceInfo, ppThirdPartyP2P, pOffset, pMemory, pVASpaceToken, pFreeCallback; pMappingInfo->pFreeCallback == NULL before registration; pThirdPartyP2P->pAddressRangeTree == NULL on destruct
  Call sites: btreeSearch / btreeInsert / btreeUnlink / btreeEnumStart, CliDelThirdPartyP2PVidmemInfo, CliDelThirdPartyP2PClientPid, thirdpartyp2pIsValidClientPid, _thirdpartyp2pDelMappingInfoByKey, clientGetResource, mapNext / mapInsertValue, serverutilShareIter(Next), serverAllocShare / serverFreeShare, CliGetPlatformDataMatchFromVidMem, gpuFullPowerSanityCheck, gpuIsSurpriseRemovalSupported

src/kernel/gpu/bus/third_party_p2p_ctrl.c:
  Asserts: pClient != NULL
  Call sites: CliAddThirdPartyP2PClientPid, CliAdd/DelThirdPartyP2PVASpace, vidmem register/unregister parameter handlers

src/kernel/gpu/ccu/arch/blackwell/kernel_ccu_gb100.c:
  NVRM: CCU sample size retrieval failed with status: 0x%x
  (devBufSize, devSharedBufSize, migBufSize, migSharedBufSize)

src/kernel/gpu/ccu/arch/hopper/kernel_ccu_gh100.c:
  NVRM: Create/delete CCU shared buffers for mig (migEnabled:%u)
  NVRM: CCU mig memdesc unmap request failed with status: 0x%x
  Call sites: kccuInitMigSharedBuffer, kccuShrBufInfoToCcu, kccuShrBufIdxCleanup, kccuGetBufSize

src/kernel/gpu/ccu/kernel_ccu.c:
  NVRM: Failed to get the buffer size info(status: %u)
  NVRM: vGPU memory allocation failed for idx(%u) with status: 0x%x
  NVRM: CCU vGPU memdesc map rpc request failed with status: 0x%x
  NVRM: KernelCcu: memdesc get failed for input swizzId(%u), computeInst(%u)
  NVRM: KernelCcu: Get ccu stream / Set ccu stream / Invalid input params
  NVRM: KernelCcu: CCU stream state is already (%s)  [ENABLED / DISABLED]
  NVRM: CCU stream state set failed with status: 0x%x
  NVRM: CCU memdesc get failed for input idx(%u). Invalid index.
  NVRM: Failed to init device shared buffer(status: %u) / Failed to init mig shared buffer(status: %u)
  NVRM: Init shared buffer for device counters. / Init shared buffer for mig counters.
  NVRM: CCU memory allocation failed [for idx(%u)] with status: 0x%x
  NVRM: Send shared buffer info to phyRM/gsp to map.
  NVRM: CCU memdesc map request failed with status: 0x%x / CCU memdesc unmap request failed with status: 0x%x
  NVRM: Shared buffer unmap & free for idx(%u).
  NVRM: CCU port mem alloc / memdescCreate / memdescAlloc / memdescMap failed for(%u) with status: 0x%x
  Buffer layout pointers: pHeadTimeStamp, pTailTimeStamp, pCounterBlock, pLpwrFeatureEngagedMask, pSwizzId, pComputeId
  Call sites: _kccuAllocMemory, _kccuVgpuShrBufInfoToCcu, _kccuInitDevSharedBuffer, _kccuUnmapAndFreeMemory, memdescGetPte

src/kernel/gpu/ccu/kernel_ccu_api.c:
  NVRM: Shared buffer memdesc is NULL
  NVRM: Kernel mapping validation failed with status: 0x%x
  NVRM: kernelCcuApi shared buffer memdescMap failed with status: 0x%x
  Call sites: kccuStreamStateGet / kccuStreamStateSet, kccuMemDescGetForShrBufId (CCU_DEV_SHRBUF_ID), kccuCounterBlockSizeGet, kccuMemDescGetForComputeInst, _kccuapiMemdescGet, rmapiGetEffectiveAddrSpace, rmapiValidateKernelMapping, kmigmgrIsLocalEngineInInstance, kmigmgrGetLocalToGlobalEngineType, NV2080_CTRL_CMD_INTERNAL_HSHUB_GET_NUM_UNITS control

src/kernel/gpu/ce/arch/ampere/kernel_ce_ga100.c:
  NVRM: Root port speed from emulated config space = %d
  NVRM: Could not get root gen speed - check for GPU gen speed!
  NVRM: Gen Speed = %d
  NVRM: GPU%d <-> GPU%d PCE Index: %d LCE Index: %d
  NVRM: GPU%d has nvlink disabled. Skip programming
  NVRM: Unable to determine PCEs and LCEs for sysmem links
  NVRM: Unable to determine Hshub Id for sysmem links
  NVRM: No sysmem connections on this chip (PCIe or NVLink)!
  NVRM: No more available PCEs to assign!
  NVRM: GPU%d : RM Configured Values for CE Config
  NVRM: PCE-LCE map: PCE %d LCE 0x%x / GRCE Config: GRCE %d LCE 0x%x / exposeCeMask = 0x%x
  Asserts: pKernelNvlink != NULL, fbPceMask != 0, pceIndex < NV2080_CTRL_MAX_PCES, pCurrentTopo != NULL, non-NULL map/config/mask output pointers
  Call sites: clPcieGetRootGenSpeed, knvlinkGetLinkMaskToPeer, knvlinkGetNumLinksToSystem, _ceGetAlgorithmPceIndex, kceGetNvlinkPeerSupportedLceMask, kceGetSysmemSupportedLceMask, kceGetGrceSupportedLceMask, kceGetAvailableHubPceMask, kceMapPceLceForSysmemLinks / kceMapPceLceForNvlinkPeers / kceMapAsyncLceDefault, kceIsGenXorHigherSupported, kceApplyGen4orHigherMapping, kceGetAutoConfigTableEntry, kceGetMappings, kceGetPce2lceConfigSize1, kceGetGrceConfigSize1, kceIsCurrentMaxTopology, gpumgrGetSystemNvlinkTopo / gpumgrUpdateSystemNvlinkTopo, gpuGetNumCEs

src/kernel/gpu/ce/arch/ampere/kernel_ce_ga102.c:
  NVRM: Sysmem over NVLink is not POR!
  Call sites: kceIsCeNvlinkP2P, kceGetP2PCes_GH100, kceIsCeSysmemRead

src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb100.c:
  NVRM: LCE %d assigned for Sysmem Rd. / Sysmem Wr. / fast scrubbing. / NVLINK. / CC Work Submit.
  NVRM: Assigned LCE mask = 0x%x.
  NVRM: Invalid number of LCEs for C2C.! / Invalid number of LCEs for PCIe.!
  NVRM: C2C CE Mapping -- PCE Index: %d -> LCE Index: %d  (similar messages for PCIe, Async, Scrub, Decomp, Work submit, and Nvlink peer mappings)
  NVRM: Unable to assign the required number of LCEs for C2C.
  NVRM: GRCE is not shared and mapped to LCE Index: 0. / GRCE is shared and mapped to LCE Index: %d.
  NVRM: Invalid LCE Index Request. lceIndex = %d, maxLceCount = %d  (and the "Invalid Second LCE Index Request" variant)
  NVRM: GPU%d has nvlink disabled. Skip LCE-PCE mapping.
  NVRM: Decomp CE Mapping -- PCE Index: %d -> LCE Index: %d / Work submit CE Mapping -- PCE Index: %d -> LCE Index: %d
  Asserts: pKCeLce != NULL, pTopoParams != NULL, gpuGetNumCEs(pGpu) != 0, pPceLceMap / pGrceConfig / pExposedLceMask / pLocalPceLceMap / pLocalGrceConfig != NULL, !(pKernelNvlink == NULL && PDB_PROP_GPU_SKIP_CE_MAPPINGS_NO_NVLINK)
  Call sites: kceGetLceMask, kceGetLceMaskForShimInstance, kceFindShimOwner, kceMapPceLceForC2C / PCIe / Decomp / WorkSubmitLces / Scrub / GRCE, kceGetPceConfigForLceType, kceGetPceConfigForLceMIGGpuInstance, kceSupportsEquidistantPces, kceSetDecompCeCap, NV2080_CTRL_CMD_INTERNAL_HSHUB_GET_MAX_HSHUBS_PER_SHIM control
NULL*shimInstance*maxPceCount*src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb10b.c*NVRM: GRCE %d is shared and mapped to LCE Index: %d. **src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb10b.c**NVRM: GRCE %d is shared and mapped to LCE Index: %d. *NVRM: GPU%d PCE Index: %d LCE Index: %d **NVRM: GPU%d PCE Index: %d LCE Index: %d *(pGrceMask != NULL)*src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb202.c**(pGrceMask != NULL)**src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb202.c*NVRM: status = 0x%x **NVRM: status = 0x%x *numLces <= 2**numLces <= 2*bPceAssigned*src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb20b.c**src/kernel/gpu/ce/arch/blackwell/kernel_ce_gb20b.c*NVRM: GRCE %d is not shared **NVRM: GRCE %d is not shared *src/kernel/gpu/ce/arch/hopper/kernel_ce_gh100.c*NVRM: Skipping stubbed CE %d **src/kernel/gpu/ce/arch/hopper/kernel_ce_gh100.c**NVRM: Skipping stubbed CE %d *physicalCaps*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_PHYSICAL_CAPS, &physicalCaps, sizeof(physicalCaps))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_PHYSICAL_CAPS, &physicalCaps, sizeof(physicalCaps))*call to kceMapPceLceForCC**pLocalGrceMap*totalPcesAvailableMask*minP2PLce*pKCeMaxPces**pKCeMaxPces*maxPces*pTargetCe**pTargetCe*nvlinkPeerMask*NVRM: GPU %d Assigning Peer %d to LCE %d **NVRM: GPU %d Assigning Peer %d to LCE %d *NVRM: GPU %d invalid request for gpuCount %d **NVRM: GPU %d invalid request for gpuCount %d *linksPerHshub**linksPerHshub*maxLinksConnectedHshub*maxConnectedHshubId*phyLinkId*maxLcePerHshub**maxLcePerHshub***maxLcePerHshub*targetPceMask*numPces*localMaxPcePerHshub*localMaxLcePerHshub**localMaxLcePerHshub*localMaxHshub*NVRM: GPU %d Assigning Peer %d to preferred LCE %d **NVRM: GPU %d Assigning Peer %d to preferred LCE %d *NVRM: GPU %d Assigning Peer %d to first available LCE %d **NVRM: GPU %d Assigning Peer %d to first available LCE %d *statusC2C*NVRM: status = %d, statusC2C = %d 
**NVRM: status = %d, statusC2C = %d *maxLceCnt*hshubId < NV_CE_MAX_HSHUBS**hshubId < NV_CE_MAX_HSHUBS*call to kceGetNumPceRequired**pKCeLce*maxLceIdx*numNvLinkPeers*grceMappings**grceMappings*NVRM: Unable to read NV_EP_PCFG_GPU_LINK_CONTROL_STATUS from config space. **NVRM: Unable to read NV_EP_PCFG_GPU_LINK_CONTROL_STATUS from config space. *NVRM: Invalid PCE request. pceIndex = %d pceCnt = %d **NVRM: Invalid PCE request. pceIndex = %d pceCnt = %d *pHshubIdRequested*call to kceGetNvlinkCaps_IMPL*pCachedTopo*pCachedTopo != NULL*src/kernel/gpu/ce/arch/pascal/kernel_ce_gp100.c**pCachedTopo != NULL**src/kernel/gpu/ce/arch/pascal/kernel_ce_gp100.c*pAutoConfigTable**pAutoConfigTable*bCachedIdxExists*bCurrentIdxExists*call to kceClearAssignedNvlinkPeerMasks_DISPATCH**pCachedTopo*call to kceGetCeFromNvlinkConfig_IMPL*sysmemWriteCE*sysmemReadCE*kceGetCeFromNvlinkConfig(pGpu, pKCe, gpuMask, &sysmemReadCE, &sysmemWriteCE, NULL)**kceGetCeFromNvlinkConfig(pGpu, pKCe, gpuMask, &sysmemReadCE, &sysmemWriteCE, NULL)**pPceLceMap*bShimOwner*kceGetPceConfigForLceType(pGpu, pKCe, NV2080_CTRL_CE_LCE_TYPE_DECOMP, &numPcesPerLce, &numLces, &supportedPceMask, &supportedLceMask, &pcesPerHshub)**kceGetPceConfigForLceType(pGpu, pKCe, NV2080_CTRL_CE_LCE_TYPE_DECOMP, &numPcesPerLce, &numLces, &supportedPceMask, &supportedLceMask, &pcesPerHshub)*decompPceMask*shimConnectingHubMask*call to kceTopLevelPceLceMappingsUpdate_IMPL*kceTopLevelPceLceMappingsUpdate(pGpu, pKCeIter)**kceTopLevelPceLceMappingsUpdate(pGpu, pKCeIter)*call to kceIsSecureCe_DISPATCH*mcEngineIdx <= MC_ENGINE_IDX_CE_MAX**mcEngineIdx <= MC_ENGINE_IDX_CE_MAX*call to confComputeDeriveSecrets_DISPATCH*pCC*confComputeDeriveSecrets(pCC, mcEngineIdx)**confComputeDeriveSecrets(pCC, mcEngineIdx)*src/kernel/gpu/ce/arch/turing/kernel_ce_tu102.c**src/kernel/gpu/ce/arch/turing/kernel_ce_tu102.c*call to kceGetNvlinkMaxTopoForTable_DISPATCH*NVRM: GPU%d : NVLINK config not found in PCE2LCE table - using default entry **NVRM: GPU%d : NVLINK 
config not found in PCE2LCE table - using default entry *topoIdx*NVRM: GPU%d : RM Configured Values for CE Config : pceLceMap = 0x%01x%01x%01x%01x%01x%01x%01x%01x%01x, grceConfig = 0x%01x%01x, exposeCeMask = 0x%08x gpuMask = 0x%08x **NVRM: GPU%d : RM Configured Values for CE Config : pceLceMap = 0x%01x%01x%01x%01x%01x%01x%01x%01x%01x, grceConfig = 0x%01x%01x, exposeCeMask = 0x%08x gpuMask = 0x%08x *src/kernel/gpu/ce/arch/volta/kernel_ce_gv100.c**src/kernel/gpu/ce/arch/volta/kernel_ce_gv100.c*NVRM: GPU %d Peer %d has no links (could be an indirect peer). Sysmem LCE assigned %d! **NVRM: GPU %d Peer %d has no links (could be an indirect peer). Sysmem LCE assigned %d! *pKCeMatch**pKCeMatch*pKCeSubMatch**pKCeSubMatch*src/kernel/gpu/ce/kernel_ce.c**pVSI**src/kernel/gpu/ce/kernel_ce.c*ceCaps**ceCaps*bDecompEnabled*pPceAvailableMask*pPceAvailableMask != NULL**pPceAvailableMask != NULL*pLceAvailableMask*pLceAvailableMask != NULL**pLceAvailableMask != NULL*pNumMinPcesPerLce*pNumMinPcesPerLce != NULL**pNumMinPcesPerLce != NULL*pNumLcesToMap*pNumLcesToMap != NULL**pNumLcesToMap != NULL*pceConfigParams*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CE_GET_PCE_CONFIG_FOR_LCE_MIG_GPU_INSTANCE, &pceConfigParams, sizeof(pceConfigParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CE_GET_PCE_CONFIG_FOR_LCE_MIG_GPU_INSTANCE, &pceConfigParams, sizeof(pceConfigParams))*pNumPcesPerLce*pNumPcesPerLce != NULL**pNumPcesPerLce != NULL*pNumLces*pNumLces != NULL**pNumLces != NULL*pSupportedPceMask*pSupportedPceMask != NULL**pSupportedPceMask != NULL*pSupportedLceMask*pSupportedLceMask != NULL**pSupportedLceMask != NULL*pPcesPerHshub*pPcesPerHshub != NULL**pPcesPerHshub != NULL*call to ceEncodeLceTypeMetadataForPcie*metadataForLceType*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CE_GET_PCE_CONFIG_FOR_LCE_TYPE, &pceConfigParams, 
sizeof(pceConfigParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CE_GET_PCE_CONFIG_FOR_LCE_TYPE, &pceConfigParams, sizeof(pceConfigParams))*NVRM: Failed to update PCE-LCE mappings for LCE 0x%x. Return **NVRM: Failed to update PCE-LCE mappings for LCE 0x%x. Return *call to kceUpdateClassDB_KERNEL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_HUB_PCE_MASK_V2, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_HUB_PCE_MASK_V2, ¶ms, sizeof(params))*connectingHubPceMasks**connectingHubPceMasks*fbhubPceMask*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_FAULT_METHOD_BUFFER_SIZE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_CE_GET_FAULT_METHOD_BUFFER_SIZE, ¶ms, sizeof(params))*kceUpdateClassDB_HAL(pGpu, pKCe)**kceUpdateClassDB_HAL(pGpu, pKCe)*bUpdateNvlinkPceLce*call to kceGetNvlinkAutoConfigCeValues_DISPATCH*NVRM: CE AutoConfig is not supported. Skipping PCE2LCE update **NVRM: CE AutoConfig is not supported. Skipping PCE2LCE update *NVRM: Failed to get auto-config PCE-LCE mappings. Return **NVRM: Failed to get auto-config PCE-LCE mappings. Return *call to cePauseCeUtilsScheduling*exposeCeMask*call to rmapiControlCacheFreeForControl*rmapiControlCacheFreeForControl(gpuGetInstance(pGpu), NV2080_CTRL_CMD_CE_GET_CE_PCE_MASK)**rmapiControlCacheFreeForControl(gpuGetInstance(pGpu), NV2080_CTRL_CMD_CE_GET_CE_PCE_MASK)*NVRM: Failed to update PCE-LCE mappings. Return **NVRM: Failed to update PCE-LCE mappings. 
Return *call to ceResumeCeUtilsScheduling*pParams->engineIdx == MC_ENGINE_IDX_CE(pKCe->publicID)**pParams->engineIdx == MC_ENGINE_IDX_CE(pKCe->publicID)*NVRM: for CE%d **NVRM: for CE%d *call to kceNonstallIntrCheckAndClear_b3696a*call to engineNonStallIntrNotify*pRecords[engineIdx].pNotificationService == NULL**pRecords[engineIdx].pNotificationService == NULL*bFifoWaiveNotify**pNotificationService*NVRM: Stubbing KCE %d **NVRM: Stubbing KCE %d *bStubbed*call to gpuDeleteClassFromClassDBByEngTag_IMPL*NVRM: Unstubbing KCE %d **NVRM: Unstubbing KCE %d *call to gpuAddClassToClassDBByEngTag_IMPL*call to gpuUpdateEngineTable_IMPL*gpuUpdateEngineTable(pGpu)**gpuUpdateEngineTable(pGpu)*call to nvlinkCtrlCmdBusGetNvlinkCaps*nvlinkCapsParams**nvlinkCaps*call to kceGetSysmemRWLCEs_DISPATCH*pSysmemReadCE*pSysmemWriteCE*call to kceGetP2PCes_DISPATCH*call to knvlinkCoreGetRemoteDeviceInfo_IMPL**pKCeCaps*NVRM: Querying caps for LCE(%d) **NVRM: Querying caps for LCE(%d) *call to kceAssignCeCaps_DISPATCH*call to printCaps*NVRM: LCE%d caps (engineType = %d (%d)) **NVRM: LCE%d caps (engineType = %d (%d)) *NVRM: _CE_GRCE:%d **NVRM: _CE_GRCE:%d *NVRM: _CE_SHARED:%d **NVRM: _CE_SHARED:%d *NVRM: _CE_SYSMEM_READ:%d **NVRM: _CE_SYSMEM_READ:%d *NVRM: _CE_SYSMEM_WRITE:%d **NVRM: _CE_SYSMEM_WRITE:%d *NVRM: _CE_NVLINK_P2P:%d **NVRM: _CE_NVLINK_P2P:%d *NVRM: _CE_SYSMEM:%d **NVRM: _CE_SYSMEM:%d *NVRM: _CE_P2P:%d **NVRM: _CE_P2P:%d *NVRM: _CE_BL_SIZE_GT_64K_SUPPORTED:%d **NVRM: _CE_BL_SIZE_GT_64K_SUPPORTED:%d *NVRM: _CE_SUPPORTS_NONPIPELINED_BL:%d **NVRM: _CE_SUPPORTS_NONPIPELINED_BL:%d *NVRM: _CE_SUPPORTS_PIPELINED_BL:%d **NVRM: _CE_SUPPORTS_PIPELINED_BL:%d *NVRM: _CE_CC_SECURE:%d **NVRM: _CE_CC_SECURE:%d *NVRM: _CE_CC_WORK_SUBMIT:%d **NVRM: _CE_CC_WORK_SUBMIT:%d *NVRM: _CE_DECOMP_SUPPORTED:%d **NVRM: _CE_DECOMP_SUPPORTED:%d *NVRM: _CE_SCRUB:%d **NVRM: _CE_SCRUB:%d *call to kfifoRemoveSchedulingHandler_IMPL*call to kfifoAddSchedulingHandler_IMPL*kfifoAddSchedulingHandler(pGpu, 
GPU_GET_KERNEL_FIFO(pGpu), kceRunFipsSelfTest, pKCe, NULL, NULL)**kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), kceRunFipsSelfTest, pKCe, NULL, NULL)*call to kceRunFipsSelfTestDecrypt*call to kceRunFipsSelfTestEncrypt*gpuIsCCFeatureEnabled(pGpu)**gpuIsCCFeatureEnabled(pGpu)*call to gpuCheckEngineTable_IMPL*call to kmigmgrIsMIGSupported_IMPL*NVRM: Running FIPS test for CE%u **NVRM: Running FIPS test for CE%u *ceUtilsParams*forceCeId*objCreate(&pCeUtils, pMemoryManager, CeUtils, ENG_GET_GPU(pMemoryManager), NULL, &ceUtilsParams)**objCreate(&pCeUtils, pMemoryManager, CeUtils, ENG_GET_GPU(pMemoryManager), NULL, &ceUtilsParams)*call to ccslContextInitViaChannel_IMPL*ccslContextInitViaChannel_HAL(&pCcslCtx, pCeUtils->pChannel->hClient, pCeUtils->pChannel->subdeviceId, pCeUtils->pChannel->channelId)**ccslContextInitViaChannel_HAL(&pCcslCtx, pCeUtils->pChannel->hClient, pCeUtils->pChannel->subdeviceId, pCeUtils->pChannel->channelId)*memdescCreate(&pSrcMemDesc, pGpu, sizeof ceTestPlaintext, 0, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_ALLOC_FLAGS_PROTECTED)**memdescCreate(&pSrcMemDesc, pGpu, sizeof ceTestPlaintext, 0, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_ALLOC_FLAGS_PROTECTED)*memdescAlloc(pSrcMemDesc)**memdescAlloc(pSrcMemDesc)*memdescCreate(&pDstMemDesc, pGpu, sizeof encryptedData, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)**memdescCreate(&pDstMemDesc, pGpu, sizeof encryptedData, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)*memdescAlloc(pDstMemDesc)**memdescAlloc(pDstMemDesc)*memdescCreate(&pAuthMemDesc, pGpu, sizeof dataAuth, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)**memdescCreate(&pAuthMemDesc, pGpu, sizeof dataAuth, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)*memdescAlloc(pAuthMemDesc)**memdescAlloc(pAuthMemDesc)*memdescCreate(&pIvMemDesc, pGpu, sizeof Ivl, 0, 
NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)**memdescCreate(&pIvMemDesc, pGpu, sizeof Ivl, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)*memdescAlloc(pIvMemDesc)**memdescAlloc(pIvMemDesc)*memmgrMemDescMemSet(pMemoryManager, pDstMemDesc, 0, 0)**memmgrMemDescMemSet(pMemoryManager, pDstMemDesc, 0, 0)*memmgrMemDescMemSet(pMemoryManager, pAuthMemDesc, 0, 0)**memmgrMemDescMemSet(pMemoryManager, pAuthMemDesc, 0, 0)*call to memmgrMemWrite_IMPL*ceTestPlaintext**ceTestPlaintext*memmgrMemWrite(pMemoryManager, &srcSurface, ceTestPlaintext, sizeof ceTestPlaintext, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &srcSurface, ceTestPlaintext, sizeof ceTestPlaintext, TRANSFER_FLAGS_NONE)*bSecureCopy*authTagAddr*encryptIvAddr**pDstMemDesc**pSrcMemDesc*bEncrypt*call to ceutilsMemcopy_IMPL*ceutilsMemcopy(pCeUtils, ¶ms)**ceutilsMemcopy(pCeUtils, ¶ms)*call to spdmSendTestCommand*pCcslCtx*dataAuth*Ivl*encryptedData**dataAuth**Ivl**encryptedData*spdmSendTestCommand(pGpu, pCcslCtx, ceTestPlaintext, sizeof ceTestPlaintext, dataAuth, sizeof dataAuth, Ivl, sizeof Ivl, encryptedData, sizeof encryptedData, NV_TRUE)**spdmSendTestCommand(pGpu, pCcslCtx, ceTestPlaintext, sizeof ceTestPlaintext, dataAuth, sizeof dataAuth, Ivl, sizeof Ivl, encryptedData, sizeof encryptedData, NV_TRUE)*memmgrMemWrite(pMemoryManager, &dstSurface, encryptedData, sizeof encryptedData, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &dstSurface, encryptedData, sizeof encryptedData, TRANSFER_FLAGS_NONE)*memmgrMemWrite(pMemoryManager, &authSurface, dataAuth, sizeof dataAuth, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &authSurface, dataAuth, sizeof dataAuth, TRANSFER_FLAGS_NONE)*memmgrMemDescMemSet(pMemoryManager, pSrcMemDesc, 0, 0)**memmgrMemDescMemSet(pMemoryManager, pSrcMemDesc, 0, 0)*memmgrMemRead(pMemoryManager, &ivSurface, Ivl, sizeof Ivl, TRANSFER_FLAGS_NONE)**memmgrMemRead(pMemoryManager, &ivSurface, Ivl, sizeof 
Ivl, TRANSFER_FLAGS_NONE)*decryptedData**decryptedData*memmgrMemRead(pMemoryManager, &srcSurface, decryptedData, sizeof decryptedData, TRANSFER_FLAGS_NONE)**memmgrMemRead(pMemoryManager, &srcSurface, decryptedData, sizeof decryptedData, TRANSFER_FLAGS_NONE)*portMemCmp(decryptedData, ceTestPlaintext, sizeof ceTestPlaintext) == 0**portMemCmp(decryptedData, ceTestPlaintext, sizeof ceTestPlaintext) == 0*call to ccslContextClear_IMPL*NVRM: Test finished with status 0x%x **NVRM: Test finished with status 0x%x *call to kceIsDecompLce_STATIC_DISPATCH*memdescCreate(&pIvMemDesc, pGpu, CE_FIPS_SELF_TEST_IV_SIZE, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)**memdescCreate(&pIvMemDesc, pGpu, CE_FIPS_SELF_TEST_IV_SIZE, 0, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)*memmgrMemRead(pMemoryManager, &dstSurface, encryptedData, sizeof encryptedData, TRANSFER_FLAGS_NONE)**memmgrMemRead(pMemoryManager, &dstSurface, encryptedData, sizeof encryptedData, TRANSFER_FLAGS_NONE)*memmgrMemRead(pMemoryManager, &authSurface, dataAuth, sizeof dataAuth, TRANSFER_FLAGS_NONE)**memmgrMemRead(pMemoryManager, &authSurface, dataAuth, sizeof dataAuth, TRANSFER_FLAGS_NONE)*spdmSendTestCommand(pGpu, pCcslCtx, encryptedData, sizeof encryptedData, dataAuth, sizeof dataAuth, Ivl, sizeof Ivl, NULL, 0, NV_FALSE)**spdmSendTestCommand(pGpu, pCcslCtx, encryptedData, sizeof encryptedData, dataAuth, sizeof dataAuth, Ivl, sizeof Ivl, NULL, 0, NV_FALSE)*ccFipsTest*getKmbParams*kmb*decryptBundle*call to spdmSendCtrlCall_DISPATCH*encData**encData*call to gpuCheckEngine_DISPATCH*NVRM: KCE %d / %d: present=%d **NVRM: KCE %d / %d: present=%d *NVRM: KernelCE: thisPublicID = %d **NVRM: KernelCE: thisPublicID = %d *publicID*call to kceSetShimInstance_DISPATCH*bIsAutoConfigEnabled*bUseGen4Mapping*bMultipleP2PLce*RmCeEnableAutoConfig**RmCeEnableAutoConfig*NVRM: Disable CE Auto PCE-LCE Config **NVRM: Disable CE Auto PCE-LCE Config 
*RmCeUseGen4Mapping**RmCeUseGen4Mapping*NVRM: GEN4 mapping will use a HSHUB PCE (if available) for PCIe! **NVRM: GEN4 mapping will use a HSHUB PCE (if available) for PCIe! *refFindAncestorOfType(pCallContext->pResourceRef, classId(Device), &pDeviceRef)*src/kernel/gpu/ce/kernel_ce_context.c**refFindAncestorOfType(pCallContext->pResourceRef, classId(Device), &pDeviceRef)**src/kernel/gpu/ce/kernel_ce_context.c*pNvA0b5CreateParms*NVRM: Version = 0, using engineType (=%d) as CE instance **NVRM: Version = 0, using engineType (=%d) as CE instance *call to gpuGetRmEngineType_IMPL*NVRM: Unknown engine type %d requested **NVRM: Unknown engine type %d requested *NVRM: Version = 1, using engineType=%d **NVRM: Version = 1, using engineType=%d *NVRM: Unknown version = %d **NVRM: Unknown version = %d *call to ceIndexFromType*NVRM: Class %d, CE%d **NVRM: Class %d, CE%d *GPU_GET_KCE(pGpu, engineIndex)**GPU_GET_KCE(pGpu, engineIndex)*NVRM: Failed to determine CE number **NVRM: Failed to determine CE number *call to clearCePceCacheAndForwardCtrlToGsp*clearCePceCacheAndForwardCtrlToGsp(pSubdevice)*src/kernel/gpu/ce/kernel_ce_ctrl.c**clearCePceCacheAndForwardCtrlToGsp(pSubdevice)**src/kernel/gpu/ce/kernel_ce_ctrl.c*ceGetAllCaps*ceNumber*nv2080EngineId*ceCapsv2Params*call to subdeviceCtrlCmdCeGetCapsV2_VF*ceIndexFromType(pGpu, GPU_RES_GET_DEVICE(pSubdevice), rmEngineType, &ceNumber)**ceIndexFromType(pGpu, GPU_RES_GET_DEVICE(pSubdevice), rmEngineType, &ceNumber)*call to kceGetDeviceCaps_IMPL*src/kernel/gpu/ce/kernel_ce_shared.c*NVRM: ceEncodeLceTypeMetadataForPcie unknown **src/kernel/gpu/ce/kernel_ce_shared.c**NVRM: ceEncodeLceTypeMetadataForPcie unknown *call to ceutilsUsesPreferredCe_IMPL*call to memmgrDestroyCeUtils_IMPL*call to memmgrInitCeUtils_IMPL*memmgrInitCeUtils(GPU_GET_MEMORY_MANAGER(pGpu), NV_FALSE, NV_TRUE)**memmgrInitCeUtils(GPU_GET_MEMORY_MANAGER(pGpu), NV_FALSE, NV_TRUE)*call to ceutilsResumeSubmission_IMPL*call to 
ceutilsPauseSubmission_IMPL**pCeCapsParams*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_CE_GET_ALL_PHYSICAL_CAPS, pCeCapsParams, sizeof(*pCeCapsParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_CE_GET_ALL_PHYSICAL_CAPS, pCeCapsParams, sizeof(*pCeCapsParams))*kceStatus*NVRM: NV2080_CTRL_CE_GET_CAPS_V2 ceEngineType = %d **NVRM: NV2080_CTRL_CE_GET_CAPS_V2 ceEngineType = %d *grCeCount*RM_ENGINE_TYPE_IS_COPY(rmCeEngineType)**RM_ENGINE_TYPE_IS_COPY(rmCeEngineType)*partnerParams*numPartners*call to kfifoGetEnginePartnerList_DISPATCH*NVRM: Could not update the engine db. This is fatal **NVRM: Could not update the engine db. This is fatal *engineDB*pGpu->engineDB.size <= NV2080_CTRL_GPU_MAX_ENGINE_PARTNERS**pGpu->engineDB.size <= NV2080_CTRL_GPU_MAX_ENGINE_PARTNERS*partnerList**partnerList*call to kceGetGrceMaskReg_DISPATCH*NVRM: PartnerList space too small. This is fatal **NVRM: PartnerList space too small. This is fatal *call to confComputeIsDebugModeEnabled_DISPATCH*src/kernel/gpu/conf_compute/arch/blackwell/conf_compute_gb100.c*NVRM: Cannot boot Confidential Compute as debug board is not supported. **src/kernel/gpu/conf_compute/arch/blackwell/conf_compute_gb100.c**NVRM: Cannot boot Confidential Compute as debug board is not supported. 
*call to kchannelGetEngineType*call to confComputeGetKeySpaceFromKChannel_GH100*call to confComputeDeriveSecrets_GH100*call to confComputeGetEngineIdFromKeySpace_GH100*call to libspdm_hkdf_sha256_expand*update_keyseeed*call to confComputeGetCurrentKeySeed_DISPATCH**update_keyseeed**call to confComputeGetCurrentKeySeed_DISPATCH*ce_channel**ce_channel*call to confComputeKeyStoreRetrieveViaChannel_GH100*rotateOperation == ROTATE_IV_ALL_VALID*src/kernel/gpu/conf_compute/arch/blackwell/conf_compute_keystore_gb100.c**rotateOperation == ROTATE_IV_ALL_VALID**src/kernel/gpu/conf_compute/arch/blackwell/conf_compute_keystore_gb100.c*clientKmb*encryptBundle*call to confComputeIncChannelCounter*bIsWorkLaunch*cryptBundle*call to confComputeGetAndUpdateCurrentKeySeed_DISPATCH*curKeySeed**curKeySeed*update_key1**update_key1**pStr*update_key2**update_key2*pKeyId != NULL**pKeyId != NULL*src/kernel/gpu/conf_compute/arch/hopper/conf_compute_gh100.c*NVRM: CC is not supported on self-hosted platforms **src/kernel/gpu/conf_compute/arch/hopper/conf_compute_gh100.c**NVRM: CC is not supported on self-hosted platforms *CC_GKEYID_GET_KEYSPACE(globalKeyId) != CC_KEYSPACE_GSP**CC_GKEYID_GET_KEYSPACE(globalKeyId) != CC_KEYSPACE_GSP*call to confComputeGetKeyPairByKey_IMPL*globalH2DKey*pRmApi->Control( pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_ROTATE_KEYS, ¶ms, sizeof(NV2080_CTRL_INTERNAL_CONF_COMPUTE_ROTATE_KEYS_PARAMS))**pRmApi->Control( pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_ROTATE_KEYS, ¶ms, sizeof(NV2080_CTRL_INTERNAL_CONF_COMPUTE_ROTATE_KEYS_PARAMS))*call to confComputeKeyStoreUpdateKey_DISPATCH*confComputeKeyStoreUpdateKey_HAL(pConfCompute, h2dKey)**confComputeKeyStoreUpdateKey_HAL(pConfCompute, h2dKey)*confComputeKeyStoreUpdateKey_HAL(pConfCompute, d2hKey)**confComputeKeyStoreUpdateKey_HAL(pConfCompute, d2hKey)*call to confComputeKeyStoreDepositIvMask_DISPATCH*call to 
confComputeGetMaxCeKeySpaceIdx_DISPATCH*call to confComputeInitChannelIterForKey_IMPL*confComputeInitChannelIterForKey(pGpu, pConfCompute, h2dKey, &iterator)**confComputeInitChannelIterForKey(pGpu, pConfCompute, h2dKey, &iterator)*call to confComputeGetNextChannelForKey_IMPL*call to confComputeKeyStoreRetrieveViaChannel_DISPATCH*confComputeKeyStoreRetrieveViaChannel( pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, CHANNEL_IV_OPERATION_NONE, &pKernelChannel->clientKmb)**confComputeKeyStoreRetrieveViaChannel( pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, CHANNEL_IV_OPERATION_NONE, &pKernelChannel->clientKmb)*hmacBundle*call to confComputeGlobalKeyIsKernelPriv_DISPATCH*(keySpace >= CC_KEYSPACE_LCE0) && (keySpace <= CC_KEYSPACE_LCE7)**(keySpace >= CC_KEYSPACE_LCE0) && (keySpace <= CC_KEYSPACE_LCE7)*call to confComputeKeyStoreDeriveKey_DISPATCH*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_LOCKED_RPC))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_LOCKED_RPC))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_LOCKED_RPC))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_LOCKED_RPC))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_DMA))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_DMA))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_DMA))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_DMA))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_REPLAYABLE_FAULT))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_REPLAYABLE_FAULT))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, 
CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_NON_REPLAYABLE_FAULT))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_NON_REPLAYABLE_FAULT))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_NVLE_P2P_WRAPPING))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_NVLE_P2P_WRAPPING))*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_DERIVE_SWL_KEYS, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_DERIVE_SWL_KEYS, ¶ms, sizeof(params))*ivMaskSet**ivMaskSet*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_USER))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_USER))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_HMAC_USER))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_HMAC_USER))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_KERN))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_KERN))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_HMAC_KERN))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_HMAC_KERN))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_SCRUBBER))**confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_DATA_SCRUBBER))*confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, 
confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(CC_KEYSPACE_SEC2, CC_LKEYID_CPU_SEC2_HMAC_SCRUBBER))
call to confComputeDeriveSecretsForCEKeySpace_DISPATCH
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_DERIVE_LCE_KEYS, &params, sizeof(params))
confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(ccKeyspaceLCEIndex, CC_LKEYID_LCE_H2D_USER))
confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(ccKeyspaceLCEIndex, CC_LKEYID_LCE_D2H_USER))
confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(ccKeyspaceLCEIndex, CC_LKEYID_LCE_H2D_KERN))
confComputeKeyStoreDeriveKey_HAL(pConfCompute, CC_GKEYID_GEN(ccKeyspaceLCEIndex, CC_LKEYID_LCE_D2H_KERN))

src/kernel/gpu/conf_compute/arch/hopper/conf_compute_key_rotation_gh100.c
NVRM: RM internal key rotation not supported for protected PCIe!
RmConfComputeKeyRotation
NVRM: Enabling RM internal keys for Key Rotation by regkey override!
NVRM: Disabling RM internal keys for Key Rotation by regkey override!
RmKeyRotationInternalThreshold
NVRM: RmKeyRotationInternalThreshold must be higher than minimum of %u!
NVRM: Setting internal key rotation threshold to %u.
keyRotationInternalThreshold
RmKeyRotationThresholdDelta
NVRM: Illegal value for RmKeyRotationThresholdDelta.
NVRM: Cancelling override of threshold delta.
NVRM: Setting key rotation threshold delta to %u.
keyRotationThresholdDelta
call to confComputeSetKeyRotationThreshold_IMPL
confComputeSetKeyRotationThreshold(pConfCompute, pConfCompute->attackerAdvantage)
RmKeyRotationLowerThreshold
RmKeyRotationUpperThreshold
NVRM: Setting key rotation lower threshold to %u and upper threshold to %u.
keyRotationLowerThreshold
NVRM: RmKeyRotationUpperThreshold must be greater than RmKeyRotationLowerThreshold.
NVRM: Cancelling override of upper and lower key rotation thresholds.
NVRM: RmKeyRotationUpperThreshold must be set if RmKeyRotationLowerThreshold is set.
RmKeyRotationTimeout
NVRM: Setting key rotation user-mode timeout to %u seconds.
keyRotationTimeout
NVRM: Key rotation user-mode timeout must be greater than or equal to %u.
NVRM: Cancelling override of user-mode timeout.
NVRM: Confidential Compute key rotation enabled via regkey override.
keyRotationEnableMask
NVRM: Confidential Compute key rotation disabled via regkey override.
NVRM: Confidential Compute key rotation is enabled.
NVRM: Confidential Compute key rotation is disabled.
localH2DKey
localD2HKey
call to confComputeGetKeySlotFromGlobalKeyId_IMPL
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, h2dKey, &h2dIndex)
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, d2hKey, &d2hIndex)
call to confComputeIsLowerThresholdCrossed_IMPL
aggregateStats
call to confComputeIsUpperThresholdCrossed_IMPL
confComputeInitChannelIterForKey(pGpu, pConfCompute, h2dKey, &iter)
pEncStatsBufMemDesc
pEncStats
pEncStats != NULL
NVRM: Failed to get stats for channel 0x%08x RM engineId = 0x%x
call to kchannelGetDebugTag
NVRM: Encryption stats for channel 0x%08x with h2dKey 0x%x
NVRM: Total h2d bytes encrypted = 0x%llx
NVRM: Total d2h bytes encrypted = 0x%llx
NVRM: Total h2d encrypt ops = 0x%llx
NVRM: Total d2h encrypt ops = 0x%llx
freedChannelAggregateStats
totalBytesEncrypted
totalEncryptOps
NVRM: Aggregate stats for h2dKey 0x%x and d2hKey 0x%x
pEvent->pUserData != NULL
call to confComputeSetKeyRotationStatus_IMPL
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_FAILED_TIMEOUT)
NVRM: Hit timeout on key 0x%x, triggering KR
call to confComputePerformKeyRotation_IMPL
call to confComputeGetKeyRotationStatus_IMPL
confComputeGetKeyRotationStatus(pConfCompute, h2dKey, &state)
call to calculateEncryptionStatsByKeyPair
calculateEncryptionStatsByKeyPair(pGpu, pConfCompute, h2dKey, d2hKey)
call to isUpperThresholdCrossed
NVRM: Crossed UPPER threshold for key = 0x%x
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_FAILED_THRESHOLD)
confComputePerformKeyRotation(pGpu, pConfCompute, h2dKey, d2hKey, NV_TRUE)
call to isLowerThresholdCrossed
NVRM: Crossed LOWER threshold for key = 0x%x
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_PENDING)
call to confComputeGlobalKeyIsUvmKey_DISPATCH
keyRotationTimeoutInfo
*pH2DKey
tmrEventCreate(pTmr, &pConfCompute->keyRotationTimeoutInfo[h2dIndex].pTimer, keyRotationTimeoutCallback, (void*)pH2DKey, TMR_FLAGS_NONE)
call to confComputeIsUvmKeyRotationPending_IMPL
call to confComputeStopKeyRotationTimer_IMPL
confComputeStopKeyRotationTimer(pGpu, pConfCompute, h2dKey)
tmrEventScheduleRelSec(pTmr, pConfCompute->keyRotationTimeoutInfo[h2dIndex].pTimer, pConfCompute->keyRotationTimeout)
call to confComputeGetKeyPairForKeySpace_DISPATCH
confComputeGetKeyRotationStatus(pConfCompute, userH2DKey, &userKRStatus)
NVRM: User key rotation pending on key 0x%x
confComputeStopKeyRotationTimer(pGpu, pConfCompute, userH2DKey)
call to kchannelUpdateNotifierMem_IMPL
keyRotationCount
kchannelUpdateNotifierMem(pKernelChannel, NV_CHANNELGPFIFO_NOTIFICATION_TYPE_KEY_ROTATION_STATUS, pConfCompute->keyRotationCount[h2dIndex], 0, (NvU32)KEY_ROTATION_STATUS_PENDING)
NVRM: channel 0x%08x has pending key rotation, writing notifier with val 0x%x
call to kchannelNotifyEvent_IMPL
NVRM: Skipping keyspace = %d since mask = 0x%x
call to getKeyPairForKeySpace
call to triggerKeyRotationByKeyPair
tempStatus == NV_OK
NVRM: Failed to calculate encryption statistics for H2D key 0x%x with status 0x%x
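The strings above suggest a two-level trigger: crossing a lower threshold marks a key pair PENDING for scheduled rotation, while crossing the upper threshold marks it FAILED_THRESHOLD and forces immediate rotation. The sketch below is only an illustrative reconstruction of that decision, not the RM implementation; all names, types, and threshold semantics here are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the trigger flow implied by the log strings:
 * lower threshold crossed -> PENDING, upper threshold crossed ->
 * FAILED_THRESHOLD (rotate immediately). Illustrative only. */
typedef enum {
    KR_STATUS_IDLE,
    KR_STATUS_PENDING,
    KR_STATUS_FAILED_THRESHOLD,
} KR_STATUS;

typedef struct {
    uint64_t totalBytesEncrypted;
    uint64_t totalEncryptOps;
} AGGREGATE_STATS;

static KR_STATUS triggerKeyRotationByStats(const AGGREGATE_STATS *pStats,
                                           uint64_t lowerThreshold,
                                           uint64_t upperThreshold)
{
    /* Either metric crossing a threshold counts; take the larger one. */
    uint64_t worst = (pStats->totalBytesEncrypted > pStats->totalEncryptOps)
                         ? pStats->totalBytesEncrypted
                         : pStats->totalEncryptOps;

    if (worst > upperThreshold)
        return KR_STATUS_FAILED_THRESHOLD; /* rotate immediately */
    if (worst > lowerThreshold)
        return KR_STATUS_PENDING;          /* schedule rotation */
    return KR_STATUS_IDLE;
}
```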
pGlobalH2DKey
pGlobalD2HKey
call to initInternalKeyRotationRegistryOverrides
call to initKeyRotationRegistryOverrides
m_keySlot
pKeyStore
*pKeyStore

src/kernel/gpu/conf_compute/arch/hopper/conf_compute_keystore_gh100.c
call to getChannelCounter
call to kchannelCheckIsUserMode_IMPL
call to kchannelCheckIsKernel_IMPL
gsp_cpu_locked_rpc
cpu_gsp_locked_rpc
gsp_cpu_dma
cpu_gsp_dma
gsp_cpu_replayable_fault
gsp_cpu_non_replayable_fault
cpu_gsp_nvle_p2p_wrapping
cpu_sec2_data_user
cpu_sec2_hmac_user
cpu_sec2_data_kernel
cpu_sec2_hmac_kernel
cpu_sec2_data_scrubber
cpu_sec2_hmac_scrubber
Lce00_h2d_user
Lce00_d2h_user
Lce00_h2d_kernel
Lce00_d2h_kernel
Lce00_h2d_p2p
Lce00_d2h_p2p
Lce01_h2d_user
Lce01_d2h_user
Lce01_h2d_kernel
Lce01_d2h_kernel
Lce01_h2d_p2p
Lce01_d2h_p2p
Lce02_h2d_user
Lce02_d2h_user
Lce02_h2d_kernel
Lce02_d2h_kernel
Lce02_h2d_p2p
Lce02_d2h_p2p
Lce03_h2d_user
Lce03_d2h_user
Lce03_h2d_kernel
Lce03_d2h_kernel
Lce03_h2d_p2p
Lce03_d2h_p2p
Lce04_h2d_user
Lce04_d2h_user
Lce04_h2d_kernel
Lce04_d2h_kernel
Lce04_h2d_p2p
Lce04_d2h_p2p
Lce05_h2d_user
Lce05_d2h_user
Lce05_h2d_kernel
Lce05_d2h_kernel
Lce05_h2d_p2p
Lce05_d2h_p2p
Lce06_h2d_user
Lce06_d2h_user
Lce06_h2d_kernel
Lce06_d2h_kernel
Lce06_h2d_p2p
Lce06_d2h_p2p
Lce07_h2d_user
Lce07_d2h_user
Lce07_h2d_kernel
Lce07_d2h_kernel
Lce07_h2d_p2p
Lce07_d2h_p2p
Lce10_h2d_user
Lce10_d2h_user
Lce10_h2d_kernel
Lce10_d2h_kernel
Lce10_h2d_p2p
Lce10_d2h_p2p
Lce11_h2d_user
Lce11_d2h_user
Lce11_h2d_kernel
Lce11_d2h_kernel
Lce11_h2d_p2p
Lce11_d2h_p2p
Lce12_h2d_user
Lce12_d2h_user
Lce12_h2d_kernel
Lce12_d2h_kernel
Lce12_h2d_p2p
Lce12_d2h_p2p
Lce13_h2d_user
Lce13_d2h_user
Lce13_h2d_kernel
Lce13_d2h_kernel
Lce13_h2d_p2p
Lce13_d2h_p2p
Lce14_h2d_user
Lce14_d2h_user
Lce14_h2d_kernel
Lce14_d2h_kernel
Lce14_h2d_p2p
Lce14_d2h_p2p
Lce15_h2d_user
Lce15_d2h_user
Lce15_h2d_kernel
Lce15_d2h_kernel
Lce15_h2d_p2p
Lce15_d2h_p2p
Lce16_h2d_user
Lce16_d2h_user
Lce16_h2d_kernel
Lce16_d2h_kernel
Lce16_h2d_p2p
Lce16_d2h_p2p
Lce17_h2d_user
Lce17_d2h_user
Lce17_h2d_kernel
Lce17_d2h_kernel
Lce17_h2d_p2p
Lce17_d2h_p2p
globalKeyIdString
call to getKeyIdSec2
getKeyIdSec2(pKernelChannel, ROTATE_IV_ENCRYPT, &lh2dKeyId)
getKeyIdSec2(pKernelChannel, ROTATE_IV_HMAC, &ld2hKeyId)
call to confComputeGetKeySpaceFromKChannel_DISPATCH
confComputeGetKeySpaceFromKChannel_HAL(pConfCompute, pKernelChannel, &keySpace) != NV_OK
call to confComputeGetLceKeyIdFromKChannel_DISPATCH
confComputeGetLceKeyIdFromKChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_ENCRYPT, &lh2dKeyId)
confComputeGetLceKeyIdFromKChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_DECRYPT, &ld2hKeyId)
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, globalKeyId, &slotIndex)
NVRM: Updating key with global key ID %x.
pKey != NULL
call to libspdm_sha256_hash_all
tempMem
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, globalKeyId, &slotNumber)
NVRM: Retrieving KMB from slot number = %d and type is %d.
call to checkSlot
call to incrementChannelCounter
call to confComputeKeyStoreRetrieveViaKeyId_GH100
NVRM: Clearing the Export Master Key.
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, globalKeyId, &slotNumber) == NV_OK
NVRM: Depositing IV mask for global key ID %x.
NVRM: Deriving key for global key ID %x.
NVRM: Deinitializing keystore.
call to confComputeKeyStoreClearExportMasterKey_DISPATCH
*m_keySlot
NVRM: Initializing keystore.
call to ccslQueryMessagePool_IMPL
call to isChannel
pEncStatsBuffer
call to ccslRotationChecksDecrypt
*pConfCompute
ivOut
ivPtr
*ivPtr
ivIn
call to ccslIncrementCounter_IMPL
call to getMessageCounterAndLimit
messageCounter
call to ccslIncrementCounter192
hmac_ctx
*hmac_ctx
call to libspdm_hmac_sha256_set_key
*keyIn
call to libspdm_hmac_sha256_free
call to libspdm_hmac_sha256_update
*inputBuffer
call to libspdm_hmac_sha256_final

src/kernel/gpu/conf_compute/ccsl.c
NVRM: CCSL Error! IV overflow detected!
carry
call to ccslDecrypt_KERNEL
aadBuffer
*ivMaskIn
pDecryptBundles
pDecryptBundle
call to libspdm_aead_aes_gcm_decrypt_prealloc
openrmCtx
call to ccslRotationChecksEncrypt
call to ccslEncrypt_KERNEL
ivMaskOut
call to libspdm_aead_aes_gcm_encrypt_prealloc
keyOut
rotateIvParams
rotateIvType
call to getGpuViaChannelHandle
NVRM: Converting %s to NV_ERR_GENERIC.
pRmApi->Control(pRmApi, pCtx->hClient, pCtx->hChannel, NVC56F_CTRL_ROTATE_SECURE_CHANNEL_IV, &rotateIvParams, sizeof(rotateIvParams))
updatedKmb
NVRM: Updating the CCSL context.
currDecryptBundle
call to writeKmbToContext
NVRM: Clearing the CCSL context.
*pDecryptBundles
*pEncStatsBuffer
call to openrmCtxFree
NVRM: Initializing CCSL context via global key ID.
ccCaps
ppCtx
*ppCtx
call to confComputeKeyStoreIsValidGlobalKeyId_DISPATCH
call to libspdm_aead_gcm_prealloc
call to confComputeKeyStoreRetrieveViaKeyId_DISPATCH
msgCounterSize
*pConfCompute
NVRM: Initializing CCSL context via channel.
call to getKernelChannelViaChannelHandle
*pKernelChannel
*openrmCtx
call to kchannelSetEncryptionStatsBuffer_DISPATCH
globalKeyIdIn
globalKeyIdOut
call to libspdm_aead_free
call to CliGetKernelChannel
pChannelClient
call to ccslSplit32
call to ccslSplit64
call to confComputeIsGivenThresholdCrossed_IMPL
NVRM: Triggering key rotation for global key id 0x%x
NVRM: Total bytes encrypted 0x%llx total encrypt ops 0x%llx
call to ccslUpdateViaKeyId
bytesEncryptedD2H
numEncryptionsD2H
bytesEncryptedH2D
numEncryptionsH2D

src/kernel/gpu/conf_compute/conf_compute.c
pSlot
pSlot != NULL
call to _confComputeGetKeyspaceSize
call to _confComputeDeinitSessionKeys
NVRM: ConfCompute: Fatal error hit!
bAcceptClientRequest
bFatalFailure
NVRM: ConfCompute: Failed setting GPU state to not ready!
call to confComputeIsSpdmEnabled_DISPATCH
call to spdmContextDeinit_IMPL
NVRM: ConfCompute: Failed tearing down SPDM!: 0x%x!
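The ccsl.c strings (ccslIncrementCounter, "carry", "CCSL Error! IV overflow detected!") point at a message-counter increment that must fail rather than wrap, since reusing a GCM IV under the same key is catastrophic. The following is a minimal sketch of such a check under assumed semantics; the function name, the little-endian layout, and the 8-byte counter width are all assumptions, not the actual ccsl.c logic.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: increment a little-endian counter held in the low
 * `counterSize` bytes (counterSize <= 8) of an IV, refusing to wrap.
 * Mirrors the idea behind the "CCSL Error! IV overflow detected!" path;
 * not the real ccslIncrementCounter implementation. */
static int incrementCounter(uint8_t *ctr, size_t counterSize, uint64_t increment)
{
    uint64_t value = 0;
    for (size_t i = 0; i < counterSize; i++)
        value |= (uint64_t)ctr[i] << (8 * i);

    /* Detect overflow before mutating anything. */
    uint64_t limit = (counterSize >= 8)
                         ? UINT64_MAX
                         : ((1ull << (8 * counterSize)) - 1);
    if (increment > limit || value > limit - increment)
        return -1; /* would wrap: IV overflow */

    value += increment;
    for (size_t i = 0; i < counterSize; i++)
        ctr[i] = (uint8_t)(value >> (8 * i));
    return 0;
}
```

Checking before mutating keeps the counter unchanged on failure, so a caller can surface the error and rotate the key without having already burned an IV value.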
*ppKernelChannel
ppKernelChannel != NULL
call to kfifoGetNextKernelChannel_IMPL
pIt
call to kchannelGetRunlistId
call to confComputeGetEngineIdFromKeySpace_DISPATCH
engineId != RM_ENGINE_TYPE_NULL
kfifoEngineInfoXlate(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, engineId, ENGINE_INFO_TYPE_RUNLIST, &runlistId)
call to kfifoGetChannelIterator_IMPL
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_CONF_COMPUTE_GET_STATIC_INFO, &pConfCompute->ccStaticInfo, sizeof(pConfCompute->ccStaticInfo))
NVRM: BAR1 Trusted: 0x%x PCIE Trusted: 0x%x
ccStaticInfo
call to tmrEventCancel_IMPL
call to confComputeEnableKeyRotationCallback_DISPATCH
NVRM: Failed to disable key rotation 0x%x
call to confComputeDeriveSessionKeys_DISPATCH
confComputeEnableKeyRotationCallback_HAL(pGpu, pConfCompute, NV_TRUE)
NVRM: Tearing down CC Keys.
call to confComputeKeyStoreDeinit_DISPATCH
pRpcCcslCtx
pDmaCcslCtx
pNvleP2pWrappingCcslCtx
*pRpcCcslCtx
*pDmaCcslCtx
pGspSec2RpcCcslCtx
*pNvleP2pWrappingCcslCtx
call to confComputeKeyStoreInit_DISPATCH
confComputeKeyStoreInit_HAL(pConfCompute)
call to spdmRetrieveExportSecret_IMPL
call to confComputeKeyStoreGetExportMasterKey_DISPATCH
spdmRetrieveExportSecret(pGpu, pSpdm, CC_EXPORT_MASTER_KEY_SIZE_BYTES, confComputeKeyStoreGetExportMasterKey(pConfCompute))
confComputeDeriveSecrets_HAL(pConfCompute, MC_ENGINE_IDX_GSP)
call to confComputeDeriveInitialKeySeed_DISPATCH
confComputeDeriveInitialKeySeed_HAL(pConfCompute)
call to ccslContextInitViaKeyId_KERNEL
ccslContextInitViaKeyId(pConfCompute, &pConfCompute->pRpcCcslCtx, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_LOCKED_RPC))
ccslContextInitViaKeyId(pConfCompute, &pConfCompute->pDmaCcslCtx, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_DMA))
ccslContextInitViaKeyId(pConfCompute, &pConfCompute->pReplayableFaultCcslCtx, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_REPLAYABLE_FAULT))
ccslContextInitViaKeyId(pConfCompute, &pConfCompute->pNonReplayableFaultCcslCtx, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_GSP_CPU_NON_REPLAYABLE_FAULT))
ccslContextInitViaKeyId(pConfCompute, &pConfCompute->pNvleP2pWrappingCcslCtx, CC_GKEYID_GEN(CC_KEYSPACE_GSP, CC_LKEYID_CPU_GSP_NVLE_P2P_WRAPPING))
NVRM: Spdm object is not created when SPDM supported !!.
RmConfidentialCompute
NVRM: Confidential Compute enabled via regkey override.
NVRM: Confidential Compute dev mode enabled via regkey override.
NVRM: Enabling NVlink encryption via regkey override.
NVRM: SPDM is enabled by default.
RmConfComputeSpdmPolicy
NVRM: Confidential Compute SPDM enabled via regkey override.
NVRM: Confidential Compute SPDM disabled via regkey override.
NVRM: Confidential Compute SPDM disabled on Fmodel.
RmGspOwnedFaultBuffersEnable
pGspHeartbeatTimer
gspProxyRegkeys
call to confComputeInstLocOverrides
NVRM: CC mode is enabled by HW
call to gpuIsProtectedPcieEnabledInHw_DISPATCH
NVRM: Enabling protected PCIe in secure PRI
call to gpuIsDevModeEnabledInHw_DISPATCH
NVRM: CC Devtools mode is enabled by HW
call to gpuIsMultiGpuNvleEnabledInHw_DISPATCH
NVRM: Enabling NVlink encryption with multi GPU mode.
call to _confComputeInitRegistryOverrides
NVRM: Unexpected failure in confComputeConstructEngine! Status:0x%x
call to sysGetStaticConfig
NVRM: GPU confidential compute capability is not enabled.
NVRM: GPU does not support confidential compute.
bForceEnableCC
NVRM: CPU does not support confidential compute.
call to confComputeIsGpuCcCapable_DISPATCH
confComputeIsGpuCcCapable_HAL(pGpu, pConfCompute)
NVRM: Confidential Compute devtools mode DISABLED in APM.
keyRotationState
call to confComputeEnableKeyRotationSupport_DISPATCH
confComputeEnableKeyRotationSupport_HAL(pGpu, pConfCompute)
call to confComputeEnableInternalKeyRotationSupport_DISPATCH
confComputeEnableInternalKeyRotationSupport_HAL(pGpu, pConfCompute)
instLocOverrides
instLocOverrides2
instLocOverrides3
instLocOverrides4
PDB_PROP_GPU_IS_ALL_INST_IN_SYSMEM
rmapiLockIsOwner() && rmGpuLockIsOwner()

src/kernel/gpu/conf_compute/conf_compute_api.c
subdeviceGetByHandle(RES_GET_CLIENT(pConfComputeApi), pParams->hSubDevice, &pSubdevice)
bKernelKeyRotation
bUserKeyRotation
pConfComputeApi->pCcCaps->bAcceptClientRequest == NV_FALSE
confComputeSetKeyRotationThreshold(pConfCompute, pParams->attackerAdvantage)
ccAttackerAdvantage
*pKernelFifo
maxSec2Channels
maxCeChannels
*pSpdm
cecAttestationReportSize
call to spdmGetAttestationReport_DISPATCH
cecAttestationReport
call to confComputeSetErrorState_DISPATCH
certChainSize
call to spdmGetCertChains_DISPATCH
*pHeap
*pKernelMIGManager
call to kmigmgrGetMemoryPartitionHeapFromDevice_IMPL
kmigmgrGetMemoryPartitionHeapFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pSubdevice), &pMemoryPartitionHeap)
call to pmaGetTotalProtectedMemory
call to pmaGetTotalUnprotectedMemory
protectedMemSizeInKb
unprotectedMemSizeInKb
call to knvlinkSetupNvleRemapTables_IMPL
knvlinkSetupNvleRemapTables(pGpu, pKernelNvlink)
cpuCapability
gpusCapability
environment
ccFeature
devToolsMode
multiGpuMode
*pCcCaps

src/kernel/gpu/conf_compute/conf_compute_key_rotation.c
confComputeGetKeyRotationStatus(pConfCompute, h2dKey, &krStatus)
NVRM: Key rotation is already scheduled for key 0x%x
call to performKeyRotationByKeyPair
performKeyRotationByKeyPair(pGpu, pConfCompute, h2dKey, d2hKey)
confComputeGetKeyRotationStatus(pConfCompute, kernH2DKey, &kernKRStatus)
NVRM: Key rotation pending on h2d kern key = 0x%x
NVRM: no kernel key rotation pending
confComputeGetKeyRotationStatus(pConfCompute, h2dKey, &status)
status == KEY_ROTATION_STATUS_PENDING_TIMER_SUSPENDED
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, h2dKey, &h2dKeyIndex)
tmrEventScheduleRelSec(pTmr, pConfCompute->keyRotationTimeoutInfo[h2dKeyIndex].pTimer, pConfCompute->keyRotationTimeoutInfo[h2dKeyIndex].timeLeftNs/(1000 * 1000 * 1000))
NVRM: Started key rotation timeout timer for key 0x%x with rel time left = %lldns
status != KEY_ROTATION_STATUS_PENDING_TIMER_SUSPENDED
status == KEY_ROTATION_STATUS_PENDING
(pConfCompute->keyRotationTimeoutInfo[h2dKeyIndex].pTimer != NULL)
call to tmrEventOnList_IMPL
call to tmrEventTimeUntilNextCallback_IMPL
tmrEventTimeUntilNextCallback(pTmr, pConfCompute->keyRotationTimeoutInfo[h2dKeyIndex].pTimer, &timeNs)
timeLeftNs
NVRM: Stopped key rotation timeout timer for key 0x%x with time left = %lldns
NVRM: Timeout timer never started on key 0x%x. leave it as is at %lldns
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_PENDING_TIMER_SUSPENDED)
pStatsInfo
(attackerAdvantage >= offset) && (attackerAdvantage <= (offset + NV_ARRAY_ELEMENTS(keyRotationUpperThreshold) - 1))
NVRM: Setting key rotation attacker advantage to %llu.
NVRM: Key rotation lower threshold is %llu and upper threshold is %llu.
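The assert string `(attackerAdvantage >= offset) && (attackerAdvantage <= (offset + NV_ARRAY_ELEMENTS(keyRotationUpperThreshold) - 1))` implies the attacker-advantage value indexes a table of per-advantage thresholds, offset by the smallest supported advantage. The sketch below only demonstrates that bounds-check-then-index pattern; the table contents, the offset value, and the function name are invented for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define ARRAY_ELEMENTS(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical threshold table: higher attacker advantage allows fewer
 * operations before rotation. Values are illustrative only. */
static const uint64_t keyRotationUpperThreshold[] = {
    1ull << 30, 1ull << 28, 1ull << 26, 1ull << 24,
};

static int lookupUpperThreshold(uint64_t attackerAdvantage, uint64_t offset,
                                uint64_t *pThreshold)
{
    /* Same shape as the assert string: reject out-of-range advantages
     * before using (attackerAdvantage - offset) as a table index. */
    if (!((attackerAdvantage >= offset) &&
          (attackerAdvantage <=
           offset + ARRAY_ELEMENTS(keyRotationUpperThreshold) - 1)))
        return -1;

    *pThreshold = keyRotationUpperThreshold[attackerAdvantage - offset];
    return 0;
}
```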
call to confComputeGetKeyPairByChannel_DISPATCH
confComputeGetKeyPairByChannel_HAL(pGpu, pConfCompute, pKernelChannel, &h2dKey, &d2hKey)
pConfCompute->keyRotationState[h2dIndex] == pConfCompute->keyRotationState[d2hIndex]
pWorkItemInfo
pWorkItemInfo != NULL
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_IN_PROGRESS)
*pWorkItemInfo
osQueueWorkItem(pGpu, performKeyRotation_WORKITEM, (void *)pWorkItemInfo, (OsQueueWorkItemFlags){ .bLockSema = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE, .bLockGpus = NV_TRUE})
call to performKeyRotation_WORKITEM
(state == KEY_ROTATION_STATUS_PENDING) || (state == KEY_ROTATION_STATUS_PENDING_TIMER_SUSPENDED)
call to kchannelIsDisabledForKeyRotation
NVRM: channel 0x%08x (hChannel 0x%x) was NOT disabled for key rotation, can't start KR yet
bIdle
NVRM: scheduling KR for h2d key = 0x%x
confComputePerformKeyRotation(pGpu, pConfCompute, h2dKey, d2hKey, NV_FALSE)
call to confComputeUpdateSecrets_DISPATCH
confComputeUpdateSecrets_HAL(pConfCompute, h2dKey)
kchannelUpdateNotifierMem(pKernelChannel, NV_CHANNELGPFIFO_NOTIFICATION_TYPE_KEY_ROTATION_STATUS, pConfCompute->keyRotationCount[h2dIndex] - 1, 0, (NvU32)KEY_ROTATION_STATUS_IDLE)
kchannelUpdateNotifierMem(pKernelChannel, NV_CHANNELGPFIFO_NOTIFICATION_TYPE_KEY_ROTATION_STATUS, pConfCompute->keyRotationCount[h2dIndex], 0, (NvU32)KEY_ROTATION_STATUS_IDLE)
NVRM: channel 0x%08x (hChannel 0x%x) was disabled for key rotation, writing notifier with KEY_ROTATION_STATUS_IDLE
call to kchannelDisableForKeyRotation
call to kchannelEnableAfterKeyRotation
*pEncStatsBuf
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_IDLE)
NVRM: No kernel key rotation pending, restarting suspended user key rotation timers
NVRM: Restarting timeout timer for user key 0x%x
call to confComputeStartKeyRotationTimer_IMPL
confComputeStartKeyRotationTimer(pGpu, pConfCompute, userH2DKey)
NVRM: Key rotation successful for key IDs 0x%x and 0x%x.
NVRM: This keypair has been rotated %u times.
NVRM: Failed to perform key rotation with status = 0x%x for h2dKey = 0x%x
confComputeSetKeyRotationStatus(pConfCompute, h2dKey, KEY_ROTATION_STATUS_FAILED_ROTATION)
confComputeGetKeySlotFromGlobalKeyId(pConfCompute, h2dKey, &h2dIndex) == NV_OK
confComputeInitChannelIterForKey(pGpu, pConfCompute, h2dKey, &iter) == NV_OK
kchannelUpdateNotifierMem(pKernelChannel, NV_CHANNELGPFIFO_NOTIFICATION_TYPE_KEY_ROTATION_STATUS, pConfCompute->keyRotationCount[h2dIndex], 0, (NvU32)pWorkItemInfo->status)
NVRM: channel 0x%08x (hChannel 0x%x) was NOT disabled for key rotation, writing notifier with val 0x%x
NVRM: Control call to RC non-idle channels failed with status 0x%x, can't perform key rotation for h2dKey = 0x%x
NVRM: Unexpected key rotation status 0x%x
NVRM: KR failed with status 0x%x
call to confComputeTriggerKeyRotation_DISPATCH
NVRM: Key rotation callback failed with status 0x%x

src/kernel/gpu/dbgbuffer.c
call to nvdFreeDebugBuffer_IMPL
call to nvdAllocDebugBuffer_IMPL
NVRM: DebugBuffer object could not be allocated.
*src/kernel/gpu/dce_client/dce_client.c*NVRM: Destroy DCE Client State Called **src/kernel/gpu/dce_client/dce_client.c**NVRM: Destroy DCE Client State Called *NVRM: dceclientStateUnload_IMPL Called **NVRM: dceclientStateUnload_IMPL Called *call to dceclientDeinitRpcInfra_IMPL*pDceClient*call to dceclientInitRpcInfra_IMPL*NVRM: dceclientDestruct_IMPL Called **NVRM: dceclientDestruct_IMPL Called *NVRM: dceclientConstructEngine_IMPL Called **NVRM: dceclientConstructEngine_IMPL Called **pPrivateContext*src/kernel/gpu/dce_client/dce_client_rpc.c*NVRM: NVRM_RPC_DCE RPC to trigger %s called **src/kernel/gpu/dce_client/dce_client_rpc.c**NVRM: NVRM_RPC_DCE RPC to trigger %s called **RmInitAdapter**RmShutdownAdapter*call to _dceRpcGetMessageData*msg_data**msg_data*rpc_params**rpc_params*call to rpcWriteCommonHeader*NVRM: NVRM_RPC_DCE: Writing RPC Header Failed [0x%x] **NVRM: NVRM_RPC_DCE: Writing RPC Header Failed [0x%x] *call to _dceRpcIssueAndWait*call to _dceRpcGetRpcResult*NVRM: NVRM_RPC_DCE: Failed RM init/deinit result 0x%x: **NVRM: NVRM_RPC_DCE: Failed RM init/deinit result 0x%x: *pDceClientrm*hInternalClient*pGbvParams*pGbvParams != NULL**pGbvParams != NULL*call to rpcRmApiControl_dce**pGbvParams*NVRM: Possibly incompatible DCE RM version! CPU RM: %s DCE RM: %s **NVRM: Possibly incompatible DCE RM version! 
CPU RM: %s DCE RM: %s *NVRM: NVRM_RPC_DCE Dup Object RPC Called for hClient: 0x%x **NVRM: NVRM_RPC_DCE Dup Object RPC Called for hClient: 0x%x *dup_object_v*NVRM: NVRM_RPC_DCE: Failed RM Dup Object result 0x%x: **NVRM: NVRM_RPC_DCE: Failed RM Dup Object result 0x%x: *NVRM: NVRM_RPC_DCE: RPC for DUP OBJECT Successful **NVRM: NVRM_RPC_DCE: RPC for DUP OBJECT Successful *NVRM: NVRM_RPC_DCE Free RPC Called for hClient: 0x%x **NVRM: NVRM_RPC_DCE Free RPC Called for hClient: 0x%x *free_v*NVRM: NVRM_RPC_DCE: Failed RM Free Object result 0x%x: **NVRM: NVRM_RPC_DCE: Failed RM Free Object result 0x%x: *NVRM: NVRM_RPC_DCE: RPC for Free Successful **NVRM: NVRM_RPC_DCE: RPC for Free Successful *NVRM: NVRM_RPC_DCE: Prepare and send RmApiAlloc RPC **NVRM: NVRM_RPC_DCE: Prepare and send RmApiAlloc RPC *call to rmapiGetClassAllocParamSize*rmapiGetClassAllocParamSize(¶msSize, pAllocParams, &bNullAllowed, hClass)**rmapiGetClassAllocParamSize(¶msSize, pAllocParams, &bNullAllowed, hClass)*NVRM: NVRM_RPC_DCE: NULL allocation params not allowed for class 0x%x **NVRM: NVRM_RPC_DCE: NULL allocation params not allowed for class 0x%x *NVRM: NVRM_RPC_DCE: Failed RM Alloc Object 0x%x result 0x%x: %s **NVRM: NVRM_RPC_DCE: Failed RM Alloc Object 0x%x result 0x%x: %s *NVRM: NVRM_RPC_DCE: RPC for GSP RM Alloc Successful **NVRM: NVRM_RPC_DCE: RPC for GSP RM Alloc Successful *NVRM: NVRM_RPC_DCE : Prepare and send RmApiControl RPC [cmd:0x%x] **NVRM: NVRM_RPC_DCE : Prepare and send RmApiControl RPC [cmd:0x%x] *NVRM: NVRM_RPC_DCE: Failed RM ctrl call cmd:0x%x result 0x%x: %s **NVRM: NVRM_RPC_DCE: Failed RM ctrl call cmd:0x%x result 0x%x: %s *NVRM: NVRM_RPC_DCE: RPC for GSP RM Control Successful **NVRM: NVRM_RPC_DCE: RPC for GSP RM Control Successful *NVRM: dceclientHandleAsyncRpcCallback called **NVRM: dceclientHandleAsyncRpcCallback called *interfaceType == DCE_CLIENT_RM_IPC_TYPE_EVENT**interfaceType == DCE_CLIENT_RM_IPC_TYPE_EVENT*pGpu != NULL && data != NULL**pGpu != NULL && data != 
NULL*msg_hdr**msg_hdr**rpc_message_data*rpc_msg_data**rpc_msg_data**eventData*call to CliGetEventInfo*CliGetEventInfo(rpc_params->hClient, rpc_params->hEvent, &pEvent)**CliGetEventInfo(rpc_params->hClient, rpc_params->hEvent, &pEvent)*pNotifyList**pNotifyList*pNotifyList != NULL**pNotifyList != NULL*call to osNotifyEvent*NVRM: osNotifyEvent failed with status: %x **NVRM: osNotifyEvent failed with status: %x *pNotifyEvent != NULL**pNotifyEvent != NULL*call to kdispInvokeRgLineCallback_KERNEL*call to kdispInvokeDisplayModesetCallback_DISPATCH*NVRM: Unexpected RPC function 0x%x **NVRM: Unexpected RPC function 0x%x *Unexpected RPC function**Unexpected RPC function*call to _dceRpcGetMessageHeader**message_header*call to _dceclientrmPrintHdr*call to dceclientSendRpc_IMPL*NVRM: NVRM_RPC_DCE: Error while issuing RPC [0x%x] **NVRM: NVRM_RPC_DCE: Error while issuing RPC [0x%x] *NVRM: NVRM_RPC_DCE : [msg-buf:0x%p] header_version = 0x%x signature = 0x%x length = 0x%x function = 0x%x rpc_result = 0x%x **NVRM: NVRM_RPC_DCE : [msg-buf:0x%p] header_version = 0x%x signature = 0x%x length = 0x%x function = 0x%x rpc_result = 0x%x *NVRM: Send Message Called **NVRM: Send Message Called *NVRM: Receive Message Called **NVRM: Receive Message Called **clientId*NVRM: Send RPC Called, clientid used 0x%x **NVRM: Send RPC Called, clientid used 0x%x *call to osTegraDceClientIpcSendRecv*NVRM: Send RPC failed for clientId %u error %u **NVRM: Send RPC failed for clientId %u error %u *NVRM: Free RPC Infra Called **NVRM: Free RPC Infra Called *call to osTegraDceUnregisterIpcClient**message_buffer**pRpc*rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_INIT, &gpusLockedMask)**rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_INIT, &gpusLockedMask)*call to rpcDceRmInit_dce*NVRM: Init RPC Infra Called **NVRM: Init RPC Infra Called *call to initRpcObject*NVRM: initRpcObject failed **NVRM: 
initRpcObject failed *maxRpcSize*NVRM: Cannot allocate memory for message_buffer **NVRM: Cannot allocate memory for message_buffer *call to osTegraDceRegisterIpcClient*NVRM: Register dce ipc client failed for DCE_CLIENT_RM_IPC_TYPE_SYNC error 0x%x **NVRM: Register dce ipc client failed for DCE_CLIENT_RM_IPC_TYPE_SYNC error 0x%x *NVRM: Registered dce ipc client DCE_CLIENT_RM_IPC_TYPE_SYNC handle: 0x%x **NVRM: Registered dce ipc client DCE_CLIENT_RM_IPC_TYPE_SYNC handle: 0x%x *NVRM: Register dce ipc client failed for DCE_CLIENT_RM_IPC_TYPE_EVENT error 0x%x **NVRM: Register dce ipc client failed for DCE_CLIENT_RM_IPC_TYPE_EVENT error 0x%x *NVRM: Register dce ipc client DCE_CLIENT_RM_IPC_TYPE_EVENT: 0x%x **NVRM: Register dce ipc client DCE_CLIENT_RM_IPC_TYPE_EVENT: 0x%x *call to _Class5080GetDeferredApiInfo*pCliDeferredApi*pDeferredApiInfo**pDeferredApiInfo*pDeferredApiParams**pDeferredApiParams**pDeferredApi*bIsCtrlCall*pClientVA*src/kernel/gpu/deferred_api.c*NVRM: Unable to find target gpu from hClient(%x), hDevice(%x) **src/kernel/gpu/deferred_api.c**NVRM: Unable to find target gpu from hClient(%x), hDevice(%x) *pTgtGpu**pTgtGpu*api_bundle*InvalidateTlb*call to _Class5080UpdateTLBFlushState*NVRM: Unknown or Unimplemented Command %x **NVRM: Unknown or Unimplemented Command %x *rmCtrlParams***pParams*pCookie**pCookie**pLockInfo*bDeferredApi*lockInfo*call to subdeviceGetByDeviceAndGpu_IMPL*call to resControlLookup_IMPL*call to serverControl_InitCookie*callContext**pResourceRef**pServer*call to resservSwapTlsCallContext*resservSwapTlsCallContext(&pOldContext, &callContext)**resservSwapTlsCallContext(&pOldContext, &callContext)*call to serverControl_Prologue*call to resservRestoreTlsCallContext*pOldContext*resservRestoreTlsCallContext(pOldContext)**resservRestoreTlsCallContext(pOldContext)*call to serverControl_Epilogue*call to _Class5080DelDeferredApi*call to NV_RM_RPC_REMOVE_DEFERRED_API*pRemoveApi*call to _Class5080AddDeferredApi*call to 
NV_RM_RPC_DEFERRED_API_CONTROL*serverGetClientUnderLock(&g_resServ, pParams->hClient, &pClient)**serverGetClientUnderLock(&g_resServ, pParams->hClient, &pClient)*clientGetResourceRef(pClient, pParams->hUserdMemory, &pResourceRef)**clientGetResourceRef(pClient, pParams->hUserdMemory, &pResourceRef)*pResourceRef->pParentRef != NULL**pResourceRef->pParentRef != NULL*call to memRegisterWithGsp_IMPL*memRegisterWithGsp(pGpu, pClient, pResourceRef->pParentRef->hResource, pResourceRef->hResource)**memRegisterWithGsp(pGpu, pClient, pResourceRef->pParentRef->hResource, pResourceRef->hResource)*pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pDeferredApiObj), RES_GET_HANDLE(pDeferredApiObj), NV5080_CTRL_CMD_DEFERRED_API_INTERNAL, pDeferredApi, sizeof(*pDeferredApi))**pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pDeferredApiObj), RES_GET_HANDLE(pDeferredApiObj), NV5080_CTRL_CMD_DEFERRED_API_INTERNAL, pDeferredApi, sizeof(*pDeferredApi))*DeferredApiList**pCliDeferredApi*call to btreeEnumNext*bNotifyTrigger*pCallContext != NULL**pCallContext != NULL*Handle***pDeferredApiInfo*pbBroadcast*call to gpuGetByRef**ppGpuGrp*call to deviceGetByInstance_IMPL**ppDevice*call to deviceRemoveFromClientShare_IMPL*call to deviceKPerfCudaLimitCliDisable*src/kernel/gpu/device.c*NVRM: Disable of Cuda limit activation failed**src/kernel/gpu/device.c**NVRM: Disable of Cuda limit activation failed*call to gpuacctStopGpuAccounting_IMPL*call to gpumgrGetPrimaryForDevice**pKernelHostVgpuDevice*call to gpuresSetGpu_IMPL*call to deviceSetClientShare_IMPL*call to gpuacctStartGpuAccounting_IMPL*NVRM: gpuacctStartGpuAccounting() failed for procId : %d and SubProcessID : %d. Ignoring the failure and continuing. **NVRM: gpuacctStartGpuAccounting() failed for procId : %d and SubProcessID : %d. Ignoring the failure and continuing. 
*allocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE**allocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE*call to gpuresInternalControlForward_IMPL*pParams->hClient == RES_GET_CLIENT_HANDLE(pDevice)**pParams->hClient == RES_GET_CLIENT_HANDLE(pDevice)*pParams->hObject == RES_GET_HANDLE(pDevice)**pParams->hObject == RES_GET_HANDLE(pDevice)*pParams->hParent == RES_GET_PARENT_HANDLE(pDevice)**pParams->hParent == RES_GET_PARENT_HANDLE(pDevice)**pGpuGrp*call to gpuresControl_IMPL*NVRM: type: device **NVRM: type: device *call to _deviceTeardownRef*call to _deviceTeardown*pDeviceTest*bClientInUse*pNv0080AllocParams*call to gpumgrIsDeviceInstanceValid*call to gpumgrIsDeviceEnabled*call to _deviceInit*call to osIsGpuAccessible*call to krcTestAllowAlloc_IMPL*physicalAllocFlags*root_alloc_params*processID*root_alloc_params.processID == osGetCurrentProcess()**root_alloc_params.processID == osGetCurrentProcess()***pOsPidInfo*device_alloc_params*vaSpaceSize*rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_DEVICE, GPU_LOCK_FLAGS_SAFE_LOCK_UPGRADE, RM_LOCK_MODULES_GPU, &gpuMask)*src/kernel/gpu/device_ctrl.c**rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_DEVICE, GPU_LOCK_FLAGS_SAFE_LOCK_UPGRADE, RM_LOCK_MODULES_GPU, &gpuMask)**src/kernel/gpu/device_ctrl.c*call to gpuIsVgxBranded*isVgx*call to gpuSetSparseTextureComputeMode_IMPL*pModeParams*call to gpuGetSparseTextureComputeMode_IMPL*call to kvgpumgrIsMigTimeslicingModeEnabled*call to kvgpuMgrGetHeterogeneousMode*kvgpuMgrGetHeterogeneousMode(pGpu, pParams->gpuInstanceId, &pParams->bHeterogeneousMode)**kvgpuMgrGetHeterogeneousMode(pGpu, pParams->gpuInstanceId, &pParams->bHeterogeneousMode)*bHeterogeneousMode*NVRM: Call not supported with SMC enabled **NVRM: Call not supported with SMC enabled *pPgpuInfo**pPgpuInfo*call to kvgpuMgrGetVgpuPlacementInfo**pKernelVgpuTypePlacementInfo*pKernelVgpuTypePlacementInfo != NULL**pKernelVgpuTypePlacementInfo != NULL*NVRM: No Graphic Instance created **NVRM: No Graphic 
Instance created *NVRM: GPU does not support heterogeneous vGPU mode *assignedSwizzIdVgpuCount*vgpuCount*NVRM: Failed to set heterogeneous vGPU mode as vGPU instance is active *call to kvgpumgrSetVgpuType*call to kmigmgrGetGPUInstanceInfo_IMPL*kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, pParams->gpuInstanceId, &pKernelMIGGpuInstance)*NVRM: Failed to set heterogeneous vGPU mode in GSP RM. status: 0x%x *call to kvgpuMgrSetHeterogeneousModePerGI*kvgpuMgrSetHeterogeneousModePerGI(pGpu, pParams->gpuInstanceId, pParams->bHeterogeneousMode)*PDB_PROP_GPU_IS_VGPU_HETEROGENEOUS_MODE*pVgpuTypeInfo*pVgpuTypeSupportedPlacementInfo*vgpuTypePlacementInfo*pVgpuTypePlacementInfo*vgpuInstanceSupportedPlacementInfo*pVgpuInstanceSupportedPlacementInfo*vgpuInstancePlacementInfo*pVgpuInstancePlacementInfo*creatablePlacementId*call to kvgpumgrCheckHomogeneousPlacementSupported*call to subdeviceGetByInstance_IMPL*call to gpuGetSriovCaps_DISPATCH*isGridBuild*virtualizationMode*NVRM: invalid virtualization Mode: %x. Returning NONE! 
*NVRM: Virtualization Mode: %x **NVRM: Virtualization Mode: %x *swStatePersistence*pSubDeviceCountParams*pClassListParams*call to gpuGetClassList_IMPL*call to serverutilGetResourceRefWithParent*src/kernel/gpu/device_share.c*NVRM: Invalid object handle 0x%x pEntry %p *pVaSpaceApi**src/kernel/gpu/device_share.c**NVRM: Invalid object handle 0x%x pEntry %p **pVaSpaceApi*NVRM: device already has an Associated VASPace **NVRM: device already has an Associated VASPace **pVASpace*call to vaspaceIncRefCnt_IMPL*call to deviceInitClientShare*call to gpugrpGetGlobalVASpace_IMPL*pDevice->vaStartInternal**pDevice->vaStartInternal*pDevice->vaLimitInternal**pDevice->vaLimitInternal*!pDevice->vaStartInternal**!pDevice->vaStartInternal*!pDevice->vaLimitInternal**!pDevice->vaLimitInternal*call to kgmmuGetVaspaceClass_DISPATCH*call to memmgrIsPmaInitialized*call to memmgrAreClientPageTablesPmaManaged*pClientShare*pShareDevice*pDpmodesetData*linkFreqHz*src/kernel/gpu/disp/arch/v02/kern_disp_0204.c*NVRM: One of linkFreq (%d Hz), pDpmodesetData->laneCount (%d Hz) or PClkFreq (%lld Hz) came in as zero. Report issue to client (DD) **src/kernel/gpu/disp/arch/v02/kern_disp_0204.c**NVRM: One of linkFreq (%d Hz), pDpmodesetData->laneCount (%d Hz) or PClkFreq (%lld Hz) came in as zero. 
Report issue to client (DD) *linkFreqHz && pDpmodesetData->laneCount && pDpmodesetData->PClkFreqHz**linkFreqHz && pDpmodesetData->laneCount && pDpmodesetData->PClkFreqHz*num_symbols_per_line*NVRM: watermark greater than number of symbols in the line **NVRM: watermark greater than number of symbols in the line *PixelSteeringBits*NumBlankingLinkClocks*hActive >= 60**hActive >= 60*(pDpmodesetData->SetRasterSizeWidth - hActive) >= minHBlank**(pDpmodesetData->SetRasterSizeWidth - hActive) >= minHBlank*hblank_symbols*NVRM: Minimum HBlank required is %d **NVRM: Minimum HBlank required is %d *NVRM: [IN]: PixelClockHz:%lld PixelDepth:%d LinkBw:%d LaneCount:%d TuSize: %d DSCEnable:%s [OUT]:WaterMark:%d [OUT]:VBlankSymbols:%d HBlankSymbols:%d **NVRM: [IN]: PixelClockHz:%lld PixelDepth:%d LinkBw:%d LaneCount:%d TuSize: %d DSCEnable:%s [OUT]:WaterMark:%d [OUT]:VBlankSymbols:%d HBlankSymbols:%d *NV_TRUE**NV_TRUE*call to gpumgrGetGpuLockAndDrPorts*pinSetIn*pinSetOut*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_DISP_PINSETS_TO_LOCKPINS, ¶ms, sizeof(params))*src/kernel/gpu/disp/arch/v02/kern_disp_0207.c**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_DISP_PINSETS_TO_LOCKPINS, ¶ms, sizeof(params))**src/kernel/gpu/disp/arch/v02/kern_disp_0207.c*PDB_PROP_KDISP_IN_AWAKEN_INTR*intrCtrlDisp*call to kdispReadAwakenChannelNumMask_DISPATCH*bAwakenIntrPending*call to _kdispHandleAwakenChnMask*src/kernel/gpu/disp/arch/v03/kern_disp_0300.c**bAwakenIntrPending**src/kernel/gpu/disp/arch/v03/kern_disp_0300.c*call to _kdispResetAwakenChannelNumMask*bEventFound*call to kdispGetChannelNum_DISPATCH*pClientChannelTable*pKernelDisplay->pClientChannelTable != NULL**pKernelDisplay->pClientChannelTable != NULL*call to dispchnGetByHandle_IMPL*call to notifyEvents*NVRM: seeing an awaken in channel %d without an associated awaken event **NVRM: seeing an awaken in channel %d without an associated awaken 
event *channelNum*writeIntr*NVRM: invalid channel class passed! **NVRM: invalid channel class passed! *pAwakenChannelNumMask**pAwakenChannelNumMask*NVRM: invalid channel class passed **NVRM: invalid channel class passed *pbTargetAperture*dsiFliplock*feFliplock*call to kdispGetNumHeads*call to gpuIsClassSupported_IMPL*NVRM: class %x not supported **NVRM: class %x not supported *dispChannelNum < NV_UDISP_FE_CHN_ASSY_BASEADR__SIZE_1**dispChannelNum < NV_UDISP_FE_CHN_ASSY_BASEADR__SIZE_1*call to kdispGetBaseOffset_DISPATCH*pStaticInfo != NULL**pStaticInfo != NULL*NVRM: Unknown channel class %x **NVRM: Unknown channel class %x *call to _kdispGetChnStatusRegs*src/kernel/gpu/disp/arch/v03/kern_disp_channel_0300.c**src/kernel/gpu/disp/arch/v03/kern_disp_channel_0300.c*chnStatus*NVRM: timeout waiting for METHOD_EXEC to IDLE **NVRM: timeout waiting for METHOD_EXEC to IDLE *channelCtl*NVRM: timeout waiting for channel state to UNCONNECTED **NVRM: timeout waiting for channel state to UNCONNECTED *chnStatusRegRead*hwChannelState*channelState*NVRM: timeout! current Channel state = 0x%x **NVRM: timeout! 
current Channel state = 0x%x *call to _kdispGetChnCtlRegs*call to kdispApplyChannelConnectDisconnect_DISPATCH*pKernelDisplay == GPU_GET_KERNEL_DISPLAY(pParentGpu)*src/kernel/gpu/disp/arch/v04/kern_disp_0400.c**pKernelDisplay == GPU_GET_KERNEL_DISPLAY(pParentGpu)**src/kernel/gpu/disp/arch/v04/kern_disp_0400.c*rmGpuGroupLockAcquire(pParentGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_RPC, &parentGpuLockMask)**rmGpuGroupLockAcquire(pParentGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_RPC, &parentGpuLockMask)*rmGpuGroupLockAcquire(pChildGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_RPC, &childGpuLockMask)**rmGpuGroupLockAcquire(pChildGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_RPC, &childGpuLockMask)*call to setSliLinkGpioSwControl*setSliLinkGpioSwControl(pParentGpu, parentPinSet, &parentGpioFunction, &parentGpioPin, &parentGpioDirection)**setSliLinkGpioSwControl(pParentGpu, parentPinSet, &parentGpioFunction, &parentGpioPin, &parentGpioDirection)*setSliLinkGpioSwControl(pChildGpu, childPinSet, &childGpioFunction, &childGpioPin, &childGpioDirection)**setSliLinkGpioSwControl(pChildGpu, childPinSet, &childGpioFunction, &childGpioPin, &childGpioDirection)*call to programGpioDirection*programGpioDirection(pParentGpu, parentGpioPin, NV_FALSE)**programGpioDirection(pParentGpu, parentGpioPin, NV_FALSE)*programGpioDirection(pChildGpu, childGpioPin, NV_TRUE)**programGpioDirection(pChildGpu, childGpioPin, NV_TRUE)*call to programGpioOutput*programGpioOutput(pParentGpu, parentGpioPin, 1)**programGpioOutput(pParentGpu, parentGpioPin, 1)*call to readGpioInput*readGpioInput(pChildGpu, childGpioPin, &value)**readGpioInput(pChildGpu, childGpioPin, &value)*programGpioOutput(pParentGpu, parentGpioPin, 0)**programGpioOutput(pParentGpu, parentGpioPin, 0)*programGpioDirection(pParentGpu, parentGpioPin, NV_TRUE)**programGpioDirection(pParentGpu, 
parentGpioPin, NV_TRUE)*programGpioDirection(pChildGpu, childGpioPin, NV_FALSE)**programGpioDirection(pChildGpu, childGpioPin, NV_FALSE)*programGpioOutput(pChildGpu, childGpioPin, 1)**programGpioOutput(pChildGpu, childGpioPin, 1)*readGpioInput(pParentGpu, parentGpioPin, &value)**readGpioInput(pParentGpu, parentGpioPin, &value)*programGpioOutput(pChildGpu, childGpioPin, 0)**programGpioOutput(pChildGpu, childGpioPin, 0)*programGpioDirection(pParentGpu, parentGpioPin, parentGpioDirection)**programGpioDirection(pParentGpu, parentGpioPin, parentGpioDirection)*call to activateHwFunction*activateHwFunction(pParentGpu, parentGpioPin, parentGpioFunction)**activateHwFunction(pParentGpu, parentGpioPin, parentGpioFunction)*programGpioDirection(pChildGpu, childGpioPin, childGpioDirection)**programGpioDirection(pChildGpu, childGpioPin, childGpioDirection)*activateHwFunction(pChildGpu, childGpioPin, childGpioFunction)**activateHwFunction(pChildGpu, childGpioPin, childGpioFunction)*call to kdispReadPendingWinSemIntr_DISPATCH*call to osDispService*call to kdispNotifyEvent_IMPL*src/kernel/gpu/disp/arch/v04/kern_disp_0402.c*NVRM: %s requests ISO BW = %u KBPS, floor BW = %u KBPS **src/kernel/gpu/disp/arch/v04/kern_disp_0402.c**NVRM: %s requests ISO BW = %u KBPS, floor BW = %u KBPS *RM**RM*Ext client**Ext client*Unknown client**Unknown client*NVRM: Bad iccBwClient value (%u) **NVRM: Bad iccBwClient value (%u) *minRequiredIsoBandwidthKBPS <= clientBwValues[DISPLAY_ICC_BW_CLIENT_EXT].minRequiredIsoBandwidthKBPS**minRequiredIsoBandwidthKBPS <= clientBwValues[DISPLAY_ICC_BW_CLIENT_EXT].minRequiredIsoBandwidthKBPS*newArbBwValues*NVRM: Sending request to icc_set_bw: ISO BW = %u KBPS, floor BW = %u KBPS **NVRM: Sending request to icc_set_bw: ISO BW = %u KBPS, floor BW = %u KBPS *call to osTegraAllocateDisplayBandwidth*NVRM: Allocation request returns: %s (0x%08X) **NVRM: Allocation request returns: %s (0x%08X) **pRgVblankCb*linkFreqHz && pDpModesetData->laneCount && 
pDpModesetData->PClkFreqHz*src/kernel/gpu/disp/arch/v05/kern_disp_0501.c**linkFreqHz && pDpModesetData->laneCount && pDpModesetData->PClkFreqHz**src/kernel/gpu/disp/arch/v05/kern_disp_0501.c*twoChannelAudioSymbols*eightChannelAudioSymbols*call to _convertLinkRateToDataRate*linkDataRate*linkTotalDataRate*effectiveBppxScaler*call to _calcEffectiveBppxScalerNonDsc*call to _calcDpMinHBlankMST*call to _getDpAudioSymbolMST*call to _getDpAudioSymbolSST*call to _calcPClkFactorAndRgPacketMode*pclkFactor*rgPacketMode*msaSym*call to _calcWatermark8b10bSST*dataRateHz*bppScaler*ratio_x1000*(PClkFactor != NULL && rgPacketMode != NULL)**(PClkFactor != NULL && rgPacketMode != NULL)*cyclesPerPacket*cyclesPerPacketInc*packetsPerLine*call to _getSdpSymbolsMST*logicalLanes*call to kdispGetDisplayCapsBaseAndSize_DISPATCH*pushBufferParams*hclass*call to ctxdmaGetByHandle_IMPL*src/kernel/gpu/disp/disp_channel.c*NVRM: disp channel[0x%x] didn't have valid ctxdma 0x%x **src/kernel/gpu/disp/disp_channel.c**NVRM: disp channel[0x%x] didn't have valid ctxdma 0x%x *pBufferContextDma*physicalAddr*call to kdispGetPBTargetAperture_DISPATCH**pInstMem*call to instmemUnbindContextDmaFromAllChannels_IMPL*call to instmemUnbindDispChannelContextDmas_IMPL*dispchnGetByHandle(pClient, hChannel, &pDispChannel)**dispchnGetByHandle(pClient, hChannel, &pDispChannel)*pContextDma->pDevice == GPU_RES_GET_DEVICE(pDispChannel)**pContextDma->pDevice == GPU_RES_GET_DEVICE(pDispChannel)*call to instmemUnbindContextDma_IMPL*NVRM: ISO ctx dmas must be 4K aligned. PteAdjust = 0x%x **NVRM: ISO ctx dmas must be 4K aligned. PteAdjust = 0x%x *call to instmemBindContextDma_IMPL*gpuIsGpuFullPower(pGpu)**gpuIsGpuFullPower(pGpu)**pTmpGpu*call to dispobjGetByDevice_IMPL*dispIt**pTmpDispChannel*NVRM: Failed to free satellite DispChannel 0x%x! **NVRM: Failed to free satellite DispChannel 0x%x! *NVRM: Failed to reset clientChannelTable! **NVRM: Failed to reset clientChannelTable! 
*call to kdispUnbindUnmapDispChannel_IMPL*call to kdispReleaseDispChannelHw_56cd7a*call to dispchnUnbindAllCtx_IMPL*call to osUnmapGPU*call to osMapGPU*NVRM: disp channel[0x%x] mapping failed. Return status = 0x%x *call to kdispGetDisplayChannelUserBaseAndSize_DISPATCH*NVRM: disp channel grab failed because of bad display parent 0x%x *call to dispchnParseAllocParams*NVRM: Information supplied for handle 0x%x doesn't match that in RM's client DB *call to kdispGetIntChnClsForHwCls_IMPL*call to kdispSetPushBufferParamsToPhysical_IMPL*call to kdispAcquireDispChannelHw_56cd7a*pDmaChannelAllocParams***pControl*pPioChannelAllocParams*pDispObject != NULL*NVRM: Failure allocating display class 0x%08x: Only root(admin)/kernel clients are allowed *call to osAssertFailed*NVRM: Unsupported class in **pDispObject*bIsDma*DispClass*InstanceNumber*NVRM: disp channel[0x%x] alloc failed. Return status = 0x%x *call to dispchnSetRegBaseOffsetAndSize_IMPL*call to kdispMapDispChannel_IMPL*NVRM: kdispGetChannelNum_HAL failed! *NVRM: Mapped hclient: %p hchannel: 0x%x channelNum: 0x%x **pDmaChannelAllocParams*NVRM: Error notifier parameter is not used in Display channel allocation. 
**pPioChannelAllocParams*call to dispapiSetUnicastAndSynchronize_KERNEL*call to _dispValidateDDSMuxSupport*src/kernel/gpu/disp/disp_common_ctrl_acpi.c*NVRM: Invalid arguments 0x%x **src/kernel/gpu/disp/disp_common_ctrl_acpi.c**NVRM: Invalid arguments 0x%x *call to _dispDfpGetExternalMuxStatus*NVRM: failed to get external mux status **NVRM: failed to get external mux status *NVRM: failed to get internal mux status **NVRM: failed to get internal mux status *call to pfmFindAcpiId_IMPL*NVRM: acpiId not found for displayId 0x%x **NVRM: acpiId not found for displayId 0x%x *call to osCallACPI_MXDS*NVRM: ACPI call to get mux state failed. **NVRM: ACPI call to get mux state failed. *muxMethodData*acpiIdMuxModeTable**acpiIdMuxModeTable*acpiidIndex*NVRM: ACPI lookup to get mux mode failed. **NVRM: ACPI lookup to get mux mode failed. *call to _dispDfpSwitchExternalMux*NVRM: external mux switch failed 0x%x **NVRM: external mux switch failed 0x%x *NVRM: internal mux switch failed 0x%x **NVRM: internal mux switch failed 0x%x *NVRM: osCallACPI_MXDS failed 0x%x **NVRM: osCallACPI_MXDS failed 0x%x *pAcpiMethodParams*inDataSize*inOutDataSize*inData**inData*NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: Parameter validation failed: outDataSize=%d inDataSize=%ud method = %ud **NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: Parameter validation failed: outDataSize=%d inDataSize=%ud method = %ud **pInOutData***pInOutData*NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: mem alloc failed **NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: mem alloc failed *call to osCallACPI_MXMX*call to osCallACPI_NVHG_GPUON*call to osCallACPI_NVHG_GPUOFF*call to osCallACPI_NVHG_GPUSTA*call to osCallACPI_NVHG_MXDS*call to osCallACPI_MXDM*call to osCallACPI_MXID*call to osCallACPI_LRST*call to osCallACPI_DDC*call to osCallACPI_NVHG_MXMX*call to osCallACPI_NVHG_DOS*call to osCallACPI_NVHG_ROM*call to osCallACPI_NVHG_DCS*call to osCallACPI_DOD*call to 
kdispDsmMxmMxcbExecuteAcpi_92bfc3*dsmCurrentFunc*dsmCurrentSubFunc*call to osCallACPI_OPTM_GPUON*NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: Unrecognized Api Code: 0x%x *NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: Execution failed for method: 0x%x, status=0x%x *NVRM: ERROR: NV0073_CTRL_CMD_SYSTEM_EXECUTE_ACPI_METHOD: output buffer is smaller than expected! *bDoCopyOut*pOs*PDB_PROP_OS_WAIT_FOR_ACPI_SUBSYSTEM*NVRM: ERROR: NV0073_CTRL_SYSTEM_ACPI_SUBSYSTEM_ACTIVATED control call failed. Should be rmapi ctrl call which is failed*mapTable*call to pfmUpdateAcpiIdMapping_IMPL*acpiIdx*pOrigGpu*pCrashLockCounterInfoParams*pKernelHead != NULL*src/kernel/gpu/disp/disp_common_kern_ctrl_minimal.c*call to kheadGetCrashLockCounterV_DISPATCH*counterValueV*NVRM: Crash Lock Counter value fetched from register is : %x *pLoadVCounterInfoParams*call to dispapiValidateRmctrlPriv_IMPL*call to kheadGetLoadVCounter_DISPATCH*counterValue*NVRM: LoadV Counter value fetched from register is : %x *pDpRingBuffer*dpModesetData*dp2LinkBw*bDP2xChannelCoding*bFecEnable*(IS_VALID_DP2_X_LINKBW(dpModesetData.dp2LinkBw) && IS_VALID_LANECOUNT(dpModesetData.laneCount))**(IS_VALID_DP2_X_LINKBW(dpModesetData.dp2LinkBw) && 
Extracted NVRM debug strings, listed in original order and deduplicated (verbatim duplicate copies, assertion-expression strings, and symbol-name residue omitted; source file paths kept as markers):

NVRM: invalid head number!
NVRM: no memory allocated for vblank count

[src/kernel/gpu/disp/disp_object_kern_ctrl_minimal.c]
NVRM: disp Channel not allocated by RM yet!
NVRM: disp Channel not allocated by HW yet!
NVRM: disp channel not in idle state! %u %u

[src/kernel/gpu/disp/disp_objs.c]
NVRM: cmd 0x%x: no event list
NVRM: bad event 0x%x
NVRM: bad subDeviceInstance 0x%x
NVRM: class: 0x%x cmd 0x%x
NVRM: bad class 0x%x

[src/kernel/gpu/disp/disp_sf_user.c]

[src/kernel/gpu/disp/head/kernel_head.c]
NVRM: Unknown state %x requested on head %d.

[src/kernel/gpu/disp/inst_mem/arch/v03/disp_inst_mem_0300.c]
NVRM: Unexpected PTE_KIND value
NVRM: Invalid address space: %d

[src/kernel/gpu/disp/inst_mem/disp_inst_mem.c]
NVRM: The ctx dma (0x%x) has already been bound
NVRM: Failed to alloc space in disp inst mem for ctx dma 0x%x
NVRM: Failed to commit ctx dma (0x%x) to inst mem
NVRM: Instance pointer is invalid!!
NVRM: Display Hash table is FULL!!
NVRM: Unable to allocate hash table.
NVRM: Unable to allocate instance memory heap manager.
NVRM: FB Free Size = 0x%x
NVRM: FB Free Inst Base = 0x%x
NVRM: FB Free Inst Max = 0x%x
NVRM: eheapAlloc failed for instance memory heap manager.

[src/kernel/gpu/disp/kern_disp.c]
NVRM: Kernel RM received "%s of modeset" notification (minRequiredIsoBandwidthKBPS = %u, minRequiredFloorBandwidthKBPS = %u)
NVRM: got RgLineCallback invocation for null callback
Invalid KernelDisplay state for RgLineCallback
NVRM: osTegraSocGetImpImportData returned nvStatus = 0x%08X
NVRM: failed to create memdesc from FB!
NVRM: failed to allocate memory from FB!
NVRM: failed to map memory!
NVRM: NV0073_CTRL_CMD_SYSTEM_MAP_SHARED_DATA RM control failed!
NVRM: Could not allocate KernelDisplayStaticInfo
NVRM: Could not allocate clientChannelTable
NVRM: rmapi control call for brightc state load failed
NVRM: rmapi control call for acpi child device init failed
NVRM: Error in getting valid I2Cport for Extdevice or extdevice doesn't exist
NVRM: gpuExtdevConstruct() failed or not supported
NVRM: Could not allocate memory for pEdidParams
NVRM: Could not allocate memory for pBrightcInfo
NVRM: Failed to read display IP version (FUSE disabled), status=0x%x
Registry keys / PDB properties:
RMInternalPanelDisconnected
PDB_PROP_KDISP_INTERNAL_PANEL_DISCONNECTED
RmEnableAggressiveVblank
PDB_PROP_KDISP_AGGRESSIVE_VBLANK_HANDLING

[src/kernel/gpu/disp/rg_line_callback/rg_line_callback.c]
NVRM: Trying to register/un-register a NULL RG line callback

[src/kernel/gpu/disp/vblank_callback/vblank.c]
NVRM: Changed vblank state on head %d to AVAILABLE
NVRM: headAddVblankCallback: pGpu=%p cb=%p
NVRM: cbproc=%p cbobj=%p p1=0x%x p2=0x%x count=0x%x flags=0x%x offset=0x%x
NVRM: headAddVblankCallback: VblankCallback already on the Callback List
NVRM: VBlankCallback discarded in dacCRTCAddVblankCallback to avoid infinite loop
NVRM: headAddVblankCallback: immediate invocation
NVRM: headAddVblankCallback: Changed vblank stat to ENABLED

[src/kernel/gpu/eng_state.c]
Engine state names: Undefined, Construct, Pre-Init, Init, Pre-Load, Load, Post-Load, Pre-Unload, Unload, Post-Unload, Destroy
NVRM: Engine %s state change: %s -> %s, took %uus
NVRM: Memory usage change: %d allocations, %d bytes

[src/kernel/gpu/external_device/arch/blackwell/kern_gsync_gb100.c]

[src/kernel/gpu/external_device/arch/kepler/kern_external_device_gk104.c]
NVRM: Out of memory.
NVRM: EXTDEV: device is connecting

[src/kernel/gpu/external_device/arch/kepler/kern_gsync_p2060.c]
NVRM: Failed to read RG_DPCA.
NVRM: P2060[%d] disabled interrupt
NVRM: P2060[%d] enabled interrupt
NVRM: P2060[%d] extdevCancelWatchdog.
NVRM: P2060[%d]:%s snapshot reset to _SYNC_LOSS_TRUE _VCXO_NOT_SERVO _STEREO_NOLOCK
NVRM: Failed to disable non-framelock interrupts on gsync GPU.
NVRM: failed to find P2060 connector of the GPU.
NVRM: Failed to enable non-framelock interrupts on gsync GPU.
NVRM: Failed to Program External Stereo Polarity for GPU.
NVRM: P2060 GPU can not be Framelock Master.
NVRM: P2060 GPU is mosaic timing slave. Can not set Framelock Master.
NVRM: P2060[%d] is local master => GAINED SYNC
NVRM: P2060[%d] snapshot timeDiff is %d ms
NVRM: P2060[%d] GAINED SYNC
NVRM: Update P2060[%d] settled from 0x%x ( … ) to 0x%x ( … )
NVRM: Event P2060[%d]: 0x%x (
NVRM: Stereo headsync failed
NVRM: mosaicGroup equaling/extending NV_P2060_MAX_MOSAIC_GROUPS.
NVRM: mosaic slaveGpuCount extending NV_P2060_MAX_MOSAIC_SLAVES.
NVRM: trying to enable mosaicGroup which is already enabled.
NVRM: Failed to write P2060 mosaic slave register.
NVRM: Failed to write P2060 mosaic Source register.
NVRM: trying to disable mosaicGroup which is not enabled.
NVRM: Error occured while computing LSR_MIN_TIME for Swap Barrier
NVRM: Failed to get Gpu location. Can not program Slave.
NVRM: Failed to read ctrl register. Can not program slave.
NVRM: Failed to drive stereo output pin for bug3362661.
NVRM: Failed to read ctrl3 register. Can not program slave.
NVRM: Failed to get connector index for Gpu. Can not program slave.
NVRM: Failed to write SYNC_SRC. Can not program slave.
NVRM: Failed to get Gpu location. Can not program Master.
NVRM: Failed to read Ctrl data. Can not program Master.
NVRM: Failed to get connector index for Gpu. Can not program Master.
NVRM: Failed to read NV_P2060_FPGA. Can not program Master.
NVRM: Failed to write SYNC_SRC. Can not program Master.
NVRM: Failed to write I_AM_MSTR. Can not program Master.
NVRM: Failed to update SwapRdyEnable. Can not program Master.
NVRM: Extdev control call to save/restore GPIO direction is failed!
NVRM: Failed to program SLI slaves. Can not program Master.
NVRM: OptimizeTimingParameters control call has failed!
NVRM: Couldn't raad the register status physical RMs.
NVRM: Cannot get P2060 Gpu location for serving interrupt.
NVRM: Memalloc failed
NVRM: Couldn't find saved GPU entry, check saved i2chandles.
NVRM: failed to read P2060 device Id.
NVRM: Failed to read Ctrl data.
NVRM: Failed to get connector index for Gpu.
NVRM: failed to read P2060 device Id after reset.
NVRM: Failed to free index for new GPU entry.
NVRM: Couldn't find saved GPU entry, check extdevSaveI2cHandles.
NVRM: failed to update P2060 device Id.
NVRM: failed to find P2060 connector.
NVRM: failed to create P2060 watchdog timer event.
NVRM: failed to create P2060 frame count timer event.
NVRM: failed to attach P2060 gsync to gpu.
NVRM: Maximum number of GPUs have been attached
NVRM: Extdev GPIO interrupt enable failed
Registry keys / PDB properties:
QuadroSyncFirmwareRevisionCheckDisable
PDB_PROP_SYS_IS_QSYNC_FW_REVISION_CHECK_DISABLED

[src/kernel/gpu/external_device/arch/pascal/kern_gsync_p2061.c]

[src/kernel/gpu/external_device/gsync.c]
NVRM: Extdev display IDs for the display doesn't exist!
gsyncGetControlTesting*call to gsyncGetStatus*call to gsyncGetStatusSync*call to gsyncSetControlUnsync*call to gsyncSetControlSync*call to gsyncGetControlSync*call to gsyncSetControlParams*call to gsyncGetStatusParams*call to gsyncGetStatusSignals*call to gsyncGetGpuTopology*pGsyncGetVersionParams*pGpus**pGpus*bIsGpuFoundInGsync*pGsyncInfo*gsyncFlags*pGsyncIdsParams*pGsyncIds**pGsyncMgr*call to gsyncSetupNullProvider*PDB_PROP_SYS_IS_GSYNC_ENABLED*NVRM: GPU was not found in gsync object **NVRM: GPU was not found in gsync object *NVRM: gpu is %d already attached! **NVRM: gpu is %d already attached! *NVRM: gsync table full! **NVRM: gsync table full! *pProxyGpu*call to gsyncStartupProvider*gsyncStartupProvider(pGsync, externalDevice)**gsyncStartupProvider(pGsync, externalDevice)*pGsync->gsyncHal.gsyncSetRasterSyncDecodeMode(pGpu, pGpu, pGsync->pExtDev)**pGsync->gsyncHal.gsyncSetRasterSyncDecodeMode(pGpu, pGpu, pGsync->pExtDev)*call to gsyncP2060StartupProvider*call to gsyncP2061StartupProvider*syncSkewResolutionInNs*syncSkewMax*masterableGpuConnectors*NVRM: Invalid gsync state **NVRM: Invalid gsync state *pGsyncMgr->gsyncTable[gsyncInst].gpuCount > 0**pGsyncMgr->gsyncTable[gsyncInst].gpuCount > 0*gsyncCount**pGSyncApi**pEventNotification*call to gsyncConvertNewEventToOldEventNum*src/kernel/gpu/external_device/gsync_api.c*NVRM: gsync instance 0x%0x has had a status change: 0x%0x **src/kernel/gpu/external_device/gsync_api.c**NVRM: gsync instance 0x%0x has had a status change: 0x%0x *(pEventNotification->NotifyType == NV01_EVENT_KERNEL_CALLBACK_EX) || (pEventNotification->NotifyType == NV01_EVENT_OS_EVENT)**(pEventNotification->NotifyType == NV01_EVENT_KERNEL_CALLBACK_EX) || (pEventNotification->NotifyType == NV01_EVENT_OS_EVENT)*NVRM: gsync instance 0x%0x has had a status change: %d **NVRM: gsync instance 0x%0x has had a status change: %d *call to osObjectEventNotification*pEventNotification->NotifyType == NV01_EVENT_KERNEL_CALLBACK_EX**pEventNotification->NotifyType == 
NV01_EVENT_KERNEL_CALLBACK_EX*call to gsyncIsInstanceValid*pNv30f1AllocParams**pDisplayId*src/kernel/gpu/external_device/kern_external_device.c*NVRM: Extdev getting display IDs have failed! **src/kernel/gpu/external_device/kern_external_device.c**NVRM: Extdev getting display IDs have failed! *displayIds**displayIds*call to extdevGetExtDev*pdsif**pdsif*call to i2c_extdeviceHelper*call to writeregu008_extdevice*Scheduled**pI*Validate*TimeOut*call to kflcnRiscvRegRead_DISPATCH*riscvCpuctl*riscvIrqmask*riscvIrqdest*riscvPc*riscvIrqdeleg*riscvPrivErrStat*riscvPrivErrInfo*riscvPrivErrAddrH*riscvPrivErrAddrL*riscvHubErrStat*src/kernel/gpu/falcon/arch/ampere/kernel_falcon_ga102.c*NVRM: Trace buffer blocked, skipping. **src/kernel/gpu/falcon/arch/ampere/kernel_falcon_ga102.c**NVRM: Trace buffer blocked, skipping. *NVRM: Trace buffer larger than expected. Bailing! **NVRM: Trace buffer larger than expected. Bailing! *tracePCEntries*call to kflcnRiscvRegWrite_DISPATCH*tracePC**tracePC*NVRM: Timeout waiting for RISC-V to halt **NVRM: Timeout waiting for RISC-V to halt **pKernelFlcn*hwcfg2*call to kflcnIsRiscvCpuEnabled_DISPATCH*bcrCtrl*Failed to switch core to Falcon mode**Failed to switch core to Falcon mode*bcr*call to kflcnPreResetWait_DISPATCH*kflcnPreResetWait_HAL(pGpu, pKernelFlcn)**kflcnPreResetWait_HAL(pGpu, pKernelFlcn)*call to kflcnResetHw_DISPATCH*kflcnResetHw(pGpu, pKernelFlcn)**kflcnResetHw(pGpu, pKernelFlcn)*call to kflcnWaitForResetToFinish_DISPATCH*kflcnWaitForResetToFinish_HAL(pGpu, pKernelFlcn)**kflcnWaitForResetToFinish_HAL(pGpu, pKernelFlcn)*call to kflcnRiscvProgramBcr_DISPATCH*ONEBITSET(errorCode)*src/kernel/gpu/falcon/arch/blackwell/kernel_falcon_gb100.c**ONEBITSET(errorCode)**src/kernel/gpu/falcon/arch/blackwell/kernel_falcon_gb100.c*FB Poison**FB Poison*BROM ECC**BROM ECC*ITCM ECC**ITCM ECC*DTCM ECC**DTCM ECC*ICACHE ECC**ICACHE ECC*DCACHE ECC**DCACHE ECC*RISCV Delayed Lockstep**RISCV Delayed Lockstep*TKE Register ECC**TKE Register ECC*SE Logic**SE 
Logic*SE Keyslot ECC**SE Keyslot ECC*TKE Watchdog Timeout**TKE Watchdog Timeout*FBIF ECC**FBIF ECC*MPU RAM ECC**MPU RAM ECC*Engine Fault 0**Engine Fault 0*Engine Fault 1**Engine Fault 1*Engine Fault 2**Engine Fault 2*Engine Fault 3**Engine Fault 3*Engine Fault 4**Engine Fault 4*Engine Fault 5**Engine Fault 5*Engine Fault 6**Engine Fault 6*Engine Fault 7**Engine Fault 7*pErrorStatus != NULL**pErrorStatus != NULL*NVRM: Cannot read NV_PRISCV_RISCV_FAULT_CONTAINMENT_SRCSTAT (0x%x) **NVRM: Cannot read NV_PRISCV_RISCV_FAULT_CONTAINMENT_SRCSTAT (0x%x) *src/kernel/gpu/falcon/arch/turing/kernel_crashcat_engine_tu102.c*NVRM: unknown CrashCat scratch ID %u **src/kernel/gpu/falcon/arch/turing/kernel_crashcat_engine_tu102.c**NVRM: unknown CrashCat scratch ID %u *(offset & (sizeof(NvU32) - 1)) == 0**(offset & (sizeof(NvU32) - 1)) == 0*(size & (sizeof(NvU32) - 1)) == 0**(size & (sizeof(NvU32) - 1)) == 0*call to kcrashcatEngineMaskDmemAddr_DISPATCH*dmemc*call to kcrashcatEngineRegWrite_DISPATCH*pWordBuf*call to kcrashcatEngineRegRead_DISPATCH*falconMailbox**falconMailbox*falconIrqstat*falconIrqmode*fbifInstblk*fbifCtl*fbifThrottle*fbifAchkBlk**fbifAchkBlk*fbifAchkCtl**fbifAchkCtl*fbifCg1*src/kernel/gpu/falcon/arch/turing/kernel_falcon_tu102.c**src/kernel/gpu/falcon/arch/turing/kernel_falcon_tu102.c*call to kflcnIsRiscvActive_DISPATCH*icdCmd*call to kflcnIcdWriteCmdReg_DISPATCH*call to kflcnRiscvIcdWaitForIdle_DISPATCH*call to s_riscvIcdGetValue*call to kflcnRiscvIcdWriteAddress_DISPATCH*call to kflcnIcdReadCmdReg_DISPATCH*call to kflcnRiscvIcdReadRdata_DISPATCH*NVRM: Timeout waiting for Falcon to halt **NVRM: Timeout waiting for Falcon to halt *call to kflcnReset_TU102*kflcnReset_TU102(pGpu, pKernelFlcn)**kflcnReset_TU102(pGpu, pKernelFlcn)*(status == NV_OK) || (status == NV_ERR_GPU_IN_FULLCHIP_RESET)**(status == NV_OK) || (status == NV_ERR_GPU_IN_FULLCHIP_RESET)*call to kflcnSwitchToFalcon_DISPATCH**pEngPriv*call to memdescSetKernelMapping*call to memdescSetKernelMappingPriv*call 
to _kcrashcatEngineCreateBufferMemDesc*src/kernel/gpu/falcon/kernel_crashcat_engine.c**src/kernel/gpu/falcon/kernel_crashcat_engine.c*memdescMap(pMemDesc, 0, memdescGetSize(pMemDesc), NV_TRUE, NV_PROTECT_READABLE, &pBuf, &pPriv)**memdescMap(pMemDesc, 0, memdescGetSize(pMemDesc), NV_TRUE, NV_PROTECT_READABLE, &pBuf, &pPriv)***pEngPriv*call to _crashcatApertureToAddressSpace*memdescCreate(&pMemDesc, pKernelCrashCatEng->pGpu, pBufDesc->size, 0, NV_TRUE, bufAddrSpace, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pMemDesc, pKernelCrashCatEng->pGpu, pBufDesc->size, 0, NV_TRUE, bufAddrSpace, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)*call to crashcatEngineUnregisterCrashBuffer_IMPL*call to _addressSpaceToCrashcatAperture*call to crashcatEngineRegisterCrashBuffer_IMPL*fmtBuffer*%s %s**fmtBuffer**%s %s*argsCopy**argsCopy*call to nvErrorLog**fmt*call to portStringCat*newline*printBuffer**printBuffer*call to kcrashcatEngineUnregisterCrashBuffer_IMPL**pQueueMemDesc*call to crashcatEngineUnload_IMPL*pEngConfig*pEngConfig->pName != NULL**pEngConfig->pName != NULL*pEngConfig->errorId != 0**pEngConfig->errorId != 0*bConfigured**pName*errorId*dmemPort*allocQueueSize*memdescCreate(&pKernelCrashCatEng->pQueueMemDesc, pKernelCrashCatEng->pGpu, pEngConfig->allocQueueSize, CRASHCAT_QUEUE_ALIGNMENT, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pKernelCrashCatEng->pQueueMemDesc, pKernelCrashCatEng->pGpu, pEngConfig->allocQueueSize, CRASHCAT_QUEUE_ALIGNMENT, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)*memdescAlloc(pKernelCrashCatEng->pQueueMemDesc)**memdescAlloc(pKernelCrashCatEng->pQueueMemDesc)*call to kcrashcatEngineRegisterCrashBuffer_IMPL*kcrashcatEngineRegisterCrashBuffer(pKernelCrashCatEng, pKernelCrashCatEng->pQueueMemDesc)**kcrashcatEngineRegisterCrashBuffer(pKernelCrashCatEng, pKernelCrashCatEng->pQueueMemDesc)*src/kernel/gpu/falcon/kernel_falcon.c*NVRM: ICD: Core is booted. 
**src/kernel/gpu/falcon/kernel_falcon.c**NVRM: ICD: Core is booted. *NVRM: ICD: [ERROR] Core is not booted. **NVRM: ICD: [ERROR] Core is not booted. *call to kflcnRiscvIcdRstat_DISPATCH*NVRM: ICD: RSTAT%d 0x%016llx **NVRM: ICD: RSTAT%d 0x%016llx *NVRM: ICD: [ERROR] Unable to retrieve any RSTAT register. **NVRM: ICD: [ERROR] Unable to retrieve any RSTAT register. *call to kflcnRiscvIcdHalt_DISPATCH*NVRM: ICD: [ERROR] ICD Halt command failed. **NVRM: ICD: [ERROR] ICD Halt command failed. *call to kflcnRiscvIcdRpc_DISPATCH*call to kflcnCoreDumpPc_DISPATCH*NVRM: ICD: [WARN] Cannot retrieve PC. **NVRM: ICD: [WARN] Cannot retrieve PC. *NVRM: ICD: PC = 0x--------%08llx **NVRM: ICD: PC = 0x--------%08llx *NVRM: ICD: PC = 0x%016llx **NVRM: ICD: PC = 0x%016llx *call to kflcnRiscvIcdReadReg_DISPATCH*riscvCoreRegisters**riscvCoreRegisters*traceRa*traceS0*NVRM: ICD: register read failed for x%02d **NVRM: ICD: register read failed for x%02d *NVRM: ICD: ra:0x%016llx sp:0x%016llx gp:0x%016llx tp:0x%016llx **NVRM: ICD: ra:0x%016llx sp:0x%016llx gp:0x%016llx tp:0x%016llx *NVRM: ICD: a0:0x%016llx a1:0x%016llx a2:0x%016llx a3:0x%016llx **NVRM: ICD: a0:0x%016llx a1:0x%016llx a2:0x%016llx a3:0x%016llx *NVRM: ICD: a4:0x%016llx a5:0x%016llx a6:0x%016llx a7:0x%016llx **NVRM: ICD: a4:0x%016llx a5:0x%016llx a6:0x%016llx a7:0x%016llx *NVRM: ICD: s0:0x%016llx s1:0x%016llx s2:0x%016llx s3:0x%016llx **NVRM: ICD: s0:0x%016llx s1:0x%016llx s2:0x%016llx s3:0x%016llx *NVRM: ICD: s4:0x%016llx s5:0x%016llx s6:0x%016llx s7:0x%016llx **NVRM: ICD: s4:0x%016llx s5:0x%016llx s6:0x%016llx s7:0x%016llx *NVRM: ICD: s8:0x%016llx s9:0x%016llx s10:0x%016llx s11:0x%016llx **NVRM: ICD: s8:0x%016llx s9:0x%016llx s10:0x%016llx s11:0x%016llx *NVRM: ICD: t0:0x%016llx t1:0x%016llx t2:0x%016llx t3:0x%016llx **NVRM: ICD: t0:0x%016llx t1:0x%016llx t2:0x%016llx t3:0x%016llx *NVRM: ICD: t4:0x%016llx t5:0x%016llx t6:0x%016llx **NVRM: ICD: t4:0x%016llx t5:0x%016llx t6:0x%016llx *call to kflcnRiscvIcdRcsr_DISPATCH*NVRM: ICD: 
csr[%03x] = 0x%016llx **NVRM: ICD: csr[%03x] = 0x%016llx *call to kflcnRiscvIcdReadMem_DISPATCH*NVRM: ICD: unwind%02u: 0x%016llx **NVRM: ICD: unwind%02u: 0x%016llx *NVRM: ICD: [WARN] unwind greater than max depth... **NVRM: ICD: [WARN] unwind greater than max depth... *NVRM: ICD: [WARN] unwind retrieved zero values :( **NVRM: ICD: [WARN] unwind retrieved zero values :( *NVRM: ICD: unwind complete. **NVRM: ICD: unwind complete. *call to kflcnDumpCoreRegs_DISPATCH*call to kflcnDumpPeripheralRegs_DISPATCH*call to kflcnDumpTracepc_DISPATCH*NVRM: PRI: riscvPc : %08x **NVRM: PRI: riscvPc : %08x *PeregrineCoreRegisters*NVRM: PRI: riscvCpuctl : %08x **NVRM: PRI: riscvCpuctl : %08x *NVRM: PRI: riscvIrqmask : %08x **NVRM: PRI: riscvIrqmask : %08x *NVRM: PRI: riscvIrqdest : %08x **NVRM: PRI: riscvIrqdest : %08x *NVRM: PRI: riscvPrivErrStat : %08x **NVRM: PRI: riscvPrivErrStat : %08x *NVRM: PRI: riscvPrivErrInfo : %08x **NVRM: PRI: riscvPrivErrInfo : %08x *NVRM: PRI: riscvPrivErrAddr : %016llx **NVRM: PRI: riscvPrivErrAddr : %016llx *NVRM: PRI: riscvHubErrStat : %08x **NVRM: PRI: riscvHubErrStat : %08x *NVRM: PRI: falconMailbox : 0:%08x 1:%08x **NVRM: PRI: falconMailbox : 0:%08x 1:%08x *NVRM: PRI: falconIrqstat : %08x **NVRM: PRI: falconIrqstat : %08x *NVRM: PRI: falconIrqmode : %08x **NVRM: PRI: falconIrqmode : %08x *NVRM: PRI: fbifInstblk : %08x **NVRM: PRI: fbifInstblk : %08x *NVRM: PRI: fbifCtl : %08x **NVRM: PRI: fbifCtl : %08x *NVRM: PRI: fbifThrottle : %08x **NVRM: PRI: fbifThrottle : %08x *NVRM: PRI: fbifAchkBlk : 0:%08x 1:%08x **NVRM: PRI: fbifAchkBlk : 0:%08x 1:%08x *NVRM: PRI: fbifAchkCtl : 0:%08x 1:%08x **NVRM: PRI: fbifAchkCtl : 0:%08x 1:%08x *NVRM: PRI: fbifCg1 : %08x **NVRM: PRI: fbifCg1 : %08x *NVRM: TRACE: %02u = 0x%016llx **NVRM: TRACE: %02u = 0x%016llx *NVRM: nonstall intr for MC 0x%x **NVRM: nonstall intr for MC 0x%x *rmEngineType != RM_ENGINE_TYPE_NULL**rmEngineType != RM_ENGINE_TYPE_NULL**pKernelFalcon*NVRM: physEngDesc 0x%x **NVRM: physEngDesc 0x%x 
*NVRM: Registering 0x%x/0x%x to handle nonstall intr **NVRM: Registering 0x%x/0x%x to handle nonstall intr *pRecords[mcIdx].pNotificationService == NULL**pRecords[mcIdx].pNotificationService == NULL*This should only be called on full KernelFalcon implementations**This should only be called on full KernelFalcon implementations*pFalconConfig*call to kflcnConfigureEngine_IMPL*pKernelChannel != NULL**pKernelChannel != NULL*call to _kflcnNeedToAllocContext*call to kchannelIsCtxBufferAllocSkipped*call to kchangrpGetEngineContextMemDesc_IMPL*pCtxMemDesc*NVRM: channel 0x%08x does not have a falcon engine instance for engDesc=0x%x **NVRM: channel 0x%08x does not have a falcon engine instance for engDesc=0x%x *call to kchannelUnmapEngineCtxBuf_IMPL*call to kchannelSetEngineContextMemDesc_IMPL*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, pKernelFalcon->physEngDesc, NULL)**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, pKernelFalcon->physEngDesc, NULL)*gpuIsClassSupported(pGpu, classNum)**gpuIsClassSupported(pGpu, classNum)*call to _kflcnAllocAndMapCtxBuffer*_kflcnAllocAndMapCtxBuffer(pGpu, pKernelFalcon, pKernelChannel)**_kflcnAllocAndMapCtxBuffer(pGpu, pKernelFalcon, pKernelChannel)*call to videoEventTraceCtxInit*videoEventTraceCtxInit(pGpu, pKernelChannel, pKernelFalcon->physEngDesc) == NV_OK**videoEventTraceCtxInit(pGpu, pKernelChannel, pKernelFalcon->physEngDesc) == NV_OK*call to _kflcnPromoteContext*gpumgrGetSubDeviceInstanceFromGpu(pGpu) == 0**gpumgrGetSubDeviceInstanceFromGpu(pGpu) == 0*subdeviceGetByInstance(pClient, RES_GET_HANDLE(pDevice), 0, &pSubdevice)**subdeviceGetByInstance(pClient, RES_GET_HANDLE(pDevice), 0, &pSubdevice)*ppEngCtxDesc**ppEngCtxDesc*pEngCtx**pEngCtx*pEngCtx != NULL**pEngCtx != NULL*kfifoEngineInfoXlate_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), ENGINE_INFO_TYPE_ENG_DESC, pKernelFalcon->physEngDesc, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)**kfifoEngineInfoXlate_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), 
ENGINE_INFO_TYPE_ENG_DESC, pKernelFalcon->physEngDesc, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)*hChanClient*kchangrpGetEngineContextMemDesc(pGpu, pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup, &pMemDesc)**kchangrpGetEngineContextMemDesc(pGpu, pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup, &pMemDesc)*entryCount*promoteEntry**promoteEntry*bufferId*bInitialize*bNonmapped*call to vaListFindVa*vaListFindVa(&pEngCtx->vaList, pKernelChannel->pVAS, &addr)**vaListFindVa(&pEngCtx->vaList, pKernelChannel->pVAS, &addr)*pRmApi->Control(pRmApi, pClient->hClient, RES_GET_HANDLE(pSubdevice), NV2080_CTRL_CMD_GPU_PROMOTE_CTX, &rmCtrlParams, sizeof(rmCtrlParams))**pRmApi->Control(pRmApi, pClient->hClient, RES_GET_HANDLE(pSubdevice), NV2080_CTRL_CMD_GPU_PROMOTE_CTX, &rmCtrlParams, sizeof(rmCtrlParams))*NVRM: This channel already has a falcon engine instance on engine %d:%d **NVRM: This channel already has a falcon engine instance on engine %d:%d *call to ctxBufPoolIsSupported**pCtxBufPool*memdescCreate(&pCtxMemDesc, pGpu, pKernelFalcon->ctxBufferSize, FLCN_BLK_ALIGNMENT, NV_TRUE, ADDR_UNKNOWN, pKernelFalcon->ctxAttr, flags)**memdescCreate(&pCtxMemDesc, pGpu, pKernelFalcon->ctxBufferSize, FLCN_BLK_ALIGNMENT, NV_TRUE, ADDR_UNKNOWN, pKernelFalcon->ctxAttr, flags)*call to memdescSetCtxBufPool*memdescSetCtxBufPool(pCtxMemDesc, pCtxBufPool)**memdescSetCtxBufPool(pCtxMemDesc, pCtxBufPool)*call to memdescAllocList*call to memdescU32ToAddrSpaceList*memmgrMemDescMemSet(GPU_GET_MEMORY_MANAGER(pGpu), pCtxMemDesc, 0, TRANSFER_FLAGS_NONE)**memmgrMemDescMemSet(GPU_GET_MEMORY_MANAGER(pGpu), pCtxMemDesc, 0, TRANSFER_FLAGS_NONE)*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, pKernelFalcon->physEngDesc, pCtxMemDesc)**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, pKernelFalcon->physEngDesc, pCtxMemDesc)*call to kchannelMapEngineCtxBuf_IMPL*kchannelMapEngineCtxBuf(pGpu, pKernelChannel, pKernelFalcon->physEngDesc)**kchannelMapEngineCtxBuf(pGpu, 
pKernelChannel, pKernelFalcon->physEngDesc)*call to kchannelGetGfid_IMPL*call to kflcnIsRiscvMode*call to kflcnRiscvReadIntrStatus_DISPATCH*call to kflcnReadIntrStatus_DISPATCH*registerBase*riscvRegisterBase*bBootFromHs*ctxAttr*ctxBufferSize*addrSpaceList*call to kcrashcatEngineConfigure_IMPL*NVRM: for physEngDesc 0x%x **NVRM: for physEngDesc 0x%x *src/kernel/gpu/falcon/kernel_falcon_ctrl.c**src/kernel/gpu/falcon/kernel_falcon_ctrl.c*CliGetKernelChannel(RES_GET_CLIENT(pSubdevice), pParams->hChannel, &pKernelChannel)**CliGetKernelChannel(RES_GET_CLIENT(pSubdevice), pParams->hChannel, &pKernelChannel)*call to memmgrSetMemDescPageSize_DISPATCH*pageSize != 0**pageSize != 0*totalBufferSize*serverGetClientUnderLock(&g_resServ, pParams->hUserClient, &pUserClient)**serverGetClientUnderLock(&g_resServ, pParams->hUserClient, &pUserClient)*pUserClient*CliGetKernelChannel(pUserClient, pParams->hChannel, &pKernelChannel)**CliGetKernelChannel(pUserClient, pParams->hChannel, &pKernelChannel)**bufferHandle***bufferHandle*bIsContigous*call to kgmmuGetExternalAllocAperture_IMPL*call to memdescGetPhysAddrsForGpu*pageSize <= NV_U32_MAX**pageSize <= NV_U32_MAX*bDeviceDescendant*gpuGetGidInfo(pGpu, &pUuid, &uuidLength, flags)**gpuGetGidInfo(pGpu, &pUuid, &uuidLength, flags)**pUuid*pMmuExceptInfo**HUBCLIENT_NVJPG1**HUBCLIENT_NVJPG2**HUBCLIENT_NVJPG3*call to kfifoIsMmuFaultEngineIdPbdma_IMPL*call to kgmmuIsFaultEngineBar1_DISPATCH**BAR1*call to kgmmuIsFaultEngineBar2_DISPATCH**BAR2**DISPLAY**IFB**SEC**PERF**NVDEC0**NVDEC1**NVDEC2**NVDEC3**CE0**CE1**CE2**CE3**CE4**CE5**PTP**NVENC0**NVENC1**NVENC2**PHYSICAL**NVJPG0**NVJPG1**NVJPG2**NVJPG3**FLA**GRAPHICS**GR1**GR2**GR3**GR4**GR5**GR6**GR7*pInstSubDeviceMemDesc**pInstSubDeviceMemDesc***pInstSubDeviceMemDesc*call to kfifoChannelGetFifoContextMemDesc_DISPATCH*pSubDevInstMemDesc*pSubDevInstMemDesc != NULL*src/kernel/gpu/fifo/arch/ampere/kernel_channel_ga10b.c**pSubDevInstMemDesc != 
NULL**src/kernel/gpu/fifo/arch/ampere/kernel_channel_ga10b.c**ppMemDesc != NULL***ppMemDesc != NULL*call to kfifoGenerateWorkSubmitToken_IMPL*kfifoGenerateWorkSubmitToken(pGpu, pKernelFifo, pKernelChannel, &workSubmitToken, NV_TRUE)*src/kernel/gpu/fifo/arch/ampere/kernel_fifo_ga100.c**kfifoGenerateWorkSubmitToken(pGpu, pKernelFifo, pKernelChannel, &workSubmitToken, NV_TRUE)**src/kernel/gpu/fifo/arch/ampere/kernel_fifo_ga100.c*call to kfifoUpdateUsermodeDoorbell_DISPATCH*kfifoUpdateUsermodeDoorbell_HAL(pGpu, pKernelFifo, workSubmitToken)**kfifoUpdateUsermodeDoorbell_HAL(pGpu, pKernelFifo, workSubmitToken)*runlistVal*channelVal**HUBCLIENT_NVENC0**HUBCLIENT_NVDEC4**UNRECOGNIZED_CLIENT**GPCCLIENT_T1_0**GPCCLIENT_T1_1**GPCCLIENT_T1_2**GPCCLIENT_T1_3**GPCCLIENT_T1_4**GPCCLIENT_T1_5**GPCCLIENT_T1_6**GPCCLIENT_T1_7**GPCCLIENT_PE_0**GPCCLIENT_PE_1**GPCCLIENT_PE_2**GPCCLIENT_PE_3**GPCCLIENT_PE_4**GPCCLIENT_PE_5**GPCCLIENT_PE_6**GPCCLIENT_PE_7**GPCCLIENT_RAST**GPCCLIENT_GCC**GPCCLIENT_GPCCS**GPCCLIENT_PROP_0**GPCCLIENT_PROP_1**GPCCLIENT_T1_8**GPCCLIENT_T1_9**GPCCLIENT_T1_10**GPCCLIENT_T1_11**GPCCLIENT_T1_12**GPCCLIENT_T1_13**GPCCLIENT_T1_14**GPCCLIENT_T1_15**GPCCLIENT_TPCCS_0**GPCCLIENT_TPCCS_1**GPCCLIENT_TPCCS_2**GPCCLIENT_TPCCS_3**GPCCLIENT_TPCCS_4**GPCCLIENT_TPCCS_5**GPCCLIENT_TPCCS_6**GPCCLIENT_TPCCS_7**GPCCLIENT_PE_8**GPCCLIENT_TPCCS_8**GPCCLIENT_T1_16**GPCCLIENT_T1_17**GPCCLIENT_ROP_0**GPCCLIENT_ROP_1**GPCCLIENT_GPM**HUBCLIENT_VIP**HUBCLIENT_CE0**HUBCLIENT_CE1**HUBCLIENT_DNISO**HUBCLIENT_FE**HUBCLIENT_FECS**HUBCLIENT_HOST**HUBCLIENT_HOST_CPU**HUBCLIENT_HOST_CPU_NB**HUBCLIENT_ISO**HUBCLIENT_MMU**HUBCLIENT_NVDEC0**HUBCLIENT_NVENC1**HUBCLIENT_NISO**HUBCLIENT_P2P**HUBCLIENT_PD**HUBCLIENT_PERF**HUBCLIENT_PMU**HUBCLIENT_RASTERTWOD**HUBCLIENT_SCC**HUBCLIENT_SCC_NB**HUBCLIENT_SEC**HUBCLIENT_SSYNC**HUBCLIENT_CE2**HUBCLIENT_XV**HUBCLIENT_MMU_NB**HUBCLIENT_DFALCON**HUBCLIENT_SKED**HUBCLIENT_AFALCON**HUBCLIENT_DONT_CARE**HUBCLIENT_HSCE0**HUBCLIENT_HSCE1**HUBCLIENT_HSCE2**HUBCLIENT_H
SCE3**HUBCLIENT_HSCE4**HUBCLIENT_HSCE5**HUBCLIENT_HSCE6**HUBCLIENT_HSCE7**HUBCLIENT_HSCE8**HUBCLIENT_HSCE9**HUBCLIENT_HSHUB**HUBCLIENT_PTP_X0**HUBCLIENT_PTP_X1**HUBCLIENT_PTP_X2**HUBCLIENT_PTP_X3**HUBCLIENT_PTP_X4**HUBCLIENT_PTP_X5**HUBCLIENT_PTP_X6**HUBCLIENT_PTP_X7**HUBCLIENT_NVENC2**HUBCLIENT_VPR_SCRUBBER0**HUBCLIENT_VPR_SCRUBBER1**HUBCLIENT_DWBIF**HUBCLIENT_FBFALCON**HUBCLIENT_CE_SHIM**HUBCLIENT_GSP**HUBCLIENT_NVDEC1**HUBCLIENT_NVDEC2**HUBCLIENT_NVJPG0**HUBCLIENT_NVDEC3**HUBCLIENT_OFA0**HUBCLIENT_HSCE10**HUBCLIENT_HSCE11**HUBCLIENT_HSCE12**HUBCLIENT_HSCE13**HUBCLIENT_HSCE14**HUBCLIENT_HSCE15**HUBCLIENT_FE1**HUBCLIENT_FE2**HUBCLIENT_FE3**HUBCLIENT_FE4**HUBCLIENT_FE5**HUBCLIENT_FE6**HUBCLIENT_FE7**HUBCLIENT_FECS1**HUBCLIENT_FECS2**HUBCLIENT_FECS3**HUBCLIENT_FECS4**HUBCLIENT_FECS5**HUBCLIENT_FECS6**HUBCLIENT_FECS7**HUBCLIENT_SKED1**HUBCLIENT_SKED2**HUBCLIENT_SKED3**HUBCLIENT_SKED4**HUBCLIENT_SKED5**HUBCLIENT_SKED6**HUBCLIENT_SKED7**HUBCLIENT_ESC**NVDEC4**CE6**CE7**CE8**CE9*NVRM: %d PBDMAs **NVRM: %d PBDMAs *pEngineInfo*pEngineInfo->maxNumPbdmas != 0**pEngineInfo->maxNumPbdmas != 0*call to kfifoConstructEngineList_KERNEL*kfifoConstructEngineList_HAL(pGpu, pKernelFifo)**kfifoConstructEngineList_HAL(pGpu, pKernelFifo)**pEngineInfo*pEngineInfo != NULL**pEngineInfo != NULL*val < pEngineInfo->engineInfoListSize**val < pEngineInfo->engineInfoListSize*pbdmaFaultIds**pbdmaFaultIds*engineData**engineData*eng*call to kfifoRunlistQueryNumChannels_KERNEL*call to kbusIsBug2751296LimitBar2PtSize*pGeneratedToken != NULL**pGeneratedToken != NULL*(pKernelChannel->pKernelChannelGroupApi != NULL) && (pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup != NULL)**(pKernelChannel->pKernelChannelGroupApi != NULL) && (pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup != NULL)*vgpuGetCallingContextGfid(pGpu, &gfId)**vgpuGetCallingContextGfid(pGpu, &gfId)*call to kfifoGetVChIdForSChId_DISPATCH*kfifoGetVChIdForSChId_HAL(pGpu, pKernelFifo, chId, gfId, 
kchannelGetEngineType(pKernelChannel), &vChId)**kfifoGetVChIdForSChId_HAL(pGpu, pKernelFifo, chId, gfId, kchannelGetEngineType(pKernelChannel), &vChId)*call to kchannelIsRunlistSet*NVRM: FAILED Channel 0x%x is not assigned to runlist yet **NVRM: FAILED Channel 0x%x is not assigned to runlist yet *NVRM: Generated workSubmitToken 0x%x for channel 0x%x runlist 0x%x **NVRM: Generated workSubmitToken 0x%x for channel 0x%x runlist 0x%x *NVRM: Poking workSubmitToken 0x%x **NVRM: Poking workSubmitToken 0x%x *pKernelChannelGroup != NULL**pKernelChannelGroup != NULL*call to kgrmgrGetGrIdxVeidMask*call to kfifoChannelGroupGetLocalMaxSubcontext_GM107*call to kfifoEngineInfoXlate_GV100*kfifoEngineInfoXlate_GV100(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, ENG_GR(0), ENGINE_INFO_TYPE_MMU_FAULT_ID, &baseGrFaultId)**kfifoEngineInfoXlate_GV100(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, ENG_GR(0), ENGINE_INFO_TYPE_MMU_FAULT_ID, &baseGrFaultId)*call to kgrmgrGetGrIdxForVeid_IMPL*kgrmgrGetGrIdxForVeid(pGpu, pKernelGraphicsManager, subctxId, &grIdx)**kgrmgrGetGrIdxForVeid(pGpu, pKernelGraphicsManager, subctxId, &grIdx)*call to kgrmgrGetVeidBaseForGrIdx_IMPL*kgrmgrGetVeidBaseForGrIdx(pGpu, pKernelGraphicsManager, grIdx, &startSubctxId)**kgrmgrGetVeidBaseForGrIdx(pGpu, pKernelGraphicsManager, grIdx, 
&startSubctxId)**HUBCLIENT_ESC0**HUBCLIENT_ESC1**HUBCLIENT_ESC2**HUBCLIENT_ESC3**HUBCLIENT_ESC4**HUBCLIENT_ESC5**HUBCLIENT_ESC6**HUBCLIENT_ESC7**HUBCLIENT_ESC8**HUBCLIENT_ESC9**HUBCLIENT_ESC10**HUBCLIENT_ESC11**GSPLITE*src/kernel/gpu/fifo/arch/blackwell/kernel_fifo_gb100.c**src/kernel/gpu/fifo/arch/blackwell/kernel_fifo_gb100.c*pEngineInfoList*baseGrPbdmaId**HUBCLIENT_GSPLITE**HUBCLIENT_GSPLITE1**HUBCLIENT_GSPLITE2**HUBCLIENT_GSPLITE3**HUBCLIENT_VPR_SCRUBBER2**HUBCLIENT_VPR_SCRUBBER3**HUBCLIENT_VPR_SCRUBBER4**HUBCLIENT_NVENC3**HUBCLIENT_PD1**HUBCLIENT_PD2**HUBCLIENT_PD3**HUBCLIENT_RASTERTWOD1**HUBCLIENT_RASTERTWOD2**HUBCLIENT_RASTERTWOD3**HUBCLIENT_SCC1**HUBCLIENT_SCC_NB1**HUBCLIENT_SCC2**HUBCLIENT_SCC_NB2**HUBCLIENT_SCC3**HUBCLIENT_SCC_NB3**HUBCLIENT_SSYNC1**HUBCLIENT_SSYNC2**HUBCLIENT_SSYNC3*src/kernel/gpu/fifo/arch/blackwell/kernel_fifo_gb202.c*NVRM: Invalid (SCG,runqueue) combination: (0x%x,0x%x) **src/kernel/gpu/fifo/arch/blackwell/kernel_fifo_gb202.c**NVRM: Invalid (SCG,runqueue) combination: (0x%x,0x%x) **HUBCLIENT_FSP**GSPLITE1**GSPLITE2**GSPLITE3**GSPLITE4**GSPLITE5**GSPLITE6**GSPLITE7*call to kfifoRingChannelDoorBell_GV100*pMmuExceptionInfo**HUBCLIENT_CE3**HUBCLIENT_NVJPG4**HUBCLIENT_NVJPG5**HUBCLIENT_NVJPG6**HUBCLIENT_NVJPG7**HUBCLIENT_NVDEC5**HUBCLIENT_NVDEC6**HUBCLIENT_NVDEC7**FSP*call to memCreateMemDesc_IMPL*memCreateMemDesc(pGpu, ppMemDesc, ADDR_SYSMEM, offset, size, attr, attr2)*src/kernel/gpu/fifo/arch/hopper/kernel_fifo_gh100.c**memCreateMemDesc(pGpu, ppMemDesc, ADDR_SYSMEM, offset, size, attr, attr2)**src/kernel/gpu/fifo/arch/hopper/kernel_fifo_gh100.c*call to memmgrGetMessageKind_DISPATCH*call to memdescSetGpuCacheSnoop*call to memdescSetCpuCacheSnoop*call to kfifoConstructUsermodeMemdescs_GV100*kfifoConstructUsermodeMemdescs_GV100(pGpu, pKernelFifo)**kfifoConstructUsermodeMemdescs_GV100(pGpu, pKernelFifo)*src/kernel/gpu/fifo/arch/maxwell/kernel_channel_gm107.c*NVRM: channel 0x%08x 
src/kernel/gpu/fifo/arch/maxwell/kernel_channel_gm107.c

  Log strings:
    NVRM: channel 0x%08x
    NVRM: fifoGetUserdBar1Offset_GF100: BAR1 map of USERD has not been setup yet
    NVRM: class = %x not supported for user base mapping
    NVRM: Unable to Free Channel From Heap: %d
    NVRM: Unable to get instance memory info!
    NVRM: Instance block is NULL for hClient 0x%x Channel 0x%x!
    NVRM: Unable to allocate instance memory descriptor!
    NVRM: Instance block allocation for hClient 0x%x hChannel 0x%x failed
    NVRM: Could not allocate memdesc for RAMFC
    NVRM: Could not allocate sub memdesc for USERD
    NVRM: kchannelCreateUserMemDesc failed
    NVRM: hChannel 0x%x hClient 0x%x, Class ID 0x%x Instance Block @ 0x%llx (%s %x) USERD @ 0x%llx for subdevice %d
    NVRM: Could not create Channel
    NVRM: class ID: 0x%08x classEngine ID: 0x%08x

  Assertions:
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, *pEngDesc, ENGINE_INFO_TYPE_ENG_DESC, pEngDesc)
    kfifoIsPreAllocatedUserDEnabled(pKernelFifo)
    (*bar1MapOffset + *bar1MapSize) <= (pUserdInfo->userdBar1MapStartOffset + pUserdInfo->userdBar1MapSize)
    !pKernelChannel->bClientAllocatedUserD
    pGpu->getProperty(pGpu, PDB_PROP_GPU_ATS_SUPPORTED)
    pChidMgr != NULL
    pFifoDataBlock != NULL
    pFifoDataBlock->pData == pKernelChannel
    !kfifoIsPerRunlistChramEnabled(pKernelFifo)
    kfifoChidMgrGetKernelChannel(pGpu, pKernelFifo, pChidMgr, ChID) == NULL
    FLD_TEST_DRF(OS04, _FLAGS, _CHANNEL_USERD_INDEX_FIXED, _FALSE, Flags)
    pKernelChannel->pUserdSubDeviceMemDesc[subdevInst] == NULL
    kchannelFindChildByHandle(pKernelChannel, handle, &pObject)
    pObject != NULL

  Calls: kfifoIsPreAllocatedUserDEnabled, CliGetChannelClassInfo, kchannelGetUserdBar1MapOffset_DISPATCH, kfifoChidMgrFreeChid_IMPL, kfifoIsPerRunlistChramEnabled, kfifoChidMgrAllocChid_IMPL, _kchannelDestroyRMUserdMemDesc, kfifoValidateSCGTypeAndRunqueue_DISPATCH, kfifoGetInstMemInfo_DISPATCH, gpuIsInstanceMemoryAlwaysCached, memdescSetName, _kchannelCreateRMUserdMemDesc, kchannelCreateUserMemDesc_DISPATCH, kchannelFindChildByHandle, gpuXlateEngDescToClientEngineId_IMPL

  Identifiers: pSubDevInstMemDesc, pUserdInfo, classInfo, pFifoDataHeap, pFifoDataBlock, *pChidMgr, allocMode, subdevInst, pFifoHalData, *pFifoHalData, pInstanceBlock, pRamfcDesc, pInstanceBlockDesc, *pKernelChannelGroup, pInstAllocList, pChannelBufPool, rm_instance_block_surface, pUserdSubDeviceMemDesc, *pUserdSubDeviceMemDesc, ppUserdSubdevMemDesc, userdPhysDesc, *userdPhysDesc, bSkipCtxBufferAlloc, resourceDesc, halEngineTag, pRmEngineID

src/kernel/gpu/fifo/arch/maxwell/kernel_fifo_gm107.c

  Log strings:
    NVRM: BAR1 map of USERD has not been setup yet
    NVRM: Unmapping USERD from NVLINK.
    NVRM: Freeing preallocated USERD phys and bar1 range
    NVRM: Could not memdescCreate for USERD for %x #channels
    NVRM: Could not allocate USERD for %x #channels
    NVRM: Mapping USERD with coherent link (USERD in FBMEM).
    NVRM: Mapping USERD with coherent link (USERD in SYSMEM).
    NVRM: Pre-allocated USERD is not supported with MIG
    NVRM: Could not map USERD to BAR1
    NVRM: Could not cpu map BAR1 snoop range
    NVRM: USERD Preallocated phys @ 0x%llx bar1 offset @ 0x%llx of size 0x%x
    NVRM: can't find runlist ID for engine ENG_GR(0)!
    NVRM: EngineID %x is not part classDB, skipping
    NVRM: Asked for host-specific type(0x%x) for non-host engine type(0x%x),val(0x%08x)
    NVRM: unknown inst target 0x%x
    NVRM: No channel found for instance 0x%016llx (target 0x%x)
    NVRM: bad engineState 0x%x on engine 0x%x
    NVRM: channel 0x%08x engine 0x%x engineState 0x%x *ppMemDesc %p
    NVRM: Unable to program runlist for %s
    NVRM: Channel has already been assigned a runlist incompatible with this engine (requested: 0x%x current: 0x%x).
    NVRM: Runlist does not support TSGs
    NVRM: start
    NVRM: Failed to allocate dummy page for zombie subcontexts
    NVRM: RM control call to setup zombie subctx failed, status 0x%x
    NVRM: Could not memdescCreate for dummy page
    NVRM: Could not allocate dummy page

  Assertions:
    pUserdAperture != NULL && pUserdAttribute != NULL
    pUserdInfo->userdPhysDesc[currentGpuInst]->_flags & MEMDESC_FLAGS_PHYSICALLY_CONTIGUOUS
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)rmEngineType, ENGINE_INFO_TYPE_RUNLIST, &srcRunlist)
    kfifoGetEnginePbdmaIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)rmEngineType, &pSrcPbdmaIds, &numSrcPbdmaIds)
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_ENG_DESC, &engDesc)
    gpuGetClassList(pGpu, &numClasses, NULL, engDesc)
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RUNLIST, &runlist)
    kfifoGetEnginePbdmaIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, &pPbdmaIds, &numPbdmaIds)
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&localRmEngineType)
    inVal < pEngineInfo->engineInfoListSize
    kfifoConstructEngineList_HAL(pGpu, pKernelFifo) == NV_OK
    memmgrMemSet(GPU_GET_MEMORY_MANAGER(pGpu), &tSurf, 0, NV_RAMUSERD_CHAN_SIZE, TRANSFER_FLAGS_NONE)
    pOutVal != NULL
    outType != ENGINE_INFO_TYPE_PBDMA_ID
    0 && "check all ENGINE_INFO_TYPE are classified as host-driven or not"
    pInst != NULL
    !memdescHasSubDeviceMemDescs(*ppMemDesc)
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, engDesc, ENGINE_INFO_TYPE_RUNLIST, &runlistId)
    pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup->runlistId == runlistId
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)rmEngineType, ENGINE_INFO_TYPE_ENG_DESC, &engDesc)
    pAlignment != NULL
    kfifoPreAllocUserD_HAL(pGpu, pKernelFifo)

  Calls: kfifoChidMgrGetNumChannels_IMPL, kfifoGetUserdBar1MapStartOffset_DISPATCH, kfifoFreePreAllocUserD_DISPATCH, kfifoGetNumRunqueues_DISPATCH, kfifoGetEnginePbdmaIds_DISPATCH, memmgrMemSet_IMPL, _isEngineInfoTypeValidForOnlyHostDriven, memmgrComparePhysicalAddresses_DISPATCH, memdescHasSubDeviceMemDescs, kfifoGetSubctxType_DISPATCH, kfifoValidateEngineAndRunqueue_DISPATCH, kchannelGetRunqueue, kfifoValidateEngineAndSubctxType_DISPATCH, kfifoRunlistSetId_DISPATCH, kfifoRunlistIsTsgHeaderSupported_DISPATCH, kchannelSetRunlistId, kchannelSetRunlistSet, kfifoIsZombieSubctxWarEnabled, _kfifoFreeDummyPage, kfifoTriggerPreSchedulingDisableCallback_IMPL, kfifoSetupBar1UserdSnoop_b3696a, kfifoPreAllocUserD_DISPATCH, _kfifoAllocDummyPage, kfifoTriggerPostSchedulingEnableCallback_IMPL, memdescOverrideInstLoc

  Identifiers: pParentKernelFifo, *userdBar1CpuPtr, userdBar1Priv, bFifoFirstInit, userdBar1MapStartOffset, userdBar1MapSize, bUserdInSystemMemory, *_pteArray, pPartnerListParams, pSrcPbdmaIds, srcPbdmaId, pPbdmaIds, *pbdmaIds, *engineName, pEngineInfo->engineInfoListSize, pFoundInputEngine, pThisEngine, instAperture, bRunlistAssigned, bBcState, sliLoopReentrancy, *pDummyPageMemDesc, InstAttr, userdAperture, userdAttr, USERD

src/kernel/gpu/fifo/arch/maxwell/kernel_fifo_gm200.c

  Assertions:
    !IS_VIRTUAL(pGpu) && !IS_GSP_CLIENT(pGpu)

  Identifiers: numPbdmas

src/kernel/gpu/fifo/arch/pascal/kernel_fifo_gp102.c

  Log strings:
    NVRM: ASYNC Subcontext only supported on GR/GRCE
    NVRM: Unsupported Subcontext Type: 0x%x
    NVRM: Runqueue 1 only supports GR/GRCE
    NVRM: Unsupported runqueue: 0x%x

  Assertions:
    kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(0), &rmEngineType)
    kfifoGetEnginePbdmaIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, ceEngineTag, &pCePbdmaIds, &numCePbdmaIds) == NV_OK
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, grEngineTag, ENGINE_INFO_TYPE_RUNLIST, &srcRunlist) == NV_OK
    kfifoGetEnginePbdmaIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, grEngineTag, &pSrcPbdmaIds, &numSrcPbdmaIds) == NV_OK

  Calls: _kfifoIsValidCETag_GP102, kfifoGetPbdmaIdFromMmuFaultId_IMPL

  Identifiers: grEngineTag, pCePbdmaIds, bIsGrCe, HUBCLIENT_SEC2, HUBCLIENT_FBFLCN

src/kernel/gpu/fifo/arch/turing/kernel_fifo_tu102.c

  Log strings:
    NVRM: FAILED channel 0x%08x is not assigned to runlist yet
    NVRM: Generated workSubmitToken 0x%x for channel 0x%08x runlist 0x%x

  Assertions:
    kfifoGetPbdmaIdFromMmuFaultId(pGpu, pKernelFifo, engineID, &pbdmaId) == NV_OK
    pbdmaId < NV_HOST_NUM_PBDMA
    kfifoGetVChIdForSChId_HAL(pGpu, pKernelFifo, chId, gfid, kchannelGetEngineType(pKernelChannel), &vChId)

  Identifiers: HOST0, HOST1, HOST2, HOST3, HOST4, HOST5, HOST6, HOST7, HOST8, HOST9, HOST10, HOST11

src/kernel/gpu/fifo/arch/volta/kernel_channel_group_gv100.c

  Log strings:
    NVRM: Allocating Method buffer with Bar2Addr LO 0x%08x Bar2Addr HI 0x%08x runqueue 0x%0x

  Assertions:
    (pKernelChannelGroup->pMthdBuffers != NULL)
    (runqueue < runQueues)
    (pFaultMthdBuf->pMemDesc != NULL)
    kbusMapCpuInvisibleBar2Aperture_HAL(pGpu, pKernelBus, pFaultMthdBuf->pMemDesc, &(pFaultMthdBuf->bar2Addr), pFaultMthdBuf->pMemDesc->Size, 0, gfid)
    gpuGetCeFaultMethodBufferSize(pGpu, &bufSizeInBytes)
    (bufSizeInBytes > 0)
    memmgrMemSet(pMemoryManager, &surf, 0, bufSizeInBytes, TRANSFER_FLAGS_NONE)

  Calls: kbusUnmapCpuInvisibleBar2Aperture_DISPATCH, kbusMapCpuInvisibleBar2Aperture_DISPATCH, gpuGetCeFaultMethodBufferSize_KERNEL, kchangrpFreeFaultMethodBuffers_DISPATCH, serverutilGetResourceRefWithType

  Identifiers: pFaultMthdBuf, bar2Addr, faultBufApert, faultBufAttr, fault method buffer, rm_ce_fault_method_buffer_surface, userdAlignment

src/kernel/gpu/fifo/arch/volta/kernel_channel_gv100.c

  Log strings:
    NVRM: physical addr size of userdAddrHi=0x%08x, userAddrLo=0x%08x is incorrect!
    NVRM: User provided memory info for index %d is NULL
    NVRM: NV_CHANNEL_ALLOC_PARAMS needs to have all subdevice info

  Calls: kchannelIsUserdAddrSizeValid_DISPATCH, kchannelCreateUserdMemDesc_DISPATCH, kchannelDestroyUserdMemDesc_DISPATCH

  Identifiers: pUserdMemoryRef, pUserdMemory, pUserdMemDescForSubDev, userdAddr, pUserdSubMemDesc, pUserdAddr, pUserdAper, phUserdMemory, pUserdOffset, bClientAllocatedUserD

src/kernel/gpu/fifo/arch/volta/kernel_fifo_gv100.c

  Log strings:
    NVRM: subcontext not allocated for this TSG
    NVRM: FAILED to get work submit token.
    NVRM: Unable to get work submit token.

  Assertions:
    kfifoGetUsermodeMapInfo_HAL(pGpu, pKernelFifo, &offset, &size)
    memCreateMemDesc(pGpu, &(pKernelFifo->pRegVF), ADDR_REGMEM, offset, size, attr, attr2)
    gpuGetCeFaultMethodBufferSize(pGpu, &totalSize)
    pKernelChannel->subctxId != FIFO_PDB_IDX_BASE
    maxVeid / 32 <= SUBCTX_MASK_ARRAY_SIZE
    gpuGetRegBaseOffset_HAL(pGpu, NV_REG_BASE_USERMODE, &offset)
    kfifoEngineInfoXlate_GM107(pGpu, pKernelFifo, ENGINE_INFO_TYPE_ENG_DESC, ENG_GR(0), ENGINE_INFO_TYPE_MMU_FAULT_ID, &grFaultId)

  Calls: kfifoGetMaxCeChannelGroups_DISPATCH, kfifoIsLiteModeEnabled_3dd2c9, kfifoIsSubcontextSupported, kfifoGetMaxSubcontextFromGr_KERNEL, gpuGetRegBaseOffset_DISPATCH, kfifoEngineInfoXlate_GM107

  Identifiers: ACCESS_TYPE_VIRT_READ, ACCESS_TYPE_VIRT_WRITE, ACCESS_TYPE_VIRT_ATOMIC, ACCESS_TYPE_VIRT_PREFETCH, ACCESS_TYPE_PHYS_READ, ACCESS_TYPE_PHYS_WRITE, ACCESS_TYPE_PHYS_ATOMIC, ACCESS_TYPE_PHYS_PREFETCH, UNRECOGNIZED_ACCESS_TYPE, maxChannelGroups, runQueues, *pSubctxType, pSubctxIdHeap, *pKernelCtxShare, maxSubcontextCount

src/kernel/gpu/fifo/channel_descendant.c

  Log strings:
    NVRM: Unicast DMA mappings of non-memory objects not supported.
    NVRM: Method NoOperation: Class=0x%x Data=0x%x
    NVRM: Channel should have engineType associated with it
    NVRM: engine is missing for class 0x%x
    NVRM: Invalid object allocation request on channel 0x%08x

  Assertions:
    IS_VIRTUAL(pGpu) || gpuIsGpuFullPowerForPmResume(pGpu)
    rmapiLockIsOwner() || rmapiInRtd3PmPath()
    kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)kchannelGetEngineType(pKernelChannel), ENGINE_INFO_TYPE_ENG_DESC, &engDesc)
    kfifoGetEnginePartnerList_HAL(pGpu, pKernelFifo, &partnerParams)
    partnerParams.numPartners == 1

  Calls: CliDelObjectEvents, kchannelDeregisterChild_IMPL, chandesDestroy_b3696a, rmapiInRtd3PmPath, gpuIsDebuggerActive_DISPATCH, kfifoIsHostEngineExpansionSupported, gpuIsCCorApmFeatureEnabled_IMPL, rmapiutilIsExternalClassIdInternalOnly, gpuGetClassByEngineAndClassId_IMPL, kflcnGetKernelFalconForEngine_IMPL, kfifoRunlistSetIdByEngine_DISPATCH, kchannelRegisterChild_IMPL

  Identifiers: pClassDescriptor, internalClassDescriptor, pEngObject, *pEngObject

src/kernel/gpu/fifo/kernel_channel.c

  Log strings:
    NVRM: Rotating IV in GSP-RM.
    NVRM: Rotating IV in CPU-RM.
    NVRM: channel handle 0x%08x is part of a channel group, not allowed!
    NVRM: BAR1 offset 0x%llx for USERD of channel 0x%08x could not be cpu mapped
    NVRM: channel 0x%08x engDesc %s (0x%x)
    NVRM: Could not map context buffer for engDesc 0x%x
    NVRM: GPU = %d, channel 0x%08x, bcStateCurrent = %d, channelBcStateEnum = %d
    NVRM: channel 0x%08x engDesc 0x%x
    NVRM: channel 0x%08x engDesc 0x%x pMemDesc %p
    NVRM: Could not delete hal resources with object
    NVRM: channel %08x:%08x: out of handles!
    NVRM: Bind requested for channel 0x%08x belonging to TSG %d.
    NVRM: Binding channel 0x%08x to Engine %d
    NVRM: calling setErrorNotifier on channel 0x%08x, broadcast to TSG: %s
    NVRM: Overriding global engine type 0x%x to local engine type 0x%x (0x%x) due to MIG
    NVRM: Failed to set RunlistID 0x%08x for channel 0x%08x
    NVRM: Alloc Channel chid %d, hClient:0x%x, hParent:0x%x, hObject:0x%x, hClass:0x%x
    NVRM: Unable to get subdevice object.
    NVRM: RM Control call to fetch channel meminfo failed, hKernelChannel 0x%x
    NVRM: Unknown aperture, hClient 0x%x, hKernelChannel 0x%x
    NVRM: Check for channel schedulability for channel 0x%08x is already performed on guest-RM
    NVRM: Cannot schedule externally-owned channel 0x%08x with unbound allocations!
    NVRM: Cannot find DMA mapping for GPU_VA notifier
    NVRM: Notifier does not fit within DMA mapping for GPU_VA
    NVRM: Kernel VA addr mapping not present for notifier
    NVRM: Posting event on channel 0x%08x with info16 = 0x%x
    NVRM: No event on channel 0x%08x
    NVRM: Notification for channel 0x%08x stop is already performed on guest-RM
    NVRM: channel 0x%08x has no notifier set
    NVRM: Failed to set error notifier for channel 0x%08x with error 0x%x.
    NVRM: Invalid class for CliGetChannelClassInfo
    NVRM: Unicast DMA mappings of channels not supported.
    NVRM: Unicast DMA mappings of USERD not supported.
    NVRM: Caller missing proper locks
    Externally owned object not found

  Assertions:
    pKernelChannel->pEncStatsBuf == NULL
    memdescCreateSubMem(&pKernelChannel->pEncStatsBufMemDesc, pMemDesc, pGpu, 0, memdescGetSize(pMemDesc))
    pKernelChannel->pEncStatsBufMemDesc != NULL
    memdescIsSubMemoryMemDesc(pKernelChannel->pEncStatsBufMemDesc)
    pNotifierMemDesc != NULL
    memdescGetSize(pNotifierMemDesc) >= ((notifyIndex + 1) * sizeof(NvNotification))
    addressSpace == ADDR_SYSMEM
    memdescCreateSubMem(&pKernelChannel->pKeyRotationNotifierMemDesc, pNotifierMemDesc, pGpu, notifyIndex * sizeof(NvNotification), sizeof(NvNotification))
    pKernelChannel->pKeyRotationNotifier != NULL
    confComputeGetKeyPairByChannel(pGpu, pConfCompute, pKernelChannel, &h2dKey, NULL)
    serverGetClientUnderLock(&g_resServ, hClient, &pRsClient)
    kchannelSetEncryptionStatsBuffer_HAL(pGpu, pKernelChannel, NULL, NV_FALSE)
    pRmApi->DupObject(pRmApi, hClient, hDevice, &pKernelChannel->hEncryptStatsBuf, hClient, pGetKmbParams->hMemory, 0)
    clientGetResourceRef(pRsClient, pKernelChannel->hEncryptStatsBuf, &pResourceRef)
    (pMemory != NULL) && (pMemory->pMemDesc != NULL)
    kchannelSetEncryptionStatsBuffer_HAL(pGpu, pKernelChannel, pMemory->pMemDesc, NV_TRUE)
    pCC != NULL
    confComputeKeyStoreDeriveViaChannel_HAL(pCC, pKernelChannel, ROTATE_IV_ALL_VALID, keyMaterialBundle)
    kchannelGetFromDualHandle(pClient, hDual, ppKernelChannel)
    (pKernelChannelGroupApi != NULL) && (pKernelChannelGroupApi->pKernelChannelGroup != NULL)
    *ppKernelChannel != NULL
    pRmApi->Control(pRmApi, pRmCtrlParams->hClient, pRmCtrlParams->hObject, NV208F_CTRL_CMD_FIFO_GET_CHANNEL_STATE, pChannelStateParams, sizeof(*pChannelStateParams))
    pContextDma->Limit >= sizeof(NvNotification) - 1
    index != NV_CHANNELGPFIFO_NOTIFICATION_TYPE_ERROR
    index != NV_CHANNELGPFIFO_NOTIFICATION_TYPE_KEY_ROTATION_STATUS
    pMemory->Length >= notificationBufferSize
    CliGetDmaMappingInfo(pClient, RES_GET_HANDLE(pDevice), RES_GET_HANDLE(pMemory), physAddr, gpumgrGetDeviceGpuMask(pGpu->deviceInstance), &pDmaMappingInfo)
    pDmaMappingInfo->pMemDesc->Size >= notificationBufferSize
    pContextDma->Limit >= (notificationBufferSize - 1)
    engDesc != ENG_FIFO
    kchannelCheckBcStateCurrent(pGpu, pKernelChannel)
    pGVAS != NULL
    pTempMemDesc != NULL
    pKernelChannel->bcStateCurrent == channelBcStateEnum
    kchangrpAllocEngineContextDescriptor(pGpu, pKernelChannelGroup)
    pEngCtxDesc != NULL
    clientGetResourceRef(pClient, hResource, &pResourceRef)
    pResourceRef->pParentRef->hResource == RES_GET_HANDLE(pKernelChannel)
    *ppObject != NULL
    pIt != NULL
    pHead->pKernelChannel != NULL
    pIter != NULL
    pChild != NULL
    pMatchingObject != NULL
    pKernelChannel->pKernelChannelGroupApi != NULL
    kfifoGenerateWorkSubmitToken(pGpu, pKernelFifo, pKernelChannel, &pTokenParams->workSubmitToken, pKernelChannelGroup->bIsCallingContextVgpuPlugin)
    kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pKernelChannel), &ref)
    kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, localRmEngineType, &globalRmEngineType)
    gpuXlateClientEngineIdToEngDesc(pGpu, globalRmEngineType, &engineDesc)
    kchannelBindToRunlist(pKernelChannel, localRmEngineType, engineDesc)
    kchannelIsSchedulable_HAL(pGpu, pKernelChannel)
    pParams->hObject != RES_GET_CLIENT_HANDLE(pKernelChannel)
    kchannelGetClassEngineID_HAL(pGpu, pKernelChannel, pParams->hObject, &pParams->classEngineID, &pParams->classID, &rmEngineType)
    kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, rmEngineType, &localRmEngineType)
    pRpcParams != NULL
    pInstanceBlock != NULL
    IS_VIRTUAL_WITH_SRIOV(pGpu) && gpuIsWarBug200577889SriovHeavyEnabled(pGpu)
    (pKernelChannelGroupApi != NULL)
    (RMCFG_FEATURE_PLATFORM_GSP && !pKernelChannel->bGspOwned) || (IS_GFID_VF(gfid) && !gpuIsWarBug200577889SriovHeavyEnabled(pGpu))
    (pChannelGpfifoParams != NULL)
    pChannelGpfifoParams->mthdbufMem.size > 0
    pChannelGpfifoParams->mthdbufMem.base != 0
    _kchannelAllocHalData(pGpu, pKernelChannel)
    kchannelCreateUserdMemDescBc_HAL(pGpu, pKernelChannel, hClient, pChannelGpfifoParams->hUserdMemory, pChannelGpfifoParams->userdOffset)
    _kchannelDescribeMemDescsHeavySriov(pGpu, pKernelChannel)
    _kchannelDescribeMemDescsFromParams(pGpu, pKernelChannel, pChannelGpfifoParams)
    kchannelAllocMem_HAL(pGpu, pKernelChannel, pChannelGpfifoParams->flags, _kchannelgetVerifFlags(pGpu, pChannelGpfifoParams))
    pKernelChannel->pFifoHalData[gpumgrGetSubDeviceInstanceFromGpu(pGpu)] != NULL
    pGVAS != NULL || IS_MODS_AMODEL(pGpu)
    kchannelGetEngine_HAL(pGpu, pKernelChannel, &engineDesc) == NV_OK
    pNotifierType != NULL
    kchannelFwdToInternalCtrl_HAL(pGpu, pKernelChannel, NVA06F_CTRL_CMD_INTERNAL_STOP_CHANNEL, pRmCtrlParams)
    kchannelNotifyRc_HAL(pKernelChannel)
    memdescGetSize(pNotifierMemDesc) >= (notifyIndex + 1) * sizeof(NvNotification)
    pNotifier != NULL
    memdescGetAddressSpace(pNotifierMemDesc) == ADDR_SYSMEM || !kbusIsBarAccessBlocked(pKernelBus)
    notifyIndex < classInfo.notifiersMaxCount
    clientGetResourceRef(pClient, hKernelChannel, &pResourceRef)
    pParentRef != NULL
    (pParentRef->hResource == hParent) || (RES_GET_HANDLE(GPU_RES_GET_DEVICE(pKernelChannel)) == hParent)
    kfifoGetUserdLocation_HAL(pKernelFifo, &userdAperture, &userdAttribute)
    pGpu->numUserKernelChannel > 0
    confComputeUpdateFreedChannelStats(pGpu, pConfCompute, pKernelChannel)
    confComputeGetKeyPairByChannel(pGpu, pConfCompute, pKernelChannel, &h2dKey, &d2hKey)
    kchannelSetKeyRotationNotifier_HAL(pGpu, pKernelChannel, NV_FALSE)
    pKernelChannel->refCount == 1
    confComputeCheckAndPerformKeyRotation(pGpu, pConfCompute, h2dKey, d2hKey)
    FLD_TEST_DRF(OS04, _FLAGS, _CHANNEL_TYPE, _PHYSICAL, flags)
    refFindAncestorOfType(pResourceRef, classId(Device), &pDeviceRef)

  Calls: memdescIsSubMemoryMemDesc, kchannelRotateSecureChannelIv_46f6a7, confComputeGetKeyRotationThreshold_IMPL, confComputeKeyStoreDeriveViaChannel_DISPATCH, kchannelGetFromDualHandle_IMPL, CliGetChannelGroup, kchannelGetUserdInfo_DISPATCH, kchannelSetCpuMapped, notifyFillNotifier, portSafeMulU64, kchannelCheckBcStateCurrent_IMPL, kfifoGetCtxBufferMapFlags_DISPATCH, kchannelGetGoldenCtxUpdateFlags_IMPL, dmaMapBuffer_DISPATCH, vaListAddVa, _kchannelClearVAList, vaListGetManaged, kchangrpAllocEngineContextDescriptor_IMPL, memdescAddRef, vaListSetManaged, dmaUnmapBuffer_DISPATCH, vaListClear, kchannelGetChildIterator, kfifoDeleteObject_56cd7a, kfifoAddObject_56cd7a, kchannelUpdateWorkSubmitTokenNotifIndex_IMPL, kchannelNotifyWorkSubmitToken_IMPL, kchangrpSetInterleaveLevel_IMPL, gpuXlateClientEngineIdToEngDesc_IMPL, kchannelBindToRunlist_IMPL, krcErrorSetNotifier_IMPL, kchannelIsSchedulable_IMPL, kchannelFwdToInternalCtrl_56cd7a, kchannelGetClassEngineID_DISPATCH, kmigmgrIsEnginePartitionable_IMPL, kchannelGetCid, _kchannelAllocHalData, kchannelCreateUserdMemDescBc_DISPATCH, _kchannelDescribeMemDescsHeavySriov, _kchannelDescribeMemDescsFromParams, kchannelAllocMem_DISPATCH, _kchannelgetVerifFlags, kfifoSetupUserD_DISPATCH, _kchannelFreeHalData, kchannelDestroyMem_DISPATCH, kchannelGetEngine_DISPATCH, kchannelNotifyRc_IMPL, _kchannelGetKeyRotationNotifier, notifyFillNvNotification, kchannelIsValid_88bc07, kfifoIsUserdMapDmaSupported, _kchannelGetUserMemDesc, kfifoGetUserdLocation_DISPATCH, kchannelUnmapUserD_IMPL, gpuresGetByDeviceOrSubdeviceHandle_IMPL, kchannelMapUserD_IMPL, _kchannelUpdateFifoMapping, gpuRusdRequestPermanentDataPoll_IMPL, confComputeUpdateFreedChannelStats_IMPL, kchannelSetKeyRotationNotifier_DISPATCH, kgrctxFromKernelChannel_IMPL, kgrctxIsValid_IMPL, shrkgrctxDetach_IMPL, kgrctxGetShared, kchangrpRemoveChannel_IMPL, ctxBufPoolRelease, kchannelFreeHwID_DISPATCH, kchannelFreeMmuExceptionInfo_IMPL, confComputeCheckAndPerformKeyRotation_IMPL

  Identifiers: *pEncStatsBufMemDesc, pNotifierMemDesc, pKeyRotationNotifierMemDesc, *pKeyRotationNotifier, *pKeyRotationNotifierMemDesc, swState, *pMmuExceptionData, *pRotateIvParams, *pContext, *ChID, pChanGrpRef, pChannelStateParams, *notifyIndex, notifyStatus, pDmaMappingInfo, *pGVAS, *pTempMemDesc, *vaListFindVa, bcStateCurrent, pEngCtxDesc, *pVas, pVaList, simple, *pHead, channelNode, *pIt, rsIter, firstObjectClassID, pMatchingObject, pInterleaveLevel, channelInterleaveLevel, pSetErrorNotifierParams, pResetChannelParams, bIsRcPending, pResetIsolatedChannelParams, contextId, pRpcParams, pChannelGpfifoParams, hContextShare, hObjectEccError, hPhysChannelGroup, internalFlags, *encryptIv, *decryptIv, hmacNonce, instanceMem, ramfcMem, userdMem, mthdbufMem, ProcessID, SubProcessID, subDevInst, memInfoParams, chMemInfo, apert, gfId, pNotifierType, subdeviceInstance, *KernelVAddr, bMemEndTransfer, physicalChannelID, notifiersMaxCount, rcNotifierIndex, classType, pParentRef, pScopeRef, *pSrcGpu, bCheckKeyRotation, *pErrContextMemDesc, *pEccErrContextMemDesc, *pKernelChannelGroupApi, bIsContextBound, nextObjectClassID, vaSpaceId, goldenCtxUpdateFlags
classId(Device), &pDeviceRef)*vgpuGetCallingContextGfid(pGpu, &callingContextGfid)**vgpuGetCallingContextGfid(pGpu, &callingContextGfid)*bUvmOwnedFlag**pUserInfo*bGspOwned*!pKernelChannel->bGspOwned || RMCFG_FEATURE_PLATFORM_GSP**!pKernelChannel->bGspOwned || RMCFG_FEATURE_PLATFORM_GSP*call to hypervisorCheckForObjectAccess_IMPL*bUvmOwned*NVRM: Both context share and vaspace handles can't be valid at the same time **NVRM: Both context share and vaspace handles can't be valid at the same time *call to pmaQueryConfigs*pmaQueryConfigs(pHeap->pPmaObject, &pmaConfigs)**pmaQueryConfigs(pHeap->pPmaObject, &pmaConfigs)*bTopLevelScrubberEnabled*bTopLevelScrubberConstructed*NVRM: Channel allocation not allowed when MIG is enabled without GPU instancing **NVRM: Channel allocation not allowed when MIG is enabled without GPU instancing *call to clientGetResourceRefByType_IMPL*NVRM: Non-TSG channels can't use context share **NVRM: Non-TSG channels can't use context share *tsgParams*pRmApi->AllocWithSecInfo(pRmApi, hClient, hParent, &pChannelGpfifoParams->hPhysChannelGroup, KEPLER_CHANNEL_GROUP_A, NV_PTR_TO_NvP64(&tsgParams), sizeof(tsgParams), RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)**pRmApi->AllocWithSecInfo(pRmApi, hClient, hParent, &pChannelGpfifoParams->hPhysChannelGroup, KEPLER_CHANNEL_GROUP_A, NV_PTR_TO_NvP64(&tsgParams), sizeof(tsgParams), RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)*bTsgAllocated*clientGetResourceRefByType(pRsClient, hChanGrp, classId(KernelChannelGroupApi), &pChanGrpRef)**clientGetResourceRefByType(pRsClient, hChanGrp, classId(KernelChannelGroupApi), &pChanGrpRef)*bAllocatedByRm*NVRM: Invalid KernelChannelGroup* for channel 0x%x **NVRM: Invalid KernelChannelGroup* for channel 0x%x *NVRM: TSG channels can't use an explicit vaspace **NVRM: TSG channels can't use an explicit vaspace *pKernelChannelGroupApi != NULL**pKernelChannelGroupApi != NULL*NVRM: Only kernel priv clients can skip scrubber **NVRM: Only kernel 
priv clients can skip scrubber *call to ctxBufPoolSetScrubSkip*NVRM: Skipping scrubber for all allocations on this context **NVRM: Skipping scrubber for all allocations on this context *call to ctxBufPoolIsScrubSkipped*bIsScrubSkipped*NVRM: Mismatch between channel and parent TSG's policy on skipping scrubber **NVRM: Mismatch between channel and parent TSG's policy on skipping scrubber *NVRM: scrubbing %s skipped for TSG and %s for channel **NVRM: scrubbing %s skipped for TSG and %s for channel *is**is*is not**is not*bufInfo*kfifoGetInstMemInfo_HAL(pKernelFifo, &bufInfo.size, &bufInfo.align, NULL, NULL, NULL)**kfifoGetInstMemInfo_HAL(pKernelFifo, &bufInfo.size, &bufInfo.align, NULL, NULL, NULL)*call to ctxBufPoolReserve*ctxBufPoolReserve(pGpu, pChannelBufPool, &bufInfo, 1)**ctxBufPoolReserve(pGpu, pChannelBufPool, &bufInfo, 1)*NVRM: Not using ctx buf pool **NVRM: Not using ctx buf pool *rmDeviceGpuLocksAcquire(pGpu, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_FIFO)**rmDeviceGpuLocksAcquire(pGpu, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_FIFO)*bLockAcquired*hEccErrorContext*errorContextType*call to kchannelGetNotifierInfo*kchannelGetNotifierInfo(pGpu, pDevice, pKernelChannel->hErrorContext, &pKernelChannel->pErrContextMemDesc, &pKernelChannel->errorContextType, &pKernelChannel->errorContextOffset)**kchannelGetNotifierInfo(pGpu, pDevice, pKernelChannel->hErrorContext, &pKernelChannel->pErrContextMemDesc, &pKernelChannel->errorContextType, &pKernelChannel->errorContextOffset)*pKernelChannel->errorContextType != ERROR_NOTIFIER_TYPE_NONE**pKernelChannel->errorContextType != ERROR_NOTIFIER_TYPE_NONE*eccErrorContextType*kchannelGetNotifierInfo(pGpu, pDevice, pKernelChannel->hEccErrorContext, &pKernelChannel->pEccErrContextMemDesc, &pKernelChannel->eccErrorContextType, &pKernelChannel->eccErrorContextOffset)**kchannelGetNotifierInfo(pGpu, pDevice, pKernelChannel->hEccErrorContext, &pKernelChannel->pEccErrContextMemDesc, &pKernelChannel->eccErrorContextType, 
&pKernelChannel->eccErrorContextOffset)*pKernelChannel->eccErrorContextType != ERROR_NOTIFIER_TYPE_NONE**pKernelChannel->eccErrorContextType != ERROR_NOTIFIER_TYPE_NONE*pKernelChannel->pErrContextMemDesc**pKernelChannel->pErrContextMemDesc*errorNotifierMem*pKernelChannel->pEccErrContextMemDesc**pKernelChannel->pEccErrContextMemDesc*eccErrorNotifierMem*pKernelChannel->errorContextType != ERROR_NOTIFIER_TYPE_UNKNOWN**pKernelChannel->errorContextType != ERROR_NOTIFIER_TYPE_UNKNOWN*pKernelChannel->eccErrorContextType != ERROR_NOTIFIER_TYPE_UNKNOWN**pKernelChannel->eccErrorContextType != ERROR_NOTIFIER_TYPE_UNKNOWN*NVRM: All channels in a channel group must specify a CONTEXT_SHARE if any one of them specifies it **NVRM: All channels in a channel group must specify a CONTEXT_SHARE if any one of them specifies it *clientGetResourceRefByType(pRsClient, hKernelCtxShare, classId(KernelCtxShareApi), &pKernelCtxShareRef)**clientGetResourceRefByType(pRsClient, hKernelCtxShare, classId(KernelCtxShareApi), &pKernelCtxShareRef)*pKernelCtxShareRef*pKernelCtxShareRef->pParentRef != NULL && pKernelCtxShareRef->pParentRef->hResource == hParent**pKernelCtxShareRef->pParentRef != NULL && pKernelCtxShareRef->pParentRef->hResource == hParent*call to kchangrpapiSetLegacyMode_IMPL*kchangrpapiSetLegacyMode(pKernelChannelGroupApi, pGpu, pKernelFifo, hClient)**kchangrpapiSetLegacyMode(pKernelChannelGroupApi, pGpu, pKernelFifo, hClient)*subctxFlag*hLegacyKernelCtxShare*clientGetResourceRefByType(pRsClient, hLegacyKernelCtxShare, classId(KernelCtxShareApi), &pKernelCtxShareRef)**clientGetResourceRefByType(pRsClient, hLegacyKernelCtxShare, classId(KernelCtxShareApi), &pKernelCtxShareRef)**pKernelCtxShareApi*pKernelChannel->pKernelCtxShareApi != NULL**pKernelChannel->pKernelCtxShareApi != NULL*pKernelChannel->pKernelCtxShareApi->pShareData != NULL**pKernelChannel->pKernelCtxShareApi->pShareData != NULL*pKernelChannel->pVAS != NULL**pKernelChannel->pVAS != NULL*call to 
kfifoIsPerRunlistChramSupportedInHw*RM_ENGINE_TYPE_IS_VALID(pKernelChannelGroup->engineType)**RM_ENGINE_TYPE_IS_VALID(pKernelChannelGroup->engineType)*globalRmEngineType*kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, globalRmEngineType, &globalRmEngineType)**kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, globalRmEngineType, &globalRmEngineType)*NVRM: Engine type of channel = 0x%x (0x%x) not compatible with engine type of TSG = 0x%x (0x%x) **NVRM: Engine type of channel = 0x%x (0x%x) not compatible with engine type of TSG = 0x%x (0x%x) *call to kfifoGetDefaultRunlist_DISPATCH*bCCSecureChannel*bUseScrubKey*call to confComputeAcceptClientRequest_IMPL*confComputeKeyStoreRetrieveViaChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, CHANNEL_IV_OPERATION_INCLUDE_ONLY, &pKernelChannel->clientKmb)**confComputeKeyStoreRetrieveViaChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, CHANNEL_IV_OPERATION_INCLUDE_ONLY, &pKernelChannel->clientKmb)*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, &pKernelChannel->partitionRef)**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, &pKernelChannel->partitionRef)*call to kchannelAllocHwID_DISPATCH*NVRM: Error in Allocating channel id 0x%x for hClient 0x%x hKernelChannel 0x%x **NVRM: Error in Allocating channel id 0x%x for hClient 0x%x hKernelChannel 0x%x *bChidAllocated*call to _kchannelSendChannelAllocRpc*_kchannelSendChannelAllocRpc(pKernelChannel, pChannelGpfifoParams, pKernelChannelGroup, bFullSriov)**_kchannelSendChannelAllocRpc(pKernelChannel, pChannelGpfifoParams, pKernelChannelGroup, bFullSriov)*bRpcAllocated*call to _kchannelAllocOrDescribeInstMem*_kchannelAllocOrDescribeInstMem(pKernelChannel, pChannelGpfifoParams)**_kchannelAllocOrDescribeInstMem(pKernelChannel, pChannelGpfifoParams)*call to kchangrpAddChannel_IMPL*kchangrpAddChannel(pGpu, pKernelChannelGroup, pKernelChannel)**kchangrpAddChannel(pGpu, pKernelChannelGroup, 
pKernelChannel)*bAddedToGroup*kfifoRunlistSetId_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), pKernelChannel, pKernelChannelGroup->runlistId)**kfifoRunlistSetId_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), pKernelChannel, pKernelChannelGroup->runlistId)*call to kchannelAllocChannel_56cd7a*kchannelAllocChannel_HAL(pKernelChannel, pChannelGpfifoParams)**kchannelAllocChannel_HAL(pKernelChannel, pChannelGpfifoParams)*call to gpuacctSetProcType_IMPL*confComputeKeyStoreDeriveViaChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, &pKernelChannel->clientKmb)**confComputeKeyStoreDeriveViaChannel_HAL(pConfCompute, pKernelChannel, ROTATE_IV_ALL_VALID, &pKernelChannel->clientKmb)*pTempKernelFifo*kfifoRunlistSetId_HAL(pGpu, pTempKernelFifo, pKernelChannel, pKernelChannel->runlistId)**kfifoRunlistSetId_HAL(pGpu, pTempKernelFifo, pKernelChannel, pKernelChannel->runlistId)*refAddDependant(pChanGrpRef, pResourceRef)**refAddDependant(pChanGrpRef, pResourceRef)*clientGetResourceRef(pRsClient, pChannelGpfifoParams->hVASpace, &pVASpaceRef)**clientGetResourceRef(pRsClient, pChannelGpfifoParams->hVASpace, &pVASpaceRef)*pVASpaceRef != NULL**pVASpaceRef != NULL*refAddDependant(pVASpaceRef, pResourceRef)**refAddDependant(pVASpaceRef, pResourceRef)*refAddDependant(RES_GET_REF(pKernelChannel->pKernelCtxShareApi), pResourceRef)**refAddDependant(RES_GET_REF(pKernelChannel->pKernelCtxShareApi), pResourceRef)*kgrctxFromKernelChannel(pKernelChannel, &pKernelGraphicsContext)**kgrctxFromKernelChannel(pKernelChannel, &pKernelGraphicsContext)*refAddDependant(RES_GET_REF(pKernelGraphicsContext), pResourceRef)**refAddDependant(RES_GET_REF(pKernelGraphicsContext), pResourceRef)*call to _kchannelNotifyOfChid*_kchannelNotifyOfChid(pGpu, pKernelChannel, pRsClient)**_kchannelNotifyOfChid(pGpu, pKernelChannel, pRsClient)*kchannelSetKeyRotationNotifier_HAL(pGpu, pKernelChannel, NV_TRUE)**kchannelSetKeyRotationNotifier_HAL(pGpu, pKernelChannel, NV_TRUE)*call to 
refRemoveDependant*pKernelChannelGroup->ppEngCtxDesc[subdeviceInstance] != NULL*src/kernel/gpu/fifo/kernel_channel_group.c**pKernelChannelGroup->ppEngCtxDesc[subdeviceInstance] != NULL**src/kernel/gpu/fifo/kernel_channel_group.c*call to vaListInit*vaListInit(&pKernelChannelGroup->ppEngCtxDesc[subdeviceInstance]->vaList)**vaListInit(&pKernelChannelGroup->ppEngCtxDesc[subdeviceInstance]->vaList)*pEngCtxDescriptor**pEngCtxDescriptor*call to vaListDestroy***ppEngCtxDesc*call to kchangrpSetInterleaveLevelSched_56cd7a*kchangrpSetInterleaveLevelSched(pGpu, pKernelChannelGroup, value)**kchangrpSetInterleaveLevelSched(pGpu, pKernelChannelGroup, value)*call to kfifoChannelListRemove_IMPL*NVRM: Could not remove channel from channel list **NVRM: Could not remove channel from channel list *NVRM: Channelcount in channel group not right!!! **NVRM: Channelcount in channel group not right!!! *call to kfifoGetMaxChannelGroupSize_DISPATCH*maxChanCount*NVRM: There are already max %d channels in this group **NVRM: There are already max %d channels in this group *pKernelCtxShare != NULL**pKernelCtxShare != NULL*pKernelChannelGroup->runlistId == kchannelGetRunlistId(pKernelChannel)**pKernelChannelGroup->runlistId == kchannelGetRunlistId(pKernelChannel)*NVRM: channel 0x%08x within TSG 0x%x is using subcontext 0x%x **NVRM: channel 0x%08x within TSG 0x%x is using subcontext 0x%x *call to kfifoChannelListAppend_IMPL*NVRM: Could not add channel to channel list **NVRM: Could not add channel to channel list *kchangrpSetInterleaveLevel(pGpu, pKernelChannelGroup, pKernelChannelGroup->pInterleaveLevel[subdevInst])**kchangrpSetInterleaveLevel(pGpu, pKernelChannelGroup, pKernelChannelGroup->pInterleaveLevel[subdevInst])*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, pKernelChannelGroup->engineType, ENGINE_INFO_TYPE_RUNLIST, &runlistId)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, pKernelChannelGroup->engineType, ENGINE_INFO_TYPE_RUNLIST, 
&runlistId)*pKernelChannelGroup->chanCount == 0**pKernelChannelGroup->chanCount == 0*call to kfifoChannelGroupGetLocalMaxSubcontext_DISPATCH*maxSubctx == kfifoChannelGroupGetLocalMaxSubcontext_HAL( pGpu, pKernelFifo, pKernelChannelGroup, pKernelChannelGroup->bLegacyMode)**maxSubctx == kfifoChannelGroupGetLocalMaxSubcontext_HAL( pGpu, pKernelFifo, pKernelChannelGroup, pKernelChannelGroup->bLegacyMode)*maxSubctx == numFreeSubctx**maxSubctx == numFreeSubctx**pSubctxIdHeap*pVaSpaceIdHeap**pVaSpaceIdHeap*call to _kchangrpFreeAllEngCtxDescs*call to kfifoChannelListDestroy_IMPL**pChanList*pChanGrpTree*pKernelChannelGroupTemp**pKernelChannelGroupTemp*NVRM: Could not find channel group %d **NVRM: Could not find channel group %d *call to kfifoChidMgrFreeChannelGroupHwID_IMPL*call to kchangrpUnmapFaultMethodBuffers_DISPATCH**pMthdBuffers*ppSubctxMask**ppSubctxMask***ppSubctxMask*ppZombieSubctxMask**ppZombieSubctxMask***ppZombieSubctxMask*pStateMask**pStateMask**pInterleaveLevel*(pKernelChannelGroup->ppSubctxMask != NULL && pKernelChannelGroup->ppZombieSubctxMask != NULL && pKernelChannelGroup->pStateMask != NULL && pKernelChannelGroup->pInterleaveLevel != NULL)**(pKernelChannelGroup->ppSubctxMask != NULL && pKernelChannelGroup->ppZombieSubctxMask != NULL && pKernelChannelGroup->pStateMask != NULL && pKernelChannelGroup->pInterleaveLevel != NULL)*call to kfifoChidMgrAllocChannelGroupHwID_IMPL*kfifoChidMgrAllocChannelGroupHwID(pGpu, pKernelFifo, pChidMgr, &grpID)**kfifoChidMgrAllocChannelGroupHwID(pGpu, pKernelFifo, pChidMgr, &grpID)*call to kfifoChannelGroupGetDefaultTimeslice_DISPATCH*call to kfifoChannelGroupSetTimeslice_IMPL*kfifoChannelGroupSetTimeslice(pGpu, pKernelFifo, pKernelChannelGroup, pKernelChannelGroup->timesliceUs, NV_TRUE)**kfifoChannelGroupSetTimeslice(pGpu, pKernelFifo, pKernelChannelGroup, pKernelChannelGroup->timesliceUs, NV_TRUE)*call to kfifoChannelListCreate_IMPL*kfifoChannelListCreate(pGpu, pKernelFifo, 
&pKernelChannelGroup->pChanList)**kfifoChannelListCreate(pGpu, pKernelFifo, &pKernelChannelGroup->pChanList)*pKernelChannelGroup->ppEngCtxDesc != NULL**pKernelChannelGroup->ppEngCtxDesc != NULL*pKernelChannelGroup->pSubctxIdHeap != NULL**pKernelChannelGroup->pSubctxIdHeap != NULL*maxSubctx*pKernelChannelGroup->pVaSpaceIdHeap != NULL**pKernelChannelGroup->pVaSpaceIdHeap != NULL*(runQueues > 0)**(runQueues > 0)*pKernelChannelGroup->pMthdBuffers != NULL**pKernelChannelGroup->pMthdBuffers != NULL*call to kchangrpAllocFaultMethodBuffers_DISPATCH*NVRM: Fault method buffer allocation failed for group ID 0x%0x with status 0x%0x **NVRM: Fault method buffer allocation failed for group ID 0x%0x with status 0x%0x *bMapFaultMthdBuffers*call to kchangrpMapFaultMethodBuffers_DISPATCH*NVRM: Fault method buffer BAR2 mapping failed for group ID 0x%0x with status 0x%0x **NVRM: Fault method buffer BAR2 mapping failed for group ID 0x%0x with status 0x%0x *tsgUniqueId*tsgInterleaveLevel*src/kernel/gpu/fifo/kernel_channel_group_api.c**src/kernel/gpu/fifo/kernel_channel_group_api.c*(pClass != NULL)**(pClass != NULL)*tsgID**pTsParams*localEngineType*kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, localEngineType, &globalEngineType)**kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, localEngineType, &globalEngineType)*NVRM: Binding TSG %d to Engine %d (%d) **NVRM: Binding TSG %d to Engine %d (%d) *gpuXlateClientEngineIdToEngDesc(pGpu, globalEngineType, &engineDesc)**gpuXlateClientEngineIdToEngDesc(pGpu, globalEngineType, &engineDesc)*kfifoEngineInfoXlate_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), ENGINE_INFO_TYPE_ENG_DESC, engineDesc, ENGINE_INFO_TYPE_RUNLIST, &pKernelChannelGroupApi->pKernelChannelGroup->runlistId)**kfifoEngineInfoXlate_HAL(pGpu, GPU_GET_KERNEL_FIFO(pGpu), ENGINE_INFO_TYPE_ENG_DESC, engineDesc, ENGINE_INFO_TYPE_RUNLIST, &pKernelChannelGroupApi->pKernelChannelGroup->runlistId)*kchannelBindToRunlist(pChanNode->pKernelChannel, localEngineType, 
engineDesc)**kchannelBindToRunlist(pChanNode->pKernelChannel, localEngineType, engineDesc)**pChanNode*kchannelIsSchedulable_HAL(pGpu, pChanNode->pKernelChannel)**kchannelIsSchedulable_HAL(pGpu, pChanNode->pKernelChannel)*NVRM: Channels in TSG %d have different runlist IDs this should never happen! **NVRM: Channels in TSG %d have different runlist IDs this should never happen! **pSchedParams*pKernelChannelGroup->pSubctxIdHeap->eheapGetSize( pKernelChannelGroup->pSubctxIdHeap, &numMax)**pKernelChannelGroup->pSubctxIdHeap->eheapGetSize( pKernelChannelGroup->pSubctxIdHeap, &numMax)*pKernelChannelGroup->pSubctxIdHeap->eheapGetFree( pKernelChannelGroup->pSubctxIdHeap, &numFree)**pKernelChannelGroup->pSubctxIdHeap->eheapGetFree( pKernelChannelGroup->pSubctxIdHeap, &numFree)*numMax == kfifoChannelGroupGetLocalMaxSubcontext_HAL(pGpu, pKernelFifo, pKernelChannelGroup, NV_FALSE)**numMax == kfifoChannelGroupGetLocalMaxSubcontext_HAL(pGpu, pKernelFifo, pKernelChannelGroup, NV_FALSE)*numMax == numFree && numMax != 0**numMax == numFree && numMax != 0*numMax == numFree**numMax == numFree*maxSubctx == 1 || maxSubctx == 2**maxSubctx == 1 || maxSubctx == 2*hkCtxShare*kctxshareParams*pRmApi->AllocWithSecInfo(pRmApi, hClient, hTsg, &hkCtxShare, FERMI_CONTEXT_SHARE_A, NV_PTR_TO_NvP64(&kctxshareParams), sizeof(kctxshareParams), RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)**pRmApi->AllocWithSecInfo(pRmApi, hClient, hTsg, &hkCtxShare, FERMI_CONTEXT_SHARE_A, NV_PTR_TO_NvP64(&kctxshareParams), sizeof(kctxshareParams), RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)*kctxshareParams.subctxId == NV_CTXSHARE_ALLOCATION_FLAGS_SUBCONTEXT_SYNC**kctxshareParams.subctxId == NV_CTXSHARE_ALLOCATION_FLAGS_SUBCONTEXT_SYNC*hLegacykCtxShareSync*kctxshareParams.subctxId == NV_CTXSHARE_ALLOCATION_FLAGS_SUBCONTEXT_ASYNC**kctxshareParams.subctxId == NV_CTXSHARE_ALLOCATION_FLAGS_SUBCONTEXT_ASYNC*hLegacykCtxShareAsync*numFree == 0**numFree == 0*pRmApi->DupObject(pRmApi, 
RES_GET_CLIENT_HANDLE(pChanGrpDest), RES_GET_HANDLE(pChanGrpDest), &pChanGrpDest->hLegacykCtxShareSync, RES_GET_CLIENT_HANDLE(pKernelChannelGroupApi), pKernelChannelGroupApi->hLegacykCtxShareSync, 0)**pRmApi->DupObject(pRmApi, RES_GET_CLIENT_HANDLE(pChanGrpDest), RES_GET_HANDLE(pChanGrpDest), &pChanGrpDest->hLegacykCtxShareSync, RES_GET_CLIENT_HANDLE(pKernelChannelGroupApi), pKernelChannelGroupApi->hLegacykCtxShareSync, 0)*pRmApi->DupObject(pRmApi, RES_GET_CLIENT_HANDLE(pChanGrpDest), RES_GET_HANDLE(pChanGrpDest), &pChanGrpDest->hLegacykCtxShareAsync, RES_GET_CLIENT_HANDLE(pKernelChannelGroupApi), pKernelChannelGroupApi->hLegacykCtxShareAsync, 0)**pRmApi->DupObject(pRmApi, RES_GET_CLIENT_HANDLE(pChanGrpDest), RES_GET_HANDLE(pChanGrpDest), &pChanGrpDest->hLegacykCtxShareAsync, RES_GET_CLIENT_HANDLE(pKernelChannelGroupApi), pKernelChannelGroupApi->hLegacykCtxShareAsync, 0)*NVRM: Failed to set channel group in legacy mode. **NVRM: Failed to set channel group in legacy mode. *ppChanGrpRef**ppChanGrpRef*pChanGrpSrc*call to serverRefShare*call to serverutilRefIter*pDstRef*pVaspaceRef**pVaspaceRef**pVaspaceApi*pVaspaceApi != NULL**pVaspaceApi != NULL*pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, &pKernelChannelGroupApi->hKernelGraphicsContext, pParams->pSrcClient->hClient, pChanGrpSrc->hKernelGraphicsContext, 0)**pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, &pKernelChannelGroupApi->hKernelGraphicsContext, pParams->pSrcClient->hClient, pChanGrpSrc->hKernelGraphicsContext, 0)*pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, &pKernelChannelGroupApi->hLegacykCtxShareSync, pParams->pSrcClient->hClient, pChanGrpSrc->hLegacykCtxShareSync, 0)**pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, &pKernelChannelGroupApi->hLegacykCtxShareSync, pParams->pSrcClient->hClient, pChanGrpSrc->hLegacykCtxShareSync, 0)*pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, 
&pKernelChannelGroupApi->hLegacykCtxShareAsync, pParams->pSrcClient->hClient, pChanGrpSrc->hLegacykCtxShareAsync, 0)**pRmApi->DupObject(pRmApi, pDstClient->hClient, pDstRef->hResource, &pKernelChannelGroupApi->hLegacykCtxShareAsync, pParams->pSrcClient->hClient, pChanGrpSrc->hLegacykCtxShareAsync, 0)*bRpcFree*call to listAppendValue_IMPL**call to listAppendValue_IMPL*call to serverGetShareRefCount*call to listRemoveFirstByValue_IMPL*call to kchangrpSetRealtime_56cd7a*call to kchannelGetIter*call to kchangrpDestroy_IMPL*call to ctxBufPoolDestroy*NVRM: grpID 0x%x handle 0x%x cmd 0x%x **NVRM: grpID 0x%x handle 0x%x cmd 0x%x *NVRM: hClient: 0x%x, hParent: 0x%x, hObject:0x%x, hClass: 0x%x **NVRM: hClient: 0x%x, hParent: 0x%x, hObject:0x%x, hClass: 0x%x *call to kchangrpapiCopyConstruct_IMPL*NVRM: TSG alloc should be called without acquiring GPU lock **NVRM: TSG alloc should be called without acquiring GPU lock *bufInfoList**bufInfoList*call to serverAllocShareWithHalspecParent*serverAllocShareWithHalspecParent(&g_resServ, classInfo(KernelChannelGroup), &pShared, staticCast(pGpu, Object))**serverAllocShareWithHalspecParent(&g_resServ, classInfo(KernelChannelGroup), &pShared, staticCast(pGpu, Object))*NVRM: Invalid client handle! **NVRM: Invalid client handle! *NVRM: Invalid parent/device handle! **NVRM: Invalid parent/device handle! *NVRM: Valid engine Id must be specified while allocating TSGs or bare channels! **NVRM: Valid engine Id must be specified while allocating TSGs or bare channels! 
*kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, rmEngineType, &rmEngineType)**kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, rmEngineType, &rmEngineType)*vaspaceGetByHandleOrDeviceDefault(pClient, pParams->hParent, hVASpace, &pVAS)**vaspaceGetByHandleOrDeviceDefault(pClient, pParams->hParent, hVASpace, &pVAS)*call to serverIsClientInternal*call to ctxBufPoolInit*ctxBufPoolInit(pGpu, pHeap, &pKernelChannelGroup->pCtxBufPool)**ctxBufPoolInit(pGpu, pHeap, &pKernelChannelGroup->pCtxBufPool)*ctxBufPoolInit(pGpu, pHeap, &pKernelChannelGroup->pChannelBufPool)**ctxBufPoolInit(pGpu, pHeap, &pKernelChannelGroup->pChannelBufPool)*NVRM: Skipping ctxBufPoolInit for RC watchdog **NVRM: Skipping ctxBufPoolInit for RC watchdog *call to kchangrpInit_IMPL*kchangrpInit(pGpu, pKernelChannelGroup, pVAS, gfid)**kchangrpInit(pGpu, pKernelChannelGroup, pVAS, gfid)*kchangrpSetInterleaveLevel(pGpu, pKernelChannelGroup, NVA06C_CTRL_INTERLEAVE_LEVEL_MEDIUM)**kchangrpSetInterleaveLevel(pGpu, pKernelChannelGroup, NVA06C_CTRL_INTERLEAVE_LEVEL_MEDIUM)*call to kgraphicsDiscoverMaxLocalCtxBufferSize_IMPL*call to kgrmgrIsCtxBufSupported_IMPL*NVRM: Reserving 0x%llx bytes for GR ctx bufId = %d **NVRM: Reserving 0x%llx bytes for GR ctx bufId = %d *localValue*bReserveMem*gpuXlateClientEngineIdToEngDesc(pGpu, pKernelChannelGroup->engineType, &engDesc)**gpuXlateClientEngineIdToEngDesc(pGpu, pKernelChannelGroup->engineType, &engDesc)*NVRM: Reserving 0x%llx bytes for engineType %d (%d) flcn ctx buffer **NVRM: Reserving 0x%llx bytes for engineType %d (%d) flcn ctx buffer *NVRM: No buffer reserved for engineType %d (%d) in ctx_buf_pool **NVRM: No buffer reserved for engineType %d (%d) in ctx_buf_pool *pRmApi->AllocWithSecInfo(pRmApi, pParams->hClient, RES_GET_HANDLE(pKernelChannelGroupApi), &pKernelChannelGroupApi->hKernelGraphicsContext, KERNEL_GRAPHICS_CONTEXT, NvP64_NULL, 0, RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)**pRmApi->AllocWithSecInfo(pRmApi, 
pParams->hClient, RES_GET_HANDLE(pKernelChannelGroupApi), &pKernelChannelGroupApi->hKernelGraphicsContext, KERNEL_GRAPHICS_CONTEXT, NvP64_NULL, 0, RMAPI_ALLOC_FLAGS_SKIP_RPC, NvP64_NULL, &pRmApi->defaultSecInfo)*NVRM: Adding group Id: %d hClient:0x%x **NVRM: Adding group Id: %d hClient:0x%x *NVRM: KernelChannelGroupApi alloc RPC to vGpu Host failed **NVRM: KernelChannelGroupApi alloc RPC to vGpu Host failed *pMthdBuffer**pMthdBuffer*methodBufferMemdesc**methodBufferMemdesc**bar2Addr*runqueueIdx*numValidEntries*NVRM: Control call to update method buffer memdesc failed **NVRM: Control call to update method buffer memdesc failed *call to kchangrpSetSubcontextZombieState_b3696a*call to kchangrpUpdateSubcontextMask_b3696a*ctxBufPoolReserve(pGpu, pKernelChannelGroup->pCtxBufPool, bufInfoList, bufCount)**ctxBufPoolReserve(pGpu, pKernelChannelGroup->pCtxBufPool, bufInfoList, bufCount)*pBlock == NULL*src/kernel/gpu/fifo/kernel_ctxshare.c**pBlock == NULL**src/kernel/gpu/fifo/kernel_ctxshare.c**pShared*refcnt == 1**refcnt == 1*pKernelChannelGroup == pKernelChannelGroupApi->pKernelChannelGroup**pKernelChannelGroup == pKernelChannelGroupApi->pKernelChannelGroup*pVaSpaceEntry**pVaSpaceEntry*pVaSpaceEntry != NULL && pVaSpaceEntry->refCount != 0**pVaSpaceEntry != NULL && pVaSpaceEntry->refCount != 0*NVRM: VASpace map entry not found. **NVRM: VASpace map entry not found. 
*call to kctxshareDestroy_56cd7a*NVRM: Subcontext ID heap free failed with status = %s (0x%x) **NVRM: Subcontext ID heap free failed with status = %s (0x%x) *NVRM: VASpace ID heap free failed with status = %s (0x%x) **NVRM: VASpace ID heap free failed with status = %s (0x%x) *NVRM: Freed Context Share 0x%p with id 0x%x **NVRM: Freed Context Share 0x%p with id 0x%x *NVRM: Failed to free Context Share 0x%p with id 0x%x **NVRM: Failed to free Context Share 0x%p with id 0x%x *pVAS != NULL**pVAS != NULL*sbctxHeapFlag*subctxOffset*origSbctxRangeLo*origSbctxRangeHi*origSbctxRangeLo == 0**origSbctxRangeLo == 0*pKernelChannelGroup->pSubctxIdHeap->eheapSetAllocRange(pKernelChannelGroup->pSubctxIdHeap, origSbctxRangeLo, origSbctxRangeHi)**pKernelChannelGroup->pSubctxIdHeap->eheapSetAllocRange(pKernelChannelGroup->pSubctxIdHeap, origSbctxRangeLo, origSbctxRangeHi)*vaSpaceHeapFlag*pVaSpaceBlock*heapOffset*vaSpaceOffset*NVRM: Allocated subctxId: 0x%02llx, vaSpaceId: 0x%02llx **NVRM: Allocated subctxId: 0x%02llx, vaSpaceId: 0x%02llx *pSbctxBlock*call to kctxshareInit_56cd7a*call to kfifoGetMaxLowerSubcontext_DISPATCH*NVRM: New Context Share 0x%p allocated with id 0x%x **NVRM: New Context Share 0x%p allocated with id 0x%x *tmpStatus == NV_OK**tmpStatus == NV_OK*NVRM: Context Share 0x%p allocation with id 0x%x failed, status is %x **NVRM: Context Share 0x%p allocation with id 0x%x failed, status is %x *pKernelCtxShareSrc**pShareData*clientGetResourceRef(pCallContext->pClient, pKernelChannelGroupApi->hKernelGraphicsContext, &pKernelGraphicsContextRef)**clientGetResourceRef(pCallContext->pClient, pKernelChannelGroupApi->hKernelGraphicsContext, &pKernelGraphicsContextRef)*pKernelGraphicsContextRef**pChanGrpRef*pKernelCtxShareApi->pShareData->pKernelChannelGroup == pKernelChannelGroup**pKernelCtxShareApi->pShareData->pKernelChannelGroup == pKernelChannelGroup*NVRM: KernelCtxShareApi Ptr: %p ChanGrp: %p ! **NVRM: KernelCtxShareApi Ptr: %p ChanGrp: %p ! 
*NVRM: kctxshareapiDestruct_IMPL called on KernelCtxShare %p with refcnt %d **NVRM: kctxshareapiDestruct_IMPL called on KernelCtxShare %p with refcnt %d *refcnt >= 1**refcnt >= 1*NVRM: kctxshareapiDestruct_IMPL: KernelCtxShare %p has %d references left **NVRM: kctxshareapiDestruct_IMPL: KernelCtxShare %p has %d references left *NVRM: kctxshareapiDestruct_IMPL: KernelCtxShare %p has no more references, destroying... **NVRM: kctxshareapiDestruct_IMPL: KernelCtxShare %p has no more references, destroying... *call to kctxshareDestroyCommon_IMPL*pParams->status == NV_OK**pParams->status == NV_OK*call to kctxshareapiCopyConstruct_IMPL*(hVASpace == NV01_NULL_OBJECT) || (pDevice->vaMode != NV_DEVICE_ALLOCATION_VAMODE_SINGLE_VASPACE)**(hVASpace == NV01_NULL_OBJECT) || (pDevice->vaMode != NV_DEVICE_ALLOCATION_VAMODE_SINGLE_VASPACE)*NVRM: Constructing Legacy Context Share **NVRM: Constructing Legacy Context Share *hVASpace == NV01_NULL_OBJECT**hVASpace == NV01_NULL_OBJECT*NVRM: Constructing Client Allocated Context Share **NVRM: Constructing Client Allocated Context Share *(rmStatus == NV_OK)**(rmStatus == NV_OK)*(pVAS != NULL)**(pVAS != NULL)*serverAllocShareWithHalspecParent(&g_resServ, classInfo(KernelCtxShare), &pShared, staticCast(pGpu, Object))**serverAllocShareWithHalspecParent(&g_resServ, classInfo(KernelCtxShare), &pShared, staticCast(pGpu, Object))*call to kctxshareInitCommon_IMPL*kctxshareInitCommon(dynamicCast(pShared, KernelCtxShare), pKernelCtxShareApi, pGpu, pVAS, pUserParams->flags, &pUserParams->subctxId, pKernelChannelGroupApi)**kctxshareInitCommon(dynamicCast(pShared, KernelCtxShare), pKernelCtxShareApi, pGpu, pVAS, pUserParams->flags, &pUserParams->subctxId, pKernelChannelGroupApi)**_PBDMA0**_PBDMA1*call to kfifoGetEngineTypeFromPbdmaFaultId_IMPL*kfifoGetEngineTypeFromPbdmaFaultId(pGpu, pKernelFifo, pbdmaFaultId, &rmEngineType) == NV_OK*src/kernel/gpu/fifo/kernel_fifo.c**kfifoGetEngineTypeFromPbdmaFaultId(pGpu, pKernelFifo, pbdmaFaultId, &rmEngineType) == 
NV_OK**src/kernel/gpu/fifo/kernel_fifo.c**GR_HOST0**GR_HOST1**GR_HOST2**GR_HOST3**GR_HOST4**GR_HOST5**GR_HOST6**GR_HOST7*grIdx < RM_ENGINE_TYPE_GR_SIZE**grIdx < RM_ENGINE_TYPE_GR_SIZE*grHostString**grHostString***grHostString*call to kfifoGenerateWorkSubmitTokenHal_DISPATCH*kfifoGenerateWorkSubmitTokenHal_HAL(pGpu, pKernelFifo, pKernelChannel, pGeneratedToken, bUsedForHost)**kfifoGenerateWorkSubmitTokenHal_HAL(pGpu, pKernelFifo, pKernelChannel, pGeneratedToken, bUsedForHost)*call to kfifoGetEnginePbdmaFaultIds_DISPATCH*kfifoGetEnginePbdmaFaultIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)RM_ENGINE_TYPE_GR0, &pPbdmaFaultIds, &numPbdma)**kfifoGetEnginePbdmaFaultIds_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)RM_ENGINE_TYPE_GR0, &pPbdmaFaultIds, &numPbdma)*pPbdmaFaultIds*baseGrFaultId*pKernelFifo->bIsPbdmaMmuEngineIdContiguous**pKernelFifo->bIsPbdmaMmuEngineIdContiguous*call to bitVectorCountTrailingZeros_IMPL*pbdmaFaultIdStart*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_FIFO_GET_NUM_SECURE_CHANNELS, &numSecureChannelsParams, sizeof(numSecureChannelsParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_FIFO_GET_NUM_SECURE_CHANNELS, &numSecureChannelsParams, sizeof(numSecureChannelsParams))*numSecureChannelsParams*maxSec2SecureChannels*maxCeSecureChannels*NV_OK == gpuGetClassList(pGpu, &numClasses, NULL, ENG_KERNEL_FIFO)**NV_OK == gpuGetClassList(pGpu, &numClasses, NULL, ENG_KERNEL_FIFO)*numClasses > 0**numClasses > 0**pClassList*(pClassList != NULL)**(pClassList != NULL)*pBitMask*pBitMask != NULL**pBitMask != NULL**pBitMask*bitMaskSize % sizeof(NvU32) == 0**bitMaskSize % sizeof(NvU32) == 0*call to kfifoGetMaxNumRunlists_DISPATCH*pOutEngineIds*pOutEngineIds != NULL**pOutEngineIds != NULL*pNumEngines != NULL**pNumEngines != NULL*NVRM: Engine list for runlistId 0x%x: **NVRM: Engine list for runlistId 0x%x: 
*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RUNLIST, &thisRunlistId)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RUNLIST, &thisRunlistId)*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)*NVRM: Engine name: %s **NVRM: Engine name: %s **pOutEngineIds*pChidOffset*pChidOffset != NULL**pChidOffset != NULL*pEngineIds**pEngineIds*(pEngineIds != NULL)**(pEngineIds != NULL)*call to kfifoGetEngineListForRunlist_IMPL*kfifoGetEngineListForRunlist(pGpu, pKernelFifo, pChidMgr->runlistId, pEngineIds, &numEngines)**kfifoGetEngineListForRunlist(pGpu, pKernelFifo, pChidMgr->runlistId, pEngineIds, &numEngines)*NV2080_ENGINE_TYPE_IS_VALID(pEngineIds[i])**NV2080_ENGINE_TYPE_IS_VALID(pEngineIds[i])*pChannelCount*call to kfifoProgramChIdTable_DISPATCH*kfifoProgramChIdTable_HAL(pGpu, pKernelFifo, pChidMgr, offset, numChannels, gfid, pMigDevice, engineFifoListNumEntries, pEngineFifoList)**kfifoProgramChIdTable_HAL(pGpu, pKernelFifo, pChidMgr, offset, numChannels, gfid, pMigDevice, engineFifoListNumEntries, pEngineFifoList)*pVChid != NULL**pVChid != NULL*call to vgpuGetCallingContextKernelHostVgpuDevice*vgpuGetCallingContextKernelHostVgpuDevice(pGpu, &pKernelHostVgpuDevice)**vgpuGetCallingContextKernelHostVgpuDevice(pGpu, &pKernelHostVgpuDevice)*pKernelHostVgpuDevice->gfid == gfid**pKernelHostVgpuDevice->gfid == gfid*engineId < (NV_ARRAY_ELEMENTS(pKernelHostVgpuDevice->chidOffset))**engineId < (NV_ARRAY_ELEMENTS(pKernelHostVgpuDevice->chidOffset))*chidOffset**chidOffset*pKernelHostVgpuDevice->chidOffset[engineId] != 0**pKernelHostVgpuDevice->chidOffset[engineId] != 0*sChId >= pKernelHostVgpuDevice->chidOffset[engineId]**sChId >= 
pKernelHostVgpuDevice->chidOffset[engineId]*bRetry*pEntry->pCallback != NULL**pEntry->pCallback != NULL*bHandled*pCallbackParam**pCallbackParam*bMadeProgress*bMadeProgress || status != NV_OK**bMadeProgress || status != NV_OK*bFirstPass*pTemp**pTemp*pPostSchedulingEnableHandlerData**pPostSchedulingEnableHandlerData*pPreSchedulingDisableHandlerData**pPreSchedulingDisableHandlerData*bPostHandlerAlreadyPresent*bPreHandlerAlreadyPresent*!(bPostHandlerAlreadyPresent ^ bPreHandlerAlreadyPresent)**!(bPostHandlerAlreadyPresent ^ bPreHandlerAlreadyPresent)*postEntry***pCallbackParam*call to listPrependValue_IMPL**call to listPrependValue_IMPL*listPrependValue(&pKernelFifo->postSchedulingEnableHandlerList, &postEntry)**listPrependValue(&pKernelFifo->postSchedulingEnableHandlerList, &postEntry)*preEntry*listPrependValue(&pKernelFifo->preSchedulingDisableHandlerList, &preEntry)**listPrependValue(&pKernelFifo->preSchedulingDisableHandlerList, &preEntry)*call to kfifoReturnPushbufferCaps_IMPL*kfifoBitMask*call to gpuIsPipelinedPteMemEnabled*call to kfifoIsUserdInSystemMemory*call to kfifoHostHasLbOverflow*call to kfifoIsWddmInterleavingPolicyEnabled*pKfifoCaps**pKfifoCaps*pKernelGraphicsManager != NULL**pKernelGraphicsManager != NULL*call to kgrmgrGetLegacyKGraphicsStaticInfo*kgrmgrGetLegacyKGraphicsStaticInfo(pGpu, pKernelGraphicsManager)->bInitialized**kgrmgrGetLegacyKGraphicsStaticInfo(pGpu, pKernelGraphicsManager)->bInitialized*pGrInfo*kgrmgrGetLegacyKGraphicsStaticInfo(pGpu, pKernelGraphicsManager)->pGrInfo != NULL**kgrmgrGetLegacyKGraphicsStaticInfo(pGpu, pKernelGraphicsManager)->pGrInfo != NULL*infoList**infoList*call to kfifoGetRunlistBufInfo_IMPL*NVRM: failed to get runlist buffer info 0x%08x **NVRM: failed to get runlist buffer info 0x%08x *NVRM: Runlist buffer memdesc create failed 0x%08x **NVRM: Runlist buffer memdesc create failed 0x%08x *NVRM: Failed to translate runlistId 0x%x to NV2080 engine type **NVRM: Failed to translate runlistId 0x%x to NV2080 engine type 
*call to ctxBufPoolGetGlobalPool*NVRM: Failed to get ctx buf pool for engine type 0x%x (0x%x) **NVRM: Failed to get ctx buf pool for engine type 0x%x (0x%x) *NVRM: Failed to set ctx buf pool for runlistId 0x%x **NVRM: Failed to set ctx buf pool for runlistId 0x%x *NVRM: Runlist buffer mem alloc failed 0x%08x **NVRM: Runlist buffer mem alloc failed 0x%08x *runlist**runlist*ppChidMgr**ppChidMgr*maxRunlistEntriesSupported*call to kfifoGetMaxChannelGroupsInSystem_IMPL*maxRunlistEntries <= maxRunlistEntriesSupported**maxRunlistEntries <= maxRunlistEntriesSupported*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, runlistId, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, runlistId, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)*call to gpuGetSchedulerPolicy_IMPL*call to vgpuMgrGetSwrlCountToAllocate*runlistSizeMultiplier*call to kfifoRunlistGetEntrySize_DISPATCH*runlistEntrySize*call to kfifoRunlistGetBaseShift_DISPATCH*pRunlistBufPool**pRunlistBufPool***pRunlistBufPool*(pEngines != NULL) && (engineCount > 0)**(pEngines != NULL) && (engineCount > 0)*call to kchannelGetNextKernelChannel*NVRM: Found channel on engine 0x%x owned by 0x%x **NVRM: Found channel on engine 0x%x owned by 0x%x *(kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)pEngines[i], ENGINE_INFO_TYPE_RUNLIST, &runlistId) == NV_OK)**(kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)pEngines[i], ENGINE_INFO_TYPE_RUNLIST, &runlistId) == NV_OK)*NVRM: Found channel on runlistId 0x%x owned by 0x%x **NVRM: Found channel on runlistId 0x%x owned by 0x%x *NVRM: Found channel owned by 0x%x that can be associated to any engine **NVRM: Found channel owned by 0x%x that can be associated to any engine *pTempNode**pTempNode*pTempNode->pKernelChannel && pTempNode->pKernelChannel->refCount**pTempNode->pKernelChannel && 
pTempNode->pKernelChannel->refCount**pNewNode**pPrevNode**pTail*bFoundOnce*NVRM: RefCount for channel is not right!!! **NVRM: RefCount for channel is not right!!! *NVRM: Can't find channel in channelGroupList (Normal during RC Recovery on GK110+ or if software scheduling is enabled). **NVRM: Can't find channel in channelGroupList (Normal during RC Recovery on GK110+ or if software scheduling is enabled). *(pNewNode != NULL)**(pNewNode != NULL)*(*ppList != NULL)**(*ppList != NULL)*call to gpuConstructDeviceInfoTable_DISPATCH*gpuConstructDeviceInfoTable_HAL(pGpu)**gpuConstructDeviceInfoTable_HAL(pGpu)*call to kfifoGetHostDeviceInfoTable_KERNEL*kfifoGetHostDeviceInfoTable_HAL(pGpu, pKernelFifo, pEngineInfo, 0)**kfifoGetHostDeviceInfoTable_HAL(pGpu, pKernelFifo, pEngineInfo, 0)*subdeviceGetByInstance(pClient, RES_GET_HANDLE(pMigDevice), 0, &pSubdevice)**subdeviceGetByInstance(pClient, RES_GET_HANDLE(pMigDevice), 0, &pSubdevice)*pLocals**pLocals*(pLocals != NULL)**(pLocals != NULL)*pFetchedTable**pFetchedTable*baseIndex**engineInfoList*pEngineInfo->engineInfoList != NULL**pEngineInfo->engineInfoList != NULL*numRunlists*maxRunlistId*maxPbdmaId*pLocalEntry*pFetchedEntry*pLocalEntry->numPbdmas <= NV_ARRAY_ELEMENTS(pLocalEntry->pbdmaIds) && pLocalEntry->numPbdmas <= NV_ARRAY_ELEMENTS(pLocalEntry->pbdmaFaultIds)**pLocalEntry->numPbdmas <= NV_ARRAY_ELEMENTS(pLocalEntry->pbdmaIds) && pLocalEntry->numPbdmas <= NV_ARRAY_ELEMENTS(pLocalEntry->pbdmaFaultIds)*call to kfifoReservePbdmaFaultIds_DISPATCH*kfifoReservePbdmaFaultIds_HAL(pGpu, pKernelFifo, pEngineInfo->engineInfoList, pEngineInfo->engineInfoListSize)**kfifoReservePbdmaFaultIds_HAL(pGpu, pKernelFifo, pEngineInfo->engineInfoList, pEngineInfo->engineInfoListSize)*call to _kfifoLocalizeGuestEngineData*maxNumRunlists*maxNumPbdmas*IS_VIRTUAL(pGpu)**IS_VIRTUAL(pGpu)*call to kfifoGetGuestEngineLookupTable_IMPL*guestEngineTable**guestEngineTable*newEngineIdx*pEngine*nv2080EngineType*guestTableIdx*call to 
_kfifoChidMgrGetNextKernelChannel*NVRM: kfifoFillMemInfo: pMemDesc = NULL **NVRM: kfifoFillMemInfo: pMemDesc = NULL *NVRM: kfifoFillMemInfo: Unknown cache attribute for sysmem aperture **NVRM: kfifoFillMemInfo: Unknown cache attribute for sysmem aperture *NVRM: Setting TSG %d Timeslice to %lldus **NVRM: Setting TSG %d Timeslice to %lldus *call to kfifoRunlistGetMinTimeSlice_4a4dee*NVRM: Setting Timeslice to %lldus not allowed. Min value is %lldus **NVRM: Setting Timeslice to %lldus not allowed. Min value is %lldus *call to kfifoChannelGroupSetTimesliceSched_56cd7a*kfifoChannelGroupSetTimesliceSched(pGpu, pKernelFifo, pKernelChannelGroup, timesliceUs, bSkipSubmit)**kfifoChannelGroupSetTimesliceSched(pGpu, pKernelFifo, pKernelChannelGroup, timesliceUs, bSkipSubmit)*numChannelGroups*call to nvBitFieldTest*channelGrpMgr*NVRM: Can't find channel group %d **NVRM: Can't find channel group %d *!kfifoIsLiteModeEnabled_HAL(pGpu, pKernelFifo)**!kfifoIsLiteModeEnabled_HAL(pGpu, pKernelFifo)*ppChidMgr != NULL**ppChidMgr != NULL*NVRM: Zero max channel groups!!! **NVRM: Zero max channel groups!!! *chGrpID < maxChannelGroups**chGrpID < maxChannelGroups*nvBitFieldTest(pChidMgr->channelGrpMgr.pHwIdInUse, pChidMgr->channelGrpMgr.hwIdInUseSz, chGrpID)**nvBitFieldTest(pChidMgr->channelGrpMgr.pHwIdInUse, pChidMgr->channelGrpMgr.hwIdInUseSz, chGrpID)*call to nvBitFieldSet*pChGrpID*call to nvBitFieldLSZero*NVRM: No allocatable FIFO available. **NVRM: No allocatable FIFO available. *logMessage*Guest attempted to allocate channel above its max per engine channel limit 0x%x**logMessage**Guest attempted to allocate channel above its max per engine channel limit 0x%x*numChannelsParams*numChannels > 0**numChannels > 0*bNumChannelsOverride*ppVirtualChIDHeap**ppVirtualChIDHeap*pChidMgr->ppVirtualChIDHeap[gfid] != NULL**pChidMgr->ppVirtualChIDHeap[gfid] != NULL*call to _kfifoChidMgrFreeIsolationId*NVRM: Failed to free IsolationId. Status = 0x%x **NVRM: Failed to free IsolationId. 
Status = 0x%x *pGlobalChIDHeap*NVRM: Failed to free channel IDs. Status = 0x%x **NVRM: Failed to free channel IDs. Status = 0x%x *call to kfifoSetChidOffset_IMPL*NVRM: Failed to program the CHID table **NVRM: Failed to program the CHID table **pIsolationID*(pIsolationID != NULL)**(pIsolationID != NULL)*chSize*NVRM: Failed to reserve channel IDs. Status = 0x%x **NVRM: Failed to reserve channel IDs. Status = 0x%x *pIsolationIdBlock**pIsolationIdBlock*NVRM: Could not fetch block from eheap **NVRM: Could not fetch block from eheap *NVRM: Error allocating memory for virtual channel ID heap **NVRM: Error allocating memory for virtual channel ID heap *kfifoSetChidOffset(pGpu, pKernelFifo, pChidMgr, 0, 0, gfid, pChidOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, pEngineFifoList) == NV_OK**kfifoSetChidOffset(pGpu, pKernelFifo, pChidMgr, 0, 0, gfid, pChidOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, pEngineFifoList) == NV_OK*pChidMgr->pGlobalChIDHeap->eheapFree(pChidMgr->pGlobalChIDHeap, offset) == NV_OK**pChidMgr->pGlobalChIDHeap->eheapFree(pChidMgr->pGlobalChIDHeap, offset) == NV_OK*pFifoDataBlock->refCount > 0**pFifoDataBlock->refCount > 0*call to kfifoChidMgrReleaseChid_IMPL*pChidMgr->ppVirtualChIDHeap[gfid]->eheapFree(pChidMgr->ppVirtualChIDHeap[gfid], ChID)**pChidMgr->ppVirtualChIDHeap[gfid]->eheapFree(pChidMgr->ppVirtualChIDHeap[gfid], ChID)*pChidMgr->pGlobalChIDHeap->eheapFree(pChidMgr->pGlobalChIDHeap, ChID)**pChidMgr->pGlobalChIDHeap->eheapFree(pChidMgr->pGlobalChIDHeap, ChID)*pChidMgr->pGlobalChIDHeap != NULL**pChidMgr->pGlobalChIDHeap != NULL*pChidMgr->pFifoDataHeap != NULL**pChidMgr->pFifoDataHeap != NULL*pChidMgr->pFifoDataHeap->eheapFree(pChidMgr->pFifoDataHeap, ChID)**pChidMgr->pFifoDataHeap->eheapFree(pChidMgr->pFifoDataHeap, ChID)*pVirtChIdBlock*pVirtChIdBlock != NULL**pVirtChIdBlock != NULL*pVirtChIdBlock->refCount > 0**pVirtChIdBlock->refCount > 0*pChIdBlock*pChIdBlock != NULL**pChIdBlock != NULL*pChIdBlock->refCount > 
0**pChIdBlock->refCount > 0*chFlag*ChID64*NVRM: Invalid channel ID alloc mode %d **NVRM: Invalid channel ID alloc mode %d *NVRM: Invalid client handle %ux **NVRM: Invalid client handle %ux *pChidMgr->ppVirtualChIDHeap[gfid]**pChidMgr->ppVirtualChIDHeap[gfid]*call to _kfifoGetVgpuPluginChannelsCount*_kfifoGetVgpuPluginChannelsCount(pGpu, &numPluginChannels)**_kfifoGetVgpuPluginChannelsCount(pGpu, &numPluginChannels)*numPluginChannels < size**numPluginChannels < size*!bForceInternalIdx**!bForceInternalIdx*(ChID64 <= rangeHi) && (ChID64 >= rangeLo)**(ChID64 <= rangeHi) && (ChID64 >= rangeLo)*NVRM: Failed to allocate Channel ID 0x%llx %d on heap **NVRM: Failed to allocate Channel ID 0x%llx %d on heap *bIsSubProcessDisabled*pChidMgr->pGlobalChIDHeap->eheapSetAllocRange(pChidMgr->pGlobalChIDHeap, rangeLo, rangeHi)**pChidMgr->pGlobalChIDHeap->eheapSetAllocRange(pChidMgr->pGlobalChIDHeap, rangeLo, rangeHi)*NVRM: Failed to allocate Channel ID on heap **NVRM: Failed to allocate Channel ID on heap *NVRM: Failed to allocate Channel on fifo data heap **NVRM: Failed to allocate Channel on fifo data heap *pNumPluginChannels*pNumPluginChannels != NULL**pNumPluginChannels != NULL**pRequesterID*pBlockID*pAllocID*pIsolationIdBlock->refCount > 0**pIsolationIdBlock->refCount > 0*pIsolationIdBlock->pData != NULL**pIsolationIdBlock->pData != NULL**pHwIdInUse*hwIdInUseSz*pFifoHwID**pFifoDataHeap**pGlobalChIDHeap***ppVirtualChIDHeap*NVRM: pChidMgr->numChannels is 0 **NVRM: pChidMgr->numChannels is 0 *NVRM: Error in Allocating memory for pFifoDataHeap! Status = %s (0x%x) **NVRM: Error in Allocating memory for pFifoDataHeap! Status = %s (0x%x) *NVRM: Error in Allocating memory for global ChID heap! **NVRM: Error in Allocating memory for global ChID heap! 
*subProcessIsolation*NVRM: Sub Process channel isolation disabled by vGPU plugin **NVRM: Sub Process channel isolation disabled by vGPU plugin *call to _kfifoChidMgrAllocVChidHeapPointers*NVRM: Error allocating memory for virtual channel heap pointers **NVRM: Error allocating memory for virtual channel heap pointers **pChanGrpTree*call to _kfifoChidMgrDestroyChidHeaps*call to _kfifoChidMgrDestroyChannelGroupMgr***ppChidMgr*numChidMgrs*NVRM: numChidMgrs 0x%x exceeds MAX_NUM_RUNLISTS **NVRM: numChidMgrs 0x%x exceeds MAX_NUM_RUNLISTS *NVRM: Failed to allocate pFifo->pChidMgr **NVRM: Failed to allocate pFifo->pChidMgr *NVRM: Translation to runlistId failed for engine %d **NVRM: Translation to runlistId failed for engine %d *NVRM: Failed to allocate pFifo->pChidMgr[%d] **NVRM: Failed to allocate pFifo->pChidMgr[%d] *call to _kfifoChidMgrAllocChidHeaps*NVRM: Error allocating FifoDataHeap in pChidMgr. Status = %s (0x%x) **NVRM: Error allocating FifoDataHeap in pChidMgr. Status = %s (0x%x) *call to _kfifoChidMgrInitChannelGroupMgr*call to kfifoChidMgrDestruct_IMPL*kfifoGetNumEngines_HAL(ENG_GET_GPU(pKernelFifo), pKernelFifo) > 0**kfifoGetNumEngines_HAL(ENG_GET_GPU(pKernelFifo), pKernelFifo) > 0*pGetCidGrpParams*serverGetClientUnderLock(&g_resServ, pGetCidGrpParams->hClient, &pRsClient)*src/kernel/gpu/fifo/kernel_fifo_ctrl.c**serverGetClientUnderLock(&g_resServ, pGetCidGrpParams->hClient, &pRsClient)**src/kernel/gpu/fifo/kernel_fifo_ctrl.c*clientGetResourceRefByType(pRsClient, pGetCidGrpParams->hChannelOrTsg, classId(KernelChannel), &pResourceRef)**clientGetResourceRefByType(pRsClient, pGetCidGrpParams->hChannelOrTsg, classId(KernelChannel), &pResourceRef)*tsgId*channelUniqueID**channelUniqueID**veid*vasUniqueID**vasUniqueID*pGetChannelUidParams*(pGetChannelUidParams->numChannels > 0 && pGetChannelUidParams->numChannels <= NV2080_CTRL_CMD_FIFO_MAX_CHANNELS_PER_TSG)**(pGetChannelUidParams->numChannels > 0 && pGetChannelUidParams->numChannels <= 
NV2080_CTRL_CMD_FIFO_MAX_CHANNELS_PER_TSG)*hClients**hClients*serverGetClientUnderLock(&g_resServ, pGetChannelUidParams->hClients[i], &pRsClient)**serverGetClientUnderLock(&g_resServ, pGetChannelUidParams->hClients[i], &pRsClient)*hChannels**hChannels*clientGetResourceRefByType(pRsClient, pGetChannelUidParams->hChannels[i], classId(KernelChannel), &pResourceRef)**clientGetResourceRefByType(pRsClient, pGetChannelUidParams->hChannels[i], classId(KernelChannel), &pResourceRef)*channelUniqueIDs**channelUniqueIDs*engineContextBuffersInfo**engineContextBuffersInfo*fifoLatencyBufferSize**fifoLatencyBufferSize*gpEntries*pbEntries*i < NV2080_ENGINE_TYPE_LAST_v1C_09**i < NV2080_ENGINE_TYPE_LAST_v1C_09*fifoDeviceInfoTable**fifoDeviceInfoTable*bMore*hClientList**hClientList*NVRM: Failed to get client with hClient = 0x%x status = 0x%x **NVRM: Failed to get client with hClient = 0x%x status = 0x%x *hChannelList**hChannelList*NVRM: Failed to get channel with hclient = 0x%x hChannel = 0x%x status = 0x%x **NVRM: Failed to get channel with hclient = 0x%x hChannel = 0x%x status = 0x%x *h2dKeyList**h2dKeyList*keyIndex < CC_KEYSPACE_TOTAL_SIZE**keyIndex < CC_KEYSPACE_TOTAL_SIZE*NVRM: Forcing key rotation on h2dKey 0x%x **NVRM: Forcing key rotation on h2dKey 0x%x *call to confComputeForceKeyRotation_IMPL*NVRM: Forced key rotation for key 0x%x failed **NVRM: Forced key rotation for key 0x%x failed *pDisableChannelParams->numChannels <= NV_ARRAY_ELEMENTS(pDisableChannelParams->hChannelList)**pDisableChannelParams->numChannels <= NV_ARRAY_ELEMENTS(pDisableChannelParams->hChannelList)*call to _kfifoDisableChannelsForKeyRotation*pRunlistPreemptEvent**pRunlistPreemptEvent*pKfifoCapsParams*call to _kfifoGetCaps*call to kfifoGetDeviceCaps_IMPL*serverGetClientUnderLock(&g_resServ, pChannelStateParams->hClient, &pChannelClient)**serverGetClientUnderLock(&g_resServ, pChannelStateParams->hClient, &pChannelClient)*CliGetKernelChannel(pChannelClient, pChannelStateParams->hChannel, 
&pKernelChannel)**CliGetKernelChannel(pChannelClient, pChannelStateParams->hChannel, &pKernelChannel)*call to kchannelGetChannelPhysicalState_KERNEL*kchannelGetChannelPhysicalState(pGpu, pKernelChannel, pChannelStateParams)**kchannelGetChannelPhysicalState(pGpu, pKernelChannel, pChannelStateParams)*call to kchannelIsCpuMapped*bCpuMap*pRmCtrlParams->bDeferredApi || rmGpuLockIsOwner()**pRmCtrlParams->bDeferredApi || rmGpuLockIsOwner()*serverGetClientUnderLock(&g_resServ, pChannelInfo->hClient, &pChannelClient)**serverGetClientUnderLock(&g_resServ, pChannelInfo->hClient, &pChannelClient)*CliGetKernelChannel(pChannelClient, pChannelInfo->hChannel, &pKernelChannel)**CliGetKernelChannel(pChannelClient, pChannelInfo->hChannel, &pKernelChannel)*NVRM: kchannelCreateUserdMemDesc_HALfailed for hClient 0x%x and channel 0x%08x status 0x%x **NVRM: kchannelCreateUserdMemDesc_HALfailed for hClient 0x%x and channel 0x%08x status 0x%x *call to CliGetKernelChannelWithDevice*pChannelMemParams*call to kfifoFillMemInfo_IMPL*runqueues*(runqueues <= NV2080_CTRL_FIFO_GET_CHANNEL_MEM_INFO_MAX_COUNT)**(runqueues <= NV2080_CTRL_FIFO_GET_CHANNEL_MEM_INFO_MAX_COUNT)*methodBuf**methodBuf*pUserdLocationParams*NVRM: Invalid userdAperture value = 0x%08x **NVRM: Invalid userdAperture value = 0x%08x *NVRM: Invalid userdAttribute value = 0x%08x **NVRM: Invalid userdAttribute value = 0x%08x *call to kfifoGetAllocatedChannelMask_IMPL**bitMask*pFifoInfoParams*fifoInfoTbl**fifoInfoTbl*call to memmgrGetRsvdMemorySize*call to kfifoGetChannelGroupsInUse_IMPL*(NvU64_HI32(timeslice) == 0)**(NvU64_HI32(timeslice) == 0)*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, gpuGetRmEngineType(pFifoInfoParams->engineType), ENGINE_INFO_TYPE_RUNLIST, &runlistId)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, gpuGetRmEngineType(pFifoInfoParams->engineType), ENGINE_INFO_TYPE_RUNLIST, &runlistId)*call to 
kfifoGetRunlistChannelGroupsInUse_IMPL*physChannelCount*physChannelCountInUse*isGpuLockAcquired*pChannelParams*NVRM: Invalid Params for command NV0080_CTRL_CMD_FIFO_GET_CHANNELLIST **NVRM: Invalid Params for command NV0080_CTRL_CMD_FIFO_GET_CHANNELLIST *kchangrpGetEngineContextMemDesc(pGpu, pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup, &grCtxBufferMemDesc)*src/kernel/gpu/fifo/kernel_fifo_init.c**kchangrpGetEngineContextMemDesc(pGpu, pKernelChannel->pKernelChannelGroupApi->pKernelChannelGroup, &grCtxBufferMemDesc)**src/kernel/gpu/fifo/kernel_fifo_init.c*grCtxBufferMemDesc*kchannelUnmapEngineCtxBuf(pGpu, pKernelChannel, ENG_GR(grIdx))**kchannelUnmapEngineCtxBuf(pGpu, pKernelChannel, ENG_GR(grIdx))*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(grIdx), NULL)**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(grIdx), NULL)*kfifoTriggerPreSchedulingDisableCallback(pGpu, pKernelFifo)**kfifoTriggerPreSchedulingDisableCallback(pGpu, pKernelFifo)*NVRM: %s per runlist channel RAM in guest RM **NVRM: %s per runlist channel RAM in guest RM *Enabling**Enabling*Disabling**Disabling*NVRM: Enabling per runlist channel RAM on host RM **NVRM: Enabling per runlist channel RAM on host RM *bDisablePreAllocatedUserD*call to kfifoConstructUsermodeMemdescs_DISPATCH*kfifoConstructUsermodeMemdescs_HAL(pGpu, pKernelFifo)**kfifoConstructUsermodeMemdescs_HAL(pGpu, pKernelFifo)*call to kfifoChidMgrConstruct_IMPL*kfifoChidMgrConstruct(pGpu, pKernelFifo)**kfifoChidMgrConstruct(pGpu, pKernelFifo)*call to krcInitRegistryOverridesDelayed_IMPL*call to kfifoGetMaxSecureChannels_KERNEL*kfifoGetMaxSecureChannels(pGpu, pKernelFifo)**kfifoGetMaxSecureChannels(pGpu, pKernelFifo)*pLockRunlistWriteVfs**pLockRunlistWriteVfs*RmNumFifos**RmNumFifos*bPerRunlistChramOverride*RMDebugOverridePerRunlistChannelRam**RMDebugOverridePerRunlistChannelRam*NVRM: %s per runlist channel RAM **NVRM: %s per runlist channel RAM *RMSupportUserdMapDma**RMSupportUserdMapDma*NVRM: Enabling 
MapMemoryDma of USERD **NVRM: Enabling MapMemoryDma of USERD *bUserdMapDmaSupported*fifoToggleActiveChannelSchedulingParam*bDisableActiveChannels*call to _kfifoPreConstructRegistryOverrides*pppRunlistBufMemDesc**pppRunlistBufMemDesc***pppRunlistBufMemDesc****pppRunlistBufMemDesc*call to kfifoConstructHal_DISPATCH*kfifoConstructHal_HAL(pGpu, pKernelFifo)**kfifoConstructHal_HAL(pGpu, pKernelFifo)*src/kernel/gpu/fifo/kernel_idle_channels.c*NVRM: hChannel: 0x%x, numChannels: %u **src/kernel/gpu/fifo/kernel_idle_channels.c**NVRM: hChannel: 0x%x, numChannels: %u *paramCopyClients*paramCopyDevices*paramCopyChannels*numChannelsPerGpu**numChannelsPerGpu*chanIdx*NVRM: Failed to acquire Device lock, error 0x%x **NVRM: Failed to acquire Device lock, error 0x%x *isGpuGrpLockAcquired*pPerGpuClients**pPerGpuClients*pPerGpuDevices**pPerGpuDevices*pPerGpuChannels**pPerGpuChannels*((pPerGpuClients != NULL) && (pPerGpuDevices != NULL) && (pPerGpuChannels != NULL))**((pPerGpuClients != NULL) && (pPerGpuDevices != NULL) && (pPerGpuChannels != NULL))*call to kfifoIdleChannelsPerDevice_KERNEL*NVRM: DONE. hChannel: 0x%x, numChannels: %u, rmStatus: 0x%x **NVRM: DONE. 
hChannel: 0x%x, numChannels: %u, rmStatus: 0x%x *call to memGetMemInterMapParams_IMPL*memGetMemInterMapParams_IMPL(pMemory, pParams)*src/kernel/gpu/fifo/usermode_api.c**memGetMemInterMapParams_IMPL(pMemory, pParams)**src/kernel/gpu/fifo/usermode_api.c*pUserModeApi->bInternalMmio**pUserModeApi->bInternalMmio*pUserModeSrc*bInternalMmio*bPrivMapping*!bPrivMapping || bBar1Mapping**!bPrivMapping || bBar1Mapping*!bPrivMapping || pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL**!bPrivMapping || pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL*memClassId*call to memConstructCommon_IMPL*memConstructCommon(pMemory, memClassId, 0, pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)**memConstructCommon(pMemory, memClassId, 0, pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)*pUvmChannelRetainer*src/kernel/gpu/fifo/uvm_channel_retainer.c**src/kernel/gpu/fifo/uvm_channel_retainer.c*kfifoChidMgrReleaseChid(pGpu, pKernelFifo, pChidMgr, pUvmChannelRetainer->chId)**kfifoChidMgrReleaseChid(pGpu, pKernelFifo, pChidMgr, pUvmChannelRetainer->chId)*pUvmChannelRetainerParams*serverGetClientUnderLock(&g_resServ, pUvmChannelRetainerParams->hClient, &pChannelClient)**serverGetClientUnderLock(&g_resServ, pUvmChannelRetainerParams->hClient, &pChannelClient)*CliGetKernelChannel(pChannelClient, pUvmChannelRetainerParams->hChannel, &pKernelChannel)**CliGetKernelChannel(pChannelClient, pUvmChannelRetainerParams->hChannel, &pKernelChannel)*call to uvmchanrtnrIsAllocationAllowed_IMPL*NVRM: class Id %d can only be allocated by internal kernel clients **NVRM: class Id %d can only be allocated by internal kernel clients *call to kfifoChidMgrRetainChid_IMPL*kfifoChidMgrRetainChid(pGpu, pKernelFifo, pChidMgr, pKernelChannel->ChID)**kfifoChidMgrRetainChid(pGpu, pKernelFifo, pChidMgr, pKernelChannel->ChID)*kfifoChannelGetFifoContextMemDesc_HAL(pGpu, pKernelFifo, pKernelChannel, FIFO_CTX_INST_BLOCK, &pUvmChannelRetainer->pInstMemDesc)**kfifoChannelGetFifoContextMemDesc_HAL(pGpu, 
pKernelFifo, pKernelChannel, FIFO_CTX_INST_BLOCK, &pUvmChannelRetainer->pInstMemDesc)*kfifoChidMgrReleaseChid(pGpu, pKernelFifo, pChidMgr, pKernelChannel->ChID)**kfifoChidMgrReleaseChid(pGpu, pKernelFifo, pChidMgr, pKernelChannel->ChID)*call to kfspGetMaxRecvPacketSize_GH100*call to kfspGetMaxSendPacketSize_GH100*call to kfspIsResponseAvailable_GH100*call to kfspCanSendPacket_GH100*pBytesRead*pBytesRead != NULL*src/kernel/gpu/fsp/arch/blackwell/kern_fsp_gb100.c**pBytesRead != NULL**src/kernel/gpu/fsp/arch/blackwell/kern_fsp_gb100.c*gpuMnocMboxRecv_HAL(pGpu, &pKernelFsp->mboxAperture, KERNEL_FSP_MBOX_PORT, pPacket, &recvSize)**gpuMnocMboxRecv_HAL(pGpu, &pKernelFsp->mboxAperture, KERNEL_FSP_MBOX_PORT, pPacket, &recvSize)*call to kfspReadPacket_GH100*call to gpuMnocMboxSend_DISPATCH*call to kfspSendPacket_GH100*cms2Log*NVRM: CMS2 Log: **NVRM: CMS2 Log: *call to nvDbgDumpBufferBytes**cms2Log*inputPayload*NVRM: FSP microcode v%u.%u **NVRM: FSP microcode v%u.%u *NVRM: GPU %04x:%02x:%02x **NVRM: GPU %04x:%02x:%02x *NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(0) = 0x%x **NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(0) = 0x%x *NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(1) = 0x%x **NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(1) = 0x%x *NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(2) = 0x%x **NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(2) = 0x%x *NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(3) = 0x%x **NVRM: NV_PFSP_FALCON_COMMON_SCRATCH_GROUP_2(3) = 0x%x *NVRM: NV_PGSP_FALCON_MAILBOX0 = 0x%x **NVRM: NV_PGSP_FALCON_MAILBOX0 = 0x%x *NVRM: NV_PGSP_FALCON_MAILBOX1 = 0x%x **NVRM: NV_PGSP_FALCON_MAILBOX1 = 0x%x *NVRM: NV_PGSP_MAILBOX(%d) = 0x%x **NVRM: NV_PGSP_MAILBOX(%d) = 0x%x *call to _kfspGatherCms2Log_GB100*call to _kfspPrintCms2Log_GB100*bClockBoostSupported*NVRM: FSP has clock boost capability **NVRM: FSP has clock boost capability *NVRM: FSP doesn't have clock boost capability **NVRM: FSP doesn't have clock boost capability 
*gpuMarkDeviceForReset(pGpu)**gpuMarkDeviceForReset(pGpu)*Error status 0x%x while polling for FSP boot complete, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x**Error status 0x%x while polling for FSP boot complete, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x*call to kfspDumpDebugState_DISPATCH*NVRM: FSP fuse error check has passed. Status = 0x%08x. **NVRM: FSP fuse error check has passed. Status = 0x%08x. *NVRM: ****************************************** FSP Fuse Check Failure ************************************************ **NVRM: ****************************************** FSP Fuse Check Failure ************************************************ *FSP fuse error check has failed. Status = 0x%x.**FSP fuse error check has failed. Status = 0x%x.*NVRM: ** FSP fuse error check has failed. Status = 0x%x. ** **NVRM: ** FSP fuse error check has failed. Status = 0x%x. ** *NVRM: ****************************************************************************************************************** **NVRM: ****************************************************************************************************************** *src/kernel/gpu/fsp/arch/blackwell/kern_fsp_gb202.c**src/kernel/gpu/fsp/arch/blackwell/kern_fsp_gb202.c*pSysmemFrtsMemdesc*(pKernelFsp->pSysmemFrtsMemdesc != NULL)*src/kernel/gpu/fsp/arch/hopper/kern_fsp_gh100.c**(pKernelFsp->pSysmemFrtsMemdesc != NULL)**src/kernel/gpu/fsp/arch/hopper/kern_fsp_gh100.c*frtsSysmemAddr*call to kfspPrepareBootCommands_GH100*call to kfspSendBootCommands_GH100*pKernelFsp->pCotPayload != NULL**pKernelFsp->pCotPayload != NULL*NVRM: Sent following content to FSP: **NVRM: Sent following content to FSP: *NVRM: version=0x%x, size=0x%x, gspFmcSysmemOffset=0x%llx **NVRM: version=0x%x, size=0x%x, gspFmcSysmemOffset=0x%llx *NVRM: frtsSysmemOffset=0x%llx, frtsSysmemSize=0x%x **NVRM: frtsSysmemOffset=0x%llx, frtsSysmemSize=0x%x *NVRM: frtsVidmemOffset=0x%llx, frtsVidmemSize=0x%x **NVRM: frtsVidmemOffset=0x%llx, frtsVidmemSize=0x%x *NVRM: gspBootArgsSysmemOffset=0x%llx **NVRM: 
gspBootArgsSysmemOffset=0x%llx *call to _kfspCheckGspBootStatus*PDB_PROP_KFSP_BOOT_COMMAND_OK*NVRM: FSP boot cmds failed. RM cannot boot. **NVRM: FSP boot cmds failed. RM cannot boot. *call to kfspCleanupBootState_IMPL*call to kfspWaitForSecureBoot_DISPATCH*statusBoot*NVRM: FSP secure boot partition timed out. **NVRM: FSP secure boot partition timed out. *call to kfspSafeToSendBootCommands*NVRM: FSP secure boot GSP prechecks failed. **NVRM: FSP secure boot GSP prechecks failed. *call to kfspCheckForClockBoostCapability_DISPATCH*bClockBoostDisabledViaRegkey*RmBootGspRmWithBoostClocks**RmBootGspRmWithBoostClocks*call to kfspSendClockBoostRpc_DISPATCH*NVRM: Clock boost feature cmd %d via FSP failed with error 0x%x **NVRM: Clock boost feature cmd %d via FSP failed with error 0x%x **pCotPayload*(pKernelFsp->pCotPayload == NULL) ? NV_ERR_NO_MEMORY : NV_OK**(pKernelFsp->pCotPayload == NULL) ? NV_ERR_NO_MEMORY : NV_OK*frtsSize*frtsSize != 0**frtsSize != 0*pKernelFsp->pSysmemFrtsMemdesc == NULL**pKernelFsp->pSysmemFrtsMemdesc == NULL*pVaKernel**pVaKernel*pPrivKernel**pPrivKernel*frtsSysmemOffset*frtsSysmemSize*call to kfspFrtsSysmemLocationProgram_DISPATCH*kfspFrtsSysmemLocationProgram_HAL(pGpu, pKernelFsp)**kfspFrtsSysmemLocationProgram_HAL(pGpu, pKernelFsp)*call to memmgrGetFBEndReserveSizeEstimate_DISPATCH*call to kpmuReservedMemorySizeGet_IMPL*call to kfspGetExtraReservedMemorySize_DISPATCH*call to kgspGetWprEndMargin_IMPL*frtsOffsetFromEnd*frtsVidmemOffset*frtsVidmemSize*gspFmcSysmemOffset*gspBootArgsSysmemOffset*call to kfspSetupGspImages*NVRM: Ucode image preparation failed! **NVRM: Ucode image preparation failed! *NVRM: Preparing FSP boot cmds failed. RM cannot boot. **NVRM: Preparing FSP boot cmds failed. RM cannot boot. *NVRM: RM cannot boot with FSP missing on silicon. **NVRM: RM cannot boot with FSP missing on silicon. *NVRM: Secure boot is disabled due to missing FSP. **NVRM: Secure boot is disabled due to missing FSP. 
*call to kfspGspFmcIsEnforced_DISPATCH*NVRM: Chain-of-trust (GSP-FMC) cannot be disabled on silicon. *NVRM: Chain-of-trust is disabled via regkey *call to kfspGetGspUcodeArchive*pBinArchive*NVRM: Cannot find correct ucode archive for booting! *call to bindataArchiveGetStorage*pGspImage*pGspImageHash*pGspImageSignature*pGspImagePublicKey*pGspImageSize*pGspImageMapSize*pGspFmcMemdesc*pKernelFsp->pGspFmcMemdesc == NULL*hash384*bindataGetBufferSize(pGspImageSignature) == pKernelFsp->cotPayloadSignatureSize*bindataGetBufferSize(pGspImageSignature) <= sizeof(pCotPayload->signature)*bindataGetBufferSize(pGspImagePublicKey) == pKernelFsp->cotPayloadPublicKeySize*bindataGetBufferSize(pGspImagePublicKey) <= sizeof(pCotPayload->publicKey)*publicKey*call to kfspGetGspBootArgs*pGspFmcMemdesc*pKernelGsp->pGspFmcArgumentsCached != NULL*pGspFmcArgumentsDescriptor*memdescGetAddressSpace(pKernelGsp->pGspFmcArgumentsDescriptor) == ADDR_SYSMEM*NVRM: Loading GSP-RM image using FSP. 
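The paired asserts above (`bindataGetBufferSize(...) == pKernelFsp->cotPayloadSignatureSize` and `<= sizeof(pCotPayload->signature)`) validate a blob against both an expected size and the destination field's capacity before it is copied into the chain-of-trust payload. A minimal hedged sketch of that dual check, with all names and the payload layout hypothetical (this is not the RM implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical payload layout with fixed-size fields, as the asserts imply. */
typedef struct
{
    unsigned char signature[96]; /* illustrative size only */
    unsigned char publicKey[96];
} CotPayload;

/* Copy a blob into a fixed payload field only if it matches the expected
 * size AND cannot overflow the destination -- mirroring the two asserts. */
static int cot_copy_field(unsigned char *dst, size_t dstSize,
                          const unsigned char *src, size_t srcSize,
                          size_t expectedSize)
{
    if (srcSize != expectedSize || srcSize > dstSize)
        return -1; /* reject: size mismatch or would overflow */
    memcpy(dst, src, srcSize);
    return 0;
}
```

Checking both conditions separately keeps a bad bindata archive from silently truncating or overrunning the payload field.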
*call to kgspIsDebugModeEnabled_DISPATCH*call to _kfspGetMsgQueueHeadTail_GH100*(packetSize >= sizeof(NvU32)) && (packetSize <= maxPacketSize)*call to _kfspConfigEmemc_GH100*NVRM: About to read data from FSP, ememcOff=0, size=0x%x *NVRM: Size=0x%x is not DWORD-aligned, data will be truncated! *NVRM: After reading data, ememcOff = 0x%x *call to _kfspUpdateMsgQueueHeadTail_GH100*(ememOffsetEnd) == (packetSize / sizeof(NvU32))*call to kfspPollForCanSend_IMPL*call to _kfspWriteToEmem_GH100*call to _kfspUpdateQueueHeadTail_GH100*ememOffsetStart*NVRM: About to send data to FSP, ememcOff=0x%x, size=0x%x *wordsWritten*leftoverBytes*NVRM: After sending data, ememcOff = 0x%x *(ememOffsetEnd - ememOffsetStart) == wordsWritten*NVRM: Expected FSP command response, but packet is not big enough for payload. Size: 0x%0x *NVRM: Received FSP command response. Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x *call to kfspErrorCode2NvStatusMap_DISPATCH*NVRM: Last command was processed by FSP successfully! *NVRM: FSP response reported error. Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x *call to kfspProcessCommandResponse_DISPATCH*NVRM: Unknown or unsupported NVDM type received: 0x%0x *NVRM: Invalid MCTP Message type 0x%0x, expecting 0x7e (Vendor Defined PCI) *NVRM: Invalid PCI Vendor Id 0x%0x, expecting 0x10de (Nvidia) *NVRM: Packet doesn't contain NVDM type in payload header *call to kfspValidateMctpPayloadHeader_DISPATCH*call to _kfspGetQueueHeadTail_GH100*bBusy*rpcState*pCallbackArgs*pResponseBuffer*pPollEvent*call to kfspSetResponseTimeout*call to kfspClearAsyncResponseState*call to kfspReadMessage*call to kfspExecuteAsyncRpcCallback*call to kfspProcessAsyncResponse*call to kfspCheckResponseTimeout*call to kfspIsResponseAvailable_DISPATCH*src/kernel/gpu/fsp/kern_fsp.c*NVRM: Failed to schedule work item, status=%x *NVRM: FSP async command timed out *NVRM: Failed to reschedule callback, status=%x *call to kfspSendMessage*call to kfspScheduleAsyncResponseCheck*NVRM: FSP queuing failed, status=%x *call to kfspWaitForResponse*call to kfspPollForResponse_IMPL*NVRM: Tried to read FSP response but none is available *call to kfspGetMaxRecvPacketSize_DISPATCH*recvBufferSize*pPacketBuffer != NULL*call to kfspReadPacket_DISPATCH*kfspReadPacket_HAL(pGpu, pKernelFsp, pPacketBuffer, recvBufferSize, &packetSize)*call to kfspGetPacketInfo_DISPATCH*NVRM: No buffer provided when receiving multi-packet message. Buffer needed to reconstruct message *NVRM: Buffer provided for message payload too small. Payload size: 0x%x Buffer size: 0x%x *call to kfspProcessNvdmMessage_DISPATCH*call to kfspGetMaxSendPacketSize_DISPATCH*call to kfspNvdmToSeid_DISPATCH*call to kfspCreateMctpHeader_DISPATCH*call to kfspCreateNvdmHeader_DISPATCH*call to kfspSendPacket_DISPATCH*NVRM: FSP command timed out *NVRM: Timed out waiting for FSP queues to be empty. *call to kfspCanSendPacket_DISPATCH*pVidmemFrtsMemdesc*call to kfspReleaseProxyImage_IMPL*NVRM: Clock boost disablement via FSP failed with error 0x%x *call to kfspFrtsSysmemLocationClear_DISPATCH*pSysmemFrtsMemdesc*pGspBootArgsMemdesc*RmDisableFsp*NVRM: FSP disabled due to regkey override. *RmDisableCotCmd*PDB_PROP_KFSP_DISABLE_FRTS_VIDMEM*PDB_PROP_KFSP_DISABLE_GSPFMC*RmDisableFspFuseErrorCheck*NVRM: FSP's fuse error detection status check during boot is disabled using the regkey. 
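The warning "Size=0x%x is not DWORD-aligned, data will be truncated!" and the `(ememOffsetEnd - ememOffsetStart) == wordsWritten` assert above reflect that EMEM is moved a 32-bit word at a time, so a byte count that is not a multiple of four loses its trailing bytes. A tiny hedged sketch of that arithmetic (helper name and signature are hypothetical, not the RM code):

```c
#include <stddef.h>
#include <stdint.h>

/* EMEM transfers operate on whole 32-bit words; any remainder bytes of a
 * non-DWORD-aligned size are dropped, as the NVRM warning above states. */
static uint32_t emem_whole_words(uint32_t sizeBytes, uint32_t *leftoverBytes)
{
    if (leftoverBytes != NULL)
        *leftoverBytes = sizeBytes % sizeof(uint32_t); /* bytes that would be truncated */
    return sizeBytes / sizeof(uint32_t);               /* whole DWORDs transferred */
}
```

A caller would log the truncation warning whenever `leftoverBytes` comes back nonzero.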
*RmFspUseMnoc*NVRM: CPU will use MNOC mailbox to communicate with FSP *PDB_PROP_KFSP_USE_MNOC_CPU*NVRM: GSP will use MNOC MCTP to communicate with FSP *PDB_PROP_KFSP_USE_MNOC_GSP*call to kfspInitRegistryOverrides*NVRM: KernelFsp is disabled *tmrEventCreate(pTmr, &(pKernelFsp->pPollEvent), kfspPollForAsyncResponse, NULL, TMR_FLAGS_NONE)*call to kfspConstructHal_DISPATCH*kfspConstructHal_HAL(pGpu, pKernelFsp)*call to kfspPrepareAndSendBootCommands_DISPATCH*call to ksec2PrepareAndSendBootCommands_DISPATCH*ppDeviceEntry*ppDeviceEntry != NULL*src/kernel/gpu/gpu.c*call to gpuIterDeviceInfo_IMPL*gpuIterDeviceInfo(pGpu, &iter, deviceTypeEnum, dieletInstance)*call to gpuDeviceInfoIterNext_IMPL*pFirstMatch*devTypeEnum*dieletIdMask*pGidData*GID_DATA*call to gpuRefreshRecoveryAction_KERNEL*pGSCI != NULL*bGspFatalError*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_LOG_OOB_XID, &params, sizeof(params))*NVRM: MIG_INSTANCE_REF determination is not supported for error ID 0x%x. 
*NVRM: Invalid error ID: 0x%x *osQueueWorkItem(pGpu, _gpuRefreshRecoveryActionInLock, NULL, (OsQueueWorkItemFlags){.bLockGpus = NV_TRUE})*call to _gpuRefreshRecoveryActionInLock*newAction*call to gpuIsDeviceMarkedForReset_DISPATCH*call to gpuIsDeviceMarkedForDrainAndReset_DISPATCH*oldAction*currentRecoveryAction*GPU recovery action changed from 0x%x (%s) to 0x%x (%s)*call to _gpuRecoveryActionName*NVRM: GetRecoveryAction: 0x%x (%s) *None*GPU Reset Required*Node Reboot Required*Drain P2P*Drain and Reset*Unknown recovery action!*call to gpuGetDrainAndResetScratchBit_DISPATCH*call to _gpuSetDrainAndResetState*call to gpuSetDrainAndResetScratchBit_DISPATCH*call to gpuGetResetScratchBit_DISPATCH*call to _gpuSetResetRequiredState*call to gpuSetResetScratchBit_DISPATCH*call to gpuResetRequiredStateChanged_DISPATCH*configSchedPolicy*call to gpuGetSchedulerPolicyName_IMPL*schedPolicyName*call to _getEnabledString*isEnabledString*NVRM: GPU at %04x:%02x:%02x.0 has software scheduler %s with policy %s on GR *PDB_PROP_GPU_SWRL_GRANULAR_LOCKING*BEST_EFFORT*EQUAL_SHARE*FIXED_SHARE*NONE*call to _gpuGetSchedulerPolicyGr*call to rmcfg_IsGB20XorBetter*RmPVMRL*NVRM: Invalid scheduling policy %u specified by PVMRL regkey 0x%08x for GR *nv2080EngineCaps*engineCaps*call to gpuGetRmEngineTypeCapMask_IMPL*rmEngineCaps*gpuGetRmEngineTypeCapMask(nv2080EngineCaps, NVGPU_ENGINE_CAPS_MASK_ARRAY_MAX, rmEngineCaps)*isEnginePresent*call to kmigmgrGetGRCERange_DISPATCH*grCeRange*call to gpuRequireGrCePresence_DISPATCH*gpuRequireGrCePresence_HAL(pGpu, engDesc, &isEnginePresent) == NV_OK*NVRM: Query for ENG_INVALID considered erroneous: %d *call to gpuIsEngDescSupported_IMPL*NVRM: Unable to check engine ID: 0x%x *bSupported*call to gpuCheckEngineWithOrderList_KERNEL*pKernelNvLink*pConnectedLinksMaskVec*call to gpuGetSkuInfo_DISPATCH*gpuGetSkuInfo_HAL(pGpu, &biosGetSKUInfoParams)*pciDevId*chipSku*chipSKU*biosGetSKUInfoParams*chipMajor*chipMinor*pGrInfo*pGrInfo != NULL*call to osSimEscapeReadBuffer*Size <= (sizeof *Value)*call to RmRpcSimEscapeRead*call to osSimEscapeRead*call to osSimEscapeWriteBuffer*Size <= (sizeof Value)*call to RmRpcSimEscapeWrite*call to osSimEscapeWrite*bSBIOSCaps*call to gpuJtVersionSanityCheck_DISPATCH*NVRM: Unsupported JT revision ID. GC6 is being disabled. *NVRM: Unsupported JT revision ID. GC6 is being disabled. Update the board EC PIC FW. On Windows, update the SBIOS GC6 AML as well. 
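The recovery-action strings above ("None", "GPU Reset Required", "Node Reboot Required", "Drain P2P", "Drain and Reset", "Unknown recovery action!") alongside `_gpuRecoveryActionName` suggest a simple enum-to-name lookup used by the "GPU recovery action changed" log. A hedged sketch with hypothetical enum ordering (the real NV2080 values are not given in this dump):

```c
/* Hypothetical enum ordering; the actual RM values may differ. */
typedef enum
{
    RECOVERY_ACTION_NONE = 0,
    RECOVERY_ACTION_GPU_RESET,
    RECOVERY_ACTION_NODE_REBOOT,
    RECOVERY_ACTION_DRAIN_P2P,
    RECOVERY_ACTION_DRAIN_AND_RESET,
} RecoveryAction;

/* Map a recovery action to the human-readable names seen in the log dump. */
static const char *recoveryActionName(RecoveryAction action)
{
    switch (action)
    {
        case RECOVERY_ACTION_NONE:            return "None";
        case RECOVERY_ACTION_GPU_RESET:       return "GPU Reset Required";
        case RECOVERY_ACTION_NODE_REBOOT:     return "Node Reboot Required";
        case RECOVERY_ACTION_DRAIN_P2P:       return "Drain P2P";
        case RECOVERY_ACTION_DRAIN_AND_RESET: return "Drain and Reset";
        default:                              return "Unknown recovery action!";
    }
}
```

Returning a fallback string for unknown values keeps the "changed from 0x%x (%s) to 0x%x (%s)" log safe against new enum members.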
*call to intrServiceStallListAllGpusCond_IMPL*call to kceGetFaultMethodBufferSize_IMPL*GPU_GET_VGPU(pGpu) != NULL*GPU_GET_DCECLIENTRM(pGpu) != NULL*GPU_GET_KERNEL_GSP(pGpu) != NULL*call to gpuGetArch*call to decodePmcBoot0Architecture*minorRev*minorExtRev*pRmHalspecOwner*dispIpHalv00*NVRM: Invalid dispIpHal.__nvoc_HalVarIdx %d for Disp IP Version 0x%08x *pGpuHalspecOwner*call to gpuGetChildrenOrder_DISPATCH*pChildOrderList*bFirstIteration*bStarted*childOrderIndex*pCurChildOrder*bAdvance*call to gpuFindChildPresent*pChildrenPresent*pCurChildPresent*pAllocatedGfids*pGpu->sriovState.pAllocatedGfids != NULL*pChildPresentList*genericKernelFalcons*call to intrservRegisterIntrService_DISPATCH*pRecords*kernelVideoEngines*numKernelVideoEngines*call to kvidengFreeLogging_KERNEL*call to kvidengInitLogging_KERNEL*kvidengInitLogging(pGpu, pGpu->kernelVideoEngines[i])*RmVideoEventTrace*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_GPU_GET_CONSTRUCTED_FALCON_INFO, pParams, sizeof(*pParams))*constructedFalconsTable*numKernelVideoEngines < NV_ARRAY_ELEMENTS(pGpu->kernelVideoEngines)*objCreate(&pGpu->kernelVideoEngines[numKernelVideoEngines], pGpu, KernelVideoEngine, pGpu, physEngDesc)*videoTraceInfo*eventTraceRegkeyData*call to gpuDestroyKernelVideoEngineList_IMPL*numGenericKernelFalcons*bAllocatedParams*pParams->numConstructedFalcons <= NV_ARRAY_ELEMENTS(pGpu->genericKernelFalcons)*tgtFalconIdx*NVRM: Failed to create a GenericKernelFalcon object with engdesc %u *srcFalconIdx*call to gpuDestroyGenericKernelFalconList_IMPL*constructedFalcons*ppFlcn != NULL*Attempted to remove a non-existent initialized Falcon!*pGpu->numConstructedFalcons < NV_ARRAY_ELEMENTS(pGpu->constructedFalcons)*RMOptimizeComputeOrSparseTex*pDefault*osQueueWorkItem(pGpu, _gpuSetDisconnectedPropertiesWorker, NULL, (OsQueueWorkItemFlags){ .bFallbackToDpc = NV_TRUE, .bLockGpuGroupDevice = NV_TRUE})*call to gpuGenUgidData_DISPATCH*gidData*ppGidString*pGidStrlen*pGidStrlen != NULL*call to gpuGenGidData_DISPATCH*isInitialized*pGpu->engineDB.bValid*engType < RM_ENGINE_TYPE_LAST*call to gpuEngineEventNotificationListDestroy*engineNonstallIntrEventNotifications*pType*NVRM: gpuUpdateEngineTable: EngineDB has not been created yet *NVRM: gpuConstructEngineTable: Could not allocate engine DB *call to gpuEngineEventNotificationListCreate*call to gpuDestroyEngineTable_IMPL*call to _setPlatformNoHostbridgeDetect*cfgBaseAddressLow*bBar2MovedByVtd*NVRM: VT-d moved BAR2 to 0x18. *NVRM: VT-d still keeps BAR2 at 0x1C. *bBar1Is64Bit*NVRM: VT-d is using a 64bit BAR1. *bIsPassthru*NVRM: GPU at domain: %d bus: %d, device: %d is virtual (HW passthrough mode). *chipImpl < HAL_IMPL_MAXIMUM*NVRM: Invalid halimpl *call to gpuXlateHalImplToArchImpl*call to gpuGetChipArch*call to gpuSatisfiesTemporalOrder*pGpu->bIsVirtualWithSriov*bPipelinedPteMemEnabled*bNoHostBridgeDetected*call to gpuSetupVirtualGuestOwnedHW*NVRM: vGPU and Passthrough not supported simultaneously on the same VM. *pGpu->isVirtual == bIsVirtual*inOut*NVRM: SBIOS did not acknowledge cfg space owner change *RMD3Feature*call to gpuGetChipImpl*call to rmapiControlCacheFreeAllCacheForGpu*call to videoRemoveAllBindpointsForGpu*call to gpuGetNumEngDescriptors*pEngDescriptorList*call to engstateLogStateTransitionPre_IMPL*call to engstateLogStateTransitionPost_IMPL*call to gpuStateInitStartedRetract_b3696a*PDB_PROP_GPU_STATE_INITIALIZED*call to kgspUnloadRm_IMPL*call to _gpuFreeInternalObjects*pPrereqTracker*pChipInfo*pUserRegisterAccessMap*pUnrestrictedRegisterAccessMap*userRegisterAccessMapSize*bFullyConstructed*call to gpuDeinitOptimusSettings_DISPATCH*osGetPerformanceCounter(&startTimens)*NVRM: Failed to post unload engine with descriptor index: 0x%x and descriptor: 0x%x *call to _gpuStatePostUnloadEngineFailureStore*call to gpuServiceInterruptsAllGpus_IMPL*call to _gpuRemoveP2pCapsFromPeerGpus*_gpuRemoveP2pCapsFromPeerGpus(pGpu)*call to _gpuStatePostUnloadUnknownFailureStore*bStateUnloading*call to gpuStatePreUnload*call to _gpuStateUnloadUnknownFailureStore*call to gpuFreeVideoLogging_IMPL*NVRM: Failed to unload engine with descriptor index: 0x%x and descriptor: 0x%x *fatalErrorStatus*call to _gpuStateUnloadEngineFailureStore*NVRM: RPC to save host hibernation data failed, status 0x%x *call to gpuStatePostUnload*call to gpuDestroyDefaultClientShare_DISPATCH*call to gpuDeinitSriov_DISPATCH*bStateLoaded*NVRM: failed to unload the device with error 0x%x *call to rmapiControlCacheFreeNonPersistentCacheForGpu*call to gpuFabricProbeStop*NVRM: Failed to pre unload engine with descriptor index: 0x%x and descriptor: 0x%x *call to _gpuStatePreUnloadEngineFailureStore*call to rmapiReportLeakedDevices*call to _gpuStatePreUnloadUnknownFailureStore*call to gpuLoadFailurePathTest_56cd7a*call to _gpuStatePostLoadEngineFailureStore*call to _gpuSetVgpuMgrConfig*_gpuSetVgpuMgrConfig(pGpu)*call to _gpuPropagateP2PCapsToAllGpus*_gpuPropagateP2PCapsToAllGpus(pGpu)*call to kvgpumgrSendAllVgpuTypesToGsp*kvgpumgrSendAllVgpuTypesToGsp(pGpu)*call to gpuFabricProbeStart*gpuFabricProbeStart(pGpu, &pGpu->pGpuFabricProbeInfoKernel) == NV_OK*call to gpuIsSystemRebootRequired_DISPATCH*call to gpuSetRecoveryRebootRequired_IMPL*call to _gpuStatePostLoadUnknownFailureStore*call to kvgpumgrIsHeterogeneousVgpuTypeSupported*bSupportHeterogeneousTimeSlicedVgpuTypes*pPeerRmApi*pPeerRmApi->Control(pPeerRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_VGPU_MGR_INTERNAL_SET_VGPU_MGR_CONFIG, &params, sizeof(params))*pSetP2PCapsParams*pSetP2PCapsParams != NULL*peerGpuIds*peerGpuIds != NULL*peerGpuInstances*peerGpuInstances != NULL*gpumgrGetGpuAttachInfo(&gpuCount, &attachMask)*pAttachedGpu*peerGpuCount*peerGpuInfos*pPeerInfo*call to CliGetSystemP2pCaps*p2pCapsStatus*CliGetSystemP2pCaps((NvU32[]) { pGpu->gpuId, pPeerInfo->gpuId }, (pGpu->gpuId == pPeerInfo->gpuId) ? 1 : 2, &pPeerInfo->p2pCaps, &pPeerInfo->p2pOptimalReadCEs, &pPeerInfo->p2pOptimalWriteCEs, pPeerInfo->p2pCapsStatus, &pPeerInfo->busPeerId, &pPeerInfo->busEgmPeerId)*pPeerRmApi->Control(pPeerRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_SET_P2P_CAPS, pSetP2PCapsParams, sizeof(*pSetP2PCapsParams))*failingGpuIndex*removeP2PCapsParams*peerGpuIdCount*ignoredStatus*pPeerRmApi->Control(pPeerRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_REMOVE_P2P_CAPS, &removeP2PCapsParams, sizeof(removeP2PCapsParams))*pPeerRmApi->Control(pPeerRmApi, pPeerGpu->hInternalClient, pPeerGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_REMOVE_P2P_CAPS, &removeP2PCapsParams, sizeof(removeP2PCapsParams))*call to gpuacctDisableAccounting_IMPL*call to gpuacctEnableAccounting_IMPL*pTransition*cumulativeTimeus*pMaxTimeEngstate*pCompletedEngstate*completedEngDescIdx*engineMaxTimeus*engineMaxTimeClassId*call to nvPowerStateFailureIsPopulated*call to _gpuEngineTransitionFailureStore*pFailureData*regReadCount*call to _gpuDetectNvswitchSupport*call to vgpuReinitializeRpcInfraOnStateLoad*NVRM: Failed to re-init RPC infrastructure on resume, status 0x%x *call to _gpuStateLoadUnknownFailureStore*call to gpuInitSriov_DISPATCH*NVRM: Error initializing SRIOV: 0x%0x *NVRM: RPC to restore host hibernation data failed, status 0x%x *call to gpuCreateDefaultClientShare_DISPATCH*call to gpuStatePreLoad*NVRM: RPC: Allocate vGPU GSP buffers to FB_MEM *NVRM: RPC buffers setup failed: 0x%x *bStateLoading*NVRM: NV_ERR_INVALID_ADDRESS is no longer supported in StateLoad (%s) *call to _gpuStateLoadEngineFailureStore*call to gpuInitVideoLogging_IMPL*gpuInitVideoLogging(pGpu) == NV_OK*call to gpuInitVmmuInfo*NVRM: Error initializing VMMU info: 0x%0x *call to gpuStatePostLoad*gpuLoop*call to gpuEnableAccounting_IMPL*NVRM: gpuEnableAccounting failed with error %d on GPU ID %d *call to memmgrCheckZeroPmaUsage_IMPL*localStatus*memmgrCheckZeroPmaUsage(pGpu, pMemoryManager)*call to _gpuStatePreLoadEngineFailureStore*call to _gpuStatePreLoadUnknownFailureStore*call to gpuStateInitStartedSatisfy_56cd7a*gpuStateInitStartedSatisfy_HAL(pGpu, pGpu->pPrereqTracker)*call to engstateStateInit_IMPL*call to rmcfg_IsdADA*RMBug3007008EmulateVfMmuTlbInvalidate*objCreate(&pGpu->pPrereqTracker, pGpu, PrereqTracker, pGpu)*call to gpuInitBranding_DISPATCH*gpuInitBranding(pGpu)*call to gpuGetRtd3GC6Data_DISPATCH*call to gpuDetermineSelfHostedMode_DISPATCH*call to _gpuAllocateInternalObjects*_gpuAllocateInternalObjects(pGpu)*call to _gpuInitChipInfo*_gpuInitChipInfo(pGpu)*call to gpuConstructUserRegisterAccessMap_IMPL*gpuConstructUserRegisterAccessMap(pGpu)*call to gpuBuildGenericKernelFalconList_IMPL*gpuBuildGenericKernelFalconList(pGpu)*call to gpuBuildKernelVideoEngineList_IMPL*gpuBuildKernelVideoEngineList(pGpu)*call to gpuValidateMIGSupport_DISPATCH*call to kvgpumgrMigTimeslicingModeEnabled*kvgpumgrMigTimeslicingModeEnabled(pGpu)*call to gpuRemoveMissingEngines*call to engstateStatePreInit_IMPL*call to gpuRemoveMissingEngineClasses*NVRM: engine removal in PreInit with NV_ERR_NOT_SUPPORTED is deprecated (%s) *NVRM: disallowing NV_ERR_NOT_SUPPORTED PreInit removal of untracked engine (%s) *call to gpuDestroyMissingEngine*pEngstate*call to gpuDeleteEngineOnPreInit_IMPL*rmStatus == NV_OK || !"Error while trying to remove missing engine"*call to gpuDeleteClassFromClassDBByClassId_IMPL*call to gpuInitOptimusSettings_DISPATCH*bHostSupported*pEngDesc*call to gpuMissingEngDescriptor*NVRM: Update engine table operation failed! 
*pEngDescriptor*engDescriptorFound*call to __nvoc_objGetClassId*call to gpuGetNumChildren*NVRM: engine 0x%06x:%d is missing, removing *pClassDescriptors*pCurDesc*bHostSupportsEngine*gpuDeleteClassFromClassDBByEngTag(pGpu, pCurDesc->engDesc)*curClassDescIdx*childIdx*call to gpuCreateObject*call to gpuGetChildrenPresent_DISPATCH*pChildrenPresent*call to gpuDisableAccounting_IMPL*NVRM: gpuDisableAccounting failed with error %d on GPU ID %d *call to rmapiReportInternalLeakedDevices*call to gpuGetGpuMask_IMPL*call to vgpuDestructObject*call to _gpuFreeEngineOrderList*pDeviceInfoTable*numDeviceInfoEntries*call to gpuDestroyClassDB_IMPL*call to osDestroyOSHwInfo*pGpuHWBCList*pHWBCList*call to regAccessDestruct*pGpu->numConstructedFalcons == 0*pRegopOffsetScratchBuffer*pRegopOffsetAddrScratchBuffer*regopScratchBufferMaxOffsets*pGpu->numSubdeviceBackReferences == 0*pSubdeviceBackReferences*numSubdeviceBackReferences*maxSubdeviceBackReferences*pDpcThreadState*call to gpuDestructPhysical_b3696a*call to gpuShouldCreateObject*call to _gpuChildNvocClassInfoGet*_gpuChildNvocClassInfoGet(pGpu, classId, &pClassInfo)*pConcreteChild != NULL*(*ppChildPtr != NULL)*call to engstateConstructBase_IMPL*pHosteng*NVRM: Failed to get hosteng. *pDerivedChild*call to objDynamicCastById_IMPL*pEngineOrder*pEngineInitDescriptors*pEngineDestroyDescriptors*pEngineLoadDescriptors*pEngineUnloadDescriptors*pClassDescriptors*numLists*ppEngDescriptors*numEngineDesc*curEngineDesc*call to gpuGetEngineOrderListIter*listTypes*call to gpuGetNextInEngineOrderList*NVRM: Sizes of all engine order lists do not match! *numEngineDescriptors*call to gpuGetGenericClassList_IMPL*pGenericClassDescs*call to gpuGetNoEngClassList_DISPATCH*pNoEngClassDescsHal*pEngClassDescsHal*numClassDescriptors*call to rmapiControlCacheFreeObjectEntry*call to gpuGetDceClientInternalClientHandle*rmapiutilAllocClientAndDeviceHandles( pRmApi, pGpu, &pGpu->hInternalClient, &pGpu->hInternalDevice, &pGpu->hInternalSubdevice)*serverGetClientUnderLock(&g_resServ, pGpu->hInternalClient, &pGpu->pCachedRsClient)*subdeviceGetByHandle(pGpu->pCachedRsClient, pGpu->hInternalSubdevice, &pGpu->pCachedSubdevice)*NVRM: GPU-%d allocated hInternalClient=0x%08x *call to rmapiControlCacheSetGpuAttrForObject*pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &pGpu->hInternalLockStressClient, sizeof(pGpu->hInternalLockStressClient))*hInternalLockStressClient*pGpu->pChipInfo != NULL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_CHIP_INFO, pGpu->pChipInfo, paramSize)*subRevision*physicalRmApi*pPrivateContext*defaultSecInfo*bHasDefaultSecInfo*bRmSemaInternal*Control*AllocWithHandle*pInternalRmApi*!IS_FW_CLIENT(pGpu)*pGpu->pCachedSubdevice && pGpu->pCachedRsClient*call to objGetExportedMethodDef_IMPL*pCachedSubdevice*pEntry->paramSize == paramsSize*NVRM: GPU Internal RM control 0x%08x on gpuInst:%x hClient:0x%08x hSubdevice:0x%08x *callCtx*resservSwapTlsCallContext(&oldCtx, &callCtx)*oldCtx*resservRestoreTlsCallContext(oldCtx)*pGpuArch*gspRmInitialized*call to osInitOSHwInfo*pGpu->pDpcThreadState != NULL*call to gpuConstructPhysical_56cd7a*call to gpumgrAddDeviceInstanceToGpus*call to regAccessConstruct*NVRM: Failed to construct IO Apertures for attached devices *call to gpuGetVirtRegPhysOffset_DISPATCH*virtualRegPhysOffset*simMode*call to gpuInitChipInfo_IMPL*call to gpuIsSocSdmEnabled_DISPATCH*PDB_PROP_GPU_IS_SOC_SDM*call to gpuInitRegistryOverrides_KERNEL*call to gpuInitInstLocOverrides_IMPL*call to gpuPrivSecInitRegistryOverrides_56cd7a*gpuPrivSecInitRegistryOverrides(pGpu)*call to gpuDetermineVirtualMode*call to gpuIsCtxBufAllocInPmaSupported_DISPATCH*PDB_PROP_GPU_MOVE_CTX_BUFFERS_TO_PMA*call to _gpuInitPciHandle*call to _gpuChildrenPresentInit*_gpuChildrenPresentInit(pGpu)*call to _gpuCreateEngineOrderList*call to gpuBuildClassDB_IMPL*computeModeRefCount*hComputeModeReservation*call to timeoutInitializeGpuDefault*bTwoStageRcRecoveryEnabled*call to vgpuInitRegistryOverWrite*PDB_PROP_GPU_IS_VIRTUALIZATION_MODE_HOST_VGPU*call to gpuApplySchedulerPolicy_IMPL*call to gpuCreateChildObjects*call to gpuGetIdInfo_DISPATCH*call to gpuUpdateIdInfo_b3696a*call to _gpuInitPhysicalRmApi*call to gpuDeterminePersistantIllumSettings_b3696a*call to gpuConstructEngineTable_IMPL*call to gpuClearFbhubPoisonIntrForBug2924523_DISPATCH*pAttachArg*kbusBar2BootStrapInPhysicalMode_HAL(pGpu, pKernelBus)*call to vgpuCreateObject*Guest driver is incompatible with host driver*call to gpuGetHwDefaults_b3696a*call to clInitPropertiesFromRegistry_IMPL*call to gpuSetCacheOnlyModeOverrides_56cd7a*call to gpuDumpCallbackRegister_IMPL*externalKernelClientCount*call to confComputeTestPlatformSupport_DISPATCH*confComputeTestPlatformSupport_HAL(pGpu, pCC) == NV_OK*pGpu->computeModeRefCount >= 0*NVRM: GPU (ID: 0x%x): new mode: COMPUTE *NVRM: GPU (ID: 0x%x): new mode: GRAPHICS *NVRM: Bad command: 0x%x *call to gpuEncodeBusDevice*hypervisorIsType(OS_HYPERVISOR_HYPERV)*fabricProbeRetryDelay*fabricProbeSlowdownThreshold*nvswitchSupport*call to GPU_GET_NVLINK*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_PMGR_GET_MODULE_INFO, &moduleInfoParams, sizeof(moduleInfoParams))*moduleInfoParams*PDB_PROP_PFM_NO_HOSTBRIDGE_DETECT*src/kernel/gpu/gpu_access.c*call to regaprtIsRegValid_DISPATCH*pApertures*call to regaprtWriteReg32Uc_DISPATCH*call to regaprtWriteReg16_DISPATCH*call to regaprtWriteReg08_DISPATCH*call to regaprtReadReg16_DISPATCH*call to regaprtReadReg08_DISPATCH*numApertures != 0*pValue8*call to osGpuReadReg008*pValue16*call to osGpuReadReg016*pValue32*call to gpuHandleSanityCheckRegReadError_DISPATCH*Invalid access size*call to gpuGetUserRegisterAccessPermissions_IMPL*NVRM: User does not have permission to access register offset 0x%x *bIsPowerHfrpEnabled*call to gpuSanityCheckVirtRegAccess_DISPATCH*NVRM: Invalid register access on VF, addr: 0x%x *pRetVal*pRegisterAccess*returnValue*call to regCheckAndLogReadFailure*call to _regCheckReadFailure*pBadRead*MemorySpace*Mask*Reason*call to osBugCheck*call to gpuSanityCheck_IMPL*call to _gpuEnablePciMemSpaceAndCheckPmcBoot0Match*Handle*NVRM: Failed to initialize pGpu IO aperture for devIdx %d. 
*call to _regRead*NVRM: Could not find mapping for reg %x, deviceIndex=0x%x instance=%d **NVRM: Could not find mapping for reg %x, deviceIndex=0x%x instance=%d *call to gpuHandleReadRegisterFilter*call to osDevReadReg008*call to osDevReadReg016*call to gpuSanityCheckRegRead_IMPL*pAperture != NULL**pAperture != NULL*call to ioaprtReadReg*call to ioaprtIsInitialized*ioaprtIsInitialized(pAperture)**ioaprtIsInitialized(pAperture)*call to _regWriteUnicast*call to regWrite032Unicast*call to ioaprtWriteRegUnicast*call to gpuHandleWriteRegisterFilter*call to osDevWriteReg008*call to osDevWriteReg016*call to osDevWriteReg032*length > 0**length > 0*pParentAperture*pMapping == NULL**pMapping == NULL*pGpu == NULL || pGpu == pParentAperture->pGpu**pGpu == NULL || pGpu == pParentAperture->pGpu*NVRM: Child aperture crosses parent's boundary, length 0x%llx offset 0x%x, Parent's length 0x%llx **NVRM: Child aperture crosses parent's boundary, length 0x%llx offset 0x%x, Parent's length 0x%llx **pIOAperture*devRegFilterInfo*pRegFilterList*!pGpu->deviceMappings[mappingNum].devRegFilterInfo.pRegFilterList**!pGpu->deviceMappings[mappingNum].devRegFilterInfo.pRegFilterList*pRegFilterLock**pRegFilterLock**pRegFilterRecycleList*mappingNum*minDeviceIndex*maxDeviceIndex*call to _gpuInitIOAperture*NVRM: Failed to initialize pGpu IO device/aperture for deviceIndex=%d. **NVRM: Failed to initialize pGpu IO device/aperture for deviceIndex=%d. 
*pFlagsFailed*src/kernel/gpu/gpu_device_mapping.c**src/kernel/gpu/gpu_device_mapping.c*call to _gpuCheckIsBar0OffByN*call to _gpuCheckDoesPciSpaceMatch*call to _gpuCheckIsPciMemSpaceEnabled*NVRM: Failed test flags: 0x%x **NVRM: Failed test flags: 0x%x *NVRM: Could not find mapping for deviceId=%d **NVRM: Could not find mapping for deviceId=%d *instance == 0**instance == 0*pGpu->gpuDeviceMapCount == 1**pGpu->gpuDeviceMapCount == 1*pDeviceMappingsByDeviceInstance**pDeviceMappingsByDeviceInstance***pDeviceMappingsByDeviceInstance*call to gpuGetDeviceIDList_4a4dee*numDeviceIDs*deviceIdMapping*NVRM: Could not find mapping for deviceIndex=%d **NVRM: Could not find mapping for deviceIndex=%d *GR**GR*COPY**COPY*NVENC**NVENC*NVJPEG**NVJPEG*VP**VP*ME**ME*PPP**PPP*MPEG**MPEG*SW**SW*TSEC**TSEC*VIC**VIC*MP**MP*HOST**HOST*DPU**DPU*FBFLCN**FBFLCN*capSize == NVGPU_ENGINE_CAPS_MASK_ARRAY_MAX*src/kernel/gpu/gpu_engine_type.c**capSize == NVGPU_ENGINE_CAPS_MASK_ARRAY_MAX**src/kernel/gpu/gpu_engine_type.c*pRmEngineTypeCap*pRmEngineTypeCap != NULL**pRmEngineTypeCap != NULL*pNV2080EngineTypeCap*pNV2080EngineTypeCap != NULL**pNV2080EngineTypeCap != NULL*engineCount < RM_ENGINE_TYPE_LAST**engineCount < RM_ENGINE_TYPE_LAST*pRmEngineList*pNv2080EngineList*index < RM_ENGINE_TYPE_LAST**index < RM_ENGINE_TYPE_LAST*index < NV2080_ENGINE_TYPE_LAST**index < NV2080_ENGINE_TYPE_LAST*call to _gpuFabricProbeFullSanityCheck*probeResponseMsg*probeRsp*call to gpuFabricProbeIsReceived*call to gpuFabricProbeIsSuccess*call to knvlinkIsBwModeSupported_DISPATCH*call to fabricvaspaceIsInUse_IMPL*call to gpuFabricProbeSetBwModePerGpu*call to gpuFabricProbeSuspend*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_INVALIDATE_FABRIC_PROBE, NULL, 0)*src/kernel/gpu/gpu_fabric_probe.c**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_INVALIDATE_FABRIC_PROBE, NULL, 0)**src/kernel/gpu/gpu_fabric_probe.c*call to 
_gpuFabricProbeInvalidate*call to _gpuFabricProbeRbmWakeLinks*call to knvlinkSetBWMode*call to gpuFabricProbeResume*gpuFabricProbeResume(pGpuFabricProbeInfoKernel)**gpuFabricProbeResume(pGpuFabricProbeInfoKernel)*NVRM: GPU%u Probe handling is disabled **NVRM: GPU%u Probe handling is disabled *pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_STOP_FABRIC_PROBE, NULL, 0)**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_STOP_FABRIC_PROBE, NULL, 0)**pGpuFabricProbeInfoKernel**ppGpuFabricProbeInfoKernel != NULL***ppGpuFabricProbeInfoKernel != NULL*bLocalEgmEnabled*call to knvlinkGetBWMode*call to gpumgrGetGpuNvlinkBwMode_IMPL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_START_FABRIC_PROBE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_START_FABRIC_PROBE, ¶ms, sizeof(params))*call to convertBitVectorToLinkMasks*pEnabledLinksVec*convertBitVectorToLinkMasks(pEnabledLinksVec, &enabledLinkMask, sizeof(enabledLinkMask), NULL)**convertBitVectorToLinkMasks(pEnabledLinksVec, &enabledLinkMask, sizeof(enabledLinkMask), NULL)*powerStatusParams*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_NVLINK_GET_POWER_STATE, &powerStatusParams, sizeof(powerStatusParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_NVLINK_GET_POWER_STATE, &powerStatusParams, sizeof(powerStatusParams))*call to knvlinkEnterExitSleep_IMPL*NVRM: Error waking links on linkmask 0x%x **NVRM: Error waking links on linkmask 0x%x *pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_RESUME_FABRIC_PROBE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_RESUME_FABRIC_PROBE, ¶ms, 
sizeof(params))*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_SUSPEND_FABRIC_PROBE, NULL, 0)**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_SUSPEND_FABRIC_PROBE, NULL, 0)*Invalid GPU instance**Invalid GPU instance*pGpu->pGpuFabricProbeInfoKernel != NULL**pGpu->pGpuFabricProbeInfoKernel != NULL*rmGpuGroupLockIsOwner(gpuInstance, GPU_LOCK_GRP_SUBDEVICE, &gpuMaskUnused)**rmGpuGroupLockIsOwner(gpuInstance, GPU_LOCK_GRP_SUBDEVICE, &gpuMaskUnused)*pInbandRcvParams != NULL**pInbandRcvParams != NULL*pProbeRespMsg**pProbeRespMsg*pProbeUpdateReqMsg**pProbeUpdateReqMsg*probeUpdate*fabricHealthMask*call to _gpuFabricProbeSendCliqueIdChangeEvent*call to knvlinkTriggerProbeRequest_DISPATCH*call to _gpuFabricProbeSetupGpaRange*call to _gpuFabricProbeSetupFlaRange*call to _gpuFrabricProbeUpdateSupportedBwModes*call to _gpuFrabricProbeRbmSleepLinks*NVRM: Error setting links to sleep on linkmask 0x%x **NVRM: Error setting links to sleep on linkmask 0x%x *maxRbmLinks <= NVLINK_MAX_LINKS_SW**maxRbmLinks <= NVLINK_MAX_LINKS_SW*call to knvlinkSetMaxBWModeLinks*call to fabricGenerateEventId_IMPL*cliqueIdChange*call to fabricPostEventsV2_IMPL*NVRM: GPU%u Notifying cliqueId change failed **NVRM: GPU%u Notifying cliqueId change failed *call to gpuFabricProbeGetFlaAddress*gpuFabricProbeGetFlaAddress(pGpuFabricProbeInfoKernel, &flaBaseAddress) == NV_OK**gpuFabricProbeGetFlaAddress(pGpuFabricProbeInfoKernel, &flaBaseAddress) == NV_OK*call to gpuFabricProbeGetFlaAddressRange*gpuFabricProbeGetFlaAddressRange(pGpuFabricProbeInfoKernel, &flaSize) == NV_OK**gpuFabricProbeGetFlaAddressRange(pGpuFabricProbeInfoKernel, &flaSize) == NV_OK*call to fabricvaspaceClearUCRange_IMPL*fabricvaspaceInitUCRange(dynamicCast(pGpu->pFabricVAS, FABRIC_VASPACE), pGpu, flaBaseAddress, flaSize) == NV_OK**fabricvaspaceInitUCRange(dynamicCast(pGpu->pFabricVAS, FABRIC_VASPACE), pGpu, flaBaseAddress, flaSize) == 
NV_OK*call to gpuFabricProbeGetGpaAddress*gpuFabricProbeGetGpaAddress(pGpuFabricProbeInfoKernel, &gpaAddress) == NV_OK**gpuFabricProbeGetGpaAddress(pGpuFabricProbeInfoKernel, &gpaAddress) == NV_OK*call to gpuFabricProbeGetGpaAddressRange*gpuFabricProbeGetGpaAddressRange(pGpuFabricProbeInfoKernel, &gpaAddressSize) == NV_OK**gpuFabricProbeGetGpaAddressRange(pGpuFabricProbeInfoKernel, &gpaAddressSize) == NV_OK*call to knvlinkSetUniqueFabricBaseAddress_DISPATCH*knvlinkSetUniqueFabricBaseAddress_HAL(pGpu, pKernelNvlink, gpaAddress) == NV_OK**knvlinkSetUniqueFabricBaseAddress_HAL(pGpu, pKernelNvlink, gpaAddress) == NV_OK*call to gpuFabricProbeGetfmCaps*gpuFabricProbeGetfmCaps(pGpuFabricProbeInfoKernel, &fmCaps) == NV_OK**gpuFabricProbeGetfmCaps(pGpuFabricProbeInfoKernel, &fmCaps) == NV_OK*call to gpuFabricProbeGetEgmGpaAddress*gpuFabricProbeGetEgmGpaAddress(pGpuFabricProbeInfoKernel, &egmGpaAddress) == NV_OK**gpuFabricProbeGetEgmGpaAddress(pGpuFabricProbeInfoKernel, &egmGpaAddress) == NV_OK*call to knvlinkSetUniqueFabricEgmBaseAddress_DISPATCH*knvlinkSetUniqueFabricEgmBaseAddress_HAL(pGpu, pKernelNvlink, egmGpaAddress) == NV_OK**knvlinkSetUniqueFabricEgmBaseAddress_HAL(pGpu, pKernelNvlink, egmGpaAddress) == NV_OK*msgHdr*pProbeResponseMsg**pProbeResponseMsg*pProbeRespMsgHdr**pProbeRespMsgHdr*rmDeviceGpuLockIsOwner( gpuGetInstance(pGpuFabricProbeInfoKernel->pGpu))**rmDeviceGpuLockIsOwner( gpuGetInstance(pGpuFabricProbeInfoKernel->pGpu))*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_FABRIC_PROBE_INFO, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_FABRIC_PROBE_INFO, ¶ms, sizeof(params))*pClusterUuid*call to knvlinkClearUniqueFabricBaseAddress_DISPATCH*call to knvlinkClearUniqueFabricEgmBaseAddress_DISPATCH*NVRM: Fabric Probe failed: 0x%x **NVRM: Fabric Probe failed: 0x%x *pChipInfo != NULL*src/kernel/gpu/gpu_gspclient.c**pChipInfo != 
NULL**src/kernel/gpu/gpu_gspclient.c*regBase < NV_ARRAY_ELEMENTS(pChipInfo->regBases)**regBase < NV_ARRAY_ELEMENTS(pChipInfo->regBases)*regBases**regBases*nameStringBuffer**nameStringBuffer*gpuShortNameString**gpuShortNameString*gpuNameString**gpuNameString*gpuNameString_Unicode**gpuNameString_Unicode*pParams->numEntries <= NV2080_CTRL_CMD_INTERNAL_DEVICE_INFO_MAX_ENTRIES**pParams->numEntries <= NV2080_CTRL_CMD_INTERNAL_DEVICE_INFO_MAX_ENTRIES*pGpu->pDeviceInfoTable != NULL**pGpu->pDeviceInfoTable != NULL*deviceInfoTable**deviceInfoTable*GC6PerstDelay*GC6TotalBoardPower*zeroGid**zeroGid*NVRM: GSP Static Info has not been initialized yet for UUID **NVRM: GSP Static Info has not been initialized yet for UUID *bIsQuadro*bIsQuadroAD*bIsNvidiaNvs*bIsVgx*bGeforceSmb*bIsTitan*bIsTesla*bIsGeforce*call to gpuSetGfidUsage_IMPL**pAllocatedGfids*sriovCaps*maxGfid**pP2PInfo*bP2PAllocated*maxP2pGfid*totalPcieFns*NVRM: Memory allocation failed for GFID tracking **NVRM: Memory allocation failed for GFID tracking *gspNvdEngines**gspNvdEngines*prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_GPU)*src/kernel/gpu/gpu_protobuf.c**prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_GPU)**src/kernel/gpu/gpu_protobuf.c*call to _gpuDumpEngine_CommonFields*_gpuDumpEngine_CommonFields(pGpu, pPrbEnc, pNvDumpState)**_gpuDumpEngine_CommonFields(pGpu, pPrbEnc, pNvDumpState)*userSharedData*pAccessMap*pAccessMap != NULL*src/kernel/gpu/gpu_register_access_map.c**pAccessMap != NULL**src/kernel/gpu/gpu_register_access_map.c*accessMapSize != 0**accessMapSize != 0*pComprData*pComprData != NULL**pComprData != NULL*comprDataSize != 0**comprDataSize != 0*inflatedBytes*call to _getIsProfilingPrivileged*bRmProfilingPrivileged*pGpu->userRegisterAccessMapSize == 0**pGpu->userRegisterAccessMapSize == 0*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_USER_REGISTER_ACCESS_MAP, pParams, sizeof(*pParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, 
pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_USER_REGISTER_ACCESS_MAP, pParams, sizeof(*pParams))*profilingRangesSize*profilingRanges**profilingRanges*profilingRangesArr**profilingRangesArr*NVRM: User Register Access Map unsupported for this chip. **NVRM: User Register Access Map unsupported for this chip. *NVRM: Allocated User Register Access Map of 0x%xB @%p **NVRM: Allocated User Register Access Map of 0x%xB @%p *NVRM: GPU/Platform does not have restricted user register access! Allowing all registers. **NVRM: GPU/Platform does not have restricted user register access! Allowing all registers. *call to gpuInitRegisterAccessMap_IMPL*gpuInitRegisterAccessMap(pGpu, pGpu->pUserRegisterAccessMap, pGpu->userRegisterAccessMapSize, compressedData, compressedSize)**gpuInitRegisterAccessMap(pGpu, pGpu->pUserRegisterAccessMap, pGpu->userRegisterAccessMapSize, compressedData, compressedSize)*NVRM: Failed to initialize unrestricted register access map **NVRM: Failed to initialize unrestricted register access map *call to gpuSetUserRegisterAccessPermissionsInBulk_IMPL*call to gpuIsFullyConstructed*No user register access map available to read**No user register access map available to read*NVRM: Parameter `offset` = %u is out of bounds. **NVRM: Parameter `offset` = %u is out of bounds. *NVRM: Parameter `offset` = %u must be 4-byte aligned. **NVRM: Parameter `offset` = %u must be 4-byte aligned. 
*!osIsRaisedIRQL()**!osIsRaisedIRQL()*(arrSizeBytes & (2 * sizeof(NvU32) - 1)) == 0**(arrSizeBytes & (2 * sizeof(NvU32) - 1)) == 0*call to gpuSetUserRegisterAccessPermissions_IMPL*pOffsetsSizesArr*pGpu->pUserRegisterAccessMap != NULL**pGpu->pUserRegisterAccessMap != NULL*(offset & 3) == 0**(offset & 3) == 0*(size & 3) == 0**(size & 3) == 0*NVRM: %sllowing access to 0x%x-0x%x **NVRM: %sllowing access to 0x%x-0x%x **A*Disa**Disa*NVRM: Byte 0x%x Bit 0x%x through Byte 0x%x Bit 0x%x **NVRM: Byte 0x%x Bit 0x%x through Byte 0x%x Bit 0x%x *bitOffset*bitSize*bitOffset < mapSize**bitOffset < mapSize*(bitOffset+bitSize) <= mapSize**(bitOffset+bitSize) <= mapSize*src/kernel/gpu/gpu_registry.c*NVRM: INSTLOC overrides may not work with large mem systems on GP100+ **src/kernel/gpu/gpu_registry.c**NVRM: INSTLOC overrides may not work with large mem systems on GP100+ *globalOverride*bRegUsesGlobalSurfaceOverrides*GlobalSurfaceOverrides**GlobalSurfaceOverrides*ovBits*RMInstLoc**RMInstLoc*RMInstLoc2**RMInstLoc2*RMInstLoc3**RMInstLoc3*RMInstLoc4**RMInstLoc4*call to _gpuInitGlobalSurfaceOverride*instCacheOverride*NVRM: Ignoring regkeys to place BAR PTE/PDE in SYSMEM **NVRM: Ignoring regkeys to place BAR PTE/PDE in SYSMEM *call to timeoutRegistryOverride*nvBrokenFb**nvBrokenFb*PDB_PROP_GPU_BROKEN_FB*RMInstVPR**RMInstVPR*instVprOverrides*computeModeRules*RmComputeModeRules**RmComputeModeRules*call to threadStateInitRegistryOverrides*bSurpriseRemovalSupported*RMGpuSurpriseRemoval**RMGpuSurpriseRemoval*RMSetSriovMode**RMSetSriovMode*NVRM: Overriding SRIOV Mode to %u **NVRM: Overriding SRIOV Mode to %u *call to rmcfg_IsTURING_CLASSIC_GPUS*NVRM: SRIOV status[%d]. **NVRM: SRIOV status[%d]. *bVgpuGspPluginOffloadEnabled*RMSetClientRMAllocatedCtxBuffer**RMSetClientRMAllocatedCtxBuffer*NVRM: Setting Client RM managed context buffer to %u **NVRM: Setting Client RM managed context buffer to %u *NVRM: Enabled Client RM managed context buffer for zero-FB + SRIOV. 
**NVRM: Enabled Client RM managed context buffer for zero-FB + SRIOV. *RMSplitVasMgmtServerClientRm**RMSplitVasMgmtServerClientRm*bSplitVasManagementServerClientRm*NVRM: Split VAS mgmt between Server/Client RM %u **NVRM: Split VAS mgmt between Server/Client RM %u *RmGpuFabricProbe**RmGpuFabricProbe*bBf3WarBug4040336Enabled*RmDmaAdjustPeerMmioBF3**RmDmaAdjustPeerMmioBF3*RMDebugRusdPolling**RMDebugRusdPolling*pollingRegistryOverride*RMRusdPollingInterval**RMRusdPollingInterval*bPollIntervalOverridden*RmInitMemReuse**RmInitMemReuse*call to gpuresGetByHandle_IMPL**ppGpuResource*pGpuResource->pGpu != NULL*src/kernel/gpu/gpu_resource.c**pGpuResource->pGpu != NULL**src/kernel/gpu/gpu_resource.c*call to gpuresControlSetup_IMPL*shareType*limbs**limbs*pInvokingDeviceRef**pInvokingDeviceRef*refFindAncestorOfType(pParentRef, classId(Device), &pInvokingDeviceRef)**refFindAncestorOfType(pParentRef, classId(Device), &pInvokingDeviceRef)*pInvokingDevice**pInvokingDevice*call to kmigmgrMakeGIReference_IMPL*refClient*refResource*call to kmigmgrAreMIGReferencesSame_IMPL*pParentDeviceAncestorRef**pParentDeviceAncestorRef*pDeviceAncestorRef*pParentDevice*call to rmresShareCallback_IMPL*call to CliGetGpuFromContext*call to rmapiMapGpuCommon*pGpuResourceSrc*call to gpuresCopyConstruct_IMPL*call to _gpuDeleteClassFromClassDBByEngTagClassId*pExternalClassId*(NULL != pEngDesc) || (NULL != pExternalClassId)*src/kernel/gpu/gpu_resource_desc.c**(NULL != pEngDesc) || (NULL != pExternalClassId)**src/kernel/gpu/gpu_resource_desc.c*pClassDB*call to _gpuAddClassToClassDBByEngTagClassId*pClassDescToCopy**pClassDescToCopy*bMatchingClassIdFound*matchingClassIdIndex*newClassDBIndex*bytesToMove*call to portMemMove*classDB*call to gpuGetSuppressedClassList*pSuppressClasses**pSuppressClasses*bSuppressRead*lastClassId*strLength*NVRM: portMemAllocNonPaged failed **NVRM: portMemAllocNonPaged failed *pSaveStr**pSaveStr*SuppressClassList**SuppressClassList*bSuppressClassList*call to 
nvStrToL*pEndStr*nIndex*ppClassDesc**ppClassDesc**pClasses*pClassStatic**pClassStatic*NVRM: num class descriptors: 0x%x **NVRM: num class descriptors: 0x%x *pClassDynamic**pClassDynamic*NVRM: alloc failed: 0x%x **NVRM: alloc failed: 0x%x *(pid != 0)*src/kernel/gpu/gpu_rmapi.c**(pid != 0)**src/kernel/gpu/gpu_rmapi.c*(pData != NULL)**(pData != NULL)*pSmcInfo*(pSmcInfo != NULL)**(pSmcInfo != NULL)*call to _gpuMatchClientPid*pRef*clientRef*call to kmigmgrIsMIGReferenceValid_IMPL*call to _gpuCollectMemInfo*call to _gpuConvertPid*call to _checkSysMemClassValidity*call to _checkVidmemClassValidity*pTargetedHeap*bIsMemProtected*!memdescGetFlag(pMemory->pMemDesc, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)**!memdescGetFlag(pMemory->pMemDesc, MEMDESC_FLAGS_ALLOC_IN_UNPROTECTED_MEMORY)*pPidArray*(pPidArray != NULL)**(pPidArray != NULL)*pPidArrayCount*(pPidArrayCount != NULL)**(pPidArrayCount != NULL)*call to _gpuiIsPidSavedAlready*bClientHasMatchingInstance*elementInClient*NVRM: Maximum PIDs reached. Returning. **NVRM: Maximum PIDs reached. Returning. 
*call to osFindNsPid*notifyIndex < NVA084_NOTIFIERS_MAXCOUNT**notifyIndex < NVA084_NOTIFIERS_MAXCOUNT**pKernelHostVgpuDeviceApi*pCurThread*!(pCurThread->flags & THREAD_STATE_FLAGS_IS_ISR_LOCKLESS)**!(pCurThread->flags & THREAD_STATE_FLAGS_IS_ISR_LOCKLESS)*notifyIndex < NV2080_NOTIFIERS_MAXCOUNT**notifyIndex < NV2080_NOTIFIERS_MAXCOUNT*pSubdevice != NULL**pSubdevice != NULL*localNotifyType*localInfo32*call to _gpuFilterSubDeviceEventInfo*notifyActions**notifyActions*pNotifierMemory*call to notifyFillNotifierMemory*pNotifyType != NULL**pNotifyType != NULL*pInfo32 != NULL**pInfo32 != NULL*NULL != pRmClient**NULL != pRmClient*call to rmclientIsCapableOrAdmin_IMPL*rmclientIsCapableOrAdmin(pRmClient, NV_RM_CAP_SYS_SMC_MONITOR, privLevel)**rmclientIsCapableOrAdmin(pRmClient, NV_RM_CAP_SYS_SMC_MONITOR, privLevel)*call to kmigmgrIsInstanceAttributionIdValid_IMPL*call to kmigmgrGetAttributionIdFromMIGReference_IMPL*kmigmgrGetAttributionIdFromMIGReference(ref) == rcInstanceAttributionId**kmigmgrGetAttributionIdFromMIGReference(ref) == rcInstanceAttributionId*localIdx*Subdevice not found!**Subdevice not found!*newArray**newArray**pGpuResource*call to _gpuGetUserClientCount*pGpu->externalKernelClientCount > 0**pGpu->externalKernelClientCount > 0*src/kernel/gpu/gpu_suspend.c*NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_7 **src/kernel/gpu/gpu_suspend.c**NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_7 *NVRM: Beginning transition from D4 to D0 **NVRM: Beginning transition from D4 to D0 *call to gpuPowerManagementResume*resumeStatus*powerManagementDepth*NVRM: Ending transition from D4 to D0 **NVRM: Ending transition from D4 to D0 *NVRM: End resuming from APM Suspend **NVRM: End resuming from APM Suspend *NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_7 Requested **NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_7 Requested *NVRM: Beginning transition from D0 to D4 **NVRM: Beginning 
transition from D0 to D4 *NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_4 Requested **NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_4 Requested *NVRM: Beginning APM Suspend **NVRM: Beginning APM Suspend *call to gpuPowerManagementEnter*suspendStatus*NVRM: gpuPowerState Saving clocks and throttling them down **NVRM: gpuPowerState Saving clocks and throttling them down *NVRM: Ending transition from D0 to D4 **NVRM: Ending transition from D0 to D4 *NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_3 **NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_3 *NVRM: Beginning transition from %s to D0 **NVRM: Beginning transition from %s to D0 *GC6**GC6*D3**D3*NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_4 **NVRM: gpuPowerState Transitioning from NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_4 *NVRM: Beginning resume from %s **NVRM: Beginning resume from %s *APM Suspend**APM Suspend*call to _gpuPollCFGAndCheckD3Hot*NVRM: Polling BAR0 or BAR firewall timeout **NVRM: Polling BAR0 or BAR firewall timeout *NVRM: Ending transition from %s to D0 **NVRM: Ending transition from %s to D0 *NVRM: Ending resume from %s **NVRM: Ending resume from %s *NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_3 Requested **NVRM: gpuPowerState NV2080_CTRL_GPU_SET_POWER_STATE_GPU_LEVEL_3 Requested *NVRM: Beginning transition from D0 to %s **NVRM: Beginning transition from D0 to %s *NVRM: Beginning transition to %s **NVRM: Beginning transition to %s *NVRM: Ending transition from D0 to %s **NVRM: Ending transition from D0 to %s *NVRM: Ending transition to %s **NVRM: Ending transition to %s *call to kbifPollDeviceOnBus_IMPL*kbifPollDeviceOnBus(pGpu, pKernelBif)**kbifPollDeviceOnBus(pGpu, pKernelBif)*call to kbifPollBarFirewallDisengage_DISPATCH*kbifPollBarFirewallDisengage_HAL(pGpu, pKernelBif)**kbifPollBarFirewallDisengage_HAL(pGpu, pKernelBif)*call to 
gpuCheckGc6inD3Hot_IMPL*call to clResumeBridge_IMPL*call to kmemsysProgramSysmemFlushBuffer_DISPATCH*gspSrInitArgs*call to kgspPopulateGspRmInitArgs_IMPL*call to kgspWaitForGfwBootOk_DISPATCH*call to _gpuWaitForGfwBootOkFailureStore*call to kpmuInitLibosLoggingStructures_IMPL*NVRM: cannot init libOS PMU logging structures: 0x%x **NVRM: cannot init libOS PMU logging structures: 0x%x *call to _gpuInitLibosLoggingStructuresFailureStore*call to tmrSetCurrentTime_DISPATCH*call to libosLogUpdateTimerDelta*call to kgspPrepareForBootstrap_DISPATCH*NVRM: GSP boot preparation failed at resume (bootMode 0x%x): 0x%x **NVRM: GSP boot preparation failed at resume (bootMode 0x%x): 0x%x *call to _gpuGspPrepareForBootstrapFailureStore*call to kgspBootstrap_DISPATCH*NVRM: GSP boot failed at resume (bootMode 0x%x): 0x%x **NVRM: GSP boot failed at resume (bootMode 0x%x): 0x%x *call to _gpuGspBootstrapFailureStore*NVRM: GSP-RM proxy boot command failed during resume. **NVRM: GSP-RM proxy boot command failed during resume. *call to _gpuBootGspRmProxyFailureStore*call to gpuPowerManagementResumePreLoadPhysical_56cd7a*gpuPowerManagementResumePreLoadPhysical(pGpu, oldLevel, flags)**gpuPowerManagementResumePreLoadPhysical(pGpu, oldLevel, flags)*PDB_PROP_GPU_VGA_ENABLED*gpuStateLoad(pGpu, IS_GPU_GC6_STATE_EXITING(pGpu) ? GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION | GPU_STATE_FLAGS_GC6_TRANSITION : GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION)**gpuStateLoad(pGpu, IS_GPU_GC6_STATE_EXITING(pGpu) ? 
GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION | GPU_STATE_FLAGS_GC6_TRANSITION : GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION)*call to gpuPowerManagementResumePostLoadPhysical_56cd7a*gpuPowerManagementResumePostLoadPhysical(pGpu)**gpuPowerManagementResumePostLoadPhysical(pGpu)*NVRM: Adapter now in D0 state **NVRM: Adapter now in D0 state *call to kgspFreeSuspendResumeData_DISPATCH*call to memmgrFreeFbsrMemory_KERNEL*call to kgspPrepareSuspendResumeData_DISPATCH*call to _gpuGspPrepareSuspendResumeDataFailureStore*call to gpuPowerManagementEnterPreUnloadPhysical_56cd7a*gpuPowerManagementEnterPreUnloadPhysical(pGpu)**gpuPowerManagementEnterPreUnloadPhysical(pGpu)*gpuStateUnload(pGpu, IS_GPU_GC6_STATE_ENTERING(pGpu) ? GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION | GPU_STATE_FLAGS_GC6_TRANSITION : GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION)**gpuStateUnload(pGpu, IS_GPU_GC6_STATE_ENTERING(pGpu) ? GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION | GPU_STATE_FLAGS_GC6_TRANSITION : GPU_STATE_FLAGS_PRESERVING | GPU_STATE_FLAGS_PM_TRANSITION)*call to gpuPowerManagementEnterPostUnloadPhysical_56cd7a*gpuPowerManagementEnterPostUnloadPhysical(pGpu, newLevel)**gpuPowerManagementEnterPostUnloadPhysical(pGpu, newLevel)*NVRM: GSP unload failed at suspend (bootMode 0x%x, newLevel 0x%x): 0x%x **NVRM: GSP unload failed at suspend (bootMode 0x%x, newLevel 0x%x): 0x%x *call to _gpuGspUnloadRmFailureStore*call to kpmuFreeLibosLoggingStructures_IMPL*call to gpuGetNameString_T234D*call to portStringConvertAsciiToUtf16**pTimeout*pTimeout != NULL*src/kernel/gpu/gpu_timeout.c**pTimeout != NULL**src/kernel/gpu/gpu_timeout.c*call to threadStateYieldCpuIfNecessary*call to _checkTimeout*call to threadStateCheckTimeout*call to threadPriorityThrottle*call to threadStateLogTimeout*NVRM: OS elapsed %llx >= %llx **NVRM: OS elapsed %llx >= %llx *pTmrGpu*NVRM: OS timeout == 0 **NVRM: OS timeout == 0 *pTimeout->pTmrGpu != 
[Extraction artifact: a flattened string-table dump of NVRM assert/debug messages and identifiers, covering the timer/timeout paths, RUSD user shared data, UUID generation, vGPU device info, video event tracing, FECS context-switch logging, and graphics context buffer allocation. Source files referenced in the dump include src/kernel/gpu/gpu_user_shared_data.c, src/kernel/gpu/gpu_uuid.c, src/kernel/gpu/gpu_vgpu.c, src/kernel/gpu/gpuvideo/videoeventlist.c, src/kernel/gpu/gr/arch/ampere/kgrmgr_ga100.c, src/kernel/gpu/gr/arch/blackwell/kgrmgr_gb100.c, src/kernel/gpu/gr/arch/blackwell/kgrmgr_gb10b.c, src/kernel/gpu/gr/arch/maxwell/kgraphics_gm200.c, src/kernel/gpu/gr/arch/pascal/kgraphics_gp100.c, src/kernel/gpu/gr/arch/turing/kgraphics_tu102.c, src/kernel/gpu/gr/fecs_event_list.c, and src/kernel/gpu/gr/kernel_graphics.c. No recoverable prose or code structure survives.]
&classNum)**kgraphicsGetClassByType(pGpu, pKernelGraphics, objectType, &classNum)*pRmApi->AllocWithHandle(pRmApi, hClientId, KGRAPHICS_CHANNEL_HANDLE_CHANNELID, KGRAPHICS_CHANNEL_HANDLE_3DOBJ, classNum, NULL, 0)**pRmApi->AllocWithHandle(pRmApi, hClientId, KGRAPHICS_CHANNEL_HANDLE_CHANNELID, KGRAPHICS_CHANNEL_HANDLE_3DOBJ, classNum, NULL, 0)*call to GR_CTX_BUFFER_FROM32*NV_ENUM_IS(GR_CTX_BUFFER, buf)**NV_ENUM_IS(GR_CTX_BUFFER, buf)**ctxAttr*NVRM: bad requested object type : %d **NVRM: bad requested object type : %d *gpuGetClassList(pGpu, &numClasses, NULL, ENG_GR(pKernelGraphics->instance))**gpuGetClassList(pGpu, &numClasses, NULL, ENG_GR(pKernelGraphics->instance))*pClassesSupported**pClassesSupported*pClassesSupported != NULL**pClassesSupported != NULL*call to kgrmgrGetGrObjectType_IMPL*NVRM: classNum=0x%08x, type=%d **NVRM: classNum=0x%08x, type=%d *NVRM: gpu:%d isBC=%d **NVRM: gpu:%d isBC=%d *call to vaListRemoveVa*vaListRemoveVa(pVaList, pVAS)**vaListRemoveVa(pVaList, pVAS)*(NV_OK == status) || (NV_ERR_OBJECT_NOT_FOUND == status)**(NV_OK == status) || (NV_ERR_OBJECT_NOT_FOUND == status)*NVRM: Freed ctx buffer mapping at VA 0x%llx **NVRM: Freed ctx buffer mapping at VA 0x%llx *!gvaspaceIsExternallyOwned(pGVAS)**!gvaspaceIsExternallyOwned(pGVAS)*call to kgraphicsIsPerSubcontextContextHeaderSupported*call to vaListMapCount*!bAlignSize**!bAlignSize*vaddrCached*dmaAllocMapping_HAL(pGpu, GPU_GET_DMA(pGpu), pVAS, pMemDesc, &vaddr, mapFlags, 0, NULL, KMIGMGR_SWIZZID_INVALID)**dmaAllocMapping_HAL(pGpu, GPU_GET_DMA(pGpu), pVAS, pMemDesc, &vaddr, mapFlags, 0, NULL, KMIGMGR_SWIZZID_INVALID)*vaddr == vaddrCached**vaddr == vaddrCached*NVRM: New ctx buffer mapping at VA 0x%llx **NVRM: New ctx buffer mapping at VA 0x%llx *vaListAddVa(pVaList, pVAS, vaddr)**vaListAddVa(pVaList, pVAS, vaddr)*bSizeAligned**bSizeAligned*call to kgrctxGlobalCtxBufferToFifoEngineId_IMPL*kgrctxGlobalCtxBufferToFifoEngineId(buffId, &fifoEngineId)**kgrctxGlobalCtxBufferToFifoEngineId(buffId, 
&fifoEngineId)*buffSize*NVRM: Could not map %s Buffer as buffer is not supported *call to GR_GLOBALCTX_BUFFER_TO_STRING**NVRM: Could not map %s Buffer as buffer is not supported *NVRM: Could not map %s Buffer, no memory allocated for it! **NVRM: Could not map %s Buffer, no memory allocated for it! *call to kgraphicsMapCtxBuffer_IMPL*globalCtxBufferVaList**globalCtxBufferVaList*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pVAS, &pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], bSizeAligned, bIsReadOnly)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pVAS, &pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], bSizeAligned, bIsReadOnly)*vaListFindVa(&pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], pVAS, &vaddr)**vaListFindVa(&pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], pVAS, &vaddr)*NVRM: GPU:%d %s Buffer PA @ 0x%llx VA @ 0x%llx of Size 0x%llx **NVRM: GPU:%d %s Buffer PA @ 0x%llx VA @ 0x%llx of Size 0x%llx *call to kgraphicsAllocGlobalCtxBuffers_DISPATCH*kgraphicsAllocGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid)**kgraphicsAllocGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid)*call to _kgraphicsMapGlobalCtxBuffer*call to fecsBufferIsMapped*call to fecsBufferMap*call to GR_GLOBALCTX_BUFFER_FROM32*NV_ENUM_IS(GR_GLOBALCTX_BUFFER, buf)**NV_ENUM_IS(GR_GLOBALCTX_BUFFER, buf)*maxCtxBufSize**maxCtxBufSize*gfxCapabilites*call to _kgraphicsInternalClientAlloc*deviceGetByHandle(pClient, hDevice, &pDevice)**deviceGetByHandle(pClient, hDevice, &pDevice)*kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(grIdx), &localRmEngineType)**kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(grIdx), &localRmEngineType)*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_CAPS, pParams, sizeof(pParams->caps))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_CAPS, pParams, 
sizeof(pParams->caps))*staticInfo**engineInfo*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_FLOORSWEEPING_MASKS, pParams, sizeof(pParams->floorsweepingMasks))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_FLOORSWEEPING_MASKS, pParams, sizeof(pParams->floorsweepingMasks))**floorsweepingMasks*call to kgrmgrSetLegacyKgraphicsStaticInfo_IMPL*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_GLOBAL_SM_ORDER, pParams, sizeof(pParams->globalSmOrder))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_GLOBAL_SM_ORDER, pParams, sizeof(pParams->globalSmOrder))**globalSmOrder**pPpcMasks*ppcMasks*enginePpcMasks**enginePpcMasks*zcullInfo*engineZcullInfo**engineZcullInfo*ropInfo*engineRopInfo**engineRopInfo**pSmIssueRateModifier**smIssueRateModifier**pSmIssueRateModifierV2**smIssueRateModifierV2**pSmIssueThrottleCtrl**smIssueThrottleCtrl*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_FECS_RECORD_SIZE, pParams, sizeof(pParams->fecsRecordSize))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_FECS_RECORD_SIZE, pParams, sizeof(pParams->fecsRecordSize))**fecsRecordSize**pFecsTraceDefines**fecsTraceDefines*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_PDB_PROPERTIES, pParams, sizeof(pParams->pdbProperties))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_PDB_PROPERTIES, pParams, sizeof(pParams->pdbProperties))*pdbProperties*pdbTable**pdbTable*call to kgraphicsSetPerSubcontextContextHeaderSupported*call to kgraphicsShouldDeferContextInit*kgraphicsInitializeDeferredStaticData(pGpu, pKernelGraphics, hClient, hSubdevice)**kgraphicsInitializeDeferredStaticData(pGpu, pKernelGraphics, hClient, hSubdevice)**pContextBuffersInfo*gpumgrGetBcEnabledStatus(pGpu) != 
bBcState**gpumgrGetBcEnabledStatus(pGpu) != bBcState*call to kmigmgrUseLegacyVgpuPolicy*grCapsBits**grCapsBits*bPerSubCtxheaderSupported*pdbTableParams*NVRM: Profiling support not requested. Disabling ctxsw logging **NVRM: Profiling support not requested. Disabling ctxsw logging *bInternalClientAllocated*bCollectingDeferredStaticData*subdeviceGetByHandle(pClient, hSubdevice, &pSubdevice)**subdeviceGetByHandle(pClient, hSubdevice, &pSubdevice)*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pSubdevice), &ref)**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pSubdevice), &ref)*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_CONTEXT_BUFFERS_INFO, pParams, sizeof(*pParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KGR_GET_CONTEXT_BUFFERS_INFO, pParams, sizeof(*pParams))*call to kgraphicsAllocGrGlobalCtxBuffers_DISPATCH*kgraphicsAllocGrGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid, NULL)**kgraphicsAllocGrGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid, NULL)*phClient != NULL**phClient != NULL*phDevice != NULL**phDevice != NULL*phSubdevice != NULL**phSubdevice != NULL*rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, phClient, phDevice, phSubdevice)**rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, phClient, phDevice, phSubdevice)*serverutilGenResourceHandle(*phClient, &hSubscription)**serverutilGenResourceHandle(*phClient, &hSubscription)*pRmApi->AllocWithHandle(pRmApi, *phClient, *phSubdevice, hSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))**pRmApi->AllocWithHandle(pRmApi, *phClient, *phSubdevice, hSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))*pGrIndex**pGrIndex*call to kgraphicsIsBug4208224WARNeeded_DISPATCH*pmaQueryConfigs(pHeap->pPmaObject, &pmaConfig)**pmaQueryConfigs(pHeap->pPmaObject, &pmaConfig)*call to kgraphicsCreateGoldenImageChannel_IMPL*kgraphicsCreateGoldenImageChannel(pGpu, 
pKernelGraphics)**kgraphicsCreateGoldenImageChannel(pGpu, pKernelGraphics)*call to kgraphicsInitializeBug4208224WAR_DISPATCH*call to kgraphicsLoadStaticInfo_DISPATCH*kgraphicsLoadStaticInfo(pGpu, pKernelGraphics, KMIGMGR_SWIZZID_INVALID)**kgraphicsLoadStaticInfo(pGpu, pKernelGraphics, KMIGMGR_SWIZZID_INVALID)*kgraphicsAllocGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, GPU_GFID_PF)**kgraphicsAllocGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, GPU_GFID_PF)*call to fecsBufferTeardown**pGlobalCtxBuffers*call to kgraphicsTeardownBug4208224State_DISPATCH*call to kgraphicsFreeGlobalCtxBuffers_IMPL**instance*kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), _kgraphicsPostSchedulingEnableHandler, (void *)((NvUPtr)(pKernelGraphics->instance)), NULL, NULL)**kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), _kgraphicsPostSchedulingEnableHandler, (void *)((NvUPtr)(pKernelGraphics->instance)), NULL, NULL)*NVRM: Adding class ID 0x%x to ClassDB **NVRM: Adding class ID 0x%x to ClassDB *call to gpuAddClassToClassDBByEngTagClassId_IMPL*gpuAddClassToClassDBByEngTagClassId(pGpu, ENG_GR(pKernelGraphics->instance), classNum)**gpuAddClassToClassDBByEngTagClassId(pGpu, ENG_GR(pKernelGraphics->instance), classNum)*nGlobalCtx*call to kgraphicsSetBug4208224WAREnabled*call to fecsCtxswLoggingTeardown*call to kgraphicsInvalidateStaticInfo_IMPL*localBuf*call to memdescOverrideInstLocList*call to GR_CTX_BUFFER_TO_STRING*instlocOverrides**instlocOverrides*call to fecsCtxswLoggingInit*fecsCtxswLoggingInit(pGpu, pKernelGraphics, &pKernelGraphics->pFecsTraceInfo)**fecsCtxswLoggingInit(pGpu, pKernelGraphics, &pKernelGraphics->pFecsTraceInfo)*call to _kgraphicsInitRegistryOverrides*RmForceGrScrubberChannel**RmForceGrScrubberChannel*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, kchannelGetRunlistId(pKernelChannel), ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)*src/kernel/gpu/gr/kernel_graphics_context.c**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, 
ENGINE_INFO_TYPE_RUNLIST, kchannelGetRunlistId(pKernelChannel), ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)**src/kernel/gpu/gr/kernel_graphics_context.c*NVRM: Channel destroyed but never bound, scheduled, or had a descendant object created **NVRM: Channel destroyed but never bound, scheduled, or had a descendant object created *call to kgrctxShouldCleanup_KERNEL*call to kgrctxUnmapBuffers_KERNEL*call to kgrctxFreeAssociatedCtxBuffers_IMPL*ctxPatchBuffer*pmCtxswBuffer*zcullCtxswBuffer*preemptCtxswBuffer*spillCtxswBuffer*betaCBCtxswBuffer*pagepoolCtxswBuffer*rtvCbCtxswBuffer*setupCtxswBuffer*call to shrkgrctxDestructUnicast_IMPL*pKernelGraphicsContextShared*listCount(&pKernelGraphicsContextShared->activeDebuggers) == 0**listCount(&pKernelGraphicsContextShared->activeDebuggers) == 0*vaListInit(&pKernelGraphicsContextUnicast->globalCtxBufferVaList[i])**vaListInit(&pKernelGraphicsContextUnicast->globalCtxBufferVaList[i])*vaListInit(&pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->zcullCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->zcullCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->preemptCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->preemptCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->spillCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->spillCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->betaCBCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->betaCBCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->pagepoolCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->pagepoolCtxswBuffer.vAddrList)*vaListInit(&pKernelGraphicsContextUnicast->rtvCbCtxswB
uffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->rtvCbCtxswBuffer.vAddrList)*call to kgraphicsGetPeFiroBufferEnabled*vaListInit(&pKernelGraphicsContextUnicast->setupCtxswBuffer.vAddrList)**vaListInit(&pKernelGraphicsContextUnicast->setupCtxswBuffer.vAddrList)*bSupportsPerSubctxHeader*call to shrkgrctxConstructUnicast_IMPL*shrkgrctxConstructUnicast(pGpu, pKernelGraphicsContextShared, pKernelGraphicsContext, pKernelGraphics, pKernelGraphicsContextUnicast)**shrkgrctxConstructUnicast(pGpu, pKernelGraphicsContextShared, pKernelGraphicsContext, pKernelGraphics, pKernelGraphicsContextUnicast)*call to shrkgrctxTeardown_IMPL*call to kchannelCheckIsAdmin_IMPL*pKernelGraphicsContextUnicast->channelObjects != 0**pKernelGraphicsContextUnicast->channelObjects != 0*NVRM: No active GR objects to free for Class 0x%x **NVRM: No active GR objects to free for Class 0x%x *call to gpuChangeComputeModeRefCount_IMPL*countIdx*NVRM: Unrecognized graphics class 0x%x **NVRM: Unrecognized graphics class 0x%x *objectCounts**objectCounts*pKernelGraphicsContextUnicast->objectCounts[countIdx] > 0**pKernelGraphicsContextUnicast->objectCounts[countIdx] > 0*NVRM: Class 0x%x allocated. %d objects allocated **NVRM: Class 0x%x allocated. 
%d objects allocated *call to kgrctxGetRegisterAccessMapId_DISPATCH*call to kgrobjGetKernelGraphicsContext*kgrctxGetUnicast(pGpu, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject), &pKernelGraphicsContextUnicast)**kgrctxGetUnicast(pGpu, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject), &pKernelGraphicsContextUnicast)*call to kgrctxUnmapMainCtxBuffer_IMPL*call to kgrctxFreeMainCtxBuffer_IMPL*call to kgraphicsUnmapCtxBuffer_IMPL*call to vaListGetRefCount*call to kgrctxUnmapCtxPmBuffer_IMPL*call to kgrctxUnmapGlobalCtxBuffer_IMPL*bRelease3d*call to kgrctxUnmapGlobalCtxBuffers_IMPL*call to kgrctxFreePatchBuffer_IMPL*call to kgrctxFreePmBuffer_IMPL*call to kgrctxFreeZcullBuffer_IMPL*call to kgrctxFreeCtxPreemptionBuffers_IMPL*call to kgrctxFreeLocalGlobalCtxBuffers_IMPL*kmemsysCacheOp_HAL(pGpu, pKernelMemorySystem, NULL, FB_CACHE_VIDEO_MEMORY, FB_CACHE_EVICT)**kmemsysCacheOp_HAL(pGpu, pKernelMemorySystem, NULL, FB_CACHE_VIDEO_MEMORY, FB_CACHE_EVICT)*!memdescHasSubDeviceMemDescs(pMemDesc)**!memdescHasSubDeviceMemDescs(pMemDesc)*NVRM: Attempt to free null pm ctx buffer pointer?? **NVRM: Attempt to free null pm ctx buffer pointer?? *bKGrPmCtxBufferInitialized*NVRM: Attempt to free null ctx patch buffer pointer, skipped! **NVRM: Attempt to free null ctx patch buffer pointer, skipped! *bKGrPatchCtxBufferInitialized*NVRM: call to free zcull ctx buffer not RM managed, skipped! **NVRM: call to free zcull ctx buffer not RM managed, skipped! 
*pMainCtxBuffer**pMainCtxBuffer*bKGrMainCtxBufferInitialized*NVRM: Unmapping %s from VA @ 0x%llx **NVRM: Unmapping %s from VA @ 0x%llx *NVRM: Buffer for %s already unmapped **NVRM: Buffer for %s already unmapped *localCtxBuffer*call to kgraphicsIsRtvCbSupported*pCtxBufferMemDesc*call to kgraphicsGetInstance*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), NULL) == NV_OK**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), NULL) == NV_OK*call to kgrctxUnmapAssociatedCtxBuffers_IMPL*NULL != pKernelChannel**NULL != pKernelChannel*pKernelGraphicsContext != NULL**pKernelGraphicsContext != NULL*bProfilingEnabledVgpuGuest*call to kgrctxShouldPreAllocPmBuffer_PF*call to kgrctxUnmapCtxZcullBuffer_IMPL*call to kgrctxUnmapCtxPreemptionBuffers_IMPL*pVAddrList**pVaList*subdeviceGetByInstance( RES_GET_CLIENT(pKernelChannel), RES_GET_HANDLE(pDevice), gpumgrGetSubDeviceInstanceFromGpu(pGpu), &pSubdevice)**subdeviceGetByInstance( RES_GET_CLIENT(pKernelChannel), RES_GET_HANDLE(pDevice), gpumgrGetSubDeviceInstanceFromGpu(pGpu), &pSubdevice)*pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc != NULL**pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc != NULL*call to kgrctxAllocPmBuffer_IMPL*kgrctxAllocPmBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)**kgrctxAllocPmBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)**pScopeRef*pLoopKernelChannel*pLoopKernelChannel != NULL**pLoopKernelChannel != NULL*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc, pLoopKernelChannel->pVAS, &pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, NV_FALSE, NV_FALSE)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc, pLoopKernelChannel->pVAS, &pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, NV_FALSE, NV_FALSE)*call to 
kgrctxPrepareInitializeCtxBuffer_IMPL*kgrctxPrepareInitializeCtxBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel, NV2080_CTRL_GPU_PROMOTE_CTX_BUFFER_ID_PM, ¶ms.promoteEntry[0], &bInitialize)**kgrctxPrepareInitializeCtxBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel, NV2080_CTRL_GPU_PROMOTE_CTX_BUFFER_ID_PM, ¶ms.promoteEntry[0], &bInitialize)*call to kgrctxPreparePromoteCtxBuffer_IMPL*kgrctxPreparePromoteCtxBuffer(pGpu, pKernelGraphicsContext, pKernelChannel, NV2080_CTRL_GPU_PROMOTE_CTX_BUFFER_ID_PM, ¶ms.promoteEntry[0], &bPromote)**kgrctxPreparePromoteCtxBuffer(pGpu, pKernelGraphicsContext, pKernelChannel, NV2080_CTRL_GPU_PROMOTE_CTX_BUFFER_ID_PM, ¶ms.promoteEntry[0], &bPromote)*bInitialize || bPromote**bInitialize || bPromote*pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pSubdevice), RES_GET_HANDLE(pSubdevice), NV2080_CTRL_CMD_GPU_PROMOTE_CTX, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pSubdevice), RES_GET_HANDLE(pSubdevice), NV2080_CTRL_CMD_GPU_PROMOTE_CTX, ¶ms, sizeof(params))*call to kgrctxMarkCtxBufferInitialized_IMPL*vaListGetRefCount(&pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList, pLoopKernelChannel->pVAS, &refCount)**vaListGetRefCount(&pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList, pLoopKernelChannel->pVAS, &refCount)*call to vaListSetRefCount*vaListSetRefCount(&pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, pLoopKernelChannel->pVAS, refCount)**vaListSetRefCount(&pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, pLoopKernelChannel->pVAS, refCount)*call to kgrctxGetGlobalContextBufferInternalId_IMPL*kgrctxGetGlobalContextBufferInternalId(externalId, &internalId)**kgrctxGetGlobalContextBufferInternalId(externalId, &internalId)*pKCtxBuffers**pKCtxBuffers*!"Unrecognized promote ctx enum"**!"Unrecognized promote ctx enum"*vaListFindVa(pVaList, pKernelChannel->pVAS, &vaddr)**vaListFindVa(pVaList, pKernelChannel->pVAS, &vaddr)*gpuVirtAddr*pEngCtx->pMemDesc != 
NULL**pEngCtx->pMemDesc != NULL*pKernelGraphicsContextUnicast->ctxPatchBuffer.pMemDesc != NULL**pKernelGraphicsContextUnicast->ctxPatchBuffer.pMemDesc != NULL*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pKernelChannel->pVAS, &pEngCtx->vaList, NV_FALSE, NV_FALSE)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pKernelChannel->pVAS, &pEngCtx->vaList, NV_FALSE, NV_FALSE)*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->ctxPatchBuffer.pMemDesc, pKernelChannel->pVAS, &pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList, NV_FALSE, NV_FALSE)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->ctxPatchBuffer.pMemDesc, pKernelChannel->pVAS, &pKernelGraphicsContextUnicast->ctxPatchBuffer.vAddrList, NV_FALSE, NV_FALSE)*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc, pKernelChannel->pVAS, &pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, NV_FALSE, NV_FALSE)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pKernelGraphicsContextUnicast->pmCtxswBuffer.pMemDesc, pKernelChannel->pVAS, &pKernelGraphicsContextUnicast->pmCtxswBuffer.vAddrList, NV_FALSE, NV_FALSE)*call to kgrctxMapGlobalCtxBuffer_IMPL*bAcquire3d*call to kgrctxMapGlobalCtxBuffers_IMPL*call to kgraphicsIsGlobalCtxBufferSizeAligned_IMPL*kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pVAS, &pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], kgraphicsIsGlobalCtxBufferSizeAligned(pGpu, pKernelGraphics, buffId), bIsReadOnly)**kgraphicsMapCtxBuffer(pGpu, pKernelGraphics, pMemDesc, pVAS, &pKernelGraphicsContextUnicast->globalCtxBufferVaList[buffId], kgraphicsIsGlobalCtxBufferSizeAligned(pGpu, pKernelGraphics, buffId), bIsReadOnly)*NVRM: %s Buffer could not be mapped **NVRM: %s Buffer could not be mapped *kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject) != NULL**kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject) != NULL*call to 
kgrctxIsMainContextAllocated_IMPL*call to kgrctxAllocMainCtxBuffer_IMPL*kgrctxAllocMainCtxBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)**kgrctxAllocMainCtxBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), pKernelGraphicsContextUnicast->pMainCtxBuffer)**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), pKernelGraphicsContextUnicast->pMainCtxBuffer)*call to kgrctxAllocPatchBuffer_IMPL*kgrctxAllocPatchBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)**kgrctxAllocPatchBuffer(pGpu, pKernelGraphicsContext, pKernelGraphics, pKernelChannel)*call to kgrctxShouldPreAllocPmBuffer_DISPATCH*kgrctxAllocPmBuffer(pGpu, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject), pKernelGraphics, pChannelDescendant->pKernelChannel)**kgrctxAllocPmBuffer(pGpu, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject), pKernelGraphics, pChannelDescendant->pKernelChannel)*kgraphicsAllocGrGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject))**kgraphicsAllocGrGlobalCtxBuffers_HAL(pGpu, pKernelGraphics, gfid, kgrobjGetKernelGraphicsContext(pGpu, pKernelGraphicsObject))*memdescCreate(ppMemDesc, pGpu, size, RM_PAGE_SIZE, NV_TRUE, ADDR_UNKNOWN, pAttr->cpuAttr, flags)**memdescCreate(ppMemDesc, pGpu, size, RM_PAGE_SIZE, NV_TRUE, ADDR_UNKNOWN, pAttr->cpuAttr, flags)*memmgrSetMemDescPageSize_HAL(pGpu, pMemoryManager, *ppMemDesc, AT_GPU, RM_ATTR_PAGE_SIZE_4KB)**memmgrSetMemDescPageSize_HAL(pGpu, pMemoryManager, *ppMemDesc, AT_GPU, RM_ATTR_PAGE_SIZE_4KB)*pStaticInfo->pContextBuffersInfo != NULL**pStaticInfo->pContextBuffersInfo != NULL*kgraphicsGetMainCtxBufferSize(pGpu, pKernelGraphics, NV_TRUE, &ctxSize)**kgraphicsGetMainCtxBufferSize(pGpu, pKernelGraphics, NV_TRUE, &ctxSize)*memdescCreate(&pGrCtxBufferMemDesc, 
pGpu, ctxSize, RM_PAGE_SIZE, bIsContiguous, ADDR_UNKNOWN, pAttr->cpuAttr, allocFlags | MEMDESC_FLAGS_OWNED_BY_CURRENT_DEVICE)**memdescCreate(&pGrCtxBufferMemDesc, pGpu, ctxSize, RM_PAGE_SIZE, bIsContiguous, ADDR_UNKNOWN, pAttr->cpuAttr, allocFlags | MEMDESC_FLAGS_OWNED_BY_CURRENT_DEVICE)*pGrCtxBufferMemDesc*memmgrSetMemDescPageSize_HAL(pGpu, pMemoryManager, pGrCtxBufferMemDesc, AT_GPU, RM_ATTR_PAGE_SIZE_4KB)**memmgrSetMemDescPageSize_HAL(pGpu, pMemoryManager, pGrCtxBufferMemDesc, AT_GPU, RM_ATTR_PAGE_SIZE_4KB)*memdescSetCtxBufPool(pGrCtxBufferMemDesc, pCtxBufPool)**memdescSetCtxBufPool(pGrCtxBufferMemDesc, pCtxBufPool)*kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), pGrCtxBufferMemDesc)**kchannelSetEngineContextMemDesc(pGpu, pKernelChannel, ENG_GR(kgraphicsGetInstance(pGpu, pKernelGraphics)), pGrCtxBufferMemDesc)*call to kgrctxGetCtxBuffers_IMPL*pMemDescArray**pMemDescArray*bufferExternalId***pMemDescArray**bufferExternalId*kgrctxGetCtxBuffers(pGpu, pKernelGraphicsContext, pKernelGraphics, gfid, NV_ARRAY_ELEMENTS(pMemDescArray), pMemDescArray, bufferExternalId, &bufferCount, NULL)**kgrctxGetCtxBuffers(pGpu, pKernelGraphicsContext, pKernelGraphics, gfid, NV_ARRAY_ELEMENTS(pMemDescArray), pMemDescArray, bufferExternalId, &bufferCount, NULL)*i != bufferCount**i != bufferCount*call to memdescGetPhysAddrs*pPhysAddrs*kgrctxGetCtxBuffers(pGpu, pKernelGraphicsContext, pKernelGraphics, gfid, NV_ARRAY_ELEMENTS(pMemDescArray), pMemDescArray, bufferExternalId, &memdescCount, &firstGlobalBuffer)**kgrctxGetCtxBuffers(pGpu, pKernelGraphicsContext, pKernelGraphics, gfid, NV_ARRAY_ELEMENTS(pMemDescArray), pMemDescArray, bufferExternalId, &memdescCount, &firstGlobalBuffer)*call to kgrctxFillCtxBufferInfo_IMPL*kgrctxFillCtxBufferInfo(pMemDescArray[i], bufferExternalId[i], bGlobalBuffer, &pCtxBufferInfo[i])**kgrctxFillCtxBufferInfo(pMemDescArray[i], bufferExternalId[i], bGlobalBuffer, &pCtxBufferInfo[i])*call to 
Embedded diagnostic strings (assertion expressions, "call to <function>" traces, variable names, and NVRM log messages) from the NVIDIA open GPU kernel modules, deduplicated and grouped by subsystem:

- Kernel graphics context/object/manager (src/kernel/gpu/gr/kernel_graphics_manager.c, src/kernel/gpu/gr/kernel_graphics_object.c): context-buffer lookup, allocation, and mapping (kgrctxGetMainContextBuffer, kgrctxAllocCtxBuffers, kgrctxMapCtxBuffers, kgrobjPromoteContext), MMU fault bookkeeping (mmuFaultInfoList head/tail bounds checks), VEID span allocation for MIG (e.g. "NVRM: veidCount %d is not aligned to veidSizePerSpan=%d", checks such as veidStart < veidEnd, veidMask != 0x0, (pInUseMask & veidMask) == 0), and GR engine routing (e.g. "NVRM: Failed to route GR using non-GR engine type 0x%x (0x%x)", "NVRM: Cannot give GR Route flag of TYPE_NONE with MIG enabled!").

- SM debugger sessions (src/kernel/gpu/gr/kernel_sm_debugger_session.c, src/kernel/gpu/gr/kernel_sm_debugger_session_ctrl.c): debugger attach/detach and dependency tracking (sessionAddDependency, sessionAddDependant), internal client/device/subdevice handle allocation, access-rights checks (e.g. "NVRM: Current user does not have debugging rights on the compute object. Status = 0x%x"), debug memory access (_nv83deCtrlCmdDebugAccessMemory read/write paths, L2 cache flush, map/unmap into the internal smdbg client), MMU fault lookup/clear, and regops validation (pParams->regOpCount <= NV83DE_CTRL_GPU_EXEC_REG_OPS_MAX_OPS).

- GSP boot across architectures (src/kernel/gpu/gsp/arch/turing/, ampere/, ada/, hopper/, blackwell/): Scrubber and Booter Load/Unload execution (e.g. "NVRM: executing Booter Load, sysmemAddrOfData 0x%llx", "NVRM: failed to execute Booter Unload: WPR2 is still up"), falcon reset and DMA polling, ECC status reporting (NV_PGSP_FALCON_ECC_STATUS_UNCORRECTED_ERR_* PENDING messages), FSP/SEC2 boot handoff (e.g. "NVRM: Timeout waiting for GSP target mask release. ..."), SPDM session establishment and CC key derivation, boot-argument setup (kgspAllocBootArgs, GSP_FMC_BOOT_PARAMS), and WPR2 state checks (kgspIsWpr2Up).

Most entries are assertion conditions stringified at their check sites (e.g. pKernelGsp->pScrubberUcode != NULL, pParams->count <= MAX_ACCESS_MEMORY_OPS) alongside the NVRM-prefixed messages printed when those checks fail.
NULL**pUcode->pCodeMemDesc != NULL*pDataMemDesc*pUcode->pDataMemDesc != NULL**pUcode->pDataMemDesc != NULL*pKernelSec2 != NULL**pKernelSec2 != NULL*ucodePACode*ucodePAData*blDmemDesc*ctxDma*nonSecureCodeOff*nonSecureCodeSize*secureCodeOff*secureCodeSize*codeEntryPoint*call to ksec2GetGenericBlUcode_DISPATCH*ksec2GetGenericBlUcode_HAL(pGpu, pKernelSec2, &pBlUcDesc, &pBlImg)**ksec2GetGenericBlUcode_HAL(pGpu, pKernelSec2, &pBlUcDesc, &pBlImg)*pBlUcDesc*blImgHeader*blSize*call to s_dmemCopyTo_TU102*s_dmemCopyTo_TU102(pGpu, pKernelFlcn, 0, (NvU8 *) &blDmemDesc, sizeof(RM_FLCN_BL_DMEM_DESC))**s_dmemCopyTo_TU102(pGpu, pKernelFlcn, 0, (NvU8 *) &blDmemDesc, sizeof(RM_FLCN_BL_DMEM_DESC))*imemDstBlk*call to s_imemCopyTo_TU102*pBlImg*s_imemCopyTo_TU102(pGpu, pKernelFlcn, imemDstBlk << FALCON_IMEM_BLKSIZE2, pBlImg, blSize, NV_FALSE, virtAddr)**s_imemCopyTo_TU102(pGpu, pKernelFlcn, imemDstBlk << FALCON_IMEM_BLKSIZE2, pBlImg, blSize, NV_FALSE, virtAddr)*pUcode->pImage != NULL**pUcode->pImage != NULL*s_imemCopyTo_TU102(pGpu, pKernelFlcn, 0, pUcode->pImage + pUcode->imemNsPa, pUcode->imemNsSize, NV_FALSE, pUcode->imemNsPa)**s_imemCopyTo_TU102(pGpu, pKernelFlcn, 0, pUcode->pImage + pUcode->imemNsPa, pUcode->imemNsSize, NV_FALSE, pUcode->imemNsPa)*s_imemCopyTo_TU102(pGpu, pKernelFlcn, NV_ALIGN_UP(pUcode->imemNsSize, FLCN_BLK_ALIGNMENT), pUcode->pImage + pUcode->imemSecPa, pUcode->imemSecSize, NV_TRUE, pUcode->imemSecPa)**s_imemCopyTo_TU102(pGpu, pKernelFlcn, NV_ALIGN_UP(pUcode->imemNsSize, FLCN_BLK_ALIGNMENT), pUcode->pImage + pUcode->imemSecPa, pUcode->imemSecSize, NV_TRUE, pUcode->imemSecPa)*s_dmemCopyTo_TU102(pGpu, pKernelFlcn, pUcode->dmemPa, pUcode->pImage + pUcode->dataOffset, pUcode->dmemSize)**s_dmemCopyTo_TU102(pGpu, pKernelFlcn, pUcode->dmemPa, pUcode->pImage + pUcode->dataOffset, pUcode->dmemSize)*RM_IS_ALIGNED(imemDest, FLCN_BLK_ALIGNMENT)**RM_IS_ALIGNED(imemDest, FLCN_BLK_ALIGNMENT)*pSrc != NULL**pSrc != NULL*RM_IS_ALIGNED(sizeBytes, 
FLCN_IMEM_ACCESS_ALIGNMENT)**RM_IS_ALIGNED(sizeBytes, FLCN_IMEM_ACCESS_ALIGNMENT)*pSrcWords**pSrcWords*call to kflcnMaskImemAddr_DISPATCH*wordIdx*RM_IS_ALIGNED(dmemDest, FLCN_DMEM_ACCESS_ALIGNMENT)**RM_IS_ALIGNED(dmemDest, FLCN_DMEM_ACCESS_ALIGNMENT)*RM_IS_ALIGNED(sizeBytes, FLCN_DMEM_ACCESS_ALIGNMENT)**RM_IS_ALIGNED(sizeBytes, FLCN_DMEM_ACCESS_ALIGNMENT)*call to s_prepareForFwsec_TU102*pPreparedCmd*src/kernel/gpu/gsp/arch/turing/kernel_gsp_frts_tu102.c**src/kernel/gpu/gsp/arch/turing/kernel_gsp_frts_tu102.c*pPreparedCmd != NULL**pPreparedCmd != NULL*NVRM: failed to execute FWSEC cmd 0x%x: status 0x%x **NVRM: failed to execute FWSEC cmd 0x%x: status 0x%x *frtsErrCode*NVRM: failed to execute FWSEC for FRTS: FRTS error code 0x%x **NVRM: failed to execute FWSEC for FRTS: FRTS error code 0x%x *wpr2HiVal*NVRM: failed to execute FWSEC for FRTS: no initialized WPR2 found **NVRM: failed to execute FWSEC for FRTS: no initialized WPR2 found *wpr2LoVal*expectedLoVal*NVRM: failed to execute FWSEC for FRTS: WPR2 initialized at an unexpected location: 0x%08x (expected 0x%08x) **NVRM: failed to execute FWSEC for FRTS: WPR2 initialized at an unexpected location: 0x%08x (expected 0x%08x) *NVRM: failed to execute FWSEC for SB: GFW PLM not lowered **NVRM: failed to execute FWSEC for SB: GFW PLM not lowered *NVRM: failed to execute FWSEC for SB: GFW progress not completed **NVRM: failed to execute FWSEC for SB: GFW progress not completed *sbErrCode*NVRM: failed to execute FWSEC for SB: SB error code 0x%x **NVRM: failed to execute FWSEC for SB: SB error code 0x%x *NVRM: (note: VBIOS version %s) *vbiosVersionStr**NVRM: (note: VBIOS version %s) **vbiosVersionStr*pFwsecUcode != NULL**pFwsecUcode != NULL*(cmd != FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS) || (frtsOffset > 0)**(cmd != FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS) || (frtsOffset > 
0)**pFwsecUcode*readVbiosDesc*gfwImageOffset*gfwImageSize*frtsCmd*frtsRegionDesc*frtsRegionOffset4K*frtsRegionSize*frtsRegionMediaType*pCmdBuffer**pCmdBuffer***pCmdBuffer*pSignatures*pUcode->pSignatures != NULL**pUcode->pSignatures != NULL*call to kgspReadUcodeFuseVersion_DISPATCH*ucodeVersionVal*hsSigVersions*sigOffset*bSafe*pMappedImage**pMappedImage*pMappedData**pMappedData*call to s_vbiosPatchInterfaceData*NVRM: failed to prepare interface data for FWSEC cmd 0x%x: 0x%x **NVRM: failed to prepare interface data for FWSEC cmd 0x%x: 0x%x *pIntFaceHdr**pIntFaceHdr*NVRM: too few interface entires found for FWSEC cmd 0x%x **NVRM: too few interface entires found for FWSEC cmd 0x%x *pIntFaceEntry**pIntFaceEntry*pDmemMapper**pDmemMapper*NVRM: failed to find required interface entry for FWSEC cmd 0x%x **NVRM: failed to find required interface entry for FWSEC cmd 0x%x *init_cmd*NVRM: insufficient cmd buffer for FWSEC interface cmd 0x%x **NVRM: insufficient cmd buffer for FWSEC interface cmd 0x%x *src/kernel/gpu/gsp/arch/turing/kernel_gsp_tu102.c*NVRM: GSP: MAILBOX(%d) = 0x%08X **src/kernel/gpu/gsp/arch/turing/kernel_gsp_tu102.c**NVRM: GSP: MAILBOX(%d) = 0x%08X *gspfwSRMeta*sizeOfSuspendResumeData*call to kgspCreateRadix3_IMPL*kgspCreateRadix3(pGpu, pKernelGsp, &pKernelGsp->pSRRadix3Descriptor, NULL, NULL, gspfwSRMeta.sizeOfSuspendResumeData)**kgspCreateRadix3(pGpu, pKernelGsp, &pKernelGsp->pSRRadix3Descriptor, NULL, NULL, gspfwSRMeta.sizeOfSuspendResumeData)*pSRRadix3Descriptor*memdescCreate(&pKernelGsp->pSRMetaDescriptor, pGpu, sizeof(GspFwSRMeta), 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pKernelGsp->pSRMetaDescriptor, pGpu, sizeof(GspFwSRMeta), 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)*memdescMap(pKernelGsp->pSRMetaDescriptor, 0, memdescGetSize(pKernelGsp->pSRMetaDescriptor), NV_TRUE, NV_PROTECT_WRITEABLE, &pVa, &pPriv)**memdescMap(pKernelGsp->pSRMetaDescriptor, 0, 
memdescGetSize(pKernelGsp->pSRMetaDescriptor), NV_TRUE, NV_PROTECT_WRITEABLE, &pVa, &pPriv)**pSRMetaDescriptor**pSRRadix3Descriptor*NVRM: failed to wait for GFW boot complete: 0x%x VBIOS version %s **NVRM: failed to wait for GFW boot complete: 0x%x VBIOS version %s *NVRM: (the GPU may be in a bad state and may need to be reset) **NVRM: (the GPU may be in a bad state and may need to be reset) *mailbox*call to kflcnGetPendingHostInterrupts_IMPL*KGSP service called when no KGSP interrupt pending **KGSP service called when no KGSP interrupt pending *NVRM: GPU is detached, bailing! **NVRM: GPU is detached, bailing! *call to kgspDumpGspLogs_IMPL*call to kgspHealthCheck_DISPATCH*call to kgspRpcRecvEvents_IMPL*call to kgspServiceFatalHwError_DISPATCH*call to kflcnGetEccInterruptMask_DISPATCH*call to kgspEccServiceEvent_DISPATCH*call to kflcnIntrRetrigger_DISPATCH*call to crashcatEngineGetNextCrashReport_IMPL*call to crashcatReportIsWatchdog_V1*NVRM: Assign a CrashcatReport to pWatchdogReport **NVRM: Assign a CrashcatReport to pWatchdogReport **pWatchdogReport*call to kgspCrashCatReportImpactsGspRm*bHealthy*NVRM: ****************************** GSP-CrashCat Report ******************************* **NVRM: ****************************** GSP-CrashCat Report ******************************* *call to kgspPrintGspBinBuildId_IMPL*call to crashcatReportLog_IMPL*call to kgspPostCrashcatReportToNocat_IMPL*call to kgspInitNocatData_IMPL*nocatData*call to kgspLogRpcDebugInfoToProtobuf*call to kgspPostNocatData_IMPL*rpcHistory**rpcHistory*call to kgspLogRpcDebugInfo*call to gpuCheckEccCounts_DISPATCH*NVRM: ********************************************************************************** **NVRM: ********************************************************************************** *GSP timed out. Triggering TDR.**GSP timed out. 
Triggering TDR.*call to crashcatReportSourceContainment_DISPATCH*containment*call to kmemsysGetUsableFbSize_DISPATCH*kmemsysGetUsableFbSize_HAL(pGpu, pKernelMemorySystem, &pWprMeta->fbSize)**kmemsysGetUsableFbSize_HAL(pGpu, pKernelMemorySystem, &pWprMeta->fbSize)*call to kdispGetVgaWorkspaceBase_DISPATCH*vgaWorkspaceOffset*call to memmgrReadMmuLock_DISPATCH*memmgrReadMmuLock_HAL(pGpu, pMemoryManager, &bIsMmuLockValid, &mmuLockLo, &mmuLockHi)**memmgrReadMmuLock_HAL(pGpu, pMemoryManager, &bIsMmuLockValid, &mmuLockLo, &mmuLockHi)*vbiosReservedOffset*gspFwWprEnd*bootBinOffset*gspFwOffset*gspFwHeapOffset*gspFwWprStart*nonWprHeapOffset*gspFwRsvdStart*bootCount*verified*call to kgspPrepareForFwsecSb_DISPATCH*NVRM: failed to prepare for FWSEC-SB for PreOsApps during driver unload: 0x%x **NVRM: failed to prepare for FWSEC-SB for PreOsApps during driver unload: 0x%x *FWSEC-SB prep failed**FWSEC-SB prep failed*call to kgspExecuteFwsec_DISPATCH*NVRM: failed to execute FWSEC-SB for PreOsApps during driver unload: 0x%x **NVRM: failed to execute FWSEC-SB for PreOsApps during driver unload: 0x%x *FWSEC-SB failed**FWSEC-SB failed*call to kgspExecuteBooterUnloadIfNeeded_DISPATCH*call to _kgspGetBooterUnloadArgs*unexpected GSP unload mode**unexpected GSP unload mode*call to kgspExecuteScrubberIfNeeded_DISPATCH*kgspExecuteScrubberIfNeeded_HAL(pGpu, pKernelGsp)**kgspExecuteScrubberIfNeeded_HAL(pGpu, pKernelGsp)*pPreparedFwsecCmd*pKernelGsp->pPreparedFwsecCmd != NULL**pKernelGsp->pPreparedFwsecCmd != NULL*kflcnReset_HAL(pGpu, pKernelFalcon)**kflcnReset_HAL(pGpu, pKernelFalcon)**pPreparedFwsecCmd*call to kgspExecuteBooterLoad_DISPATCH*call to _kgspGetBooterLoadArgs*NVRM: failed to execute Booter Load (ucode for initial boot): 0x%x **NVRM: failed to execute Booter Load (ucode for initial boot): 0x%x *NVRM: Failed to boot GSP. **NVRM: Failed to boot GSP. *unexpected GSP boot mode**unexpected GSP boot mode*NVRM: RISC-V core is not enabled. **NVRM: RISC-V core is not enabled. 
*call to kgspPrepareForFwsecFrts_DISPATCH*queueIdx < NV_PGSP_QUEUE_HEAD__SIZE_1**queueIdx < NV_PGSP_QUEUE_HEAD__SIZE_1**pWprMeta*pWprMetaMappingPriv**pWprMetaMappingPriv***pWprMetaMappingPriv**pWprMetaDescriptor**pLibosInitArgumentsCached*pLibosInitArgumentsMappingPriv**pLibosInitArgumentsMappingPriv***pLibosInitArgumentsMappingPriv**pLibosInitArgumentsDescriptor*pGspArgumentsDescriptor**pGspArgumentsCached*pGspArgumentsMappingPriv**pGspArgumentsMappingPriv***pGspArgumentsMappingPriv**pGspArgumentsDescriptor**pGspUCodeRadix3Descriptor**pSignatureMemdesc*pSysmemHeapDescriptor**pSysmemHeapDescriptor*memdescCreate(&pKernelGsp->pWprMetaDescriptor, pGpu, 0x1000, 0x1000, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)**memdescCreate(&pKernelGsp->pWprMetaDescriptor, pGpu, 0x1000, 0x1000, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)*memdescMap(pKernelGsp->pWprMetaDescriptor, 0, memdescGetSize(pKernelGsp->pWprMetaDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)**memdescMap(pKernelGsp->pWprMetaDescriptor, 0, memdescGetSize(pKernelGsp->pWprMetaDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)*memdescCreate(&pKernelGsp->pLibosInitArgumentsDescriptor, pGpu, LIBOS_MEMORY_REGION_INIT_ARGUMENTS_MAX, LIBOS_MEMORY_REGION_INIT_ARGUMENTS_MAX, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, flags)**memdescCreate(&pKernelGsp->pLibosInitArgumentsDescriptor, pGpu, LIBOS_MEMORY_REGION_INIT_ARGUMENTS_MAX, LIBOS_MEMORY_REGION_INIT_ARGUMENTS_MAX, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, flags)*memdescMap(pKernelGsp->pLibosInitArgumentsDescriptor, 0, memdescGetSize(pKernelGsp->pLibosInitArgumentsDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)**memdescMap(pKernelGsp->pLibosInitArgumentsDescriptor, 0, memdescGetSize(pKernelGsp->pLibosInitArgumentsDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)*memdescCreate(&pKernelGsp->pGspArgumentsDescriptor, pGpu, 0x1000, 0x1000, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, 
flags)**memdescCreate(&pKernelGsp->pGspArgumentsDescriptor, pGpu, 0x1000, 0x1000, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)*memdescMap(pKernelGsp->pGspArgumentsDescriptor, 0, memdescGetSize(pKernelGsp->pGspArgumentsDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)**memdescMap(pKernelGsp->pGspArgumentsDescriptor, 0, memdescGetSize(pKernelGsp->pGspArgumentsDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)*RmGspSysmemHeapSizeMB**RmGspSysmemHeapSizeMB*heapSizeMB*memdescCreate(&pKernelGsp->pSysmemHeapDescriptor, pGpu, (NvU64)heapSizeMB << 20, 0, NV_FALSE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, flags)**memdescCreate(&pKernelGsp->pSysmemHeapDescriptor, pGpu, (NvU64)heapSizeMB << 20, 0, NV_FALSE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, flags)*memdescCheckContiguity(pKernelGsp->pSysmemHeapDescriptor, AT_GPU)**memdescCheckContiguity(pKernelGsp->pSysmemHeapDescriptor, AT_GPU)*call to s_getBaseBiosMaxSize_TU102*src/kernel/gpu/gsp/arch/turing/kernel_gsp_vbios_tu102.c**src/kernel/gpu/gsp/arch/turing/kernel_gsp_vbios_tu102.c*ppVbiosImg != NULL**ppVbiosImg != NULL*pVbiosImg**pVbiosImg*call to kbifPreOsGlobalErotGrantRequest_DISPATCH*NVRM: ERoT Req/Grant for EEPROM access failed, status=%u **NVRM: ERoT Req/Grant for EEPROM access failed, status=%u *call to s_romImgRead16*romSig*call to s_romImgFindPciHeader_TU102*s_romImgFindPciHeader_TU102(&src, &pciOffset)**s_romImgFindPciHeader_TU102(&src, &pciOffset)*NVRM: did not find valid ROM signature **NVRM: did not find valid ROM signature *call to s_locateExpansionRoms*NVRM: failed to locate expansion ROMs: 0x%x **NVRM: failed to locate expansion ROMs: 0x%x *NVRM: expansion ROM has exceedingly large size: 0x%x **NVRM: expansion ROM has exceedingly large size: 0x%x *biosSize*pImageDwords*biosSizeAligned*call to s_promRead32*call to s_promRead08*expansionRomOffset*call to kgspFreeVbiosImg*currSrc*pciBlck*call to s_romImgRead32*pciDataSig*call to 
s_romImgRead8*bIsLastImage*imgLen*subImgLen*extSrc*nvPciDataExtSig*blockOffset*extRomOffset*baseRomSize*currBlock*pBiosSize*pExpansionRomOffset*pIfrSize*pIfrSize != NULL**pIfrSize != NULL*fixed0*fixed1*fixed2*ifrVersion*extendedOffset*imageOffset*ifrTotalDataSize*flashStatusOffset*romDirectoryOffset*romDirectorySig*NVRM: Error: ROM Directory not found = 0x%08x. **NVRM: Error: ROM Directory not found = 0x%08x. *NVRM: Error: IFR version not supported = 0x%08x. **NVRM: Error: IFR version not supported = 0x%08x. *NV_IS_ALIGNED(imageOffset, 4)**NV_IS_ALIGNED(imageOffset, 4)*call to s_romImgReadGeneric*pStatus != NULL**pStatus != NULL*sizeBytes <= 4**sizeBytes <= 4*byteIndex*bReadWord1*pSrc->pGpu != NULL**pSrc->pGpu != NULL**byte*retValue**pBuildIdSection*pBuildIdNoteHeader**pBuildIdNoteHeader*call to portStringBufferToHex*buildIdString**buildIdString*src/kernel/gpu/gsp/kernel_gsp.c*NVRM: GSP bin buildId: %s **src/kernel/gpu/gsp/kernel_gsp.c**NVRM: GSP bin buildId: %s *call to crashcatReportRa_V1*call to crashcatReportXcause_V1*call to crashcatReportXtval_V1*call to crashcatReportLogToProtobuf_V1*GSP_RPC_HISTORY**GSP_RPC_HISTORY*GSP_RPC_TIMEOUT**GSP_RPC_TIMEOUT*GSP_RPC_PERF**GSP_RPC_PERF*call to prbSetupDclMsg*prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_KGSP)**prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_KGSP)*prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_KGSP_RPC_HISTORY)**prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_KGSP_RPC_HISTORY)*rpcEventHistory**rpcEventHistory*prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_KGSP_EVENT_HISTORY)**prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_KGSP_EVENT_HISTORY)*PDB_PROP_GPU_IS_MOBILE*PDB_PROP_GPU_RTD3_GC6_SUPPORTED*PDB_PROP_GPU_RTD3_GC8_SUPPORTED*PDB_PROP_GPU_RTD3_GCOFF_SUPPORTED*PDB_PROP_GPU_IS_UEFI*PDB_PROP_GPU_IS_EFI_INIT*PDB_PROP_GPU_LEGACY_GCOFF_SUPPORTED*wprEndMargin*pWprMeta->sizeOfRadix3Elf > 0**pWprMeta->sizeOfRadix3Elf > 0*NVRM: Adding margin of 0x%llx bytes after the end of WPR2 **NVRM: Adding margin of 0x%llx bytes after the end of WPR2 *call 
to kgspGetPrescrubbedTopFbSize_DISPATCH*maxScrubbedHeapSizeMB*call to kgspGetMinWprHeapSizeMB_DISPATCH*call to kgspGetMaxWprHeapSizeMB_DISPATCH*minGspFwHeapSizeMB < maxGspFwHeapSizeMB**minGspFwHeapSizeMB < maxGspFwHeapSizeMB*NVRM: Firmware heap size clamped to maximum (%uMB) **NVRM: Firmware heap size clamped to maximum (%uMB) *heapSizeMBOverride*NVRM: Firmware heap size clamped to minimum (%uMB) **NVRM: Firmware heap size clamped to minimum (%uMB) *NVRM: Firmware heap size overridden (%uMB) **NVRM: Firmware heap size overridden (%uMB) *call to _kgspCalculateFwHeapSize*call to kgspVgpuFwHeapSize_DISPATCH*memSizeGB*kmemsysGetUsableFbSize_HAL(pGpu, pKernelMemorySystem, &fbSize)**kmemsysGetUsableFbSize_HAL(pGpu, pKernelMemorySystem, &fbSize)*call to kgspGetFwHeapParamOsCarveoutSize_DISPATCH*heapSize*NVRM: GSP FW heap %lluMB of %uGB **NVRM: GSP FW heap %lluMB of %uGB *pParams->engineIdx == MC_ENGINE_IDX_GSP**pParams->engineIdx == MC_ENGINE_IDX_GSP*call to kgspService_DISPATCH**pRunCpuSeqParams**commandBuffer*(pParams->bufferSizeDWord != 0)**(pParams->bufferSizeDWord != 0)*buffer_end < pParams->bufferSizeDWord**buffer_end < pParams->bufferSizeDWord*current_cmd_index + payloadSize <= buffer_end**current_cmd_index + payloadSize <= buffer_end*regModify*regPoll*NVRM: Timeout waiting for register to settle, value = 0x%x, err_code = 0x%x **NVRM: Timeout waiting for register to settle, value = 0x%x, err_code = 0x%x *delayUs*regStore*regStore.index < GSP_SEQ_BUF_REG_SAVE_SIZE**regStore.index < GSP_SEQ_BUF_REG_SAVE_SIZE*regSaveArea**regSaveArea*kflcnReset_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon))**kflcnReset_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon))*kflcnWaitForHalt_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon), GPU_TIMEOUT_DEFAULT, 0)**kflcnWaitForHalt_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon), GPU_TIMEOUT_DEFAULT, 0)*call to kgspExecuteSequencerCommand_DISPATCH*kgspExecuteSequencerCommand_HAL(pGpu, pKernelGsp, opCode, &pCmd[current_cmd_index], payloadSize * 
sizeof (*pCmd))**kgspExecuteSequencerCommand_HAL(pGpu, pKernelGsp, opCode, &pCmd[current_cmd_index], payloadSize * sizeof (*pCmd))*bIsRTD3Gc6D3HotTransition*bIsRTD3GcoffD3HotTransition*rpcRecvPoll(pGpu, pRpc, NV_VGPU_MSG_EVENT_GSP_INIT_DONE, 0)**rpcRecvPoll(pGpu, pRpc, NV_VGPU_MSG_EVENT_GSP_INIT_DONE, 0)*RPC_HDR->rpc_result**RPC_HDR->rpc_result*NVRM: GSP-RM reports bIsD3Hot = 0x%08x **NVRM: GSP-RM reports bIsD3Hot = 0x%08x *rmGpuGroupLockIsOwner(pGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, &gpuMaskUnused)**rmGpuGroupLockIsOwner(pGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, &gpuMaskUnused)*call to _kgspRpcDrainEvents*_kgspRpcDrainEvents(pGpu, pKernelGsp, NV_VGPU_MSG_FUNCTION_NUM_FUNCTIONS, 0, KGSP_RPC_EVENT_HANDLER_CONTEXT_INTERRUPT)**_kgspRpcDrainEvents(pGpu, pKernelGsp, NV_VGPU_MSG_FUNCTION_NUM_FUNCTIONS, 0, KGSP_RPC_EVENT_HANDLER_CONTEXT_INTERRUPT)*pLibosInitArgs**pLibosInitArgs*call to kgspGetLogCount_DISPATCH*rmLibosLogMem**rmLibosLogMem*id8*pTaskLogBuffer*pTaskLogDescriptor*call to _kgspGenerateInitArgId*RMARGS**RMARGS**pElfData*pElfData != NULL**pElfData != NULL*elfDataSize > 0**elfDataSize > 0*pSectionName*pSectionName != NULL**pSectionName != NULL*ppSectionData**ppSectionData*ppSectionData != NULL**ppSectionData != NULL*pSectionSize*pSectionSize != NULL**pSectionSize != NULL*elfDataSize >= sizeof(LibosElf64Header)**elfDataSize >= sizeof(LibosElf64Header)*sectionNameLength*pGspBuf*pElfHeader**pElfHeader**(NvU32*)&pElfHeader->ident == elfMagicNumber***(NvU32*)&pElfHeader->ident == elfMagicNumber*pElfHeader->ident[5] == elfLittleEndian**pElfHeader->ident[5] == elfLittleEndian*pElfHeader->ident[4] == elfClass64**pElfHeader->ident[4] == elfClass64*pElfHeader->shentsize == sizeof(LibosElf64SectionHeader)**pElfHeader->shentsize == sizeof(LibosElf64SectionHeader)*portSafeMulU64(pElfHeader->shentsize, pElfHeader->shnum, &elfSectionHeaderTableLength)**portSafeMulU64(pElfHeader->shentsize, pElfHeader->shnum, &elfSectionHeaderTableLength)*portSafeAddU64(pElfHeader->shoff, 
elfSectionHeaderTableLength - 1, &elfSectionHeaderMaxIdx)**portSafeAddU64(pElfHeader->shoff, elfSectionHeaderTableLength - 1, &elfSectionHeaderMaxIdx)*elfDataSize >= elfSectionHeaderMaxIdx**elfDataSize >= elfSectionHeaderMaxIdx*pElfHeader->shstrndx <= pElfHeader->shnum**pElfHeader->shstrndx <= pElfHeader->shnum*pElfSectionHeader**pElfSectionHeader*elfSectionNamesTableOffset*elfSectionNamesTableSize*portSafeAddU64(elfSectionNamesTableOffset, elfSectionNamesTableSize - 1, &elfSectionNamesTableMaxIdx)**portSafeAddU64(elfSectionNamesTableOffset, elfSectionNamesTableSize - 1, &elfSectionNamesTableMaxIdx)*elfDataSize >= elfSectionNamesTableMaxIdx**elfDataSize >= elfSectionNamesTableMaxIdx*elfSectionNamesTableSize - 1 >= pElfSectionHeader->name**elfSectionNamesTableSize - 1 >= pElfSectionHeader->name*currentSectionNameMaxLength*pCurrentSectionName**pCurrentSectionName*portSafeAddU64(pElfSectionHeader->offset, pElfSectionHeader->size - 1, &elfSectionMaxIdx)**portSafeAddU64(pElfSectionHeader->offset, pElfSectionHeader->size - 1, &elfSectionMaxIdx)*elfSectionMaxIdx*elfDataSize >= elfSectionMaxIdx**elfDataSize >= elfSectionMaxIdx*ppMemdescRadix3**ppMemdescRadix3*ppMemdescRadix3 != NULL**ppMemdescRadix3 != NULL*pMemdescData*Specify pMemdescData or pData, or none, but not both**Specify pMemdescData or pData, or none, but not both*radix3**radix3*nPages*radix3[0].nPages == 1**radix3[0].nPages == 1*ptSize*memdescCreate(ppMemdescRadix3, pGpu, allocSize, LIBOS_MEMORY_REGION_RADIX_PAGE_SIZE, NV_MEMORY_NONCONTIGUOUS, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)**memdescCreate(ppMemdescRadix3, pGpu, allocSize, LIBOS_MEMORY_REGION_RADIX_PAGE_SIZE, NV_MEMORY_NONCONTIGUOUS, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)*NVRM: memdescTagAllocate failed for huge pages, trying again with regular ones **NVRM: memdescTagAllocate failed for huge pages, trying again with regular ones *memdescMap(*ppMemdescRadix3, 0, allocSize, NV_TRUE, NV_PROTECT_WRITEABLE, &pVaKernel, 
&pPrivKernel)**memdescMap(*ppMemdescRadix3, 0, allocSize, NV_TRUE, NV_PROTECT_WRITEABLE, &pVaKernel, &pPrivKernel)*NVRM: VA error for radix3 shared buffer **NVRM: VA error for radix3 shared buffer *pRadix3Buf**pRadix3Buf*dataOffset**pMemdescData*call to _kgspFwContainerVerifyVersion*GSP firmware image**GSP firmware image*_kgspFwContainerVerifyVersion(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, "GSP firmware image")**_kgspFwContainerVerifyVersion(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, "GSP firmware image")*call to _kgspFwContainerGetSection*.fwimage**.fwimage*_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, GSP_IMAGE_SECTION_NAME, &pGspFw->pImageData, &pGspFw->imageSize)**_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, GSP_IMAGE_SECTION_NAME, &pGspFw->pImageData, &pGspFw->imageSize)*call to _kgspGetSectionNameForPrefix*signatureSectionName*call to kgspGetSignatureSectionNamePrefix_DISPATCH**signatureSectionName*_kgspGetSectionNameForPrefix(pGpu, pKernelGsp, signatureSectionName, sizeof(signatureSectionName), kgspGetSignatureSectionNamePrefix_HAL(pGpu, pKernelGsp))**_kgspGetSectionNameForPrefix(pGpu, pKernelGsp, signatureSectionName, sizeof(signatureSectionName), kgspGetSignatureSectionNamePrefix_HAL(pGpu, pKernelGsp))*_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, signatureSectionName, &pGspFw->pSignatureData, &pGspFw->signatureSize)**_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, signatureSectionName, &pGspFw->pSignatureData, &pGspFw->signatureSize)*call to _kgspCreateSignatureMemdesc*_kgspCreateSignatureMemdesc(pGpu, pKernelGsp, pGspFw)**_kgspCreateSignatureMemdesc(pGpu, pKernelGsp, pGspFw)*pImageData**pImageData*kgspCreateRadix3(pGpu, pKernelGsp, &pKernelGsp->pGspUCodeRadix3Descriptor, NULL, pGspFw->pImageData, pGspFw->imageSize)**kgspCreateRadix3(pGpu, pKernelGsp, &pKernelGsp->pGspUCodeRadix3Descriptor, NULL, pGspFw->pImageData, 
pGspFw->imageSize)*.note.gnu.build-id**.note.gnu.build-id*_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, GSP_BUILD_ID_SECTION_NAME, &pBuildIdSectionData, &buildIdSectionSize)**_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pBuf, pGspFw->size, GSP_BUILD_ID_SECTION_NAME, &pBuildIdSectionData, &buildIdSectionSize)***pBuildIdSection*pKernelGsp->pBuildIdSection != NULL**pKernelGsp->pBuildIdSection != NULL*buildIdSectionSize*pBuildIdSectionData**pBuildIdSectionData*pSectionNameBuf*pSectionNameBuf != NULL**pSectionNameBuf != NULL*sectionNameBufSize > 0**sectionNameBufSize > 0*pSectionPrefix*pSectionPrefix != NULL**pSectionPrefix != NULL*chipFamily != NV_FIRMWARE_CHIP_FAMILY_NULL**chipFamily != NV_FIRMWARE_CHIP_FAMILY_NULL*call to nv_firmware_chip_family_to_string*pChipFamilyName**pChipFamilyName*portStringLength(pChipFamilyName) != 0**portStringLength(pChipFamilyName) != 0*sectionPrefixLength*chipFamilyNameLength*sectionNameBufSize >= sectionPrefixLength + 1**sectionNameBufSize >= sectionPrefixLength + 1*sectionNameBufSize >= totalSize**sectionNameBufSize >= totalSize*.fwversion**.fwversion*_kgspFwContainerGetSection(pGpu, pKernelGsp, pElfData, elfDataSize, GSP_VERSION_SECTION_NAME, &pFwversionRaw, &fwversionSize)**_kgspFwContainerGetSection(pGpu, pKernelGsp, pElfData, elfDataSize, GSP_VERSION_SECTION_NAME, &pFwversionRaw, &fwversionSize)*pFwversionRaw**pFwversionRaw*pFwversion**pFwversion*bIsVersionValid*NVRM: %s version mismatch: got version %s, expected version %s *pNameInMsg**NVRM: %s version mismatch: got version %s, expected version %s *NVRM: %s version unknown or malformed, expected version %s **NVRM: %s version unknown or malformed, expected version %s *memdescCreate(&pKernelGsp->pSignatureMemdesc, pGpu, NV_ALIGN_UP(pGspFw->signatureSize, 256), 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)**memdescCreate(&pKernelGsp->pSignatureMemdesc, pGpu, NV_ALIGN_UP(pGspFw->signatureSize, 256), 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, 
flags)

[KernelGsp — GSP-RM boot, teardown, libos logging, RPC timeout and event handling]

Call sites: bindataStorageReleaseData, kgspGetGspRmBootUcodeStorage_DISPATCH, bindataStorageAcquireData, _kgspFreeBootBinaryImage, _kgspDumpGspLogsUnlocked, libosExtractLogs, kgspFreeFlcnUcode, _kgspFreeLibosLoggingStructures, _kgspFreeRpcInfrastructure, _kgspFreeSimAccessBuffer, _kgspFreeNotifyOpSharedSurface, kgspCheckGspRmCcCleanup_DISPATCH, kgspWaitForProcessorSuspend_DISPATCH, kgspTeardown_DISPATCH, kgspExtractVbiosFromRom_DISPATCH, kgspParseFwsecUcodeFromVbiosImg_IMPL, _kgspVbiosVersionToStr, kgspAllocateBooterLoadUcodeImage_IMPL, kgspAllocateBooterUnloadUcodeImage_IMPL, kgspPrepareBootBinaryImage_IMPL, _kgspPrepareGspRmBinaryImage, _kgspInitLibosLoggingStructures, _kgspInitLibosLogDecoder, nvlogRegisterFlushCb, kgspSetupLibosInitArgs_IMPL, _kgspBootGspRm, RmRpcSetGuestSystemInfo, _kgspInitGpuProperties, _kgspSetFwWprLayoutOffset, kgspStartLogPolling_IMPL, libosPreserveLogs, kmemsysCheckReadoutEccEnablement_DISPATCH, kgspPopulateWprMeta_DISPATCH, _kgspPrepareScrubberImageIfNeeded, _kgspShouldRelaxGspInitLocking, _kgspBootReacquireLocks, gpumgrIsGpuPointerAttached, kgspIsScrubberImageSupported_DISPATCH, kgspAllocateScrubberUcodeImage_IMPL, _kgspReadRegkeyOverrides, kgspConfigureFalcon_DISPATCH, _kgspInitRpcInfrastructure, kgspAllocBootArgs_DISPATCH, _kgspAllocSimAccessBuffer, _kgspAllocNotifyOpSharedSurface, libosLogCreate, kgspGetLibosVersion_DISPATCH, _setupLogBufferBaremetal, kgspHasLibosKernelLogging_STATIC_DISPATCH, _kgspStopLogPolling, nvlogDeregisterFlushCb, libosLogDestroy, libosLogCreateEx, isLibosPreserveLogBufferFull, _setupLogBufferVgpu, libosLogSetupMergedNvlog, _kgspFreeLibosVgpuPartitionLoggingStructures, _kgspUnmapTaskLogBuf, GspMsgQueuesCleanup, GspMsgQueuesInit, _kgspConstructRpcObject, _kgspCompleteRpcHistoryEntry, _kgspCheckSlowRpc, _kgspRpcSanityCheck, _kgspLogRpcSanityCheckFailure, _kgspRpcIncrementTimeoutCountAndRateLimitPrints, _kgspLogXid119, kflcnCoreDumpDestructive_IMPL, kdispApplyAggressiveVblankHandlingWar_IMPL, _getRpcName, _tsDiffToDuration, kgspDumpMailbox_DISPATCH, kflcnCoreDumpNondestructive_IMPL, _kgspGetActiveRpcDebugData, _kgspLogRpcHistoryEntry, _kgspIsTimestampDuringRecentRpc, _kgspRpcDrainOneEvent, GspMsgQueueReceiveStatus, _kgspProcessRpcEvent, fecsHandleFecsLoggingError, gpuSetRecoveryDrainP2P_KERNEL, sysSetRecoveryRebootRequired_IMPL, _kgspAddRpcHistoryEntry, _kgspRpcRunCpuSequencer, _kgspRpcPostEvent, _kgspRpcRCTriggered, _kgspRpcMMUFaultQueued, _kgspRpcSimRead, _kgspRpcSimWrite, _kgspRpcOsErrorLog, _kgspRpcGpuacctPerfmonUtilSamples, _kgspRpcPerfGpuBoostSyncLimitsCallback, _kgspRpcPerfBridgelessInfoUpdate, _kgspRpcSemaphoreScheduleCallback, _kgspRpcTimedSemaphoreRelease, _kgspRpcNvlinkFaultUpCallback, _kgspRpcNvlinkInbandReceivedData256Callback, _kgspRpcNvlinkInbandReceivedData512Callback, _kgspRpcNvlinkInbandReceivedData1024Callback, _kgspRpcNvlinkInbandReceivedData2048Callback, _kgspRpcNvlinkInbandReceivedData4096Callback, _kgspRpcNvlinkFatalErrorRecoveryCallback, _kgspRpcEventIsGpuDegradedCallback, _kgspRpcUcodeLibosPrint, _kgspRpcVgpuGspPluginTriggered, _kgspRpcGspVgpuConfig, _kgspRpcGspExtdevIntrService, _kgspRpcEventPlatformRequestHandlerStateSyncCallback, _kgspRpcMigCiConfigUpdate, _kgspRpcGspLockdownNotice, _kgspRpcGspUpdateTrace, _kgspRpcGspPostNocatRecord, _kgspRpcGspEventFecsError, _kgspRpcGspEventRecoveryAction, _kgspRpcGspTriggerBugcheck, _kgspRpcGspForcedDriverShutdown

Checked expressions:
(pSignatureVa != NULL) ? NV_OK : NV_ERR_INSUFFICIENT_RESOURCES
pKernelGsp->pGspRmBootUcodeImage == NULL
pKernelGsp->pGspRmBootUcodeDesc == NULL
memdescCreate(&pKernelGsp->pGspRmBootUcodeMemdesc, pGpu, bufSizeAligned, RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)
memdescMap(pKernelGsp->pGspRmBootUcodeMemdesc, 0, memdescGetSize(pKernelGsp->pGspRmBootUcodeMemdesc), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)
bindataWriteToBuffer(pBinStorageImage, pKernelGsp->pGspRmBootUcodeImage, bufSize)
bindataStorageAcquireData(pBinStorageDesc, (const void**)&pDesc)
_kgspInitLibosLogDecoder(pGpu, pKernelGsp, pGspFw)
nvlogRegisterFlushCb(kgspNvlogFlushCb, pKernelGsp)
kgspWaitForGfwBootOk_HAL(pGpu, pKernelGsp)
rmapiLockIsOwner() && (gpusLockedMask != 0)
kgspStartLogPolling(pGpu, pKernelGsp)
pbRetry != NULL
kgspPopulateWprMeta_HAL(pGpu, pKernelGsp, pGspFw)
_kgspPrepareScrubberImageIfNeeded(pGpu, pKernelGsp)
kgspPrepareForBootstrap_HAL(pGpu, pKernelGsp, KGSP_BOOT_MODE_NORMAL)
_kgspBootReacquireLocks(pGpu, pKernelGsp, pGpusLockedMask)
rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_INIT)
gpumgrIsGpuPointerAttached(pGpu)
rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_SUBDEVICE, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_INIT, pGpusLockedMask)
kgspAllocateScrubberUcodeImage(pGpu, pKernelGsp, &pKernelGsp->pScrubberUcode)
memdescCreate(&pKernelGsp->pNotifyOpSurfMemDesc, pGpu, sizeof(NotifyOpSharedSurface), RM_PAGE_SIZE, NV_FALSE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, flags)
memdescMap(pKernelGsp->pNotifyOpSurfMemDesc, 0, memdescGetSize(pKernelGsp->pNotifyOpSurfMemDesc), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)
memdescCreate(&pKernelGsp->pMemDesc_simAccessBuf, pGpu, sizeof(SimAccessBuffer), RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)
memdescMap(pKernelGsp->pMemDesc_simAccessBuf, 0, memdescGetSize(pKernelGsp->pMemDesc_simAccessBuf), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)
_kgspFwContainerVerifyVersion(pGpu, pKernelGsp, pGspFw->pLogElf, pGspFw->logElfSize, "GSP firmware log")
_kgspFwContainerGetSection(pGpu, pKernelGsp, pGspFw->pLogElf, pGspFw->logElfSize, GSP_LOGGING_SECTION_NAME, &pLogData, &logSize)
pKernelGsp->pLogElf != NULL
memdescCreate(&pLog->pTaskLogDescriptor, pGpu, size, RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)
memdescMap(pLog->pTaskLogDescriptor, 0, memdescGetSize(pLog->pTaskLogDescriptor), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)
memdescCreate(&pTaskLog->pTaskLogDescriptor, pGpu, logVgpuSetupParams.bufSize, RM_PAGE_SIZE, NV_TRUE, ADDR_FBMEM, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)
pMQI != NULL
!pKernelGsp->bPollingForRpcResponse
portSafeMulU32(GSP_SCALE_TIMEOUT_EMU_SIM, 1500000, &timeoutResult)
expectedFunc == pHistoryEntry->function
ts_end > pHistoryEntry->ts_start
tsFreqUs > 0
nvStatus != NV_WARN_MORE_PROCESSING_REQUIRED
pHistory[current].ts_start != 0
pErrorReport != NULL

Identifiers: pSignatureVa, pSignatureData, pGspRmBootUcodeDesc, pGspRmBootUcodeImage, pGspRmBootUcodeMemdescPriv, pGspRmBootUcodeMemdesc, gspRmBootUcodeSize, pBinStorageImage, bufSizeAligned, pBinStorageDesc, pGspArgs, pMQInitArgs, sharedMemPhysAddr, pageTableEntryCount, cmdQueueOffset, rpcQueues, statQueueOffset, pGspInitArgs, RmGspStackPlacement, stackReg, bDmemStack, pSrInitArgs, pProfilerSamples, pProfilerSamplesMD, profilerArgs, sysmemHeapArgs, logDecodeVgpuPartition, pBooterLoadUcode, pBooterUnloadUcode, pScrubberUcode, bInInit, bDelayInitRpcs, RmGspBootRetryAttempts, maxGspBootAttempts, RmGspBootInitialShift, bootAttempts, powerDisconnectedGpuBus, pbRetry, RmGspScanWprEndMargin, bScanWprEndMargin, pGpusLockedMask, RmRelaxedGspInitLocking, relaxGspInitLockingReg, fwWprLayoutOffset, NV_RM_RPC_GSP_SET_SYSTEM_INFO, NV_RM_RPC_SET_REGISTRY, pVbiosVersionStr, RmGspWprEndMargin, RmGspFirmwareHeapSizeMB, pNotifyOpSurfMemDesc, pNotifyOpSurf, pNotifyOpSurfPriv, pMemDesc_simAccessBuf, pSimAccessBuf, pSimAccessBufPriv, pLogElf, logElfDataSize, pTaskLogBuffer, pTaskLogMappingPriv, szMemoryId, szPrefix, pTaskLogDescriptor, gspPluginInitTaskLogMem, gspPluginVgpuTaskLogMem, bPreserveLogBufferFull, libosKernelLogMem, vmMergedLogString, bHasVgpuLogs, pTaskLog, pVa, vm_string, vgpuLogBuffers, pMQI, pMessageQueueInfo, rpcHistoryCurrent, rpcEventHistoryCurrent, pRpcMsgBuf, pMQCollection, timeoutFlags, bQuietPrints, pHistoryEntry, historyEntry, pProtobufData, pErrorReport, pHistory, ts_end, ts_start

Format strings and tokens: %2X.%02X.%02X.%02X.%02X, V%02d, GSP firmware log, .fwlogging, LOGKRNL, VGPU

Log strings:
NVRM: unloading GSP-RM
NVRM: need firmware to initialize GSP
NVRM: failed to parse FWSEC ucode from VBIOS image (VBIOS version %s): 0x%x
NVRM: parsed VBIOS version %s
NVRM: failed to extract VBIOS image from ROM: 0x%x
NVRM: failed to allocate Booter Load ucode: 0x%x
NVRM: failed to allocate Booter Unload ucode: 0x%x
NVRM: Error preparing boot binary image
NVRM: Error preparing GSP-RM image
NVRM: init libos logging structures failed: 0x%x
NVRM: Initial shift, %d, is larger than max allowed [0, %d]. Modulo applied
NVRM: Max GSP-RM boot attempts exceeded: %d/%d
NVRM: SET_GUEST_SYSTEM_INFO failed: 0x%x
NVRM: GET_GSP_STATIC_INFO failed: 0x%x
NVRM: unexpected WPR2 already up, cannot proceed with booting GSP
NVRM: (the GPU is likely in a bad state and may need to be reset)
NVRM: pre-scrubbed memory: 0x%llx bytes, needed: 0x%llx bytes
NVRM: init RPC infrastructure failed
NVRM: boot arg alloc failed: 0x%x
NVRM: sim access buffer alloc failed: 0x%x
NVRM: notify operation shared surface alloc failed: 0x%x
NVRM: Unknown chip for libos kernel logging (non-fatal)
NVRM: Unknown chip for libos kernel logging
NVRM: Failed to map memory for %s task log buffer for vGPU partition
NVRM: GspMsgQueueInit failed
NVRM: init task RM RPC infrastructure failed
Back to back GSP RPC timeout detected! GPU marked for reset
Triggering TDR to recover from GSP hang
NVRM: gpuCheckTimeout() returned unexpected error (0x%08x)
NVRM: Rate limiting GSP RPC error prints for GPU at PCI:%04x:%02x:%02x (printing 1 of every %d). The GPU likely needs to be reset.
NVRM: Rate limiting GSP RPC error prints (printing 1 of every %d)
NVRM: GPU%d sanity check failed 0x%x waiting for RPC response from GSP. Expected function %d (%s) sequence %u (0x%llx 0x%llx).
NVRM: ********************************* GSP Timeout **********************************
NVRM: Note: Please also check logs above.
Timeout after %llus of waiting for RPC response from GPU%d GSP! Expected function %d (%s) sequence %u (0x%llx 0x%llx).
NVRM: ********************************************************************************
NVRM: Slow RPC response from GPU%d GSP (%lluus). Function %d (%s) sequence %u (0x%llx 0x%llx).
Slow RPC response from GSP!
NVRM: GPU%d GSP RPC buffer contains function %d (%s) sequence %u and data 0x%016llx 0x%016llx.
NVRM: GPU%d RPC history (CPU -> GSP):
NVRM: entry function sequence data0 data1 ts_start ts_end duration actively_polling
NVRM: GPU%d RPC event history (CPU <- GSP):
NVRM: entry function sequence data0 data1 ts_start ts_end duration during_incomplete_rpc
NVRM: %c%-4d %-4d %-21.21s %10u 0x%016llx 0x%016llx 0x%016llx 0x%016llx %6llu%cs %c
NVRM: %c%-4d %-4d %-21.21s %10u 0x%016llx 0x%016llx 0x%016llx 0x%016llx %c
NVRM: Received signal from GSP to trigger bugcheck! BugCode=0x%x
NVRM: Forcing driver shutdown by error injection test, marking GPU%d for reset!
NVRM: received event from GPU%d: 0x%x (%s) status: 0x%x size: %d
NVRM: Attempted to process RPC event from GPU%d: 0x%x (%s) during bootup without API lock
NVRM: Unexpected RPC event from GPU%d: 0x%x (%s), sequence: %u
NVRM: Failed to process received event 0x%x (%s) from GPU%d: status=0x%x
vGPU RPC function names: NOP, SET_GUEST_SYSTEM_INFO, ALLOC_ROOT, ALLOC_DEVICE, ALLOC_MEMORY, ALLOC_CTX_DMA, ALLOC_CHANNEL_DMA, MAP_MEMORY, BIND_CTX_DMA, ALLOC_OBJECT, FREE, LOG, ALLOC_VIDMEM, UNMAP_MEMORY, MAP_MEMORY_DMA, UNMAP_MEMORY_DMA, GET_EDID, ALLOC_DISP_CHANNEL, ALLOC_DISP_OBJECT, ALLOC_SUBDEVICE, ALLOC_DYNAMIC_MEMORY, DUP_OBJECT, IDLE_CHANNELS, ALLOC_EVENT, SEND_EVENT, REMAPPER_CONTROL, DMA_CONTROL, DMA_FILL_PTE_MEM, MANAGE_HW_RESOURCE, BIND_ARBITRARY_CTX_DMA, CREATE_FB_SEGMENT, DESTROY_FB_SEGMENT, ALLOC_SHARE_DEVICE, DEFERRED_API_CONTROL, REMOVE_DEFERRED_API, SIM_ESCAPE_READ, SIM_ESCAPE_WRITE, SIM_MANAGE_DISPLAY_CONTEXT_DMA, FREE_VIDMEM_VIRT, PERF_GET_PSTATE_INFO, PERF_GET_PERFMON_SAMPLE, PERF_GET_VIRTUAL_PSTATE_INFO, PERF_GET_LEVEL_INFO, MAP_SEMA_MEMORY, UNMAP_SEMA_MEMORY, SET_SURFACE_PROPERTIES, CLEANUP_SURFACE, UNLOADING_GUEST_DRIVER, TDR_SET_TIMEOUT_STATE, SWITCH_TO_VGA, GPU_EXEC_REG_OPS, GET_STATIC_INFO, ALLOC_VIRTMEM, UPDATE_PDE_2, SET_PAGE_DIRECTORY, GET_STATIC_PSTATE_INFO, TRANSLATE_GUEST_GPU_PTES, RESERVED_57, RESET_CURRENT_GR_CONTEXT, SET_SEMA_MEM_VALIDATION_STATE, GET_ENGINE_UTILIZATION, UPDATE_GPU_PDES, GET_ENCODER_CAPACITY, VGPU_PF_REG_READ32, SET_GUEST_SYSTEM_INFO_EXT, GET_GSP_STATIC_INFO, RMFS_INIT, RMFS_CLOSE_QUEUE, RMFS_CLEANUP, RMFS_TEST, UPDATE_BAR_PDE, CONTINUATION_RECORD, GSP_SET_SYSTEM_INFO, SET_REGISTRY, GSP_INIT_POST_OBJGPU, SUBDEV_EVENT_SET_NOTIFICATION, GSP_RM_CONTROL, GET_STATIC_INFO2, DUMP_PROTOBUF_COMPONENT, UNSET_PAGE_DIRECTORY, GET_CONSOLIDATED_STATIC_INFO, GMMU_REGISTER_FAULT_BUFFER, GMMU_UNREGISTER_FAULT_BUFFER, GMMU_REGISTER_CLIENT_SHADOW_FAULT_BUFFER, GMMU_UNREGISTER_CLIENT_SHADOW_FAULT_BUFFER, CTRL_SET_VGPU_FB_USAGE, CTRL_NVFBC_SW_SESSION_UPDATE_INFO, CTRL_NVENC_SW_SESSION_UPDATE_INFO, CTRL_RESET_CHANNEL, CTRL_RESET_ISOLATED_CHANNEL, CTRL_GPU_HANDLE_VF_PRI_FAULT, CTRL_CLK_GET_EXTENDED_INFO, CTRL_PERF_BOOST, CTRL_PERF_VPSTATES_GET_CONTROL, CTRL_GET_ZBC_CLEAR_TABLE, CTRL_SET_ZBC_COLOR_CLEAR, CTRL_SET_ZBC_DEPTH_CLEAR, CTRL_GPFIFO_SCHEDULE, CTRL_SET_TIMESLICE, CTRL_PREEMPT, CTRL_FIFO_DISABLE_CHANNELS, CTRL_SET_TSG_INTERLEAVE_LEVEL, CTRL_SET_CHANNEL_INTERLEAVE_LEVEL, GSP_RM_ALLOC, CTRL_GET_P2P_CAPS_V2, CTRL_CIPHER_AES_ENCRYPT, CTRL_CIPHER_SESSION_KEY, CTRL_CIPHER_SESSION_KEY_STATUS, CTRL_DBG_CLEAR_ALL_SM_ERROR_STATES, CTRL_DBG_READ_ALL_SM_ERROR_STATES, CTRL_DBG_SET_EXCEPTION_MASK, CTRL_GPU_PROMOTE_CTX, CTRL_GR_CTXSW_PREEMPTION_BIND, CTRL_GR_SET_CTXSW_PREEMPTION_MODE, CTRL_GR_CTXSW_ZCULL_BIND, CTRL_GPU_INITIALIZE_CTX, CTRL_VASPACE_COPY_SERVER_RESERVED_PDES, CTRL_FIFO_CLEAR_FAULTED_BIT, CTRL_GET_LATEST_ECC_ADDRESSES, CTRL_MC_SERVICE_INTERRUPTS, CTRL_DMA_SET_DEFAULT_VASPACE, CTRL_GET_CE_PCE_MASK, CTRL_GET_ZBC_CLEAR_TABLE_ENTRY, CTRL_GET_NVLINK_PEER_ID_MASK, CTRL_GET_NVLINK_STATUS, CTRL_GET_P2P_CAPS, CTRL_GET_P2P_CAPS_MATRIX, RESERVED_0, CTRL_RESERVE_PM_AREA_SMPC, CTRL_RESERVE_HWPM_LEGACY, CTRL_B0CC_EXEC_REG_OPS, CTRL_BIND_PM_RESOURCES, CTRL_DBG_SUSPEND_CONTEXT, CTRL_DBG_RESUME_CONTEXT, CTRL_DBG_EXEC_REG_OPS, CTRL_DBG_SET_MODE_MMU_DEBUG, CTRL_DBG_READ_SINGLE_SM_ERROR_STATE, CTRL_DBG_CLEAR_SINGLE_SM_ERROR_STATE, CTRL_DBG_SET_MODE_ERRBAR_DEBUG, CTRL_DBG_SET_NEXT_STOP_TRIGGER_TYPE, CTRL_ALLOC_PMA_STREAM, CTRL_PMA_STREAM_UPDATE_GET_PUT, CTRL_FB_GET_INFO_V2, CTRL_FIFO_SET_CHANNEL_PROPERTIES, CTRL_GR_GET_CTX_BUFFER_INFO, CTRL_KGR_GET_CTX_BUFFER_PTES, CTRL_GPU_EVICT_CTX, CTRL_FB_GET_FS_INFO, CTRL_GRMGR_GET_GR_FS_INFO, CTRL_STOP_CHANNEL, CTRL_GR_PC_SAMPLING_MODE, CTRL_PERF_RATED_TDP_GET_STATUS, CTRL_PERF_RATED_TDP_SET_CONTROL, CTRL_FREE_PMA_STREAM, CTRL_TIMER_SET_GR_TICK_FREQ, CTRL_FIFO_SETUP_VF_ZOMBIE_SUBCTX_PDB, GET_CONSOLIDATED_GR_STATIC_INFO, CTRL_DBG_SET_SINGLE_SM_SINGLE_STEP, CTRL_GR_GET_TPC_PARTITION_MODE, CTRL_GR_SET_TPC_PARTITION_MODE, UVM_PAGING_CHANNEL_ALLOCATE, UVM_PAGING_CHANNEL_DESTROY, UVM_PAGING_CHANNEL_MAP, UVM_PAGING_CHANNEL_UNMAP, UVM_PAGING_CHANNEL_PUSH_STREAM, UVM_PAGING_CHANNEL_SET_HANDLES, UVM_METHOD_STREAM_GUEST_PAGES_OPERATION, CTRL_INTERNAL_QUIESCE_PMA_CHANNEL, DCE_RM_INIT, REGISTER_VIRTUAL_EVENT_BUFFER, CTRL_EVENT_BUFFER_UPDATE_GET, GET_PLCABLE_ADDRESS_KIND, CTRL_PERF_LIMITS_SET_STATUS_V2, CTRL_INTERNAL_SRIOV_PROMOTE_PMA_STREAM, CTRL_GET_MMU_DEBUG_MODE, CTRL_INTERNAL_PROMOTE_FAULT_METHOD_BUFFERS, CTRL_FLCN_GET_CTX_BUFFER_SIZE, CTRL_FLCN_GET_CTX_BUFFER_INFO, DISABLE_CHANNELS, CTRL_FABRIC_MEMORY_DESCRIBE, CTRL_FABRIC_MEM_STATS, SAVE_HIBERNATION_DATA, RESTORE_HIBERNATION_DATA, CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED, CTRL_EXEC_PARTITIONS_CREATE, CTRL_EXEC_PARTITIONS_DELETE, CTRL_GPFIFO_GET_WORK_SUBMIT_TOKEN, CTRL_GPFIFO_SET_WORK_SUBMIT_TOKEN_NOTIF_INDEX, PMA_SCRUBBER_SHARED_BUFFER_GUEST_PAGES_OPERATION, CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK, RESERVED_190, CTRL_SUBDEVICE_GET_P2P_CAPS, CTRL_BUS_SET_P2P_MAPPING, CTRL_BUS_UNSET_P2P_MAPPING, CTRL_FLA_SETUP_INSTANCE_MEM_BLOCK, CTRL_GPU_MIGRATABLE_OPS, CTRL_GET_TOTAL_HS_CREDITS, CTRL_GET_HS_CREDITS, CTRL_SET_HS_CREDITS, CTRL_PM_AREA_PC_SAMPLER, INVALIDATE_TLB, CTRL_GPU_QUERY_ECC_STATUS, ECC_NOTIFIER_WRITE_ACK, CTRL_DBG_GET_MODE_MMU_DEBUG, RM_API_CONTROL, CTRL_CMD_INTERNAL_GPU_START_FABRIC_PROBE, CTRL_NVLINK_GET_INBAND_RECEIVED_DATA, GET_STATIC_DATA, RESERVED_208, CTRL_GPU_GET_INFO_V2, GET_BRAND_CAPS, CTRL_CMD_NVLINK_INBAND_SEND_DATA, UPDATE_GPM_GUEST_BUFFER_INFO, CTRL_CMD_INTERNAL_CONTROL_GSP_TRACE, CTRL_SET_ZBC_STENCIL_CLEAR, CTRL_SUBDEVICE_GET_VGPU_HEAP_STATS, CTRL_SUBDEVICE_GET_LIBOS_HEAP_STATS, CTRL_DBG_SET_MODE_MMU_GCC_DEBUG, CTRL_DBG_GET_MODE_MMU_GCC_DEBUG, CTRL_RESERVE_HES, CTRL_RELEASE_HES, CTRL_RESERVE_CCU_PROF, CTRL_RELEASE_CCU_PROF, SETUP_HIBERNATION_BUFFER, CTRL_CMD_GET_CHIPLET_HS_CREDIT_POOL, CTRL_CMD_GET_HS_CREDITS_MAPPING, CTRL_EXEC_PARTITIONS_EXPORT, CTRL_CMD_INTERNAL_GPU_CHECK_CTS_ID_VALID, NUM_FUNCTIONS

RPC event names: FIRST_EVENT, GSP_INIT_DONE, GSP_RUN_CPU_SEQUENCER, POST_EVENT, RC_TRIGGERED, MMU_FAULT_QUEUED, OS_ERROR_LOG, RG_LINE_INTR, GPUACCT_PERFMON_UTIL_SAMPLES, SIM_READ, SIM_WRITE, SEMAPHORE_SCHEDULE_CALLBACK, UCODE_LIBOS_PRINT, VGPU_GSP_PLUGIN_TRIGGERED, PERF_GPU_BOOST_SYNC_LIMITS_CALLBACK, PERF_BRIDGELESS_INFO_UPDATE, VGPU_CONFIG, DISPLAY_MODESET, EXTDEV_INTR_SERVICE, NVLINK_INBAND_RECEIVED_DATA_256, NVLINK_INBAND_RECEIVED_DATA_512, NVLINK_INBAND_RECEIVED_DATA_1024, NVLINK_INBAND_RECEIVED_DATA_2048, NVLINK_INBAND_RECEIVED_DATA_4096, TIMED_SEMAPHORE_RELEASE, NVLINK_IS_GPU_DEGRADED, PFM_REQ_HNDLR_STATE_SYNC_CALLBACK, NVLINK_FAULT_UP, GSP_LOCKDOWN_NOTICE, MIG_CI_CONFIG_UPDATE, UPDATE_GSP_TRACE, NVLINK_FATAL_ERROR_RECOVERY, GSP_POST_NOCAT_RECORD, FECS_ERROR, RECOVERY_ACTION, TRIGGER_BUGCHECK, BIND_BAR2, FORCED_DRIVER_SHUTDOWN, NUM_EVENTS

Call sites: pfmreqhndlrStateSync_IMPL, gspTraceEventBufferLogRecord, kmigmgrUpdateCiConfigForVgpu_IMPL, extdevGsyncService, CliNotifyVgpuConfigEvent, gpuGspPluginTriggeredEvent_IMPL, kpmuLogBuf_IMPL

Checked expressions:
rpc_params->execPartCount <= NVC637_CTRL_MAX_EXEC_PARTITIONS
rpc_params->notifyIndex < NVA081_NOTIFIERS_MAXCOUNT
pKernelPmu != NULL

Identifiers: bInLockdown, engaged, disengaged, syncData, smbpbi, sensorId, GspTraceRecords, execPartId, libosPrintBuf

Log strings:
NVRM: GSP lockdown %s
Attempting to use libos prints with an unsupported ucode!
Call sites: tsemaRelease_KERNEL, dispswReleaseSemaphoreAndNotifierFill, gpuSimEscapeWrite, osQueueMMUFaultHandler, knvlinkFatalErrorRecovery_IMPL, knvlinkSetDegradedMode_IMPL, knvlinkInbandMsgCallbackDispatcher_IMPL, knvlinkHandleFaultUpInterrupt_DISPATCH, kPerfGpuBoostSyncBridgelessUpdateInfo, kperfDoSyncGpuBoostLimits_IMPL, nvErrorLog2_va, krcCheckBusError_KERNEL, kfifoGetChidMgrFromType_IMPL, rcdbAddRcDiagRecFromGsp_IMPL, rcdbUpdateRcDiagRecContext_IMPL, krcErrorSendEventNotifications_KERNEL, _kgspProcessEccNotifier, heapStorePendingBlackList_IMPL, kgspExecuteSequencerBuffer_IMPL, GspMsgQueueSendCommand, kgspSetCmdQueueHead_DISPATCH, osIsGpuShutdown

Checked expressions:
rpc_params->count <= sizeof(pKernelGsp->pSimAccessBuf->data)
knvlinkFatalErrorRecovery(pGpu, pKernelNvlink, pDest->bRecoverable, pDest->bLazy)
NV_OK == knvlinkInbandMsgCallbackDispatcher(pGpu, pKernelNvlink, dest->dataSize, dest->data)
krcErrorSetNotifier(pGpu, pKernelRc, pKernelChannel, rpc_params->exceptType, rmEngineType, rpc_params->scope)
pEvent->pNotifierShare != NULL
osEventNotificationWithInfo(pGpu, pNotifyList, rpc_params->notifyIndex, rpc_params->data, rpc_params->info16, rpc_params->eventData, rpc_params->eventDataSize)
osNotifyEvent(pGpu, pNotifyEvent, 0, rpc_params->data, rpc_params->status)

Identifiers: currLimits, nvenc, nvdec, pPreviousChannelInError, errString, pCommonRecord, pRcDiagRecord, rcDiagRecEnd, bIsCcEnabled, notifyClassId, pMemoryMgr

Log strings:
NVRM: Lost RC diagnostic record coming from GPU%d GSP: type=0x%x stateMask=0x%llx
NVRM: Since we hit the DED on the reserved region, nothing to handle in this code path...
NVRM: Relying on FBHUB interrupt to kill all the channels and force reset the GPU..
NVRM: Dynamically blacklisting the DED page offset failed with, status: %x
NVRM: GspMsgQueueSendCommand failed on GPU%d: 0x%x
NVRM: GSP crashed, skipping RPC
NVRM: GPU in reset, skipping RPC
NVRM: GPU lost, skipping RPC
NVRM: GPU shutdown, skipping RPC
NVRM: GPU not full power, skipping RPC
NVRM: GPU has no sysmem access, skipping RPC

src/kernel/gpu/gsp/kernel_gsp_booter.c

Call sites: ksec2GetBinArchiveSecurescrubUcode_DISPATCH, s_allocateUcodeFromBinArchive, kgspGetBinArchiveBooterUnloadUcode_DISPATCH, kgspGetBinArchiveBooterLoadUcode_DISPATCH, s_bindataWriteToFixedSizeBuffer, s_patchBooterUcodeSignature, ksec2ReadUcodeFuseVersion_DISPATCH

Checked expressions:
ppScrubberUcode != NULL
pBinArchive != NULL
ppBooterUnloadUcode != NULL
ppBooterLoadUcode != NULL
pBinImage != NULL
pBinHeader != NULL
pBinSig != NULL
pBinPatchSig != NULL
pBinPatchLoc != NULL
pBinPatchMeta != NULL
pBinNumSigs != NULL
s_bindataWriteToFixedSizeBuffer(pBinHeader, &header, sizeof(header))
s_bindataWriteToFixedSizeBuffer(pBinPatchLoc, &patchLoc, sizeof(patchLoc))
s_bindataWriteToFixedSizeBuffer(pBinPatchSig, &patchSig, sizeof(patchSig))
s_bindataWriteToFixedSizeBuffer(pBinPatchMeta, &patchMeta, sizeof(patchMeta))
s_bindataWriteToFixedSizeBuffer(pBinNumSigs, &numSigs, sizeof(numSigs))
bindataStorageAcquireData(pBinSig, &pSignatures)
memdescCreate(&pUcode->pUcodeMemDesc, pGpu, pUcode->size, 16, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)
bindataWriteToBuffer(pBinImage, pUcode->pImage, pUcode->size)
s_patchBooterUcodeSignature(pGpu, patchMeta.ucodeId, pUcode->pImage, patchLoc, pUcode->size, pSignatures, signaturesTotalSize, numSigs)
pImage != NULL
imageSize > sigDestOffset
imageSize - sigDestOffset > sigSize
pSignatures != NULL
numSigs > 0

Identifiers: ppScrubberUcode, ppBooterUnloadUcode, ppBooterLoadUcode, pBinImage, pBinHeader, pBinSig, pBinPatchSig, pBinPatchLoc, pBinPatchMeta, pBinNumSigs, pFlcnUcode, bootType, codeOffset, imemSize, imemPa, imemVa, dmemPa, dmemVa, hsSigDmemAddr, patchMeta, engineIdMask, pMappedUcodeMem, pSignatures, imemNsPa, imemNsSize, imemSecPa, imemSecSize, fuseVer, sigIndex, pUcodeMemDesc, pCodeMemDesc, pDataMemDesc

Log strings:
NVRM: signature for fuse version %u not present

src/kernel/gpu/gsp/kernel_gsp_fwsec.c

Call sites: s_vbiosFindBitHeader, s_vbiosParseFwsecUcodeDescFromBit, s_vbiosNewFlcnUcodeFromDesc, s_vbiosFillFlcnUcodeFromDescV2, s_vbiosFillFlcnUcodeFromDescV3

Checked expressions:
pVbiosImg != NULL
pVbiosImg->pImage != NULL
pFlcnUcodeDescFromBit != NULL
ppFlcnUcode != NULL

Identifiers: ppFwsecUcode, pVbiosVersionCombined, descUnion, ppFlcnUcode, pFlcnUcodeDescFromBit

Log strings:
NVRM: failed to find BIT header in VBIOS image: 0x%x
NVRM: failed to parse FWSEC ucode desc from VBIOS image: 0x%x
NVRM: failed to prepare new flcn ucode for FWSEC: 0x%x
NVRM: failed to parse/prepare Falcon ucode (desc: version 0x%x, offset 0x%x, size 0x%x): 0x%x
Checked expressions:
pDescV3 != NULL
memdescCreate(&pUcode->pUcodeMemDesc, pGpu, pUcode->size, 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)
pDescV2 != NULL
memdescCreate(&pUcode->pCodeMemDesc, pGpu, codeSizeAligned, 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_PHYSICALLY_CONTIGUOUS)
memdescCreate(&pUcode->pDataMemDesc, pGpu, dataSizeAligned, 256, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_PHYSICALLY_CONTIGUOUS)
pVbiosImg->biosSize > 0
pFwsecUcodeDescFromBit != NULL
pBitAddr != NULL

Call sites: s_vbiosReadStructure, s_vbiosRead8, s_vbiosRead16, s_vbiosRead32, s_biosStructCalculatePackedSize, s_biosUnpackStructure

Identifiers: sigSize, sigCount, vbiosSigVersions, codeEntry, codeSizeAligned, dataSizeAligned, pMappedCodeMem, pMappedDataMem, ucodeHeader, ucodeEntry, ucodeDescHdr, ucodeDescVersion, ucodeDescSize, ucodeDescOffset, descVersion, entryIdx, tokIdx, pBitAddr, pStructure, binver, falconData

Packed-structure format strings: 2b1w1d, bitTokenSzFmt, 1d1b, 1d, 6b, 2b1d, 15d, ucodeDescFmt, 9d1w2b2w

Log strings:
NVRM: failed to read BIT table structure: 0x%x
NVRM: Invalid BIT token size: %u
NVRM: failed to read BIT token %u, skipping: 0x%x
NVRM: failed to read BIOSDATA (BIT token %u), skipping: 0x%x
NVRM: failed to read Falcon ucode data (BIT token %u), skipping: 0x%x
NVRM: failed to read Falcon ucode header (BIT token %u), skipping: 0x%x
NVRM: failed to read Falcon ucode entry %u (BIT token %u), skipping: 0x%x
NVRM: failed to read Falcon ucode desc header for entry %u (BIT token %u), skipping: 0x%x
NVRM: unexpected ucode desc version missing for entry %u (BIT token %u), skipping
NVRM: unexpected ucode desc version 0x%x or size 0x%x for entry %u (BIT token %u), skipping
NVRM: failed to read Falcon ucode desc (desc version 0x%x) for entry %u (BIT token %u), skipping: 0x%x

src/kernel/gpu/gsp/kernel_gsp_trace_rats.c

Call sites: gspTraceRemoveBindpoint, _gspTraceReadVgpuTracingBuffer, _gspTraceEventBufferAdd, eventBufferIsEmpty, msgqRxGetReadBuffer

Checked expressions:
memdescCreate(&(pBind->pMemDesc), pGpu, allocBufferSize, 0, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)

Identifiers: gspTraceLoggingBufferActive, bufferWatermark, bufferAddr, bScheduled, pVgpuGspTracingBuffer, bGuestNotifInProgress, lastReadTimestamp, seqNo, pTgt, nElements, hQueue, pNextElement

src/kernel/gpu/gsp/message_queue_cpu.c

Call sites: _checkSum32, msgqRxMarkConsumed, ccslDecryptWithRotationChecks_KERNEL, ccslEncryptWithRotationChecks_KERNEL, msgqTxGetWriteBuffer, msgqTxSubmitBuffers, _gspMsgQueueCleanup, msgqRxLink, _getMsgQueueParams, _gspMsgQueueInit, msgqGetMetaSize, msgqInit, msgqTxCreate

Checked expressions:
memdescCreate(&pMQCollection->pSharedMemDesc, pGpu, sharedBufSize, RM_PAGE_SIZE, NV_MEMORY_NONCONTIGUOUS, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)
memdescMap(pMQCollection->pSharedMemDesc, 0, sharedBufSize, NV_TRUE, NV_PROTECT_WRITEABLE, &pVaKernel, &pPrivKernel)
NvP64_PLUS_OFFSET(pVaKernel, sharedBufSize) == NvP64_PLUS_OFFSET(lastQueueVa, lastQueueSize)
_gspMsgQueueInit(pRmQueueInfo)

Identifiers: pCmdQueueElement, checkSum, rpc, seqMismatchDiff, nRet, nRetries, aadBuffer, authTagBuffer, elemCount, txBufferFull, ppMQCollection, pRmQueueInfo, pWorkArea, pMetaData, pStatusQueue, sharedBufSize, pPageTbl, pCommandQueue, lastQueueVa, lastQueueSize, sharedMemPA, workAreaSize, pRpcMsgBuf

Log strings:
NVRM: Incomplete read.
NVRM: Bad checksum.
NVRM: Bad sequence number. Expected %u got %u. Possible memory corruption.
NVRM: Attempting recovery: ignoring old package with seqNum=%u of %u elements.
NVRM: msgqRxMarkConsumed failed: %d
NVRM: Read succeeded with %d retries.
NVRM: Read failed after %d retries.
NVRM: Fatal error detected in RPC decrypt: 0x%x!
NVRM: Incorrect message length %u
NVRM: Encryption failed with status = 0x%x.
NVRM: Fatal error detected in RPC encrypt: IV overflow!
NVRM: buffer is full (waiting for %d free elements, got %d)
NVRM: msgqTxSubmitBuffers failed: %d
NVRM: Status queue linked to command queue.
NVRM: msgqRxLink failed: %d, nvStatus 0x%08x, retries: %d
NVRM: GSP message queue was already initialized.
NVRM: Error allocating queue info area.
NVRM: Allocation failed with big page size, retrying with default page size
NVRM: Error allocating message queue shared buffer
NVRM: Error allocating pWorkArea.
NVRM: msgqInit failed: %d
NVRM: msgqTxCreate failed: %d
NVRM: Created command queue.
*commandQueueSize*RmGspStatusQueueSize**RmGspStatusQueueSize*regStatusQueueSize*statusQueueSize*(queueSize & RM_PAGE_MASK) == 0**(queueSize & RM_PAGE_MASK) == 0*numPtes*pageTableSize*call to kgspliteDumpLibosLogs_IMPL*pLogBuf*pLogMemDesc**pLogBuf*pLogBufPriv**pLogBufPriv**pLogMemDesc*logBufferAddr*call to kgspliteFreeLibosLoggingStructures_IMPL*src/kernel/gpu/gsplite/kernel_gsplite.c*NVRM: Sending LibOS Log Buffer Info to CMC failed! **src/kernel/gpu/gsplite/kernel_gsplite.c**NVRM: Sending LibOS Log Buffer Info to CMC failed! *pKernelGsplite->pLogBuf == NULL**pKernelGsplite->pLogBuf == NULL*pBinStorageRiscvElfFileData*logElfSize*bindataWriteToBuffer(pBinStorageRiscvElfFileData, pKernelGsplite->pLogElf, logElfSize)**bindataWriteToBuffer(pBinStorageRiscvElfFileData, pKernelGsplite->pLogElf, logElfSize)*logBufSize*memdescCreate(&pKernelGsplite->pLogMemDesc, pGpu, pKernelGsplite->logBufSize, RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pKernelGsplite->pLogMemDesc, pGpu, pKernelGsplite->logBufSize, RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, MEMDESC_FLAGS_NONE)*memdescMap(pKernelGsplite->pLogMemDesc, 0, memdescGetSize(pKernelGsplite->pLogMemDesc), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)**memdescMap(pKernelGsplite->pLogMemDesc, 0, memdescGetSize(pKernelGsplite->pLogMemDesc), NV_TRUE, NV_PROTECT_READ_WRITE, &pVa, &pPriv)*CMC**CMC*UCODE**UCODE*call to libosLogInitEx*RmGspliteEnableMask**RmGspliteEnableMask*NVRM: KernelGsplite%d enabled due to regkey override. **NVRM: KernelGsplite%d enabled due to regkey override. *NVRM: KernelGsplite%d missing due to lack of regkey override. **NVRM: KernelGsplite%d missing due to lack of regkey override. 
*NVRM: KernelGsplite%d: %s **NVRM: KernelGsplite%d: %s *call to _kgspliteStopLogPolling*call to kgspliteSendLibosLoggingStructuresInfo_IMPL*kgspliteSendLibosLoggingStructuresInfo(pGpu, pKernelGsplite)**kgspliteSendLibosLoggingStructuresInfo(pGpu, pKernelGsplite)*call to _kgspliteStartLogPolling*_kgspliteStartLogPolling(pGpu, pKernelGsplite)**_kgspliteStartLogPolling(pGpu, pKernelGsplite)*call to kgspliteInitLibosLoggingStructures_IMPL*kgspliteInitLibosLoggingStructures(pGpu, pKernelGsplite)**kgspliteInitLibosLoggingStructures(pGpu, pKernelGsplite)*call to _kgspliteInitRegistryOverrides*src/kernel/gpu/hfrp/kernel_hfrp.c*NVRM: hfrp is not enabled **src/kernel/gpu/hfrp/kernel_hfrp.c**NVRM: hfrp is not enabled *commandHeader*khfrpInfo*pMailboxIoInfo**pMailboxIoInfo*call to khfrpAllocateSequenceId_IMPL*pResponseStatus**pResponsePayload*pResponsePayloadSize*call to khfrpMailboxQueueMessage_IMPL*pCommandPayload**pCommandPayload*call to khfrpFreeSequenceId_IMPL*call to khfrpWriteBit_IMPL*call to khfrpPollOnIrqWrapper_IMPL*NVRM: Timed out while waiting to receive response for the command posted **NVRM: Timed out while waiting to receive response for the command posted *call to khfrpServiceEvent_IMPL*call to khfrpIsSequenceIdFree_IMPL*call to khfrpReadBit_IMPL*call to khfrpPollOnIrqRm_IMPL*pSequenceIdInfo*sequenceIdState**sequenceIdState*pResponsePayloadArray**pResponsePayloadArray***pResponsePayloadArray*pResponseStatusArray**pResponseStatusArray***pResponseStatusArray*pResponsePayloadSizeArray**pResponsePayloadSizeArray***pResponsePayloadSizeArray*pStatusArray**pStatusArray***pStatusArray*arrayIndex*indexValue*sequenceIdIndex*sequenceIdArrayIndex*NVRM: Could not allocate a sequence id to the command **NVRM: Could not allocate a sequence id to the command *hfrpBufferStartAddr*hfrpBufferEndAddr*hfrpHeadPtrAddr*hfrpTailPtrAddr*mailboxBufferSize*call to _hfrpReadByte*NVRM: Invalid state: head (%u) or tail (%u) pointer is out of range **NVRM: Invalid state: head (%u) or tail 
(%u) pointer is out of range *bufferSizeUsed*NVRM: Buffer full: buffer size used (%u) + payload size (%u) + header size (%u) >= mailbox buffer size (%u) **NVRM: Buffer full: buffer size used (%u) + payload size (%u) + header size (%u) >= mailbox buffer size (%u) *headAddr*call to _hfrpWriteMailboxData*pPayloadArray*call to _hfrpWriteByte**pAperture***pAperture*khfrpPrivBase**khfrpPrivBase*hfrpCommandBufferHeadPtrAddr*hfrpCommandBufferTailPtrAddr*hfrpCommandBufferStartAddr*hfrpCommandBufferEndAddr*hfrpResponseBufferHeadPtrAddr*hfrpResponseBufferTailPtrAddr*hfrpResponseBufferStartAddr*hfrpResponseBufferEndAddr*hfrpIrqInSetAddr*hfrpIrqOutSetAddr*hfrpIrqInClrAddr*hfrpIrqOutClrAddr*call to khfrpIoApertureDestruct_IMPL*NVRM: Failed to get HFRP info, bail out **NVRM: Failed to get HFRP info, bail out *hfrpParams*hfrpPrivBase**hfrpPrivBase*khfrpIntrCtrlReg**khfrpIntrCtrlReg*hfrpIntrCtrlReg**hfrpIntrCtrlReg*call to khfrpIoApertureConstruct_IMPL*RmEnableDStateHfrp**RmEnableDStateHfrp*RmEnableHdaDStateHfrp**RmEnableHdaDStateHfrp*call to khfrpCommonConstruct_IMPL*call to khfrpProcessResponse_IMPL*call to khfrpMailboxDequeueMessage_IMPL*responseSequenceId*NVRM: Invalid state: sequence id (%u) is not in accepted range **NVRM: Invalid state: sequence id (%u) is not in accepted range *NVRM: Invalid state: sequence id (%u) is not allocated for any command **NVRM: Invalid state: sequence id (%u) is not allocated for any command *NVRM: Invalid state: response status pointer is not allocated **NVRM: Invalid state: response status pointer is not allocated *NVRM: Invalid state: status pointer is not allocated **NVRM: Invalid state: status pointer is not allocated *clientPayloadSize*NVRM: Invalid state: response size (%u) > client payload size (%u) **NVRM: Invalid state: response size (%u) > client payload size (%u) *NVRM: Invalid state: response payload size pointer is not allocated **NVRM: Invalid state: response payload size pointer is not allocated *NVRM: Buffer empty: head (%u) == 
tail (%u) **NVRM: Buffer empty: head (%u) == tail (%u) *tailAddr*call to _hfrpReadMailboxData*messageSize*NVRM: Invalid state: buffer size used (%u) < message size (%u) **NVRM: Invalid state: buffer size used (%u) < message size (%u) *NVRM: payload size (%u) > Maximum allowed payload size (%u) **NVRM: payload size (%u) > Maximum allowed payload size (%u) *tempData*pNumCblocksPerPma*pNumChannels*pNumCblock*profilerCgStatus*streamoutState**pPmaVasRefcnt**streamoutState*call to _khwpmInitPmaStreamAttributes*_khwpmInitPmaStreamAttributes(pGpu, pKernelHwpm)*src/kernel/gpu/hwpm/kern_hwpm.c**_khwpmInitPmaStreamAttributes(pGpu, pKernelHwpm)**src/kernel/gpu/hwpm/kern_hwpm.c*vaSpaceBase*perCtxSize*RmHwpmExtendedBuffer**RmHwpmExtendedBuffer*call to _khwpmProfilerPmaVaSpaceRefcntInit*NVRM: Initialization of VaSpace Refcnt objects failed! **NVRM: Initialization of VaSpace Refcnt objects failed! *numPma*pKernelHwpm->numPma <= MAX_PMA_CREDIT_POOL**pKernelHwpm->numPma <= MAX_PMA_CREDIT_POOL*call to khwpmGetCblockInfo_DISPATCH*maxPmaChannels*maxCblocks*pRefcnt*bCreateVaSpace*bpcIdx < pKernelHwpm->maxCblocks**bpcIdx < pKernelHwpm->maxCblocks*call to khwpmStreamoutCreatePmaVaSpace_IMPL*NVRM: Failed to allocate PMA VA space for CBLOCK ID 0x%x. Error 0x%x **NVRM: Failed to allocate PMA VA space for CBLOCK ID 0x%x. Error 0x%x *bPmaVasRequested*call to khwpmStreamoutFreePmaVaSpace_IMPL*NVRM: Failed to free PMA VA space for CBLOCK ID 0x%x. Error 0x%x **NVRM: Failed to free PMA VA space for CBLOCK ID 0x%x. Error 0x%x *(bpcIdx < pKernelHwpm->maxCblocks)*src/kernel/gpu/hwpm/kern_hwpm_streamout.c**(bpcIdx < pKernelHwpm->maxCblocks)**src/kernel/gpu/hwpm/kern_hwpm_streamout.c*pPmaVAS*call to khwpmStreamoutInstBlkDestruct**pPmaVAS*call to khwpmPmaStreamSriovSetGfid_56cd7a*(pKernelHwpm->streamoutState[bpcIdx].pPmaVAS == NULL)**(pKernelHwpm->streamoutState[bpcIdx].pPmaVAS == NULL)*call to khwpmInstBlkConstruct*NVRM: Failed to construct PMA Instance block. 
Status 0x%x **NVRM: Failed to construct PMA Instance block. Status 0x%x *NVRM: Could not construct PMA vaspace object. Status 0x%x **NVRM: Could not construct PMA vaspace object. Status 0x%x *NVRM: Error Locking down VA space root. **NVRM: Error Locking down VA space root. *bRootPageDirPinned*NVRM: Error initializing HWPM PMA Instance Block. **NVRM: Error initializing HWPM PMA Instance Block. *HWPM PMA instblk**HWPM PMA instblk*NVRM: couldn't allocate PERF instblk! **NVRM: couldn't allocate PERF instblk! **pRefcnt*pPmaStream**pNumBytesCpuAddr*call to _hwpmStreamoutFreeCpuMapping*pNumBytesBufDesc*pNumBytesCpuAddrPriv**pNumBytesCpuAddrPriv***pNumBytesCpuAddr***pNumBytesCpuAddrPriv*vaddrRecordBuf*pRecordBufDesc**pRecordBufDesc**pNumBytesBufDesc*vaddrNumBytesBuf*pmaChannelIdx*call to refcntReleaseReferences_IMPL*NVRM: Releasing pPmaVasRefcnt failed on pmChIdx-%d. **NVRM: Releasing pPmaVasRefcnt failed on pmChIdx-%d. *call to refcntRequestReference_IMPL*refcntRequestReference(pRefcnt, profilerId, REFCNT_STATE_ENABLED, NV_FALSE)**refcntRequestReference(pRefcnt, profilerId, REFCNT_STATE_ENABLED, NV_FALSE)*bRefCnted*vaSizeRequested*virtSize*vaAlign*NVRM: vaspaceAlloc failed: 0x%08x **NVRM: vaspaceAlloc failed: 0x%08x *virtualAddressIter*call to _hwpmStreamoutAllocPmaMapping*NVRM: Failed to map records buffer to pma vaspace: 0x%08x **NVRM: Failed to map records buffer to pma vaspace: 0x%08x *virtualAddress2*NVRM: Failed to map available bytes buffer to pma vaspace: 0x%08x **NVRM: Failed to map available bytes buffer to pma vaspace: 0x%08x ***_pMemData*call to _hwpmStreamoutAllocCpuMapping*NVRM: Failed to map available bytes buffer to cpu vaspace: 0x%08x **NVRM: Failed to map available bytes buffer to cpu vaspace: 0x%08x *pCpuAddrTmp*NVRM: busMapRmAperture_HAL failed **NVRM: busMapRmAperture_HAL failed *pAddr64**pAddr64***pAddr64*NVRM: memdescMap failed: 0x%x **NVRM: memdescMap failed: 0x%x *NVRM: Error: 0x%x **NVRM: Error: 0x%x *call to 
profilerControlHwpmSupported_ac1694*profilerControlHwpmSupported_HAL(pProfiler, pParams)*src/kernel/gpu/hwpm/profiler_v1/kern_profiler_v1.c**profilerControlHwpmSupported_HAL(pProfiler, pParams)**src/kernel/gpu/hwpm/profiler_v1/kern_profiler_v1.c*call to gpuIsRmProfilingPrivileged*call to profilerIsProfilingPermitted_IMPL*call to profilerConstructState_ac1694*src/kernel/gpu/hwpm/profiler_v2/kern_profiler_v2.c**src/kernel/gpu/hwpm/profiler_v2/kern_profiler_v2.c*NVRM: Context level profiler is not supported **NVRM: Context level profiler is not supported *call to profilerBaseQueryCapabilities_IMPL*pProfBase*call to _isProfilingPermitted*(pClient->ProcID == pChannel->ProcessID)**(pClient->ProcID == pChannel->ProcessID)*call to profilerCtxConstructState_DISPATCH*call to profilerCtxConstructStatePrologue_DISPATCH*profilerCtxConstructStatePrologue_HAL(pProfCtx, pCallContext, pAllocParams)**profilerCtxConstructStatePrologue_HAL(pProfCtx, pCallContext, pAllocParams)*call to profilerCtxConstructStateInterlude_DISPATCH*profilerCtxConstructStateInterlude_HAL(pProfCtx, pCallContext, pAllocParams, clientPermissions)**profilerCtxConstructStateInterlude_HAL(pProfCtx, pCallContext, pAllocParams, clientPermissions)*call to profilerCtxConstructStateEpilogue_DISPATCH*profilerCtxConstructStateEpilogue_HAL(pProfCtx, pCallContext, pAllocParams)**profilerCtxConstructStateEpilogue_HAL(pProfCtx, pCallContext, pAllocParams)*call to _profilerBaseConstructVgpuGuest*_profilerBaseConstructVgpuGuest(pProfBase, pParams)**_profilerBaseConstructVgpuGuest(pProfBase, pParams)*call to profilerDevDestructState_DISPATCH*(pParentRef->internalClassId == classId(Subdevice))**(pParentRef->internalClassId == classId(Subdevice))*bCtxProfilingPermitted*bAdminProfilingPermitted*bVideoMemoryProfilingPermitted*bAsyncCeProfilingPermitted*bSysMemoryProfilingPermitted*bDevProfilingPermitted*call to profilerDevConstructStatePrologue_DISPATCH*profilerDevConstructStatePrologue_HAL(pProfDev, pCallContext, 
pAllocParams)**profilerDevConstructStatePrologue_HAL(pProfDev, pCallContext, pAllocParams)*call to profilerDevConstructStateInterlude_DISPATCH*profilerDevConstructStateInterlude_HAL(pProfDev, pCallContext, pAllocParams, clientPermissions)**profilerDevConstructStateInterlude_HAL(pProfDev, pCallContext, pAllocParams, clientPermissions)*call to profilerDevConstructStateEpilogue_DISPATCH*profilerDevConstructStateEpilogue_HAL(pProfDev, pCallContext, pAllocParams)**profilerDevConstructStateEpilogue_HAL(pProfDev, pCallContext, pAllocParams)**pPmaStreamList*pClientPermissions*call to _isMemoryProfilingPermitted*bAnyProfilingPermitted*call to profilerDevConstructState_DISPATCH*call to profilerBaseQuiesceStreamout_IMPL*call to khwpmStreamoutFreePmaStream_IMPL*pMemBytesAddr*NVRM: Invalid MEM_BYTES_ADDR. **NVRM: Invalid MEM_BYTES_ADDR. *call to _profilerPollForUpdatedMembytes*pmaIdleParams*NVRM: Waiting for PMA to be idle failed with error 0x%x **NVRM: Waiting for PMA to be idle failed with error 0x%x *NVRM: timeout occurred while waiting for PM streamout to idle. **NVRM: timeout occurred while waiting for PM streamout to idle. *NVRM: status=0x%08x, *MEM_BYTES_ADDR=0x%08x. **NVRM: status=0x%08x, *MEM_BYTES_ADDR=0x%08x. 
*ppBytesAvailable**ppBytesAvailable*ppStreamBuffers**ppStreamBuffers***ppStreamBuffers***ppBytesAvailable*call to profilerBaseDestructState_DISPATCH*bMmaBoostDisabled*call to profilerBaseConstructState_DISPATCH*(ref.pKernelMIGGpuInstance != NULL) && (ref.pMIGComputeInstance != NULL)**(ref.pKernelMIGGpuInstance != NULL) && (ref.pMIGComputeInstance != NULL)*call to _isNonAdminProfilingPermitted*requestParams*statusMask*globalStatus*call to khwpmGetRequestCgStatusMask*khwpmGetRequestCgStatusMask(&pParams->statusMask, &requestParams)*src/kernel/gpu/hwpm/profiler_v2/kern_profiler_v2_ctrl.c**khwpmGetRequestCgStatusMask(&pParams->statusMask, &requestParams)**src/kernel/gpu/hwpm/profiler_v2/kern_profiler_v2_ctrl.c*vPmaChIdx < pKernelHwpm->maxPmaChannels**vPmaChIdx < pKernelHwpm->maxPmaChannels*pmaBufferVA*membytesVA*pmaBufferSize*hwpmIBPA*hwpmIBAperture*(pParams->hwpmIBAperture == ADDR_FBMEM)**(pParams->hwpmIBAperture == ADDR_FBMEM)*call to _issueRpcToHost*extParams*pNewParams*call to _profilerBaseCtrlCmdFreePmaStreamVgpuGuest*call to _profilerBaseCtrlCmdPmaStreamUpdateGetPutVgpuGuest*pParams->bInputPmaChIdx == NV_FALSE**pParams->bInputPmaChIdx == NV_FALSE*pMemoryPmaBufferRef*pMemPmaBuffer**pMemPmaBuffer*pMemoryPmaAvailBytesRef*pMemPmaAvailBytes**pMemPmaAvailBytes*NVRM: failed: Memory 0x%x provided is not read only (Attr2: 0x%08x). **NVRM: failed: Memory 0x%x provided is not read only (Attr2: 0x%08x). *call to _profilerBaseCtrlCmdAllocPmaStreamVgpuGuest*bytesAvailable*NVRM: Failed to quiesce HWPM streamout. Error 0x%x **NVRM: Failed to quiesce HWPM streamout. Error 0x%x *NVRM: Failed to free PMA stream. Error 0x%x **NVRM: Failed to free PMA stream. 
Error 0x%x *bMemBytesBufferAccessAllowed*(memdescGetAddressSpace(pMemPmaAvailBytes->pMemDesc) == ADDR_SYSMEM)**(memdescGetAddressSpace(pMemPmaAvailBytes->pMemDesc) == ADDR_SYSMEM)*(memdescGetAddressSpace(pMemPmaBuffer->pMemDesc) == ADDR_SYSMEM)**(memdescGetAddressSpace(pMemPmaBuffer->pMemDesc) == ADDR_SYSMEM)*NVRM: Failed to allocate PMA stream. Error 0x%x **NVRM: Failed to allocate PMA stream. Error 0x%x *pTgtMemDescPmaAvailBytes*NVRM: Mapping of MEM_BYTES_ADDR buffer into CPU VA failed: 0x%x **NVRM: Mapping of MEM_BYTES_ADDR buffer into CPU VA failed: 0x%x *pTgtMemDescPmaBuffer*call to khwpmStreamoutAllocPmaStream_IMPL*promoteParams*refFindAncestorOfType(RES_GET_REF(pProfiler), classId(Device), &pDevice)**refFindAncestorOfType(RES_GET_REF(pProfiler), classId(Device), &pDevice)*pRmApi->Control(pRmApi, pClient->hClient, hObject, NVB0CC_CTRL_CMD_INTERNAL_GET_MAX_PMAS, &maxPmaParams, sizeof(maxPmaParams))**pRmApi->Control(pRmApi, pClient->hClient, hObject, NVB0CC_CTRL_CMD_INTERNAL_GET_MAX_PMAS, &maxPmaParams, sizeof(maxPmaParams))*maxPmaParams*pProfiler->ppBytesAvailable != NULL**pProfiler->ppBytesAvailable != NULL*pProfiler->ppStreamBuffers != NULL**pProfiler->ppStreamBuffers != NULL*memRegisterWithGsp(pGpu, pClient, hDevice, pParams->hMemPmaBuffer)**memRegisterWithGsp(pGpu, pClient, hDevice, pParams->hMemPmaBuffer)*memRegisterWithGsp(pGpu, pClient, hDevice, pParams->hMemPmaBytesAvailable)**memRegisterWithGsp(pGpu, pClient, hDevice, pParams->hMemPmaBytesAvailable)*internalParams*hMemPmaBuffer*pmaBufferOffset*hMemPmaBytesAvailable*pmaBytesAvailableOffset*ctxsw*bInputPmaChIdx*pRmApi->Control(pRmApi, pClient->hClient, hObject, NVB0CC_CTRL_CMD_INTERNAL_ALLOC_PMA_STREAM, &internalParams, sizeof(internalParams))**pRmApi->Control(pRmApi, pClient->hClient, hObject, NVB0CC_CTRL_CMD_INTERNAL_ALLOC_PMA_STREAM, &internalParams, sizeof(internalParams))*clientGetResourceRef(pClient, pParams->hMemPmaBytesAvailable, &pMemoryRef)**clientGetResourceRef(pClient, 
pParams->hMemPmaBytesAvailable, &pMemoryRef)*clientGetResourceRef(pClient, pParams->hMemPmaBuffer, &pMemoryRef)**clientGetResourceRef(pClient, pParams->hMemPmaBuffer, &pMemoryRef)*pmaVchIdx*bLegacyHwpm*pCntRef**pCntRef*pBoundCntBuf*pBufRef**pBufRef*pBoundPmaBuf*pCntMem*bRmExclusiveUse**pBoundCntBuf*pBufMem**pBoundPmaBuf*!pProfiler->bLegacyHwpm && pProfiler->maxPmaChannels != 0**!pProfiler->bLegacyHwpm && pProfiler->maxPmaChannels != 0*pCntRef != NULL && pBufRef != NULL**pCntRef != NULL && pBufRef != NULL**pCntMem**pBufMem*pCntMem != NULL && pBufMem != NULL**pCntMem != NULL && pBufMem != NULL*call to memdescAcquireRmExclusiveUse*pCountRef**pCountRef*pBufferRef**pBufferRef*src/kernel/gpu/intr/arch/ampere/intr_ga100.c*NVRM: MC_ENGINE_IDX %u has invalid notification intr vector %u **src/kernel/gpu/intr/arch/ampere/intr_ga100.c**NVRM: MC_ENGINE_IDX %u has invalid notification intr vector %u *NVRM: MC_ENGINE_IDX %u has invalid stall intr vector %u **NVRM: MC_ENGINE_IDX %u has invalid stall intr vector %u *i < NV_ARRAY_ELEMENTS((*pInterruptVectors))**i < NV_ARRAY_ELEMENTS((*pInterruptVectors))*call to intrWriteRegLeafEnSet_DISPATCH*call to intrWriteRegLeafEnClear_DISPATCH*call to intrGetIntrTopCategoryMask_IMPL*ret != 0*src/kernel/gpu/intr/arch/hopper/intr_gh100.c**ret != 0**src/kernel/gpu/intr/arch/hopper/intr_gh100.c*mask != 0**mask != 0*subtreeMap**subtreeMap*pCategoryEngine*pCategoryEngineNotification*pCategoryRunlistLocked*pCategoryRunlistNotification*pCategoryUvmOwned*pCategoryUvmShared*call to intrGetPendingStallEngines_TU102*call to vgpuGetPendingEvent*call to fecsIsIntrPending*pEngines != NULL*src/kernel/gpu/intr/arch/maxwell/intr_gm107.c**pEngines != NULL**src/kernel/gpu/intr/arch/maxwell/intr_gm107.c*intrGetPendingStallEngines_HAL(pGpu, pIntr, pEngines, pThreadState)**intrGetPendingStallEngines_HAL(pGpu, pIntr, pEngines, pThreadState)*call to intrGetAuxiliaryPendingStall_DISPATCH*call to intrGetAuxiliaryPendingStall_GM107*call to 
intrGetGmmuInterrupts_IMPL*pEngMask*call to intrConvertPmcIntrMaskToEngineMask_IMPL*call to intrDecodeStallIntrEn_DISPATCH*call to bitVectorTestAllSet_IMPL*call to intrConvertEngineMaskToPmcIntrMask_IMPL*call to _intrSetIntrEnInHw_GP100*intrEn0 <= INTERRUPT_TYPE_MAX*src/kernel/gpu/intr/arch/pascal/intr_gp100.c**intrEn0 <= INTERRUPT_TYPE_MAX**src/kernel/gpu/intr/arch/pascal/intr_gp100.c*intrCachedEn0*call to intrEncodeStallIntrEn_DISPATCH*pmcIntrEnSet*pmcIntrEnClear*call to intrSetHubLeafIntr_b3696a*call to intrGetIntrTopNonStallMask_DISPATCH*call to intrReadRegTop_DISPATCH*call to intrReadRegLeaf_DISPATCH*call to intrReadRegLeafEnSet_DISPATCH*call to _intrServiceNonStallLeaf_TU102*Could not service nonstall interrupt leafs*src/kernel/gpu/intr/arch/turing/intr_nonstall_tu102.c**Could not service nonstall interrupt leafs**src/kernel/gpu/intr/arch/turing/intr_nonstall_tu102.c*Could not service FIFO 'non-stall' intr**Could not service FIFO 'non-stall' intr*call to intrDisableTopNonstall_DISPATCH*call to intrGetInterruptTable_DISPATCH*intrGetInterruptTable_HAL(pGpu, pIntr, &pIntrTable)**intrGetInterruptTable_HAL(pGpu, pIntr, &pIntrTable)*call to vectIterNext_IMPL*intrPending*mcEngineIdx*call to intrServiceNotificationRecords_IMPL*NVRM: Could not service nonstall interrupt from mcEngineIdx %d. NV_STATUS = 0x%x **NVRM: Could not service nonstall interrupt from mcEngineIdx %d. 
NV_STATUS = 0x%x *call to vectAt_IMPL**call to vectAt_IMPL*call to vectCount_IMPL**pIntrTable*call to intrEnableTopNonstall_DISPATCH*nonStallMask*call to intrWriteRegTopEnClear_DISPATCH*call to intrWriteRegTopEnSet_DISPATCH*maskLo*maskHi*call to intrReadRegTopEnSet_DISPATCH*call to intrDisableLeaf_DISPATCH*call to intrEnableLeaf_DISPATCH*src/kernel/gpu/intr/arch/turing/intr_tu102.c**src/kernel/gpu/intr/arch/turing/intr_tu102.c*call to intrGetLeafSize_DISPATCH*NVRM: Interrupt registers: **NVRM: Interrupt registers: *NVRM: INTR_TOP_EN_SET(%u)=0x%x **NVRM: INTR_TOP_EN_SET(%u)=0x%x *NVRM: INTR_LEAF_EN_SET(%u)=0x%x **NVRM: INTR_LEAF_EN_SET(%u)=0x%x *NVRM: MC Interrupt table: **NVRM: MC Interrupt table: *intrGetInterruptTable_HAL(pGpu, pIntr, &pIntrTable) == NV_OK**intrGetInterruptTable_HAL(pGpu, pIntr, &pIntrTable) == NV_OK*NVRM: %2u: mcEngineIdx=%-4u intrVector=%-10u intrVectorNonStall=%-10u **NVRM: %2u: mcEngineIdx=%-4u intrVector=%-10u intrVectorNonStall=%-10u *call to intrGetIntrTopLegacyStallMask_DISPATCH*leafIndex*pLeafVals*subtreeIndex*call to kgmmuGetFatalFaultIntrPendingState_IMPL*call to kgmmuServiceNonReplayableFault_DISPATCH*Failed to service MMU non-replayable fault**Failed to service MMU non-replayable fault*call to intrGetAuxiliaryPendingStall_GP100*call to portAtomicOrS32*call to intrGetNumLeaves_DISPATCH*numIntrLeaves <= NV_MAX_INTR_LEAVES**numIntrLeaves <= NV_MAX_INTR_LEAVES*sanityCheckSubtreeMask*intrLeafValues**intrLeafValues*call to intrGetLeafStatus_DISPATCH*intrGetLeafStatus_HAL(pGpu, pIntr, intrLeafValues, pThreadState)**intrGetLeafStatus_HAL(pGpu, pIntr, intrLeafValues, pThreadState)*leafBit*engIdx < MC_ENGINE_IDX_MAX**engIdx < MC_ENGINE_IDX_MAX*call to intrWriteRegLeaf_DISPATCH*call to _intrGetUvmLeafMask_TU102*uvmShared*call to intrGetSubtreeRange_IMPL*intrGetSubtreeRange(pIntr, NV2080_INTR_CATEGORY_UVM_SHARED, &uvmShared)**intrGetSubtreeRange(pIntr, NV2080_INTR_CATEGORY_UVM_SHARED, 
&uvmShared)*lowestSubtreeIdx*ONEBITSET(uvmShared.subtreeMask)**ONEBITSET(uvmShared.subtreeMask)*call to _intrDisableStall_TU102*call to _intrEnableStall_TU102*NVRM: Exceeding the range of INTR leaf registers. intrVector = 0x%x, Reg = 0x%x **NVRM: Exceeding the range of INTR leaf registers. intrVector = 0x%x, Reg = 0x%x *call to intrGetIntrTopLockedMask_DISPATCH*accessCntrIntrVector*call to gpuIsGspOwnedFaultBuffersEnabled*replayableFaultIntrVector*call to intrCacheDispIntrVectors_DISPATCH*NVRM: UVM interrupt vectors for replayable fault 0x%x and access counter 0x%x are in different CPU_INTR_LEAF registers **NVRM: UVM interrupt vectors for replayable fault 0x%x and access counter 0x%x are in different CPU_INTR_LEAF registers *intrGetSubtreeRange(pIntr, NV2080_INTR_CATEGORY_UVM_OWNED, &uvmOwned)**intrGetSubtreeRange(pIntr, NV2080_INTR_CATEGORY_UVM_OWNED, &uvmOwned)*uvmOwned*NVRM: UVM owned interrupt vector for access counter is in an unexpected subtree Expected mask = 0x%llx, actual = 0x%llx **NVRM: UVM owned interrupt vector for access counter is in an unexpected subtree Expected mask = 0x%llx, actual = 0x%llx *uvmSharedCpuLeafEn*call to intrGetUvmSharedLeafEnDisableMask_IMPL*uvmSharedCpuLeafEnDisableMask*call to _intrClearLeafEnables_TU102*pmcRmOwnsIntrMask*NVRM: Enabling non-stall interrupt vector 0x%x **NVRM: Enabling non-stall interrupt vector 0x%x *call to intrCacheIntrFields_DISPATCH*call to intrDumpState_DISPATCH*intrStateUnload_HAL(pGpu, pIntr, GPU_STATE_FLAGS_PRESERVING)*src/kernel/gpu/intr/intr.c**intrStateUnload_HAL(pGpu, pIntr, GPU_STATE_FLAGS_PRESERVING)**src/kernel/gpu/intr/intr.c*call to intrInitInterruptTable_DISPATCH*intrInitInterruptTable_HAL(pGpu, pIntr)**intrInitInterruptTable_HAL(pGpu, pIntr)*intrStateLoad_HAL(pGpu, pIntr, GPU_STATE_FLAGS_PRESERVING)**intrStateLoad_HAL(pGpu, pIntr, GPU_STATE_FLAGS_PRESERVING)*call to intrGetLocklessVectorsInRmSubtree_DISPATCH*highestSubtreeIdx*lowestSubtreeIdx == highestSubtreeIdx**lowestSubtreeIdx == 
highestSubtreeIdx*(NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_END(highestSubtreeIdx) - 1) == NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_START(lowestSubtreeIdx)**(NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_END(highestSubtreeIdx) - 1) == NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_START(lowestSubtreeIdx)*locklessRmVectors**locklessRmVectors*NV_CTRL_INTR_GPU_VECTOR_TO_LEAF_REG(vector) == NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_START(lowestSubtreeIdx)**NV_CTRL_INTR_GPU_VECTOR_TO_LEAF_REG(vector) == NV_CTRL_INTR_SUBTREE_TO_LEAF_IDX_START(lowestSubtreeIdx)*tree < NV_ARRAY_ELEMENTS(pIntr->vectorToMcIdx)**tree < NV_ARRAY_ELEMENTS(pIntr->vectorToMcIdx)*vectorToMcIdx**vectorToMcIdx***vectorToMcIdx*pIntr->vectorToMcIdx[tree] != NULL**pIntr->vectorToMcIdx[tree] != NULL*vectorToMcIdxCounts**vectorToMcIdxCounts*vector < pIntr->vectorToMcIdxCounts[tree]**vector < pIntr->vectorToMcIdxCounts[tree]*pIntr->vectorToMcIdx[tree][vector].mcEngine == MC_ENGINE_IDX_NULL**pIntr->vectorToMcIdx[tree][vector].mcEngine == MC_ENGINE_IDX_NULL*pIntr->subtreeMap[category].subtreeMask != 0x0**pIntr->subtreeMap[category].subtreeMask != 0x0*pIntr->subtreeMap[category].subtreeMask != NV_U16_MAX**pIntr->subtreeMap[category].subtreeMask != NV_U16_MAX*pIntr->subtreeMap[category].subtreeMask != NV_U32_MAX**pIntr->subtreeMap[category].subtreeMask != NV_U32_MAX*pIntr->subtreeMap[category].subtreeMask != NV_U64_MAX**pIntr->subtreeMap[category].subtreeMask != NV_U64_MAX*call to _intrServiceStallCommonCheckBegin*_intrServiceStallCommonCheckBegin(pGpu, pIntr, &pOldContext)**_intrServiceStallCommonCheckBegin(pGpu, pIntr, &pOldContext)*call to intrIsPending_DISPATCH*call to _intrServiceStallExactList*call to _intrServiceStallCommonCheckEnd*intrGetPendingStall_HAL(pGpu, pIntr, &exactEngines, NULL )**intrGetPendingStall_HAL(pGpu, pIntr, &exactEngines, NULL )*call to bitVectorAnd_IMPL*call to _intrLogLongRunningInterrupts*NVRM: Failed GPU reg read : 0x%x. Check whether GPU is present on the bus **NVRM: Failed GPU reg read : 0x%x. 
Check whether GPU is present on the bus
ppOldContext
resservSwapTlsCallContext(ppOldContext, NULL)
call to _stuckIntrNewGeneration
call to intrServiceInterruptRecords_IMPL
intrGeneration
intrCount
NVRM: Stuck interrupt detected for mcEngine %u
bIntrStuck
intrVal
call to vgpuService
NVRM: Interrupt is stuck. Bailing after %d iterations.
longIntrStats
NVRM: %u long-running interrupts (%llu ns or slower) from engine %u, longest taking %llu ns
intrLength
lastPrintTime
call to intrIsDpcQueueEmpty_IMPL
pDPCQueue
call to intrDequeueDpc_IMPL
dpcdata
nextEngine
call to intrQueueInterruptBasedDpc_IMPL
call to bitVectorCopy_IMPL
bDpcStarted
call to vectClear_IMPL
ppTable
ppTable != NULL
call to vectIsEmpty_IMPL
!vectIsEmpty(&pIntr->intrTable)
fecsCtxswLogConsumerCount == 0
pbCtxswLog
pbCtxswLog != NULL
call to kgraphicsIsBottomHalfCtxswLoggingEnabled
intrServiceTable
NVRM: Missing notification interrupt handler for engine idx %d
Missing notification interrupt handler
NVRM: Could not service notification interrupt for engine idx %d; returned NV_STATUS = 0x%x
Could not service notification interrupt
NVRM: Missing interrupt handler for engine idx %d
Missing interrupt handler
bShouldService
pServiced
intrTiming
call to gpuGetNextChildOfTypeUnsafe_IMPL
call to gpuRegisterGenericKernelFalconIntrService_IMPL
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_INTR_GET_KERNEL_TABLE, pParams, sizeof *pParams)
pParams->tableLen <= NV2080_CTRL_INTERNAL_INTR_MAX_TABLE_SIZE
call to vectReserve_IMPL
vectReserve(&pIntr->intrTable, pParams->tableLen)
intrVectorNonStall
call to vectAppend_IMPL
vectAppend(&pIntr->intrTable, &entry) != NULL
call to vectTrim_IMPL
vectTrim(&pIntr->intrTable, 0)
saveIntrEn0
RMDisablePerIntrDPCQueueing
pIntrLoop
PDB_PROP_INTR_DISABLE_PER_INTR_DPC_QUEUEING
PDB_PROP_INTR_USE_INTR_MASK_FOR_LOCKING
RMIntrLockingMode
NVRM: NV_REG_STR_RM_INTR_LOCKING_MODE was set to: 0x%x
intrStuckThreshold
RM654663
call to intrCheckAndServiceNonReplayableFault_DISPATCH
call to intrCheckAndServiceFecsEventbuffer_IMPL
call to intrStateDestroyPhysical_56cd7a
halIntrEnabled
call to intrDestroyInterruptTable_DISPATCH
intrDestroyInterruptTable_HAL(pGpu, pIntr)
call to intrSetDefaultIntrEn_IMPL
uvmSharedIntrRmOwnsMask
call to intrSetIntrMaskUnblocked_IMPL
RMIntrDetailedLogs
PDB_PROP_INTR_ENABLE_DETAILED_LOGS
call to _intrInitRegistryOverrides
subtreeMask
call to _intrInitServiceTable
call to vectDestroy_IMPL
dpcQueue
pFront
pRear
call to vectInit_IMPL
vectInit(&pIntr->intrTable, portMemAllocatorGetGlobalNonPaged(), 0)
pSmallestVector
pSmallestVector != NULL
pmcIntrMask
NVRM: mcEngineIdx %d with bNonStall = %d has invalid vector
intrVector != NV_INTR_VECTOR_INVALID
Failed to get interrupt table
NVRM: Could not find the specified engine Id %u
bBCState
call to intrServiceStallList_DISPATCH
call to intrServiceStallListDevice_IMPL
call to gpuIsStateLoading
dpctype
call to intrQueueDpc_IMPL
NVRM: Cannot allocate memory for the DPC queue entry
pUnblockedEngines
NVRM: intrGetIntrEn: Returning interrupt disabled. Interrupts disabled in the HAL
NVRM: intrSetIntrEn: set interrupt refused since interrupts are disabled in the HAL
call to intrGetHubLeafIntrPending_b3696a
call to intrServiceStallSingle_DISPATCH
NVRM: NVRM_RPC: NV2080_CTRL_CMD_MC_SERVICE_INTERRUPTS failed with error 0x%x
pServiceInterruptParams
grCount
call to kmigmgrCountEnginesOfType_IMPL
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(i), &globalRmEngineType)
intrGetPendingStall_HAL(pGpu, pIntr, &pendingEngines, NULL)
call to intrProcessDPCQueue_DISPATCH
pRange
pRange != NULL
category < NV2080_INTR_CATEGORY_ENUM_COUNT
intrservServiceNotificationInterrupt called but not implemented

src/kernel/gpu/intr/intr_service.c
intrservServiceInterrupt called but not implemented
vectIsEmpty(&pIntr->intrTable)

src/kernel/gpu/intr/intr_vgpu.c
pStaticParams
pStaticParams->numEntries <= NV2080_CTRL_MC_GET_STATIC_INTR_TABLE_MAX
pVSI->mcEngineNotificationIntrVectors.numEntries <= NV2080_CTRL_MC_GET_ENGINE_NOTIFICATION_INTR_VECTORS_MAX_ENGINES
vectReserve(&pIntr->intrTable, pStaticParams->numEntries + pVSI->mcEngineNotificationIntrVectors.numEntries)
call to _intrCopyVfStaticInterruptTable
_intrCopyVfStaticInterruptTable(pGpu, pIntr, &pIntr->intrTable, pStaticParams)
call to _intrCopyVfDynamicInterruptTable
_intrCopyVfDynamicInterruptTable(pGpu, pIntr, numEngines, &pIntr->intrTable, &pVSI->mcEngineNotificationIntrVectors)
intrCategorySubtreeMapParams
kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, i, ENGINE_INFO_TYPE_MC, &engineIdx)
NVRM: VF: Dynamic intr MC_ENGINE_IDX=%u, stall=0x%08x, nonstall=0x%08x, pmcMask=0x%08x
vectAppend(pTable, &entry) != NULL
NVRM: Unknown NV2080_INTR_TYPE 0x%x
Unknown NV2080_INTR_TYPE
NVRM: VF: Static intr MC_ENGINE_IDX=%u, stall=0x%08x, nonstall=0x%08x, pmcMask=0x%08x

src/kernel/gpu/intr/swintr.c
Invalid engineIdx
PDB_PROP_GPU_FORCE_PERF_BIOS_LEVEL
bIsRTD3HotTransition

src/kernel/gpu/kern_gpu_power.c
NVRM: GPU is unable to transition from GC6 to D0 state.
call to gpuCompletedGC6PowerOff_88bc07
NVRM: D3Hot detected. Going to recover from D3Hot
NVRM: D3Hot case for Turing and later but legacy GC6/FGC6 flavor.
NVRM: D3Hot case for none RTD3 and no WAR enabled.
PDB_PROP_GPU_MSHYBRID_GC6_ACTIVE
PDB_PROP_GPU_FAST_GC6_ACTIVE
PDB_PROP_GPU_RTD3_GC6_ACTIVE
NVRM: GPU is not in GC6 state.
call to _gpuGc6EntryFailed
NVRM: GPU is unable to transition from D0 to GC6 state.
NVRM: Cannot perform RTD3 as chip does not support.
NVRM: GPU is already in GC6 state or stuck in transition.
call to gpuWaitGC6Ready_56cd7a
NVRM: GPU is not ready to transition from D0 to GC6 state.
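The interrupt-servicing strings above (intrCount, intrStuckThreshold, bIntrStuck, "NVRM: Stuck interrupt detected for mcEngine %u", "NVRM: Interrupt is stuck. Bailing after %d iterations.") suggest a loop that re-reads pending interrupts, services them, and bails out when a line refuses to clear after a threshold number of passes. The sketch below is purely illustrative: the function, its parameters, and the control flow are assumptions reconstructed from the log strings, not the Resource Manager source.

```python
def service_interrupts(read_pending, service, stuck_threshold):
    """Service pending interrupts until none remain; bail out if stuck.

    read_pending    -- returns the currently pending interrupt bits
    service         -- handles one batch of pending bits
    stuck_threshold -- corresponds to the intrStuckThreshold string above
    Returns True when the loop bailed out (the bIntrStuck case)."""
    intr_count = 0
    while True:
        pending = read_pending()
        if not pending:
            return False            # serviced everything, not stuck
        service(pending)
        intr_count += 1
        if intr_count >= stuck_threshold:
            # "NVRM: Interrupt is stuck. Bailing after %d iterations."
            print("Interrupt is stuck. Bailing after %d iterations."
                  % intr_count)
            return True             # bIntrStuck
```

The bail-out bound is what keeps a wedged interrupt line from livelocking the top half; the caller can then log and disable the offending engine.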
call to osGC6PowerControl
osGC6PowerControl(pGpu, 0x8000, &powerStatus)
osGC6PowerControl(pGpu, 0x0, &powerStatus)
NVRM: Timeout waiting for GPU to enter PWOK/ON state.Current State %x
call to gpuIsOnTheBus_IMPL
NVRM: GPU is not yet on the bus after GC6 power-up.
NVRM: Timeout waiting for GPU to appear on the bus.
osGC6PowerControl(pGpu, deferCmd, NULL)
call to gpuPrePowerOff_DISPATCH
gpuPrePowerOff_HAL(pGpu)
NVRM: Skip call to power off GPU in OSPM RTD3
call to gpuPowerOff_DISPATCH
NVRM: Call to power off GPU failed.
call to _gpuGc6EntrySanityCheck
call to _gpuGc6EntrySwStateUpdate
call to _gpuGc6EntryStateUnload
call to gpuGc6EntryPstateCheck_56cd7a
call to gpuGc6EntryGpuPowerOff_IMPL
NVRM: GPU is now in GC6 state.
call to _gpuGc6ExitSanityCheck
call to _gpuGc6ExitGpuPowerOn
call to _gpuGc6ExitStateLoad
NVRM: GPU is now in D0 state.
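The ordered "call to" entries above (_gpuGc6EntrySanityCheck, then SW state update, state unload, P-state check, and finally power-off, with the mirror-image exit path), together with the executedStepMask variable, suggest a staged power-transition sequence that records which steps ran so a failure can be unwound. The runner below is a hypothetical sketch of that pattern; the step names and return convention are assumptions, not the driver's actual code.

```python
def run_steps(steps):
    """Run steps in order; record each success in a bitmask.

    steps -- list of (name, fn) pairs; fn() returns True on success.
    Returns (executed_step_mask, ok). On failure the partial mask tells
    the caller which steps to undo."""
    executed_step_mask = 0
    for bit, (name, fn) in enumerate(steps):
        if not fn():
            return executed_step_mask, False
        executed_step_mask |= 1 << bit
    return executed_step_mask, True

# Hypothetical GC6-entry staging mirroring the call order in the dump.
gc6_entry = [
    ("sanity_check",    lambda: True),
    ("sw_state_update", lambda: True),
    ("state_unload",    lambda: True),
    ("gpu_power_off",   lambda: True),
]
```

A failed step leaves only the bits of the steps that completed, so the exit path restores exactly what the entry path touched.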
call to _gpuGc6ExitSwStateRestore
executedStepMask
LatencyTimerControl
DontModifyTimerValue
LatencyTimerValue
PciLatencyTimerControl
call to _kmcSetPciLatencyTimer
call to _kmcInitPciRegistryOverrides
pKernelMc

src/kernel/gpu/mc/kernel_mc.c
prbEncNestedStart(pPrbEnc, NVDEBUG_GPUINFO_ENG_MC)
prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_MC_RM_DATA)
prbEncNestedStart(pPrbEnc, NVDEBUG_ENG_MC_PCI_BARS)
call to kbusGetPciBarOffset_IMPL
call to kgmmuChangeReplayableFaultOwnership_DISPATCH
pReplayableFaultOwnrshpParams
pManufacturerParams
manufacturer
call to rmcfg_IsTEGRA_NVDISP_GPUS
pArchInfoParams

src/kernel/gpu/mem_mgr/arch/ada/mem_mgr_ad104.c
call to memmgrGetMaxContextSize_GA100

src/kernel/gpu/mem_mgr/arch/ampere/fbsr_ga100.c
call to fbsrEnd_GM107
call to fbsrSendMemsysProgramRawCompressionMode_DISPATCH
fbsrSendMemsysProgramRawCompressionMode_HAL(pGpu, pFbsr, NV_TRUE)
bRawModeWasEnabled
fbsrSendMemsysProgramRawCompressionMode_HAL(pGpu, pFbsr, NV_FALSE)
call to fbsrBegin_GM107

src/kernel/gpu/mem_mgr/arch/ampere/mem_mgr_ga100.c
pFlaOwnerGpu
call to memdescIsEgm
memmgrReadMmuLock_HAL(pGpu, pMemoryManager, &bIsMmuLockValid, &memLockLo, &memLockHi)
memLockHi
rsvdSize
bRsvdRegion
bSupportCompressed
bSupportISO
bInternalHeap
call to memmgrInsertFbRegion_IMPL
NVRM: Unprotected Block Start: 0x%0llx End: 0x%0llx Size: 0x%0llx
call to gpuCheckPageRetirementSupport_DISPATCH
pBlParams
NVRM: No blacklisted pages
NVRM: Failed to read black list addresses
offlined
pKind
call to memmgrGetMaxContextSize_TU102
blockedFbRegion
bLostOnSuspend
NVRM: Blocked Start: 0x%0llx End: 0x%0llx Size: 0x%0llx
plm
NVRM: MMU_LOCK read permission disabled, PLM val 0x%0x
bUseVasForCeMemoryOps
call to memmgrAllocDetermineAlignment_GM107
call to memmgrIsFlaSysmemSupported_DISPATCH
pFlaOwnerMemoryManager
call to memmgrIsMemDescSupportedByFla_GA100

src/kernel/gpu/mem_mgr/arch/blackwell/mem_mgr_gb10b_phys.c
call to memmgrGetCarveoutRegionInfo_KERNEL
pMemoryManager->pReservedConsoleMemDesc == NULL
carveoutRegion
pRegion
NVRM: Allocating console region of size: %llx, at base : %llx
pEHeap
pEHeap != NULL
pBlock != NULL
call to memmgrFreeScanoutCarveoutRegionResources_DISPATCH
pHeap != NULL
NVRM: Not found allocation in carveout region
NVRM: Error in freeing eheap 0x%0x
pMemorySystem
pMemorySystem != NULL
pVidHeapAlloc != NULL
pHeapFlag != NULL
NVRM: memdescCreate returns error 0x%0x
call to memSetSysmemCacheAttrib_IMPL
call to memmgrAllocScanoutCarveoutRegionResources_DISPATCH
NVRM: memdescAlloc returns error 0x%0x
memdescGetFlag(pMemDesc, MEMDESC_FLAGS_ALLOC_FROM_SCANOUT_CARVEOUT)
NVRM: EheapAlloc returns error 0x%0x
call to memdescSetPte
pScanoutHeap
call to _memmgrSocGetScanoutCarveout
NVRM: Created scanout carveout heap with base address=0x%llx and size=%llx
NVRM: Carveout support not available on simulation
call to memmgrCreateScanoutCarveoutHeap_DISPATCH
NVRM: Carveout supported not configured on this platform
NVRM: Failed to create scanout carveout heap.

src/kernel/gpu/mem_mgr/arch/blackwell/mem_mgr_gb202.c
NVRM: Max context size set from %llu MB to %llu MB

src/kernel/gpu/mem_mgr/arch/blackwell/mem_mgr_gb202_base.c
NVRM: Bad op (%08x) passed in

src/kernel/gpu/mem_mgr/arch/blackwell/mem_mgr_gb20b.c
NVRM: Unknown kind 0x%x.
call to memmgrGetUncompressedKind_TU102
call to memmgrChooseKindCompressZ_TU102
pFbAllocPageFormat
call to memmgrChooseKindZ_TU102

src/kernel/gpu/mem_mgr/arch/blackwell/mem_mgr_gb20b_base.c

src/kernel/gpu/mem_mgr/arch/hopper/mem_utils_gh100.c
pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, pChannel->channelId, pChannel->engineObjectId, pChannel->sec2Class, NULL, 0)
call to memmgrMemUtilsChannelSchedulingSetup_IMPL
memmgrMemUtilsChannelSchedulingSetup(pGpu, pMemoryManager, pChannel)
NVRM: end NV_STATUS=0x%08x
call to memmgrIsFastScrubberEnabled

src/kernel/gpu/mem_mgr/arch/hopper/virt_mem_allocator_gh100.c
pBar1P2PPhysMemDesc
pBar1P2PVirtMemDesc
NVRM: bar1p2p surface UN-mapped at 0x%llx + 0x%llx
pPeerMemDesc
pPeerKernelBus
bar1ApertureLen
bar1PhyAddr
NVRM: bar1p2p surface mapped at bar1PhyAddr 0x%llx, len 0x%llx
pMemDescOut
offsetOut
flagsOut
NVRM: Failed to create bar1p2p mapping

src/kernel/gpu/mem_mgr/arch/maxwell/fbsr_gm107.c
pVidMemDesc->Size == 0
NVRM: return early since FB is broken!
bOperationFailed
pSysMemNodeHead
pSysMemNodeCurrent
pStandbyBuffer
call to memdescSetStandbyBuffer
NVRM: %s allocation %llx-%llx [%s]
saving
restoring
DMA
CPU
vidSurface
sysSurface
call to memmgrMemCopy_IMPL
memmgrMemCopy(pMemoryManager, &vidSurface, &sysSurface, pVidMemDesc->Size, TRANSFER_FLAGS_PREFER_CE | TRANSFER_FLAGS_CE_PRI_DEFER_FLUSH)
memmgrMemCopy(pMemoryManager, &sysSurface, &vidSurface, pVidMemDesc->Size, TRANSFER_FLAGS_PREFER_CE | TRANSFER_FLAGS_CE_PRI_DEFER_FLUSH)
pagedBufferInfo
((pFbsr->sysOffset + vidOffset) & 0xffff) == 0
avblViewSz
sysAddr
*sysAddr
call to osMapViewToSection
sectionHandle
cpuCopyOffset
pPinnedBuffer
memmgrMemCopy(pMemoryManager, &vidSurface, &sysSurface, copySize, TRANSFER_FLAGS_PREFER_CE)
memmgrMemCopy(pMemoryManager, &sysSurface, &vidSurface, copySize, TRANSFER_FLAGS_PREFER_CE)
call to osUnmapViewFromSection
threadStateResetTimeout(pGpu)
((pFbsr->sysOffset + vidOffset) & (CPU_MAX_PINNED_BUFFER_SIZE - 1)) == 0
threadTimeoutCopySize
call to osReadFromFile
pDmaBuffer
call to osWriteToFile
(pFbsr->sysOffset & 3) == 0
call to memdescUnlock
pSysMemDesc
call to osCloseFile
call to osSrUnpinSysmem
pCe
memmgrInitCeUtils(pMemoryManager, NV_FALSE, bVirtualMode)
NVRM: %s %lld bytes of data
pSysReservedMemDesc
pFbsr->pSysReservedMemDesc->Size >= pFbsr->length
call to memdescLock
call to osSrPinSysmem
pMdl
pFbsr->pagedBufferInfo.pMdl
call to osCreateMemFromOsDescriptorInternal
pFbsr->pagedBufferInfo.sysAddr
osUnmapViewFromSection(pGpu->pOsGpuInfo, NvP64_VALUE(pFbsr->pagedBufferInfo.sysAddr), bIommuEnabled) == NV_OK
call to osOpenTemporaryFile
call to _fbsrInitGsp
_fbsrInitGsp(pGpu, pFbsr)
sysOffset
pMapCookie
call to osReleaseCpuAddressSpaceUpperBound
call to memmgrGetRsvdSizeForSr_DISPATCH
call to osReserveCpuAddressSpaceUpperBound
!pFbsr->pSysMemDesc->PteAdjust
hSysMem
bEnteringGcoffState
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_FBSR_INIT, &params, sizeof(params))
bPreserveOnSuspend

src/kernel/gpu/mem_mgr/arch/maxwell/mem_mgr_gm107.c
instBlkBarOverride
InstBlkAperture
InstBlkAttr
BAR instblk
tmpAddr
NVRM: Reserve space for Bar1 inst block offset = 0x%llx size = 0x%x
NVRM: Reserve space for Bar2 inst block offset = 0x%llx size = 0x%x
call to memmgrReserveBar2BackingStore_IMPL
rsvdMemorySize
NVRM: Calculated size of reserved memory = 0x%llx. Size finalized in StateInit.
rsvdBlockInfo
rsvdBlockList
rsvdMemorySizeIncrement
NVRM: RM can only increase reserved heap by 0x%llx bytes
NVRM: RT::: incrementing the reserved size by: %llx
call to memmgrGetGrHeapReservationSize_DISPATCH
smallPagePte
call to kgmmuGetMaxBigPageSize_DISPATCH
bigPagePte
call to memmgrGetUserdReservedFbSpace_DISPATCH
userdReservedSize
call to memmgrGetRunlistEntriesReservedFbSpace_DISPATCH
runlistEntriesReservedSize
call to memmgrGetMaxContextSize_DISPATCH
maxContextSize
call to memmgrCalcReservedFbSpaceForUVM_DISPATCH
call to kgmmuGetFaultBufferReservedFbSpaceSize_DISPATCH
mmuFaultBufferSize
faultMethodBufferSize
NVRM: Before capping: rsvdFastSize = 0x%llx bytes rsvdSlowSize = 0x%llx bytes Usable FB = 0x%llx bytes
NVRM: Fail the rsvd memory capping in case of user specified increase = %llx bytes
NVRM: After capping: rsvdFastSize = 0x%llx bytes rsvdSlowSize = 0x%llx bytes
NVRM: Error logging the FB reservation entries
call to kmemsysGetMaximumBlacklistPages_DISPATCH
pBlAddrs
call to memmgrGetBlackListPages_DISPATCH
status != NV_ERR_BUFFER_TOO_SMALL
call to heapAddPageToBlackList_IMPL
NVRM: No more space in blacklist, status: %x!
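The "Before capping"/"After capping" log strings above imply that the reserved-FB budget is the sum of several components (GR heap, USERD, runlist entries, max context, UVM, fault buffers), split into fast and slow pools and then capped to fit usable framebuffer, with an explicit failure when a user-specified increase cannot be honored. The policy below (shrink the slow pool first) is an assumption for illustration only.

```python
def cap_reserved(rsvd_fast, rsvd_slow, usable_fb, user_increase=0):
    """Return (fast, slow) reserved sizes that fit within usable_fb.

    Sketch of the capping step behind the 'Before capping'/'After capping'
    logs; the shrink-slow-first order is assumed, not confirmed."""
    if rsvd_fast + rsvd_slow <= usable_fb:
        return rsvd_fast, rsvd_slow
    if user_increase:
        # "NVRM: Fail the rsvd memory capping in case of user specified increase"
        raise ValueError("cannot cap reserved memory with a user increase")
    overflow = rsvd_fast + rsvd_slow - usable_fb
    slow = max(0, rsvd_slow - overflow)         # shrink the slow pool first
    fast = rsvd_fast - (overflow - (rsvd_slow - slow))
    return fast, slow
```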
NVRM: Offlining pages not supported
NVRM: Failed to read offlined addresses
pPlacementStrategy
pMemoryManager->Ram.fbAddrSpaceSizeMb != 0
0 == NvU64_HI32(pMemoryManager->Ram.fbUsableMemSize >> 20)
overrideFbsrRsvdBufferSize
call to _memmgrGetOptimalSysmemPageSize
newPageSize
call to kgmmuIsHugePageSupported
kgmmuIsHugePageSupported(pKernelGmmu)
0 == (physAddr & (RM_PAGE_SIZE_HUGE - 1))
call to kgmmuIsPageSize512mbSupported
kgmmuIsPageSize512mbSupported(pKernelGmmu)
0 == (physAddr & (RM_PAGE_SIZE_512M - 1))
call to kgmmuIsPageSize256gbSupported
kgmmuIsPageSize256gbSupported(pKernelGmmu)
0 == (physAddr & (RM_PAGE_SIZE_256G - 1))
NVRM: invalid page size attr
call to kgmmuGetMinBigPageSize_IMPL
call to memmgrIsKindCompressible_DISPATCH
0 == (physAddr & (newPageSize - 1))
oldPageSize
oldPageSize == newPageSize
call to memmgrComputeAndSetVgaDisplayMemoryBase_GM107
NVRM: failed to compute/set VGA display memory base!
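The alignment expressions above (`physAddr & (RM_PAGE_SIZE_HUGE - 1)` and the 512 MB / 256 GB variants, gated on `kgmmuIsPageSize*Supported`) describe a probe that picks the largest supported page size the physical address is aligned to. The sketch below reconstructs that selection; the size constants are the conventional values and should be treated as assumptions.

```python
RM_PAGE_SIZE      = 4096                       # small page
RM_PAGE_SIZE_HUGE = 2 * 1024 * 1024            # assumed 2 MiB
RM_PAGE_SIZE_512M = 512 * 1024 * 1024
RM_PAGE_SIZE_256G = 256 * 1024 * 1024 * 1024

def optimal_sysmem_page_size(phys_addr, supported):
    """Pick the largest supported page size the address is aligned to.

    supported -- page sizes the GMMU reports as usable (cf. the
    kgmmuIsPageSize*Supported checks in the dump)."""
    for size in sorted(supported, reverse=True):
        if phys_addr & (size - 1) == 0:        # 0 == (physAddr & (size - 1))
            return size
    return RM_PAGE_SIZE                        # fall back to the small page
```

A power-of-two size minus one is an all-ones mask, so the `&` test is a branch-free alignment check.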
bMemoryProtectionEnabled
call to memmgrStateInitReservedMemory
NVRM: Final reserved memory size = 0x%llx
call to memmgrCheckReservedMemorySize_DISPATCH
RMCFG_FEATURE_PLATFORM_GSP || memmgrCheckReservedMemorySize_HAL(pGpu, pMemoryManager) == NV_OK
NVRM: RESERVED Memory size: 0x%llx
bRsvdRegionIsValid
rsvdRegion
rsvdAlignment
rsvdTopOfMem
call to memmgrGetFbTaxSize_DISPATCH
rsvdMemoryBase
0 == NvU64_HI32(rsvdTopOfMem - pMemoryManager->rsvdMemoryBase)
pMemoryManager->Ram.fbUsableMemSize >= pMemoryManager->rsvdMemorySize
rsvdFbRegion
call to memmgrStateInitAdjustReservedMemory
memmgrStateInitAdjustReservedMemory(pGpu, pMemoryManager)
pMemoryManager->rsvdMemorySize < DRF_SIZE(NV_PRAMIN)
call to kmemsysPreFillCacheOnlyMemory_56cd7a
NVRM: couldn't allocate BAR1 instblk in sysmem
NVRM: couldn't allocate BAR2 instblk in sysmem
NVRM: FB region #%d:rsvdSize=%d
subdeviceGetByInstance(pClient, RES_GET_HANDLE(pDevice), gpumgrGetSubDeviceInstanceFromGpu(pGpu), &pSubdevice)
bar1MaxContigAvailSize
bankSwizzleAlignment
call to kbusGetBar1VARangeForDevice_IMPL
kbusGetBar1VARangeForDevice(pGpu, pKernelBus, pDevice, &bar1VARange)
call to subdeviceCtrlCmdFbGetInfoV2_IMPL
subdeviceCtrlCmdFbGetInfoV2(pSubdevice, &fbInfoParams)
pageFormat
call to kmemsysFreeComprResources_b3696a
call to _memmgrGetZbcSurfacesIndex
_memmgrGetZbcSurfacesIndex(pGpu, pFbAllocInfo->hClient, pFbAllocInfo->hDevice, &zbcTableIndex)
zbcSurfaces
pMemoryManager->zbcSurfaces[zbcTableIndex] != 0
call to memmgrSetZbcReferenced
NVRM: [1] hwResId = 0x%x, offset = 0x%llx, size = 0x%llx
NVRM: [2] zbcSurfaces[%d] = 0x%x
zcullAttr
cacheAttr
call to memmgrAllocGetAddrSpace_IMPL
bAlignPhase
call to memmgrVerifyComprAttrs_88bc07
call to memmgrVerifyDepthSurfaceAttrs_88bc07
NVRM: compression disabled due to regkey
call to memmgrComprSupported_IMPL
NVRM: Compression not supported for this configuration.
call to memmgrChooseKind_DISPATCH
NVRM: ERROR: Compression requested for small page allocation.
bComprWar
call to kmemsysAllocComprResources_KERNEL
NVRM: memsysAllocComprResources failed
NVRM: zbcSurfaces[%d] = 0x%x, hwResId = 0x%x
FLD_TEST_DRF(OS32, _ATTR2, _ZBC, _PREFER_NO_ZBC, retAttr2)
subdeviceGetByHandle(pClient, hDevice, &pSubdevice)
ref.pKernelMIGGpuInstance != NULL
ref.pMIGComputeInstance != NULL
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(0), &globalRmEngineType)
subdevInstance
pRmApi->Control(pRmApi, hClient, hDevice, NV0080_CTRL_CMD_INTERNAL_MEMSYS_SET_ZBC_REFERENCED, &params, sizeof(params))
call to dmaNvos32ToPageSizeAttr
NVRM: - invalid page size specified
NVRM: Compression requested on small page size mappings
call to memmgrChooseKindCompressCForMS2_4a4dee
call to memmgrIsPmaEnabled
call to memmgrIsPmaSupportedOnPlatform

src/kernel/gpu/mem_mgr/arch/maxwell/mem_utils_gm107.c
gpuGetClassList(pGpu, &numClasses, NULL, ENG_CE(eng))
class != 0
NVRM: Base = 0x%llx, Size = 0x%llx, PB location = %p
payLoad
call to memmgrChannelPushSemaphoreMethodsBlock_f2d351
NVRM: Pushing Semaphore Payload 0x%x
lastPayloadPushed
call to memmgrChannelPushAddressMethodsBlock_f2d351
call to memmgrMemUtilsCheckMemoryFastScrubEnable_DISPATCH
bMemoryScrubEnable
NVRM: Using Fast memory scrubber
remapConstB
remapComponentSize
srcAddressSpace == 0
dstAddressSpace == ADDR_FBMEM
call to _ceChannelPushMethodAperture_GM107
launchParams
bitFlip
NVRM: Pushing Finishing Semaphore Payload 0x%x
pbCpuVA
channelPutOffset
pStartPtr
pChannel->pbCpuVA != NULL
pControlGPFifo
pChannel->pControlGPFifo != NULL
GPGet
GPPutNext
NVRM: Put %d Get %d PutNext%d
NVRM: gp Base 0x%x, Size %d
NVRM: invalid Put %u >= %u
Timed Out waiting for space in GPFIFIO!
NVRM: invalid Get %u >= %u
GpEntry0
GpEntry1
pGpEntry
CliGetKernelChannelWithDevice(pChannel->pRsClient, pChannel->deviceId, pChannel->channelId, &pFifoKernelChannel) == NV_OK
pFifoKernelChannel
pbBitMapVA
blockSema
pBlockPendingState
pBlockDoneState
call to _getSpaceInPb
spaceInPb
NVRM: Space in PB is %d and starting fill at 0x%x
gpBase
NVRM: Wrap numBytes %d
call to _ceChannelUpdateGpFifo_GM107
NVRM: Wrapping PB around
semaCount
NVRM: Inserting Finish Payload!!!!!!!!!!
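The pushbuffer strings above (_getSpaceInPb, "NVRM: Put %d Get %d PutNext%d", "NVRM: invalid Put %u >= %u", "NVRM: Wrapping PB around") describe put/get ring-buffer bookkeeping: free space depends on whether the put pointer has wrapped past the get pointer, and out-of-range pointers are rejected. The sketch below shows the classic calculation; the one-slot-gap convention (so put == get always means empty) is an assumption, not necessarily what the driver does.

```python
def space_in_pb(put, get, size):
    """Free bytes in a put/get ring, keeping one slot open so that
    put == get unambiguously means empty."""
    if put >= size:
        raise ValueError("invalid Put %u >= %u" % (put, size))
    if get >= size:
        raise ValueError("invalid Get %u >= %u" % (get, size))
    if put >= get:
        # consumer is behind; free space wraps around the end
        return size - (put - get) - 1
    # producer has wrapped; free space is the gap up to the consumer
    return get - put - 1
```

When the contiguous region at the tail is too small for the next methods block, the producer writes a no-op up to the end and wraps, which is the "Wrapping PB around" case in the log.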
call to _checkSynchronization
call to _ceChannelPushMethodsBlock_GM107
Timed out waiting for Space in PB!
filledSpace
avlblSpace
NVRM: Space in PB is %d
pEccTD
call to channelAllocSubdevice
channelAllocSubdevice(pGpu, pChannel)
call to memmgrMemUtilsChannelInitialize_DISPATCH
pEccSyncChannel
call to memmgrMemUtilsCopyEngineInitialize_DISPATCH
pEccAsyncChannel
pRmApi->DupObject(pRmApi, pEccSyncChannel->hClient, pEccSyncChannel->deviceId, &pEccSyncChannel->bitMapSemPhysId, pEccAsyncChannel->hClient, pEccAsyncChannel->bitMapSemPhysId, 0)
pRmApi->AllocWithHandle(pRmApi, pEccSyncChannel->hClient, pEccSyncChannel->deviceId, pEccSyncChannel->bitMapSemVirtId, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))
Could not get back lock after allocating reduction sema
mapDmaParams
pRmApi->Map(pRmApi, &mapDmaParams)
pbGpuBitMapVA
pbBitMapVA
call to _memUtilsAllocateReductionSema
NVRM: Size should be a multiple of %d
call to _ceChannelScheduleWork_GM107
blocksPushed
semAddr
NVRM: Semaphore Payload is 0x%x last is 0x%x
NVRM: Timed Out waiting for CE semaphore
NVRM: GET=0x%x, PUT=0x%x, GPGET=0x%x, GPPUT=0x%x
finishPayload
serverGetClientUnderLock(&g_resServ, hClientId, &pClient)
deviceGetByHandle(pClient, hDeviceId, &pDevice)
kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, engineType, &localCe)
NVRM: Unable to determine CE's channel class.
call to _memUtilsAllocateUserD
_memUtilsAllocateUserD(pGpu, pMemoryManager, hClientId, hDeviceId, pChannel)
pRmApi->AllocWithHandle(pRmApi, hClientId, hDeviceId, hChannelId, hClass, &channelGPFIFOAllocParams, sizeof(channelGPFIFOAllocParams))
rmGpuLocksAcquire(GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM)
!rmGpuLockIsOwner()
internalflags
pRmApi->AllocWithHandle(pRmApi, hClientId, hDeviceId, pChannel->hUserD, userdMemClass, &memAllocParams, sizeof(memAllocParams))
call to memmgrMemUtilsGetMemDescFromHandle_IMPL
pUserdMemdesc
pChannel->pUserdMemdesc != NULL
pControlGPFifo
call to memmgrMemDescBeginTransfer_IMPL
pRmApi->MapToCpu(pRmApi, hClientId, hDeviceId, pChannel->bClientUserd ? pChannel->hUserD : hChannelId, 0, userdSize, (void **)&pChannel->pControlGPFifo, 0)
createParams
call to memmgrMemUtilsGetCopyEngineClass_DISPATCH
NVRM: Unable to determine CE's engine class.
pRmApi->AllocWithHandle(pRmApi, hClientId, hChannelId, hCopyObjectId, pChannel->hTdCopyClass, &createParams, sizeof(createParams))
call to _memUtilsAllocCe_GM107
_memUtilsAllocCe_GM107(pGpu, pMemoryManager, pChannel, pChannel->hClient, pChannel->deviceId, pChannel->channelId, pChannel->engineObjectId)
call to memmgrGetPteKindForScrubber_DISPATCH
physMemParams
Aliasing FbListMem
NVRM: Allocating FbAlias: %x for size: %llx, kind: %x
cacheSnoopFlag
bClientUserd
NVRM: Channel VAS heap base: %llx total: %llx
startFbOffset
clientGenResourceHandle(pRsClient, &pChannel->hFbAlias)
call to memmgrMemUtilsCreateMemoryAlias_DISPATCH
NVRM: Setting Identity mapping failed.. status: %x
NVRM: failed allocating scrubber vaspace, status=0x%x
NVRM: failed getting the scrubber vaspace from handle, status=0x%x
NVRM: failed pinning down Scrubber VAS, status=0x%x
clientGenResourceHandle(pRsClient, &pChannel->hFbAliasVA)
vaStartOffset
NVRM: Allocating VASpace for (base, size): (%llx, %llx) failed, with status: %x
call to memmgrMemUtilsMapFbAlias
memmgrMemUtilsMapFbAlias(pMemoryManager, pChannel)
call to kfifoCheckChannelAllocAddrSpaces_DISPATCH
USERD in sysmem and PushBuffer/GPFIFO in vidmem not allowed
call to _memUtilsChannelAllocatePB_GM107
Could not get back lock after allocating Push Buffer sema
pbGpuVA
pbGpuNotifierVA
call to _memUtilsAllocateChannel
_memUtilsAllocateChannel(pGpu, pMemoryManager, hClient, hDevice, hChannel, hErrNotifierVirt, hPushBuffer, pChannel)
call to _memUtilsMapUserd_GM107
_memUtilsMapUserd_GM107(pGpu, pMemoryManager, pChannel, hClient, hDevice, hChannel, bUseRmApiForBar1)
pChannelBufferMemdesc
pChannel->pChannelBufferMemdesc != NULL
pErrNotifierMemdesc
pChannel->pErrNotifierMemdesc != NULL
pbCpuVA
pTokenFromNotifier
pErrNotifierCpuVA
!= NULL**pErrNotifierCpuVA != NULL*call to memmgrScrubMapDoorbellRegion_DISPATCH*memmgrScrubMapDoorbellRegion_HAL(pGpu, pMemoryManager, pChannel)**memmgrScrubMapDoorbellRegion_HAL(pGpu, pMemoryManager, pChannel)*pRmApi->Control(pRmApi, pChannel->hClient, pChannel->subdeviceId, NV2080_CTRL_CMD_GPU_GET_MAX_SUPPORTED_PAGE_SIZE, &maxPageSizeParams, sizeof(maxPageSizeParams))**pRmApi->Control(pRmApi, pChannel->hClient, pChannel->subdeviceId, NV2080_CTRL_CMD_GPU_GET_MAX_SUPPORTED_PAGE_SIZE, &maxPageSizeParams, sizeof(maxPageSizeParams))*maxPageSizeParams*maxPageSizeParams.maxSupportedPageSize != 0**maxPageSizeParams.maxSupportedPageSize != 0*vasCapsParams*pRmApi->Control(pRmApi, pChannel->hClient, pChannel->deviceId, NV0080_CTRL_CMD_DMA_ADV_SCHED_GET_VA_CAPS, &vasCapsParams, sizeof(vasCapsParams))**pRmApi->Control(pRmApi, pChannel->hClient, pChannel->deviceId, NV0080_CTRL_CMD_DMA_ADV_SCHED_GET_VA_CAPS, &vasCapsParams, sizeof(vasCapsParams))*vasCapsParams.supportedPageSizeMask != 0**vasCapsParams.supportedPageSizeMask != 0*NV_IS_ALIGNED64(pChannel->startFbOffset, RM_PAGE_SIZE_512M)**NV_IS_ALIGNED64(pChannel->startFbOffset, RM_PAGE_SIZE_512M)*NV_IS_ALIGNED64(pChannel->vaStartOffset, RM_PAGE_SIZE_512M)**NV_IS_ALIGNED64(pChannel->vaStartOffset, RM_PAGE_SIZE_512M)*NVRM: fb start 0x%llx size 0x%llx supported page sizes 0x%llx (0x%llx) va start 0x%llx **NVRM: fb start 0x%llx size 0x%llx supported page sizes 0x%llx (0x%llx) va start 0x%llx *NV_IS_ALIGNED64(currentFbOffset, pageSize)**NV_IS_ALIGNED64(currentFbOffset, pageSize)*NV_IS_ALIGNED64(currentVaddr, pageSize)**NV_IS_ALIGNED64(currentVaddr, pageSize)*pageSizeMap*pageSize == vasCapsParams.bigPageSize**pageSize == vasCapsParams.bigPageSize*NVRM: page size not supported: 0x%llx **NVRM: page size not supported: 0x%llx *currentVaddr*NVRM: CeUtils VAS : FB (addr: %llx, size: %llx) identity mapped to VAS at addr: %llx, page size: 0x%llx **NVRM: CeUtils VAS : FB (addr: %llx, size: %llx) identity mapped to VAS at addr: %llx, page 
size: 0x%llx *currentFbOffset == pChannel->fbSize**currentFbOffset == pChannel->fbSize*fbAliasVA*attrNotifier*hVirtMem*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, hPhysMem, hClass, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, hPhysMem, hClass, &memAllocParams, sizeof(memAllocParams))*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, hVirtMem, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, hVirtMem, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, pChannel->errNotifierIdPhys, hClass, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, pChannel->errNotifierIdPhys, hClass, &memAllocParams, sizeof(memAllocParams))*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, pChannel->errNotifierIdVirt, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, hDevice, pChannel->errNotifierIdVirt, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemPhysId, NV01_MEMORY_SYSTEM, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemPhysId, NV01_MEMORY_SYSTEM, &memAllocParams, sizeof(memAllocParams))*pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemVirtId, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))**pRmApi->AllocWithHandle(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemVirtId, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))*pRmApi->MapToCpu(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemPhysId, 0, (((pChannel->blockCount + 31)/32)*4), (void 
**)&pChannel->pbBitMapVA, 0)**pRmApi->MapToCpu(pRmApi, pChannel->hClient, pChannel->deviceId, pChannel->bitMapSemPhysId, 0, (((pChannel->blockCount + 31)/32)*4), (void **)&pChannel->pbBitMapVA, 0)*gpuGartCaps*call to kgmmuSetupWarForBug2720120_DISPATCH*kgmmuSetupWarForBug2720120_HAL(pKernelGmmu)*src/kernel/gpu/mem_mgr/arch/maxwell/virt_mem_allocator_gm107.c**kgmmuSetupWarForBug2720120_HAL(pKernelGmmu)**src/kernel/gpu/mem_mgr/arch/maxwell/virt_mem_allocator_gm107.c*bApplyWarForBug2720120**pteTemplate*pFmtFamily*bug2720120WarPde1**pMemBlock*pMemBlock != NULL**pMemBlock != NULL*pVASBlock**pVASBlock*gvaspaceWalkUserCtxAcquire(pGVAS, pGpu, pVASBlock, &userCtx)**gvaspaceWalkUserCtxAcquire(pGVAS, pGpu, pVASBlock, &userCtx)*mmuWalkMap(userCtx.pGpuState->pWalk, vaLo, vaHi, &mapTarget)**mmuWalkMap(userCtx.pGpuState->pWalk, vaLo, vaHi, &mapTarget)*pAddr != NULL**pAddr != NULL*memType != NULL**memType != NULL*NULL != pVAS**NULL != pVAS**pVaddr*pageSizeSubDev*pageOffsSubDev*pageOffs*pageSize == pageSizeSubDev**pageSize == pageSizeSubDev*pageOffs == pageOffsSubDev**pageOffs == pageOffsSubDev*NVRM: memmgrGetKindComprFromMemDesc failed **NVRM: memmgrGetKindComprFromMemDesc failed *call to kgmmuIsPerVaspaceBigPageEn*pageSize != RM_PAGE_SIZE_HUGE**pageSize != RM_PAGE_SIZE_HUGE*mapLength*compAlign*NVRM: vaspaceAlloc failed **NVRM: vaspaceAlloc failed *call to dmaPageArrayInit**call to memdescGetPteArray*call to dmaIsDefaultGpuUncached_DISPATCH*call to kmemsysNeedInvalidateGpuCacheOnUnmap_DISPATCH*pGpu == pRootMemDesc->pGpu && memdescGetAddressSpace(pRootMemDesc) == ADDR_SYSMEM**pGpu == pRootMemDesc->pGpu && memdescGetAddressSpace(pRootMemDesc) == ADDR_SYSMEM*bInvalidateL2OnFree*NVRM: dmaUpdateVASpace_GF100 failed **NVRM: dmaUpdateVASpace_GF100 failed *call to deleteInfoPtr*call to addInfoPtr**call to 
addInfoPtr*optimizeUseCaseOverride*pDHPI*vasReverse*RMRestrictVARange**RMRestrictVARange*bDmaRestrictVaRange*tlbLock*readOnly*writeDisable*readDisable*forceAccessCounterDisable**pageSize*vaSpaceBigPageSize*pageSize == vaSpaceBigPageSize**pageSize == vaSpaceBigPageSize*readPte*pagePrevAddr*pageAddr*NVRM: MMU: given non-contig 4KB pages for %lldkB mapping **NVRM: MMU: given non-contig 4KB pages for %lldkB mapping *call to memmgrGetUncompressedKind_DISPATCH*kindNoCompression*overMapModulus*bReadPtes*kindNoCompr*bCompr*bUpdatePhysAddr*bUpdateCompr*call to kgmmuIsBug2720120WarEnabled*call to _dmaApplyWarForBug2720120*_dmaApplyWarForBug2720120(pGVAS, pGpu, vaLo, vaHi)**_dmaApplyWarForBug2720120(pGVAS, pGpu, vaLo, vaHi)*call to kmemsysNeedInvalidateGpuCacheOnMap_DISPATCH*NVRM: force ACD=true with ptePcfSw = 0x%X **NVRM: force ACD=true with ptePcfSw = 0x%X *(kgmmuTranslatePtePcfFromSw_HAL(pKernelGmmu, ptePcfSw, &ptePcfHw) == NV_OK)**(kgmmuTranslatePtePcfFromSw_HAL(pKernelGmmu, ptePcfSw, &ptePcfHw) == NV_OK)*NULL != pFmtFamily**NULL != pFmtFamily*sparsePte*call to nvFieldIsValid32*fldReadDisable*fldWriteDisable*fldLocked*fldAtomicDisable*pTgtPteMem*call to mmuFmtVirtAddrToEntryIndex*call to _gmmuWalkCBMapNextEntries_Direct*progress == entryIndexHi - entryIndexLo + 1**progress == entryIndexHi - entryIndexLo + 1*NULL != pMemBlock**NULL != pMemBlock*bRemap*call to knvlinkGetUniqueFabricEgmBaseAddress_4de472*NVRM: Nvswitch systems don't support compression. **NVRM: Nvswitch systems don't support compression. 
pGVAS_FLA
NULL != pIter->pMap
pProgress
pMap != NULL
(pIter->pPageArray->count == 1) && (currIdxMod > 0)
bCompressible
iRegion
call to kmemsysIsPagePLCable_DISPATCH
bIsWarApplied
call to memmgrGetDisablePlcKind_DISPATCH
call to kgmmuFieldSetKindCompTags_IMPL
NULL != pMemDesc
memdescGetPageSize(memdescGetMemDescFromGpu(pMemDesc, pGpu), VAS_ADDRESS_TRANSLATION(pVAS)) != 0
subDevIdSrc
*pVirtualMemory
pCliMapInfo != NULL
pCliMapInfo->pDmaMappingInfo->mapPageSize != 0
NVRM: Using dmaFreeMapping with sparse == False in BAR1 path!
NVRM: error updating VA space.
call to memdescGetGpuP2PCacheAttrib
peerNumber
totalVaRange
bIsBarOrPerf
bAllocVASpace
bIsBar1
bIsMIGMemPartitioningEnabled
writeOnly
call to memdescGetCpuCacheSnoop
call to memdescGetGpuCacheSnoop
NVRM: GPU cache snoop flag is enabled at allocation time, but the mapping is requested with cache snoop disabled.
NVRM: GPU cache snoop flag is disabled at allocation time, but the mapping is requested with cache snoop enabled.
shaderFlags
call to fabricvaspaceGetGpaMemdesc_IMPL
NVRM: Failed to get the adjusted memdesc for the fabric memdesc
pAdjustedMemDesc
NVRM: Choosing default map pagesize through fixed offset path.
NVRM: Choosing default map pagesize based on client provided VA range.
call to _dmaGetMaxVAPageSize
_dmaGetMaxVAPageSize(*pVaddr, pCliMapInfo->pVirtualMemory, memdescGetPhysAddr(pLocals->pTempMemDesc, addressTranslation, 0), memdescGetSize(pLocals->pTempMemDesc), memdescGetPteArray(pLocals->pTempMemDesc, addressTranslation)[0], pLocals->pTempMemDesc->ActualSize, &vaMaxPageSize, pLocals->vaspaceBigPageSize)
NVRM: Choosing default map pagesize based on the map call allocating VA for the mapping.
_dmaGetMaxVAPageSize(*pVaddr, NULL, memdescGetPhysAddr(pLocals->pTempMemDesc, addressTranslation, 0), memdescGetSize(pLocals->pTempMemDesc), memdescGetPteArray(pLocals->pTempMemDesc, addressTranslation)[0], pLocals->pTempMemDesc->ActualSize, &vaMaxPageSize, pLocals->vaspaceBigPageSize)
call to isPageSizeCompatibleWithVA
pMemDescVA
pLocals->pageSize != 0
NVRM: Use preallocated VA's page size(0x%llx)
NVRM: Unknown page size flag encountered during mapping
NVRM: Picked Page size based on flags: 0x%llx flagVal: 0x%x
NVRM: Requested mapping at larger page size than the physical granularity PhysPageSize = 0x%llx MapPageSize = 0x%llx. Overriding to physical page granularity...
(pLocals->readOnly == DMA_UPDATE_VASPACE_FLAGS_READ_ONLY)
call to kgmmuIsVaspaceInteropSupported
call to kbusIsBar1Force64KBMappingEnabled
NVRM: Requested 4K mapping on compressible sufrace. Overriding to physical page granularity...
disableEncryption
overMap
kmemsysSwizzIdToMIGMemRange(pGpu, pKernelMemorySystem, swizzId, pLocals->totalVaRange, &pLocals->totalVaRange)
vaRangeLo
vaRangeHi
(pLocals->pageSize == pLocals->vaspaceBigPageSize) || (pLocals->pageSize == RM_PAGE_SIZE) || (pLocals->pageSize == RM_PAGE_SIZE_HUGE) || (pLocals->pageSize == RM_PAGE_SIZE_512M) || (pLocals->pageSize == RM_PAGE_SIZE_256G)
call to virtmemGetAddressAndSize_IMPL
targetSpaceLimit
targetSpaceBase
targetSpaceLength
NVRM: Requested BAR1 VA Lo=0x%llx Hi=0x%llx total BAR1 VA range Lo=0x%llx Hi=0x%llx
requestedRange
call to gvaspaceIsInternalVaRestricted_IMPL
pLocals->pageSize <= pLocals->vaspaceBigPageSize
bReverse
NVRM: The VA space requires all allocations to specify a fixed address
NVRM: can't alloc VA space for mapping.
0 == (pLocals->vaLo & (pLocals->pageSize - 1))
vaSize >= pLocals->mapLength
NVRM: Virtual address 0x%llX is not compatible with page size 0x%llX or page offset 0x%llX.
pLocals->pMemory != NULL
NVRM: P2P LOOPBACK setup with physical vidmem at 0x%llx and virtual address at 0x%llx
call to dmaPageArrayInitWithFlags
NVRM: Fabric memory should not be compressible.
pMappingGpu
NVRM: Mapping Gpu is not attached to the given memory object
call to kbusGetNvlinkPeerId_DISPATCH
call to kbusGetNvSwitchPeerId_DISPATCH
pMemoryManager->bLocalEgmEnabled
NVRM: No P2P for system memory.
call to gpuGetVmmuSegmentSize
Vidmem page size is limited by VMMU segment size when GPU is in SRIOV mode
call to knvlinkGetDirectConnectBaseAddress_90d271
call to _dmaGetFabricEgmAddress
call to _dmaGetFabricAddress
NVRM: SYS_NCOH + VOL=1 mappings do not support platform atomics.
bNeedL2InvalidateAtUnmap
!pLocals->bNeedL2InvalidateAtUnmap
NVRM: can't update VA space for mapping @vaddr=0x%llx
mapPageSize
pVASInfo
*pMapNode
call to fabricvaspacePutGpaMemdesc_IMPL
NVRM: targetSpaceBase: 0x%llx, targetSpaceLength: 0x%llx, targetSpaceLimit: 0x%llx
NVRM: memDescribedStartAddr: 0x%llx, memDescribedSize: 0x%llx
alignedVA
vaMaxPageSize
bFoundPageSize
call to memmgrGetMaxContextSize_GM200
newRegion
overrideInitHeapMin
regionLimit
regionBase
newRegionIndex
overrideHeapMax
call to memmgrChooseKindCompressC_GM107
call to memmgrIsScrubOnFreeEnabled
call to scrubberDestruct
call to scrubberConstruct
scrubberConstruct(pGpu, pHeap)
src/kernel/gpu/mem_mgr/arch/pascal/mem_mgr_scrub_gp100.c
objCreate(&pMemoryManager->pSysmemScrubber, pMemoryManager, SysmemScrubber, pGpu)
pFbRegion0
pFbRegion1
((NvU64) *offset << 10ULL) == (pFbRegion1->limit + 1)
src/kernel/gpu/mem_mgr/arch/turing/mem_mgr_tu102.c
((NvU64) *offset << 10ULL) == (pFbRegion0->limit + 1)
call to memmgrGetMaxContextSize_GV100
NVRM: Setting ctag offset before allocating: %x
NVRM: - comptagline offset is outside the bounds, offset: %x, limit:%x.
bRmToChooseKind
NVRM: Client sets a compressible PTE kind 0x%x, while sets NVOS32_ATTR_COMPR_NONE. RM will ignore the PTE kind from client and choose an uncompressible kind instead.
call to memmgrChooseKindZ_DISPATCH
call to memmgrChooseKindCompressZ_DISPATCH
NVRM: Unable to set a kind, dumping attributes:comprAttr = 0x%x, type = 0x%x, attr = 0x%x
src/kernel/gpu/mem_mgr/arch/turing/mem_mgr_tu102_base.c
call to memmgrGetMaxContextSize_GP100
pDoorbellRegion
pDoorbellRegisterOffset
bUseDoorbellRegister
src/kernel/gpu/mem_mgr/ce_utils.c
NVRM: Failed to get resource in resource server for physical memory handle.
pDstPhysmemRef
pSrcPhysmemRef
submittedWorkId
pPhysmemRef
call to ceutilsMemset_IMPL
NVRM: CeUtils: unsupported flags = 0x%llx
call to ceutilsUpdateProgress_IMPL
pLiteKernelChannel
ceutilsGetFirstAsyncCe(pCeUtils, pGpu, pChannel->pRsClient, pChannel->deviceId, &ceId, NV_FALSE) == NV_OK
call to ceutilsIsSubmissionPaused
ceutilsIsSubmissionPaused(pCeUtils)
ceutilsUsesPreferredCe(pCeUtils)
call to channelWaitForFinishPayload
(pCeUtils != NULL) && (pCeUtils->pChannel != NULL)
call to channelReadChannelMemdesc
hwCurrentCompletedPayload
lastCompletedPayload
!ceutilsIsSubmissionPaused(pCeUtils)
NVRM: Src/Dst Memory descriptor should be valid.
NVRM: CeUtils does not support p2p copies right now.
dstSize
NVRM: Invalid offset passed for the src/dst memdesc.
NVRM: Invalid memcopy length.
channelPbInfo
bCeMemcopy
lastSubmittedPayload
pCompletionCallback
pCompletionCallbackArg
**pCompletionCallbackArg
srcPageGranularity
dstPageGranularity
srcAddrTranslation
dstAddrTranslation
bSrcContig
bDstContig
copyLength
NVRM: CeUtils Memcopy dstAddr: %llx, srcAddr: %llx, size: %x
call to memdescGetPtePhysAddr
srcAddr
dstAddr
call to _ceutilsSubmitPushBuffer
NVRM: Cannot submit push buffer for memcopy.
NVRM: Async memset payload returned: 0x%x
NVRM: Work was done from RM PoV lastSubmitted = 0x%x
NVRM: Invalid memdesc for CeUtils memset.
NVRM: Invalid memory descriptor passed.
addrTranslation
NVRM: Invalid offset passed for the memdesc.
NVRM: CeUtils Args to memset - offset: %llx, size: %llx
NVRM: Invalid memset length passed.
pageGranularity
memsetLength
memsetSizeContig
NVRM: CeUtils Memset dstAddr: %llx, size: %x
NVRM: Cannot submit push buffer for memset.
pChannelPbInfo
pChannelPbInfo != NULL
pChannel != NULL
NVRM: Actual size of copying to be pushed: %x
call to channelWaitForFreeEntry
NVRM: Cannot get putIndex.
call to _ceutilsInsertCallback
_ceutilsInsertCallback(pCeUtils, pChannelPbInfo)
bReleaseMapping
call to _ceUtilsFastScrubEnabled
call to channelFillPbFastScrub
pChannel->bUseVasForCeCopy
call to channelFillCePb
NVRM: Cannot push methods to channel.
call to channelFillGpFifo
NVRM: Channel operation failures during memcopy
lastSubmittedEntry
call to channelServiceScrubberInterrupts
call to _ceutilsProcessCompletionCallbacks
pCallbackLock
listCount(&pCeUtils->completionCallbacks) == 0
pCeUtils->lastCompletedPayload == lastSubmittedPayload
NVRM: Leaked USERD mapping from ceUtils!
NVRM: Leaked pushbuffer mapping!
NVRM: Leaked notifier mapping!
bForcedCeId
bCompletionCallbackEnabled
call to clientSetHandleGenerator_IMPL
clientSetHandleGenerator(pChannel->pRsClient, RS_UNIQUE_HANDLE_BASE, RS_UNIQUE_HANDLE_RANGE/2 - VGPU_RESERVED_HANDLE_RANGE)
clientSetHandleGenerator(pChannel->pRsClient, 1U, ~0U - 1U)
bClientAllocated
hVASpaceId
bUseBar1
NVRM: Enabled fast scrubber in construct.
*call to channelSetupIDs*call to channelSetupChannelBufferSizes*ceId*ceutilsGetFirstAsyncCe(pCeUtils, pGpu, pChannel->pRsClient, pChannel->deviceId, &pChannel->ceId, NV_FALSE)**ceutilsGetFirstAsyncCe(pCeUtils, pGpu, pChannel->pRsClient, pChannel->deviceId, &pChannel->ceId, NV_FALSE)*NVRM: Channel alloc successful for ceUtils **NVRM: Channel alloc successful for ceUtils **pCallbackLock*pCeUtils->pCallbackLock != NULL**pCeUtils->pCallbackLock != NULL**pCeUtils*pCeUtils->bCompletionCallbackEnabled**pCeUtils->bCompletionCallbackEnabled*pCallbackEntry**pCallbackEntry*pCallbackEntry != NULL**pCallbackEntry != NULL***pArg*NVRM: inserted completion callback payload=%llu **NVRM: inserted completion callback payload=%llu *NVRM: lastCompletedPayload=%llu **NVRM: lastCompletedPayload=%llu *NVRM: calling completion callback payload=%llu **NVRM: calling completion callback payload=%llu *call to kmigmgrGetGPUInstanceScrubberCe_IMPL*pCeInstance*kmigmgrGetGPUInstanceScrubberCe(pGpu, GPU_GET_KERNEL_MIG_MANAGER(pGpu), pDevice, pCeInstance)**kmigmgrGetGPUInstanceScrubberCe(pGpu, GPU_GET_KERNEL_MIG_MANAGER(pGpu), pDevice, pCeInstance)*pipelinedValue*flushValue*disablePlcKind*(pChannel != NULL)*src/kernel/gpu/mem_mgr/channel_utils.c**(pChannel != NULL)**src/kernel/gpu/mem_mgr/channel_utils.c*(pChannelPbInfo != NULL)**(pChannelPbInfo != NULL)*(pCcslCtx != NULL)**(pCcslCtx != NULL)*pAuthTagBufMemDesc*(pAuthTagBufMemDesc != NULL)**(pAuthTagBufMemDesc != NULL)*pSemaMemDesc*(pSemaMemDesc != NULL)**(pSemaMemDesc != NULL)*pMethodLength*(pMethodLength != NULL)**(pMethodLength != NULL)**pPtr**pStartPtr**pMemoryManager*NVRM: PutIndex: %x, PbOffset: %x **NVRM: PutIndex: %x, PbOffset: %x *pScrubMethdAuthTagBuf*(pScrubMethdAuthTagBuf != NULL)**(pScrubMethdAuthTagBuf != NULL)*pSemaAuthTagBuf*(pSemaAuthTagBuf != NULL)**(pSemaAuthTagBuf != NULL)*pMethods**pMethods*pMethods != NULL**pMethods != NULL*call to addMethodsToMethodBuf*addMethodsToMethodBuf(NV906F_SET_OBJECT, pChannel->classEngineID, pMethods, 
methodIdx++)**addMethodsToMethodBuf(NV906F_SET_OBJECT, pChannel->classEngineID, pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_DST_ADDR_HI, NvU64_HI32(pChannelPbInfo->dstAddr), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_DST_ADDR_HI, NvU64_HI32(pChannelPbInfo->dstAddr), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_DST_ADDR_LO, NvU64_LO32(pChannelPbInfo->dstAddr), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_DST_ADDR_LO, NvU64_LO32(pChannelPbInfo->dstAddr), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_SIZE, pChannelPbInfo->size, pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_DECRYPT_COPY_SIZE, pChannelPbInfo->size, pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_METHOD_STREAM_AUTH_TAG_ADDR_HI, NvU64_HI32(scrubMthdAuthTagBufGpuVA + scrubAuthTagBufoffset), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_METHOD_STREAM_AUTH_TAG_ADDR_HI, NvU64_HI32(scrubMthdAuthTagBufGpuVA + scrubAuthTagBufoffset), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_METHOD_STREAM_AUTH_TAG_ADDR_LO, NvU64_LO32(scrubMthdAuthTagBufGpuVA + scrubAuthTagBufoffset), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_METHOD_STREAM_AUTH_TAG_ADDR_LO, NvU64_LO32(scrubMthdAuthTagBufGpuVA + scrubAuthTagBufoffset), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_SEMAPHORE_A, NvU64_HI32(pChannel->pbGpuVA + pChannel->authTagBufSemaOffset), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_SEMAPHORE_A, NvU64_HI32(pChannel->pbGpuVA + pChannel->authTagBufSemaOffset), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_SEMAPHORE_B, NvU64_LO32(pChannel->pbGpuVA + pChannel->authTagBufSemaOffset), pMethods, methodIdx++)**addMethodsToMethodBuf(NVCBA2_SEMAPHORE_B, NvU64_LO32(pChannel->pbGpuVA + pChannel->authTagBufSemaOffset), pMethods, methodIdx++)*addMethodsToMethodBuf(NVCBA2_SET_SEMAPHORE_PAYLOAD_LOWER, scrubAuthTagBufIndex, pMethods, 
SEC2 scrub-channel utilities (method-stream construction, channel setup, and doorbell):

- Method-buffer writes via addMethodsToMethodBuf(), each advancing methodIdx++: NVCBA2_SET_SEMAPHORE_PAYLOAD_LOWER (scrubAuthTagBufIndex; also pChannelPbInfo->payload), NVCBA2_EXECUTE (execute), NVCBA2_METHOD_STREAM_AUTH_TAG_ADDR_HI/_LO (NvU64_HI32/NvU64_LO32 of semaMthdAuthTagBufGpuVA + semaAuthTagBufoffset), NVCBA2_SEMAPHORE_A/_B (NvU64_HI32/NvU64_LO32 of pChannel->pbGpuVA + pChannel->finishPayloadOffset), NVCBA2_SEMAPHORE_D (semaD).
- Signing and buffers: call to ccslSign_IMPL; hmacDigest, hmacBufferSizeBytes, pBufScrub, pBufSema, pMethodBuf; call to channelAddHostSema.
- Assertions: methodSize <= pChannel->methodSizePerBlock; index < SEC2_WL_METHOD_ARRAY_SIZE; pChannelPbInfo->clientSemaAddr == 0; putIndex < pChannel->channelNumGpFifioEntries; pPutIndex != NULL; pChannel->pGpu != NULL; pChannel->type < MAX_CHANNEL_TYPE.
- Push paths: calls to channelPushSecureCopyProperties, channelPushMemoryProperties, channelPushMethod; gated on gpuIsCCFeatureEnabled(pChannel->pGpu), pChannel->bSecure, and pChannel->hTdCopyClass >= HOPPER_DMA_COPY_A; locals pSemaAddr, intrValue, pbPutOffset, mcIndex.
- Doorbell ring: call to kbusFlushPcieForBar0Doorbell_DISPATCH ("NVRM: Busflush failed in _scrubFillGpFifo"); serverGetClientUnderLock(&g_resServ, pChannel->hClient, &pClient); CliGetKernelChannel(pClient, pChannel->channelId, &pKernelChannel); kfifoRingChannelDoorBell_HAL(pGpu, pKernelFifo, pKernelChannel) via kfifoRingChannelDoorBell_DISPATCH; "NVRM: Get Index: %x, PayloadIndex: %x"; kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, engineType, ENGINE_INFO_TYPE_RUNLIST, &runlistId).
- Channel state fields: methodSizePerBlock, channelNotifierSize, channelNumGpFifioEntries, channelPbSize, channelSize, semaOffset, finishPayloadOffset, authTagBufSemaOffset.
- Handle setup: pRmApi->AllocWithHandle() for NV01_ROOT (&pChannel->hClient), NV01_DEVICE_0 (pChannel->deviceId), NV20_SUBDEVICE_0 (pChannel->subdeviceId), and AMPERE_SMC_PARTITION_REF (pChannel->hPartitionRef); serverGetClientUnderLock(&g_resServ, pChannel->hClient, &pRsClient); clientSetHandleGenerator(pRsClient, RS_UNIQUE_HANDLE_BASE, RS_UNIQUE_HANDLE_RANGE/2 - VGPU_RESERVED_HANDLE_RANGE) and clientSetHandleGenerator(pRsClient, 1U, ~0U - 1U); clientGenResourceHandle() for deviceId, subdeviceId, and hPartitionRef; serverutilGenResourceHandle() for physMemId and channelId.
- serverutilGenResourceHandle() likewise generates errNotifierIdVirt, errNotifierIdPhys, engineObjectId, eventId, pushBufferId, doorbellRegionHandle, hUserD, and (when pChannel->hVASpaceId == NV01_NULL_OBJECT) hVASpaceId.

src/kernel/gpu/mem_mgr/context_dma.c:

- Diagnostics: "NVRM: Cannot obtain the video memory offset of a noncontiguous vidmem alloc!"; "NVRM: Invalid DMA context in ctxdmaGetKernelVA"; "NVRM: Invalid DMA context in ctxdmaValidate" (call to ctxdmaValidate_IMPL); "NVRM: HASH_TABLE=ENABLE no longer supported!"
- Locals and fields: physaddr, bUnicast, FbApertureLen, FbAperture, KernelPriv, hParentFromMemory, bReadOnly, CacheSnoop (also spelled cachesnoop), Type.
- Mapping helpers: calls to refAddMapping, CliUpdateDeviceMemoryMapping, NV_RM_RPC_ALLOC_CONTEXT_DMA, _ctxdmaConstruct, _ctxdmaDestroyFBMappings, _ctxdmaDestruct, refFindCpuMapping, refRemoveMapping.
- Display channel binding: calls to dispchnBindCtx_IMPL, dispchnUnbindCtx_IMPL, dispchnUnbindCtxFromAllChannels_IMPL, ctxdmaIsBound_IMPL; params pBindCtxDmaParams, pUnbindCtxDmaParams, pUpdateCtxDmaParams (asserts pUpdateCtxDmaParams->hCtxDma == RES_GET_HANDLE(pContextDma) and pSubdevice->pDevice == pContextDma->pDevice); pNewAddress/pNewLimit applied via instmemUpdateContextDma_DISPATCH.

src/kernel/gpu/mem_mgr/dma.c:

- Page-array checks: asserts pPageArray->pData and pageIndex < pPageArray->count; call to osPageArrayGetPhysAddr; "NVRM: Unable to determine memdesc localization information!"
- VA-space lookups: call to memdescGetPteArraySize; vaspaceGetByHandleOrDeviceDefault() resolving pVAS from pDiagApi/pGpuSubDevInfo, pDevice, pSubdevice, hDevice, or pRmCtrlParams->hObject contexts; params pPageData, pSubdevRef, pDmaCapsParams, pDmaInfoParams, dmaInfoTbl; locals beginAddress, endAddress, alignedAddress, compressionPageSize; asserts NULL != pHeap and 0 != pParams->pageSize.
- Default VAS: call to deviceSetDefaultVASpace_IMPL via deviceSetDefaultVASpace(pDevice, pParams->hVASpace).
- Diagnostics: "NVRM: vaspaceGetPageTableInfo failed"; "NVRM: vaspaceGetPteInfo failed"; "NVRM: Flush op invoked with target Unit 0x%x".
- GVAS control: gpugrpGetGlobalVASpace(pGpuGrp, &pVAS); vaspaceGetVasInfo(pVAS, pParams) via vaspaceGetVasInfo_DISPATCH; calls to gvaspaceExternalRootDirRevoke_IMPL, gvaspaceUnregisterAllChanGrps_IMPL, and gvaspaceExternalRootDirCommit(pGVAS, hClient, pGpu, pParams); assert memdescGetAddressSpace(vaspaceGetPageDirBase(pVAS, pGpu)) == ADDR_SYSMEM; gvaspaceResize(pGVAS, pParams); calls to NV_RM_RPC_UPDATE_PDE_2 and gvaspaceUpdatePde2_IMPL.
- Mapping info: subDevIdTgt, pDmaMappingInfo, DmaOffset, dmaAllocMapFlag, dmaAllocMapFlag2, pRegionRecords, numRegions; asserts pVirtualMemory != NULL, vaddr >= baseVirtAddr, vaddr < (baseVirtAddr + virtSize).

src/kernel/gpu/mem_mgr/heap.c:

- Heap resize: asserts pHeap->heapType == HEAP_TYPE_PHYS_MEM_SUBALLOCATOR and pBlockLast->owner == NVOS32_BLOCK_TYPE_FREE; guarded size math portSafeAddS64(pBlockLast->end - pBlockLast->begin, resizeBy, &newSize) && (newSize > 0); free-list state pBlockNew, pBlockList, pFreeBlockList, nextFree, prevFree, u0; "NVRM: _heapUpdate failed to _ADD block".
- Blacklist read-in: blackListAddresses; calls to heapFreeBlackListedPages_IMPL and memmgrGetBlackListPagesForHeap_DISPATCH; "NVRM: Failed to read blackList pages (0x%x)."
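The heap.c resize path above guards its size arithmetic with portSafeAddS64 before growing or shrinking a block. A minimal sketch of that overflow-guarded pattern; safe_add_s64 and can_resize_block are illustrative stand-ins, not the driver's actual functions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for RM's portSafeAddS64: writes the sum and returns true only
 * when no signed overflow occurs. */
static bool safe_add_s64(int64_t a, int64_t b, int64_t *out)
{
    if ((b > 0 && a > INT64_MAX - b) || (b < 0 && a < INT64_MIN - b))
        return false;               /* overflow or underflow: reject */
    *out = a + b;
    return true;
}

/* Resize guard in the style of the heap code: accept a resize only when the
 * new size is computable without overflow and stays strictly positive. */
static bool can_resize_block(int64_t begin, int64_t end, int64_t resizeBy)
{
    int64_t newSize;
    return safe_add_s64(end - begin, resizeBy, &newSize) && newSize > 0;
}
```

Rejecting the operation up front (rather than computing a wrapped size) matches the assertion portSafeAddS64(...) && (newSize > 0) quoted above.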
- Blacklisting: calls to heapFilterBlackListPages_IMPL, heapBlackListPages_IMPL, heapIsPmaManaged_IMPL, pmaIsPmaManaged, pmaAddToBlacklistTracking, pmaGetBlacklistSize, _heapGetPageBlackListGranularity, portAtomicExDecrementU64, pmaDestroy; assert offset <= limit; state pPageNumbers, pPageNumbersWithEccOn, pPageNumbersWithECcOff, physicalAddress, pBlacklist, pBlackList, pBlacklistChunks, pAddresses, bPendingRetirement, bIsValid, dynamicRmBlackListedCount, staticRmBlackListedCount, dynamicBlacklistSize, staticBlacklistSize.
- Blacklist diagnostics: "NVRM: Error 0x%x creating blacklist"; "NVRM: Added 0x%0llx (blacklist count: %u)"; "NVRM: No more space in blacklist!"; "NVRM: Calling PMA helper function to blacklist page offset: %llx"; "NVRM: We have blacklisted maximum number of pages possible. returning error"; "NVRM: Error: BlackList already exists!"; "NVRM: Could not allocate memory for blackList!"; "NVRM: Error 0x%x creating blacklisted page memdesc for address 0x%llx, skipping"; "NVRM: Error 0x%x blacklisting page at address 0x%llx, skipping".
- Block tree maintenance: "btreeInsert failed to ADD/SIZE_CHANGE block"; "btreeUnlink failed to REMOVE block"; calls to _heapAddBlockToNoncontigList and _heapRemoveBlockFromNoncontigList; "NVRM: Invalid page size attribute!"
- Noncontig allocation: alignedSize, numPagesLeft; "NVRM: pageSize: 0x%llx, numPagesLeft: 0x%llx, allocSize: 0x%llx"; traversal state pCurrBlock, pNextBlock, nextFreeNoncontig, blockBegin, blockEnd, blockAligned, blockSizeInPages, allocAl; "NVRM: blockId: %d, blockBegin: 0x%llx, blockEnd: 0x%llx, blockSize: 0x%llx, blockSizeInPages: 0x%llx, numPagesLeft: 0x%llx"; call to _heapProcessFreeBlock ("NVRM: ERROR: Could not process free block, error: 0x%x"); lists pSavedAllocList, noncontigAllocListNext, pLastBlock; texture state pteAddress, shuffleStrides, shuffleStride, shuffleStrideIndex, textureId, textureData; allocedMemDesc, bFirstBlock; "NVRM: Could not satisfy request: allocSize: 0x%llx"; unwind via _heapBlockFree (unwindStatus, "NVRM: ERROR: Could not free block, error 0x%x!"); memdescSetAllocSizeFields(pMemDesc, alignedSize, pageArrayGranularity).
- Noncontig free-list invariants: before unlink, pBlock == pHeap->pNoncontigFreeBlockList || pBlock->prevFreeNoncontig != NULL || pBlock->nextFreeNoncontig != NULL; after unlink, pBlock->prevFreeNoncontig == NULL && pBlock->nextFreeNoncontig == NULL.
- Block split and accounting: nextSize, mhandle, pBlockSplit; "NVRM: _heapUpdate failed to _SIZE_CHANGE block"; "NVRM: failed to allocate block"; calls to _heapAdjustFree and osInternalReserveAllocCallback; assert pHeap->free <= pHeap->total.
- Alloc hints: asserts (pAllocHint->pSize != NULL), (pAllocHint->type < NVOS32_NUM_MEM_TYPES), ((pAllocHint->pHeight != NULL) && (pAllocHint->pAttr != NULL)); pHeight; structs pFbAllocInfo, pFbAllocPageFormat, pageFormat, pad.
- Resource sizing: calls to memmgrDeterminePageSize_DISPATCH ("NVRM: memmgrDeterminePageSize failed, status: 0x%x"), memmgrAllocDetermineAlignment_DISPATCH ("NVRM: memmgrAllocDetermineAlignment failed, status: 0x%x"), memmgrAllocHwResources_IMPL ("NVRM: memmgrAllocHwResources failed, status: 0x%x"); possAttr, hostPageSize; call to memUtilsLeastCommonAlignment; pFbRegion, highestAddr with (highestAddr & RM_PAGE_MASK) != 0; call to _heapGetMaxFree (largestOffset, largestFree, pBlockFirstFree, freeBlockSize, pBlockTree); assert pMemDesc->pHeap == pHeap.
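The noncontig free-list invariants above (a block must be the list head or carry a live prevFreeNoncontig/nextFreeNoncontig link before unlinking, and both links must be NULL afterward) describe a conventional doubly-linked list. A self-contained sketch under those assumptions; the struct layout and helper names are hypothetical, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal model of the noncontig free-list bookkeeping implied by heap.c. */
typedef struct Block {
    struct Block *prevFreeNoncontig;
    struct Block *nextFreeNoncontig;
} Block;

typedef struct { Block *pNoncontigFreeBlockList; } Heap;

/* A block is on the list iff it is the head or has a live link. */
static bool on_noncontig_list(const Heap *h, const Block *b)
{
    return b == h->pNoncontigFreeBlockList ||
           b->prevFreeNoncontig != NULL || b->nextFreeNoncontig != NULL;
}

static void list_push(Heap *h, Block *b)
{
    b->prevFreeNoncontig = NULL;
    b->nextFreeNoncontig = h->pNoncontigFreeBlockList;
    if (h->pNoncontigFreeBlockList != NULL)
        h->pNoncontigFreeBlockList->prevFreeNoncontig = b;
    h->pNoncontigFreeBlockList = b;
}

static void list_remove(Heap *h, Block *b)
{
    assert(on_noncontig_list(h, b));          /* mirrors the pre-unlink assert */
    if (b->prevFreeNoncontig != NULL)
        b->prevFreeNoncontig->nextFreeNoncontig = b->nextFreeNoncontig;
    else
        h->pNoncontigFreeBlockList = b->nextFreeNoncontig;
    if (b->nextFreeNoncontig != NULL)
        b->nextFreeNoncontig->prevFreeNoncontig = b->prevFreeNoncontig;
    b->prevFreeNoncontig = b->nextFreeNoncontig = NULL;  /* post-unlink invariant */
}
```

Clearing both links on removal is what makes the "prev == NULL && next == NULL" post-condition checkable later.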
- Block lookup and ownership: calls to _heapFindBlockByOffset, _heapFindAlignedBlockWithOwner, heapGetBlock_IMPL (ppBlock); assert pMemDesc == pBlock->pMemDesc; hwResource, bTurnBlacklistOff, allocBegin, allocEnd; call to memdescSetHwResId; "NVRM: heapReference: reference count %x will exceed maximum 0x%x:".
- Blacklist around allocations: calls to _heapBlacklistChunks and _heapBlacklistSingleChunk ("NVRM: heapBlacklistSingleChunk, status: %x!"); bBlacklistFailed; "NVRM: Trying to turn blacklisting pages off for this allocation of size: %llx"; call to _heapFreeBlacklistPages.
- Free path: calls to memmgrFreeHwResources_DISPATCH and osInternalReserveFreeCallback; assert pHeap->reserved >= pBlock->end - pBlock->begin + 1; merge state pBlockTmp, bBlocksMerged, blockLo, blockHi.
- PMA handoff: pPmaAllocInfo; assert (memmgrAllocGetAddrSpace(GPU_GET_MEMORY_MANAGER(pGpu), pVidHeapAlloc->flags, pVidHeapAlloc->attr) == ADDR_FBMEM) && (pAllocRequest->pPmaAllocInfo[gpumgrGetSubDeviceInstanceFromGpu(pGpu)] == NULL).
- Bank placement: call to _heapGetBankPlacement ("NVRM: _heapGetBankPlacement failed for current allocation"); currentBankInfo, ignoreBankPlacement; call to _heapSetTexturePlacement; "NVRM: offset 0x%llx not aligned to 0x%llx"; "NVRM: no free blocks"; call to _isAllocValidForFBRegion; "NVRM: failed NVOS32_ALLOC_FLAGS_FIXED_ADDRESS_ALLOCATE @%llx (%lld bytes)"; "NVRM: non-contig vidmem requested".
- FB region priority: call to memmgrAreFbRegionsSupported; assert pMemoryManager->Ram.numFBRegionPriority > 0; bounds and reservation asserts on pMemoryManager->Ram.fbRegionPriority[i] < pMemoryManager->Ram.numFBRegions and !pMemoryManager->Ram.fbRegion[pMemoryManager->Ram.fbRegionPriority[i]].bRsvdRegion, checked in both forward and reverse priority order.
- Allocator fallback: "NVRM: Contig vidmem allocation failed, running noncontig allocator"; "NVRM: cannot alloc memDesc!"; call to _heapAllocNoncontig; pHwResource; "NVRM: failed to allocate block. Heap total=0x%llx free=0x%llx".
- Blacklist chunks in free blocks: call to _heapBlacklistChunksInFreeBlocks; "NVRM: blacklisting chunk from addr: 0x%llx to 0x%llx, new begin :0x%llx, end:0x%llx"; baseChunkAddress, endChunkAddress; "NVRM: removing from blacklist... page start %llx, page end:%llx"; assert pBlacklistChunk != NULL; "NVRM: Error 0x%x creating memdesc for blacklisted chunk for address0x%llx, skipping"; "NVRM: Error 0x%x creating page for blacklisting address: 0x%llx, skipping".
- Region placement checks: call to memmgrLookupFbRegionByOffset_IMPL; rejections "NVRM: Reserved region. Rejecting placement", "NVRM: Compression not supported. Rejecting placement", "NVRM: ISO surface type #%d not supported. Rejecting placement", "NVRM: Protection mismatch. Rejecting placement"; "NVRM: pFbAllocInfo->type != NVOS32_TYPE_RESERVED"; placement state mostRecentIndex, clientFound, mostRecentAllocatedFlag, placementFlags, bankPlacementType, placementStrategy, bankPlacement.
- Heap lifecycle: "NVRM: Heap Manager: HEAP ABOUT TO BE DESTROYED."; headptr_updated, pBlockNext, pBlockFirst, pHeapTypeSpecificData, pPmsaMemDesc; call to memmgrSetPmaInitialized; "NVRM: Heap Manager: HEAP ABOUT TO BE CREATED. (Base: 0x%llx Size: 0x%llx)"; bHasFbRegions, typeDataSize, pBlockTree; asserts pPtr != NULL and pHeap->pHeapTypeSpecificData != NULL; call to memmgrGetBankPlacementData_DISPATCH ("NVRM: Heap Manager unable to get bank placement policy from HAL.", "NVRM: Heap Manager defaulting to BAD placement policy.").
- PMA region setup: call to memmgrRegionSetupForPma_IMPL; fbRegionBase; "NVRM: Reserve at %llx of size %llx"; call to heapReserveRegion ("NVRM: failed to reserve %llx..%llx"); assert (offset < heapSize); allocRequest, allocData; asserts pFbAllocInfo != NULL and pFbAllocPageFormat != NULL; calls to memUtilsInitFBAllocInfo, memmgrAllocResources(pGpu, pMemoryManager, &allocRequest, pFbAllocInfo), and vidmemAllocResources(pGpu, pMemoryManager, &allocRequest, pFbAllocInfo, pHeap); "NVRM: Reserved heap for %s %llx..%llx"; PMA.

src/kernel/gpu/mem_mgr/mem_ctrl.c:

- Asserts: memdescGetAddressSpace(pMemDesc) == ADDR_SYSMEM; pMemory->pHwResource; status checks on subdeviceGetByHandle(pCallContext->pClient, pParams->hSubdevice, &pSubdevice) and memdescMapIommu(pMemDesc, GPU_RES_GET_GPU(pSubdevice)->busInfo.iovaspaceId).
- Control params: pUpdateParams, pPageSizeParams, pCacheFlushParams; call to memmgrUpdateSurfaceCompression_5baef9; "NVRM: Must specify at least one of WRITE_BACK or INVALIDATE"; "NVRM: Memory descriptor not found for hMemory 0x%x, unable to flush!"
- Flush diagnostics: "NVRM: Cannot flush an uncached allocation"; "NVRM: Cannot flush address space 0x%x".
- Phys-attr query: memdescFillMemdescForPhysAttr(pMemory->pMemDesc, AT_GPU, &pGPAP->memOffset, &pGPAP->memAperture, &pGPAP->memFormat, &pGPAP->gpuCacheAttr, &pGPAP->gpuP2PCacheAttr, &pGPAP->contigSegmentSize); _memmgrGetSurfaceComprInfo(pMemory->pMemDesc, &pGPAP->comprOffset, &pGPAP->comprFormat, &unused, &unused); surface info pSurfaceInfoParams, pSurfaceInfos, surfBase, surfLimit; assert (NvU64)data == size; memmgrGetKindComprFromMemDesc(pMemoryManager, pMemDesc, 0, &unused, &comprInfo).

src/kernel/gpu/mem_mgr/mem_desc.c:

- Granularity and PTE checks: asserts (pageArrayGranularity & (pageArrayGranularity - 1)) == 0 (power of two) and pMemDesc->_pteArray[0] == 0; pCpuMappingAttr; aperture logging "NVRM: using coh system memory for %s", "NVRM: using ncoh system memory for %s", "NVRM: using video memory for %s"; assert !pMemDesc->Allocated; pGpa with (gpa >= pKernelMemorySystem->coherentCpuFbBase) && (gpa <= pKernelMemorySystem->coherentCpuFbEnd); MemoryManager.
- Compression consistency: asserts kind == tempKind and tempComprInfo.compPageShift == pComprInfo->compPageShift && tempComprInfo.kind == pComprInfo->kind && tempComprInfo.compPageIndexLo == pComprInfo->compPageIndexLo && tempComprInfo.compPageIndexHi == pComprInfo->compPageIndexHi && tempComprInfo.compTagLineMin == pComprInfo->compTagLineMin && tempComprInfo.compTagLineMultiplier == pComprInfo->compTagLineMultiplier; contiguity and page checks bIsMemContiguous == bTempIsMemContiguous, pageSize == tempPageSize, pageOffset == tempPageOffset.
- IOMMU/IOVA: pIOVAS; calls to iovaspaceReleaseMapping_IMPL, iovaspaceAcquireMapping_IMPL, gpuGetDmaEndAddress_IMPL; "NVRM: 0x%llx-0x%llx is not addressable by GPU 0x%x [0x0-0x%llx]" and the single-address variant "NVRM: 0x%llx is not addressable by GPU 0x%x [0x0-0x%llx]"; ppTmpIommuMap with *ppTmpIommuMap != NULL; asserts pMemDesc->_pIommuMappings == pIommuMap and (pMemDesc->_pIommuMappings == NULL) || (!memdescIsSubMemoryMemDesc(pMemDesc)).
- Flags and attributes: _pageSize; assert pMemDesc->_addressSpace == ADDR_FBMEM; _pMemDataReleaseCallback, _address; assert flag != MEMDESC_FLAGS_PHYSICALLY_CONTIGUOUS; _memdescSetSubAllocatorFlag(pMemDesc->pGpu, pMemDesc, bValue) and _memdescSetGuestAllocatedFlag(pMemDesc->pGpu, pMemDesc, bValue); _guestId, _pMemDestroyCallbackList, _pStandbyBuffer, _kernelMapping/_kernelMappingPriv (kernelMapping/kernelMappingPriv); _pteKind/_pteKindCompressed via memmgrGetSwPteKindFromHwPteKind_DISPATCH and memmgrGetHwPteKindFromSwPteKind_DISPATCH; _cpuCacheAttrib, _gpuCacheAttrib, _gpuP2PCacheAttrib (settable only while pMemDesc->Allocated == NV_FALSE).
- Size recompute: memdescCalculateActualSize(pMemDesc, pMemDesc->Size, &tmpSize) == NV_OK and memdescSetAllocSizeFields(pMemDesc, NV_ALIGN_UP64(tmpSize, pMemDesc->pageArrayGranularity), pMemDesc->pageArrayGranularity) == NV_OK.
- Callbacks and SR-IOV: pRemoveCb, pCb, call to memdescSetDestroyCallbackList; pMemDescOne, pMemDescTwo; calls to _memIsSriovMappingsEnabled and _memdescUpdateSpaArray (pIovaMap, pPteSpaMappings; asserts PteIndex == 0, PteIndex < pMemDesc->pageArraySize, pageIndex < pMemDesc->PageCount; allocCnt); call to _memdescFillGpaEntriesForSpaTranslation (gpaEntries, spaEntries, pGpaEntries; "NVRM: Getting SPA for GPA failed: GFID=%u, GPA=0x%llx").
- Sub-range checks: asserts Offset < pMemDesc->Size and portSafeAddU64(Offset, Size, &tmpEnd) && tmpEnd <= pMemDesc->Size; submemdesc state pMemDescNew, bUsingSuballocator, _pParentDescriptor, subMemOffset, OffsetAdjust, PteAdjust; calls to memdescFillPages and _memdescAllocEgmArray(pMemDescNew); _subDeviceAllocCount, pNew, _pNext; assert pMemDesc->_pParentDescriptor == NULL && !pMemDesc->bRmExclusiveUse && pMemDesc->DupCount == 1.
- Page fill: assert pageSize > 0; call to _memdescFillPagesAtNativeGranularity; asserts offset4k < pMemDesc->pageArraySize and portSafeAddU32(offset4k, pageCount4k, &result4k); bClippedMemFill, pageCount4k, limit4k, totalFilledPageCount; assert 0 == (pageSize & (RM_PAGE_SIZE - 1)); asserts pageIndex < pMemDesc->pageArraySize, portSafeAddU32(pageIndex, pageCount, &fillLimit), fillLimit <= pMemDesc->pageArraySize; call to memdescSetPageArrayGranularity, which requires (pMemDesc->RefCount == 1) && (memdescGetDestroyCallbackList(pMemDesc) == NULL) && (pMemDesc->PteAdjust == 0), pMemDesc->_pIommuMappings == NULL, and pMemDesc->Allocated == 0.
- DMA window and internal mapping: "NVRM: unable to check Base 0x%016llx for DMA window"; assert NV_FLOOR_TO_QUANTA(Base, pMemDesc->Alignment) == Base; internal mapping state _pInternalMapping, _pInternalMappingPriv, _internalMappingRefCount with assert pMemDesc->_pInternalMapping != NULL && pMemDesc->_internalMappingRefCount != 0; call to memdescGetMapInternalType (mapType); call to memdescFlushGpuCaches.
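The page-fill assertions above (offset within the PTE array, portSafeAddU32 on the end index, and the end staying at or below pageArraySize) combine into one bounds predicate. A hedged sketch of that check; fill_range_ok is an illustrative name, not the driver's:

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate a fill of `pageCount` entries starting at `pageIndex` against a
 * PTE array of `pageArraySize` entries, rejecting 32-bit wraparound the way
 * a portSafeAddU32 guard would. */
static bool fill_range_ok(uint32_t pageIndex, uint32_t pageCount,
                          uint32_t pageArraySize)
{
    uint32_t fillLimit;
    if (pageIndex >= pageArraySize)
        return false;                   /* start index out of range */
    if (pageCount > UINT32_MAX - pageIndex)
        return false;                   /* the safe-add would fail */
    fillLimit = pageIndex + pageCount;
    return fillLimit <= pageArraySize;  /* end stays within the array */
}
```

Checking the addition before performing it is the point: `pageIndex + pageCount` alone would silently wrap and pass the final comparison.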
- BAR2 and CPU mapping: calls to kbusValidateBar2ApertureMapping_DISPATCH, kbusMapBar2Aperture_DISPATCH, osFlushCpuCache, kbusUseDirectSysmemMap_DISPATCH; assert pMemDesc->_pInternalMapping != NULL; outputs Address and Priv (pAddressP64, pPrivP64); asserts !(pMemDesc->_flags & MEMDESC_FLAGS_CPU_ONLY) and ((Offset + Size) <= memdescGetSize(pMemDesc)); bar1PhysAddr; "NVRM: Allocating coherent link mapping. VA: %p PA: 0x%llx size: 0x%llx"; calls to osLockMem and osUnlockMem; assert !pMemDesc->_pInternalMapping.
- Free path: calls to _memdescFreeIommuMappings, _memSubDeviceFreeAndDestroy, and _memdescFreeInternal (bDeferredFree, oldSize; trace "memdesc being freed"); calls to osFreePagesInternal and ctxBufPoolFree ("NVRM: ctx buf pool not found", "NVRM: Failed to free memdesc from context buffer pool"); calls to memmgrFree_IMPL and heapRemoveRef_IMPL.
- 0FB handling: "NVRM: WARNING FB alloc on ZERO_FB config moved to sysmem"; "NVRM: Unsupported FB bound allocation on broken FB(0FB) platform"; asserts pMemDesc->pHeap == NULL, pCtxBufPool != NULL, and !((pMemDesc->_flags & MEMDESC_FLAGS_ALLOC_PER_SUBDEVICE) && !gpumgrGetBcEnabledStatus(pGpu)); reAcquire.
- Alloc path: _memdescAllocInternal(pMemDesc); calls to _memdescAllocVprRegion, osAllocPagesInternal, and _memdescAllocEgmArray(pMemDesc); Allocated; "NVRM: SMMU mapping allocation is not supported for ARMv7."; ctxBufPoolAllocate(pCtxBufPool, pMemDesc); "NVRM: Non-CPR region still not created"; call to heapAddRef_IMPL; trace "memdesc allocated"; pPteEgmMappings with assert pMemDesc->pPteEgmMappings != NULL.
- Destroy: pNextMemDesc; asserts pMemDesc->childDescriptorCnt == 0 and pMemDesc->_addressSpace == ADDR_FBMEM || pMemDesc->pHeap == NULL; "NVRM: Destroying unfreed memory %p" followed by "NVRM: Please call memdescFree()"; pSubMemDescList; assert memdescHasSubDeviceMemDescs(pMemDesc) == NV_FALSE; call to iovaMappingDestroy (pTmpIovaMapping).
- Size and flag setters: call to _memdescCalculateAllocSize; MdSize with assert MdSize <= 0xffffffffULL; RefCount, DupCount; call to memdescSetCpuCacheAttrib; _memdescSetSubAllocatorFlag(pGpu, pMemDesc, NV_TRUE) and _memdescSetGuestAllocatedFlag(pGpu, pMemDesc, NV_TRUE); allocSizeOutput; "NVRM: Unsetting MEMDESC_FLAGS_GUEST_ALLOCATED not supported"; vgpuGetCallingContextGfid(pGpu, &pMemDesc->gfid); "NVRM: Unsetting MEMDESC_FLAGS_OWNED_BY_CURRENT_DEVICE not supported"; asserts !(pMemDesc->_flags & MEMDESC_FLAGS_OWNED_BY_CTX_BUF_POOL) and pHeap == NULL || pHeap->heapType == HEAP_TYPE_PHYS_MEM_SUBALLOCATOR; addrlist.
small*src/kernel/gpu/mem_mgr/mem_mapper.c**Queue size too small**src/kernel/gpu/mem_mgr/mem_mapper.c*pNewOperationQueue*pNewOperationQueue != NULL**pNewOperationQueue != NULL*pOperationQueue*operationQueueGet*newQueuePut < pParams->maxQueueSize**newQueuePut < pParams->maxQueueSize**pOperationQueue*operationQueueLen*operationQueuePut**pOperations*pCallContext->secInfo.privLevel == pMemoryMapper->secInfo.privLevel**pCallContext->secInfo.privLevel == pMemoryMapper->secInfo.privLevel*pOperataionsParams*!pMemoryMapper->bError**!pMemoryMapper->bError*pOperationParams*call to semsurfValidateIndex_IMPL*pOperation*semsurfValidateIndex(pMemoryMapper->pSemSurf, pOperation->data.semaphore.index)**semsurfValidateIndex(pMemoryMapper->pSemSurf, pOperation->data.semaphore.index)*call to memmapperSubmitSemaphoreWait*bQueueWorker*call to memmapperSetError*operationsProcessedCount*call to memmapperQueueWork_IMPL*pSemaphoreWait*status == NV_OK || status == NV_ERR_ALREADY_SIGNALLED**status == NV_OK || status == NV_ERR_ALREADY_SIGNALLED*NVRM: Destructor is called after worker freed the params **NVRM: Destructor is called after worker freed the params **pMemoryMapper*NVRM: Destructor is called , reference count = 0x%x **NVRM: Destructor is called , reference count = 0x%x **pWorkerParams*pMemoryMapper->pSubdevice != NULL**pMemoryMapper->pSubdevice != NULL*pAllocParams->maxQueueSize != 0**pAllocParams->maxQueueSize != 0*pMemoryMapper->pOperationQueue != NULL**pMemoryMapper->pOperationQueue != NULL*clientGetResourceRef(pCallContext->pClient, pAllocParams->hNotificationMemory, &pNotificationMemoryRef)**clientGetResourceRef(pCallContext->pClient, pAllocParams->hNotificationMemory, &pNotificationMemoryRef)*pNotificationMemoryRef**pNotificationMemory*pMemoryMapper->pNotificationMemory != NULL**pMemoryMapper->pNotificationMemory != NULL*notificationSurface*pNotification**pNotification*pMemoryMapper->pNotification != NULL**pMemoryMapper->pNotification != NULL*call to 
memmgrGetInternalClientHandles_IMPL*memmgrGetInternalClientHandles(pGpu, pMemoryManager, GPU_RES_GET_DEVICE(pMemoryMapper), &pMemoryMapper->hInternalClient, &pMemoryMapper->hInternalDevice, &pMemoryMapper->hInternalSubdevice)**memmgrGetInternalClientHandles(pGpu, pMemoryManager, GPU_RES_GET_DEVICE(pMemoryMapper), &pMemoryMapper->hInternalClient, &pMemoryMapper->hInternalDevice, &pMemoryMapper->hInternalSubdevice)*pRmApi->DupObject(pRmApi, pMemoryMapper->hInternalClient, pMemoryMapper->hInternalSubdevice, &pMemoryMapper->hInternalSemaphoreSurface, RES_GET_CLIENT_HANDLE(pMemoryMapper), pAllocParams->hSemaphoreSurface, NV04_DUP_HANDLE_FLAGS_NONE)**pRmApi->DupObject(pRmApi, pMemoryMapper->hInternalClient, pMemoryMapper->hInternalSubdevice, &pMemoryMapper->hInternalSemaphoreSurface, RES_GET_CLIENT_HANDLE(pMemoryMapper), pAllocParams->hSemaphoreSurface, NV04_DUP_HANDLE_FLAGS_NONE)*pInternalClient != NULL**pInternalClient != NULL*pInternalRsClient*clientGetResourceRef(pInternalRsClient, pMemoryMapper->hInternalSemaphoreSurface, &pSemSurfRef)**clientGetResourceRef(pInternalRsClient, pMemoryMapper->hInternalSemaphoreSurface, &pSemSurfRef)*pSemSurfRef**pSemSurf*pMemoryMapper->pSemSurf != NULL**pMemoryMapper->pSemSurf != NULL*numRefs*pMemoryMapper->pWorkerParams != NULL**pMemoryMapper->pWorkerParams != NULL*semaphoreCallback*refAddDependant(pNotificationMemoryRef, RES_GET_REF(pMemoryMapper))**refAddDependant(pNotificationMemoryRef, RES_GET_REF(pMemoryMapper))*osQueueWorkItem(GPU_RES_GET_GPU(pMemoryMapper), memoryMapperWorker, pMemoryMapper->pWorkerParams, (OsQueueWorkItemFlags){ .bDontFreeParams = NV_TRUE, .bDropOnUnloadQueueFlush = NV_TRUE, .bFallbackToDpc = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE})**osQueueWorkItem(GPU_RES_GET_GPU(pMemoryMapper), memoryMapperWorker, pMemoryMapper->pWorkerParams, (OsQueueWorkItemFlags){ .bDontFreeParams = NV_TRUE, .bDropOnUnloadQueueFlush = NV_TRUE, .bFallbackToDpc = NV_TRUE, .apiLock = 
WORKITEM_FLAGS_API_LOCK_READ_WRITE})*NVRM: worker is wrongly called after all work was finished **NVRM: worker is wrongly called after all work was finished *call to memmapperProcessWork*NVRM: Worker is called , with reference count = 0x%x and pMemoryMapper not null **NVRM: Worker is called , with reference count = 0x%x and pMemoryMapper not null *NVRM: worker is freeing param **NVRM: worker is freeing param *NVRM: processing MemoryMapper work **NVRM: processing MemoryMapper work *NVRM: return from MemoryMapper worker (error) **NVRM: return from MemoryMapper worker (error) *call to memmapperCheckGpuFullPower*memmapperCheckGpuFullPower(GPU_RES_GET_GPU(pMemoryMapper)) == NV_OK**memmapperCheckGpuFullPower(GPU_RES_GET_GPU(pMemoryMapper)) == NV_OK*call to memmapperCheckGpuFullPowerForMemory*memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pSemSurf->pShared->pSemaphoreMem) == NV_OK**memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pSemSurf->pShared->pSemaphoreMem) == NV_OK*memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pSemSurf->pShared->pMaxSubmittedMem) == NV_OK**memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pSemSurf->pShared->pMaxSubmittedMem) == NV_OK*memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pNotificationMemory) == NV_OK**memmapperCheckGpuFullPowerForMemory(pMemoryMapper->pNotificationMemory) == NV_OK*bMapExecuted*call to memmapperExecuteMap*memmapperExecuteMap(pMemoryMapper, &pOperation->data.map, pRmApi)**memmapperExecuteMap(pMemoryMapper, &pOperation->data.map, pRmApi)*call to memmapperExecuteUnmap*memmapperExecuteUnmap(pMemoryMapper, &pOperation->data.unmap, pRmApi)**memmapperExecuteUnmap(pMemoryMapper, &pOperation->data.unmap, pRmApi)*call to memmapperExecuteSemaphoreWait*(status == NV_OK) || (status == NV_ERR_BUSY_RETRY)**(status == NV_OK) || (status == NV_ERR_BUSY_RETRY)*call to memmapperExecuteSemaphoreSignal*NVRM: return from MemoryMapper worker **NVRM: return from MemoryMapper worker *errorStatus != NV_OK**errorStatus != 
NV_OK*bError*pSignal*NVRM: signal index:0x%x val:0x%llx **NVRM: signal index:0x%x val:0x%llx *pRmApi->Control(pRmApi, pMemoryMapper->hInternalClient, pMemoryMapper->hInternalSemaphoreSurface, NV_SEMAPHORE_SURFACE_CTRL_CMD_SET_VALUE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pMemoryMapper->hInternalClient, pMemoryMapper->hInternalSemaphoreSurface, NV_SEMAPHORE_SURFACE_CTRL_CMD_SET_VALUE, ¶ms, sizeof(params))*call to semsurfGetValue_IMPL*pWait*NVRM: wait index:0x%x val:0x%llx cur_var:0x%llx **NVRM: wait index:0x%x val:0x%llx cur_var:0x%llx *pUnmap*clientGetResourceRef(pClient, pUnmap->hVirtualMemory, &pVirtualResourceRef)**clientGetResourceRef(pClient, pUnmap->hVirtualMemory, &pVirtualResourceRef)*pVirtualResourceRef*staticCast(pVirtualMemory, Memory)->pMemDesc->pGpu == GPU_RES_GET_GPU(pMemoryMapper)**staticCast(pVirtualMemory, Memory)->pMemDesc->pGpu == GPU_RES_GET_GPU(pMemoryMapper)*NVRM: unmap virt:(0x%x:0x%llx) size:0x%llx dmaFlags:0x%x **NVRM: unmap virt:(0x%x:0x%llx) size:0x%llx dmaFlags:0x%x *unmapDmaParams*pRmApi->UnmapWithSecInfo(pRmApi, &unmapDmaParams, &pMemoryMapper->secInfo)**pRmApi->UnmapWithSecInfo(pRmApi, &unmapDmaParams, &pMemoryMapper->secInfo)*clientGetResourceRef(pClient, pMap->hVirtualMemory, &pVirtualResourceRef)**clientGetResourceRef(pClient, pMap->hVirtualMemory, &pVirtualResourceRef)*clientGetResourceRef(pClient, pMap->hPhysicalMemory, &pResourceRef)**clientGetResourceRef(pClient, pMap->hPhysicalMemory, &pResourceRef)*pMemory != NULL**pMemory != NULL*memmapperCheckGpuFullPowerForMemory(pMemory)**memmapperCheckGpuFullPowerForMemory(pMemory)*NVRM: map virt:(0x%x:0x%llx) phys:(0x%x:0x%llx) size:0x%llx dmaFlags:0x%x **NVRM: map virt:(0x%x:0x%llx) phys:(0x%x:0x%llx) size:0x%llx dmaFlags:0x%x *kindOverride*pRmApi->MapWithSecInfo(pRmApi, &mapDmaParams, &pMemoryMapper->secInfo)**pRmApi->MapWithSecInfo(pRmApi, &mapDmaParams, &pMemoryMapper->secInfo)*!pGpu->getProperty(pGpu, PDB_PROP_GPU_IN_PM_CODEPATH)**!pGpu->getProperty(pGpu, 
PDB_PROP_GPU_IN_PM_CODEPATH)*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pExternalDevice, &pMigInstanceRef)*src/kernel/gpu/mem_mgr/mem_mgr.c**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pExternalDevice, &pMigInstanceRef)**src/kernel/gpu/mem_mgr/mem_mgr.c*pMigInstanceRef*pMemoryManager->pCeUtils == NULL**pMemoryManager->pCeUtils == NULL*call to rmcfg_IsTURING_CLASSIC_GPUSorBetter*objCreate(&pMemoryManager->pCeUtils, pMemoryManager, CeUtils, ENG_GET_GPU(pMemoryManager), pKernelMIGGPUInstance, &ceUtilsParams)**objCreate(&pMemoryManager->pCeUtils, pMemoryManager, CeUtils, ENG_GET_GPU(pMemoryManager), pKernelMIGGPUInstance, &ceUtilsParams)*call to memmgrTestCeUtils*call to memmgrIsLocalEgmSupported*call to osGetEgmInfo*localEgmBasePhysAddr*localEgmSize*localEgmNodeId*localEgmPeerId*localEgmOverride*egmPeerId*NVRM: SBIOS allocated EGM address is incorrect. **NVRM: SBIOS allocated EGM address is incorrect. *NVRM: HSHUB programming failed for EGM Peer ID: %u. status: %d **NVRM: HSHUB programming failed for EGM Peer ID: %u. status: %d *NVRM: Peer ID specified for local EGM already in use! **NVRM: Peer ID specified for local EGM already in use! 
*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMMGR_GET_VGPU_CONFIG_HOST_RESERVED_FB, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMMGR_GET_VGPU_CONFIG_HOST_RESERVED_FB, ¶ms, sizeof(params))*ppMemdesc != NULL**ppMemdesc != NULL*memdescCreate(ppMemdesc, pGpu, allocSize, RM_PAGE_SIZE, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, memdescFlags)**memdescCreate(ppMemdesc, pGpu, allocSize, RM_PAGE_SIZE, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, memdescFlags)*call to memdescSetHeapOffset*NVRM: Cannot allocate the memory with range allocation **NVRM: Cannot allocate the memory with range allocation *call to heapInfo_IMPL*heapInfo(pHeap, &freeMem, &bytesTotal, &base, &offset, &size)**heapInfo(pHeap, &freeMem, &bytesTotal, &base, &offset, &size)*call to _memmgrGetFullMIGAddrRange*pbTopLevelScrubberEnabled*pbTopLevelScrubberConstructed*pKernelMIGGPUInstance->pMemoryPartitionHeap != NULL**pKernelMIGGPUInstance->pMemoryPartitionHeap != NULL*call to pmaGetTotalMemory*call to heapGetSize_IMPL*call to pmaGetFreeMemory*call to heapGetFree_IMPL*MIGMemoryPartitioningInfo*pMemoryManager->MIGMemoryPartitioningInfo.hClient == NV01_NULL_OBJECT**pMemoryManager->MIGMemoryPartitioningInfo.hClient == NV01_NULL_OBJECT*rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, &pMemoryManager->MIGMemoryPartitioningInfo.hClient, &pMemoryManager->MIGMemoryPartitioningInfo.hDevice, &pMemoryManager->MIGMemoryPartitioningInfo.hSubdevice)**rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, &pMemoryManager->MIGMemoryPartitioningInfo.hClient, &pMemoryManager->MIGMemoryPartitioningInfo.hDevice, &pMemoryManager->MIGMemoryPartitioningInfo.hSubdevice)*call to pmaGetClientAddrSpaceSize*call to heapGetClientAddrSpaceSize_IMPL*blackListCount*pBlacklistPages**pBlacklistPages*NVRM: PMA: Register FB region[%d] %llx..%llx EXTERNAL **NVRM: PMA: Register FB region[%d] %llx..%llx EXTERNAL 
*pmaRegion*blRegionCount*bIsDynamic*blPageIndex*NVRM: Register FB region %llx..%llx of size %llx with PMA **NVRM: Register FB region %llx..%llx of size %llx with PMA *call to pmaRegisterRegion*call to memmgrEccScrubInProgress_3dd2c9*NVRM: failed to register FB region %llx..%llx with PMA **NVRM: failed to register FB region %llx..%llx with PMA *call to memmgrScrubInternalRegions_b3696a*allocOptions*call to pmaAllocatePages*pMemoryManager->Ram.numFBRegions == 0**pMemoryManager->Ram.numFBRegions == 0*mapRamSizeMb*NVRM: Bug 594534: HACK: Report 32MB of framebuffer instead of reading registers. **NVRM: Bug 594534: HACK: Report 32MB of framebuffer instead of reading registers. *call to memmgrInitZeroFbRegionsHal_DISPATCH*NVRM: Failed to setup carveout. Carevout functionality is disabled **NVRM: Failed to setup carveout. Carevout functionality is disabled *call to memmgrInitBaseFbRegions_DISPATCH*memmgrInitBaseFbRegions_HAL(pGpu, pMemoryManager)**memmgrInitBaseFbRegions_HAL(pGpu, pMemoryManager)*call to memmgrReserveVbiosVgaRegions_56cd7a*memmgrReserveVbiosVgaRegions_HAL(pGpu, pMemoryManager)**memmgrReserveVbiosVgaRegions_HAL(pGpu, pMemoryManager)*call to memmgrInitFbRegionsHal_56cd7a*memmgrInitFbRegionsHal_HAL(pGpu, pMemoryManager)**memmgrInitFbRegionsHal_HAL(pGpu, pMemoryManager)*call to memmgrRegenerateFbRegionPriority_IMPL*call to memmgrSetPlatformPmaSupport_IMPL*memmgrSetPlatformPmaSupport(pGpu, pMemoryManager)**memmgrSetPlatformPmaSupport(pGpu, pMemoryManager)*memmgrIsPmaEnabled(pMemoryManager) && memmgrIsPmaSupportedOnPlatform(pMemoryManager)**memmgrIsPmaEnabled(pMemoryManager) && memmgrIsPmaSupportedOnPlatform(pMemoryManager)*call to memmgrIsPmaForcePersistence*NVRM: Initializing PMA with NUMA flag. **NVRM: Initializing PMA with NUMA flag. *NVRM: Initializing PMA with NUMA_AUTO_ONLINE flag. **NVRM: Initializing PMA with NUMA_AUTO_ONLINE flag. *call to pmaInitialize*ppPma**ppPma*NVRM: Failed to initialize PMA! **NVRM: Failed to initialize PMA! 
*call to pmaRegisterUpdateStatsCb*RmNumaAllocSkipReclaimPercent**RmNumaAllocSkipReclaimPercent*numaSkipReclaimVal*call to pmaNumaSetReclaimSkipThreshold*memPartitionNumaInfo*call to pmaNumaOnlined*pmaNumaOnlined(pPma, pGpu->numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)**pmaNumaOnlined(pPma, pGpu->numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)*memmgrIsPmaInitialized(pMemoryManager)**memmgrIsPmaInitialized(pMemoryManager)*freeSize*pBar1Info**pBar1Info*ppMemPoolInfo**ppMemPoolInfo*ppMemPoolInfo != NULL**ppMemPoolInfo != NULL*pMemPool**pMemPool*pPageTableMemPool*pPageLevelReserve*pMemPool != NULL**pMemPool != NULL*call to rmMemPoolDestroy**pPageLevelReserve*call to rmMemPoolSetup**pPmaObject*call to rmMemPoolAllocateProtectedMemory**pComprInfo*call to memmgrFillComprInfoUncompressed_IMPL*pKernelMIGManager != NULL**pKernelMIGManager != NULL*call to kmemsysNumaRemoveMemory_DISPATCH*call to kmigmgrGetSwizzIdInUseMask_IMPL*call to kmemsysNumaAddMemory_DISPATCH*kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, 0, pKernelMemorySystem->numaOnlineBase, pKernelMemorySystem->numaOnlineSize, &numaNodeId)**kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, 0, pKernelMemorySystem->numaOnlineBase, pKernelMemorySystem->numaOnlineSize, &numaNodeId)*numaNodeId == pGpu->numaNodeId**numaNodeId == pGpu->numaNodeId*pmaNumaOnlined(GPU_GET_HEAP(pGpu)->pPmaObject, pGpu->numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)**pmaNumaOnlined(GPU_GET_HEAP(pGpu)->pPmaObject, pGpu->numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)*objCreate(ppMemoryPartitionHeap, pMemoryManager, Heap)**objCreate(ppMemoryPartitionHeap, pMemoryManager, Heap)**pMemoryPartitionHeap*call to memmgrPmaInitialize_IMPL*memmgrPmaInitialize(pGpu, pMemoryManager, &(pMemoryPartitionHeap->pPmaObject))**memmgrPmaInitialize(pGpu, pMemoryManager, 
&(pMemoryPartitionHeap->pPmaObject))*pKernelMemorySystem->memPartitionNumaInfo[swizzId].bInUse**pKernelMemorySystem->memPartitionNumaInfo[swizzId].bInUse*partitionBaseAddr*partitionSize*pmaNumaOnlined(pMemoryPartitionHeap->pPmaObject, pKernelMemorySystem->memPartitionNumaInfo[swizzId].numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)**pmaNumaOnlined(pMemoryPartitionHeap->pPmaObject, pKernelMemorySystem->memPartitionNumaInfo[swizzId].numaNodeId, pKernelMemorySystem->coherentCpuFbBase, pKernelMemorySystem->numaOnlineSize)*call to heapInit_IMPL*heapInit(pGpu, pMemoryPartitionHeap, partitionBaseAddr, partitionSize, HEAP_TYPE_PARTITION_LOCAL, GPU_GFID_PF, NULL)**heapInit(pGpu, pMemoryPartitionHeap, partitionBaseAddr, partitionSize, HEAP_TYPE_PARTITION_LOCAL, GPU_GFID_PF, NULL)*call to memmgrPmaRegisterRegions_IMPL*memmgrPmaRegisterRegions(pGpu, pMemoryManager, pMemoryPartitionHeap, pMemoryPartitionHeap->pPmaObject)**memmgrPmaRegisterRegions(pGpu, pMemoryManager, pMemoryPartitionHeap, pMemoryPartitionHeap->pPmaObject)*call to kmemsysGetMIGGPUInstanceMemInfo_IMPL*kmemsysGetMIGGPUInstanceMemInfo(pGpu, pKernelMemorySystem, swizzId, pAddrRange)**kmemsysGetMIGGPUInstanceMemInfo(pGpu, pKernelMemorySystem, swizzId, pAddrRange)*call to pmaNumaOfflined*kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, swizzId, partitionBaseAddr, partitionSize, &numaNodeId)**kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, swizzId, partitionBaseAddr, partitionSize, &numaNodeId)*NVRM: failed to grab RM-Lock **NVRM: failed to grab RM-Lock *NVRM: Unable to allocate physical memory for GPU instance. **NVRM: Unable to allocate physical memory for GPU instance. *call to _memmgrInitMIGMemoryPartitionHeap*NVRM: Unable to initialize memory partition heap **NVRM: Unable to initialize memory partition heap *NVRM: Allocated memory partition heap for swizzId - %d with StartAddr - 0x%llx, endAddr - 0x%llx. 
**NVRM: Allocated memory partition heap for swizzId - %d with StartAddr - 0x%llx, endAddr - 0x%llx. *partitionableBar1Start*partitionableBar1End*partitionableBar1Start >= vaspaceGetVaStart(pBar1VAS)**partitionableBar1Start >= vaspaceGetVaStart(pBar1VAS)*partitionableBar1End <= vaspaceGetVaLimit(pBar1VAS)**partitionableBar1End <= vaspaceGetVaLimit(pBar1VAS)*partitionableBar1Range*partitionableMemoryRange*call to memmgrGetKindComprForGpu_KERNEL*call to memdescGetHwResId*pMappingMemSysConfig*bPhysBasedComptags*compTagStartOffset != ~(NvU32)0**compTagStartOffset != ~(NvU32)0*bottomRegionIdx*NVRM: More than two discontigous rsvd regions found. Rsvd region base - 0x%llx, Rsvd region Size - 0x%llx **NVRM: More than two discontigous rsvd regions found. Rsvd region base - 0x%llx, Rsvd region Size - 0x%llx *topRegionIdx*call to pmaGetRegionInfo*pmaGetRegionInfo(pHeap->pPmaObject, &numPmaRegions, &pFirstPmaRegionDesc)**pmaGetRegionInfo(pHeap->pPmaObject, &numPmaRegions, &pFirstPmaRegionDesc)*pFirstPmaRegionDesc*NVRM: No partitionable memory. MIG memory partitioning can't be enabled. **NVRM: No partitionable memory. MIG memory partitioning can't be enabled. 
*NVRM: Partitionable memory start - 0x%llx not aligned with RM reserved region base-end - 0x%llx **NVRM: Partitionable memory start - 0x%llx not aligned with RM reserved region base-end - 0x%llx *partitionableMemSize*bottomRsvdSize*topRsvdSize*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_SET_PARTITIONABLE_MEM, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_SET_PARTITIONABLE_MEM, ¶ms, sizeof(params))*!rangeIsEmpty(pMemoryManager->MIGMemoryPartitioningInfo.partitionableMemoryRange)**!rangeIsEmpty(pMemoryManager->MIGMemoryPartitioningInfo.partitionableMemoryRange)*call to memmgrSetMIGPartitionableBAR1Range_IMPL*memmgrSetMIGPartitionableBAR1Range(pGpu, pMemoryManager)**memmgrSetMIGPartitionableBAR1Range(pGpu, pMemoryManager)*call to kmemsysReadMIGMemoryCfg_DISPATCH*pMemdescOwnerGpu**pMemdescOwnerGpu*call to vidmemPmaFree*NVRM: Freeing PMA allocation **NVRM: Freeing PMA allocation *call to heapFree_IMPL*bAllocProtected*call to memmgrCalcReservedFbSpaceHal_DISPATCH*idxFastRegion*bFastAssigned*idxSlowRegion*bSlowAssigned*idxISORegion*bIsoAssigned*bFastAssigned && bSlowAssigned && bIsoAssigned**bFastAssigned && bSlowAssigned && 
bIsoAssigned*pMemoryManager->Ram.fbRegion[idxISORegion].bSupportISO**pMemoryManager->Ram.fbRegion[idxISORegion].bSupportISO*!pMemoryManager->Ram.fbRegion[idxISORegion].bRsvdRegion**!pMemoryManager->Ram.fbRegion[idxISORegion].bRsvdRegion*!pMemoryManager->Ram.fbRegion[idxFastRegion].bRsvdRegion**!pMemoryManager->Ram.fbRegion[idxFastRegion].bRsvdRegion*!pMemoryManager->Ram.fbRegion[idxSlowRegion].bRsvdRegion**!pMemoryManager->Ram.fbRegion[idxSlowRegion].bRsvdRegion*!pMemoryManager->Ram.fbRegion[idxISORegion].bProtected**!pMemoryManager->Ram.fbRegion[idxISORegion].bProtected*!pMemoryManager->Ram.fbRegion[idxFastRegion].bProtected**!pMemoryManager->Ram.fbRegion[idxFastRegion].bProtected*!pMemoryManager->Ram.fbRegion[idxSlowRegion].bProtected**!pMemoryManager->Ram.fbRegion[idxSlowRegion].bProtected*NVRM: Reserve space for bar2 Page dirs offset = 0x%llx size = 0x%x **NVRM: Reserve space for bar2 Page dirs offset = 0x%llx size = 0x%x *NVRM: Reserve space for bar2 Page tables offset = 0x%llx size = 0x%x **NVRM: Reserve space for bar2 Page tables offset = 0x%llx size = 0x%x **pReservedConsoleMemDesc*bPmaSupportedOnPlatform*call to memmgrSetClientPageTablesPmaManaged*call to memmgrLargePageSupported_IMPL*bIsBigPageSupported*NVRM: Big/Huge/512MB/256GB page size not supported in sysmem. **NVRM: Big/Huge/512MB/256GB page size not supported in sysmem. 
*call to _memmgrPickDefaultSysmemPageSize*call to _memmgrPickDefaultGpuPageSize*call to osGetSupportedSysmemPageSizeMask*zcbitmap*isSupported*NVRM: isSupported=%s **NVRM: isSupported=%s *pTempInfo**pTempInfo*call to vgpuIsGuestManagedHwAlloc*pMemoryManagerLoop**pMemoryManagerLoop*call to memmgrFreeHal_DISPATCH*call to memmgrAllocHal_DISPATCH*NVRM: PMA usage is non-zero, freeMem = 0x%llx bytes totalMem = 0x%llx bytes **NVRM: PMA usage is non-zero, freeMem = 0x%llx bytes totalMem = 0x%llx bytes *NVRM: Failed to get free heap size of GSP-RM **NVRM: Failed to get free heap size of GSP-RM *serverutilGenResourceHandle(pMemoryManager->hClient, &hThirdPartyP2P)**serverutilGenResourceHandle(pMemoryManager->hClient, &hThirdPartyP2P)*NVRM: Error creating internal ThirdPartyP2P object: %x **NVRM: Error creating internal ThirdPartyP2P object: %x *call to _memmgrFreeInternalClientObjects*call to fbsrObjectInit_IMPL*call to memmgrHandleSizeOverrides_DISPATCH*newHeap*call to memmgrValidateFBEndReservation_56cd7a*memmgrValidateFBEndReservation_HAL(pGpu, pMemoryManager)**memmgrValidateFBEndReservation_HAL(pGpu, pMemoryManager)*call to memmgrReserveMemoryForFakeWPR_56cd7a*memmgrReserveMemoryForFakeWPR_HAL(pGpu, pMemoryManager)**memmgrReserveMemoryForFakeWPR_HAL(pGpu, pMemoryManager)*call to memmgrReserveMemoryForPmu_56cd7a*memmgrReserveMemoryForPmu_HAL(pGpu, pMemoryManager)**memmgrReserveMemoryForPmu_HAL(pGpu, pMemoryManager)*call to memmgrReserveMemoryForFsp_IMPL*NVRM: Failed to reserve vidmem for WPR and FRTS. **NVRM: Failed to reserve vidmem for WPR and FRTS. 
*call to kmemsysPostHeapCreate_KERNEL*kmemsysPostHeapCreate_HAL(pGpu, pKernelMemorySystem)**kmemsysPostHeapCreate_HAL(pGpu, pKernelMemorySystem)*call to _memmgrCreateFBSR*call to gpuDestroyRusdMemory_IMPL*call to memmgrPageLevelPoolsDestroy_IMPL*call to kmemsysPreHeapDestruct_b3696a*call to memmgrReleaseConsoleRegion_IMPL*fbsrReservedRanges**fbsrReservedRanges***fbsrReservedRanges*call to fbsrDestroy_DISPATCH*call to memmgrScrubDestroy_b3696a*call to _memmgrIsZbcSurfaceReferenced*(flags & GPU_STATE_FLAGS_PRESERVING) || !_memmgrIsZbcSurfaceReferenced(pGpu, pMemoryManager)**(flags & GPU_STATE_FLAGS_PRESERVING) || !_memmgrIsZbcSurfaceReferenced(pGpu, pMemoryManager)*call to memmgrFinishHandleSizeOverrides_DISPATCH*call to memmgrScrubInit_56cd7a*call to memmgrDumpFbRegions_IMPL*pTestBuffer**pTestBuffer*pTestBuffer != NULL**pTestBuffer != NULL*NVRM: #################################################### **NVRM: #################################################### *NVRM: Read back of data using GSP shows mismatch **NVRM: Read back of data using GSP shows mismatch *NVRM: Test data: 0x%x Read Data: 0x%x **NVRM: Test data: 0x%x Read Data: 0x%x *NVRM: Read back of data using GSP confirms write **NVRM: Read back of data using GSP confirms write *call to memmgrInitReservedMemory_DISPATCH*memmgrInitReservedMemory_HAL(pGpu, pMemoryManager, pMemoryManager->Ram.fbAddrSpaceSizeMb << 20)**memmgrInitReservedMemory_HAL(pGpu, pMemoryManager, pMemoryManager->Ram.fbAddrSpaceSizeMb << 20)*call to _memmgrInitRegistryOverrides*call to memmgrEnableDynamicPageOfflining_DISPATCH*call to memmgrScrubRegistryOverrides_DISPATCH*kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), memmgrPostSchedulingEnableHandler, NULL, memmgrPreSchedulingDisableHandler, NULL)**kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), memmgrPostSchedulingEnableHandler, NULL, memmgrPreSchedulingDisableHandler, NULL)*call to memmgrReserveConsoleRegion_56cd7a*call to 
memmgrAllocateConsoleRegion_DISPATCH*memmgrAllocateConsoleRegion_HAL(pGpu, pMemoryManager)**memmgrAllocateConsoleRegion_HAL(pGpu, pMemoryManager)*call to memmgrCreateHeap_DISPATCH*call to memmgrPageLevelPoolsCreate_IMPL*fbsrStartMode*call to fbsrInit_DISPATCH*NVRM: fbsrInit failed for supported type %d suspend-resume scheme **NVRM: fbsrInit failed for supported type %d suspend-resume scheme *call to gpuCreateRusdMemory_DISPATCH*call to _memmgrInitRUSDHeapSize*call to _memmgrAllocInternalClientObjects*_memmgrAllocInternalClientObjects(pGpu, pMemoryManager)**_memmgrAllocInternalClientObjects(pGpu, pMemoryManager)*call to memmgrRegisterSuspendCallbacks*memmgrRegisterSuspendCallbacks(pMemoryManager)**memmgrRegisterSuspendCallbacks(pMemoryManager)*eventParams*pRmApi->Alloc(pRmApi, pMemoryManager->hClient, pMemoryManager->hSubdevice, &hEvent, NV01_EVENT_KERNEL_CALLBACK_EX, &eventParams, sizeof(eventParams))**pRmApi->Alloc(pRmApi, pMemoryManager->hClient, pMemoryManager->hSubdevice, &hEvent, NV01_EVENT_KERNEL_CALLBACK_EX, &eventParams, sizeof(eventParams))*pRmApi->Control(pRmApi, pMemoryManager->hClient, pMemoryManager->hSubdevice, NV2080_CTRL_CMD_EVENT_SET_NOTIFICATION, &eventNotificationParams, sizeof(eventNotificationParams))**pRmApi->Control(pRmApi, pMemoryManager->hClient, pMemoryManager->hSubdevice, NV2080_CTRL_CMD_EVENT_SET_NOTIFICATION, &eventNotificationParams, sizeof(eventNotificationParams))*call to memmgrDestroyInternalChannels_IMPL*call to memmgrInitInternalChannels_IMPL*NVRM: Destroying global CeUtils instance **NVRM: Destroying global CeUtils instance *call to memmgrScrubHandlePreSchedulingDisable_DISPATCH*memmgrScrubHandlePreSchedulingDisable_HAL(pGpu, pMemoryManager)**memmgrScrubHandlePreSchedulingDisable_HAL(pGpu, pMemoryManager)*call to memmgrScrubHandlePostSchedulingEnable_DISPATCH*memmgrScrubHandlePostSchedulingEnable_HAL(pGpu, pMemoryManager)**memmgrScrubHandlePostSchedulingEnable_HAL(pGpu, pMemoryManager)*NVRM: Skipping global CeUtils creation 
(supported platform but useless) **NVRM: Skipping global CeUtils creation (supported platform but useless) *NVRM: Skipping global CeUtils creation (unsupported platform) **NVRM: Skipping global CeUtils creation (unsupported platform) *NVRM: Skipping global CeUtils creation **NVRM: Skipping global CeUtils creation *NVRM: Initializing global CeUtils instance **NVRM: Initializing global CeUtils instance *memmgrInitCeUtils(pMemoryManager, NV_FALSE, NV_TRUE)**memmgrInitCeUtils(pMemoryManager, NV_FALSE, NV_TRUE)*pMemoryManager->pCeUtils != NULL**pMemoryManager->pCeUtils != NULL*memdescCreate(&pVidMemDesc, pGpu, sizeof vidmemData, RM_PAGE_SIZE, NV_TRUE, pGpu->pGpuArch->bGpuArchIsZeroFb ? ADDR_SYSMEM : ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pVidMemDesc, pGpu, sizeof vidmemData, RM_PAGE_SIZE, NV_TRUE, pGpu->pGpuArch->bGpuArchIsZeroFb ? ADDR_SYSMEM : ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)*memdescCreate(&pSysMemDesc, pGpu, sizeof sysmemData, 0, NV_TRUE, (RMCFG_FEATURE_PLATFORM_GSP && !pGpu->pGpuArch->bGpuArchIsZeroFb) ? ADDR_FBMEM : ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pSysMemDesc, pGpu, sizeof sysmemData, 0, NV_TRUE, (RMCFG_FEATURE_PLATFORM_GSP && !pGpu->pGpuArch->bGpuArchIsZeroFb) ? 
ADDR_FBMEM : ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)*memmgrMemSet(pMemoryManager, &vidSurface, 0, sizeof vidmemData, TRANSFER_FLAGS_PREFER_CE)**memmgrMemSet(pMemoryManager, &vidSurface, 0, sizeof vidmemData, TRANSFER_FLAGS_PREFER_CE)*memmgrMemWrite(pMemoryManager, &vidSurface, &vidmemData, sizeof vidmemData, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &vidSurface, &vidmemData, sizeof vidmemData, TRANSFER_FLAGS_NONE)*memmgrMemWrite(pMemoryManager, &sysSurface, &sysmemData, sizeof sysmemData, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &sysSurface, &sysmemData, sizeof sysmemData, TRANSFER_FLAGS_NONE)*memmgrMemCopy (pMemoryManager, &sysSurface, &vidSurface, sizeof vidmemData, TRANSFER_FLAGS_PREFER_CE)**memmgrMemCopy (pMemoryManager, &sysSurface, &vidSurface, sizeof vidmemData, TRANSFER_FLAGS_PREFER_CE)*memmgrMemRead (pMemoryManager, &sysSurface, &sysmemData, sizeof sysmemData, TRANSFER_FLAGS_NONE)**memmgrMemRead (pMemoryManager, &sysSurface, &sysmemData, sizeof sysmemData, TRANSFER_FLAGS_NONE)*sysmemData == vidmemData**sysmemData == vidmemData*call to memmgrInitFbRegions_DISPATCH*memmgrInitFbRegions(pGpu, pMemoryManager)**memmgrInitFbRegions(pGpu, pMemoryManager)*call to memmgrPreInitReservedMemory_DISPATCH*memmgrPreInitReservedMemory_HAL(pGpu, pMemoryManager)**memmgrPreInitReservedMemory_HAL(pGpu, pMemoryManager)*OverrideFbSize**OverrideFbSize*NVRM: Regkey %s = %dM **NVRM: Regkey %s = %dM *fbOverrideSizeMb*RMDisableScrubOnFree**RMDisableScrubOnFree*RMDisableFastScrubber**RMDisableFastScrubber*RMAllowSysmemLargePages**RMAllowSysmemLargePages*bAllowSysmemHugePages*RMIncreaseRsvdMemorySizeMB**RMIncreaseRsvdMemorySizeMB*NVRM: User specified increase in reserved size = %d MBs **NVRM: User specified increase in reserved size = %d MBs *overrideMaxContextSizeRsvdMemory*RMOverrideMaxContextSizeRsvdMemoryMB**RMOverrideMaxContextSizeRsvdMemoryMB*NVRM: User specified max context size = %d MBs **NVRM: User specified max context size = %d MBs *NVRM: 
Invalid value for RMOverrideMaxContextSizeRsvdMemoryMB: %d **NVRM: Invalid value for RMOverrideMaxContextSizeRsvdMemoryMB: %d *RMDisableNoncontigAlloc**RMDisableNoncontigAlloc*RmFbsrPagedDMA**RmFbsrPagedDMA*bEnableFbsrPagedDma*RmFbsrFileMode**RmFbsrFileMode*bEnableFbsrFileMode*RMEnablePMA**RMEnablePMA*RmFbsrWDDMMode**RmFbsrWDDMMode*bFbsrWddmModeEnabled*RMEnablePmaManagedPtables**RMEnablePmaManagedPtables*call to memmgrGetLocalizedMemorySupported_DISPATCH*bLocalizedMemorySupported*RmEnableLocalizedMemory**RmEnableLocalizedMemory*call to memmgrGetLocalizedOffset_DISPATCH*RmDisableGlobalCeUtils**RmDisableGlobalCeUtils*bDisableGlobalCeUtils*RMEnableLocalEgmPeerId**RMEnableLocalEgmPeerId*bCePhysicalVidmemAccessNotSupported*RmEnableLargePageSizeSysmemDefault**RmEnableLargePageSizeSysmemDefault*NVRM: Large page sysmem default override to 0x%x via regkey. **NVRM: Large page sysmem default override to 0x%x via regkey. *RmForceEnableFlaSysmem**RmForceEnableFlaSysmem*bForceEnableFlaSysmem*NVRM: Enabled FLA+sysmem via regkey. **NVRM: Enabled FLA+sysmem via regkey. 
*call to memmgrDestroyScanoutCarveoutHeap_DISPATCH*call to _memmgrCreateChildObjects*call to _memmgrInitRegistryOverridesAtConstruct*monitoredFenceThresholdOffset*maxSubmittedSemaphoreValueOffset*src/kernel/gpu/mem_mgr/mem_mgr_ctrl.c**src/kernel/gpu/mem_mgr/mem_mgr_ctrl.c*pGFBRIParams*numFBRegions*supportCompressed*supportISO*blackList**blackList*_size*_alignment*AllocHint**pAlignment**pAttr**pAttr2**pHeight*pWidth**pWidth**pPitch**pKind*call to heapAllocHint_IMPL*NVRM: heapAllocHint failed **NVRM: heapAllocHint failed *pIsKindParams*rmResult*call to CliFindMappingInClient*cpuVirtAddress**cpuVirtAddress*pFbMemParams*pCpuMapping->pPrivate->memArea.numRanges == 1**pCpuMapping->pPrivate->memArea.numRanges == 1*gpuVirtAddress*defaultVidmemPhysicalityOverride*pFbCapsParams*call to memmgrGetFbCaps*pFbCaps*call to memmgrGetDeviceCaps**pMemorySystemConfig**pFbCaps*src/kernel/gpu/mem_mgr/mem_mgr_gsp_client.c*NVRM: Missing static info. **src/kernel/gpu/mem_mgr/mem_mgr_gsp_client.c**NVRM: Missing static info. *pFbRegionInfoParams**pFbRegionInfoParams*NVRM: Missing FB region table in GSP Init arguments. **NVRM: Missing FB region table in GSP Init arguments. *NVRM: Static info struct has more FB regions (%u) than FB supports (%u). **NVRM: Static info struct has more FB regions (%u) than FB supports (%u). 
*reservedMemSize*fbUsableMemSize*pFbRegionInfo**pFbRegionInfo*bias*fbTotalMemSizeMb*fbAddrSpaceSizeMb*pMemoryManager->Ram.fbAddrSpaceSizeMb >= pMemoryManager->Ram.fbTotalMemSizeMb**pMemoryManager->Ram.fbAddrSpaceSizeMb >= pMemoryManager->Ram.fbTotalMemSizeMb*NVRM: FB Memory from Static info: **NVRM: FB Memory from Static info: *NVRM: Reserved Memory=0x%llx, Usable Memory=0x%llx **NVRM: Reserved Memory=0x%llx, Usable Memory=0x%llx *NVRM: fbTotalMemSizeMb=0x%llx, fbAddrSpaceSizeMb=0x%llx **NVRM: fbTotalMemSizeMb=0x%llx, fbAddrSpaceSizeMb=0x%llx *call to memmgrRemoveMemNodes_IMPL**pActiveFbsr*pMemNodeTmp**pMemNodeTmp**pMemNode**pMemHeadNode*pMemTailNode**pMemTailNode*pAllocMemDesc**pAllocMemDesc*bSaveNode*src/kernel/gpu/mem_mgr/mem_mgr_pwr_mgmt.c*NVRM: pAllocMemDesc base 0x%llx (size 0x%llx) block owner 0x%X memdesc flags 0x%llx **src/kernel/gpu/mem_mgr/mem_mgr_pwr_mgmt.c**NVRM: pAllocMemDesc base 0x%llx (size 0x%llx) block owner 0x%X memdesc flags 0x%llx *NVRM: pAllocMemDesc being saved **NVRM: pAllocMemDesc being saved *call to memmgrAddMemNode_IMPL*memmgrAddMemNode(pGpu, pMemoryManager, pAllocMemDesc, NV_FALSE)**memmgrAddMemNode(pGpu, pMemoryManager, pAllocMemDesc, NV_FALSE)*NVRM: Failure during memmgrAddMemNodes: %d **NVRM: Failure during memmgrAddMemNodes: %d *NVRM: can't allocate FB_MEM_NODE, err:0x%x! **NVRM: can't allocate FB_MEM_NODE, err:0x%x! 
src/kernel/gpu/mem_mgr/mem_mgr_pwr_mgmt.c (continued)
  call sites: fbsrCopyMemoryMemDesc_DISPATCH, _memmgrAllocFbsrReservedRanges, fbsrBegin_DISPATCH, _memmgrWalkHeap, fbsrEnd_DISPATCH, memmgrAddMemNodes_IMPL, pmaBuildAllocatedBlocksList, pmaBuildPersistentList, pmaFreeAllocatedBlocksList, pmaFreePersistentList
  assertions:
    _memmgrAllocFbsrReservedRanges(pGpu, pMemoryManager)
    memdescCreate(&pMemoryManager->fbsrReservedRanges[FBSR_RESERVED_INST_MEMORY_BEFORE_BAR2PTE], pGpu, size, 0, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)
    memdescCreate(&pMemoryManager->fbsrReservedRanges[FBSR_RESERVED_INST_MEMORY_AFTER_BAR2PTE], pGpu, size, 0, NV_TRUE, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)
    pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalDevice, NV0080_CTRL_CMD_FB_GET_COMPBIT_STORE_INFO, &compbitStoreInfoParams, sizeof(compbitStoreInfoParams))
    fbsrBegin_HAL(pGpu, pFbsr, bIsGpuLost ? FBSR_OP_DESTROY : FBSR_OP_RESTORE)
    _memmgrWalkHeap(pGpu, pMemoryManager, pFbsr)
    fbsrEnd_HAL(pGpu, pFbsr)
  log strings:
    NVRM: Failed to allocate FBSR memory for GSP heap: %d
    NVRM: Failure during allocation of FBSR Reserved ranges: %d
    NVRM: Concurrent access
    NVRM: !!!!! Calling Resume on an active GPU or the previous Suspend call might have failed !!!!!!
    NVRM: !!!!! So ignoring the resume request !!!!!!
    NVRM: !!!!! Calling Suspend on a suspended GPU or
    NVRM: the previous Resume call might have failed !!!!!!
    NVRM: !!!!! Trying a suspend anyway !!!!!!
  identifiers: compbitStoreInfoParams, bIsGpuLost, hGspHeapSysMemHandle, pSaveCurr, pSaveList, pMemDescPma

src/kernel/gpu/mem_mgr/mem_mgr_regions.c
  call sites: memmgrCalcReservedFbSpace_IMPL, memmgrRegionSetupCommon_IMPL, _memmgrShiftFbRegions
  assertions:
    osNumaMemblockSize(&memblockSize) == NV_OK
    usableBlockSize >= pKernelMemorySystem->numaOnlineSize
    pMemoryManager->Ram.numFBRegions < MAX_FB_REGIONS
  log strings:
    NVRM: FB region table: numFBRegions = %u.
    NVRM: FB region %u - Base=0x%llx, Limit=0x%llx, RsvdSize=0x%llx
    NVRM: FB region %u - Reserved=%d, InternalHeap=%d, Compressed=%d, ISO=%d, Protected=%d, Performance=%u, LostOnSuspend=%d, PreserveOnSuspend=%d
    NVRM: New Region does not belong to any existing FB Regions
    NVRM: New Region belongs to FB Region 0x%x
    NVRM: STRADDLING REGION!
  identifiers: numFBRegionPriority, unusedBlockSize, pInsertRegion, insertRegion

src/kernel/gpu/mem_mgr/mem_mgr_vgpu.c
  assertions:
    (pVSI != NULL)
  log strings:
    NVRM: Invalid number of FB regions (%d)
    NVRM: Mixed density FB regions not supported (%d)
    NVRM: FB Region 0 : %x'%08x - %x'%08x perf = %d rsvd = %s ISO = %s internal = %s
    NVRM: FB Reserved Memory = %x'%08x FB Usable Memory = %x'%08x FB Address Space (MB) = %x'%08x

src/kernel/gpu/mem_mgr/mem_scrub.c
  call sites: sec2utilsMemset_IMPL, sec2utilsUpdateProgress_IMPL, _scrubGetFreeEntries, _searchScrubList, _scrubCheckProgress, _serviceInterrupts
  assertions:
    ppScrubList != NULL
    memdescCreate(&pMemDesc, pScrubber->pGpu, size, 0, NV_TRUE, ADDR_FBMEM, dstCpuCacheAttrib, MEMDESC_FLAGS_NONE)
    sec2utilsMemset(pScrubber->pSec2Utils, &memsetParams)
    ceutilsMemset(pScrubber->pCeUtils, &memsetParams)
    pScrubber != NULL
    pScrubber->pScrubList[idx].id == 0
    _scrubGetFreeEntries(pScrubber) <= MAX_SCRUB_ITEMS
  log strings:
    NVRM: Timed out when waiting for scrub job %llu to finish.
  identifiers: pScrubber, memsetParams, lastSubmittedWorkId, vgpuScrubBuffRing, pScrubBuffRingHeader, hwCurrentCompletedId, lastSWSemaphoreDone, pScrubList, idToWait
src/kernel/gpu/mem_mgr/mem_scrub.c (continued)
  call sites: _scrubCopyListItems, sec2utilsServiceInterrupts_IMPL, ceutilsServiceInterrupts_IMPL, _scrubWaitAndSave, _scrubMemory, _scrubAddWorkToList, _scrubCombinePages, _waitForPayload, _scrubCheckAndSubmit, _scrubCheckLocked, pmaGetMemScrub, pmaUnregMemScrub, _isScrubWorkPending, pmaClearScrubbedPages, portSyncMutexInitialize, memmgrUseVasForCeMemoryOps, pmaRegMemScrub
  assertions:
    _scrubWaitAndSave(pScrubber, pScrubListCopy, pagesToScrubCheck)
    itemsToSave <= MAX_SCRUB_ITEMS
    _scrubWaitAndSave(pScrubber, pList, requiredItemsToSave)
    _scrubCombinePages(pPages, chunkSize, pageCount, &pScrubList, &scrubListSize)
    _waitForPayload(pScrubber, pScrubList[iter].base, (pScrubList[iter].base + pScrubList[iter].size - 1))
    pageCount > 0
    _scrubCheckLocked(pScrubber, &pPmaScrubList, &count)
    portSyncMutexInitialize(pScrubber->pScrubberMutex)
    objCreate(&pScrubber->pSec2Utils, pHeap, Sec2Utils, pGpu, pKernelMIGGPUInstance)
    objCreate(&pScrubber->pCeUtils, pHeap, CeUtils, pGpu, pKernelMIGGPUInstance, &ceUtilsAllocParams)
    pmaRegMemScrub(pPma, pScrubber)
  log strings:
    NVRM: Timed out when waiting for scrub jobs to finish.
    NVRM: Timed out when waiting for the scrub to complete the pending work.
    NVRM: pages need to be saved off, but stash list is invalid
    NVRM: Submitting work, Id: %llx, base: %llx, size: %llx
    NVRM: Failing because the work didn't submit.
    NVRM: submitting pages, pageCount = 0x%llx chunkSize = 0x%llx
    NVRM: totalSubmitted :%llx != pageCount: %llx
    NVRM: Starting to init CeUtils for scrubber.
  identifiers: blockStart, maxId, pScrubListCopy, pScrubberMutex, totalItems, freeEntriesInList, numPagesToScrub, scrubCount, numFinished, totalSubmitted, currentCompletedId, pPmaScrubList, workPending, lastCompleted, ceUtilsAllocParams, bIsEngineTypeSec2

src/kernel/gpu/mem_mgr/mem_utils.c
  call sites: _memmgrMemUtilsScrubInitScheduleChannel, _memmgrMemUtilsScrubInitRegisterCallback, kfifoRmctrlGetWorkSubmitToken_DISPATCH, kbusGetBAR0WindowAddress_GM107
  assertions:
    _memmgrMemUtilsScrubInitScheduleChannel(pGpu, pChannel)
    CliGetKernelChannelWithDevice(pChannel->pRsClient, pChannel->deviceId, pChannel->channelId, &pFifoKernelChannel)
    kchannelGetClassEngineID_HAL(pGpu, pFifoKernelChannel, pChannel->engineObjectId, &pChannel->classEngineID, &classID, &engineID)
    _memmgrMemUtilsScrubInitRegisterCallback(pGpu, pChannel)
    kfifoRmctrlGetWorkSubmitToken_HAL(pKernelFifo, pChannel->hClient, pChannel->channelId, &pChannel->workSubmitToken)
    (pMemDesc != NULL) && (pMemDesc->Size & (sizeOfDWord-1)) == 0
    pKernelBus->virtualBar2[GPU_GFID_PF].pCpuMapping == NULL
    (physAddr & (sizeOfDWord-1)) == 0
    kbusSetBAR0WindowVidOffset_HAL(pGpu, pKernelBus, physAddr & ~0xffffULL)
    kbusSetBAR0WindowVidOffset_HAL(pGpu, pKernelBus, physAddrOrig)
    memdescMapOld(pMemDesc, 0, pMemDesc->Size, NV_TRUE, NV_PROTECT_READ_WRITE, (void **)&pMap, &pPriv)
  log strings:
    NVRM: Unable to bind Channel, status: %x
    NVRM: Unable to schedule channel, status: %x
    NVRM: Unable to get subdevice handle. Allocating subdevice
    NVRM: Unable to allocate a subdevice.
    NVRM: event allocation failed
    NVRM: event notification control failed
    NVRM: fbAlloc failure!
  identifiers: nvA06fScheduleParams, subDeviceHandle, nv0005AllocParams, nv2080EventNotificationParams, physAddrOrig, bar0Addr, bAllocedHwRes
src/kernel/gpu/mem_mgr/mem_utils.c (continued)
  call sites: memmgrSetAllocParameters_DISPATCH, memmgrGetMemTransferType, memmgrCheckSurfaceBounds, memmgrMemReadWithTransferType, memmgrMemReadOrWriteInBlocks, memmgrMemWriteWithTransferType, memmgrMemSetWithTransferType, memmgrMemCopyWithTransferType, _memmgrMemReadOrWriteWithGsp, _memmgrMemReadOrWriteUsingStagingBuffer, _memmgrMemsetWithGsp, _memmgrAllocAndMapSurface, _memmgrUnmapAndFreeSurface, memdescDescIsEqual, _memmgrMemcpyWithGsp
  assertions:
    memdescGetKernelMapping(pMemDesc) == NULL
    memdescGetKernelMappingPriv(pMemDesc) == NULL
    !(flags & TRANSFER_FLAGS_ALLOW_MAPPING_REUSE)
    transferSurface.pMapping == pMapping
    memmgrCheckSurfaceBounds(pTransferInfo, memSz) == NV_OK
    memmgrMemWrite(pMemoryManager, pTransferInfo, pTransferInfo->pMapping, memSz, flags)
    pTransferInfo->pMapping == NULL
    pTransferInfo->pMappingPriv == NULL
    memdescMapOld(pMemDesc, offset, memSz, NV_TRUE, protect, &pPtr, &pPriv) == NV_OK
    (pPtr = memdescMapInternal(pGpu, pMemDesc, flags)) != NULL
    (pPtr = portMemAllocNonPaged(memSz))
    memmgrMemRead(pMemoryManager, pTransferInfo, pPtr, memSz, flags)
    memdescCreateSubMem(&pSubMemDesc, pMemDesc, pMemDesc->pGpu, offset + baseOffset, copySize)
    memmgrMemSet(pMemoryManager, &surf, value, pSubMemDesc->Size, flags)
    a > b
    memmgrCheckSurfaceBounds(pSrcInfo, size)
    pBuf != NULL
    _memmgrMemReadOrWriteWithGsp(pGpu, pSrcInfo, pBuf, size, NV_TRUE)
    _memmgrMemReadOrWriteUsingStagingBuffer(pMemoryManager, pSrcInfo, pBuf, size, transferType, NV_TRUE)
    memmgrCheckSurfaceBounds(pDstInfo, size)
    pDst != NULL
    _memmgrMemReadOrWriteWithGsp(pGpu, pDstInfo, pBuf, size, NV_FALSE)
    _memmgrMemReadOrWriteUsingStagingBuffer(pMemoryManager, pDstInfo, pBuf, size, transferType, NV_FALSE)
    _memmgrMemsetWithGsp(pGpu, pDstInfo, value, size)
    _memmgrAllocAndMapSurface(ENG_GET_GPU(pMemoryManager), size, &pStagingBuf, &pStagingBufMap, &pStagingBufPriv)
    memmgrMemCopyWithTransferType(pMemoryManager, pDst, pSrc, size, transferType, 0)
    !memdescDescIsEqual(pDstInfo->pMemDesc, pSrcInfo->pMemDesc)
    pDst != NULL && pSrc != NULL
    _memmgrMemcpyWithGsp(pGpu, pDstInfo, pSrcInfo, size)
    pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMMGR_MEMORY_TRANSFER_WITH_GSP, &gspParams, sizeof(gspParams))
    _memmgrAllocAndMapSurface(pGpu, size, &pStagingBuf, &pStagingBufMap, &pStagingBufPriv)
    memdescMapOld(pSrc->pMemDesc, 0, size, NV_TRUE, NV_PROTECT_READ_WRITE, (void**)&pMap, &pPriv)
    memdescMapOld(pDst->pMemDesc, 0, size, NV_TRUE, NV_PROTECT_READ_WRITE, (void**)&pMap, &pPriv)
  log strings:
    Alignment limit exceeded
    NVRM: Calling GSP DMA task
    NVRM: BAR0 memset unimplemented
    NVRM: Fatal error detected in GSP-DMA encrypt: IV Overflow!
    NVRM: Fatal error detected in GSP-DMA decrypt: 0x%x!
  identifiers: adjustedSize, transferSurface, pMapping, pMappingPriv, pTransferInfo, pPtr, pSrcInfo, pDstInfo, pSubMemDesc, tmpSurf, desiredOffset, lcm, pAlloc, pStagingBuf, pStagingBufMap, pStagingBufPriv, gspParams, memop, baseAddr
src/kernel/gpu/mem_mgr/mem_utils.c (continued)
  assertions:
    ppMap != NULL
    ppPriv != NULL
    memdescCreate(ppMemDesc, pGpu, size, RM_PAGE_SIZE, NV_TRUE, ADDR_SYSMEM, NV_MEMORY_CACHED, flags)
    memdescMapOld(*ppMemDesc, 0, size, NV_TRUE, NV_PROTECT_READ_WRITE, ppMap, ppPriv)
    pSurface != NULL
    pSurface->pMemDesc != NULL
    pSurface->offset <= pSurface->pMemDesc->Size
    pSurface->offset + size <= pSurface->pMemDesc->Size
  log strings:
    NVRM: Can't copy using CE, falling back to other methods

src/kernel/gpu/mem_mgr/method_notification.c
  call sites: semaphoreFillGPUVATimestamp, notifyFillNotifierMemoryTimestamp, notifyFillNotifierGPUVATimestamp, notifyFillNOTIFICATION, notifyWriteNotifier, ctxdmaGetKernelVA_IMPL
  assertions:
    pDebugNotifier != NULL
    memdescGetAddressSpace(pMemory->pMemDesc) == ADDR_SYSMEM || !kbusIsBarAccessBlocked(pKernelBus)
  log strings:
    NVRM: Can't find mapping; semaphore not released
    NVRM: offset+size doesn't fit into mapping; semaphore not released
    NVRM: KernelVAddr==NULL; semaphore not released
    NVRM: Can't find mapping; notifier not written
    NVRM: offset+size doesn't fit into mapping; notifier not written
    NVRM: KernelVAddr==NULL; notifier not written
  identifiers: timeLo, timeHi, pSemaphore, nanoseconds, pDebugNotifier, notifyGPUVA, NotifyXlate, TimeLo, TimeHi, pNotifyBuffer, infoStatus, Info16Status_16, OtherInfo16, Info16Status, blAddress, blType, blIndex, unusedEntries

src/kernel/gpu/mem_mgr/objheap.c
  call sites: heapInitRegistryOverrides_IMPL, heapInitInternal_IMPL
  log strings:
    NVRM: Error 0x%x reading registry
  identifiers: guestFbOffsetSpa

src/kernel/gpu/mem_mgr/phys_mem_allocator/numa.c
  call sites: osGetPageShift, findRegionID, osAllocReleasePage
  assertions:
    PMA_PAGE_SHIFT >= osPageShift
    pPages[i] < pPma->coherentCpuFbSize
  log strings:
    NVRM: Freeing pPage[0] = %llx pageCount %lld
    NVRM: Cannot allocate from NUMA node %d on a non-NUMA system.
    NVRM: Cannot allocate with more than 512MB contiguity.
    NVRM: Cannot allocate from NUMA node %d before it is onlined.
    NVRM: Localized contig allocation size is too large
    NVRM: Only one ugpu can be specified for localized allocations
    NVRM: Cannot allocate more than 4GB contiguous memory in one call.
  identifiers: pPmaLock, numaReclaimSkipThreshold, pRegDescriptors, pMapInfo, pRegions, nextPage, currentStatus, sysPagePhysAddr, pmaStats, pStatsUpdateCtx, allocationOptions, partialFlag, pagesPerLocalizedStride
src/kernel/gpu/mem_mgr/phys_mem_allocator/numa.c (continued)
  call sites: pmaSelector, portAtomicExOrS64, _pmaNumaAllocateRange, _pmaNumaAllocatePages, pmaNumaFreeInternal, _pmaCheckFreeFramesToSkipReclaim, osAllocPagesNode, _pmaTranslateKernelPage, osAllocAcquirePage, scrubSubmitPages, _pmaClearScrubBit, _pmaCheckScrubbedPages, _pmaNumaAvailableEvictablePage, _pmaEvictPages
  assertions:
    pmaSelector(pPma, allocationOptions, regionList)
    pageSize >= osGetPageSize()
  log strings:
    NVRM: PMA object is not valid
    NVRM: SUCCESS allocCount %lld, allocsize %llx eviction? %s pinned ? %s contig? %s
    NVRM: FAILED allocCount %lld, allocsize %lld eviction? %s pinned ? %s contig? %s
    NVRM: Alloc from OS failed for i= %lld allocationCount = %lld pageSize = %lld!
    NVRM: Alloc from OS invalid for i= %lld allocationCount = %lld pageSize = %lld!
    NVRM: ERROR: scrubber OOM!
    NVRM: Frames %lld evicted in region %d of total allocationCount %lld Scrub status 0x%x!
    NVRM: Eviction Failed %d pages !
  identifiers: regionList, pScrubberValidLock, resultFlags, NOTALLOWED, ALLOWED, PINNED, UNPINNED, CONTIG, DISCONTIG, pMap, regAddrBase, frameCount, frameOffset, curStatus, finalAllocatedCount, numPagesAllocated, allocationCount, validRegionList, addrLimit
src/kernel/gpu/mem_mgr/phys_mem_allocator/numa.c (continued)
  call sites: _pmaNumaAvailableEvictableRange, _pmaEvictContiguous, pmaCheckRangeAgainstRegionDesc
  assertions:
    actualSize >= osGetPageSize()
    (evictEnd - evictStart + 1) == actualSize
    pGpaPhysAddr != NULL
  log strings:
    NVRM: Alloc from OS invalid for sysPhysAddr = 0x%llx actualSize = 0x%llx!
    NVRM: Allocate from OS failed for allocation size = %lld!
    NVRM: Eviction Failed = %llx to %llx!
    NVRM: Eviction succeeded = %llx to %llx Scrub status 0x%x!
    NVRM: pMap NULL cannot perform eviction
    NVRM: Evictable frame: FOUND
    NVRM: Evictable frame: NOT FOUND
  identifiers: regionIdx, gpaPhysAddr, pGpaPhysAddr, frameState

src/kernel/gpu/mem_mgr/phys_mem_allocator/phys_mem_allocator.c
  call sites: pmaIsBlacklistingAddrUnique, pmaRegisterBlacklistInfo, pmaQueryBlacklistInfo, pmaFreeList, pmaBuildList, osGetNumaMemoryUsage, pmaIsEvictionPending, pmaOsSchedule, pmaSetBlockStateAttrib, pmaRegionPrint, _pmaReallocBlacklistPages, _pmaRollback
  assertions:
    totalBytesOverall >= totalBytesInProtectedRegion
    (pPma != NULL) && (pChunks != NULL) && (pPageSize != NULL) && (pNumChunks != NULL)
    !nodeOnlined
    pPma != NULL
    pageCount != 0
    pPages != NULL
    (size == _PMA_64KB) || (size == _PMA_128KB) || (size == _PMA_2MB) || (size == _PMA_512MB)
  log strings:
    NVRM: PMA Handle = 0x%p, Largest Free Bytes = 0x%llx, base = 0x%llx, largestOffset = 0x%llx
    NVRM: Inside
    NVRM: Scrubber object is not valid
    NVRM: Localizing and evicting state is undefined, exiting
    NVRM: Reclaiming localized frames 0x%llx through 0x%llx
    NVRM: Localized allocations cannot change pin state
    NVRM: Pin failed at page %d frame %d in region %d state %d
    NVRM: NULL PMA object
    NVRM: NULL page list pointer
    NVRM: count == 0
    NVRM: pageSize=0x%llx (not 64K, 128K, 2M, or 512M)
    NVRM: NULL allocationOptions
    NVRM: Reverse allocation not supported on NUMA.
  identifiers: blacklistPages, pChunk, pDynamicBlacklistSize, pStaticBlacklistSize, pChunks, pPageSize, pNumChunks, pBlacklistChunk, ppPersistList, pLargestOffset, pRegSize, ppRegionDesc, numFreeFramesLocalizable, pEvictionCallbacksLock, evictPagesCb, evictRangeCb, evictCtxPtr, evictionPending, ctxPtr, pStatsUpdateCb, pStatsUpdateCtx, pCtxPtr, physBase, physLimit, bScrubValid, evictFlag, contigFlag, pinFlag, rangeFlag, persistFlag, alignFlag, blacklistOffFlag, skipScrubFlag, reverseFlag, localizedFlag, localizedUgpuNum
src/kernel/gpu/mem_mgr/phys_mem_allocator/phys_mem_allocator.c (continued)
  call sites: pmaNumaAllocate, pmaStateCheck, _pmaPredictOutOfMemory
  assertions:
    pmaStateCheck(pPma)
  log strings:
    NVRM: Blacklist can only be turned off for contiguous allocations
    NVRM: Blacklist cannot be turned off when scrub on free is enabled
    NVRM: base [0x%llx] or limit [0x%llx] not aligned to page size 0x%llx
    NVRM: alignment [%llx] is not aligned to 64KB or is not power of two.
    NVRM: alignment [%llx] larger than the pageSize [%llx] not supported for non-contiguous allocs
    NVRM: Region selector failed
    NVRM: Attempt %s allocation of 0x%llx pages of size 0x%llx (0x%x frames per page)
    NVRM: Returning OOM from prediction path.
  identifiers: bScrubOnFree, numFramesToAllocateTotal, pinOption, useFunc, tryEvict, contiguous, discontiguous, numPagesLeftToAllocate, numPagesAllocatedSoFar, curPages, prediction
src/kernel/gpu/mem_mgr/phys_mem_allocator/phys_mem_allocator.c (continued)
  call sites: _pmaFreeBlacklistPages, scrubCheckAndWaitForSize
  assertions:
    regionList[regionIdx] < PMA_REGION_SIZE
    numPagesLeftToAllocate + numPagesAllocatedSoFar == allocationCount
    numPagesLeftToAllocate > 0
    numPagesAllocatedThisTime <= numPagesLeftToAllocate
    numPagesAllocatedThisTime == 0 || numPagesAllocatedThisTime == numPagesLeftToAllocate
    numPagesLeftToAllocate == 0
    numPagesAllocatedSoFar == allocationCount
    numPagesAllocatedThisTime == 0
    numPagesLeftToAllocate == allocationCount
    numPagesAllocatedSoFar == 0
    evictPhysBegin <= evictPhysEnd
  log strings:
    NVRM: Memory evictable, but eviction not allowed, returning
    NVRM: Status no_memory
    NVRM: Status evictable, region before eviction:
    NVRM: Attempt %s eviction of 0x%llx pages of size 0x%llx, (0x%x frames per page) in the frame range 0x%llx..0x%llx
    NVRM: Attempt %s eviction of 0x%llx pages of size 0x%llx, (0x%x frames per page), in the frame range 0x%llx..0x%llx
    NVRM: Eviction/scrubbing failed, region after:
    NVRM: ERROR: scrubber OOM
    NVRM: Retrying after eviction/scrub
    NVRM: Succeed partial allocation
    NVRM: Waiting for scrubber
    NVRM: Retrying after waiting for scrubber
    NVRM: Returning OOM after waiting for scrubber
    NVRM: Failing allocation because the scrubber is not valid.
    NVRM: Successfully allocated frames 0x%llx through 0x%llx
    NVRM: Localizing frames 0x%llx through 0x%llx
    NVRM: Successfully allocated frames:
    NVRM: 0x%llx through 0x%llx region %d
    NVRM: ERROR: NULL PMA object
    NVRM: ERROR: Non-consecutive region ID %d (should be %d)
    NVRM: ERROR: NULL region descriptor
    NVRM: ERROR: Blacklist failure. List is NULL but count = %d
    NVRM: WARNING: registering regions on NUMA system.
  identifiers: blacklistOffPerRegion, blacklistOffAddrStart, blacklistOffRangeSize, numPagesAllocatedThisTime, evictPhysBegin, evictPhysEnd, tryAlloc, frameBase, frameRangeStart, nextExpectedFrame, frameRangeRegId, pRegionDesc, pBlacklistPageBase
src/kernel/gpu/mem_mgr/phys_mem_allocator/phys_mem_allocator.c (continued)
  call sites: portAtomicExCompareAndSwapS64, pmaScrubComplete, portSyncSpinlockInitialize, portSyncRwLockInitialize
  assertions:
    pmaPortAtomicGet(&pPma->initScrubbing) != PMA_SCRUB_DONE
    pPma && pPma->bScrubOnFree
  log strings:
    NVRM: ERROR: Region range %llx..%llx unaligned
    NVRM: Registered region:
    NVRM: %d region(s) now registered
    NVRM: Destroying PMA before node %d is offlined
    NVRM: range %llx..%llx resides in PMA region=%llx..%llx
  identifiers: pRegionDesc, regSize, pMapInfo, pAllocLock, pScrubberValidLock, pEvictionCallbacksLock, pPmaLock, nodeOnlined, pScrubObj, pmaMapInit, pmaMapDestroy, pmaMapChangeStateAttrib, pmaMapChangePageStateAttrib, pmaMapChangeBlockStateAttrib, pmaMapRead, pmaMapScanContiguous, pmaMapScanDiscontiguous, pmaMapGetSize, pmaMapGetLargestFree, pmaMapScanContiguousNumaEviction, pmaMapGetEvictingFrames, pmaMapSetEvictingFrames, bForcePersistence, bNuma, bNumaAutoOnline, numFreeFrames, num2mbPages, numFree2mbPages, numFreeFramesProtected, num2mbPagesProtected, numFree2mbPagesProtected, num2mbPagesLocalizable, numFree2mbPagesLocalizable

src/kernel/gpu/mem_mgr/phys_mem_allocator/phys_mem_allocator_util.c
  log strings:
    NVRM: ERROR: Insufficient memory to allocate blacklisting tracking structure.
*nextBlacklistEntry*alignedBlacklistAddr*NVRM: NUMA enabled - blacklisting page through kernel at address 0x%llx (GPA) 0x%llx (SPA) **NVRM: NUMA enabled - blacklisting page through kernel at address 0x%llx (GPA) 0x%llx (SPA) *call to osOfflinePageAtAddress*NVRM: osOfflinePageAtAddress() failed with status: %d **NVRM: osOfflinePageAtAddress() failed with status: %d *pRangeNext**pRangeNext**pRangeCurr*bBlockValid*pageState*pRangeList**pRangeList*cliManagedBlackFrame*pPma->bScrubOnFree == NV_FALSE**pPma->bScrubOnFree == NV_FALSE*call to pmaSetBlockStateAttribUnderPmaLock*reallocatedBlacklistCount*call to pmaSetClientManagedBlacklist*free2mbPages*bytesFree*call to scrubWaitPages*call to scrubCheck*count > 0**count > 0*regionList != NULL**regionList != NULL*allocationOptions != NULL**allocationOptions != NULL*regionDes**regionDes*regionBegin*regionEnd*allocPages*NVRM: evictPagesCb returned with status %llx **NVRM: evictPagesCb returned with status %llx *frameEvictionsInProcess >= numFramesToEvict**frameEvictionsInProcess >= numFramesToEvict*evictSize*numFramesToEvict*NVRM: evictRangeCb returned with status %llx **NVRM: evictRangeCb returned with status %llx *call to _pmaCleanupNumaReusePages*call to osGetPageRefcount*call to osCountTailPages*bRaisedRefcount*pNumFree != NULL**pNumFree != NULL*NV_IS_ALIGNED(base, PMA_GRANULARITY)**NV_IS_ALIGNED(base, PMA_GRANULARITY)*NV_IS_ALIGNED(size, PMA_GRANULARITY)**NV_IS_ALIGNED(size, PMA_GRANULARITY)*baseFrame*(base + size - 1) <= pPma->pRegDescriptors[regId]->limit**(base + size - 1) <= pPma->pRegDescriptors[regId]->limit*NVRM: Warning: NUMA state not onlined. **NVRM: Warning: NUMA state not onlined. *NVRM: NUMA node ID invalid. **NVRM: NUMA node ID invalid. 
- pState
- pRegion != NULL
- NVRM: Region: 0x%llx..0x%llx
- NVRM: Total frames: 0x%llx
- currStatus
- NVRM: %8llx..%8x:
- call to pmaPrintBlockStatus
- STATE_FREE, STATE_UNPIN, STATE_PIN, UNKNOWN STATE
- | ATTRIB_PERSISTENT
- | ATTRIB_SCRUBBING
- | ATTRIB_EVICTING
- | ATTRIB_BLACKLIST
- pRegmap, mapMaxIndex, map
- call to maxZerosGet
- mapMaxZeros, currMaxZeros, bMaxInCurrBitmap, regionMaxZeros, regionMaxZeroStartingOffset
- call to portUtilCountLeadingZeros64
- mapTrailZeros
- alignment == pageSize
- src/kernel/gpu/mem_mgr/phys_mem_allocator/regmap.c
- alignedAddrBase
- rangeStart %% pageSize == 0
- (rangeEnd + 1) %% pageSize == 0
- call to _scanDiscontiguousSearchLoop
- freeList, freeFound
- call to _scanDiscontiguousSearchLoopReverse
- evictFound, totalFound
- call to alignUpToMod
- call to _scanContiguousSearchLoop
- frameFound
- call to _scanContiguousSearchLoopReverse
- call to alignDownToMod
- latestFree
- bEvictablePage, frameBaseIdx, curMap, nextStrideStart
- pRegmap != NULL
- mapIndex, mapOffset, bitReadCount
- frame + len <= pRegmap->totalFrames
- call to _pmaRegmapDoSingleStateChange
- pPmaStats
- call to pmaRegmapChangeBlockStateAttrib
- pRegmap, totalFrames, pPmaStats
- call to pmaRegmapDestroy
- frameLimit, frameStart
- call to pmaRegmapRead
- startFrameAllocState, endFrameAllocState
- call to _pmaRegmapScanNumaUnevictable
- firstUnevictableFrame
- NVRM: Evictable frame = %lld evictstart = %llx evictEnd = %llx
- unpinBitmap, unevictableFrameIndex, unevictableIndex
- call to _checkOne
- evictBitmap, startMapIdx, startBitIdx, endMapIdx, endBitIdx, firstSetBit, mapIdx, endMask, startMask
- (NvU64)firstSetBit >= startBitIdx
- startMapIdx == endMapIdx
- firstSetBit != 64
- maxZeros, bestStartPos, currentPos
- pMap->map != NULL
- NVRM: *** %d-th MAP ***
- NVRM: map[%d]: %llx
- (pSec2Utils != NULL) && (pSec2Utils->pChannel != NULL)
- src/kernel/gpu/mem_mgr/sec2_utils.c
- NVRM: Invalid memdesc for Sec2Utils memset.
- pteArraySize, semaMthdAuthTagBuf
- NVRM: Failed to finish previous scrub op before re-using method stream auth tag buf: lastCompleted = %d lastSubmitted = %lld
- currentIndex
- NVRM: Sec2Utils Memset dstAddr: %llx, size: %x
- call to _sec2utilsSubmitPushBuffer
- call to _sec2utilsGetNextAuthTagSlot
- _sec2utilsGetNextAuthTagSlot(pSec2Utils)
- call to channelFillSec2Pb
- scrubMthdAuthTagBuf
- channelFillSec2Pb(pChannel, putIndex, bInsertFinishPayload, pChannelPbInfo, pSec2Utils->pCcslCtx, pSec2Utils->scrubMthdAuthTagBuf.pMemDesc, pSec2Utils->semaMthdAuthTagBuf.pMemDesc, pSec2Utils->scrubMthdAuthTagBuf.gpuVA, pSec2Utils->authTagPutIndex, pSec2Utils->semaMthdAuthTagBuf.gpuVA, nextIndex, &methodsLength)
- NVRM: Timed out waiting for next auth tag buf slot to free up: nextPut = %d get = %d
- call to _sec2utilsUpdateGetPtr
- authTagGetIndex, authTagPutIndex
- NVRM: Possible double-free of Sec2Utils!
- NVRM: Bad state during sec2Utils teardown!
- ((pConfCompute != NULL) && (pConfCompute->getProperty(pCC, PDB_PROP_CONFCOMPUTE_CC_FEATURE_ENABLED)))
- pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &pSec2Utils->hClient, sizeof(pSec2Utils->hClient))
- serverGetClientUnderLock(&g_resServ, pChannel->hClient, &pChannel->pRsClient)
- subdeviceId
- call to _sec2GetClass
- _sec2GetClass(pGpu, &pSec2Utils->sec2Class)
- channelSetupIDs(pChannel, pGpu, NV_FALSE, IS_MIG_IN_USE(pGpu))
- memmgrMemUtilsChannelInitialize_HAL(pGpu, pMemoryManager, pChannel)
- NVRM: Channel alloc successful for Sec2Utils
- call to memmgrMemUtilsSec2CtxInit_DISPATCH
- memmgrMemUtilsSec2CtxInit_HAL(pGpu, pMemoryManager, pChannel)
- call to _sec2InitBuffers
- _sec2InitBuffers(pSec2Utils)
- ccslContextInitViaChannel(&pSec2Utils->pCcslCtx, pSec2Utils->hClient, pSec2Utils->hSubdevice, pChannel->channelId)
- serverutilGenResourceHandle(pSec2Utils->hClient, &pSec2Utils->scrubMthdAuthTagBuf.hPhysMem)
- serverutilGenResourceHandle(pSec2Utils->hClient, &pSec2Utils->scrubMthdAuthTagBuf.hVirtMem)
- call to _sec2AllocAndMapBuffer
- _sec2AllocAndMapBuffer(pSec2Utils, RM_PAGE_SIZE_64K, &pSec2Utils->scrubMthdAuthTagBuf)
- serverutilGenResourceHandle(pSec2Utils->hClient, &pSec2Utils->semaMthdAuthTagBuf.hPhysMem)
- serverutilGenResourceHandle(pSec2Utils->hClient, &pSec2Utils->semaMthdAuthTagBuf.hVirtMem)
- _sec2AllocAndMapBuffer(pSec2Utils, RM_PAGE_SIZE_64K, &pSec2Utils->semaMthdAuthTagBuf)
- pSec2Buf
- pRmApi->AllocWithHandle(pRmApi, pSec2Utils->hClient, pSec2Utils->hDevice, pSec2Buf->hPhysMem, NV01_MEMORY_SYSTEM, &memAllocParams, sizeof(memAllocParams))
- pRmApi->AllocWithHandle(pRmApi, pSec2Utils->hClient, pSec2Utils->hDevice, pSec2Buf->hVirtMem, NV50_MEMORY_VIRTUAL, &memAllocParams, sizeof(memAllocParams))
- pClass != NULL
- gpuGetClassList(pGpu, &numClasses, NULL, ENG_SEC2)
- (numClasses != 0)
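The regmap strings above (maxZerosGet, _scanContiguousSearchLoop, portUtilCountLeadingZeros64, frame + len <= pRegmap->totalFrames) point at a scan over a per-frame allocation bitmap stored in 64-bit words. A minimal standalone sketch of that idea follows; the names and layout here are hypothetical illustrations, not the driver's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: find the first run of `len` consecutive clear
 * bits (free frames) in a bitmap where bit (i % 64) of word (i / 64)
 * marks frame i as allocated.
 * Returns the starting frame index, or -1 if no such run exists. */
static int64_t scan_contiguous(const uint64_t *map, size_t totalFrames, size_t len)
{
    size_t run = 0;  /* length of the current run of free frames */

    for (size_t frame = 0; frame < totalFrames; frame++)
    {
        int used = (int)((map[frame / 64] >> (frame % 64)) & 1u);
        run = used ? 0 : run + 1;
        if (run == len)
            return (int64_t)(frame - len + 1);
    }
    return -1;  /* no contiguous free range of the requested length */
}
```

A production scanner would skip whole all-ones words and use count-leading/trailing-zeros intrinsics (as the portUtilCountLeadingZeros64 string suggests) instead of testing bit by bit, but the search structure is the same.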
- call to _semsurfValidateIndex
- _semsurfValidateIndex(pSemSurf->pShared, index)
- src/kernel/gpu/mem_mgr/sem_surf.c
- call to _semsurfGetValue
- _semsurfValidateIndex(pSemSurf->pShared, pParams->index)
- notificationHandle
- NVRM: Invalid semaphore surface notification handle: 0x%016llx, status: %s (0x%08x)
- call to _semsurfDelWaiter
- NVRM: SemMem(0x%08x, 0x%08x): Entering spinlock
- pIndexListeners, vlIter, pValueListeners
- NVRM: SemSurf(0x%08x, 0x%08x): Unregistered event notification %p from semaphore surface listener at index %llu, value %llu.
- call to _semsurfSetMonitoredValue
- NVRM: SemMem(0x%08x, 0x%08x): Exited spinlock
- call to _semsurfSetValueAndNotify
- call to _semsurfAddWaiter
- NVRM: SemSurf(0x%08x, 0x%08x): Requested backwards update from %llu->%llu at idx %llu
- prevMinWaitValue, semValue
- NVRM: SemSurf(0x%08x, 0x%08x): Detected already signalled wait for %llu at idx %llu current val %llu
- NVRM: SemSurf(0x%08x, 0x%08x): Failed to allocate a semaphore index listeners node
- NVRM: SemSurf(0x%08x, 0x%08x): Duplicate entry found for new index listener list
- NVRM: SemSurf(0x%08x, 0x%08x): Failed to allocate a semaphore value listener node
- NVRM: SemSurf(0x%08x, 0x%08x): Existing value-updating waiter at index %llu for wait value %llu: Existing update value: %llu Requested update value: %llu
- NVRM: SemSurf(0x%08x, 0x%08x): Notification handle already registered at index %llu for wait value %llu.
- pListener
- NVRM: SemSurf(0x%08x, 0x%08x): Failed to register event notification for semaphore surface listener at index %llu, value %llu. Status: 0x%08x
- NVRM: SemSurf(0x%08x, 0x%08x): Registered semaphore surface value listener at index %llu, value %llu current value %llu post-wait value %llu notification: %p
- call to _semsurfSetValue
- curValue, valueNode, minWaitValue
- NVRM: Checking index %llu value waiter %llu against semaphore value %llu from CPU write
- vlIter.pValue->newValue >= newValue
- minWaitValue == NV_U64_MAX
- call to _semsurfNotifyCompleted
- valueChanged
- !valueChanged || (newValue > curValue)
- CliGetKernelChannel(pRsClient, pParams->hChannel, &pKernelChannel)
- pChannelNode
- pChannelNode != NULL
- call to _semsurfUnbindChannel
- notifyIndices, mapKey, pNotNode
- pNotNode->nUsers >= 1
- NVRM: SemSurf(0x%08x, 0x%08x): GPU instance 0x%08x notify index 0x%08x number of bound channels at max
- NVRM: SemSurf(0x%08x, 0x%08x): Bound to existing event for GPU instance 0x%08x notify index 0x%08x
- NVRM: SemSurf(0x%08x, 0x%08x): Failed to allocate an event notifier map node
- call to _semsurfRegisterCallback
- NVRM: SemSurf(0x%08x, 0x%08x): Duplicate entry found for new event notifier map node
- call to _semsurfUnregisterCallback
- NVRM: SemSurf(0x%08x, 0x%08x): Bound to new event for GPU instance 0x%08x notify index 0x%08x
- NVRM: SemSurf(0x%08x, 0x%08x): Failed to allocate an channel binding map node for channel 0x%08x
- numNotifyIndices
- NVRM: SemSurf(0x%08x, 0x%08x): Attempt to register duplicate channel binding for channel 0x%08x
- call to _semsurfRemoveNotifyBinding
- pRmApi->DupObject(pRmApi, RES_GET_CLIENT_HANDLE(pSemSurf), hDeviceDst, &hSemMemOut, pShared->hClient, pShared->hSemaphoreMem, 0)
- bSemMemDuped
- pRmApi->DupObject(pRmApi, RES_GET_CLIENT_HANDLE(pSemSurf), hDeviceDst, &hMaxMemOut, pShared->hClient, pShared->hMaxSubmittedMem, 0)
- bMaxMemDuped, hMaxMemOut, hSemaphoreMem, hMaxSubmittedMem
- pShared->pSpinlock
- NVRM: SemSurf(0x%08x, 0x%08x): Destructor with SemMem(0x%08x, 0x%08x)
- curIdx, pNextListener
- NVRM: SemSurf(0x%08x, 0x%08x): Deleting active waiter at index %llu value %llu
- pNextValueListeners
- pShared->refCount > 0
- call to _semsurfDestroyShared
- call to semsurfCopyConstruct
- pAllocParams->flags == 0ULL
- pShared != NULL
- pShared->pSpinlock != NULL
- memmgrGetInternalClientHandles(pGpu, pMemoryManager, GPU_RES_GET_DEVICE(pSemSurf), &pShared->hClient, &pShared->hDevice, &pShared->hSubdevice)
- pRmApi->Control(pRmApi, pShared->hClient, pShared->hSubdevice, NV2080_CTRL_CMD_FB_GET_SEMAPHORE_SURFACE_LAYOUT, &pShared->layout, sizeof pShared->layout)
- bIs64Bit, bHasMonitoredFence
- call to _semsurfDupMemory
- _semsurfDupMemory(pSemSurf, pAllocParams)
- memGetByHandle(pRsClient, pShared->hSemaphoreMem, &pShared->pSemaphoreMem)
- DRF_VAL(OS32, _ATTR, _LOCATION, pShared->pSemaphoreMem->Attr) == NVOS32_ATTR_LOCATION_PCI
- pRmApi->MapToCpu(pRmApi, pShared->hClient, pShared->hDevice, pShared->hSemaphoreMem, 0, pShared->pSemaphoreMem->pMemDesc->Size, &pShared->semKernAddr, 0)
- slotCount, pSem
- memGetByHandle(pRsClient, pShared->hMaxSubmittedMem, &pShared->pMaxSubmittedMem)
- pShared->pMaxSubmittedMem->pMemDesc->Size >= pShared->pSemaphoreMem->pMemDesc->Size
- pRmApi->MapToCpu(pRmApi, pShared->hClient, pShared->hDevice, pShared->hMaxSubmittedMem, 0, pShared->pMaxSubmittedMem->pMemDesc->Size, &pShared->maxSubmittedKernAddr, 0)
- pMaxSubmitted, pMaxSubmittedMem
- DRF_VAL(OS32, _ATTR, _LOCATION, pShared->pMaxSubmittedMem->Attr) == NVOS32_ATTR_LOCATION_PCI
- maxSubmittedCoherency
- (maxSubmittedCoherency != NVOS32_ATTR_COHERENCY_UNCACHED) && (maxSubmittedCoherency != NVOS32_ATTR_COHERENCY_WRITE_COMBINE)
- NVRM: SemSurf(0x%08x, 0x%08x): Constructed with SemMem(0x%08x, 0x%08x)
- mapCount(&pShared->notifierMap) == 0
- notIter, maxSubmittedKernAddr, semKernAddr, pSemaphoreMem
- call to _semsurfFreeMemory
- pSrcSemSurf
- pSemSurf->pShared->refCount > 0
- NVRM: SemSurf(0x%08x, 0x%08x): Copied with SemMem(0x%08x, 0x%08x)
- NVRM: SemSurf(0x%08x, 0x%08x): Unbound event for GPU instance 0x%08x notify index 0x%08x
- pNotNode != NULL
- memmgrGetInternalClientHandles(pGpu, GPU_GET_MEMORY_MANAGER(pGpu), GPU_RES_GET_DEVICE(pKernelChannel), &pNotNode->hClient, NULL, &hSubdevice)
- pRmApi->Alloc(pRmApi, pNotNode->hClient, hSubdevice, &pNotNode->hEvent, NV01_EVENT_KERNEL_CALLBACK_EX, &nv0005AllocParams, sizeof(nv0005AllocParams))
- nUsers
- pRmApi->DupObject(pRmApi, pShared->hClient, pShared->hDevice, &pShared->hSemaphoreMem, RES_GET_CLIENT_HANDLE(pSemSurf), pAllocParams->hSemaphoreMem, NV04_DUP_HANDLE_FLAGS_NONE)
- !pSemSurf->pShared->bIs64Bit
- pRmApi->DupObject(pRmApi, pShared->hClient, pShared->hDevice, &pShared->hMaxSubmittedMem, RES_GET_CLIENT_HANDLE(pSemSurf), pAllocParams->hMaxSubmittedMem, NV04_DUP_HANDLE_FLAGS_NONE)
- NVRM: SemMem(0x%08x, 0x%08x): Got a callback
- NVRM: hEvent: 0x%08x surf event: 0x%08x, data 0x%08x, status 0x%08x
- removedIndex, ilIter
- NVRM: Checking index %llu value waiter %llu against semaphore value %llu
- valuesChanged
- NVRM: SemMem(0x%08x, 0x%08x): Setting monitored fence value at index %llu to %llu
- pendIter, pVNode
- NVRM: SemMem(0x%08x, 0x%08x): Delivered OS events for value %llu at idx %llu. Status: %s (0x%08x)
- ppListeners, *ppListeners
- NVRM: SemMem(0x%08x, 0x%08x): Value updated by waiter to %llu at idx %llu
- NVRM: Updated semaphore surface value as 64-bit native to %llu
- pMaxSubmittedBase, origMax
- call to portAtomicExCompareAndSwapU64
- exchanged, oldMax
- NVRM: Updated maxSubmitted from %llu to %llu and 32-bit semVal %u at semaphore index %llu
- NVRM: Read semaphore surface value as 64-bit native
- NVRM: Read maxSubmitted %llu and 32-bit semVal %llu from semaphore index %llu
- call to _sysmemscrubProcessCompletedEntries
- call to _sysmemscrubFreeWorkerParams
- listCount(&pSysmemScrubber->asyncScrubList) == 0
- src/kernel/gpu/mem_mgr/sysmem_scrub.c
- rmDeviceGpuLockIsOwner(pSysmemScrubber->pGpu->gpuInstance) || rmGpuLockIsOwner()
- pMemDesc->Size == pMemDesc->ActualSize
- call to _sysmemscrubScrubAndFreeAsync
- NVRM: pMemDesc=%p RefCount=%u DupCount=%u
- call to _sysmemscrubScrubAndFreeSync
- pMemDesc->RefCount == 1
- !memdescIsSubMemoryMemDesc(pMemDesc)
- semaphoreValue
- NVRM: scrub completed callback
- call to portAtomicAddU32
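The strings around portAtomicExCompareAndSwapU64 (origMax, oldMax, exchanged, "NVRM: Updated maxSubmitted from %llu to %llu ...") suggest a compare-and-swap retry loop that monotonically raises a shared 64-bit "max submitted" value. A minimal sketch of that pattern in portable C11 atomics, with hypothetical names rather than the driver's own code:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch: raise *pMax to newValue only if newValue is
 * larger, retrying on CAS failure. Returns the max observed before
 * this call took effect. */
static uint64_t update_max_submitted(_Atomic uint64_t *pMax, uint64_t newValue)
{
    uint64_t oldMax = atomic_load(pMax);

    /* Loop until either our CAS publishes newValue, or some other
     * thread has already published a value >= newValue. On failure,
     * atomic_compare_exchange_weak reloads oldMax for the re-check. */
    while (oldMax < newValue &&
           !atomic_compare_exchange_weak(pMax, &oldMax, newValue))
    {
        /* retry */
    }
    return oldMax;
}
```

This is the standard lock-free way to keep a monotone high-water mark: writers never lose an update, and a stale writer simply observes that a larger value already landed and backs off.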
- call to _sysmemscrubIsWorkPending
- osQueueWorkItem(pSysmemScrubber->pGpu, _sysmemscrubProcessCompletedEntriesCb, pWorkerParams, (OsQueueWorkItemFlags){ .bDontFreeParams = NV_TRUE, .bFallbackToDpc = NV_TRUE, .bLockGpuGroupDevice = NV_TRUE, .bFullGpuSanity = NV_TRUE}) == NV_OK
- bWorkPending
- NVRM: processing completed scrub work in deferred work item
- NVRM: freeing scrubbed pMemDesc=%p RefCount=%u DupCount=%u
- pMemoryManager->bFastScrubberSupportsSysmem
- bAsync
- RMDisableAsyncSysmemScrub
- pWorkerParams != NULL
- pWorkerParams->pSpinlock != NULL
- objCreate(&pSysmemScrubber->pCeUtils, pSysmemScrubber, CeUtils, pGpu, NULL, &ceUtilsAllocParams)
- src/kernel/gpu/mem_mgr/vaspace_api.c
- pEntryGpu, bEntryBcState, bOrigBcState
- call to memmgrPageLevelPoolsGetInfo_IMPL
- memmgrPageLevelPoolsGetInfo(pGpu, pMemoryManager, pDevice, &pMemPool)
- call to rmMemPoolReserve
- call to rmMemPoolRelease
- call to rmMemPoolTrim
- bEntryBcState == gpumgrGetBcEnabledStatus(pEntryGpu)
- pNvVASpaceAllocParams
- !((flags & VASPACE_FLAGS_ENABLE_ATS) && !((flags & VASPACE_FLAGS_IS_EXTERNALLY_OWNED) || ((flags & VASPACE_FLAGS_SHARED_MANAGEMENT) && (bKernelClient || IS_GFID_VF(gfid)))))
- !((flags & VASPACE_FLAGS_ENABLE_FAULTING) && !(flags & VASPACE_FLAGS_IS_EXTERNALLY_OWNED))
- bBar1VA, bFlaVA
- call to _vaspaceapiManagePageLevelsForSplitVaSpace
- NVRM: Skipping Legacy FLA vaspace destruct, gpu:%x
- sva_handle
- call to os_iommu_sva_unbind
- NVRM: Destroyed vaspaceapi 0x%x, hParent 0x%x, device 0x%x, client 0x%x varef 0x%p, deviceref 0x%p
- pSrcVaspaceApi
- NVRM: Shared vaspaceapi 0x%x, device 0x%x, client 0x%x, as vaspace 0x%x for hParent 0x%x device 0x%x client 0x%x varef 0x%p, deviceref 0x%p
- rmGpuGroupLockAcquire(pGpu->gpuInstance, GPU_LOCK_GRP_ALL, GPU_LOCK_FLAGS_SAFE_LOCK_UPGRADE, RM_LOCK_MODULES_MEM, &gpuMask)
- call to vaspaceapiCopyConstruct_IMPL
- pNvVASpaceAllocParams
- originalVaBase, originalVaSize
- call to translateAllocFlagsToVASpaceFlags
- translateAllocFlagsToVASpaceFlags(allocFlags, &flags, (pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL), gfid)
- NVRM: VASpace alloc should be called without acquiring GPU lock
- pSys->getProperty(pSys, PDB_PROP_SYS_ENABLE_RM_TEST_ONLY_CODE)
- call to os_iommu_sva_bind
- os_iommu_sva_bind(pGpu->pOsGpuInfo, &pVaspaceApi->sva_handle, &pNvVASpaceAllocParams->pasid)
- NVRM: os_iommu_sva_bind pasid: %d
- bSendRPC
- call to kbusGetFlaVaspace_DISPATCH
- call to translatePageSizeToVASpaceFlags
- vasLimit
- NVRM: Integer overflow !!! Invalid parameters for vaBase:%llx, vaSize:%llx
- NVRM: Could not construct VA space. Status %x
- processAddrSpaceId
- NVRM: pasid: %d
- NVRM: Created vaspaceapi 0x%x, hParent 0x%x, device 0x%x, client 0x%x, varef 0x%p, parentref 0x%p
- call to dmaInitGart_GM107
- call to dmaConstructHal_VF
- call to dmaInitRegistryOverrides
- src/kernel/gpu/mem_mgr/virt_mem_allocator.c
- NVRM: , Could not apply registry overrides
- call to dmaInit_GM107
- memBoundaryCfgTable
- partition_id < 8
- src/kernel/gpu/mem_sys/arch/ampere/kern_mem_sys_ga100.c
- migMemoryPartitionTable
- call to memmgrGetMIGPartitionableMemoryRange_IMPL
- call to _kmemsysSwizzIdToFbMemRange_GA100
- _kmemsysSwizzIdToFbMemRange_GA100(pGpu, pKernelMemorySystem, swizzId, vmmuSegmentSize, partitionableMemoryRange, &addrRange)
- numBoundaries, partitionDivFactor
- call to kmemsysInitMIGGPUInstanceMemConfigForSwizzId_IMPL
- kmemsysInitMIGGPUInstanceMemConfigForSwizzId(pGpu, pKernelMemorySystem, swizzId, startingVmmuSegment, memSizeInVmmuSegment)
- kmemsysSwizzIdToMIGMemRange(pGpu, pKernelMemorySystem, swizzId, totalRange, pAddrRange)
- pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_GET_MIG_MEMORY_PARTITION_TABLE, &pKernelMemorySystem->migMemoryPartitionTable, sizeof(pKernelMemorySystem->migMemoryPartitionTable))
- pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_KMEMSYS_GET_MIG_MEMORY_CONFIG, &params, sizeof(params))
- memBoundaryCfgA, memBoundaryCfgB, memBoundaryCfgC
- pSysmemFlushBufferMemDesc
- call to kmemsysGetFlushSysmemBufferAddrShift_DISPATCH
- NVRM: Could not allocate sysmem flush buffer: %x
- sysmemFlushBuffer
- pKernelMemorySystem->sysmemFlushBuffer != 0
- alignedSysmemFlushBufferAddr, alignedSysmemFlushBufferAddrHi
- (alignedSysmemFlushBufferAddrHi & (~NV_PFB_NISO_FLUSH_SYSMEM_ADDR_HI_MASK)) == 0
- pHshub0IoAperture != NULL
- src/kernel/gpu/mem_sys/arch/blackwell/kern_mem_sys_gb100.c
- (alignedSysmemFlushBufferAddrHi & (~NV_PFB_HSHUB_PCIE_FLUSH_SYSMEM_ADDR_HI_ADR_MASK)) == 0
- regValHi, regValLo
- call to kmemsysDestroyHshub0Aperture_DISPATCH
- regHshubPcieFlushSysmemAddrValLo, regHshubPcieFlushSysmemAddrValHi, regHshubEgPcieFlushSysmemAddrValLo, regHshubEgPcieFlushSysmemAddrValHi, hshub0PriBaseAddress
- src/kernel/gpu/mem_sys/arch/blackwell/kern_mem_sys_gb10b.c
- (alignedSysmemFlushBufferAddrHi & (~NV_PFB_FBHUB0_PCIE_FLUSH_SYSMEM_ADDR_HI_ADR_MASK)) == 0
- src/kernel/gpu/mem_sys/arch/hopper/kern_mem_sys_gh100.c
- NVRM: MC FLA mapping subPageOffset must be 0
- swizzId < KMIGMGR_MAX_GPU_SWIZZID
- pSwizzIdFbMemPageRanges
- pStaticInfo->pSwizzIdFbMemPageRanges != NULL
- call to kmemsysIsSwizzIdRejectedByHW_DISPATCH
- fbMemPageRanges
- (swizzId == 0)
- NVRM: GPU Instance Mem Config for swizzId = 0x%x is rejected by HW
- call to osNumaRemoveGpuMemory
- NVRM: memory partition: %u removed successfully!
- NV_IS_ALIGNED(size, memblockSize)
- NVRM: Memory partition: %u is already in use!
- call to osNumaAddGpuMemory
- NVRM: Memory partition: %u added successfully! numa id: %u offset: 0x%llx size: 0x%llx
- (alignedSysmemFlushBufferAddrHi & (~NV_PFB_FBHUB_PCIE_FLUSH_SYSMEM_ADDR_HI_ADR_MASK)) == 0
- NVRM: called with null %s
- cache operation
- memory target
- NVRM: Invalidate not supported, promoting to an evict (writeback + invalidate clean lines).
- call to kmemsysDoCacheOp_DISPATCH
- call to kmemsysSendFlushL2AllRamsAndCaches_IMPL
- tokenRangeMask, bMemopBusy
- NVRM: - timeout error waiting for reg 0x%x update cnt=%d
- call to memdescOverridePhysicalAddressWidthWindowsWAR
- call to osDmaSetAddressSize
- src/kernel/gpu/mem_sys/arch/maxwell/kern_mem_sys_gm107.c
- NVRM: GPU 0x%x: Allocated sysmem flush buffer not addressable 0x%llx
- bMakeItFatal
- call to kmemsysWriteL2PeermemInvalidateReg_DISPATCH
- call to kmemsysWriteL2SysmemInvalidateReg_DISPATCH
- call to kmemsysReadL2PeermemInvalidateReg_DISPATCH
- regValueRead
- call to kmemsysReadL2SysmemInvalidateReg_DISPATCH
- src/kernel/gpu/mem_sys/arch/maxwell/kern_mem_sys_gm200.c
- call to kmemsysGetMaxFbpas_DISPATCH
- call to kmemsysGetEccDedCountSize_DISPATCH
- call to kmemsysGetEccDedCountRegAddr_DISPATCH
- fbpaDedCountRegAddr
- call to _kmemsysReadRegAndMaskPriError
- call to kmemsysGetL2EccDedCountRegAddr_DISPATCH
- ltcDedCountRegAddr
- call to _kmemsysRemoveAtsPeers
- src/kernel/gpu/mem_sys/arch/volta/kern_mem_sys_gv100.c
- NVRM: Failed to remove ATS peer access between GPU%d and GPU%d
- call to _kmemsysSetupAtsPeers
- pLocalKernelMs, pRemoteKernelMs
- call to _kmemsysResetAtsPeerConfiguration
- NVRM: Removing ATS p2p config between GPU%u and GPU%u failed with status %x
- NVRM: Removinging ATS p2p config between GPU%u and GPU%u failed with status %x
- call to _kmemsysConfigureAtsPeers
- NVRM: Configuring ATS p2p config between GPU%u and GPU%u failed with status %x
- pLocalRmApi
- pLocalRmApi->Control(pLocalRmApi, pLocalGpu->hInternalClient, pLocalGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_GET_LOCAL_ATS_CONFIG, &getParams, sizeof(NV2080_CTRL_INTERNAL_MEMSYS_GET_LOCAL_ATS_CONFIG_PARAMS))
- setParams, addrSysPhys, addrWidth, maskWidth
- pLocalRmApi->Control(pLocalRmApi, pLocalGpu->hInternalClient, pLocalGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_SET_PEER_ATS_CONFIG, &setParams, sizeof(NV2080_CTRL_INTERNAL_MEMSYS_SET_PEER_ATS_CONFIG_PARAMS))
- pRemoteRmApi
- pRemoteRmApi->Control(pRemoteRmApi, pRemoteGpu->hInternalClient, pRemoteGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MEMSYS_GET_LOCAL_ATS_CONFIG, &getParams,
sizeof(NV2080_CTRL_INTERNAL_MEMSYS_GET_LOCAL_ATS_CONFIG_PARAMS))*call to osGetFbNumaInfo*coherentCpuFbBaseOverride*NVRM: NUMA FB Physical address overrided to 0x%llx via regkey. **NVRM: NUMA FB Physical address overrided to 0x%llx via regkey. *NVRM: NUMA FB Physical address: 0x%llx Node ID: 0x%x **NVRM: NUMA FB Physical address: 0x%llx Node ID: 0x%x *call to kbusVerifyCoherentLink_DISPATCH*kbusVerifyCoherentLink_HAL(pGpu, GPU_GET_KERNEL_BUS(pGpu))*src/kernel/gpu/mem_sys/kern_mem_sys.c**kbusVerifyCoherentLink_HAL(pGpu, GPU_GET_KERNEL_BUS(pGpu))**src/kernel/gpu/mem_sys/kern_mem_sys.c*call to memmgrSavePowerMgmtState_KERNEL*memmgrSavePowerMgmtState(pGpu, pMemoryManager)**memmgrSavePowerMgmtState(pGpu, pMemoryManager)*call to memmgrRestorePowerMgmtState_KERNEL*memmgrRestorePowerMgmtState(pGpu, pMemoryManager)**memmgrRestorePowerMgmtState(pGpu, pMemoryManager)*call to kmemsysReadUsableFbSize_DISPATCH*call to kbusTeardownCoherentCpuMapping_DISPATCH*PDB_PROP_GPU_COHERENT_CPU_MAPPING*NVRM: Force disabling NVLINK/C2C mappings through regkey. **NVRM: Force disabling NVLINK/C2C mappings through regkey. *call to kmemsysGetFbNumaInfo_DISPATCH*kmemsysGetFbNumaInfo_HAL(pGpu, pKernelMemorySystem, &pKernelMemorySystem->coherentCpuFbBase, &pKernelMemorySystem->coherentRsvdFbBase, &numaNodeId)**kmemsysGetFbNumaInfo_HAL(pGpu, pKernelMemorySystem, &pKernelMemorySystem->coherentCpuFbBase, &pKernelMemorySystem->coherentRsvdFbBase, &numaNodeId)*RMOverrideGpuNumaNodeId**RMOverrideGpuNumaNodeId*NVRM: Override GPU NUMA node ID %d! **NVRM: Override GPU NUMA node ID %d! 
*NVRM: Failed to get NUMA node id for GPU memory **NVRM: Failed to get NUMA node id for GPU memory *NVRM: Failed to get coherent GPU memory base address **NVRM: Failed to get coherent GPU memory base address *coherentCpuFbEnd*numaOnlineSize*rsvdSize >= totalRsvdBytes**rsvdSize >= totalRsvdBytes*totalRsvdBytes*NVRM: fbSize: 0x%llx NUMA reserved memory size: 0x%llx online memory size: 0x%llx **NVRM: fbSize: 0x%llx NUMA reserved memory size: 0x%llx online memory size: 0x%llx *numaOnlineBase*kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, 0, 0, numaOnlineSize, &numaNodeId)**kmemsysNumaAddMemory_HAL(pGpu, pKernelMemorySystem, 0, 0, numaOnlineSize, &numaNodeId)*call to kbusCreateCoherentCpuMapping_DISPATCH*kbusCreateCoherentCpuMapping_HAL(pGpu, pKernelBus, numaOnlineSize, bFlush)**kbusCreateCoherentCpuMapping_HAL(pGpu, pKernelBus, numaOnlineSize, bFlush)*call to kmemsysInitFlushSysmemBuffer_DISPATCH*gpuInstanceMemConfig**gpuInstanceMemConfig*NVRM: GPU Instance Mem Config for swizzId = 0x%x : MemStartSegment = 0x%llx, MemSizeInSegments = 0x%llx **NVRM: GPU Instance Mem Config for swizzId = 0x%x : MemStartSegment = 0x%llx, MemSizeInSegments = 0x%llx *pKernelMemorySystem->gpuInstanceMemConfig[swizzId].bInitialized**pKernelMemorySystem->gpuInstanceMemConfig[swizzId].bInitialized*(vmmuSegmentSize != 0)**(vmmuSegmentSize != 0)*alignedStartAddr*alignedEndAddr*call to kmemsysSwizzIdToVmmuSegmentsRange_DISPATCH*kmemsysSwizzIdToVmmuSegmentsRange_HAL(pGpu, pKernelMemorySystem, swizzId, vmmuSegmentSize, totalVmmuSegments)**kmemsysSwizzIdToVmmuSegmentsRange_HAL(pGpu, pKernelMemorySystem, swizzId, vmmuSegmentSize, totalVmmuSegments)*numaMigPartitionSize**numaMigPartitionSize*call to kmigmgrMemSizeFlagToSwizzIdRange_DISPATCH*call to _kmemsysSetNumaMigPartitionSizeSubArrayToMinimumValue*swizzRange*bNumaMigPartitionSizeEnumerated*minPartitionSize*pAddrRange != NULL**pAddrRange != NULL*startAddr*endAddr*!rangeIsEmpty(totalRange)**!rangeIsEmpty(totalRange)*call to 
kmemsysSwizzIdToMIGMemSize_IMPL*kmemsysSwizzIdToMIGMemSize(pGpu, pKernelMemorySystem, swizzId, totalRange, &memSizeFlag, &memSize)**kmemsysSwizzIdToMIGMemSize(pGpu, pKernelMemorySystem, swizzId, totalRange, &memSizeFlag, &memSize)*swizzIdRange*!rangeIsEmpty(swizzIdRange)**!rangeIsEmpty(swizzIdRange)*minSwizzId*unalignedStartAddr*NVRM: Unsupported SwizzId %d **NVRM: Unsupported SwizzId %d *NVRM: Insufficient memory **NVRM: Insufficient memory *pMemorySystemConfig->bOneToOneComptagLineAllocation || pMemorySystemConfig->bUseRawModeComptaglineAllocation**pMemorySystemConfig->bOneToOneComptagLineAllocation || pMemorySystemConfig->bUseRawModeComptaglineAllocation*!FLD_TEST_DRF(OS32, _ALLOC, _COMPTAG_OFFSET_USAGE, _FIXED, pFbAllocInfo->ctagOffset)**!FLD_TEST_DRF(OS32, _ALLOC, _COMPTAG_OFFSET_USAGE, _FIXED, pFbAllocInfo->ctagOffset)*NVRM: Compressible surfaces cannot be allocated on a system, where scrub on free is disabled **NVRM: Compressible surfaces cannot be allocated on a system, where scrub on free is disabled *memmgrUseVasForCeMemoryOps(pMemoryManager)**memmgrUseVasForCeMemoryOps(pMemoryManager)*call to kmemsysNumaRemoveAllMemory_DISPATCH**memPartitionNumaInfo**pSysmemFlushBufferMemDesc*pKernelMemorySystem != NULL**pKernelMemorySystem != NULL*call to kmemsysTeardownCoherentCpuLink_IMPL**pStaticConfig*call to kmemsysRemoveAllAtsPeers_DISPATCH*call to kmemsysSetupAllAtsPeers_DISPATCH*NVRM: ATS peer setup failed. **NVRM: ATS peer setup failed. *call to kmemsysAssertSysmemFlushBufferValid_56cd7a*kmemsysAssertSysmemFlushBufferValid_HAL(pGpu, pKernelMemorySystem)**kmemsysAssertSysmemFlushBufferValid_HAL(pGpu, pKernelMemorySystem)*call to kmemsysEnsureSysmemFlushBufferInitialized_IMPL*kmemsysEnsureSysmemFlushBufferInitialized(pGpu, pKernelMemorySystem)**kmemsysEnsureSysmemFlushBufferInitialized(pGpu, pKernelMemorySystem)*NVRM: Failed to allocate memory for numa information. **NVRM: Failed to allocate memory for numa information. 
*NVRM: ATS supported **NVRM: ATS supported *PDB_PROP_GPU_C2C_SYSMEM*call to kmemsysSetupCoherentCpuLink_IMPL*kmemsysSetupCoherentCpuLink(pGpu, pKernelMemorySystem, NV_FALSE)**kmemsysSetupCoherentCpuLink(pGpu, pKernelMemorySystem, NV_FALSE)*call to rmcfg_IsHOPPER_CLASSIC_GPUS*call to kgmmuCheckAndDecideBigPageSize_DISPATCH*kgmmuCheckAndDecideBigPageSize_HAL(pGpu, pKernelGmmu)**kgmmuCheckAndDecideBigPageSize_HAL(pGpu, pKernelGmmu)*(pStaticConfig != NULL)**(pStaticConfig != NULL)*call to kmemsysInitStaticConfig_KERNEL*kmemsysInitStaticConfig_HAL( pGpu, pKernelMemorySystem, pStaticConfig)**kmemsysInitStaticConfig_HAL( pGpu, pKernelMemorySystem, pStaticConfig)*call to kmemsysInitRegistryOverrides*kmemsysInitFlushSysmemBuffer_HAL(pGpu, pKernelMemorySystem)**kmemsysInitFlushSysmemBuffer_HAL(pGpu, pKernelMemorySystem)*RmL2CleanFbPull**RmL2CleanFbPull*RMOverrideToGMK**RMOverrideToGMK*RmOverrideCoherentCpuFbBase**RmOverrideCoherentCpuFbBase*RmOverrideNonPasidAtsSupport**RmOverrideNonPasidAtsSupport*nonPasIdAtsOverride*src/kernel/gpu/mem_sys/kern_mem_sys_ctrl.c**src/kernel/gpu/mem_sys/kern_mem_sys_ctrl.c*fbLtcInfoForFbp**fbLtcInfoForFbp*ltcMask*fbDynamicBlacklistedPages**fbDynamicBlacklistedPages*validEntries*writeMode*NVRM: Invalid write mode : %d **NVRM: Invalid write mode : %d *bypassMode*NVRM: Invalid bypass mode : %d **NVRM: Invalid bypass mode : %d *rcmState*call to kmemsysFlushGpuCache_IMPL*bWriteback*NVRM: Invalid aperture. **NVRM: Invalid aperture. 
*NVRM: Invalid array size (0x%x) **NVRM: Invalid array size (0x%x) *NVRM: Invalid memBlock size (0x%llx) **NVRM: Invalid memBlock size (0x%llx) *NVRM: Invalid FLUSH_MODE 0x%x **NVRM: Invalid FLUSH_MODE 0x%x *kbusSendSysmembar(pGpu, pKernelBus)**kbusSendSysmembar(pGpu, pKernelBus)*call to _kmemsysSetZbcReferenced*subdeviceGetByInstance(RES_GET_CLIENT(pDevice), RES_GET_HANDLE(pDevice), pParams->subdevInstance, &pSubdevice)**subdeviceGetByInstance(RES_GET_CLIENT(pDevice), RES_GET_HANDLE(pDevice), pParams->subdevInstance, &pSubdevice)*IS_GFID_VF(gfid) || pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL**IS_GFID_VF(gfid) || pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL*call to pmaGetUgpuTotalMemory*ugpuTotalMemory**ugpuTotalMemory*call to pmaGetUgpuFreeMemory*ugpuFreeMemory**ugpuFreeMemory*bStaticBar1WriteCombined*staticBar1StartOffset*staticBar1Size*numaMemAddr*numaMemSize*numaMemOffset*NVRM: retired page address 0x%llx not in NUMA region **NVRM: retired page address 0x%llx not in NUMA region *call to pmaGetClientBlacklistedPages*numChunks <= NV2080_CTRL_FB_OFFLINED_PAGES_MAX_PAGES**numChunks <= NV2080_CTRL_FB_OFFLINED_PAGES_MAX_PAGES*pOsOfflinedParams*offlinedPages**offlinedPages*pageAddress*pFbInfoParams*call to _kmemsysGetFbInfos*bIsClientMIGMonitor*bIsClientMIGProfiler*fbInfoListIndicesUnset*call to kmemsysGetFbInfos_DISPATCH*pFbInfos*(kmigmgrGetMemoryPartitionHeapFromDevice(pGpu, pKernelMIGManager, pDevice, &pMemoryPartitionHeap) == NV_OK)**(kmigmgrGetMemoryPartitionHeapFromDevice(pGpu, pKernelMIGManager, pDevice, &pMemoryPartitionHeap) == NV_OK)*NVRM: [zero-FB, No local RAM] TOTAL_RAM_SIZE = 0 **NVRM: [zero-FB, No local RAM] TOTAL_RAM_SIZE = 0 *NvU64_HI32(bytesTotal >> 10) == 0**NvU64_HI32(bytesTotal >> 10) == 0*heapSizeKb*NvU64_HI32(size >> 10) == 0**NvU64_HI32(size >> 10) == 0*0 == NvU64_HI32(pMemoryManager->Ram.fbTotalMemSizeMb << 10)**0 == NvU64_HI32(pMemoryManager->Ram.fbTotalMemSizeMb << 10)*NVRM: [zero-FB, No local RAM] RAM_SIZE = 0 
**NVRM: [zero-FB, No local RAM] RAM_SIZE = 0 *NVRM: [zero-FB, No local RAM] USABLE_RAM_SIZE = 0 **NVRM: [zero-FB, No local RAM] USABLE_RAM_SIZE = 0 *0 == NvU64_HI32(pMemoryManager->Ram.fbUsableMemSize >> 10)**0 == NvU64_HI32(pMemoryManager->Ram.fbUsableMemSize >> 10)*NVRM: [zero-FB, No local RAM HEAP] HEAP_SIZE = 0 **NVRM: [zero-FB, No local RAM HEAP] HEAP_SIZE = 0 *call to heapGetUsableSize_IMPL*call to pmaGetLargestFree*call to memmgrCalculateHeapOffsetWithGSP_46f6a7*(NvU64) data == pKernelMemorySystem->fbOverrideStartKb**(NvU64) data == pKernelMemorySystem->fbOverrideStartKb*call to heapGetBase_IMPL*((NvU64) data << 10ULL) == heapBase**((NvU64) data << 10ULL) == heapBase*NVRM: [zero-FB, No local HEAP] HEAP_SIZE = 0 **NVRM: [zero-FB, No local HEAP] HEAP_SIZE = 0 *call to memmgrGetFreeMemoryForAllMIGGPUInstances_IMPL*call to memmgrGetTotalMemoryForAllMIGGPUInstances_IMPL*NvU64_HI32(bytesFree >> 10) == 0**NvU64_HI32(bytesFree >> 10) == 0*call to memmgrGetReservedHeapSizeMb_DISPATCH*call to memmgrGetMappableRamSizeMb_IMPL*bytesTotal*NvU64_HI32(rsvdSize) == 0**NvU64_HI32(rsvdSize) == 0*call to pmaGetFreeProtectedMemory*physIdx*pFbInfos[i].index == pRpcParams->fbInfoList[physIdx].index**pFbInfos[i].index == pRpcParams->fbInfoList[physIdx].index*nvPopCount64(fbInfoListIndicesUnset) == 0**nvPopCount64(fbInfoListIndicesUnset) == 0*src/kernel/gpu/mem_sys/zbc_api.c**src/kernel/gpu/mem_sys/zbc_api.c**zbcTableSizes*indexStart*indexEnd*src/kernel/gpu/mig_mgr/arch/ampere/kmigmgr_ga100.c**src/kernel/gpu/mig_mgr/arch/ampere/kmigmgr_ga100.c*pCIProfiles*profiles**profiles*call to kmigmgrIsA100ReducedConfig*spanLen*NVRM: Unsupported swizzid 0x%x **NVRM: Unsupported swizzid 0x%x *maxValidSwizzId*NVRM: Unsupported mem size flag 0x%x **NVRM: Unsupported mem size flag 0x%x *call to kmigmgrIsGPUInstanceFlagValid_DISPATCH*call to kmigmgrIsGPUInstanceFlagLegal_IMPL*kmigmgrIsGPUInstanceFlagLegal(pGpu, pKernelMIGManager, gpuInstanceFlag)**kmigmgrIsGPUInstanceFlagLegal(pGpu, 
pKernelMIGManager, gpuInstanceFlag)**engines*call to kfifoEngineListHasChannel_IMPL*call to kmigmgrIsMIGNvlinkP2PSupportOverridden*call to kbusIsGpuP2pAlive_IMPL*!kbusIsGpuP2pAlive(pGpu, GPU_GET_KERNEL_BUS(pGpu))**!kbusIsGpuP2pAlive(pGpu, GPU_GET_KERNEL_BUS(pGpu))*heapInfo(pHeap, &unused, &unused, &unused, &base, &largestFreeSize)**heapInfo(pHeap, &unused, &unused, &unused, &base, &largestFreeSize)*freeBlock*src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb100.c**src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb100.c*src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb10b.c**src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb10b.c*pProfile != NULL**pProfile != NULL*pStaticInfo->pCIProfiles != NULL**pStaticInfo->pCIProfiles != NULL*maxGpc*actualMaxGpc*compSize*NVRM: Found matching Compute Profile:%d for gpcCount=%d **NVRM: Found matching Compute Profile:%d for gpcCount=%d **pProfile*NVRM: Found no Compute Profile for gpcCount=%d **NVRM: Found no Compute Profile for gpcCount=%d *call to s_kmigmgrIsSingleSliceConfig_GB202*src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb202.c**src/kernel/gpu/mig_mgr/arch/blackwell/kmigmgr_gb202.c*actualMigCount*isSingleSliceProfile*smallestComputeSizeFlag*smallestComputeSizeFlag != KMIGMGR_COMPUTE_SIZE_INVALID*src/kernel/gpu/mig_mgr/arch/hopper/kmigmgr_gh100.c**smallestComputeSizeFlag != KMIGMGR_COMPUTE_SIZE_INVALID**src/kernel/gpu/mig_mgr/arch/hopper/kmigmgr_gh100.c*call to bitVectorSetRange_IMPL*NVRM: Setup CE mappings on LCEs 0x%x as part of GPU Instance creation **NVRM: Setup CE mappings on LCEs 0x%x as part of GPU Instance creation *call to kceGetMappingsForMIGGpuInstance_DISPATCH*call to kceApplyMIGMappings_IMPL*NVRM: Failed to apply MIG Mappings on LCE mask 0x%x **NVRM: Failed to apply MIG Mappings on LCE mask 0x%x *NVRM: Unmapping LCEs 0x%x as part of GPU Instance clean up **NVRM: Unmapping LCEs 0x%x as part of GPU Instance clean up *NVRM: Failure to clear MIG Mappings on lceAvailableMask 0x%x **NVRM: Failure to clear MIG Mappings on 
lceAvailableMask 0x%x *NVRM: Failed to update PCE-LCE mappings **NVRM: Failed to update PCE-LCE mappings *ppComputeInstanceSubscription**ppComputeInstanceSubscription*ppComputeInstanceSubscription != NULL*src/kernel/gpu/mig_mgr/compute_instance_subscription.c**ppComputeInstanceSubscription != NULL**src/kernel/gpu/mig_mgr/compute_instance_subscription.c*call to serverutilFindChildRefByType*call to gisubscriptionCleanupOnUnsubscribe_IMPL*kmigmgrDecRefCount(pComputeInstanceSubscription->pMIGComputeInstance->pShare)**kmigmgrDecRefCount(pComputeInstanceSubscription->pMIGComputeInstance->pShare)*pComputeInstanceSubscriptionSrc**pMIGComputeInstance*kmigmgrIncRefCount(pComputeInstanceSubscription->pMIGComputeInstance->pShare)**kmigmgrIncRefCount(pComputeInstanceSubscription->pMIGComputeInstance->pShare)*pRmAllocParams*call to cisubscriptionCopyConstruct_IMPL*Compute instance Subscription failed: MIG GPU partitioning not done**Compute instance Subscription failed: MIG GPU partitioning not done*NVRM: Capability validation failed: ID 0x%0x! **NVRM: Capability validation failed: ID 0x%0x! *kmigmgrIncRefCount(pMIGComputeInstance->pShare)**kmigmgrIncRefCount(pMIGComputeInstance->pShare)*!pGPUInstanceSubscription->bDeviceProfiling*src/kernel/gpu/mig_mgr/gpu_instance_subscription.c**!pGPUInstanceSubscription->bDeviceProfiling**src/kernel/gpu/mig_mgr/gpu_instance_subscription.c*call to kmigmgrComputeProfileGetCapacity_IMPL*handleCount*call to gisubscriptionShouldClassBeFreedOnUnsubscribe_IMPL*NVRM: Will be freeing resource class id 0x%x on unsubscription! **NVRM: Will be freeing resource class id 0x%x on unsubscription! 
**pHandles*i < handleCount**i < handleCount*i == handleCount**i == handleCount*bShouldFree*ciInfo*call to kmigmgrCreateComputeInstances_DISPATCH*pGPUInstance*kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pGPUInstance, NV_FALSE, restore, &pParams->id, pParams->bCreateCap)**kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pGPUInstance, NV_FALSE, restore, &pParams->id, pParams->bCreateCap)*call to _gisubscriptionAllocKernelWatchdog*_gisubscriptionAllocKernelWatchdog(pGpu, &pGPUInstance->MIGComputeInstance[pParams->id])**_gisubscriptionAllocKernelWatchdog(pGpu, &pGPUInstance->MIGComputeInstance[pParams->id])*pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pGPUInstanceSubscription), RES_GET_HANDLE(pGPUInstanceSubscription), NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pGPUInstanceSubscription), RES_GET_HANDLE(pGPUInstanceSubscription), NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE, ¶ms, sizeof(params))*sharedEngFlags*gfxGpcCount*veidOffset*gpcIds**gpcIds*gpcIdx*call to bitVectorToRaw_IMPL*enginesMask**enginesMask*pGpu->getProperty(pGpu, PDB_PROP_GPU_MIG_SUPPORTED)**pGpu->getProperty(pGpu, PDB_PROP_GPU_MIG_SUPPORTED)*IS_MIG_IN_USE(pGpu)**IS_MIG_IN_USE(pGpu)*execPartUuid**execPartUuid*ciIdx*bEnumerateAll*call to cisubscriptionGetComputeInstanceSubscription_IMPL*call to cisubscriptionGetMIGComputeInstance*pTargetComputeInstanceInfo**pTargetComputeInstanceInfo**execPartInfo*pOutInfo**pOutInfo*call to kmigmgrEngBitVectorXlate_IMPL*kmigmgrEngBitVectorXlate(&pKernelMIGGpuInstance->resourceAllocation.localEngines, &pMIGComputeInstance->resourceAllocation.engines, &pKernelMIGGpuInstance->resourceAllocation.engines, &globalEngines)**kmigmgrEngBitVectorXlate(&pKernelMIGGpuInstance->resourceAllocation.localEngines, &pMIGComputeInstance->resourceAllocation.engines, &pKernelMIGGpuInstance->resourceAllocation.engines, &globalEngines)*call to kmigmgrCountEnginesInRange_IMPL*call to 
kmigmgrGetAsyncCERange_DISPATCH*ceCount*nvEncCount*nvDecCount*nvJpgCount*ofaCount*sharedEngFlag*veidStartOffset*NVRM: Non-privileged context issued privileged cmd **NVRM: Non-privileged context issued privileged cmd *execPartId < KMIGMGR_MAX_COMPUTE_INSTANCES**execPartId < KMIGMGR_MAX_COMPUTE_INSTANCES*pKernelMIGGpuInstance->MIGComputeInstance[execPartId].bValid**pKernelMIGGpuInstance->MIGComputeInstance[execPartId].bValid*execPartIdx*call to _gisubscriptionFreeKernelWatchdog*_gisubscriptionFreeKernelWatchdog(pGpu, &pKernelMIGGpuInstance->MIGComputeInstance[pParams->execPartId[execPartIdx]])**_gisubscriptionFreeKernelWatchdog(pGpu, &pKernelMIGGpuInstance->MIGComputeInstance[pParams->execPartId[execPartIdx]])*call to kmigmgrDeleteComputeInstance_IMPL*kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, pParams->execPartId[execPartIdx], NV_FALSE)**kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, pParams->execPartId[execPartIdx], NV_FALSE)*call to gpumgrCacheDestroyComputeInstance_IMPL*requestFlags*export*call to gpumgrCacheCreateComputeInstance_IMPL*pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_EXPORT, &export, sizeof(export))**pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_EXPORT, &export, sizeof(export))*kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, NV_FALSE, restore, &pParams->execPartId[i], NV_TRUE)**kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, NV_FALSE, restore, &pParams->execPartId[i], NV_TRUE)*_gisubscriptionAllocKernelWatchdog(pGpu, &pKernelMIGGpuInstance->MIGComputeInstance[pParams->execPartId[i]])**_gisubscriptionAllocKernelWatchdog(pGpu, &pKernelMIGGpuInstance->MIGComputeInstance[pParams->execPartId[i]])*NVRM: 
Freeing KERNEL_WATCHDOG object for CI hClient 0x%x, gfxGpcCount(%d) **NVRM: Freeing KERNEL_WATCHDOG object for CI hClient 0x%x, gfxGpcCount(%d) *serverutilGetResourceRefWithType(pMIGComputeInstance->instanceHandles.hClient, KERNEL_WATCHDOG_OBJECT_ID, classId(KernelWatchdog), &pKernelWatchdogRef)**serverutilGetResourceRefWithType(pMIGComputeInstance->instanceHandles.hClient, KERNEL_WATCHDOG_OBJECT_ID, classId(KernelWatchdog), &pKernelWatchdogRef)*pKernelWatchdogRef**pKernelWatchdog*pKernelWatchdog != NULL**pKernelWatchdog != NULL*krcWatchdogShutdown(pGpu, pKernelRc, pKernelWatchdog)**krcWatchdogShutdown(pGpu, pKernelRc, pKernelWatchdog)*NVRM: Allocating KERNEL_WATCHDOG object for CI hClient 0x%x, hSubdevice 0x%x, gfxGpcCount(%d) **NVRM: Allocating KERNEL_WATCHDOG object for CI hClient 0x%x, hSubdevice 0x%x, gfxGpcCount(%d) *pRmApi->AllocWithHandle(pRmApi, pMIGComputeInstance->instanceHandles.hClient, pMIGComputeInstance->instanceHandles.hSubdevice, KERNEL_WATCHDOG_OBJECT_ID, KERNEL_WATCHDOG, NvP64_NULL, 0)**pRmApi->AllocWithHandle(pRmApi, pMIGComputeInstance->instanceHandles.hClient, pMIGComputeInstance->instanceHandles.hSubdevice, KERNEL_WATCHDOG_OBJECT_ID, KERNEL_WATCHDOG, NvP64_NULL, 0)*krcWatchdogInit(pGpu, pKernelRc, pKernelWatchdog)**krcWatchdogInit(pGpu, pKernelRc, pKernelWatchdog)*ppGPUInstanceSubscription**ppGPUInstanceSubscription*NULL != ppGPUInstanceSubscription**NULL != ppGPUInstanceSubscription*call to kmigmgrClearDeviceProfilingInUse_IMPL*bDeviceProfiling*kmigmgrDecRefCount(pGPUInstanceSubscription->pKernelMIGGpuInstance->pShare)**kmigmgrDecRefCount(pGPUInstanceSubscription->pKernelMIGGpuInstance->pShare)*NVRM: Client 0x%x unsubscribed from swizzid 0x%0x. **NVRM: Client 0x%x unsubscribed from swizzid 0x%0x. 
*pGPUInstanceSubscriptionSrc*NVRM: Subscription failed: Duping not allowed for Device-level-SwizzId **NVRM: Subscription failed: Duping not allowed for Device-level-SwizzId *bIsDuped*kmigmgrIncRefCount(pGPUInstanceSubscription->pKernelMIGGpuInstance->pShare)**kmigmgrIncRefCount(pGPUInstanceSubscription->pKernelMIGGpuInstance->pShare)*call to gisubscriptionCopyConstruct_IMPL*Subscription failed: MIG not enabled **Subscription failed: MIG not enabled *call to rmapiControlCacheSetMode*call to kmigmgrIsDeviceProfilingInUse_IMPL*NVRM: Subscription failed: Device-Level-Monitoring already in use **NVRM: Subscription failed: Device-Level-Monitoring already in use *call to kmigmgrSetDeviceProfilingInUse_IMPL*kmigmgrSetDeviceProfilingInUse(pGpu, pKernelMIGManager)**kmigmgrSetDeviceProfilingInUse(pGpu, pKernelMIGManager)*Subscription failed: MIG GPU instancing not done **Subscription failed: MIG GPU instancing not done *call to kmigmgrIsSwizzIdInUse_IMPL*NVRM: Subscription failed: swizzid 0x%0x doesn't exist! **NVRM: Subscription failed: swizzid 0x%0x doesn't exist! *call to _gisubscriptionClientSharesVASCrossPartition*NVRM: Subscription failed: Client shares VAS with client not subscribed to target GPU instance! **NVRM: Subscription failed: Client shares VAS with client not subscribed to target GPU instance! *kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, &pGPUInstanceSubscription->pKernelMIGGpuInstance)**kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, &pGPUInstanceSubscription->pKernelMIGGpuInstance)*NVRM: Capability validation failed: swizzid 0x%0x! **NVRM: Capability validation failed: swizzid 0x%0x! *NVRM: GPU instance ref-counting failed: swizzid 0x%0x! **NVRM: GPU instance ref-counting failed: swizzid 0x%0x! *NVRM: Client 0x%x subscribed to swizzid 0x%0x. **NVRM: Client 0x%x subscribed to swizzid 0x%0x. 
*pGPUInstanceSubscription != NULL**pGPUInstanceSubscription != NULL*serverGetClientUnderLock(&g_resServ, pDevice->hClientShare, &pRsClientShare)**serverGetClientUnderLock(&g_resServ, pDevice->hClientShare, &pRsClientShare)*pRsClientShare*shareRef*bClientShareHasMatchingInstance*src/kernel/gpu/mig_mgr/kernel_mig_manager.c*NVRM: Unrecognized GPU mem partitioning flag 0x%x **src/kernel/gpu/mig_mgr/kernel_mig_manager.c**NVRM: Unrecognized GPU mem partitioning flag 0x%x *NVRM: Unrecognized GPU compute partitioning flag 0x%x **NVRM: Unrecognized GPU compute partitioning flag 0x%x *NVRM: Unrecognized GPU GFX partitioning flag 0x%x **NVRM: Unrecognized GPU GFX partitioning flag 0x%x *NVRM: Unrecognized GPU all media partitioning flag 0x%x **NVRM: Unrecognized GPU all media partitioning flag 0x%x *call to kmigmgrIsCTSAlignmentRequired_DISPATCH*pParams->computeSize < NV2080_CTRL_GPU_PARTITION_FLAG_COMPUTE_SIZE__SIZE**pParams->computeSize < NV2080_CTRL_GPU_PARTITION_FLAG_COMPUTE_SIZE__SIZE*call to kmigmgrComputeProfileSizeToCTSIdRange_IMPL*slotBasisComputeSize != KMIGMGR_COMPUTE_SIZE_INVALID**slotBasisComputeSize != KMIGMGR_COMPUTE_SIZE_INVALID*slotBasisIdRange*!rangeIsEmpty(slotBasisIdRange)**!rangeIsEmpty(slotBasisIdRange)*slotBasisMask*validQueryMask*inUseIdMask*call to kmigmgrGetInvalidCTSIdMask_IMPL*kmigmgrGetInvalidCTSIdMask(pGpu, pKernelMIGManager, ctsId, &invalidMask)**kmigmgrGetInvalidCTSIdMask(pGpu, pKernelMIGManager, ctsId, &invalidMask)*totalSpanCount*availableSpanCount*bCheckClientGI*giComputeSize*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_CHECK_CTS_ID_VALID, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_CHECK_CTS_ID_VALID, ¶ms, sizeof(params))*totalSpans**totalSpans*availableSpans**availableSpans*totalSpansCount*totalProfileCount*availableSpansCount*profileCount*kmigmgrGetComputeProfileFromSize(pGpu, pKernelMIGManager, pParams->computeSize, 
kmigmgrGetComputeProfileFromSize(pGpu, pKernelMIGManager, pParams->computeSize, &profile)
call to kgrmgrGetVeidInUseMask
GPUInstancePseudoMask
veidSlotCount
tempMask
NVRM: CI request is at GPU instance's limit. Using GPU instance's size: %d
call to kmigmgrGetComputeProfileFromSmCount_IMPL
call to kmigmgrGetComputeProfileFromGpcCount_DISPATCH
maxMIG
NVRM: maxMIG(%d) is unsupported
pKernelMIGManager->bBootConfigSupported
IS_MIG_ENABLED(pGpu)
call to _kmigmgrReadBootConfig
_kmigmgrReadBootConfig(pGpu, pKernelMIGManager, &bootConfig)
bootConfig
GIs
partitionInfo
placement
call to _kmigmgrProcessGPUInstanceEntry
_kmigmgrProcessGPUInstanceEntry(pGpu, pKernelMIGManager, &partitionInfo[GIIdx])
GIIdx
CIs
kmigmgrGetComputeProfileFromSize(pGpu, pKernelMIGManager, bootConfig.CIs[CIIdx].flags, &computeProfileInfo)
NVRM: Invalid partition flags 0x%x for CI #%u
kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, partitionInfo[bootConfig.CIs[CIIdx].GIIdx].swizzId, &pKernelMIGGpuInstance)
computeProfileInfo
pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_CREATE, &createParams, sizeof(createParams))
CIIdx
kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, partitionInfo[i].swizzId, &pKernelMIGGpuInstance)
pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_GET, &getParams, sizeof(getParams))
deleteParams
pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE, &deleteParams, sizeof(deleteParams))
_kmigmgrProcessGPUInstanceEntry(pGpu, pKernelMIGManager, &partitionInfo[i])
regStr
RmMIGBootConfigurationGI_%u
pBootConfig
NVRM: Found a GI config regkey '%s': flags=0x%x, placementLo=%llu, placementHi=%llu
call to kmigmgrIsGPUInstanceCombinationValid_DISPATCH
NVRM: Invalid partition flags 0x%x in %s
RmMIGBootConfigurationCIAssignment
NVRM: Found a CI assignment regkey 'RmMIGBootConfigurationCIAssignment': value=%x
bCIAssignmentPresent
RmMIGBootConfigurationCI_%u
NVRM: CI assignment for GI #%u must be 0
NVRM: Regkey 'RmMIGBootConfigurationCIAssignment' is missing
NVRM: Found a CI config regkey '%s': flags=0x%x, placementLo=%u, CEs=%u, DECs=%u, ENCs=%u, JPGs=%u, OFAs=%u
NVRM: Regkey 'RmMIGBootConfigurationGI_%u' is missing
NVRM: Regkey 'RmMIGBootConfigurationCI_%u' is missing
kmigmgrGetInvalidCTSIdMask(pGpu, pKernelMIGManager, i, &mask)
call to kmigmgrGetComputeSizeFromCTSId_IMPL
computeSizeIdRange
call to kmigmgrGetSlotBasisMask_IMPL
kmigmgrGetSlotBasisMask(pGpu, pKernelMIGManager, &slotBasisMask)
kmigmgrGetInvalidCTSIdMask(pGpu, pKernelMIGManager, computeSizeIdRange.lo, &computeSizeIdMask)
slotsPerCTS
pMask != NULL
slotBasisComputeSize
pCtsId
pCtsId != NULL
spanStart < nvPopCount64(slotBasisMask)
NVRM: Compute span start of %d is not aligned
(computeSizeIdRange.lo <= *pCtsId) && (*pCtsId <= computeSizeIdRange.hi)
grCtsIdMap
ctsId != KMIGMGR_CTSID_INVALID
call to kmigmgrGetNextComputeSize_IMPL
call to kmigmgrGetSkylineFromSize_IMPL
!rangeIsEmpty(ctsRange)
validMask
shadowValidCTSIdMask
ctsRange
gfxGrCount
call to kmigmgrIsSmgEnabled
maxRemainingCapacity
idealCTSId
NVRM: Unsupported CTS ID 0x%x
NULL != pInvalidCTSIdMask
gpcSlot
NVRM: GPC count %d doesn't match compute size %d
smCount != 0
pStaticInfo->pCIProfiles->profileCount < 32
indexMask
NVRM: Found no Compute Profile for smCount=%d
NVRM: Profiles aliased. Falling back to GPC look-up
NVRM: Found no Compute Profile for computeSize=%d
ppSkyline
ppSkyline != NULL
pSkylineInfo
pStaticInfo->pSkylineInfo != NULL
skylineTable
NVRM: No skyline for with compute size %d
computeSize <= KMIGMGR_COMPUTE_SIZE_INVALID
computeSizeFlags
call to kmigmgrGetGpuProfileFromFlag_IMPL
kmigmgrGetGpuProfileFromFlag(pGpu, pKernelMIGManager, pParams->partitionFlag, &profile)
kmigmgrComputeProfileGetCapacity(pGpu, pKernelMIGManager, &profile, NULL, RES_GET_CLIENT_HANDLE(pSubdevice), RES_GET_HANDLE(pSubdevice), &params)
maxSmCount
maxPhysicalSlotCount
kmigmgrGetGpuProfileFromFlag(pGpu, pKernelMIGManager, pParams->partitionFlag, &giProfile)
giProfile
pStaticInfo->pCIProfiles->profileCount <= NV_ARRAY_ELEMENTS(pParams->profiles)
kmigmgrComputeProfileGetCapacity(pGpu, pKernelMIGManager, &giProfile, NULL, RES_GET_CLIENT_HANDLE(pSubdevice), RES_GET_HANDLE(pSubdevice), &params)
pProfiles
call to kmigmgrSetMIGState_DISPATCH
kmigmgrSetMIGState(pGpu, GPU_GET_KERNEL_MIG_MANAGER(pGpu), bMemoryPartitioningNeeded, NV_TRUE, NV_FALSE)
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_IMPORT_GPU_INSTANCE, pParams, sizeof(*pParams))
pSave != NULL
call to kmigmgrCreateGPUInstance_IMPL
pSave
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_GPU_INSTANCES, pParams, sizeof(*pParams))
kmigmgrSetMIGState(pGpu, GPU_GET_KERNEL_MIG_MANAGER(pGpu), bMemoryPartitioningNeeded, NV_FALSE, NV_FALSE)
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_EXPORT_GPU_INSTANCE, pParams, sizeof(*pParams))
pRmApi->Control(pRmApi, pRmCtrlParams->hClient, pRmCtrlParams->hObject, NV2080_CTRL_CMD_INTERNAL_MIGMGR_GET_GPU_INSTANCES, pRpcParams, sizeof(*pRpcParams))
NVRM: MIG not supported on this GPU.
NVRM: Entered MIG API with MIG disabled.
validPartitionCount
NVRM: Non privileged client requesting global gpu instance info
validSwizzIdMask
NVRM: Unable to get gpu instance info for swizzId - %d
pResourceAllocation
grEngCount
virtualGpcCount
nvOfaCount
validCTSIdMask
validGfxCTSIdMask
bPartitionError
gpcsPerGr
veidsPerGr
virtualGpcsPerGr
gfxGpcPerGr
pRpcParams->queryPartitionInfo[i].bValid
pParams->queryPartitionInfo[i].swizzId == pRpcParams->queryPartitionInfo[i].swizzId
kmigmgrIsGPUInstanceCombinationValid_HAL(pGpu, pKernelMIGManager, partitionFlag)
_kmigmgrProcessGPUInstanceEntry(pGpu, pKernelMIGManager, &pParams->partitionInfo[i])
_kmigmgrProcessGPUInstanceEntry(pGpu, pKernelMIGManager, &pParams->partitionInfo[j])
call to kmigmgrIsMemoryPartitioningRequested_DISPATCH
kmigmgrSetMIGState(pGpu, pKernelMIGManager, bMemoryPartitioningRequested, NV_TRUE, NV_FALSE)
kmigmgrCreateGPUInstance(pGpu, pKernelMIGManager, pEntry->swizzId, pEntry->uuid, request, pEntry->bValid, NV_TRUE )
call to gpumgrCacheDestroyGpuInstance_IMPL
call to gpumgrCacheCreateGpuInstance_IMPL
kmigmgrSetMIGState(pGpu, pKernelMIGManager, bMemoryPartitioningNeeded, NV_FALSE, NV_FALSE)
kmigmgrSetMIGState(pGpu, pKernelMIGManager, bMemoryPartitioningRequested, NV_FALSE, NV_FALSE)
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_PARTITIONING_MODE, pParams, sizeof(*pParams))
call to kmigmgrSetPartitioningMode_IMPL
kmigmgrSetPartitioningMode(pGpu, pKernelMIGManager)
call to kmigmgrDescribeGPUInstances_IMPL
kmigmgrIsMIGSupported(pGpu, pKernelMIGManager)
totalPartitionCount
NVRM: MIG Mode has not been turned on.
swizzId
execPartCount <= NVC637_CTRL_MAX_EXEC_PARTITIONS
call to kvgpumgrGetHostVgpuDeviceFromGfid
kvgpumgrGetHostVgpuDeviceFromGfid(pGpu->gpuId, gfid, &pKernelHostVgpuDevice)
serverGetClientUnderLock(&g_resServ, pKernelHostVgpuDevice->hMigClient, &pRsClient)
subdeviceGetByInstance(pRsClient, pKernelHostVgpuDevice->hMigDevice, 0, &pSubdevice)
gisubscriptionGetGPUInstanceSubscription(pRsClient, RES_GET_HANDLE(pSubdevice), &pGPUInstanceSubscription)
execPartExportParams
pExecPartId
pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_EXPORT, &execPartExportParams, sizeof(execPartExportParams))
kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, NV_FALSE, restore, &pExecPartId[i], NV_TRUE)
kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, pExecPartId[i], NV_FALSE)
call to gpumgrGetSystemMIGInstanceTopo_IMPL
saveGI
pTopologySave
call to gpumgrSetSystemMIGEnabled_IMPL
savedGIIdx
giInfo
call to kmigmgrSaveComputeInstances_IMPL
saveCI
kmigmgrSaveComputeInstances(pGpu, pKernelMIGManager, pKernelMIGGPUInstance, pGPUInstanceSave->saveCI)
call to kgrmgrIsGlobalCtxBufSupported_IMPL
pBufInfo
pBufInfo != NULL
call to kgraphicsInitCtxBufPool_IMPL
kgraphicsInitCtxBufPool(pGpu, pKernelGraphics, pHeap)
call to kgraphicsGetCtxBufPool_IMPL
pGrCtxBufPool
ctxBufPoolReserve(pGpu, pGrCtxBufPool, &globalCtxBufInfo[0], bufCount)
call to kmigmgrDestroyGPUInstanceGrBufPools_IMPL
kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32)rmEngineType, ENGINE_INFO_TYPE_RUNLIST, &runlistId)
kfifoGetRunlistBufInfo(pGpu, pKernelFifo, runlistId, NV_TRUE, 0, &rlSize, &rlAlign)
runlistBufInfo
ctxBufPoolInit(pGpu, pHeap, &pKernelFifo->pRunlistBufPool[rmEngineType])
pKernelFifo->pRunlistBufPool[rmEngineType] != NULL
ctxBufPoolReserve(pGpu, pKernelFifo->pRunlistBufPool[rmEngineType], &runlistBufInfo[0], NUM_BUFFERS_PER_RUNLIST)
NVRM: Assumption that PMA is empty at this time is broken
NVRM: free space = 0x%llx bytes total space = 0x%llx bytes
NVRM: This means PMA allocations may trigger UVM evictions at this point causing deadlocks!
call to kmigmgrInitGPUInstanceRunlistBufPools_IMPL
kmigmgrInitGPUInstanceRunlistBufPools(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
call to kmigmgrInitGPUInstanceGrBufPools_IMPL
kmigmgrInitGPUInstanceGrBufPools(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
pKernelMIGGpuInstance->pMemoryPartitionHeap != NULL
rmMemPoolSetup((void*)pKernelMIGGpuInstance->pMemoryPartitionHeap->pPmaObject, &pKernelMIGGpuInstance->pPageTableMemPool, version)
!kmigmgrIsSwizzIdInUse(pGpu, pKernelMIGManager, swizzId)
call to kmigmgrEnableAllLCEs_IMPL
kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_TRUE)
call to kmigmgrSetGPUInstanceInfo_IMPL
kmigmgrSetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, pUuid, params)
call to kmigmgrSetSwizzIdInUse_IMPL
kmigmgrSetSwizzIdInUse(pGpu, pKernelMIGManager, swizzId)
kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, &pKernelMIGGpuInstance)
call to kmigmgrAllocGPUInstanceHandles_IMPL
kmigmgrAllocGPUInstanceHandles(pGpu, swizzId, pKernelMIGGpuInstance)
call to kmigmgrInitGPUInstanceBufPools_IMPL
kmigmgrInitGPUInstanceBufPools(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
call to kmigmgrCreateGPUInstanceRunlists_DISPATCH
call to kmemsysInitMIGMemoryPartitionTable_DISPATCH
kmemsysInitMIGMemoryPartitionTable_HAL(pGpu, pKernelMemorySystem)
kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, kmigmgrMakeGIReference(pKernelMIGGpuInstance), rmEngineType, &localEngineType)
call to fecsSetRoutingInfo
call to kmigmgrInitGPUInstancePool_IMPL
kmigmgrInitGPUInstancePool(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
call to kmigmgrInitGPUInstanceScrubber_IMPL
kmigmgrInitGPUInstanceScrubber(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
call to kmigmgrGpuInstanceSupportVgpuTimeslice_DISPATCH
call to kvgpuMgrReserveVgpuPlacementInfoPerGI
kvgpuMgrReserveVgpuPlacementInfoPerGI(pGpu, swizzId)
call to intrRefetchInterruptTable_IMPL
intrRefetchInterruptTable_HAL(pGpu, GPU_GET_INTR(pGpu))
call to osRmCapRegisterSmcPartition
osRmCapRegisterSmcPartition(pGpu->pOsRmCaps, &pKernelMIGGpuInstance->pOsRmCaps, pKernelMIGGpuInstance->swizzId)
NVRM: CREATING GPU instance
call to kmigmgrPrintGPUInstanceInfo_IMPL
NVRM: Invalidating swizzId - %d.
call to kmigmgrInvalidateGPUInstance_IMPL
kmigmgrInvalidateGPUInstance(pGpu, pKernelMIGManager, swizzId, NV_FALSE)
call to kgrmgrDiscoverMaxGlobalCtxBufSizes_IMPL
pKGr
kgrmgrDiscoverMaxGlobalCtxBufSizes(pGpu, pKernelGraphicsManager, pKGr, bMemoryPartitioningNeeded)
call to kmigmgrDisableWatchdog_IMPL
kmigmgrDisableWatchdog(pGpu, pKernelMIGManager)
memmgrDestroyInternalChannels(pGpu, pMemoryManager)
call to kmigmgrCreateGPUInstanceCheck_DISPATCH
kmigmgrCreateGPUInstanceCheck_HAL(pGpu, pKernelMIGManager, bMemoryPartitioningNeeded)
gpuDeleteClassFromClassDBByClassId(pGpu, NV50_P2P)
call to memmgrAllocMIGMemoryAllocationInternalHandles_IMPL
memmgrAllocMIGMemoryAllocationInternalHandles(pGpu, pMemoryManager)
pKernelFifo->pppRunlistBufMemDesc == NULL
pppMemDesc
**pppMemDesc
call to portMemAllocNonPaged
*pppMemDesc != NULL
call to kmemsysPopulateMIGGPUInstanceMemConfig_DISPATCH
kmemsysPopulateMIGGPUInstanceMemConfig_HAL(pGpu, pKernelMemorySystem)
call to memmgrFreeMIGMemoryAllocationInternalHandles_IMPL
kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_FALSE)
call to gpuAddClassToClassDBByClassId_IMPL
gpuAddClassToClassDBByClassId(pGpu, NV50_P2P)
gpuFabricProbeResume(pGpu->pGpuFabricProbeInfoKernel)
memmgrInitInternalChannels(pGpu, pMemoryManager)
kgraphicsLoadStaticInfo(pGpu, pKGr, KMIGMGR_SWIZZID_INVALID)
call to kmigmgrRestoreWatchdog_IMPL
kmigmgrRestoreWatchdog(pGpu, pKernelMIGManager)
kgraphicsCreateGoldenImageChannel(pGpu, pKGr)
call to kmigmgrApplyDefaultCeMappings_DISPATCH
NVRM: %s client %x device %x currently subscribed to swizzId %u
Kernel
Usermode
(pKernelMIGGpuInstance->pPageTableMemPool == NULL)
NVRM: page table memory pool not setup
pPageTableMemPool
call to kgraphicsDestroyCtxBufPool_IMPL
bMemoryPartitionScrubberInitialized
scrubberConstruct(pGpu, pKernelMIGGpuInstance->pMemoryPartitionHeap)
NVRM: No valid gpu instance with SwizzId - %d found
call to kmigmgrIsGPUInstanceReadyToBeDestroyed_IMPL
NVRM: Gpu instance with SwizzId - %d still in use by other clients
call to kmigmgrPrintSubscribingClients_IMPL
NVRM: Cannot destroy gpu instance %u with valid compute instance %d
NVRM: FREEING GPU INSTANCE
call to kmigmgrInvalidateGr_IMPL
kmigmgrInvalidateGr(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, engineIdx)
call to fecsClearRoutingInfo
call to kmigmgrFreeGPUInstanceHandles_IMPL
call to kmigmgrClearEnginesInUse_IMPL
kmigmgrClearEnginesInUse(pGpu, pKernelMIGManager, &pKernelMIGGpuInstance->resourceAllocation.engines)
call to kmigmgrClearSwizzIdInUse_IMPL
kmigmgrClearSwizzIdInUse(pGpu, pKernelMIGManager, swizzId)
!(NVBIT64(swizzId) & pKernelMIGManager->swizzIdInUseMask)
call to kmigmgrDestroyGPUInstanceScrubber_IMPL
call to kmigmgrDestroyGPUInstancePool_IMPL
call to kvgpuMgrClearVgpuPlacementInfoPerGI
kvgpuMgrClearVgpuPlacementInfoPerGI(pGpu, swizzId)
call to kmigmgrDeleteGPUInstanceRunlists_DISPATCH
kmigmgrDeleteGPUInstanceRunlists_HAL(pGpu, pKernelMIGManager, pKernelMIGGpuInstance)
call to kmigmgrDestroyGPUInstanceRunlistBufPools_IMPL
call to memmgrFreeMIGGPUInstanceMemory_IMPL
memmgrFreeMIGGPUInstanceMemory(pGpu, pMemoryManager, swizzId, pKernelMIGGpuInstance->hMemory, &pKernelMIGGpuInstance->pMemoryPartitionHeap)
pShare
call to kmigmgrInitGPUInstanceInfo_IMPL
call to kmigmgrInvalidateGrGpcMapping_IMPL
kmigmgrInvalidateGrGpcMapping(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, grIdx)
call to kgrmgrClearVeidsForGrIdx_IMPL
call to kmigmgrSetCTSIdInUse_IMPL
call to kgraphicsClearCtxBufferInfo_IMPL
NVRM: Invalid swizzId - %d.
bAssigning
checkGrs
pConfigRequestsPerCi
pConfigRequestsPerCi[localIdx].ctsId != KMIGMGR_CTSID_INVALID
NVRM: Invalid GPC count - %d requested for GrIdx - %d.
call to kgrmgrAllocVeidsForGrIdx_IMPL
kgrmgrAllocVeidsForGrIdx(pGpu, pKernelGraphicsManager, engineIdx, pConfigRequestsPerCi[localIdx].veidSpanStart, pConfigRequestsPerCi[localIdx].profile.veidCount, pKernelMIGGpuInstance)
call to _kmigmgrPrintComputeInstances
call to kgrmgrDiscoverMaxLocalCtxBufInfo_IMPL
NVRM: Failed to configure GPU instance. Invalidating GRID - %d
NVRM: %s
----------------------------------------------------
NVRM: | %14s | %14s | %14s |
SwizzId
GR Count
Gpc Count
NVRM: | %14d | %14d | %14d |
pComputeResourceAllocation
call to kmigmgrEngineTypeXlate_IMPL
kmigmgrEngineTypeXlate(&pComputeResourceAllocation->localEngines, RM_ENGINE_TYPE_GR(0), &pComputeResourceAllocation->engines, &rmEngineType)
NVRM: | %23s | %23s |
Gr Engine IDX
GPC Mask
NVRM: | %23d | %23X |
GPC Count
CIID < NV_ARRAY_ELEMENTS(pKernelMIGGpuInstance->MIGComputeInstance)
NVRM: Compute Instance with id - %d still in use by other clients
pConfigRequestPerCi
pConfigRequestPerCi != NULL
!bitVectorTestAllCleared(&grEngines)
call to kmigmgrConfigureGPUInstance_IMPL
kmigmgrConfigureGPUInstance(pGpu, pKernelMIGManager, swizzId, pConfigRequestPerCi, updateEngMask)
call to kmigmgrMakeCIReference_IMPL
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_GR(0), &globalRmEngType)
call to kmigmgrFreeComputeInstanceHandles_IMPL
call to kmigmgrReleaseComputeInstanceEngines_IMPL
call to kccuDeInitVgpuMigSharedBuffer_IMPL
NVRM: De-initialization process of the MIG GPM buffer for vGPU failed.
pMIGComputeInstance != NULL
pGlobalMask
pLocalMask
globalEngineType
pCIIDs
params.type == KMIGMGR_CREATE_COMPUTE_INSTANCE_PARAMS_TYPE_RESTORE
restore
params.inst.restore.pComputeInstanceSave != NULL
params.inst.restore.pComputeInstanceSave->bValid
!bQuery
!pMIGComputeInstance->bValid
call to kmigmgrXlateSpanStartToCTSId_IMPL
kmigmgrXlateSpanStartToCTSId(pGpu, pKernelMIGManager, info.computeSize, info.spanStart, &pConfigRequestPerCi[0].ctsId)
call to kmigmgrIsCTSIdAvailable_IMPL
kmigmgrIsCTSIdAvailable(pGpu, pKernelMIGManager, pKernelMIGGpuInstance->pProfile->validCTSIdMask, pKernelMIGGpuInstance->ctsIdsInUseMask, pConfigRequestPerCi[0].ctsId)
tempGpcMask
call to bitVectorFromRaw_IMPL
call to kmigmgrGetLocalEngineMask_IMPL
kmigmgrEngineTypeXlate(&pComputeResourceAllocation->localEngines, RM_ENGINE_TYPE_GR(0), &pComputeResourceAllocation->engines, &localEngineType)
veidSpanStart
shadowVeidInUseMask
call to kgrmgrCheckVeidsRequest_IMPL
kgrmgrCheckVeidsRequest(pGpu, pKernelGraphicsManager, &shadowVeidInUseMask, pConfigRequestPerCi[0].profile.veidCount, &pConfigRequestPerCi[0].veidSpanStart, pKernelMIGGpuInstance)
pKernelMIGGpuInstance->MIGComputeInstance[CIIdx].id == KMIGMGR_COMPUTE_INSTANCE_ID_INVALID
call to osRmCapRegisterSmcExecutionPartition
osRmCapRegisterSmcExecutionPartition(pKernelMIGGpuInstance->pOsRmCaps, &pKernelMIGGpuInstance->MIGComputeInstance[CIIdx].pOsRmCaps, pKernelMIGGpuInstance->MIGComputeInstance[CIIdx].id)
serverAllocShare(&g_resServ, classInfo(RsShared), &pKernelMIGGpuInstance->MIGComputeInstance[CIIdx].pShare)
call to kmigmgrAllocComputeInstanceHandles_IMPL
kmigmgrAllocComputeInstanceHandles(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, &pKernelMIGGpuInstance->MIGComputeInstance[CIIdx])
kmigmgrEngineTypeXlate(&pComputeResourceAllocation->localEngines, RM_ENGINE_TYPE_GR(0), &pComputeResourceAllocation->engines, &globalEngineType)
kmigmgrEngineTypeXlate(&pResourceAllocation->localEngines, globalEngineType, &pResourceAllocation->engines, &globalEngineType)
pComputeInstanceInfo
pComputeInstanceInfo != NULL
inUseGpcCount
call to kmigmgrGetComputeProfileForRequest_IMPL
kmigmgrGetComputeProfileForRequest(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, smCount, gpcCount, computeSize, &ciProfile)
ciProfile
pKernelMIGGpuInstance->resourceAllocation.virtualGpcCount >= inUseGpcCount
remainingGpcCount
shadowCTSInUseMask
pReqComputeInstanceInfo
pCIProfile
kmigmgrXlateSpanStartToCTSId(pGpu, pKernelMIGManager, pCIProfile->computeSize, spanStart, &ctsId)
kmigmgrIsCTSIdAvailable(pGpu, pKernelMIGManager, pKernelMIGGpuInstance->pProfile->validCTSIdMask, shadowCTSInUseMask, ctsId)
call to kmigmgrGetFreeCTSId_IMPL
kmigmgrGetFreeCTSId(pGpu, pKernelMIGManager, &ctsId, pKernelMIGGpuInstance->pProfile->validCTSIdMask, 0x0, shadowCTSInUseMask, pCIProfile->computeSize, NV_FALSE, NV_FALSE)
ctsId < KMIGMGR_MAX_GPU_CTSID
call to kmigmgrGetSpanStartFromCTSId_IMPL
kgrmgrCheckVeidsRequest(pGpu, pKernelGraphicsManager, &shadowVeidInUseMask, pCIProfile->veidCount, &pConfigRequestPerCi[CIIdx].veidSpanStart, pKernelMIGGpuInstance)
NVRM: Not enough remaining GPCs (%d) for compute instance request (%d).
call to bitVectorTestEqual_IMPL
bitVectorTestEqual(&engines, &pResourceAllocation->engines)
(kmigmgrCountEnginesOfType(&engines, RM_ENGINE_TYPE_GR(0)) == 1)
bitVectorTestAllCleared(&tempVector)
call to kmigmgrAllocateInstanceEngines_IMPL
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_NONE) != 0x0), RM_ENGINE_RANGE_GR(), grCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_CE) != 0x0), RM_ENGINE_RANGE_COPY(), ceCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_NVDEC) != 0x0), RM_ENGINE_RANGE_NVDEC(), decCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_NVENC) != 0x0), RM_ENGINE_RANGE_NVENC(), encCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_NVJPG) != 0x0), RM_ENGINE_RANGE_NVJPEG(), jpgCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
kmigmgrAllocateInstanceEngines(&pKernelMIGGpuInstance->resourceAllocation.engines, ((pMIGComputeInstance->sharedEngFlag & NVC637_CTRL_EXEC_PARTITIONS_SHARED_FLAG_OFA) != 0x0), RM_ENGINE_RANGE_OFA(), ofaCount, &pResourceAllocation->engines, &shadowExclusiveEngMask, &shadowSharedEngMask, &pKernelMIGGpuInstance->resourceAllocation.engines)
RM_ENGINE_TYPE_IS_GR(localEngineType)
updateEngMaskShadow
kmigmgrEngineTypeXlate(&pComputeResourceAllocation->localEngines, RM_ENGINE_TYPE_GR(0), &pComputeResourceAllocation->engines, &localRmEngineType)
RM_ENGINE_TYPE_IS_GR(localRmEngineType)
configRequestsPerCiOrdered
CIIdx < count
count*createdInstances*pMIGComputeInstance->id == CIIdx**pMIGComputeInstance->id == CIIdx*osRmCapRegisterSmcExecutionPartition(pKernelMIGGpuInstance->pOsRmCaps, &pMIGComputeInstance->pOsRmCaps, pMIGComputeInstance->id)**osRmCapRegisterSmcExecutionPartition(pKernelMIGGpuInstance->pOsRmCaps, &pMIGComputeInstance->pOsRmCaps, pMIGComputeInstance->id)*call to kmigmgrGenerateComputeInstanceUuid_DISPATCH*kmigmgrGenerateComputeInstanceUuid_HAL(pGpu, pKernelMIGManager, swizzId, globalGrIdx, &pMIGComputeInstance->uuid)**kmigmgrGenerateComputeInstanceUuid_HAL(pGpu, pKernelMIGManager, swizzId, globalGrIdx, &pMIGComputeInstance->uuid)*serverAllocShare(&g_resServ, classInfo(RsShared), &pMIGComputeInstance->pShare)**serverAllocShare(&g_resServ, classInfo(RsShared), &pMIGComputeInstance->pShare)*kmigmgrAllocComputeInstanceHandles(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, pMIGComputeInstance)**kmigmgrAllocComputeInstanceHandles(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, pMIGComputeInstance)*call to kccuInitVgpuMigSharedBuffer_IMPL*NVRM: Initialization process of the MIG GPM buffer for vGPU failed. **NVRM: Initialization process of the MIG GPM buffer for vGPU failed. 
*call to nvGenerateSmcUuid*nvGenerateSmcUuid(chipId, gid, swizzId, globalGrIdx, pUuid)**nvGenerateSmcUuid(chipId, gid, swizzId, globalGrIdx, pUuid)*rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, &hClient, &hDevice, &hSubdevice)**rmapiutilAllocClientAndDeviceHandles(pRmApi, pGpu, &hClient, &hDevice, &hSubdevice)*pRmApi->Alloc(pRmApi, hClient, hSubdevice, &hGPUInstanceSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))**pRmApi->Alloc(pRmApi, hClient, hSubdevice, &hGPUInstanceSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))*pRmApi->Alloc(pRmApi, hClient, hGPUInstanceSubscription, &hComputeInstanceSubscription, AMPERE_SMC_EXEC_PARTITION_REF, ¶ms, sizeof(params))**pRmApi->Alloc(pRmApi, hClient, hGPUInstanceSubscription, &hComputeInstanceSubscription, AMPERE_SMC_EXEC_PARTITION_REF, ¶ms, sizeof(params))*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_EXPORT_GPU_INSTANCE, &export, sizeof(export))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_EXPORT_GPU_INSTANCE, &export, sizeof(export))*pComputeInstanceSaves*(pKernelMIGGpuInstance != NULL) && (pComputeInstanceSaves != NULL)**(pKernelMIGGpuInstance != NULL) && (pComputeInstanceSaves != NULL)**pComputeInstanceSave*partitionDescs**partitionDescs*kmemsysGetMIGGPUInstanceMemInfo(pGpu, pKernelMemorySystem, swizzId, &addrRange)**kmemsysGetMIGGPUInstanceMemInfo(pGpu, pKernelMemorySystem, swizzId, &addrRange)*descCount*!bitVectorTestAllCleared(&ces)**!bitVectorTestAllCleared(&ces)*pRef != NULL**pRef != NULL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_SMC_MODE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GPU_GET_SMC_MODE, ¶ms, sizeof(params))*params.smcMode != NV2080_CTRL_GPU_INFO_GPU_SMC_MODE_UNSUPPORTED**params.smcMode != 
NV2080_CTRL_GPU_INFO_GPU_SMC_MODE_UNSUPPORTED*call to gpumgrCacheSetMIGEnabled_IMPL*call to kmigmgrLoadStaticInfo_DISPATCH*kmigmgrLoadStaticInfo_HAL(pGpu, pKernelMIGManager)**kmigmgrLoadStaticInfo_HAL(pGpu, pKernelMIGManager)*gpuDisableAccounting(pGpu, NV_TRUE)**gpuDisableAccounting(pGpu, NV_TRUE)*call to kccuMigShrBufHandler_DISPATCH*NULL != pUnsupportedSwizzIdMask**NULL != pUnsupportedSwizzIdMask*gpuSlice**gpuSlice*pStaticInfo->pProfiles != NULL**pStaticInfo->pProfiles != NULL*NULL != pPartnerListParams**NULL != pPartnerListParams*kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, rmEngineType, &newEngineType)**kmigmgrGetGlobalToLocalEngineType(pGpu, pKernelMIGManager, ref, rmEngineType, &newEngineType)*bAddEngine*pEngineTypes*kmigmgrIsMIGReferenceValid(&ref)**kmigmgrIsMIGReferenceValid(&ref)*RM_ENGINE_TYPE_IS_VALID(globalEngType)**RM_ENGINE_TYPE_IS_VALID(globalEngType)*pLocalEngType*NVRM: Global Engine type 0x%x is not allocated to GPU instance **NVRM: Global Engine type 0x%x is not allocated to GPU instance *NVRM: GPU instance Local Engine type 0x%x is not allocated to compute instance **NVRM: GPU instance Local Engine type 0x%x is not allocated to compute instance *RM_ENGINE_TYPE_IS_VALID(localEngType)**RM_ENGINE_TYPE_IS_VALID(localEngType)*NVRM: Compute instance Local Engine type 0x%x is not allocated to Compute instance **NVRM: Compute instance Local Engine type 0x%x is not allocated to Compute instance *NVRM: GPU instance Local Engine type 0x%x is not allocated to GPU instance **NVRM: GPU instance Local Engine type 0x%x is not allocated to GPU instance *kernelMIGGpuInstance**kernelMIGGpuInstance*call to memmgrAllocMIGGPUInstanceMemory_DISPATCH*memParams*memAddrRange*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_KMIGMGR_PROMOTE_GPU_INSTANCE_MEM_RANGE, &memParams, sizeof(memParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, 
NV2080_CTRL_CMD_INTERNAL_KMIGMGR_PROMOTE_GPU_INSTANCE_MEM_RANGE, &memParams, sizeof(memParams))*call to kmigmgrGetProfileByPartitionFlag_IMPL*kmigmgrGetProfileByPartitionFlag(pGpu, pKernelMIGManager, partitionFlag, &pKernelMIGGpuInstance->pProfile)**kmigmgrGetProfileByPartitionFlag(pGpu, pKernelMIGManager, partitionFlag, &pKernelMIGGpuInstance->pProfile)*serverAllocShare(&g_resServ, classInfo(RsShared), &pKernelMIGGpuInstance->pShare)**serverAllocShare(&g_resServ, classInfo(RsShared), &pKernelMIGGpuInstance->pShare)*call to kmigmgrSwizzIdToResourceAllocation_IMPL*kmigmgrSwizzIdToResourceAllocation(pGpu, pKernelMIGManager, swizzId, params, pKernelMIGGpuInstance, &pKernelMIGGpuInstance->resourceAllocation)**kmigmgrSwizzIdToResourceAllocation(pGpu, pKernelMIGManager, swizzId, params, pKernelMIGGpuInstance, &pKernelMIGGpuInstance->resourceAllocation)*call to kmigmgrSetEnginesInUse_IMPL*kmigmgrSetEnginesInUse(pGpu, pKernelMIGManager, &pKernelMIGGpuInstance->resourceAllocation.engines)**kmigmgrSetEnginesInUse(pGpu, pKernelMIGManager, &pKernelMIGGpuInstance->resourceAllocation.engines)*i < KMIGMGR_MAX_GPU_INSTANCES**i < KMIGMGR_MAX_GPU_INSTANCES*-----------------------------------------------------------------**-----------------------------------------------------------------*NVRM: | %18s | %18s | %18s | *SwizzId Table Mask**NVRM: | %18s | %18s | %18s | **SwizzId Table Mask*NVRM: | %18d | %18s | %18d | *NOT IMPLEMENTED**NVRM: | %18d | %18s | %18d | **NOT IMPLEMENTED*OBJGR Count*OBJCE Count*NVDEC Count**OBJGR Count**OBJCE Count**NVDEC Count*NVRM: | %61s | *Note: GRCE Count is same as OBJGR Count**NVRM: | %61s | **Note: GRCE Count is same as OBJGR Count*Note: OBJCE Count does not include GRCEs**Note: OBJCE Count does not include GRCEs*NVRM: | %18d | %18d | %18d | **NVRM: | %18d | %18d | %18d | *NVENC Count*NVJPG Count*NVOFA Count**NVENC Count**NVJPG Count**NVOFA Count*VEID Offset*VEID Count*VEID-GR Map**VEID Offset**VEID Count**VEID-GR Map*NVRM: | %18d | %18d | %18llx | 
**NVRM: | %18d | %18d | %18llx | *Partitionable**Partitionable*Memory Start Addr*Memory End Addr*L2TLB Mask**Memory Start Addr**Memory End Addr**L2TLB Mask*NVRM: | %18llx | %18llx | %18x | **NVRM: | %18llx | %18llx | %18x | *Local Instance**Local Instance*Size in Bytes**Size in Bytes*NVRM: | %18llx | %18llx | %18llx | **NVRM: | %18llx | %18llx | %18llx | *Start VMMU Seg.*End VMMU Seg.*Size in VMMU Seg.**Start VMMU Seg.**End VMMU Seg.**Size in VMMU Seg.*call to kmemsysGetMIGGPUInstanceMemConfigFromSwizzId_IMPL*pGPUInstanceMemConfig*NVRM: Failed to get GPU instance for non-privileged client hClient=0x%08x! **NVRM: Failed to get GPU instance for non-privileged client hClient=0x%08x! *ppMemoryPartitionHeap != NULL**ppMemoryPartitionHeap != NULL*NVRM: GPU instance heap found for hClient = 0x%08x with swizzId = %d! **NVRM: GPU instance heap found for hClient = 0x%08x with swizzId = %d! *subdeviceGetByInstance(pRsClient, RES_GET_HANDLE(pDevice), 0, &pSubdevice)**subdeviceGetByInstance(pRsClient, RES_GET_HANDLE(pDevice), 0, &pSubdevice)*call to kceFindFirstInstance_IMPL*kceFindFirstInstance(pGpu, &pKCe)**kceFindFirstInstance(pGpu, &pKCe)*kceTopLevelPceLceMappingsUpdate(pGpu, pKCe)**kceTopLevelPceLceMappingsUpdate(pGpu, pKCe)*call to gisubscriptionIsDeviceProfiling*bDeviceProfilingInUse*!kmigmgrIsDeviceProfilingInUse(pGpu, pKernelMIGManager)**!kmigmgrIsDeviceProfilingInUse(pGpu, pKernelMIGManager)*!pKernelMIGGpuInstance->MIGComputeInstance[i].bValid**!pKernelMIGGpuInstance->MIGComputeInstance[i].bValid*pMIGGpuInstance**pMIGGpuInstance*bTopologyValid*NVRM: Skipping reinitialization of persistent MIG instances due to MIG disablement! **NVRM: Skipping reinitialization of persistent MIG instances due to MIG disablement! 
*call to gpumgrUnregisterRmCapsForMIGGI_IMPL*kmigmgrSetMIGState(pGpu, pKernelMIGManager, bMemoryPartitioningNeeded, NV_TRUE, NV_FALSE)**kmigmgrSetMIGState(pGpu, pKernelMIGManager, bMemoryPartitioningNeeded, NV_TRUE, NV_FALSE)*call to kmigmgrGenerateGPUInstanceUuid_DISPATCH*kmigmgrGenerateGPUInstanceUuid_HAL(pGpu, pKernelMIGManager, swizzId, &uuid)**kmigmgrGenerateGPUInstanceUuid_HAL(pGpu, pKernelMIGManager, swizzId, &uuid)*kmigmgrCreateGPUInstance(pGpu, pKernelMIGManager, swizzId, uuid.uuid, restore, NV_TRUE, NV_FALSE)**kmigmgrCreateGPUInstance(pGpu, pKernelMIGManager, swizzId, uuid.uuid, restore, NV_TRUE, NV_FALSE)*kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, &pKernelMIGGPUInstance)**kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, swizzId, &pKernelMIGGPUInstance)*kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pKernelMIGGPUInstance, NV_FALSE, restore, &id, NV_FALSE)**kmigmgrCreateComputeInstances_HAL(pGpu, pKernelMIGManager, pKernelMIGGPUInstance, NV_FALSE, restore, &id, NV_FALSE)*kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGPUInstance, CIIdx, NV_TRUE)**kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGPUInstance, CIIdx, NV_TRUE)*kmigmgrInvalidateGPUInstance(pGpu, pKernelMIGManager, pKernelMIGGPUInstance->swizzId, NV_TRUE)**kmigmgrInvalidateGPUInstance(pGpu, pKernelMIGManager, pKernelMIGGPUInstance->swizzId, NV_TRUE)*call to kmigmgrRestoreFromBootConfig_DISPATCH*kmigmgrRestoreFromBootConfig_HAL(pGpu, pKernelMIGManager)**kmigmgrRestoreFromBootConfig_HAL(pGpu, pKernelMIGManager)*pPartImportParams**pPartImportParams*pPartImportParams != NULL**pPartImportParams != NULL*pExecPartImportParams**pExecPartImportParams*pExecPartImportParams != NULL**pExecPartImportParams != NULL*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_INTERNAL_KMIGMGR_IMPORT_GPU_INSTANCE, pPartImportParams, sizeof(*pPartImportParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, 
NV2080_CTRL_CMD_INTERNAL_KMIGMGR_IMPORT_GPU_INSTANCE, pPartImportParams, sizeof(*pPartImportParams))*kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, pGPUInstanceSave->swizzId, &pKernelMIGGpuInstance)**kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, pGPUInstanceSave->swizzId, &pKernelMIGGpuInstance)*pRmApi->AllocWithSecInfo(pRmApi, hClient, hSubdevice, &hSubscription, AMPERE_SMC_PARTITION_REF, &alloc, sizeof(alloc), RMAPI_ALLOC_FLAGS_NONE, NULL, &pRmApi->defaultSecInfo)**pRmApi->AllocWithSecInfo(pRmApi, hClient, hSubdevice, &hSubscription, AMPERE_SMC_PARTITION_REF, &alloc, sizeof(alloc), RMAPI_ALLOC_FLAGS_NONE, NULL, &pRmApi->defaultSecInfo)*pRmApi->Control(pRmApi, hClient, hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_IMPORT, pExecPartImportParams, sizeof(*pExecPartImportParams))**pRmApi->Control(pRmApi, hClient, hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_IMPORT, pExecPartImportParams, sizeof(*pExecPartImportParams))*pKernelMIGGpuInstance->runlistIdMask == 0**pKernelMIGGpuInstance->runlistIdMask == 0*ppRlBuffer**ppRlBuffer***ppRlBuffer*bufIdx*call to kfifoGetNumEschedDrivenEngines_IMPL*call to kfifoRunlistGetBufAllocParams_IMPL*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, index, ENGINE_INFO_TYPE_RUNLIST, &runlistId)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, index, ENGINE_INFO_TYPE_RUNLIST, &runlistId)*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, runlistId, ENGINE_INFO_TYPE_ENG_DESC, &engDesc)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_RUNLIST, runlistId, ENGINE_INFO_TYPE_ENG_DESC, &engDesc)*call to kfifoRunlistAllocBuffers_IMPL*kfifoRunlistAllocBuffers(pGpu, pKernelFifo, NV_TRUE, aperture, runlistId, attr, allocFlags, 0, NV_TRUE, pKernelFifo->pppRunlistBufMemDesc[runlistId])**kfifoRunlistAllocBuffers(pGpu, pKernelFifo, NV_TRUE, aperture, runlistId, attr, allocFlags, 0, NV_TRUE, 
pKernelFifo->pppRunlistBufMemDesc[runlistId])*runlistAlign*runlistIdMask*rlBuffers**rlBuffers***rlBuffers*pSourceMemDesc*call to kmigmgrTrimInstanceRunlistBufPools_IMPL*call to kfifoGetRunlistBufPool_IMPL*call to ctxBufPoolTrim*pLocalEngines**pLocalEngines*bitVectorTestEqual(&tempEngines, pEngines)**bitVectorTestEqual(&tempEngines, pEngines)*call to bitVectorComplement_IMPL*bitVectorTestAllCleared(&tempEngines)**bitVectorTestAllCleared(&tempEngines)*NVRM: SwizzID - %d not in use **NVRM: SwizzID - %d not in use *NVRM: SwizzID - %d already in use **NVRM: SwizzID - %d already in use *pKernelMigManager*call to krcWatchdogEnable_IMPL*bRestoreWatchdog*bReenableWatchdog*watchdog*call to krcWatchdogGetReservationCounts_IMPL*NVRM: Failed to disable watchdog with outstanding reservations - enable: %d disable: %d softDisable: %d. **NVRM: Failed to disable watchdog with outstanding reservations - enable: %d disable: %d softDisable: %d. *kmigmgrSetSwizzIdInUse(pGpu, pKernelMIGManager, 0)**kmigmgrSetSwizzIdInUse(pGpu, pKernelMIGManager, 0)*call to kmigmgrSaveToPersistenceFromVgpuStaticInfo_DISPATCH*kmigmgrSaveToPersistenceFromVgpuStaticInfo_HAL(pGpu, pKernelMIGManager)**kmigmgrSaveToPersistenceFromVgpuStaticInfo_HAL(pGpu, pKernelMIGManager)*call to kgrmgrSetVeidInUseMask*NVRM: VF VEID in use mask: 0x%llX **NVRM: VF VEID in use mask: 0x%llX *call to memmgrDiscoverMIGPartitionableMemoryRange_DISPATCH*memmgrDiscoverMIGPartitionableMemoryRange_HAL(pGpu, pMemoryManager, &memoryRange)**memmgrDiscoverMIGPartitionableMemoryRange_HAL(pGpu, pMemoryManager, &memoryRange)*call to memmgrSetMIGPartitionableMemoryRange_IMPL*kmigmgrGenerateGPUInstanceUuid_HAL(pGpu, pKernelMIGManager, 0, &uuid)**kmigmgrGenerateGPUInstanceUuid_HAL(pGpu, pKernelMIGManager, 0, &uuid)*kmigmgrSetGPUInstanceInfo(pGpu, pKernelMIGManager, 0 , uuid.uuid, params)**kmigmgrSetGPUInstanceInfo(pGpu, pKernelMIGManager, 0 , uuid.uuid, params)*kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, 0, 
&pKernelMIGGpuInstance)**kmigmgrGetGPUInstanceInfo(pGpu, pKernelMIGManager, 0, &pKernelMIGGpuInstance)*localGrIdx*call to kgrmgrSetGrIdxVeidMask*kgraphicsLoadStaticInfo_HAL(pGpu, pKernelGraphics, 0)**kgraphicsLoadStaticInfo_HAL(pGpu, pKernelGraphics, 0)**pGPUInstanceSave*grceRange*asyncCeRange*assignableGrMask*nvPopCount32(assignableGrMask) <= pVSI->execPartitionInfo.execPartCount**nvPopCount32(assignableGrMask) <= pVSI->execPartitionInfo.execPartCount*osRmCapRegisterSmcPartition(pGpu->pOsRmCaps, &pGPUInstanceSave->pOsRmCaps, pGPUInstanceSave->swizzId)**osRmCapRegisterSmcPartition(pGpu->pOsRmCaps, &pGPUInstanceSave->pOsRmCaps, pGPUInstanceSave->swizzId)*savedCIIdx**syspipeId*pExecPartInfo*pComputeInstanceSave->id == grIdx**pComputeInstanceSave->id == grIdx*ceRange*osRmCapRegisterSmcExecutionPartition(pGPUInstanceSave->pOsRmCaps, &(pComputeInstanceSave->pOsRmCaps), pComputeInstanceSave->id)**osRmCapRegisterSmcExecutionPartition(pGPUInstanceSave->pOsRmCaps, &(pComputeInstanceSave->pOsRmCaps), pComputeInstanceSave->id)*bVgpuRestoredFromStaticInfo*kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_TRUE) == NV_OK**kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_TRUE) == NV_OK*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_PARTITIONABLE_ENGINES, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_PARTITIONABLE_ENGINES, ¶ms, sizeof(params))**engineMask**pSkylineInfo*pPrivate->staticInfo.pSkylineInfo != NULL**pPrivate->staticInfo.pSkylineInfo != NULL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_GRMGR_GET_SKYLINE_INFO, pPrivate->staticInfo.pSkylineInfo, sizeof(*pPrivate->staticInfo.pSkylineInfo))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_GRMGR_GET_SKYLINE_INFO, pPrivate->staticInfo.pSkylineInfo, 
sizeof(*pPrivate->staticInfo.pSkylineInfo))**pSwizzIdFbMemPageRanges*pPrivate->staticInfo.pSwizzIdFbMemPageRanges != NULL**pPrivate->staticInfo.pSwizzIdFbMemPageRanges != NULL**pCIProfiles*pPrivate->staticInfo.pCIProfiles != NULL**pPrivate->staticInfo.pCIProfiles != NULL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_COMPUTE_PROFILES, pPrivate->staticInfo.pCIProfiles, sizeof(*pPrivate->staticInfo.pCIProfiles))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_COMPUTE_PROFILES, pPrivate->staticInfo.pCIProfiles, sizeof(*pPrivate->staticInfo.pCIProfiles))**pProfiles*pPrivate->staticInfo.pProfiles != NULL**pPrivate->staticInfo.pProfiles != NULL*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_PROFILES, pPrivate->staticInfo.pProfiles, sizeof(*pPrivate->staticInfo.pProfiles))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_KMIGMGR_GET_PROFILES, pPrivate->staticInfo.pProfiles, sizeof(*pPrivate->staticInfo.pProfiles))*kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_FALSE) == NV_OK**kmigmgrEnableAllLCEs(pGpu, pKernelMIGManager, NV_FALSE) == NV_OK*pPartitionDesc*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, engine, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_INVALID, engine, ENGINE_INFO_TYPE_RM_ENGINE_TYPE, (NvU32 *)&rmEngineType)*pVSI->ciProfiles.profileCount <= NV_ARRAY_ELEMENTS(pPrivate->staticInfo.pCIProfiles->profiles)**pVSI->ciProfiles.profileCount <= NV_ARRAY_ELEMENTS(pPrivate->staticInfo.pCIProfiles->profiles)*physicalSlots*call to kmigmgrSetStaticInfo_DISPATCH*i < NV_ARRAY_ELEMENTS(pKernelMIGManager->kernelMIGGpuInstance)**i < 
NV_ARRAY_ELEMENTS(pKernelMIGManager->kernelMIGGpuInstance)*RmMIGBootConfigurationFeatureFlags**RmMIGBootConfigurationFeatureFlags*bGlobalBootConfigUsed*bBootConfigSupported*bAutoUpdateBootConfig*bIsSmgEnabled*RmEnableMIGGfx**RmEnableMIGGfx*call to kmigmgrClearStaticInfo_DISPATCH*call to kmigmgrIsDevinitMIGBitSet_DISPATCH*RMSetMIGAutoOnlineMode**RMSetMIGAutoOnlineMode*bMIGAutoOnlineEnabled*kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), _kmigmgrHandlePostSchedulingEnableCallback, NULL, _kmigmgrHandlePreSchedulingDisableCallback, NULL)**kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), _kmigmgrHandlePostSchedulingEnableCallback, NULL, _kmigmgrHandlePreSchedulingDisableCallback, NULL)*call to kmigmgrSaveToPersistence_IMPL*kmigmgrSaveToPersistence(pGpu, pKernelMIGManager)**kmigmgrSaveToPersistence(pGpu, pKernelMIGManager)*NVRM: Invalidating valid gpu instance with swizzId = %d **NVRM: Invalidating valid gpu instance with swizzId = %d *NVRM: Invalidating valid compute instance with id = %d **NVRM: Invalidating valid compute instance with id = %d *kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, CIIdx, NV_TRUE)**kmigmgrDeleteComputeInstance(pGpu, pKernelMIGManager, pKernelMIGGpuInstance, CIIdx, NV_TRUE)*pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pKernelMIGGpuInstance->instanceHandles.hClient, pKernelMIGGpuInstance->instanceHandles.hSubscription, NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE, ¶ms, sizeof(params))*kmigmgrInvalidateGPUInstance(pGpu, pKernelMIGManager, swizzId, NV_TRUE)**kmigmgrInvalidateGPUInstance(pGpu, pKernelMIGManager, swizzId, NV_TRUE)*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_GPU_INSTANCES, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, 
pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_GPU_INSTANCES, ¶ms, sizeof(params))*NVRM: leaked swizzid mask 0x%llx !! **NVRM: leaked swizzid mask 0x%llx !! *kmigmgrSetMIGState(pGpu, pKernelMIGManager, NV_TRUE, NV_FALSE, NV_TRUE)**kmigmgrSetMIGState(pGpu, pKernelMIGManager, NV_TRUE, NV_FALSE, NV_TRUE)*call to memmgrGetTopLevelScrubberStatus_IMPL*call to memmgrSetPartitionableMem_DISPATCH*memmgrSetPartitionableMem_HAL(pGpu, pMemoryManager)**memmgrSetPartitionableMem_HAL(pGpu, pMemoryManager)*call to gpumgrIsSystemMIGEnabled_IMPL*partitioningMode*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_PARTITIONING_MODE, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_MIGMGR_SET_PARTITIONING_MODE, ¶ms, sizeof(params))*call to kmigmgrDetectReducedConfig_DISPATCH*call to kmigmgrRestoreFromPersistence_DISPATCH*kmigmgrRestoreFromPersistence_HAL(pGpu, pKernelMIGManager)**kmigmgrRestoreFromPersistence_HAL(pGpu, pKernelMIGManager)*NVRM: Deleting valid GPU instance with swizzId - %d. Should have been deleted before shutdown! **NVRM: Deleting valid GPU instance with swizzId - %d. Should have been deleted before shutdown! *NVRM: Deleting valid compute instance - %d. Should have been deleted before shutdown! **NVRM: Deleting valid compute instance - %d. Should have been deleted before shutdown! 
*swizzIdInUseMask*call to kmigmgrInitRegistryOverrides_IMPL*targetRefCount*actualRefCount*actualRefCount == targetRefCount**actualRefCount == targetRefCount*pRmApi->Alloc(pRmApi, hClient, hSubdevice, &hSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))**pRmApi->Alloc(pRmApi, hClient, hSubdevice, &hSubscription, AMPERE_SMC_PARTITION_REF, ¶ms, sizeof(params))*call to _kmigmgrAllocThirdPartyP2PObject*_kmigmgrAllocThirdPartyP2PObject(pGpu, pKernelMIGGpuInstance)**_kmigmgrAllocThirdPartyP2PObject(pGpu, pKernelMIGGpuInstance)*NVRM: Error creating internal ThirdPartyP2P object: 0x%x **NVRM: Error creating internal ThirdPartyP2P object: 0x%x *pLocalEngineMask*pPhysicalEngineMask*pAllocatableEngines*pSourceEngines*bitVectorTestEqual(&engines, pAllocatableEngines)**bitVectorTestEqual(&engines, pAllocatableEngines)*pSharedEngines*pOutEngines*pExclusiveEngines*pSrcRef != NULL**pSrcRef != NULL*pDstRef != NULL**pDstRef != NULL*kmigmgrEngineTypeXlate(pSrcRef, srcRmEngineType, pDstRef, &dstRmEngineType)**kmigmgrEngineTypeXlate(pSrcRef, srcRmEngineType, pDstRef, &dstRmEngineType)*srcRmEngineType*pDstEngineType*pDstEngineType != NULL**pDstEngineType != NULL*tempSrcEngineType**bFound*giID*ciID*firstAsyncCE*allCesRange*call to gpuGetFirstAsyncLce_DISPATCH*call to bitVectorCountSetBits_IMPL*pRefA*pRefB*pRef->pKernelMIGGpuInstance->bValid**pRef->pKernelMIGGpuInstance->bValid*pRef->pMIGComputeInstance->bValid**pRef->pMIGComputeInstance->bValid*refCount > 0**refCount > 0*pMIGConfigSession*src/kernel/gpu/mig_mgr/mig_config_session.c**src/kernel/gpu/mig_mgr/mig_config_session.c*pMIGMonitorSession*src/kernel/gpu/mig_mgr/mig_monitor_session.c**src/kernel/gpu/mig_mgr/mig_monitor_session.c*call to kgmmuFmtInitLevels_GP10X*bPageTable*NVLink remote translation error: faulted @ 0x%x_%08x. Fault is of type %s %s*call to kgmmuGetFaultTypeString_DISPATCH*call to kfifoGetFaultAccessTypeString_DISPATCH**NVLink remote translation error: faulted @ 0x%x_%08x. 
Fault is of type %s %s*call to kgmmuServiceMmuFault_GV100*pFmtFamilies**pFmtFamilies***pFmtFamilies*pFam**pFam*call to kgmmuSetupWarForBug2720120FmtFamily_GA100*kgmmuSetupWarForBug2720120FmtFamily_GA100(pKernelGmmu, pFam)*src/kernel/gpu/mmu/arch/ampere/kern_gmmu_ga100.c**kgmmuSetupWarForBug2720120FmtFamily_GA100(pKernelGmmu, pFam)**src/kernel/gpu/mmu/arch/ampere/kern_gmmu_ga100.c*minCeMmuFaultId*maxCeMmuFaultId*minMmuFaultId*maxMmuFaultId*NVRM: Failed to find any MMU Fault ID **NVRM: Failed to find any MMU Fault ID *NVRM: CE MMU Fault ID range [0x%x - 0x%x] **NVRM: CE MMU Fault ID range [0x%x - 0x%x] *pPageDir0*pPageDir1*pWarSmallPageTable**pWarSmallPageTable**pWarPageDirectory0*pSmallPT*call to kgmmuGetPTEAperture*memdescCreate(&pKernelGmmu->pWarSmallPageTable, pGpu, mmuFmtLevelSize(pSmallPT), RM_PAGE_SIZE, NV_TRUE, kgmmuGetPTEAperture(pKernelGmmu), kgmmuGetPTEAttr(pKernelGmmu), 0)**memdescCreate(&pKernelGmmu->pWarSmallPageTable, pGpu, mmuFmtLevelSize(pSmallPT), RM_PAGE_SIZE, NV_TRUE, kgmmuGetPTEAperture(pKernelGmmu), kgmmuGetPTEAttr(pKernelGmmu), 0)*memmgrMemDescMemSet(pMemoryManager, pKernelGmmu->pWarSmallPageTable, 0, TRANSFER_FLAGS_NONE)**memmgrMemDescMemSet(pMemoryManager, pKernelGmmu->pWarSmallPageTable, 0, TRANSFER_FLAGS_NONE)*pPde0Fmt*bug2720120WarPde0*memdescCreate(&pKernelGmmu->pWarPageDirectory0, pGpu, mmuFmtLevelSize(pPageDir0), RM_PAGE_SIZE, NV_TRUE, kgmmuGetPTEAperture(pKernelGmmu), kgmmuGetPTEAttr(pKernelGmmu), 0)**memdescCreate(&pKernelGmmu->pWarPageDirectory0, pGpu, mmuFmtLevelSize(pPageDir0), RM_PAGE_SIZE, NV_TRUE, kgmmuGetPTEAperture(pKernelGmmu), kgmmuGetPTEAttr(pKernelGmmu), 0)*memmgrMemWrite(pMemoryManager, &pageDirEntry, pFam->bug2720120WarPde0.v8, pPageDir0->entrySize, TRANSFER_FLAGS_NONE)**memmgrMemWrite(pMemoryManager, &pageDirEntry, pFam->bug2720120WarPde0.v8, pPageDir0->entrySize, TRANSFER_FLAGS_NONE)*pPde1Fmt*call to kgmmuFmtInitLevels_GH10X*call to uvmIsAccessCntrBufferEnabled_DISPATCH*pMmuFaultType != 
[Extracted string/symbol table from an NVIDIA GPU kernel-module binary. Recoverable content: GMMU/MMU fault-handling assertions, log strings, and source references for src/kernel/gpu/mmu/arch/{maxwell,pascal,volta,turing,hopper,blackwell}/kern_gmmu_*.c (fault buffer alloc/map, replayable and non-replayable shadow fault buffers, fault packet encryption, TLB invalidation, PTE/PDE PCF translation); NV_RM_RPC_* RPC identifiers; and DisplayPort library class/method symbols (DPCDHAL/DPCDHALImpl, MessageManager and MST message types, DiscoveryManager, EdidReadMultistream, DeviceImpl, GroupImpl, ConnectorImpl, EvoMainLink, LinkConfiguration, Timer/Callback utilities).]
eryAttach**endCompoundQuery**dpLinkIsModePossible**isHeadShutDownNeeded**isLinkTrainingNeededForModeset**notifyAttachBegin*dpPreModeset**dpPreModeset*dpPostModeset**dpPostModeset*readRemoteHdcpCaps**readRemoteHdcpCaps*notifyAttachEnd**notifyAttachEnd**isLinkAwaitingTransition*notifyDetachBegin**notifyDetachBegin*notifyDetachEnd**notifyDetachEnd**assessPCONLinkCapability*notifyShortPulse**notifyShortPulse*notifyAcpiInitDone**notifyAcpiInitDone*notifyGPUCapabilityChange**notifyGPUCapabilityChange*notifyHBR2WAREngage**notifyHBR2WAREngage**dpUpdateDscStream*createFakeMuxDevice**createFakeMuxDevice*deleteFakeMuxDevice**deleteFakeMuxDevice*setPolicyModesetOrderMitigation**setPolicyModesetOrderMitigation*setPolicyForceLTAtNAB**setPolicyForceLTAtNAB*setPolicyAssessLinkSafely**setPolicyAssessLinkSafely**setPreferredLinkConfig**resetPreferredLinkConfig*setAllowMultiStreaming**setAllowMultiStreaming**getAllowMultiStreaming**getSinkMultiStreamCap*setDp11ProtocolForced**setDp11ProtocolForced*resetDp11ProtocolForced**resetDp11ProtocolForced**isDp11ProtocolForced**getHDCPAbortCodesDP12**getIgnoreSourceOuiHandshake*setIgnoreSourceOuiHandshake**setIgnoreSourceOuiHandshake**isMultiStreamCapable**isFlushSupported**isFECCapable**getTestPattern**setTestPattern**getLaneConfig**setLaneConfig**getStreamIDs**updatePsrLinkState*enableDpTunnelingBwAllocationSupport**enableDpTunnelingBwAllocationSupport**willLinkSupportModeSST**getUSBDpInAdapterInfo*discoveryDetectComplete**discoveryDetectComplete*discoveryNewDevice**discoveryNewDevice*discoveryLostDevice**discoveryLostDevice*getCurrentLinkConfigWithFEC**getCurrentLinkConfigWithFEC**isAcpiInitDone*notifyLongPulseInternal**notifyLongPulseInternal*disconnectDeviceList**disconnectDeviceList*configInit**configInit*handlePanelReplayError**handlePanelReplayError*sortActiveGroups**sortActiveGroups*handleHdmiLinkStatusChanged**handleHdmiLinkStatusChanged*handleDpTunnelingIrq**handleDpTunnelingIrq*handleMCCSIRQ**handleMCCSIRQ*handleSSC**handleSSC**hand
leCPIRQ*flushTimeslotsToHardware**flushTimeslotsToHardware*freeTimeslice**freeTimeslice**allocateTimeslice*clearTimeslices**clearTimeslices**deleteAllVirtualChannels**checkIsModePossibleMST**beforeAddStreamMST*disableFlush**disableFlush*afterDeleteStream**afterDeleteStream*beforeDeleteStream**beforeDeleteStream*afterAddStream**afterAddStream**beforeAddStream**enableFlush**rawTrain**setDeviceDscState**trainPCONFrlLink**validateLinkConfiguration*populateDscModesetInfo**populateDscModesetInfo*populateDscBranchCaps**populateDscBranchCaps*populateDscSinkCaps**populateDscSinkCaps*populateForcedDscParams**populateForcedDscParams*populateDscGpuCaps**populateDscGpuCaps*populateDscCaps**populateDscCaps*populateUpdatedLaneSettings**populateUpdatedLaneSettings**postLTAdjustment**getValidLowestLinkConfig**trainLinkOptimizedSingleHeadMultipleSST**trainLinkOptimized**isNoActiveStreamAndPowerdown**trainSingleHeadMultipleSSTLinkNotAlive**isLinkLost**isLinkActive**isLinkInD3*assessLink**assessLink*cancelDpTunnelBwAllocation**cancelDpTunnelBwAllocation**getMaxTunnelBw**allocateMaxDpTunnelBw**allocateDpTunnelBw**requestDpTunnelBw**updateDpTunnelBwAllocation*forceLinkTraining**forceLinkTraining**performIeeeOuiHandshake*ensureMstNodesPoweredUp**ensureMstNodesPoweredUp**needToEnableFEC*fireEventsInternal**fireEventsInternal*fireEvents**fireEvents**compoundQueryAttachSSTDsc**compoundQueryAttachSSTIsDscPossible**compoundQueryAttachSST**compoundQueryAttachMSTGeneric**compoundQueryAttachMSTDsc**compoundQueryAttachMSTIsDscPossible**compoundQueryAttachMST**compoundQueryAttachTunneling**populateAllDpConfigs**handleTestLinkTrainRequest**handlePhyPatternRequest**detectSinkCountChange**initMaxLinkConfig*hardwareWasReset**hardwareWasReset*applyEdidWARs**applyEdidWARs*processNewDevice**processNewDevice*ConnectorImpl**ConnectorImpl*pruneCache**pruneCache*~HashMap**~HashMap*HashMap**HashMap**~HashMapElement**HashMapElement*~ConnectorImpl2x**~ConnectorImpl2x*applyDP2xRegkeyOverrides**applyDP2xRegkeyOver
rides*ConnectorImpl2x**ConnectorImpl2x**willLinkSupportMode**isEqual**hash*~WatermarkCacheElement**~WatermarkCacheElement*WatermarkCacheElement**WatermarkCacheElement*~Element**~Element*Element**Element*EvoMainLink2x**EvoMainLink2x**trainDP2xChannelCoding*validateIlrInFallbackMap**validateIlrInFallbackMap**resetDPRXLink**getFallbackForDP2xLinkTraining**pollDP2XLinkTrainingStageDone**detectSink*addDevice**addDevice**getLinkIndex**getDisplayId**getSubdeviceIndex**rmControl5070**rmControl0073**~ConnectorEventSink**newDevice*lostDevice**lostDevice*notifyMustDisconnect**notifyMustDisconnect*notifyDetectComplete**notifyDetectComplete*bandwidthChangeNotification**bandwidthChangeNotification*notifyZombieStateChange**notifyZombieStateChange*notifyCableOkStateChange**notifyCableOkStateChange*notifyHDCPCapDone**notifyHDCPCapDone*notifyMCCSEvent**notifyMCCSEvent*ConnectorEventSink**ConnectorEventSink*~_nv_dplibtimer**~_nv_dplibtimer*_nv_dplibtimer**_nv_dplibtimer*fireExpiredTimers**fireExpiredTimers*fireIfExpired**fireIfExpired**isExpired**allocFailed*onTimerFired**onTimerFired**sa***v*topage**topage*pg**pg**bug**op**g***dst***src***q**eax**ebx**ecx**edx***__p**remainder***x**pgdir**oldp***stack***stackend**ti**restart**entry1**entry2**addr1**addr2**addr3*re**re**src3***mask**dstp**srcp**srcp1**srcp2**src1p**src2p**src3p**cpumask**srcp3**mask1**mask2***info**tlb***table**l**wq_head**wq_entry***_T**seq**sl**maskp**origp**relmapp**newp**r**rgosp1**rgosp2**rhp***key**victim***rb_link**mt**sem**lhs**rhs**mod**tsk**err**its*descr**descr**ssp**fbc**vmi**ptdesc**resource_name**zone**zoneref**nc***page_array*percpu_countp**percpu_countp***percpu_countp*page_ext**page_ext**gfpu**soft**pudp**pmdp**ptep**pgd**p4d**xp**pud***__ptr**r1**r2**dentry**xas**lru**xa***old**set1**set2**interval_sub***where**t**poll**key_ref**u**cred**_cred**ctmr**p1**p2**ksig**rsp**call**guid**idmap**fs_userns**userns**ki**stat**iocb**fops**kiocb**kiocb_src**acl**ida**idr**new_parent**new_name**page1**page2**altm
ap**new_pol**pt***ptlp***ptl*maxrss**maxrss***pagep*vmap**vmap***vmap*mm_lock_seq**mm_lock_seq***val***s**new_first**new_last**vm***start**watermark***watermark***object*slab**slab***word**path1**path2***cb_arg**sc*treelock**treelock**old_name**cpus**iov**__msg**__cmsg***__ctl**arch*cprm**cprm**ltr**sym**drv**grp**dma_iter**piter**sgl**prv**chain_sg**dma_addr**dma_handle***virt***begin***end**devname***dev_id***percpu_dev_id***dev**pdev**intspec**out_hwirq**out_type**con_id**propname***array***pointer**lockp***memory**aml*function_name**function_name**module_name**uid2**hid2**argv4**wake_capable**ares**pa**only**esc**bufp***bufp**ucounts**syncp**namefmt**skcd**of**ancestor**pl**wb*urb**urb**bvec**iter_all**bv**fbatch**bs**bl**bl2**bdev**fi**wbc**dom*total_scanned**total_scanned*lruvecp**lruvecp***lruvecp**locked_lruvec**min**rac**ractl**wait_page**si**regulator**nb*ovcs_id**ovcs_id*overlay_fdt**overlay_fdt***overlay_fdt*target_base**target_base**out_value**out_values***output***out_strs**a1**a2**list_name**cells_name**out_args**phandle_name*map_name**map_name*map_mask_name**map_mask_name***target*id_out**id_out**prop*pu**pu*dn**dn*compats**compats***compats*stem**stem*stem_name**stem_name*out_string**out_string***out_string**cpu_node**lenp***compat*prop_name**prop_name***opts***match***host_data**of_node**adap**bl_dev**bd***regbase*new_regs**new_regs**lock_class**request_class***ip**init_desc**vpset***input**ww_class**f1**f2***fencep**cursor**scan**hole_node**file_mapping**mgr**env*res_cma**res_cma***res_cma**attach**dmabuf*clkspec**clkspec**tz**opp****virt_devs**versions**opp_table*uW**uW**kHz**cpu_dev**mdev***a**alg_name**wait**algt*stream_id**stream_id**dirty**iotlb_gather***dst_data**src_array**src_data***drvdata***vmas**loglvl**ret_regs**fregs**rec**optinsn***cur**rp**ri**file_priv**property**dmode**bus_flags**req_driver**old_plane_state***in**sctx**keyring**authkey***ptep**area**details*parent_spec**parent_spec*subdomain_spec**subdomain_spec**genpd**wait_addre
ss**msgs***sha512_context***new_sha512_context***sha512_ctx***sha384_context***new_sha384_context***sha256_context***new_sha256_context***sha256_ctx**hmac_value***hmac_sha512_ctx***new_hmac_sha512_ctx***hmac_sha384_ctx***new_hmac_sha384_ctx***hmac_sha256_ctx***new_hmac_sha256_ctx**data_out_size**tag_out***sm2_context***ecd_context***ec_context***rsa_context**cert_chain***cert**cert_length**root_cert**basic_constraints**basic_constraints_size**usage**usage_size***date_time1***date_time2**date_time_str***date_time**date_time_size**from_size**to_size**oid**extension_data**extension_data_size**cert_issuer**issuer_size**serial_number_size**cert_subject**subject_size**length**prk_out**sig_size**peer_public**key_size**public_size**public_key_size**p_dstlen**pPageIndex***pte_array**pReturnValue**spawn**shash**ahash**walk*requester_info**requester_info*subject_name**subject_name*csr_len**csr_len*csr_pointer**csr_pointer***csr_pointer**oid_size*name_buffer**name_buffer*name_buffer_size**name_buffer_size*common_name**common_name*common_name_size**common_name_size*tbs_cert**tbs_cert***tbs_cert*tbs_cert_size**tbs_cert_size*x509_stack**x509_stack***x509_stack*x509_cert**x509_cert***x509_cert*single_x509_cert**single_x509_cert***single_x509_cert*dummy1**dummy1***dummy1*dummy2**dummy2***dummy2**pMethodName**outDataSize**pInData***pEdidBuffer***pInParams**outStatus***pOutData***arg3***inParams***outData**acpi_object**handlesPresent**ac_plugged**limit*pnv_nstimer**pnv_nstimer***pnv_nstimer**pMinFreqKHz**pMaxFreqKHz**pCurrFreqKHz*pWhichClkOSparent**pWhichClkOSparent***mem_desc**flcn_addr***cb_context***cmd**window_head_mask***dsiPanelInfo***msg***usrCtx**pinNum**direction**pinValue**num_instances*tegra_imp_import_data**tegra_imp_import_data**scl**sda*nv_msgs**nv_msgs*request_data**request_data***request_data*response_data**response_data***response_data*api_ret**api_ret**dispIsoStreamId**dispNisoStreamId***pPages**pNumPages***sgt*pFbWidth**pFbWidth**pFbHeight**pFbDepth**pFbPitch**pFbSi
ze***os_info**duped_fd**egm_node_id*compr_addr_sys_phys**compr_addr_sys_phys*addr_guest_phys**addr_guest_phys*rsvd_phys**rsvd_phys*addr_width**addr_width**node_id***os_private***fw_handle*fw_buf**fw_buf***fw_buf*fw_size**fw_size**dma_peer**peer_dma_dev**va_array***priv_data***import_sgt***pAllocPriv*pUserAddress**pUserAddress***ppPrivate**brightness**bpmp**device_name**attachment*max_seg_size**max_seg_size**con**nvmem**rtc**reason**npages*pci_dev_out**pci_dev_out***pci_dev_out*dma_start**dma_start*dma_limit**dma_limit**gpu_ids**gpu_count***vm_priv**girq***label*parents**parents***parents**hw**prate**panel**np_panel**panelInfo*array_size**array_size***args**mmap_context**rm_ops*reg_info**reg_info***reg_info***dma_mapping***page_table*page_size_index**page_size_index**cell_name**pargs*id_table**id_table**trip**freq**lookup**bus_id**rstcs**np_config**num_maps**plat_dev**filep**ppos*pnvpp**pnvpp***pnvpp**dst_list**src_list**uvmCslContext**messageNum**outputBuffer**addAuthData***contextList**methodStream**srcVaSpace**dstAddress***channel**channelInfo***pFaultPacket***resourceDescriptor**externalMappingInfo***retainedChannel**channelResourceBindParams**channelInstanceInfo**gpuExternalPhysAddrInfo**gpuExternalMappingInfo**device1**device2**hP2pObject**nvlinkInfo**importedEvents**pAccessCntrInfo**pAccessCntrConfig**pAccessBitsInfo**pFaultInfo***pFaultBuffer**numFaults**hasPendingFaults**eccInfo**fbInfo**hDupMemory**pGpuMemoryInfo**dstVaSpace**dmaAddress**gpuUuid**pGpuClientInfo***tsg***cpuPtr***pPma**pPmaAllocOptions***callbackData***pPmaPubStats**p2pCapsParams**gpuPointer**allocInfo***vaSpace**vaSpaceInfo***device***session**gpuInfo**tc**cc**frac**dmab**substream**max**pcm**runtime**tv***bufs*accuracy**accuracy**report**constrs**azx_dev*entryp**entryp***entryp**line**follower**dst_id**src_kctl**kctl**change**num_pci_devices**num_platform_devices***nvlfp_raw**cdev**newEvents***hLock***s1***s2**pfn**pci_dev_str**pci_domain**pci_bus**pci_slot**pci_func**commit**pResInfo**coun
t*nvkms_format**nvkms_format**crcs**drmCrcs*adjusted_mode**adjusted_mode**nvkms_lut**drm_lut**head_modeset_config**nv_drm_crtc_state**nv_drm_plane_state**surface_params**nvkms_csc**drm_ctm_3x4**drm_ctm***blob**replaced**newNvKmsCallback**newWaitValueOut**newTimeoutOut**old_crtc_state**reply_config**layer_reply_config**worker****pages**displayMode**funcsTable**dpyIdList**rrParams***params_address***kptr***ops**fname***buff**nvkms_timer*rwlock**rwlock***lock**uvm_context**parent_mask**subset**mask_in1**mask_in2*rm_mem_out**rm_mem_out***rm_mem_out***pool_out*pushbuffer_out**pushbuffer_out***pushbuffer_out***channel_out**semaphore_channel**access_channel**wlc_channel**lcic_channel*channel_manager_out**channel_manager_out***channel_manager_out**gpu_address**chunk1**chunk2**out_tracker***new_chunks***new_chunk**new_range_vec*range_vec_out**range_vec_out***range_vec_out**existing**single*inout_node**inout_node***inout_node**inout_region*out_node**out_node***out_node*out_region**out_region**old_va_block***new_ptr**startp*endp**endp**out_accessed_by_set***vma_out***policy*outerp**outerp**existing_va_block**new_block***vma***va_block_ptr**va_space_events**make_resident_context**event_data**modules_data**bitmap_tree**faulted_pages**out_hint**out_prefetch_mask**remaining_encryptions**remaining_decryptions**src_iv*dma_buffer_out**dma_buffer_out***dma_buffer_out*out_va_space**out_va_space***out_va_space*out_gpu**out_gpu***out_gpu**fault*cpu_addr_out**cpu_addr_out***cpu_addr_out**dma_address_out**parent_gpu0**parent_gpu1**local_gpu**remote_gpu**accessing_gpu**user_rm_device**parent_gpu_error***gpu_out**cur_gpu**mask_out***ptr_val**populate_mask**thrashing_hint**read_duplicate**region**mask_in**out_chunk_size**dst_mem**src_mem***out_block***hmm_vma**out_mask**block_retry***new_va_block**new_managed_range**map_page_mask**thrashing_processors**revoke_processor_mask**revoke_page_mask**map_processor_mask**prefetch_page_mask**platform_info**candidates**nid**user_rm_va_space*numa_enabled
**numa_enabled**numa_node_id**uuid_out**changing_gpu*va_space_ptr**va_space_ptr***va_space_ptr**range_allocator***mem_out**user_va_space**attrs_out**managed**existing_managed_range***new_managed_range_out**per_gpu_attrs***out_semaphore_pool_range***out_sked_reflected_range***out_channel_range***out_external_range***out_managed_range*instance_ptr_lo**instance_ptr_lo*instance_ptr_hi**instance_ptr_hi**ats_context**faults_serviced_mask***buffer_entries**auth_tag_mem**mem1**mem2**begin_pool**wlc_pool**lcic_pool**paired_wlc**lcic**wlc**loc**ce_caps**channel_types***tsg_handle**sec2_push**launch_channel***launch_channel*gpu_address_out**gpu_address_out**rng**counter_mem**snapshot_mem**outer**spin***ppCache**pUuidStruct**ext_mapping_info**memory_info**parent_uuid***fault_packet**gpu_platform_info**p2p_handle*parent_gpu_out**parent_gpu_out***parent_gpu_out*out_id**out_id***parent_gpu_pair**out_index**accessed_pages***notification_start**migrated_mask***_a***_b***user_channel***_user_channel**cached_faults**fatal_faults**non_fatal_faults**block_faults*sub_batch_base**sub_batch_base*sub_batch_fault_index**sub_batch_fault_index**service_block_context**last_entry**dest_table**src_table**row**pvmw**src_vma**anon_vma**uvm_hmm_migrate_event**uvm_hmm_gpu_fault_event**same_devmem_page_mask**populated_page_mask**devmem_fault_context*out_va_block**out_va_block***out_va_block***new_block_ptr**init_method**user_rm_mem**map_rm_params**deferred_list**existing_map***new_map**range_node*need_l2_invalidate_out**need_l2_invalidate_out**ptes*ptes_out**ptes_out***ptes_out**need_l2_invalidate_at_unmap***vp_data**offset_in_chunk**gpu_page_tree***phys_addr***sys_mem**semaphore_mem**semaphore_va_range**release_after_tracker**first_managed_range*num_unmap_pages**num_unmap_pages**next_addr**uvm_sgt**out_channel_type**allocation_failed_mask**sysmem_mapping**pte_dir*cur_depth**cur_depth**blackwell**hopper**ampere**turing**volta**pascal**maxwell**fake_gpu***void_data**inval**prefetch_pages**common_locati
ons**unmap_processors**provider***void_pmm***out_chunk***chunk_out**dma_addrs_mask*out_gpu0**out_gpu0***out_gpu0*out_gpu1**out_gpu1***out_gpu1**verif_mem***expected_children**test_state**local_tracker**processor_uuid**first_managed_range_to_migrate***first_managed_range_to_migrate**sema_to_acquire**range_group_ids***parent***next**cover**bounds***new_node**flip**isr**array_index_hint**new_thread_context**stage_mem**processor**sn**lists*inserted_lists**inserted_lists**new_user_channel*out_channel**out_channel***out_channel**out_discarded_pages**map_processors**existing_mask**new_mask**routing_gpu**pages_revoked**allowed_mask**allowed_nid_mask**pending_tracker**page_table_range**pages_changing**page_mask_after**pages_to_unmap**pages_to_write**big_ptes_to_split**pte_batch**tlb_batch**big_ptes_to_merge**new_pages_mask**write_page_mask**clear_page_mask**scratch_page_mask**mapped_pages**unmap_pages**copy_mask**tracker_out**copy_tracker*copied_pages**copied_pages**copy_state**bca**base_address**chunk_offset**mapped_procs**resident_gpus**authorized_processors**authorized_gpus**big_ptes_in*out_gpu_chunk**out_gpu_chunk***out_gpu_chunk**populate_page_mask**node_pages_mask**allocated_region***chunk**tracking**next_chunk_page**first_chunk_page**accessed_by_mask**uvm_lite_gpus*p2p_mem_out**p2p_mem_out***p2p_mem_out***out_va_range**flushed_parent_gpus*out_gpu_va_space**out_gpu_va_space***out_gpu_va_space**gpu_uuid_1**gpu_uuid_2***gpu0***gpu1***pvBuf1***pvBuf2***source**fecStatus***fecErrorCount**replacement**replacee**insertBeforeThis**_timer**other**reader***(unnamed parameter 
0)***left***right**modesetInfo**linkConfig**dpInfo**base_pbn**slots_pbn**headIndex**numLanes**laneCount**linkRate**bFECEnable**pLinkRateTable**pLinkRates**panelPowerOn**dpcdPowerStateD0**pDenylistData**testPattern**dfpCache**hdcpState**msaparams**bFECEnabled**retLink**cableIDInfo**patternInfo**muxState**pbDscSupported**pEncoderColorFormatMask**pLineBufferSizeKB**pRateBufferSizeKB**pBitsPerPixelPrecision**pMaxNumHztSlices**pLineBufferBitDepth**pEdidInfo**pCtrl**pUserCtrl**pProductName**pDtdIndex**rawData**pVer**pEI**pStereoStructureMask**pSideBySideHalfDetail**pT1**pT2**pModesetInfo**pWARData**pBitsPerPixelX16*pSliceCountMask**pSliceCountMask**pOpaqueWorkarea***targets**newModesetInfo**sqNum**rate**lanes**portType**estimatedBw**granularityMultiplier**prDbgInfo**psrEvt**psrErr**psrDbgState**psrState**psrcfg**bLinkActive**bLinkReady**frlRateMask**bFrlReady**dpRegkeyDatabase**bInfo**bCaps**rawByte**bKSV**oldVoltageSwingLane**newVoltageSwingLane**oldPreemphasisLane**newPreemphasisLane**voltageSwingLane**preemphasisLane**trainingScoreLane**postCursor**voltSwingSet**preEmphasisSet**ouiId**chipRevision**model***tag**assemblyBuffer**addressPrefix**receiver**nakData**writeData**numBytes**transactions**SDPStreamSink**dpAux**seg**childAddr**container**modeList**oui**devIdString**hwRevision**swMajorRevision**swMinorRevision**pEnable**pAlpmStatus**pPrStatus**pPrcfg**totalEpr**freeEpr**gpuData**sinkData**revision**major**minor**pPCONCaps**totalLinkSlots**ksv**lastDev**msgManager**pDpStatus**pStreamIDs**psrConfig**lc**hdcpAbortCodesDP12**targetGroup**group**lConfig**pconControl**pConControl**pModesetParams**dscInfo**forcedParams**lowestSelected**pDscParams**pGroupAttached**modesetParams**pErrorCode**localInfo**query**sinkCrc0*sinkCrc1**sinkCrc1*sinkCrc2**sinkCrc2*pErrorCount**pErrorCount**pStatus**pInfoframe**pOutPktBuffer**pVidTransInfo**pClientCtrl**pSrcCaps**pSinkCaps**pSinkEdid**pBaseReg***libHandle*pClientHandles**pClientHandles**pFRLParams**pResults*pBppMinX16**pBppMinX16*pBp
pMaxX16**pBppMaxX16*pFrlBitRateGbps**pFrlBitRateGbps*pNumLanes**pNumLanes**pDscModesetInfo*pMaxFRLRate**pMaxFRLRate**p861ExtBlock***pVoidDescriptor*pCurrentDBLength**pCurrentDBLength***pRawInfo**pVsvdb**vsdbInfo**pMapSz**pHdmiInfo**pTotalEdidExtensions**pEdidExt**vtbCount**pSectionBytes**primary_use_case*newSliceCount**newSliceCount*pMinSliceCount**pMinSliceCount**vfd**type2**type1**localTxSubLink**remoteRxSubLink***ppLinks***remote_end***pLinks**connLink**endPointInfo**endPoint**linkState**conn_info***conn**localLink***connArray**linkParams**ctrlParams**capParams**infoParams**postinitoptimizeParams**initoptimizeParams**subLinkParams**trainParams**removeParams**addParams**getParams*connParams**connParams**readParams**writeParams**statusParams**idParams**versionParams***master**numDevices***link1***link2**pFormat**validPortsMask***ppClientEvent**device_fabric_state**device_blacklist_reason**driver_fabric_state**phys_id**fatal**nonfatal***pciInfo***return_device**user_version**arch_error**hw_error**firmware**pMaskAll**pMaskPresent**laneMask**pGrading**pBezelMarking**pLedState**pMaskPresentChange**pPacketState**pMsgqHead**pMsgqTail**pQueueHead**pQueueTail**boardId**statusData**remap_ram_sel**eeprom**pEngDescUc**pEngDescBc**pGenMsg*pBDisabled**pBDisabled**err_event**word1**word2***error_event**pObjectFormat**pVersion*pSubVersion**pSubVersion*pOldPackedObject**pOldPackedObject**pLinkTrainingErrorInfo**pLinkRuntimeErrorInfo**pPayloadBuffer**port_events**dataSize**switch_pll***fn_args**time**nvswitch_link_handlers**errors**error_count***ppQueue**pEvtDesc**pSeqDesc**pLinkMask**pEncodedValue**resLength**pBank**pPage**pModuleMask**pLinkMaskAll**osfp**pBoardId**lpThreshold**isL1Capable**subMode**bUnlocked***structure**ppacked_size**pEccGeneric**pLinkTrainingErrorInfoParams**pRegValue**pExtErrAddr**pExtErrStat**pEmemCAddr**pEmemDAddr**pLinkMaskActive**pLinkMaskActivePending**pLinkMaskFault**pSavedBank**pSavedPage**pBRewind**pBytesRead**pCounts**pErrorLog**pNvlError*pDay**pDay*p
Month**pMonth**pLaneCrcRates**pSubtype*pErrorSubtype**pErrorSubtype*pBlockType**pBlockType**link_entries**identified_link_entriesCount**localLinkIdx**publicId*ppHeader**ppHeader***ppHeader*objectFormat**objectFormat***ppPackedObject*pPackedObjectSize**pPackedObjectSize**pRomImage**pFieldSrc**pFieldDest*discovery_list_size**discovery_list_size**entry_device**entry_id**entry_version*entry_type**entry_type*entry_chain**entry_chain**discovery_table**discovery_handlers*pCorrectedTotal**pCorrectedTotal*pUncorrectedTotal**pUncorrectedTotal***pNvlGeneric***pNvlErrorCounts***pNvlErrorEvent**pEccError*pEntryIndex**pEntryIndex**idx**pMsgCount**counter_values**routing_lan**routing_id**link_base_entry**packedSize**unpackedSize*packedData**packedData*unpackedData**unpackedData**fieldsCount**minion_ucode_data**minion_ucode_header**pRecSize***cpuAddr***soe_ucode_data***soe_ucode_header*timeoutVal**timeoutVal**inBandData**vc_hop**ports_per_spray_group**replica_offset**replica_valid**port_list**pri_replica_offsets**replica_valid_array**vchop_array**entries_used**spray_group**vchop_array_sg*column_port_offset**column_port_offset**pCertChainLength**outBufferSize**der**bufferUsed**bufferEnd*pCertLength**pCertLength**port_event_count**bit_header**bit_token***pBackingStore***pAddr**aPtr**zPtr**digest***ppProcFiles***pOpenVoid**pA**pB***oldPtr**push_buffer***pClassTable**rmctx*uniDev**uniDev***uniDev***devices**deviceCount**stringMightBeNull**safeString**pStartTime*inMigMode**inMigMode**pModeTimings**pSurf**blendStateColor**blendStateAlpha**textureBindingIndices***values***compressedData**pStream**tex*pSpaVersion**pSpaVersion*pMaxWarps**pMaxWarps*pThreadsPerWarp**pThreadsPerWarp***ppLinearAddress***pVidHeapControlParms***ppAddress**pDmaOffset***pAllocParams***pCpuAddress**pGpuAddress*pCoherent**pCoherent**segment**pHandlePool*pindex**pindex*pmask**pmask***location***newValue***oldValue***value***payloadPtr***maxSubmittedPtr*gpEntry0**gpEntry0*gpEntry1**gpEntry1**rc**dict*symStart**symStart
*symEnd**symEnd***directory***filename**outputLine**outputColumn**matchedAddress***name**symbolName**image**pResolver**stream*presult**presult***string**targetName*physicalEntry**physicalEntry**pLibosBuf**pElfSectionName***buildId**buildIdSection**pSourceName**pLogDecode**pNvLogBuffer**totalNumNewEntries***elfSectionName*logEntrySize**logEntrySize**taskId**pRec**decodedLine**resolver*pTimeoutUs**pTimeoutUs*pScale**pScale**pFlags***pSema**pOut*pSysmemBaseAddr**pSysmemBaseAddr*pSysmemTotalSize**pSysmemTotalSize*pProcessInfo**pProcessInfo***pProcessInfo*ppProcessInfo**ppProcessInfo***ppProcessInfo**pThreadId**ProcName*pMinVoltageuV**pMinVoltageuV*pMaxVoltageuV**pMaxVoltageuV*pStepVoltageuV**pStepVoltageuV*pGpuSpeedoHv**pGpuSpeedoHv*pGpuSpeedoLv**pGpuSpeedoLv*pGpuIddq**pGpuIddq*pChipSkuId**pChipSkuId*pVoltageuV**pVoltageuV*pSetVoltageuV**pSetVoltageuV**nv_i2c_msgs*pOutputBuffer**pOutputBuffer***pOutputBuffer*pInputBuffer**pInputBuffer***pInputBuffer*pOutputBuffer0**pOutputBuffer0***pOutputBuffer0*pOutputBuffer1**pOutputBuffer1***pOutputBuffer1*pTimeInNs**pTimeInNs***virtualAddress***pVirtualAddress*pRegList**pRegList**pRegTable**pBufferLength**cbLen**pRegParmStr*pCbLen**pCbLen**memblock_size**NotifyEvent***eventID*pDeviceExtension**pDeviceExtension***pDeviceExtension***pOsGpuInfo*vgpuHandled**vgpuHandled**pNumaNodeId**free_memory_bytes**total_memory_bytes***pOsInfo***pBaseVAddr*ppBaseVAddr**ppBaseVAddr***ppBaseVAddr***hEvent**pArg1**pPrivData**pNumIntances***pTimer**pRsdpAddr***ppTable**retSize*pInParam**pInParam***pInParam*pOutStatus**pOutStatus*pOutDataSize**pOutDataSize***pOSEvent**bOpenRm**majorVer**minorVer**buildNum*unusedPatchVersion**unusedPatchVersion*unusedProductType**unusedProductType*pTimingsPerStream**pTimingsPerStream*pNumTimings**pNumTimings**pNsPid*pSectionHandle**pSectionHandle***pSectionHandle*ppSectionHandle**ppSectionHandle***ppSectionHandle***pMdl***pFile***ppFile***pArg1***pClientSecurityToken***pCurrentSecurityToken***pUidToken1***pUidToken2**bar
Sizes**sparseOffsets**sparseSizes**sparseCount**isBar064bit**lineRate**pLinkConnection**pPhysAddr**pNodeId*pAddrPhys**pAddrPhys*pAddrRsvdPhys**pAddrRsvdPhys*pAddrSysPhys**pAddrSysPhys*pAddrWidth**pAddrWidth*pMaskWidth**pMaskWidth***ppWq***ppOsRmCaps**pPartitionOsRmCaps***ppExecPartitionOsRmCaps**pGpuOsRmCaps***ppPartitionOsRmCaps**dupedCapDescriptor*pCapDescriptor**pCapDescriptor**pVendor**pSeconds**pMicroSeconds**pGfid**pTegraImpImportData***pRequestData***pResponseData**pRet**pApiRet***ppThis*pfnRmRCReenablePusher**pfnRmRCReenablePusher***pfnRmRCReenablePusher***pPageData**Value*pGuid**pGuid**pOutSize**pObj**arg_this**allocSizeOutput*suffix**suffix*ppAllocList**ppAllocList***ppAllocList**pCpuMappingAttr**pAddrSpace*pMemAperture**pMemAperture*pMemKind**pMemKind*pGpuCacheAttr**pGpuCacheAttr*pGpuP2PCacheAttr**pGpuP2PCacheAttr*contigSegmentSize**contigSegmentSize**pRootOffset**pStandbyBuffer***kernelMappingPriv***kernelMapping*pPrefixMessage**pPrefixMessage**pMemDescOne**pMemDescTwo**pAddresses*ppMemDescNew**ppMemDescNew***ppMemDescNew***Address***Priv**pRemoveCb**addrlist**pInvokingClient**pSharePolicy**pResource**pReference**pDuplicate***ppGpuResource**arg_pCallContext**arg_pParams*pEventNotificationList**pEventNotificationList*ppEventNotificationList**ppEventNotificationList***ppEventNotificationList***ppEventNotification*pEventClient**pEventClient*ppNotifier**ppNotifier***ppNotifier*pppEventNotification**pppEventNotification***pppEventNotification****pppEventNotification**isEventNotified***ppNotifierShare**pNotifierShare***ppNotifShare**pNotifShare*pNotifyIndex**pNotifyIndex**pTimerApi*pRegOps**pRegOps**pbBroadcast***ppGpu**pContextRef**pIsEngineRequired**pRcErrorCode**pTableSize**pSliLinkCircular**pSliLinkEndsMask**pVidLinkCount**arg2**pMboxAperture***pMsgBuf**pMsgSize***pMsgAddr*pNumClassDescriptors**pNumClassDescriptors**pNumEntries***pErrorString**pUgidData**pThreadState**pbDrainRecommended**pbResetRequired***nameStringBuffer**bFipsEnabled**pPdi**pdi***arg2***
pRecords**pRetVal***pValue**pSmcInfo**pRef**pPidArray**pPidArrayCount***pNotifyParams**pOffsetsSizesArr**pAccessMap**pComprData**pDefault**pPending*ppFlcn**ppFlcn***ppFlcn***ppGidString**pGidStrlen**pFlagsFailed**pNV2080EngineTypeCap**pRmEngineTypeCap**pRmEngineList**pNv2080EngineList*pClientEngineID**pClientEngineID*pNumClasses**pNumClasses***ppClassDesc***ppDeviceEntry**pAttachArg**pGpuUuid**arg_pUuid**arg_pGpuArch**pDstGpu**pDstDevice**pSurfaceInfoParams**pGPAP**pPageSizeParams**pCacheFlushParams**pAllocData**ppMemory***ppMemory**pGpuMgr*clid**clid*pBEnable**pBEnable*pBRemove**pBRemove***ppGpuGrp**pDeviceInstance*pGpuIdsOrdinal**pGpuIdsOrdinal*pStartIndex**pStartIndex*pbEnabledByDefault**pbEnabledByDefault*pWinRmFwPolicyArg**pWinRmFwPolicyArg*pbRequestFirmware**pbRequestFirmware*pbAllowFallbackToMonolithicRm**pbAllowFallbackToMonolithicRm*ppUuidStr**ppUuidStr***ppUuidStr*pUuidStrLen**pUuidStrLen*pGpuInitStatus**pGpuInitStatus*pGpuIdsParams**pGpuIdsParams**gpuDomainBusDevice*pGpuCnt**pGpuCnt*pP2PWriteCapsStatus**pP2PWriteCapsStatus*pP2PReadCapsStatus**pP2PReadCapsStatus*pActiveDeviceIdsParams**pActiveDeviceIdsParams*ppTopology**ppTopology***ppTopology*pbSkipHwNvlinkDisable**pbSkipHwNvlinkDisable**pVidHeapAlloc**pHeapFlag**pMemoryRange**phMemory**pAddrRange***ppMemoryPartitionHeap**pteKind**pbIsValid**pMmuLockLo**pMmuLockHi**rsvdISOSize**arg1**pFlaOwnerGpu**pFlaOwnerMemoryManager**pPhysMemDesc*pPteKind**pPteKind**bar1Info**pMemSize**pAlign**pBankPlacementLowData**pPlacementStrategy**pRetAttr**pRetAttr2***arg5***arg4**pMappingGpu***ppMemdesc**pbTopLevelScrubberEnabled**pbTopLevelScrubberConstructed***ppMemPoolInfo**pExternalDevice**phClient**phDevice**phSubdevice***ppPma**kind**pInsertRegion*pFbUsedSize**pFbUsedSize**pTransferInfo**pSrcInfo***pBuf**pDstInfo**pAllocRequest**o**pReleaseLocks**pIntrService**pEngines**pEngMask***ppIntrTable***pInterruptVectors**pLeafVals**pIntrmode***ppMcEngines**pIntrPending**pbCtxswLog**pSmallestVector**pDPCQueue**pServiced**pRange**p
KernelGraphicsContextShared**pKernelGraphicsContext**pKernelGraphicsContextUnicast**pKernelGraphicsObject*pbAddEntry**pbAddEntry**pPhysAddrs*pbNoMorePages**pbNoMorePages***ppBuffers**pCtxBufferType*pBufferCountOut**pBufferCountOut**pFirstGlobalBuffer**pKernelSMDebuggerSession*pMmuFaultInfo**pMmuFaultInfo*ppKernelGraphicsContextUnicast**ppKernelGraphicsContextUnicast***ppKernelGraphicsContextUnicast**pKernelGraphicsContextDst**pFifoEngineId**pInternalId**pExternalId*ppKernelGraphicsContext**ppKernelGraphicsContext***ppKernelGraphicsContext***clients***channels**pMmuExceptInfo**pMmuExceptionInfo*ppPbdmaFaultIds**ppPbdmaFaultIds***ppPbdmaFaultIds**pNumPbdmas**pMigDevice**pEngineFifoList**pVChid**pPresent**pUserdAperture**pUserdAttribute**pAddrShift**bar1Offset**bar1MapSize*pBar1MapOffset**pBar1MapOffset*pBar1MapSize**pBar1MapSize**pPartnerListParams**pEngineInfoList***ppPbdmaIds**pGeneratedToken**pOutVal***ppKernelChannel**pWorkSubmitToken**pShift**pbInstProtectedMem**pInstAttr***ppInstAllocList*pPreviousRLPreemptedOffset**pPreviousRLPreemptedOffset**pPremptedOffset**arg10*pEngLookupTblSize**pEngLookupTblSize*pRmEngineType**pRmEngineType*pPbdmaId**pPbdmaId**pNumEngines***pPostSchedulingEnableHandlerData***pPreSchedulingDisableHandlerData*pKernel**pKernel**pChGrpID**pChidOffset**pChannelCount**pCIIDs**pCIIds**pExecPartId**pCtsId***ppSkyline**pConfigRequestsPerCi*pCeInst**pCeInst*pSmallestComputeSize**pSmallestComputeSize**pComputeInstanceSaves*ppProfile**ppProfile***ppProfile**pEngineTypes**pEngineCount**pLocalEngType*pGlobalEngType**pGlobalEngType*ppKernelMIGGpuInstance**ppKernelMIGGpuInstance***ppKernelMIGGpuInstance**pKernelMigManager**pPhysicalEngineMask**pLocalEngineMask**pSourceEngines**pOutEngines**pExclusiveEngines**pSharedEngines**pAllocatableEngines**pRefA**pRefB**pSrcRef**pDstRef**pDstEngineType**numaNodeId**KernelMemorySystem**pIoAperture**rsvdPhysAddr*ppGPUInstanceMemConfig**ppGPUInstanceMemConfig***ppGPUInstanceMemConfig*pPartitionSizeFlag**pPartitionSizeF
lag*pSizeInBytes**pSizeInBytes**pGspFeaturesParams*pCeUpdatePceLceMappingsParams**pCeUpdatePceLceMappingsParams**pRcRecovery**pConfigParams**pChannelInfo**pDisableChannelParams**pBlackListParams*pOfflinedParams**pOfflinedParams**pVideoEventParams**powerInfoParams**pSampleParams**pInfoParams**pLevelInfoParams*pPowerInfoParams**pPowerInfoParams*pPowerStateParams**pPowerStateParams**pPerfmonParams**pBiosPostTime**pBiosGetSKUInfoParams**pBiosInfoParams*pSpdmRetrieveSessionTranscriptParams**pSpdmRetrieveSessionTranscriptParams*pRecordParams**pRecordParams*pReportParams**pReportParams*pDumpParams**pDumpParams*pDumpSizeParams**pDumpSizeParams*pWatchdogInfoParams**pWatchdogInfoParams*pReadVirtMemParam**pReadVirtMemParam*pTimerRegOffsetParams**pTimerRegOffsetParams*pSetSemaMemValidationParams**pSetSemaMemValidationParams*pSetSemMemoryParams**pSetSemMemoryParams*pSetMemoryNotifiesParams**pSetMemoryNotifiesParams**pSetEventParams*pTriggerFifoParams**pTriggerFifoParams*pGetPidInfoParams**pGetPidInfoParams*pGetPidsParams**pGetPidsParams*pGidInfoParams**pGidInfoParams*pIdParams**pIdParams*pQueryRulesParams**pQueryRulesParams*pSetRulesParams**pSetRulesParams*pRegParams**pRegParams*pClassParams**pClassParams*pEngineParams**pEngineParams*pGpuSimulationInfoParams**pGpuSimulationInfoParams*pSdmParams**pSdmParams*pEncoderCapacityParams**pEncoderCapacityParams*pShortNameStringParams**pShortNameStringParams*pNameStringParams**pNameStringParams*pGpuOptimusInfoParams**pGpuOptimusInfoParams*pBridgeInfoParams**pBridgeInfoParams**pAttribBufferSizeParams**pGetChannelUidParams**pGetCidGrpParams**pUserdLocationParams**pChannelMemParams**pFifoInfoParams*pTlbInvalidateParams**pTlbInvalidateParams**pOsOfflinedParams**pGpuCacheParams**pFbInfoParams**pIsKindParams**pFbMemParams**pGFBRIParams*pBoostParams**pBoostParams*pSystemExecuteParams**pSystemExecuteParams**pBarInfoParams**pBusInfoParams**pPciInfoParams**pDmaInfoParams**pServiceInterruptParams**pReplayableFaultOwnrshpParams**pManufacturerParams**
pArchInfoParams**ppSubdevice***ppSubdevice*pGpuTime**pGpuTime*pCpuTime**pCpuTime*pBar0MapOffset**pBar0MapOffset*pBar0MapSize**pBar0MapSize**pGpuTimestampOffsetLo**pGpuTimestampOffsetHi**pPublicEvent**pEventPublic*pTime**pTime*pTimeUntilCallbackNs**pTimeUntilCallbackNs***Object*pFutureTime**pFutureTime*pPastTime**pPastTime*pDiffTime**pDiffTime***ppEventPublic**powerInfo***pArgs**bTryAgain**bGC6Support*bGCOFFSupport**bGCOFFSupport**pRegkeyValue*pOption**pOption***vf_pci_info**hbmAddr*sysfs_val**sysfs_val**numVgpuTypes**vgpuTypeIds**vgpuId**gpu_instance_id**placement_id**faultsCopied***ops_cmd***pGpuInstanceInfo***ppGpuInstanceInfo*pbStaticPhysAddrs**pbStaticPhysAddrs*pbAcquireReleaseAllGpuLockOnDup**pbAcquireReleaseAllGpuLockOnDup***pMemInfo**pMemArea*phMemoryDuped**phMemoryDuped*ppMemInfo**ppMemInfo***ppMemInfo*pbCanMmap**pbCanMmap*pCacheType**pCacheType*pbReadOnlyMem**pbReadOnlyMem*pMemoryType**pMemoryType**pDmaAddresses***ppPriv***p2pObject***pMigInfo***pPlatformData**pPhysicalAddresses**pEntries***pGpuInfo***ppMigInfo**pMemCpuCacheable***ppGpuUuid*ppGpuInfo**ppGpuInfo***ppGpuInfo**pWreqMbH**pRreqMbH***pI2cAdapter***pNvWorkItem***strp**nvRegistryDwords**load*NeedBottomHalf**NeedBottomHalf**pMapperRef**pMappableRef***ppMapping*pDependantRef**pDependantRef*pDescendantRef**pDescendantRef*pAncestorRef**pAncestorRef*ppAncestorRef**ppAncestorRef***ppAncestorRef*ppResourceRef**ppResourceRef***ppResourceRef**pResDesc**pRsParams***ppEntry*ppCallContext**ppCallContext***ppCallContext*ppParams**ppParams***ppParams*pResourceCommmon**pResourceCommmon**pIndex*pClientRes**pClientRes***ppFirstLowPriRef***ppCpuMapping**pSecInfo**pClientEntry*pTargetRef**pTargetRef**phResource*pClientDst**pClientDst*pRightsRequired**pRightsRequired*ppResource**ppResource***ppResource**arg_pAllocator**pFbBaseAddress***clientDevNodeAddress***clientParmStrAddress***clientBinaryDataAddress**pBinaryDataLength**Entry*p2pOptimalReadCEs**p2pOptimalReadCEs*p2pOptimalWriteCEs**p2pOptimalWriteCEs*pBusPeerIds**
pBusPeerIds*pBusEgmPeerIds**pBusEgmPeerIds**pRmCliRes*controlParams**controlParams*pExtFabricMgmtParams**pExtFabricMgmtParams*pVgpuStatusParams**pVgpuStatusParams*vgpuVersionInfo**vgpuVersionInfo*pAcctPidsParams**pAcctPidsParams*pAcctInfoParams**pAcctInfoParams**pAddressSpaceParams*pSystemEventDataParams**pSystemEventDataParams*pEventSetNotificationParams**pEventSetNotificationParams*pRpcDumpParams**pRpcDumpParams*pRpcProfileParams**pRpcProfileParams*pGsyncIdInfoParams**pGsyncIdInfoParams*pGsyncAttachedIds**pGsyncAttachedIds*pMemOpEnableParams**pMemOpEnableParams*pGpuDetachIds**pGpuDetachIds*pWaitAttachIdParams**pWaitAttachIdParams*pAsyncAttachIdParams**pAsyncAttachIdParams*pGpuAttachIds**pGpuAttachIds*pGpuProbedIds**pGpuProbedIds*pDeviceIdsParams**pDeviceIdsParams*pGpuInitStatusParams**pGpuInitStatusParams*pGpuIdInfoParams**pGpuIdInfoParams*pGpuAttachedIds**pGpuAttachedIds*pTimestampParams**pTimestampParams*pRmInstanceIdParams**pRmInstanceIdParams*pGpusPowerStatus**pGpusPowerStatus*pP2PParams**pP2PParams*pSysParams**pSysParams*pChipsetInfo**pChipsetInfo**pAcpiMethodParams*pFeaturesParams**pFeaturesParams*pCpuInfoParams**pCpuInfoParams*pFd**pFd*pSkipDeviceRef**pSkipDeviceRef*ppRsClient**ppRsClient***ppRsClient*pbKernel**pbKernel*pOSInfo**pOSInfo***pOSInfo*ppClientHandleList**ppClientHandleList***ppClientHandleList*pClientHandleListSize**pClientHandleListSize*pRmFreeParams**pRmFreeParams**pVgpuNsIntr**pPasid**pTarget**pAllocInfo**pRangeLo**pRangeHi**ppVAS***ppVAS*pMemReserveInfo**pMemReserveInfo**pChunkSize**pPageSize**pPoolAllocMemDesc**pPoolMemDesc***pCtx*ppMemReserveInfo**ppMemReserveInfo***ppMemReserveInfo**pVeidCount**pSpanStart**pGrIdx**pVeidStart**pVeidSizePerSpan**pVeidStepSize**pPpcMask***ppKernelGraphics**pRouteInfo**pObjectType**bHeterogeneousModeEnabled**placementId*guestFbLength**guestFbLength*guestFbOffset**guestFbOffset*gspHeapOffset**gspHeapOffset*guestBar1PFOffset**guestBar1PFOffset*ppHostVgpuDevice**ppHostVgpuDevice***ppHostVgpuDevice**pPhysGpuInfo*
*vfPciInfo**partitionFlag**user_min_supported_version**user_max_supported_version*pgpuString**pgpuString**pKernelVgpuMgr*vgpuUuid**vgpuUuid*ppKernelHostVgpuDevice**ppKernelHostVgpuDevice***ppKernelHostVgpuDevice*availInstances**availInstances**gpuInstanceId*maxInstanceVgpu**maxInstanceVgpu*ppVgpuConfigEventInfoNode**ppVgpuConfigEventInfoNode***ppVgpuConfigEventInfoNode*pVgpuInfo**pVgpuInfo**vgpuType***vgpuType**pFbInfo*pKernelHostVgpuDeviceShr**pKernelHostVgpuDeviceShr**pBandwidth**pGpu0**pKernelBif0**pGpu1**bufConfigSpace**addrReg*pAddrReg**pAddrReg**pMirrorBase**pMirrorSize**bifAtomicsmask**pNumAreas**pBif**pciStart**pcieStart**pRegmapRef**pBusInfo**pPciePowerControlValue**pciLinkMaxSpeed**pMemoryList**pSpaValue**pMemDescIn**pCpuPtrIn**memDescIn*pOrigVidOffset**pOrigVidOffset*pbAllowDirectMap**pbAllowDirectMap*bDirectSysMappingAllowed**bDirectSysMappingAllowed*ucFlaBase**ucFlaBase*ucFlaSize**ucFlaSize**pKernelBus0**pKernelBus1**dma_size**pDmaAddress**pDmaSize**peer0***ppP2PDomMemDesc**nvlinkPeer**pGpuPeer***ppWMBoxMemDesc**pMailboxAreaSize**pMailboxAlignmentSize**pMailboxMaxOffset64KB**pMailboxBar1MaxOffset64KB***pCpuPtr**pCpu**pBar1VARange**pAperOffset**numAreas**config_params*config_value**config_value*isBar64bit**isBar64bit**pNvjpgCapsParams**pBspCapParams**pMsencCapsParams**pGetLatencyBufferSizeParams**flushParams**pModeParams**pSubDeviceCountParams**pClassListParams**pHostCapsParams**pChannelParams**pKfifoCapsParams**pFbCapsParams**pDmaCapsParams**pBifPciePowerControlParams***ppDevice*pAcsRoutingConfig**pAcsRoutingConfig*pBR04BusArray**pBR04BusArray*pBR04RevArray**pBR04RevArray*pBR04Count**pBR04Count*pBR03Bus**pBR03Bus*pBR04Bus**pBR04Bus*pBRNot3rdParty**pBRNot3rdParty*pNoUnsupportedBRFound**pNoUnsupportedBRFound*pNoOnboardBR04**pNoOnboardBR04*pGpu2**pGpu2*pPciSwitchBus**pPciSwitchBus*cap_offset**cap_offset*pfunc**pfunc*pvendorID**pvendorID*pdeviceID**pdeviceID**pbusBrdg*pdevice**pdevice*vendorID**vendorID*deviceID**deviceID***pGpus**pDevCtrlStatus**pL1Ss**pAE
R*pPort**pPort*pChipsetInfoIndex**pChipsetInfoIndex*pGSI**pGSI*pTempLtrSupported**pTempLtrSupported*pLinkCaps2**pLinkCaps2**genSpeed*ppEngineCallback**ppEngineCallback***ppEngineCallback**pNvDumpState**pRcdb**pReasonData**pRcdError***pVoidGpu***ppRec***ppRmDiagWrapBuffRec**pCommonGsp**pRmDiagGsp**pRmDiagWrapBuffRec**pFieldDesc***pDelete**phDstObject**pDstObject**pVGpu*pIsCallingContextPlugin**pIsCallingContextPlugin***node*ppErrorBlock**ppErrorBlock***ppErrorBlock*pVirtAddr**pVirtAddr*pTbl**pTbl**osPageCount*rmPageCount**rmPageCount*pOldCallContext**pOldCallContext*ppOldCallContext**ppOldCallContext***ppOldCallContext*pNewCallContext**pNewCallContext**ppClientEntry***ppClientEntry*pUnmapParams**pUnmapParams*pAccess**pAccess*pRmCtrlExecuteCookie**pRmCtrlExecuteCookie*pClientLockType**pClientLockType**pRmCtrlParams*pbSupportForceROLock**pbSupportForceROLock*exportedEntry**exportedEntry*pFreeParams**pFreeParams**pRmAllocParams***pControlParams*phSecondClient**phSecondClient**pParamCopy*ppParamCopy**ppParamCopy***ppParamCopy*pParamsSize**pParamsSize**pClassInfo*ppShare**ppShare***ppShare**pHalspecParent*phClientList**phClientList*pAccessControl**pAccessControl*phDomain**phDomain**pSession**pNvfbcSession**pNvencSession**pEventBufferRef**arg_pParent**pNotifyBuffer**NotifyXlate**pChannelDescendant***ppMethods**pNumMethods**pbMemCpuCacheable*pHalImpl**pHalImpl**feature***unused*pProtection**pProtection***pWorkItem**MoreEvents**pUnbindCtxDmaParams**pBindCtxDmaParams**pUpdateCtxDmaParams*ppContextDma**ppContextDma***ppContextDma**pCachedIntr**headIntrMask**thisGpu**pLowLatencyLock**pDpModesetData**pDpmodesetData**pChildGpu**pMasterScanLock**pMasterScanLockPin**pSlaveScanLock**pSlaveScanLockPin**pOrigLsrMinTime**pComputedLsrMinTime*computedLsrMinTime**computedLsrMinTime**pLineCount**pFrameCount**pChannelNum**pRgLineCallback**pBufferContextDma**pIntrServicedHeadMask**pDispChnClass**pNotifierType**keyMaterialBundle**classEngineID**classID**rmEngineID*pClassEngineID**pClassEngine
ID*pClassID**pClassID**pRmEngineID**pGetKmbParams**engDesc**pUserdAddr**pUserdAper**phUserdMemory**pUserdOffset*bar1MapOffset**bar1MapOffset**userBase**pChannelGpfifoParams*pStopChannelParams**pStopChannelParams**pTokenParams**pSetErrorNotifierParams***ppCpuVirtAddr**pKernelRc*pGpuPartitionId**pGpuPartitionId*pComputeInstanceId**pComputeInstanceId*pFlushFlags**pFlushFlags*pDisable**pDisable*pSoftDisable**pSoftDisable*pErrorParams**pErrorParams*num_infos**num_infos**pGenericKernelFalcon**pGenKernFlcn**pBufDesc**pFalconConfig**arg_pGpu**arg_pFalconConfig**pErrorStatus**pCore**pCode**pc**pKerneFlcn**pProtobufData*phObject**phObject***pParamStructPtr**pPreserveLogBufferFull**pKernelGSp**pMailbox0**pMailbox1**preparedCmd**pPreparedCmd***ppVbiosImg**pPayLoad***ppBinStorageImage***ppBinStorageDesc**pReport***ppBooterUnloadUcode***ppBooterLoadUcode***ppScrubberUcode***ppFwsecUcode**pVbiosVersionCombined***pRunCpuSeqParams***ppMemdescRadix3**pGspInitArgs**pVmm*pPteSpaceMap**pPteSpaceMap*pRusd**pRusd*policyInfo**policyInfo*pUpstreamPortPciInfo**pUpstreamPortPciInfo**pGpuDb**pMQI*ppMQI**ppMQI***ppMQI***ppMQCollection**pDceClient***msgData***gspFwHandle***gspFwLogHandle**pDeviceReference**aperture**pci_info**pImportSgt***pImportPriv**link_change**pKernelNvlink0**pKernelNvlink1**switchLinkMasks*sysmemOptimalReadCEs**sysmemOptimalReadCEs*sysmemOptimalWriteCEs**sysmemOptimalWriteCEs**pNumActiveLinksPerIoctrl***paramAddr*src/kernel/gpu/mmu/bar2_walk.c**src/kernel/gpu/mmu/bar2_walk.c*pStagingBufferDesc*memdescMapOld(pStagingBufferDesc, 0, pStagingBufferDesc->Size, NV_TRUE, NV_PROTECT_READ_WRITE, (void **)&pStagingDescMapping, &pPriv) == NV_OK**memdescMapOld(pStagingBufferDesc, 0, pStagingBufferDesc->Size, NV_TRUE, NV_PROTECT_READ_WRITE, (void **)&pStagingDescMapping, &pPriv) == 
NV_OK*pStagingDescMapping*pStagingBufferMapping**pStagingBufferMapping*pOutputBufferDesc*pOutputBufferMapping**pOutputBufferMapping*pOutputDescMapping*oldBar0Mapping*currentBar0Mapping*bRestore*pWindowAddress*pSp**pSp***pSp*pDataOffset**pDataOffset**bGlobalEntry**pStagingDescMapping**string1*string2**string2***ctx*pageDirInit*pageTblInit***channelHandle**pGpuExternalMappingInfo*pKernelBus->bar2[gfid].pageDirInit + pKernelBus->bar2[gfid].pageTblInit < pKernelBus->bar2[gfid].numPageDirs + pKernelBus->bar2[gfid].numPageTbls**pKernelBus->bar2[gfid].pageDirInit + pKernelBus->bar2[gfid].pageTblInit < pKernelBus->bar2[gfid].numPageDirs + pKernelBus->bar2[gfid].numPageTbls**pGpuExternalPhysAddrInfo***pPmaObject***pPmaStats***faultBuffer***pFmt**bEccDbeSet*pKernelBus->bar2[gfid].bBootstrap || IS_GFID_VF(gfid) || KBUS_BAR0_PRAMIN_DISABLED(pGpu) || kbusIsCpuVisibleBar2Disabled(pKernelBus) || kbusIsBarAccessBlocked(pKernelBus)**pKernelBus->bar2[gfid].bBootstrap || IS_GFID_VF(gfid) || KBUS_BAR0_PRAMIN_DISABLED(pGpu) || kbusIsCpuVisibleBar2Disabled(pKernelBus) || kbusIsBarAccessBlocked(pKernelBus)**pDeviceId**pSubdeviceId**hClient**hDevice**hSubDevice**gpuGuid*cesCaps**cesCaps**gpuOffset***tsgHandle*outDevice**outDevice***outDevice**pcieLinkRate**pBytesFree**pDynamicBlacklistSize**pStaticBlacklistSize**pNumChunks***ppPersistList*pLargestFree**pLargestFree*pRegionBase**pRegionBase**pLargestOffset**pRegSize***ppRegionDesc***pCtxPtr***ctxPtr**allocationOptions**pBlacklistPageBase**authTagData**gpuExternalPhysAddrsInfo**accessCntrInfo**accessCntrConfig**accessBitsInfo**gpuMemoryInfo**vaspace*NVRM: PA 0x%llX for VA 0x%llX-0x%llX **NVRM: PA 0x%llX for VA 0x%llX-0x%llX *call to mmuFmtLevelVirtAddrHi***dupedVaspace**pGPUInstanceSubscription***ppGPUInstanceSubscription*pSubLevels**pSubLevels*NVRM: SubLevel %u = PA 0x%llX ***ppMemBuffer*ppMemPriv**ppMemPriv***ppMemPriv**NVRM: SubLevel %u = PA 0x%llX *NVRM: SubLevel %u = INVALID **NVRM: SubLevel %u = INVALID 
**pRpcStructureCopy*memmgrMemWrite(pMemoryManager, &surf, entry.v8, pLevelFmt->entrySize, transferFlags) == NV_OK**memmgrMemWrite(pMemoryManager, &surf, entry.v8, pLevelFmt->entrySize, transferFlags) == NV_OK*entryStart**pGpuState*NULL != pKernelBus->virtualBar2[gfid].pPageLevels**NULL != pKernelBus->virtualBar2[gfid].pPageLevels***pParsedFaultInfo**pCancelInfo**pMmuFaultType**pMmuFaultAddress**pHiVal**pLoVal*pFakeSparse**pFakeSparse*pLevelFmt->numSubLevels == 1**pLevelFmt->numSubLevels == 1**pClientFaultBuf**pFaultsCopied***pParsedFaultEntry**entriesCopied*pKernelBus->virtualBar2[gfid].pPageLevels != NULL**pKernelBus->virtualBar2[gfid].pPageLevels != NULL**pPutOffset**pGetOffset**pGmmu***pFaultBufferGet***pFaultBufferPut***pFaultBufferInfo***faultIntr***faultIntrSet***faultIntrClear**faultMask***pPrefetchCtrl***pHubIntr***pHubIntrEnSet***pHubIntrEnClear*pPdePcfSw**pPdePcfSw*pPdePcfHw**pPdePcfHw*pPtePcfSw**pPtePcfSw*pPtePcfHw**pPtePcfHw**pPdeMulti**pPdeApertures**pLevels**pPde**pPteApertures*bUseTempMemDesc*call to kbusInitInstBlk_DISPATCH**pTimeOut**pTestParams**pRootPageDir**pOffsetLo**pDataLo**pOffsetHi**pDataHi*call to kgmmuGetFaultRegisterMappings_DISPATCH*pGetParams*pFmtGmmu*maxV3Levels*level < maxV3Levels*src/kernel/gpu/mmu/gmmu_trace.c**level < maxV3Levels**src/kernel/gpu/mmu/gmmu_trace.c*level < 4**level < 4*level == 0**level == 0**pFmtEntry*pGmmuEntry*call to kgmmuTranslatePdePcfFromHw_DISPATCH*kgmmuTranslatePdePcfFromHw_HAL(pKernelGmmu, pdePcfHw, gmmuFieldGetAperture(&pFmtPde->fldAperture, pGmmuEntry->v8), &pdePcfSw) == NV_OK**kgmmuTranslatePdePcfFromHw_HAL(pKernelGmmu, pdePcfHw, gmmuFieldGetAperture(&pFmtPde->fldAperture, pGmmuEntry->v8), &pdePcfSw) == NV_OK*call to nvFieldGetBool**pFmtPte*call to _gmmuGetPtePa*pFmtLevel*PTE_256G**PTE_256G*PTE_512M**PTE_512M*PTE_2M**PTE_2M*PTE_128K**PTE_128K*PTE_64K**PTE_64K*PTE_4K**PTE_4K*[0x%x]: **[0x%x]: *call to _gmmuPrintPa*fldValid*Vld=%d, **Vld=%d, *Kind=0x%x, **Kind=0x%x, *PtePcf=%d**PtePcf=%d*call to 
kgmmuTranslatePtePcfFromHw_DISPATCH*kgmmuTranslatePtePcfFromHw_HAL(GPU_GET_KERNEL_GMMU(pGpu), ptePcfHw, nvFieldGetBool(&pFmt->fldValid, pGmmuEntry->v8), &ptePcfSw) == NV_OK**kgmmuTranslatePtePcfFromHw_HAL(GPU_GET_KERNEL_GMMU(pGpu), ptePcfHw, nvFieldGetBool(&pFmt->fldValid, pGmmuEntry->v8), &ptePcfSw) == NV_OK*(Vol=%d, Priv=%d, RO=%d, Atomic=%d, ACE=%d)**(Vol=%d, Priv=%d, RO=%d, Atomic=%d, ACE=%d)*fldPrivilege*Priv=%d, **Priv=%d, *fldReadOnly*RO=%d, **RO=%d, *RD=%d, **RD=%d, *WD=%d, **WD=%d, *fldEncrypted*Enc=%d, **Enc=%d, *Vol=%d, **Vol=%d, *Lock=%d, **Lock=%d, *AtomDis=%d, **AtomDis=%d, *CTL=0x%x, **CTL=0x%x, *CTL_MSB=%d, **CTL_MSB=%d, *pFaultBufferAddrSpace**pFaultBufferAddrSpace*pFaultBufferAttr**pFaultBufferAttr*pInstBlkDesc**pInstBlkDesc*pInstBlkParams**pInstBlkParams*pFaultBufferPages**pFaultBufferPages*pCompr**pCompr*pPteInfo**pPteInfo**serviced*regh**regh*regl**regl*res_lo**res_lo*res_hi**res_hi**pData0**pData1**pAlloc***pMem**pPreallocatedBlock***pPreallocatedBlock***pSpinlock***pKernel*pUser**pUser***pUser*call to _gmmuGetPdePa**pFmtPde*PT_128K: **PT_128K: *PT_64K: **PT_64K: *PT_4K: **PT_4K: *(Dual) **(Dual) **pStats**pTracking***pData0***pData1*pPattern**pPattern*invalid ***pDestination***pSource**invalid *pRwLock**pRwLock*Size=1/%c**Size=1/%c**pMutex*PdePcf=%d**PdePcf=%d*kgmmuTranslatePdePcfFromHw_HAL(GPU_GET_KERNEL_GMMU(pGpu), pdePcfHw, aperture, &pdePcfSw) == NV_OK**kgmmuTranslatePdePcfFromHw_HAL(GPU_GET_KERNEL_GMMU(pGpu), pdePcfHw, aperture, &pdePcfSw) == NV_OK*(Sparse=%d, Vol=%d, ATS=%d)**(Sparse=%d, Vol=%d, ATS=%d)*Vol=%d**Vol=%d*NVRM: MMUTRACE: VA[0x%08llx-%08llx] PDB: **NVRM: MMUTRACE: VA[0x%08llx-%08llx] PDB: **pRes*[%d]**[%d]*[0x%08llx] **[0x%08llx] *call to gmmuFieldGetAddress**pPrng*substr**substr**delim*saveptr**saveptr***saveptr*separator**separator*call to 
gmmuFmtEntryIsPte*cat**cat*str1**str1*str2**str2**pCpuInfo**pFromObj**pExportInfo*ppNewObject**ppNewObject***ppNewObject**printf_format*pszExpr**pszExpr**pszFileName**pBufferHandle***pList**sysnoncoh**syscoh***pNext**pFirst***pFirst***pLast**pPool*pFreeListLength**pFreeListLength*pPartialListLength**pPartialListLength*pSrcDesc*pFullListLength**pFullListLength*pPageHandle**pPageHandle*pPageHandleList**pPageHandleList*pDstDesc*src/kernel/gpu/mmu/gmmu_walk.c*NVRM: [GPU%u]: GVAS(%p) PA 0x%llX -> PA 0x%llX, Entries 0x%X-0x%X **src/kernel/gpu/mmu/gmmu_walk.c**NVRM: [GPU%u]: GVAS(%p) PA 0x%llX -> PA 0x%llX, Entries 0x%X-0x%X *memmgrMemCopy(GPU_GET_MEMORY_MANAGER(pGpu), &dest, &src, sizeOfEntries, transferFlags)**memmgrMemCopy(GPU_GET_MEMORY_MANAGER(pGpu), &dest, &src, sizeOfEntries, transferFlags)*call to _getMaxPageDirs***pLeaf*pAccessMask**pAccessMask*pRightsArray**pRightsArray*pRightsPresent**pRightsPresent*pRightsRequested**pRightsRequested*pShareListDst**pShareListDst*pShareListSrc**pShareListSrc*pShareList**pShareList*pRightsToUpdate**pRightsToUpdate*pEntries != NULL**pEntries != NULL*pAvailableRights**pAvailableRights*NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X = %s **NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X = %s *paramCopies**paramCopies**baseRanges*pSparseEntry**pSparseEntry*numBaseRanges**numBaseRanges**carveouts**pBigRange**pSecondPartAfterSplit*pBitVectorDst**pBitVectorDst*pBitVectorSrc**pBitVectorSrc*pBitVector**pBitVector*pRawMask**pRawMask***pRawMask*pBitVectorA**pBitVectorA*pBitVectorB**pBitVectorB*NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X = %s FAIL **NVRM: [GPU%u]: PA 0x%llX, Entries 0x%X-0x%X = %s FAIL *pIndices**pIndices**pAccessCounterBuffer*pNv4kEntry**pGetParams*NVRM: [GPU%u]: PA 0x%llX, Entry 0x%X **NVRM: [GPU%u]: PA 0x%llX, Entry 0x%X *ONEBITSET(curMemSize)**ONEBITSET(curMemSize)*recipExp*recipExp == minRecipExp**recipExp == minRecipExp*memmgrMemWrite(GPU_GET_MEMORY_MANAGER(pGpu), &dest, entry.v8, pLevelFmt->entrySize, 
transferFlags)**memmgrMemWrite(GPU_GET_MEMORY_MANAGER(pGpu), &dest, entry.v8, pLevelFmt->entrySize, transferFlags)*NVRM: [GPU%u]: PA 0x%llX (%s) **NVRM: [GPU%u]: PA 0x%llX (%s) *null**null*NVRM: [GPU%u]: PA 0x%llX for VA 0x%llX-0x%llX **NVRM: [GPU%u]: PA 0x%llX for VA 0x%llX-0x%llX *call to rmMemPoolFree*NULL != pGpuState**NULL != pGpuState*pParentMemDesc**pParentMemDesc*pParentMemDesc->RefCount - listCount(pParentMemDesc->pSubMemDescList) == 1**pParentMemDesc->RefCount - listCount(pParentMemDesc->pSubMemDescList) == 1*pParentMemDescNext**pParentMemDescNext*(NULL != pMemDesc)**(NULL != pMemDesc)*(memSize <= pMemDesc->ActualSize)**(memSize <= pMemDesc->ActualSize)*pMemDesc->pSubMemDescList != NULL**pMemDesc->pSubMemDescList != NULL*(NV_OK == status)**(NV_OK == status)**pMemDescTmp*call to kgmmuGetPDBAllocSize_DISPATCH*newMemSize*call to kgmmuGetPDEBAR1Aperture*call to kgmmuGetPDEBAR1Attr*bAllowSysmem*call to kgmmuGetPDEAperture*call to kgmmuGetPDEAttr**pIdx*memPoolList**memPoolList*pDataSize**pDataSize**pBinStorage*ppData**ppData***ppData*zArray**zArray*outBuffer**outBuffer**pDpuIpHal**pDispIpHal**pRmVariantHal**pTegraChipHal**pChipHal**pTD***pCondData**arg_pApertures**pParentAperture**arg_pParentAperture**arg_pMapping**pRegisterAccess*pFilter**pFilter*pParam**pParam***pParam*ppFilter**ppFilter***ppFilter**pGroup**pNbsiObj*pWantedGlobSource**pWantedGlobSource*pWantedGlobIdx**pWantedGlobIdx*pRtnObj**pRtnObj*pRtnObjSize**pRtnObjSize*pTotalObjSize**pTotalObjSize*pRtnGlobStatus**pRtnGlobStatus*call to mmuFmtFindLevelParent*pElementHash**pElementHash*pRetSize**pRetSize*NULL != pParent**NULL != pParent*fldAddr*fldAddrSysmem*partialPtVaRangeBase**partialPtVaRangeBase*fldSizeRecipExp*rtnHashArray**rtnHashArray*bPartialTbl**pAcpiDsmFunction*newMemSize >= minMemSize*pAcpiDsmSubFunction**pAcpiDsmSubFunction**newMemSize >= minMemSize*pGetObjByTypeSubFunction**pGetObjByTypeSubFunction*pGetAllObjsSubFunction**pGetAllObjsSubFunction*newMemSize > *pMemSize**newMemSize > 
*pMemSize**pRemappedDsmSubFunction*call to kgmmuGetPTEBAR1Aperture*pNbsiObjData**pNbsiObjData***pNbsiObjData*call to kgmmuGetPTEBAR1Attr*pSzOfpNbsiObjData**pSzOfpNbsiObjData**thisHal***pNode**pRoot***pRoot*pTracker**pTracker*pDepends**pDepends*ppPrereq**ppPrereq***ppPrereq*pVect**pVect***pVect*memPoolListCount <= NV_ARRAY_ELEMENTS(memPoolList)*ppValue**ppValue***ppValue**memPoolListCount <= NV_ARRAY_ELEMENTS(memPoolList)*pVector**pVector*pMemDescTemp*call to memdescSetAddressSpace*pUserCtx->pGpuState->pPageTableMemPool != NULL**pUserCtx->pGpuState->pPageTableMemPool != NULL*call to rmMemPoolAllocate*call to _gmmuScrubMemDesc*rm_page_table_surface*call to mmuFmtConvertLevelIdToSuffix**rm_page_table_surface***vardataBuffAddr*recordBuffAddr**recordBuffAddr***recordBuffAddr*pBNotify**pBNotify*pPostTelemetryEvent**pPostTelemetryEvent**pUpdateParams*pEnableParams**pEnableParams*bPacked*call to _gmmuMemDescCacheAlloc*call to _gmmuMemDescCacheCreate*NVRM: [GPU%u]: [%s] PA 0x%llX (0x%X bytes) for VA 0x%llX-0x%llX **NVRM: [GPU%u]: [%s] PA 0x%llX (0x%X bytes) for VA 0x%llX-0x%llX *Packed**Packed*Unpacked**Unpacked**pMsgHdr*pNotifyGfidMask**pNotifyGfidMask**pInbandRcvParams*linkMaskToBeReduced**linkMaskToBeReduced*pRemapTableIdx**pRemapTableIdx*pFabricHealthStatusMask**pFabricHealthStatusMask*pFabricCliqueId**pFabricCliqueId*numProbes**numProbes*pEgmGpaAddress**pEgmGpaAddress*pFlaAddressRange**pFlaAddressRange*pFlaAddress**pFlaAddress*pGpaAddressRange**pGpaAddressRange*memmgrMemSet(GPU_GET_MEMORY_MANAGER(pGpu), &dest, 0, (NvU32)memdescGetSize(pMemDesc), TRANSFER_FLAGS_NONE)*pGpaAddress**pGpaAddress**memmgrMemSet(GPU_GET_MEMORY_MANAGER(pGpu), &dest, 0, (NvU32)memdescGetSize(pMemDesc), TRANSFER_FLAGS_NONE)*pFabricPartitionId**pFabricPartitionId**pClusterUuid**INVALID**SPARSE*pFmCaps**pFmCaps**NV4K*pGfId**pGfId*invalidateAll*call to 
kgmmuCommitInvalidateTlbTest_DISPATCH*ppGpuFabricProbeInfoKernel**ppGpuFabricProbeInfoKernel***ppGpuFabricProbeInfoKernel*src/kernel/gpu/mmu/kern_gmmu.c*NVRM: GMMU_UNREGISTER_FAULT_BUFFER **src/kernel/gpu/mmu/kern_gmmu.c**NVRM: GMMU_UNREGISTER_FAULT_BUFFER *call to kgmmuFaultBufferReplayableDestroy_IMPL*PDB_PROP_KGMMU_REPLAYABLE_FAULT_BUFFER_IN_USE*NVRM: GMMU_REGISTER_FAULT_BUFFER **NVRM: GMMU_REGISTER_FAULT_BUFFER *call to kgmmuFaultBufferReplayableSetup_IMPL*faultBufferPteArray**faultBufferPteArray*pFaultBufferMemDesc*call to _kgmmuFaultBufferDescribe*call to kgmmuFaultBufferLoad_DISPATCH*call to kgmmuFaultBufferUnregister_IMPL*hFaultBufferClient*hFaultBufferObject*call to kgmmuFaultBufferCreateMemDesc_IMPL**pFaultBufferMemDesc*call to kgmmuFaultBufferGetAddressSpace_IMPL*replayableFaultBufferSize*nonReplayableFaultBufferSize*replayableShadowFaultBufferMetadataSize*nonReplayableShadowFaultBufferMetadataSize*call to kgmmuFaultCancelIssueInvalidate_DISPATCH*IS_VIRTUAL_WITH_SRIOV(pGpu)**IS_VIRTUAL_WITH_SRIOV(pGpu)*entryData**entryData***entryData*pCustomAllocator**pCustomAllocator**pElapsedTimeUs***ppThreadNode**pCertSize***pCert*hCpuFaultBuffer**hCpuFaultBuffer**pCertCount*pBufferPage**pBufferPage**pBufferPages***pEncapCertChain**pEncapCertChainSize***kernelVaddr*bar2FaultBufferAddr**pResponse**pResponseSize*pFaultBuffer->bar2FaultBufferAddr == 0**pFaultBuffer->bar2FaultBufferAddr == 0**pNonce***pAttestationReport**pAttestationReportSize**pbIsCecAttestationReportPresent***pCecAttestationReport**pCecAttestationReportSize***pKeyExCertChain**pKeyExCertChainSize***pAttestationCertChain**pAttestationCertChainSize*pKeyOut**pKeyOut*pCertRespCtx**pCertRespCtx***pCertRespCtx*pCertChainOut**pCertChainOut*pCertChainOutSize**pCertChainOutSize*pCertResponder**pCertResponder*pteFlags**pField**pEnum*kgmmuTranslatePtePcfFromHw_HAL(pKernelGmmu, ptePcfHw, bPteValid, &ptePcfSw) == NV_OK*pTargetFmt**pTargetFmt**kgmmuTranslatePtePcfFromHw_HAL(pKernelGmmu, ptePcfHw, bPteValid, 
&ptePcfSw) == NV_OK*pSubLevel**pSubLevel*pRefCount**pRefCount***ppChanGrpRef*kgmmuTranslatePdePcfFromHw_HAL(pKernelGmmu, pdePcfHw, GMMU_APERTURE_INVALID, &pdePcfSw) == NV_OK**kgmmuTranslatePdePcfFromHw_HAL(pKernelGmmu, pdePcfHw, GMMU_APERTURE_INVALID, &pdePcfSw) == NV_OK*comptagLine*call to kgmmuService_4a4dee*Failed to service non-replayable MMU fault error**Failed to service non-replayable MMU fault error*call to kgmmuReportFaultBufferOverflow_DISPATCH*Failed to report non-replayable MMU fault buffer overflow error**Failed to report non-replayable MMU fault buffer overflow error*call to kgmmuServiceReplayableFault_DISPATCH*Failed to service replayable MMU fault error**Failed to service replayable MMU fault error*Failed to report replayable MMU fault buffer overflow error**Failed to report replayable MMU fault buffer overflow error*NVRM: Unexpected replayable interrupt routed to RM. Verify UVM took ownership. **NVRM: Unexpected replayable interrupt routed to RM. Verify UVM took ownership. **pSubctxId*Failed to trigger INFO_FAULT doorbell**Failed to trigger INFO_FAULT doorbell***ppVASpace*call to kgmmuServicePriFaults_DISPATCH*ppGlobalVASpace**ppGlobalVASpace***ppGlobalVASpace*Failed to service PRI fault error**Failed to service PRI fault error*pVideoLinksParams**pVideoLinksParams*pPinsetIndex**pPinsetIndex*pGpuParent**pGpuParent*call to intrservClearInterrupt_IMPL*pSliLinkTestDone**pSliLinkTestDone*pBoostMgr**pBoostMgr*pEngineIdxList**pEngineIdxList*pBoostGrpId**pBoostGrpId*pBoostConfig**pBoostConfig*ppCtxBufPool**ppCtxBufPool***ppCtxBufPool*pBufInfoList**pBufInfoList***pQueue*pCopyTo**pCopyTo***pCopyTo*pElements**pElements***pElements**pUserCtx*NVRM: Unexpected addressSpace (%u) when mapping to GMMU_APERTURE. **NVRM: Unexpected addressSpace (%u) when mapping to GMMU_APERTURE. 
*call to kgmmuInstBlkVaLimitGet_DISPATCH*call to kgmmuInstBlkPageDirBaseGet_DISPATCH*kgmmuInstBlkPageDirBaseGet_HAL(pGpu, pKernelGmmu, pVAS, pInstBlkParams, subctxId, &dirBaseLoOffset, &dirBaseLoData, &dirBaseHiOffset, &dirBaseHiData)**kgmmuInstBlkPageDirBaseGet_HAL(pGpu, pKernelGmmu, pVAS, pInstBlkParams, subctxId, &dirBaseLoOffset, &dirBaseLoData, &dirBaseHiOffset, &dirBaseHiData)*call to kgmmuInstBlkAtsGet_DISPATCH*(status == NV_ERR_NOT_READY)**(status == NV_ERR_NOT_READY)*call to kgmmuInstBlkMagicValueGet_DISPATCH*pInstBlk**pInstBlk*pNewMem**pNewMem**pRootFmt*ppWalk**ppWalk***ppWalk**pStagingBuffer***ivMask**pGlobalH2DKey**pGlobalD2HKey**pD2HKey**pKeyId**keyspace*call to kgmmuClientShadowFaultBufferUnregister_IMPL*call to kgmmuFaultBufferFreeSharedMemory_DISPATCH*call to kgmmuClientShadowFaultBufferPagesDestroy_IMPL*call to kgmmuClientShadowFaultBufferQueueDestroy_IMPL*!pKernelGmmu->getProperty(pKernelGmmu, PDB_PROP_KGMMU_FAULT_BUFFER_DISABLED)**!pKernelGmmu->getProperty(pKernelGmmu, PDB_PROP_KGMMU_FAULT_BUFFER_DISABLED)*pStaticInfo->nonReplayableFaultBufferSize != 0**pStaticInfo->nonReplayableFaultBufferSize != 0*call to _kgmmuClientShadowFaultBufferQueueAllocate*call to _kgmmuClientShadowFaultBufferPagesAllocate*call to kgmmuFaultBufferAllocSharedMemory_DISPATCH*call to kgmmuClientShadowFaultBufferRegister_IMPL*shadowFaultBufferType*NVRM: Unregistering %s fault buffer failed (status=0x%08x), proceeding... **NVRM: Unregistering %s fault buffer failed (status=0x%08x), proceeding... 
*non-replayable**non-replayable**replayable*queueCapacity*call to circularQueueInitNonManaged_IMPL*bQueueAllocated**pStatsInfo*pLowerThreshold**pLowerThreshold*pUpperThreshold**pUpperThreshold*pBufferMemDesc**pBufferMemDesc*numBufferPages**pSlot***ppCtx**ctr**pGrceMask**pceAvailableMaskPerHshub*shadowFaultBufferPteArray**shadowFaultBufferPteArray*shadowFaultBufferQueuePhysAddr*faultBufferSharedMemoryPhysAddr*pBufferAddress**pBufferAddress*pBufferPriv**pBufferPriv**pAvailablePceMaskForConnectingHub*shadowFaultBufferSizeTotal*pFaultBufferMetadataAddress**pFaultBufferMetadataAddress***pFaultBufferMetadataAddress***pQueueAddress*pQueuePriv**pQueuePriv***pQueuePriv*queueContext*pCopyData*pQueueData**pQueueData*pClientData***pDst*call to kgmmuEncodePhysAddrs_IMPL*aperture != GMMU_APERTURE_INVALID**aperture != GMMU_APERTURE_INVALID*call to _kgmmuEncodePeerAddrs*kce**kce*pTopoIdx**pTopoIdx***pAutoConfigTable*pLargestTopoIdx**pLargestTopoIdx**pGrceConfig**rd**wr*pReadLce**pReadLce*pWriteLce**pWriteLce*NVRM: Unregistering Replayable Fault buffer failed (status=0x%08x), proceeding... **NVRM: Unregistering Replayable Fault buffer failed (status=0x%08x), proceeding... *call to kgmmuFaultBufferUnload_DISPATCH*NVRM: Unloading Replayable Fault buffer failed (status=0x%08x), proceeding... **NVRM: Unloading Replayable Fault buffer failed (status=0x%08x), proceeding... *NVRM: Destroying Replayable Fault buffer failed (status=0x%08x), proceeding... **NVRM: Destroying Replayable Fault buffer failed (status=0x%08x), proceeding... 
*call to memdescGetPtePhysAddrs*faultBufferGenerationCounter*rm_replayable_fault_buffer_surface**rm_replayable_fault_buffer_surface*rm_non_replayable_fault_buffer_surface**rm_non_replayable_fault_buffer_surface*bAllocInVidmem*faultBufferAddrSpace*faultBufferAttr*UVM non-replayable fault**UVM non-replayable fault**pSysmemReadCE**pSysmemWriteCE*ppShimKCe**ppShimKCe***ppShimKCe**pPceAvailableMask**pNumLcesToMap**pLceAvailableMask**pNumMinPcesPerLce**pNumPcesPerLce**pNumLces**pSupportedPceMask**pSupportedLceMask**pPcesPerHshub*ppKCe**ppKCe***ppKCe*pReuseMappingDb**pReuseMappingDb***pAllocCtx*pMemoryArea**pMemoryArea***pGlobalCtx*returnHandle**returnHandle**pChannelPbInfo**pCcslCtx**pAuthTagBufMemDesc**pSemaMemDesc**pMethodLength**pPutIndex**pCeUtilsApi**pCeInstance**arg_pKernelMIGGPUInstance**arg_pAllocParams*UVM replayable fault**UVM replayable fault*NVRM: Fault buffers must be in CPR vidmem when HCC is enabled **NVRM: Fault buffers must be in CPR vidmem when HCC is enabled *call to memmgrDetermineComptag_DISPATCH*call to gmmuFmtInitPteCompTags*pLevel != NULL**pLevel != NULL*pFmts**pFmts***pFmts**pHWBC*(pFam->pFmts[b] != NULL)**(pFam->pFmts[b] != NULL)*pLvls**pLvls**pComputeInstanceSubscription*call to kgmmuFmtInitLevels_DISPATCH*call to kgmmuFmtInitCaps_GM20X*GMMU PDE**GMMU PDE*BAR1 PDE**BAR1 PDE*GMMU PTE**GMMU PTE*BAR1 PTE**BAR1 PTE*bEnablePerVaspaceBigPage*RmDisableBigPagePerAddressSpace***ppComputeInstanceSubscription**RmDisableBigPagePerAddressSpace*RMFermiBigPageSize**RMFermiBigPageSize*NVRM: The %s regkey cannot be used with the %s regkey! **NVRM: The %s regkey cannot be used with the %s regkey! *RmDisableHwFaultBuffer**RmDisableHwFaultBuffer*NVRM: Overriding HW Fault buffer state to 0x%x due to regkey! **NVRM: Overriding HW Fault buffer state to 0x%x due to regkey! 
PDB_PROP_KGMMU_FAULT_BUFFER_DISABLED
call to kgmmuFaultBufferDestroy_DISPATCH
call to _kgmmuDestroyGlobalVASpace
NVRM: Failed to destory GVASpace, status:%x
call to _kgmmuCreateGlobalVASpace
pConfComputeApi
pConsoleMemory
NVRM: Failed to create GVASpace, status:%x
call to kgmmuEnableComputePeerAddressing_DISPATCH
NVRM: Failed to enable compute peer addressing, status:%x
call to kgmmuInitCeMmuFaultIdRange_DISPATCH
kgmmuInitCeMmuFaultIdRange_HAL(pGpu, pKernelGmmu)
call to kgmmuEnableNvlinkComputePeerAddressing_DISPATCH
NVRM: Failed to enable GMMU property compute addressing for GPU %x , status:%x
call to gpugrpDestroyGlobalVASpace_IMPL
pGpuGrp != NULL
call to gpugrpCreateGlobalVASpace_IMPL
(NV_OK == rmStatus)
call to _kgmmuInitStaticInfo
call to kgmmuFaultBufferInit_DISPATCH
kgmmuFaultBufferInit_HAL(pGpu, pKernelGmmu)
call to rmcfg_IsdBLACKWELL
bBug4686457WAR
pKernelGmmu->pStaticInfo != NULL
pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_GMMU_GET_STATIC_INFO, pKernelGmmu->pStaticInfo, sizeof(*pKernelGmmu->pStaticInfo))
call to kgmmuDetermineMaxVASize_DISPATCH
call to kgmmuFmtInitPdeApertures_DISPATCH
pdeApertures
call to kgmmuFmtInitPteApertures_DISPATCH
pteApertures
(pFam != NULL)
call to kgmmuFmtInitPdeMulti_DISPATCH
call to kgmmuFmtInitPde_DISPATCH
call to kgmmuFmtInitPte_DISPATCH
call to kgmmuFmtInitPteComptagLine_DISPATCH
call to kgmmuFmtInit_IMPL
kgmmuFmtInit(pKernelGmmu)
call to kgmmuFmtFamiliesInit_DISPATCH
kgmmuFmtFamiliesInit_HAL(pGpu, pKernelGmmu)
PDEAperture
PDEAttr
PDEBAR1Aperture
PDEBAR1Attr
PTEAperture
PTEAttr
PTEBAR1Aperture
PTEBAR1Attr
call to _kgmmuInitRegistryOverrides
pCrashCatEng
pWayfinder
pCrashCatWayfinderHal
arg_pQueueConfig
pMsg
field_desc
src/kernel/gpu/mmu/mmu_fault_buffer.c
msg_desc
pCrashcatProtobufData
pReportBytes
ppReportBytes
arg_ppReportBytes
pCrashCatReportHal
pDebugBufferApi
hCpuFaultBuffer
call to kgmmuFaultBufferReplayableAllocate_IMPL
NVRM: Failed to setup Replayable Fault buffer (status=0x%08x).
src/kernel/gpu/mmu/mmu_fault_buffer_ctrl.c
NVRM: Client shadow fault buffer for replayable faults does not exist
pShadowBuffer
NVRM: Given client shadow fault buffer for replayable faults does not match with the actual
NVRM: Error freeing client shadow fault buffer for replayable faults
NVRM: Client shadow fault buffer for replayable faults already allocated
call to kgmmuClientShadowFaultBufferAlloc_DISPATCH
NVRM: Error allocating client shadow fault buffer for replayable faults
pDeferredApiObj
pRemoveApi
pShadowBufferMetadata
NVRM: Client shadow fault buffer for non-replayable faults does not exist
NVRM: Given client shadow fault buffer for non-replayable faults does not match with the actual
NVRM: Error freeing client shadow fault buffer for non-replayable faults
NVRM: Client shadow fault buffer for non-replayable faults already allocated
NVRM: Error allocating client shadow fault buffer for non-replayable faults
pShadowBufferContext
pMmuTraceArg
hasMore
call to mmuFmtVirtAddrPageOffset
pLayout
entryVaLevelLimit
entryVaLimit
pDispCapabilities
(offset + pFmtLevel->entrySize) <= pMemDesc->Size
src/kernel/gpu/mmu/mmu_trace.c
pBase != NULL
isPt
pDone
call to _mmuPrintPte
pDispChannelDma
destroyMemDesc
call to mmuWalkGetPageLevelInfo
mmuWalkGetPageLevelInfo(pWalk, &pFmtLevel->subLevels[subLevelIdx], entryVa, (const MMU_WALK_MEMDESC**)&pTempMemDesc, &memSize)
call to _mmuPrintPdeValid
pFmtLevel
invalidRange
call to mmuWalkSetTraceInfo
bInvalid
indexLimit
call to _mmuPrintPdeInvalid
savedStatus
call to mmuWalkGetTraceInfo
pDispChannelPio
ppDispChannel
NVRM: MMUTRACE: VA[0x%08llx-%08llx]
_level
PDE%u[0x%x]:
call to _mmuPrintPt
PDE%u[0x%x
-%03x
]: invalid
pFmtSub
PTE
[0x%x
-%x
pTotalInstMemSize
pHashTableSize
phParent
pGmmuFmt
pFmtRoot
pTraceCb
pParams->pArg != NULL
modeValid
traceMode
pteFunc
translateFunc
vaArg
dumpMappingFunc
call to mmuTraceWalk
call to _mmuInitLayout
call to _mmuTraceWalk
call to uvmswInitSwMethodState_IMPL
methodA
methodB
bCancelMethodASet
bCancelMethodBSet
bClearMethodASet
src/kernel/gpu/nvdec/kernel_nvdec_ctx.c
pCrashLockCounterInfoParams
NVRM: Requested NVDEC Id 0x%x is not present. Hence, returning capabilities of NVDEC0
pLoadVCounterInfoParams
pVBEnableParams
pVBCounterParams
ppDispCommon
NVRM: nvdecctxDestruct for 0x%x
call to kflcnFreeContext_IMPL
kflcnFreeContext(pGpu, pKernelFalcon, pKernelChannel, RES_GET_EXT_CLASS_ID(pChannelDescendant))
NVRM: nvdecctxConstruct for 0x%x
pDispSwObj
pNvDispApi
pNvdispApi
call to kflcnAllocContext_IMPL
pNvdecAllocParams
(pNvdecAllocParams != NULL)
src/kernel/gpu/nvdec/kernel_nvdec_engdesc.c
ppDispObject
NVRM: createParams size mismatch (rm = 0x%x / client = 0x%x)
engineInstance
pGpuGroup
ppDisp
kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, dynamicCast(pDeviceRef->pResource, Device), &ref)
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_NVDEC(engineInstance), &rmEngineType)
src/kernel/gpu/nvenc/kernel_nvenc_ctx.c
NVRM: msencctxDestruct for 0x%x
NVRM: msencctxConstruct for 0x%x
pMsencAllocParms
src/kernel/gpu/nvenc/kernel_nvenc_engdesc.c
NVRM: Supported msenc class Id (classId = 0x%x / engineInstance = 0x%x)
NVRM: Not supported msenc class Id (classId = 0x%x / engineInstance = 0x%x)
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_NVENC(engineInstance), &rmEngineType)
src/kernel/gpu/nvenc/nvencsession.c
NVRM: NVENC session queuing async callback failed, status=%x
call to _gpuNvEncSessionDataProcessing
bNvEncSessionDataProcessingWorkItemPending
NVRM: NVENC Sessions GPU instance is invalid
pDispSfUser
pNvencSessionListItem
pNvencSessionListItemNext
call to _gpuNvEncSessionProcessBuffer
nvencSessionEntry
timestampBufferSize
pSessionStatsBuffer
pSessionInfoBuffer
NVRM: GPU : 0x%0x, NvEnc session stats buffer pointer is null.
pLocalSessionInfoBuffer
NVRM: GPU : 0x%0x, Failed to allocate memory for local stats buffer.
region1
frameInfo
pRegion1
submissionTSEntry
currIndex
minFrameId
timeTakenToEncodeNs
processedFrameCount
latestFrameIndex
region2
startTSEntry
lastProcessedFrameTS
pSubmissionTSEntry
pStartTSEntry
pEndTSEntry
latestFrameId
endTSEntry
latestFrameEndTS
timeDiffFrameTS
lastProcessedIndex
lastProcessedFrameId
hNvencSessionHandle
pDispSw
NVRM: Creating NVENC session above max copyout limit.
pNvA0BCAllocParams
NVRM: Unable to find mem corresponding to handle : 0x%0x.
NVRM: Error mapping memory to CPU VA space, error : 0x%0x.
subProcessId
codecType
src/kernel/gpu/nvenc/nvencsessionctrl.c
pStandardMemory
timestampBuffer
call to _nvencsessionCtrlCmdNvencSwSessionUpdateInfo
pVideoMemory
pSystemMemory
timeStampBuffer
tempTimestampBufferSize
timeToEncodeBuffer
pRmCtrlParams->cmd == NVA0BC_CTRL_CMD_NVENC_SW_SESSION_UPDATE_INFO || pRmCtrlParams->cmd == NVA0BC_CTRL_CMD_NVENC_SW_SESSION_UPDATE_INFO_V2
src/kernel/gpu/nvjpg/kernel_nvjpg_ctx.c
NVRM: Requested NVJPGS Id 0x%x is not present. Hence, returning capabilities of NVJPGS0
jpegCaps
NVRM: nvjpgctxDestruct for 0x%x
NVRM: nvjpgctxConstruct for 0x%x
pNvjpgAllocParms
(pNvjpgAllocParms != NULL)
src/kernel/gpu/nvjpg/kernel_nvjpg_engdesc.c
pExtendedGpuMemory
NVRM: Supported nvjpg class Id (classId = 0x%x / engineInstance = 0x%x)
NVRM: Not supported nvjpg class Id (classId = 0x%x / engineInstance = 0x%x)
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_NVJPEG(engineInstance), &rmEngineType)
src/kernel/gpu/nvlink/arch/ampere/kernel_nvlink_ga100.c
NVRM: Failed to execute GSP-RM GPC to get if the gpu has a reduced Nvlink config
mapTypeMask
pFabric
pExportUuid
pEventArray
pNumEvents
pEvents
pOsEvent
call to osGetForcedNVLinkConnection
NVRM: Not using forced config!
forcedConfigParams
bLegacyForcedConfig
NVRM: Failed to process forced NVLink configurations !
bOverrideComputePeerMode
call to knvlinkSetupTopologyForForcedConfig_IMPL
call to knvlinkGetALID
localGpuAlid
call to knvlinkGetCLID
localGpuClid
remoteGpuAlid
remoteGpuClid
src/kernel/gpu/nvlink/arch/blackwell/kernel_nvle_gb100.c
NVRM: Failed to execute GSP-RM GPC to update Nvlink topology in GSP
flaRemapTabAddr
pCopyServerReservedPdesParams
pFreeSize
pFbAllocInfoClient
gpaRemapTabAddr
remapTableIdx
pVAddr
ppVirtualMemory
NVRM: %s: pGpuFabricProbeInfoKernel is NULL
NVRM: %s: Fabric probe has not been received
lidParams
memType
knvlinkExecGspRmRpc(pGpu, pKernelNvlink, NV2080_CTRL_NVLINK_GET_UPDATE_NVLE_LIDS, (void *)&lidParams, sizeof(lidParams))
tgtPteMem
alidList
pTgtPteMem
call to gpuMgrIsNvleAlidPresent
call to gpuMgrCacheNvleAlid
(gpuMgrCacheNvleAlid(pGpuMgr, lidParams.alidList[remapTableIdx].alid, &clid) == NV_TRUE)
alidClidMap
alid
call to gpuFabricProbeInvalidate
src/kernel/gpu/nvlink/arch/blackwell/kernel_nvlink_gb100.c
kfifoAddSchedulingHandler(pGpu, GPU_GET_KERNEL_FIFO(pGpu), _knvlinkHandlePostSchedulingEnableCallback_GB100, NULL, NULL, NULL)
knvlinkExecGspRmRpc(pGpu, pKernelNvlink, NV2080_CTRL_CMD_INTERNAL_NVLINK_REPLAY_SUPPRESSED_ERRORS, NULL, 0)
pCliMapInfo
NVRM: CC and Nvlink encryption features are enabled, but encrypt enable bit or PRC knob is not set! Disabling Nvlink encryption
NVRM: CC and Nvlink encryption features are enabled on the GPU
bIsNvleEnabled
NVRM: Failed to execute RPC to set Nvlink Enablement Status
NVRM: Failed to execute RPC to get Nvlink Encrypt Enable Info
bMmuNvlinkEncryptEn
bNvlinkTlwEncryptEn
NVRM: Invalid EGM fabric address: 0x%llx
pUnused
pFabricMemDesc
pFabricMemdesc
ppAdjustedMemdesc
freeSize
ppAddr
pNumAddr
hshubSupportedRbmModesList
call to _nvlinkLinkCountToRbmMode
rbmModesList
rbmTotalModes
call to gpuFabricProbeGetlinkMaskToBeReduced
NVRM: Reducing nvlinkMask from 0x%x to updated 0x%llx
gpuNvlinkHshubSupportedRbmList
totalRbmModes
NVRM: Legacy BW modes are not supported on this platform.
NVRM: RBM not supported by GFM. LinkCount: %d; MaxLinkCount: %d
NVRM: RBM requested is not supported. LinkCount: %d
rbmMode
pFmSessionApi
knvlinkExecGspRmRpc(pGpu, pKernelNvlink, NV2080_CTRL_CMD_NVLINK_GET_ERR_INFO, (void *)pParams, sizeof(*pParams))
failure
failures
pGenericEngineApi
NVRM: ALI Error for GPU %d::linkId %d: 0x%x
NVLink: Link training failed for links 0x%llx(0x%x, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x)
qword
counterMask
engineFifoList
pVgpuTypeNode
pUtilSampleBuffer
pClearAcctDataParams
pSetAcctModeParams
pGetAcctModeParams
pGpuMgmt
src/kernel/gpu/nvlink/arch/hopper/kernel_nvlink_gh100.c
NVRM: Fail to call direct conect check command
NVRM: EGM is not enabled in RM for GPU %x
call to knvlinkValidateFabricEgmBaseAddress_DISPATCH
NVRM: EGM Fabric base addr validation failed for GPU %x
NVRM: The same EGM fabric base addr is being re-assigned to GPU %x
NVRM: EGM Fabric base addr is already assigned to GPU %x
NVRM: EGM Fabric base addr %llx is assigned to GPU %x
call to knvlinkValidateFabricBaseAddress_DISPATCH
NVRM: Fabric addr validation failed for GPU %x
NVRM: The same fabric addr is being re-assigned to GPU %x
NVRM: Fabric addr is already assigned to GPU %x
NVRM: Fabric base addr %llx is assigned to GPU %x
directConnectBwModeList
switchBwModeList
NVRM: BW mode requested is not supported. Mode: %d
remotePeerLinkMask
nvPopCount32(remotePeerLinkMask) == nvPopCount32(peerLinkMask)
call to knvlinkGetNumLinksToBeReducedPerIoctrl_DISPATCH
numLinksToBeReduced
effectivePeerLinkMask
call to knvlinkGetTotalNumLinksPerIoctrl_IMPL
peerLinkMaskPerIoctrl
pGsyncApi
remoteEndInfo
pGsyncGetVersionParams
call to knvlinkGetNumActiveLinksPerIoctrl_IMPL
numlinks
NVRM: Cannot reach here %s %d mode=%d
pProxyGpu
pGsyncInfo
pGsyncIdsParams
nvlinkErrInfoParams
NVRM: Error getting debug info for link training!
NVLink: Link training failed for link %u(0x%x, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x)
NVRM: ALI Error for GPU %d::linkId %d: NVLIPT: CTRL_LINK_STATE_REQUEST_STATUS = %X NVLDL : NV_NVLDL_RXSLSM_ERR_CNTL = %X NV_NVLDL_TOP_LINK_STATE = %X NV_NVLDL_TOP_INTR = %X MINION DLSTAT: DLSTAT MN00 = %X DLSTAT UC01 = %X NV_MINION_NVLINK_LINK_INTR = %X
pFaultLink
pFaultLink != NULL
NVRM: GPU (ID: %d) tmrEventScheduleRel failed for linkid %d
nvlinkPostFaultUpParams
NVRM: Failed to send Faultup RPC
NVRM: Invalid pPeerGpu.
NVRM: loopback P2P on GPU%u disabled by regkey
NVRM: Input mask contains a GPU on which NVLink is disabled.
NVRM: Not in ALI, checking PostRxDetLinks not supported.
pHdacodecApi
call to knvlinkUpdatePostRxDetectLinkMask_IMPL
NVRM: Getting peer0's postRxDetLinkMask failed!
pPageNumbersWithEccOn
pPageNumbersWithECcOff
pAllocHint
ppMemBlock
usableSize
free
bytesFree
bytesTotal
largestOffset
largestFree
pHwResource
pNoncontigAllocation
pRmTimeout
pMemoryHwResources
NVRM: Getting peer1's postRxDetLinkMask failed!
NVRM: Got 0 post RxDet Links on GPU %d or GPU %d!
postSetupNvlinkPeerParams
NVRM: Failed to program post active settings and bufferready!
NVRM: Failed to get ALI enablement status!
programLinkSpeedParams
bPlatformLinerateDefined
platformLineRate
src/kernel/gpu/nvlink/arch/pascal/kernel_nvlink_gp100.c
NVRM: Failed to program NVLink speed for links!
nvlinkLinkSpeed
NVRM: P2P loopback is disabled on GPU%u, aborting peer setup (0x%x)
call to knvlinkGetEffectivePeerLinkMask_DISPATCH
preSetupNvlinkPeerParams
bNvswitchConn
pI2cApi
pImexSessionApi
pInstrumentationManager
pSysmemBuffer
call to kNvlinkGetLinkMaskAsPrimitve
src/kernel/gpu/nvlink/arch/turing/kernel_nvlink_tu102.c
NVRM: Connections forced through chiplib. ConnectedLinksMask same as enabledLinks = 0x%llx
NVRM: GPU%d: Link%d not yet registered in core lib. Connectivity will be established after RXDET
call to kioctrlGetMinionEnableDefault_DISPATCH
call to knvlinkGetMinionControl
src/kernel/gpu/nvlink/arch/volta/kernel_minion_gv100.c
NVRM: NVLink MINION is not supported on this platform, disabling.
RMNvLinkMinionControl
NVRM: %s: 0x%x
NVRM: NVLink MINION force enable requested by command line override.
PDB_PROP_KIOCTRL_MINION_FORCE_BOOT
bEnableMinion
NVRM: NVLink MINION force disable requested by command line override.
NVRM: Regkey: Minion seed caching is force enabled
PDB_PROP_KIOCTRL_MINION_CACHE_SEEDS
NVRM: Regkey: Minion seed caching is force disabled
NVRM: Regkey: ALI training is force enabled
PDB_PROP_KNVLINK_MINION_FORCE_ALI_TRAINING
PDB_PROP_KNVLINK_MINION_FORCE_NON_ALI_TRAINING
NVRM: Regkey: non-ALI training is force enabled
NVRM: Regkey: Minion boot from GFW disabled
NVRM: Regkey: Minion boot from GFW enabled by default
NVRM: Regkey: Minion boot from GFW disabled by default
call to knvlinkEnableLinksPostTopology_DISPATCH
src/kernel/gpu/nvlink/arch/volta/kernel_nvlink_gv100.c
NVRM: Nvlink post topology links setup failed on GPU %x
NVRM: Operation failed due to no NVSwitch connectivity to the GPU %x
bytesRead
NVRM: Failed to stash fabric address for GPU %x
NVRM: Failed to get nvswitch fabric address for GPU %x
NVRM: Failed to set unique NVSwitch fabric base address for GPU %x
NVRM: Failed to enable compute addressing for GPU %x
NVRM: Failed to enable compute peer addressing!
call to osGetPlatformNvlinkLinerate
platformLinerateDefined
bLinkDisconnected
bLinkDisconnected != NULL
convertBitVectorToLinkMasks(&pKernelNvlink->enabledLinks, &pParams->linkMask, sizeof(pParams->linkMask), &pParams->links)
bSublinkStateInst
knvlinkExecGspRmRpc(pGpu, pKernelNvlink, NV2080_CTRL_CMD_INTERNAL_NVLINK_GET_LINK_AND_CLOCK_INFO, (void *)pParams, sizeof(*pParams))
NVRM: Trying to access incorrect link from link mask! %d
pCgStatusMask
pKernelHwpm
pNumCblock
pNumChannels
pNumCblocksPerPma
call to _knvlinkAreLinksDisconnected
_knvlinkAreLinksDisconnected(pGpu, pKernelNvlink, bLinkDisconnected)
bUpdateConnStatus
pPmaStream
call to knvlinkUpdateLinkConnectionStatus_IMPL
pKernelPerf
NVRM: Failed to enable Links post topology!
initializedLinks
bVerifTrainingEnable
pOutputBitVector
pLinkMask1
(linkMask1Size == sizeof(NvU32) || linkMask1Size == sizeof(NvU64))
src/kernel/gpu/nvlink/bitvector_nvlink.c
pLinkMask2
arg_pResource
pLocalLinkMask
pOutputLinkMask1
call to convertBitVectorToLinkMask32
pOutputLinkMask2
lenMasks
pKernelPmu
call to rmcfg_IsPASCAL_CLASSIC_GPUSorBetter
src/kernel/gpu/nvlink/common_nvlinkapi.c
call to convertLinkMasksToBitVector
NVRM: Failed to convert enabled link masks to bit vector! 0x%x
i <= NV2080_CTRL_NVLINK_MAX_LINKS
deviceUUID
pDeviceInfo
NVRM: Trying to access out of bounds link from link mask! %d
pLoopGpu
NVRM: MIG NVLink P2P is not supported.
pTmpData
NVRM: Kernel NVLink is unavailable. Returning.
NVRM: Nvlink is not ready yet!
bIsNvlinkReady
call to knvlinkFilterBridgeLinks_DISPATCH
NVRM: Failed to convert bit vector to enabled link masks! 0x%x
nvlinkLinkAndClockInfoParams
pKernelCcuApi
NVRM: Failed to convert bit vector to link masks! 0x%x
NVRM: Failed to collect nvlink status info!
NVRM: Trying to access incorrect link from mask! %d
pKernelCcu
remoteLinkNumber
remoteChipSid
remoteDomain
remoteBus
remoteDevice
remoteFunction
remotePciDeviceId
call to knvlinkIsP2pLoopbackSupportedPerLink_IMPL
bLoopbackSupported
bNvleModeEnabled
call to _getNvlinkStatus
call to knvlinkGetDegradedMode_IMPL
call to knvlinkIsUncontainedErrorRecoveryActive_IMPL
NVRM: Failed to convert enabled link masks to bitvector
enabledNvlpwMask
pKCeContext
bPeerLink
bSysmemLink
bSwitchLink
NVRM: Trying to access out out bounds link from link mask! %d
pLinkAndClockValues
bridgeSensableLinks
nvlpwIdx
pKernelCrashCatEng
fabricRecoveryStatusMask
nvlinkLinkClockKHz
nvlinkRefClkSpeedKHz
nvlinkCommonClockSpeedKHz
nvlinkCommonClockSpeedMhz
nvlinkMinL1Threshold
nvlinkMaxL1Threshold
nvlinkL1ThresholdUnits
pEngConfig
remotePeer0
discoveredLinkMask
call to _calculateNvlinkCaps
pDiscoveredLinks
NVRM: Failed to get discovered link mask!
pbPromote
NVRM: Failed to convert bit vector to discovered link mask! 0x%x
NVRM: Failed to convert bit vector to discovered links! 0x%x
NVRM: Failed to convert bit vector to enabled link mask! 0x%x
NVRM: Failed to convert bit vector to enabled links! 0x%x
lowestNvlinkVersion
highestNvlinkVersion
lowestNciVersion
highestNciVersion
ppFecsGlobalTraceInfo
ppFecsTraceInfo
pReasonCode
ipVerIoctrl
call to kioctrlMinionConstruct_DISPATCH
str != NULL
src/kernel/gpu/nvlink/kernel_nvlink.c
length > (NvU64) (ptr - temp)
call to knvlinkEncryptionGetUpdateGpuIdentifiers_DISPATCH
knvlinkEncryptionGetUpdateGpuIdentifiers_HAL(pGpu0, pKernelNvlink0, NV_FALSE, NV_TRUE)
knvlinkEncryptionGetUpdateGpuIdentifiers_HAL(pGpu0, pKernelNvlink0, NV_TRUE, NV_TRUE)
knvlinkEncryptionGetUpdateGpuIdentifiers_HAL(pGpu0, pKernelNvlink0, NV_FALSE, NV_FALSE)
knvlinkEncryptionGetUpdateGpuIdentifiers_HAL(pGpu0, pKernelNvlink0, NV_TRUE, NV_FALSE)
bGotNvleIdentifiers
call to knvlinkEncryptionUpdateTopology_DISPATCH
knvlinkEncryptionUpdateTopology_HAL(pGpu0, pKernelNvlink0, pGpu1, pKernelNvlink1)
knvlinkEncryptionUpdateTopology_HAL(pGpu1, pKernelNvlink1, pGpu0, pKernelNvlink0)
call to knvlinkSetupEncryptionKeys_IMPL
knvlinkSetupEncryptionKeys(pGpu0, pKernelNvlink0, pGpu1, pKernelNvlink1)
call to knvlinkValidateRemapTableSlots_IMPL
knvlinkValidateRemapTableSlots(pGpu0, pKernelNvlink0, pGpu1, pKernelNvlink1)
NVRM: GPU%d Failed to lock remap table and MSE
NVRM: GPU%d Successfully locked remap table and MSE
bRemapTableMseLocked
pCC
bCCFeatureEnabled
NVRM: FLA Remap table validation failed for table index = 0x%x
NVRM: GPA Remap table validation failed for table index = 0x%x
call to knvlinkIsNvleEnabled_DISPATCH
bNvleKeyRefreshEnabled
bNvleKeySetup
call to libspdm_random_bytes
call to _knvlinkRefreshEncryptionKeys
nvleKey
_knvlinkRefreshEncryptionKeys(pLocalGpu, pLocalKernelNvlink, nvleKey, remoteCLID, stage, epoch)
_knvlinkRefreshEncryptionKeys(pRemoteGpu, pRemoteKernelNvlink, nvleKey, localCLID, stage, epoch)
NVRM: Setting nvle keys between GPU%d and GPU%d
call to _knvlinkSendEncryptionKeys
_knvlinkSendEncryptionKeys(pLocalGpu, pLocalKernelNvlink, nvleKey, remoteCLID)
_knvlinkSendEncryptionKeys(pRemoteGpu, pRemoteKernelNvlink, nvleKey, localCLID)
nvleKeyReqBuf
pNvleKeyReq
pSpdmReqHdr
nvleKeyReqSize
pGspReqHdr
cmdId
wrappedKeyEntries
keyEntriesTag
pKernelHfrp
ccslEncrypt(pConfCompute->pNvleP2pWrappingCcslCtx, sizeof(pGspReqHdr->wrappedKeyEntries), (NvU8 *)pGspReqHdr->wrappedKeyEntries, NULL, 0, (NvU8 *)pGspReqHdr->wrappedKeyEntries, (NvU8 *)pGspReqHdr->keyEntriesTag)
pHfrp
pCommandPayload
pResponseStatus
pResponsePayload
pResponsePayloadSize
pSequenceIdIndex
pPayloadArray
pKernelIoctrl
call to spdmSendApplicationMessage_IMPL
spdmSendApplicationMessage(pGpu, pSpdm, pNvleKeyReq, nvleKeyReqSize, (NvU8 *)&nvleKeyRsp, &nvleKeyRspSize)
NVRM: NVLE response from GSP of invalid size! rspSize: 0x%x!
nvleKeyRsp
NVRM: GSP returned NVLE response with error code 0x%x!
NVRM: Unexpected NVLE response from GSP! cmdType: 0x%x rspSize: 0x%x!
*bForKeyRotation*call to portAtomicExOrU64*NVLINK Uncontained error recovery re-triggered unexpectedly!**NVLINK Uncontained error recovery re-triggered unexpectedly!**pNvdecContext**arg_pNvdecContext*tmrGetCurrentTime(pTmr, &pInfo->startTime)**tmrGetCurrentTime(pTmr, &pInfo->startTime)*uuidLength == sizeof(pInfo->uuid)**uuidLength == sizeof(pInfo->uuid)**pMsencContext**arg_pMsencContext*osQueueWorkItem(pGpu, knvlinkFatalErrorRecovery_WORKITEM, pInfo, (OsQueueWorkItemFlags){ .bLockSema = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE, .bLockGpuGroupSubdevice = NV_TRUE, .bDontFreeParams = NV_TRUE})**osQueueWorkItem(pGpu, knvlinkFatalErrorRecovery_WORKITEM, pInfo, (OsQueueWorkItemFlags){ .bLockSema = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE, .bLockGpuGroupSubdevice = NV_TRUE, .bDontFreeParams = NV_TRUE})*osQueueWorkItem(pGpu, knvlinkUncontainedErrorRecoveryUvmIdle_WORKITEM, pInfo, (OsQueueWorkItemFlags){.bDontFreeParams = NV_TRUE})**osQueueWorkItem(pGpu, knvlinkUncontainedErrorRecoveryUvmIdle_WORKITEM, pInfo, (OsQueueWorkItemFlags){.bDontFreeParams = NV_TRUE})*osSchedule1HzCallback(pGpu, knvlinkUncontainedErrorRecoveryReadyCheck_WORKITEM, pInfo, NV_OS_1HZ_REPEAT)**osSchedule1HzCallback(pGpu, knvlinkUncontainedErrorRecoveryReadyCheck_WORKITEM, pInfo, NV_OS_1HZ_REPEAT)*osSchedule1HzCallback(pGpu, knvlinkUncontainedErrorRecovery_WORKITEM, pInfo, NV_OS_1HZ_REPEAT)**osSchedule1HzCallback(pGpu, knvlinkUncontainedErrorRecovery_WORKITEM, pInfo, NV_OS_1HZ_REPEAT)*osQueueWorkItem(pGpu, knvlinkAbortUncontainedErrorRecovery_WORKITEM, NULL, (OsQueueWorkItemFlags){ .bLockSema = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE, .bLockGpuGroupSubdevice = NV_TRUE})**osQueueWorkItem(pGpu, knvlinkAbortUncontainedErrorRecovery_WORKITEM, NULL, (OsQueueWorkItemFlags){ .bLockSema = NV_TRUE, .apiLock = WORKITEM_FLAGS_API_LOCK_READ_WRITE, .bLockGpuGroupSubdevice = NV_TRUE})*pInfo != NULL**pInfo != NULL*tmrGetCurrentTime(pTmr, ¤tTime)**tmrGetCurrentTime(pTmr, 
¤tTime)*bDegrade*bSuccessful*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_NVLINK_POST_FATAL_ERROR_RECOVERY, ¶ms, sizeof(params))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_NVLINK_POST_FATAL_ERROR_RECOVERY, ¶ms, sizeof(params))*osQueueWorkItem(pGpu, knvlinkUncontainedErrorRecoveryUvmResume_WORKITEM, pInfo, (OsQueueWorkItemFlags){.bDontFreeParams = NV_TRUE})**osQueueWorkItem(pGpu, knvlinkUncontainedErrorRecoveryUvmResume_WORKITEM, pInfo, (OsQueueWorkItemFlags){.bDontFreeParams = NV_TRUE})*NVRM: Failed to recover from uncontained NVLINK error. Triggering Degraded Mode! **NVRM: Failed to recover from uncontained NVLINK error. Triggering Degraded Mode! *call to osQueueResumeP2PHandler**pNvjpgContext**arg_pNvjpgContext***pLinkMask1**pLinkMask2**pOutputBitVector***pOutputLinkMask1**pOutputLinkMask2**linkMask*call to osQueueDrainP2PHandler*NVRM: Failed to idle UVM peer traffic with status 0x%x. This will lead to NVLINK Degradation! **NVRM: Failed to idle UVM peer traffic with status 0x%x. This will lead to NVLINK Degradation! *p2pIt*deviceIt**pOfaContext**arg_pOfaContext*kchannelIt*NVRM: Detected fabric idle with lazy fatal error pending. Triggering fatal recovery! **NVRM: Detected fabric idle with lazy fatal error pending. Triggering fatal recovery! 
*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_NVLINK_POST_LAZY_ERROR_RECOVERY, NULL, 0)**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_NVLINK_POST_LAZY_ERROR_RECOVERY, NULL, 0)*call to gpumgrGetGpuInitDisabledNvlinks_IMPL*NVRM: Failed to get init disabled links from gpumgr **NVRM: Failed to get init disabled links from gpumgr *initDisabledLinks*NVRM: Failed to process init disabled links in GSP **NVRM: Failed to process init disabled links in GSP *convertLinkMasksToBitVector(NULL, 0, ¶ms.initDisabledLinks, &localLinkMask)**convertLinkMasksToBitVector(NULL, 0, ¶ms.initDisabledLinks, &localLinkMask)*convertBitVectorToLinkMasks(&localLinkMask, &pKernelNvlink->initDisabledLinksMask, sizeof(pKernelNvlink->initDisabledLinksMask), NULL)**convertBitVectorToLinkMasks(&localLinkMask, &pKernelNvlink->initDisabledLinksMask, sizeof(pKernelNvlink->initDisabledLinksMask), NULL)*NVRM: Failed to get the total number of links per IOCTRL **NVRM: Failed to get the total number of links per IOCTRL *NVRM: Failed to get the number of active links per IOCTRL **NVRM: Failed to get the number of active links per IOCTRL *bLaneShutdownOnUnload*NVRM: Failed to sync NVLink shutdown properties with GSP! **NVRM: Failed to sync NVLink shutdown properties with GSP! 
*bRegistryLinkOverride*registryLinkMask*bChiplibConfig*physLink*NVRM: ARCH_CONNECTION info from chiplib: ENABLED Logical link %d (Physical link %d) = 0x%X **NVRM: ARCH_CONNECTION info from chiplib: ENABLED Logical link %d (Physical link %d) = 0x%X *NVRM: ARCH_CONNECTION info from chiplib: DISABLED Logical link %d (Physical link %d) = 0x%X ***ppDesc***ppImg**NVRM: ARCH_CONNECTION info from chiplib: DISABLED Logical link %d (Physical link %d) = 0x%X *call to convertMaskToBitVector*convertMaskToBitVector(KNVLINK_GET_MASK(pKernelNvlink, registryLinkMask, 64), ®istryLinkMaskVec)**convertMaskToBitVector(KNVLINK_GET_MASK(pKernelNvlink, registryLinkMask, 64), ®istryLinkMaskVec)*ioctrlInfoParams*ioctrlIdx*NVRM: NVLink is unavailable **NVRM: NVLink is unavailable *NVRM: Failed to retrieve device info for IOCTRL %d! **NVRM: Failed to retrieve device info for IOCTRL %d! ***pKernelIoctrl*localGlobalLinkOffset*ioctrlDiscoverySize*ipRevisions*ipVerMinion*pNvlinkInfoParams**pNvlinkInfoParams*NVRM: Failed to retrieve all nvlink device info! **NVRM: Failed to retrieve all nvlink device info! *ioctrlMask*ioctrlNumEntries*ioctrlSize*convertLinkMasksToBitVector(NULL, 0U, &pNvlinkInfoParams->discoveredLinks, &pKernelNvlink->discoveredLinks)**convertLinkMasksToBitVector(NULL, 0U, &pNvlinkInfoParams->discoveredLinks, &pKernelNvlink->discoveredLinks)*maxSupportedLinks*ioctrlId*pllMasterLinkId*pllSlaveLinkId*ipVerDlPl*NVRM: Failed to update Rx Detect Link mask! **NVRM: Failed to update Rx Detect Link mask! *convertLinkMasksToBitVector(¶ms.postRxDetLinkMask, sizeof(params.postRxDetLinkMask), NULL, &pKernelNvlink->postRxDetLinkMask)**convertLinkMasksToBitVector(¶ms.postRxDetLinkMask, sizeof(params.postRxDetLinkMask), NULL, &pKernelNvlink->postRxDetLinkMask)**laneRxdetStatusMask*NVRM: Failed to change ALI Links to active! **NVRM: Failed to change ALI Links to active! *NVRM: Failed to execute Pre Link Training ALI steps! **NVRM: Failed to execute Pre Link Training ALI steps! 
**pDbgSession*NVRM: Failed to update Link connection status! **NVRM: Failed to update Link connection status! *discoveredLinks*connectedLinksMask*convertBitVectorToLinkMasks(&pKernelNvlink->discoveredLinks, NULL, 0, ¶ms.discoveredLinkMasks)**convertBitVectorToLinkMasks(&pKernelNvlink->discoveredLinks, NULL, 0, ¶ms.discoveredLinkMasks)*convertBitVectorToLinkMasks(&pKernelNvlink->connectedLinksMask, NULL, 0, ¶ms.connectedLinks)**convertBitVectorToLinkMasks(&pKernelNvlink->connectedLinksMask, NULL, 0, ¶ms.connectedLinks)*convertBitVectorToLinkMasks(&pKernelNvlink->bridgeSensableLinks, NULL, 0, ¶ms.bridgeSensableLinkMasks)**convertBitVectorToLinkMasks(&pKernelNvlink->bridgeSensableLinks, NULL, 0, ¶ms.bridgeSensableLinkMasks)*bridgedLinkMasks*pTraceBuffer**pTraceBuffer*pDataOut**pDataOut*convertLinkMasksToBitVector(¶ms.vbiosDisabledLinkMask, sizeof(params.vbiosDisabledLinkMask), ¶ms.vbiosDisabledLinks, &pKernelNvlink->vbiosDisabledLinkMask)**convertLinkMasksToBitVector(¶ms.vbiosDisabledLinkMask, sizeof(params.vbiosDisabledLinkMask), ¶ms.vbiosDisabledLinks, &pKernelNvlink->vbiosDisabledLinkMask)*convertLinkMasksToBitVector(¶ms.initializedLinks, sizeof(params.initializedLinks), ¶ms.initializedLinkMasks, &localLinkMaskBitVector)**convertLinkMasksToBitVector(¶ms.initializedLinks, sizeof(params.initializedLinks), ¶ms.initializedLinkMasks, &localLinkMaskBitVector)*convertBitVectorToLinkMasks(&localLinkMaskBitVector, &pKernelNvlink->initializedLinks, sizeof(pKernelNvlink->initializedLinks), NULL)**convertBitVectorToLinkMasks(&localLinkMaskBitVector, &pKernelNvlink->initializedLinks, sizeof(pKernelNvlink->initializedLinks), NULL)*convertLinkMasksToBitVector(¶ms.initDisabledLinksMask, sizeof(params.initDisabledLinksMask), ¶ms.initDisabledLinks, &localLinkMaskBitVector)**convertLinkMasksToBitVector(¶ms.initDisabledLinksMask, sizeof(params.initDisabledLinksMask), ¶ms.initDisabledLinks, &localLinkMaskBitVector)*convertBitVectorToLinkMasks(&localLinkMaskBitVector, 
&pKernelNvlink->initDisabledLinksMask, sizeof(pKernelNvlink->initDisabledLinksMask), NULL)**convertBitVectorToLinkMasks(&localLinkMaskBitVector, &pKernelNvlink->initDisabledLinksMask, sizeof(pKernelNvlink->initDisabledLinksMask), NULL)*bEnableSafeModeAtLoad*bEnableTrainingAtLoad*call to kbusValidateFlaBaseAddress_DISPATCH*NVRM: FLA base addr validation failed for GPU %x **NVRM: FLA base addr validation failed for GPU %x *NVRM: Failed to stash fla base address for GPU %x **NVRM: Failed to stash fla base address for GPU %x *NVRM: FLA base addr %llx is assigned to GPU %x **NVRM: FLA base addr %llx is assigned to GPU %x *linkTrainedParams*NVRM: Failed to convert enabled links to RMCTRL mask **NVRM: Failed to convert enabled links to RMCTRL mask *bActiveOnly*NVRM: Failed to get the link train status for links **NVRM: Failed to get the link train status for links *bIsLinkActive**bIsLinkActive*NVRM: Failed to get fabric address for GPU %x **NVRM: Failed to get fabric address for GPU %x *bNvswitchProxy*PDB_PROP_KNVLINK_L2_POWER_STATE_ENABLED*NVRM: NVLink fabric is externally managed, skipping **NVRM: NVLink fabric is externally managed, skipping *NVRM: failed to reset HSHUB on GPU%u while preparing for GPU%u XVE reset (0x%x) **NVRM: failed to reset HSHUB on GPU%u while preparing for GPU%u XVE reset (0x%x) *NVRM: failed to reset HSHUB on GPU%u while preparing XVE reset: %s (0x%x) **NVRM: failed to reset HSHUB on GPU%u while preparing XVE reset: %s (0x%x) *call to knvlinkCoreShutdownDeviceLinks_IMPL*NVRM: failed to shutdown links on GPU%u while preparing XVE reset: %s (0x%x) **NVRM: failed to shutdown links on GPU%u while preparing XVE reset: %s (0x%x) *pExportPacket**pExportPacket**pMemoryExport**pFabricImportDesc**pMemoryFabricImportV2*call to knvlinkCoreResetDeviceLinks_IMPL*NVRM: failed to reset links on GPU%u while preparing XVE reset: %s (0x%x) **NVRM: failed to reset links on GPU%u while preparing XVE reset: %s (0x%x) *resetLinksparams**pMemoryFabricImportedRef*NVRM: 
Failed to sync peerLinksMask from GPU%d to GPU%d **NVRM: Failed to sync peerLinksMask from GPU%d to GPU%d *NVRM: on GPU%d NVLink is disabled. **NVRM: on GPU%d NVLink is disabled. *numPeerLinks*numSysmemLinks*NVRM: Message type received is Out of Bounds. Dropping the msg **NVRM: Message type received is Out of Bounds. Dropping the msg *NVRM: No Callback Registered for type %d. Dropping the msg **NVRM: No Callback Registered for type %d. Dropping the msg *NVRM: Out of memory, Dropping message **NVRM: Out of memory, Dropping message **pMemoryFabric*bOwnsLock*NVRM: Updating current NVLink config failed **NVRM: Updating current NVLink config failed *PDB_PROP_GPU_NVLINK_SYSMEM*NVRM: NVLink P2P is NOT supported between GPU%d and GPU%d **NVRM: NVLink P2P is NOT supported between GPU%d and GPU%d *NVRM: NVLink P2P is supported between GPU%d and GPU%d **NVRM: NVLink P2P is supported between GPU%d and GPU%d *(knvlinkGetNumLinksToPeer(pGpu1, pKernelNvlink1, pGpu0) == numPeerLinks)**(knvlinkGetNumLinksToPeer(pGpu1, pKernelNvlink1, pGpu0) == numPeerLinks)*(pKernelNvlink0->nvlinkBwMode == pKernelNvlink1->nvlinkBwMode)**(pKernelNvlink0->nvlinkBwMode == pKernelNvlink1->nvlinkBwMode)*call to knvlinkCheckNvswitchP2pConfig_IMPL*knvlinkCheckNvswitchP2pConfig(pGpu0, pKernelNvlink0, pGpu1)**knvlinkCheckNvswitchP2pConfig(pGpu0, pKernelNvlink0, pGpu1)*knvlinkCheckNvswitchP2pConfig(pGpu1, pKernelNvlink1, pGpu0)**knvlinkCheckNvswitchP2pConfig(pGpu1, pKernelNvlink1, pGpu0)*NVRM: NVLink P2P is NOT supported between between GPU%d and GPU%d **NVRM: NVLink P2P is NOT supported between between GPU%d and GPU%d **pMemoryMulticastFabric*call to _knvlinkCheckFabricCliqueId*call to _knvlinkCheckFabricProbeHealth*NVRM: GPU %d doesn't have a fabric address **NVRM: GPU %d doesn't have a fabric address *NVRM: GPU %d doesn't have a unique fabric address **NVRM: GPU %d doesn't have a unique fabric address *NVRM: non-NVSwitch GPU %d has a valid fabric address **NVRM: non-NVSwitch GPU %d has a valid fabric 
address *call to _knvlinkCheckNvswitchEgmAddressSanity*NVRM: GPU %d doesn't have a EGM fabric address **NVRM: GPU %d doesn't have a EGM fabric address *NVRM: non-NVSwitch GPU %d has a valid EGM fabric address **NVRM: non-NVSwitch GPU %d has a valid EGM fabric address *call to gpuFabricProbeGetFabricHealthStatus*call to nvlinkGetFabricHealthSummary*call to gpuFabricProbeGetFabricCliqueId*NVRM: GPU %d failed to get fabric clique Id: 0x%x **NVRM: GPU %d failed to get fabric clique Id: 0x%x *NVRM: GPU %d failed to get fabric clique Id 0x%x **NVRM: GPU %d failed to get fabric clique Id 0x%x *NVRM: GPU %d and Peer GPU %d cliqueId doesn't match **NVRM: GPU %d and Peer GPU %d cliqueId doesn't match *call to knvlinkIsBandwidthModeOff_DISPATCH**pMIGConfigSession*call to gpumgrGetGpuNvlinkBwModeScope_IMPL*bwModeScope**pMIGMonitorSession**pMmuFaultBuffer*src/kernel/gpu/nvlink/kernel_nvlinkapi.c*NVRM: NVLink unavailable. Return **src/kernel/gpu/nvlink/kernel_nvlinkapi.c**NVRM: NVLink unavailable. Return *NVRM: RBM not currently implemented on direct connect systems. **NVRM: RBM not currently implemented on direct connect systems. *NVRM: Requested RBM mode is not supported by GPU. **NVRM: Requested RBM mode is not supported by GPU. 
*call to gpumgrSetGpuNvlinkBwModePerGpu_IMPL*call to knvlinkGetSupportedBwMode_DISPATCH*call to knvlinkGetSupportedCounters_DISPATCH*NVRM: Unsetting USE_NVLINK_PEER field not supported **NVRM: Unsetting USE_NVLINK_PEER field not supported *enableNvlinkPeerParams*NVRM: GPU%d Failed to update USE_NVLINK_PEER for peer mask 0x%x **NVRM: GPU%d Failed to update USE_NVLINK_PEER for peer mask 0x%x *convertLinkMasksToBitVector(&pParams->linkMask, sizeof(pParams->linkMask), &pParams->links, &localLinkMask)**convertLinkMasksToBitVector(&pParams->linkMask, sizeof(pParams->linkMask), &pParams->links, &localLinkMask)*pEnabledLinkMask**pEnabledLinkMask*bitVectorAnd(&matchingLinkMask, &localLinkMask, pEnabledLinkMask)**bitVectorAnd(&matchingLinkMask, &localLinkMask, pEnabledLinkMask)*NVRM: Links not enabled. Return. **NVRM: Links not enabled. Return. *bitVectorToRaw(&localLinkMask, &tmpLinkMask, sizeof(tmpLinkMask))**bitVectorToRaw(&localLinkMask, &tmpLinkMask, sizeof(tmpLinkMask))*NVRM: Transition to L0 for GPU%d: linkMask 0x%llx in progress... Waiting for remote endpoints to request L2 exit **NVRM: Transition to L0 for GPU%d: linkMask 0x%llx in progress... Waiting for remote endpoints to request L2 exit *NVRM: Error setting power state %d on linkmask 0x%llx **NVRM: Error setting power state %d on linkmask 0x%llx *NVRM: Transition to L2 for GPU%d: linkMask 0x%llx in progress... Waiting for remote endpoints to request L2 entry **NVRM: Transition to L2 for GPU%d: linkMask 0x%llx in progress... Waiting for remote endpoints to request L2 entry *NVRM: Unsupported power state %d requested. **NVRM: Unsupported power state %d requested. *pMpsApi**pMpsApi**pNoDeviceMemory*NVRM: NVLink is unavailable, failing. **NVRM: NVLink is unavailable, failing. 
*bitVectorClrAll(&matchingLinkMask)**bitVectorClrAll(&matchingLinkMask)*NVRM: Requested link exceeds max allowed links: %d **NVRM: Requested link exceeds max allowed links: %d *numRecoveries**numRecoveries*errorRecoveries**errorRecoveries*call to knvlinkSetLinkMaskToPeer_IMPL**pNvlinkDev*pKernelNvlink->pNvlinkDev != NULL*src/kernel/gpu/nvlink/kernel_nvlinkcorelib.c**pKernelNvlink->pNvlinkDev != NULL**src/kernel/gpu/nvlink/kernel_nvlinkcorelib.c**pSwEng**core_link*NVRM: NVLink device isn't available. **NVRM: NVLink device isn't available. **pSweng**pOsDescMemory*NVRM: Skipping registration of link %d on simulation. **NVRM: Skipping registration of link %d on simulation. *NVRM: Link %d already registered in NVLINK core! **NVRM: Link %d already registered in NVLINK core! *Link**Link**linkIdx*call to knvlinkUtoa*NVRM: Failed to create nvlink_link struct **NVRM: Failed to create nvlink_link struct *call to osGetNvlinkLinkCallbacks*NVRM: link handlers not found **NVRM: link handlers not found *NVRM: Failed to register link %d in NVLINK core! **NVRM: Failed to register link %d in NVLINK core! *NVRM: LINK%d: %s registered successfully in NVLINK core **NVRM: LINK%d: %s registered successfully in NVLINK core *NVRM: Failed to update GPU UUID **NVRM: Failed to update GPU UUID *call to _knvlinkUpdateRemoteEndUuidInfo*call to gpuGetNameString_DISPATCH*devIdx**devIdx*(devIdx - pKernelNvlink->deviceName) < NVLINK_DEVICE_NAME_LENGTH**(devIdx - pKernelNvlink->deviceName) < NVLINK_DEVICE_NAME_LENGTH*call to knvlinkCoreGetDevicePciInfo_DISPATCH*call to nvlink_lib_update_uuid_and_device_name*NVRM: GPU already registered in NVLINK core! **NVRM: GPU already registered in NVLINK core! 
*NVIDIA GPU DRIVER**NVIDIA GPU DRIVER*NVRM: Failed to allocate memory for device name **NVRM: Failed to allocate memory for device name *NVRM: Failed to create nvlink_device struct for GPU **NVRM: Failed to create nvlink_device struct for GPU **pP2PApi**pPhysicalMemory**pPfm*call to knvlinkIsGpuReducedNvlinkConfig_DISPATCH*bReducedNvlinkConfig*call to knvlinkGetSupportedCoreLinkStateMask_DISPATCH*linkStateSupportedMask*bLinkStatesSymmetric*NVRM: Failed to register GPU in NVLINK core! **NVRM: Failed to register GPU in NVLINK core! *NVRM: GPU registered successfully in NVLINK core **NVRM: GPU registered successfully in NVLINK core *NVRM: NVLink core lib isn't initialized yet! **NVRM: NVLink core lib isn't initialized yet! *pNvlinkLink*src/kernel/gpu/nvlink/kernel_nvlinkcorelibcallback.c*NVRM: Error processing link info! **src/kernel/gpu/nvlink/kernel_nvlinkcorelibcallback.c**NVRM: Error processing link info! *callbackType*callbackParams*pGetUphyLoadParams**pGetUphyLoadParams*NVRM: Error issuing NvLink Get Uphy Load callback! **NVRM: Error issuing NvLink Get Uphy Load callback! **pPlatformRequestHandler*pPlatformEdppLimit**pPlatformEdppLimit*call to knvlinkPreTrainLinksToActiveAli_IMPL*pCounterVal**pCounterVal*call to knvlinkTrainLinksToActiveAli_IMPL*pbPM1Available**pbPM1Available*NVRM: Failed to request Link %d to transition to active **NVRM: Failed to request Link %d to transition to active **pNvlinkLink**pProfiler*NVRM: Error issuing NvLink Training Complete callback! **NVRM: Error issuing NvLink Training Complete callback! *NVRM: Bad token address provided! **NVRM: Bad token address provided! *pReadDiscoveryTokenParams**pReadDiscoveryTokenParams*NVRM: Error reading discovery token! **NVRM: Error reading discovery token! *NVRM: R4 Tokens not supported on the chip! **NVRM: R4 Tokens not supported on the chip! *NVRM: Error updating Local/Remote SID Info! **NVRM: Error updating Local/Remote SID Info! 
*remoteLocalSidInfo*pWriteDiscoveryTokenParams**pWriteDiscoveryTokenParams**pProfDev**pProfCtx*NVRM: Error writing Discovery Token! **NVRM: Error writing Discovery Token! *pGetRxDetectParams**pGetRxDetectParams*NVRM: RXDET (Receiver Detect) failed on link! **NVRM: RXDET (Receiver Detect) failed on link! *pSetRxDetectParams**pSetRxDetectParams*NVRM: Error performing RXDET (Receiver Detect) on link! **NVRM: Error performing RXDET (Receiver Detect) on link! **pNewParams*getRxSublinkMode*sublinkMode*sublinkSubMode**pProfBase**pClientPermissions*NVRM: Error getting current RX sublink state! **NVRM: Error getting current RX sublink state! *getTxSublinkMode*NVRM: Error getting current TX sublink state! **NVRM: Error getting current TX sublink state! *pSetRxSublinkModeParams**pSetRxSublinkModeParams*NVRM: Error setting RX sublink mode! **NVRM: Error setting RX sublink mode! *pSetTxSublinkModeParams**pSetTxSublinkModeParams*NVRM: Error setting TX sublink mode. mode = 0x%08llx **NVRM: Error setting TX sublink mode. mode = 0x%08llx *NVRM: Error getting current link state! **NVRM: Error getting current link state! *getTlLinkMode*pSetTlLinkModeParams**pSetTlLinkModeParams*NVRM: Error setting current link state! **NVRM: Error setting current link state! *getDlLinkMode**pRegisterMemory*pSetDlLinkModeParams**pSetDlLinkModeParams*linkModeParams*linkModePreHsParams*linkModeInitPhase1Params*seedDataDest**seedDataDest*seedDataSrc**seedDataSrc*NVRM: Thread state not initialized! **NVRM: Thread state not initialized! *bDoThreadStateFree*NVRM: Error getting current thread! **NVRM: Error getting current thread! *pThreadNode == &threadNode**pThreadNode == &threadNode*initoptimizeTimeout*NVRM: Error calling and polling for Init Optimize status! link 0x%x **NVRM: Error calling and polling for Init Optimize status! link 0x%x *linkModePostInitOptimizeParams*NVRM: Error setting current link state: 0x%llx! **NVRM: Error setting current link state: 0x%llx! 
*bStateSaved*linkModeOffParams**pSequence*pPostInitNegotiateParams**pPostInitNegotiateParams**pArg3***pArg3*pArg5**pArg5***pArg5*link_change->master->master**link_change->master->master*pWorkItemData**pWorkItemData***pWorkItemData*pWorkItemData != NULL**pWorkItemData != NULL*linkChangeData**linkChangeData*logstr**logstr*pGpu == pNvlinkLink->pGpu**pGpu == pNvlinkLink->pGpu*call to knvlinkRetrainLink_IMPL*NVRM: master GPU does not support NVLINK! **NVRM: master GPU does not support NVLINK! *NVRM: failed to acquire slave lock! **NVRM: failed to acquire slave lock! *NVRM: failed to acquire the master lock! *pArg6**pArg6***pArg6**NVRM: failed to acquire the master lock! *NVRM: failed to acquire the RM semaphore! **NVRM: failed to acquire the RM semaphore! *NVRM: pKernelNvlink is NULL, returning early **NVRM: pKernelNvlink is NULL, returning early *NVRM: released device GPU locks **NVRM: released device GPU locks *NVRM: decremented device lock refcnt to %u **NVRM: decremented device lock refcnt to %u *NVRM: device lock acquired outside of the core library callbacks **NVRM: device lock acquired outside of the core library callbacks *NVRM: incremented device lock refcnt to %u **NVRM: incremented device lock refcnt to %u *NVRM: acquired device GPU locks *phclients**phclients*phdevices**phdevices*phchannels**phchannels*pArg7**pArg7***pArg7**pPerfClkInfos**encoderCapacity**NVRM: acquired device GPU locks *NVRM: failed to acquire device GPU locks! **NVRM: failed to acquire device GPU locks! 
*src/kernel/gpu/nvlink/kernel_nvlinkcorelibtrain.c**src/kernel/gpu/nvlink/kernel_nvlinkcorelibtrain.c*NVRM: GPU%02u cached topology: **NVRM: GPU%02u cached topology: *NVRM: Unable to determine sysmem link mask **NVRM: Unable to determine sysmem link mask *NVRM: sysmem link mask : 0x%x **NVRM: sysmem link mask : 0x%x *NVRM: GPU%02u link mask : 0x%llx **NVRM: GPU%02u link mask : 0x%llx *NVRM: unknown link mask: 0x%llx **NVRM: unknown link mask: 0x%llx *NVRM: GPU%u requesting GPU%u NVLINK config update **NVRM: GPU%u requesting GPU%u NVLINK config update *call to _knvlinkPrintTopologySummary*bPeerUpdated**pArg4*call to knvlinkDiscoverPostRxDetLinks_DISPATCH*call to knvlinkCheckTrainingIsComplete_IMPL*trainingStatus*more**more*NVRM: Timedout while checking to see if training complete! **NVRM: Timedout while checking to see if training complete! *call to nvlink_lib_train_links_from_L2_to_active*saveRestoreHshubStateParams*programBufferRdyParams*bSysmem*pArg8**pArg8*pGpfifoAllocParams**pGpfifoAllocParams*pChID**pChID*NVRM: Transition to L0 for GPU%d: linkMask 0x%x in progress... Waiting for remote endpoints to request L2 exit **NVRM: Transition to L0 for GPU%d: linkMask 0x%x in progress... Waiting for remote endpoints to request L2 exit *NVRM: Unable to wakeup the linkmask 0x%x of GPU%d from SLEEP **NVRM: Unable to wakeup the linkmask 0x%x of GPU%d from SLEEP *NVRM: Error setting link: %d to sleep! **NVRM: Error setting link: %d to sleep! *call to nvlink_lib_powerdown_links_from_active_to_L2*NVRM: Transition to L2 for GPU%d: linkMask 0x%x in progress... Waiting for remote endpoints to request L2 entry **NVRM: Transition to L2 for GPU%d: linkMask 0x%x in progress... 
Waiting for remote endpoints to request L2 entry *NVRM: Unable to put the linkmask 0x%x of GPU%d to SLEEP **NVRM: Unable to put the linkmask 0x%x of GPU%d to SLEEP ***vgpu_get_latency_buffer_size***ceCapsPtr**eccStatusParams**grSmIssueRateModifier**range_params**ctxBuffInfo**grSmIssueRateModifierV2**vgpuStaticProperties**busGetInfoV2**ropInfoParams**mcEngineNotificationIntrVectors**mcStaticIntrTable**grSmIssueThrottleCtrl**gpuPartitionInfo*nvlinkSysmemParams**pcieSupportedGpuAtomics*sysmemLinkMask***vgpuBspCaps**execSyspipeInfo**cegetAllCaps*NVRM: Failed to setup HSHUB NVLink sysmem links state **NVRM: Failed to setup HSHUB NVLink sysmem links state *call to nvlink_lib_set_link_master***fifoDeviceInfoTablePtr*NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink->nvlinkLinks[linkId].core_link)**NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink->nvlinkLinks[linkId].core_link)*call to knvlinkTrainSysmemLinksToActive_IMPL*NVRM: FAILED TO TRAIN CPU/SYSMEM LINKS TO ACTIVE on GPU%d!!! **NVRM: FAILED TO TRAIN CPU/SYSMEM LINKS TO ACTIVE on GPU%d!!! 
*updateHshubMuxParams*updateType*bSysMem**zcullInfoParams**floorsweepMaskParams**grZcullInfo**vgxSystemInfo**ppcMaskParams**grPdbPropertiesParams***vgpuFbGetLtcInfoForFbp***fbDynamicBlacklistedPagesPtr**grInfoParams**execPartitionInfo**ccuSampleInfoParams*NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink0->nvlinkLinks[linkId].core_link)**NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink0->nvlinkLinks[linkId].core_link)**gidInfo**skuInfo*NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink1->nvlinkLinks[remoteLinkId].core_link)**NVL_SUCCESS == nvlink_lib_set_link_master( pKernelNvlink1->nvlinkLinks[remoteLinkId].core_link)**smOrderParams**vgpuConfig**pOutput**c2cInfo**ciProfiles**fbRegionInfoParams**pRpcstructurecopy**pSec2Context**arg_pSec2Context*call to knvlinkApplyNvswitchDegradedModeSettings_DISPATCH*call to _knvlinkActivateDiscoveredP2pConn*call to _knvlinkActivateDiscoveredSwitchConn*call to _knvlinkActivateDiscoveredSysmemConn*NVRM: Failed to activate link%d on GPU%d!!! **NVRM: Failed to activate link%d on GPU%d!!! *call to _knvlinkUpdateSwitchLinkMasksGpuDegraded*call to _knvlinkUpdateSwitchLinkMasks*call to _knvlinkUpdatePeerConfigs*call to knvlinkIsFloorSweepingNeeded_DISPATCH**pSec2Utils*call to knvlinkDirectConnectCheck_DISPATCH*call to nvlink_lib_powerdown_floorswept_links_to_off*convertBitVectorToLinkMasks(&pKernelNvlink->enabledLinks, NULL, 0, &linkTrainedParams.linkMask)**convertBitVectorToLinkMasks(&pKernelNvlink->enabledLinks, NULL, 0, &linkTrainedParams.linkMask)*convertMaskToBitVector((NvU64)tmpEnabledLinkMask, &pKernelNvlink->enabledLinks)**convertMaskToBitVector((NvU64)tmpEnabledLinkMask, &pKernelNvlink->enabledLinks)*disconnectedLinkMask*initDisabledLinksMask*call to knvlinkProcessInitDisabledLinks_IMPL*NVRM: Floorsweeping didn't work! enabledMaskCount: 0x%x and numActiveLinksTotal: 0x%x. 
Current link info cached in SW: discoveredLinks:0x%llx enabledLinks: 0x%llx; disconnectedLinks:0x%llx; initDisabledLinksMask:0x%x
bFloorSwept
call to _knvlinkRetrainLinkPrologue
call to nvlink_lib_get_link_master
NVRM: link master could not be found from GPU%u link %u
pSpdmPartitionParams
pDiagApi
master != slave
change_type
call to knvlinkRetrainLinkFromOff
call to knvlinkRetrainLinkFromSafe
NVRM: core lib device is either externally managed or not present, skipping
NVRM: No links to reset for the GPU%d
call to nvlink_lib_reset_links
NVRM: Unable to reset link(s) for GPU%d
NVRM: Lane shutdown not enabled, skipping link(s) reset for GPU%d
NVRM: No links to shutdown for the GPU%d
NVRM: GFW boot is enabled. Link shutdown is not required, skipping
call to nvlink_lib_powerdown_links_from_active_to_swcfg
call to nvlink_lib_powerdown_links_from_active_to_off
NVRM: Need to shutdown all links unilaterally for GPU%d
NVRM: Unable to turn off links for the GPU%d
NVRM: NVLink L2 is not supported. Returning
NVRM: Skipping L2 entry/exit since fabric is externally managed
NVRM: GPU%d: Link%d is not connected. Returning
pSwTest
NVRM: GPU%d: Links sharing PLL should enter/exit L2 together. Returning
pSwIntr
NVRM: GPU%d: not registered in core lib. Returning
call to _knvlinkEnterSleep
call to _knvlinkExitSleep
NVRM: Skipping link training due to regkey on GPU%d
NVRM: Fabric is externally managed, skip link training
NVRM: Nvlink in Forced Config - skip link training.
call to nvlink_lib_train_links_from_swcfg_to_active
NVLink: failed to train link %d to remote PCI:%04x:%02x:%02x
call to knvlinkPoweredUpForD3_DISPATCH
NVRM: Skip link training on GPU%d in RTD3/FGC6 exit. Links will train to ACTIVE in L2 exit path
NVRM: Skipping link due to forced configuration
pSyncGpuBoost
NVRM: P2P links are all trained already, return
convertBitVectorToLinkMasks(&pKernelNvlink0->enabledLinks, NULL, 0, &linkTrainedParams.linkMask)
pSyncpointMemory
bTrainLinks
NVRM: Enabled links are all trained already, return
version >= NVLINK_VERSION_22
ppMappingInfo
platformData
ppVidmemInfo
pVASpaceToken
ppThirdPartyP2P
ppClientOut
NVLink: Failed to train link %d to remote PCI:%04x:%02x:%02x
pRegisterPidParams
pUnregisterVidmemParams
pRegisterVidmemParams
pUnregisterVaSpaceParams
pRegisterVaSpaceParams
ppVASpaceInfo
pTimedSemSw
version >= NVLINK_VERSION_40
pTimedSemaSwObject
pReleaseParams
pGetTimeParams
pFlushParams
call to nvlink_lib_check_training_complete
NVRM: Links aren't fully trained yet!
call to knvlinkLogAliDebugMessages_DISPATCH
NVRM: Error updating Local/Remote Sid Info!
pUserModeApi
NVRM: Skipping unsupported sysmem link training on GPU%d
pUvmChannelRetainer
NVRM: Training sysmem links for GPU%d
NVRM: resetting timeout after link training
bNvswitchProxyPresent
call to knvlinkFloorSweep_IMPL
NVRM: Failed to floorsweep valid nvlink config!
call to _knvlinkGetNumPortEvents
numPortEvents
pAccessRight
phSubDevice
NVRM: L2 supported. Skip topology discovery on GPU%d in RTD3/FGC6 exit
pUvm
call to _knvlinkActivateDiscoveredConns
NVRM: Failed to activate the discovered connections on GPU%d
bSkipLinkTraining
arg6
arg7
arg8
arg9
pAccessCntrBufferGet
pAccessCntrBufferPut
pAccessCntrBufferFull
pAccessCntrMask
pFullFlag
bDisableL2Mode
bLinkTrainingDebugSpew
src/kernel/gpu/nvlink/kernel_nvlinkoverrides.c
NVRM: registryControl: 0x%x
NVRM: Disabling NVLINK (forced disable via regkey)
call to knvlinkIsNvlinkDefaultEnabled_IMPL
NVRM: Disabling NVLINK (disabled by platform default)
NVRM: Conflict in Nvlink Force Enable/Disable. Reverting to platform default.
NVRM: NVLink is enabled
NVRM: Overriding NvLink training during driver load via regkey.
bForceAutoconfig
NVRM: Link training debug spew turned on!
RMNvLinkverboseControlMask
NVRM: Forcing NVLINK Verbose Reg Init Prints enabled via regkey
RMNvLinkDisableLinks
RMNvLinkDisableLinks2
convertMaskToBitVector(tmp, &pKernelNvlink->regkeyDisabledLinksMask)
NVRM: disable links regkey set with a value of
RMNvLinkEnable
pUvmSw
NVRM: Enable NvLinks 0x%llx via regkey
RMNvLinkDisableP2PLoopback
PDB_PROP_GPU_NVLINK_P2P_LOOPBACK_DISABLED
RMNvLinkControlLinkPM
NVRM: RM NVLink Link PM controlled via regkey
NVRM: NVLink L2 power state disabled via regkey
RMNvLinkForceLaneshutdown
NVRM: NVLink lanedisable and laneshutdown is forced enabled via regkey
RMNvLinkForcedSysmemDeviceType
NVRM: Forcing NVLINK SYSMEM device type with 0x%x via regkey
forcedSysmemDeviceType
RMNvLinkForcedLoopbackOnSwitch
PDB_PROP_KNVLINK_FORCED_LOOPBACK_ON_SWITCH_MODE_ENABLED
NVRM: Forced Loopback on switch is enabled
RmNvlinkEncryption
bNvleModeRegkey
NVRM: Nvlink Encryption is enabled via regkey
NVRM: Nvlink Encryption is disabled via regkey
NVRM: Nvlink Encryption is enabled by default
NVRM: Nvlink Encryption is disabled by default since CC is disabled
nvleKeyRefreshInterval
RmNvlinkNvleKeyRefresh
NVRM: NVLE key refresh is disabled because of incorrect refresh interval via regkey
NVRM: NVLE key refresh is enabled via regkey, key refresh interval is 0x%x seconds
NVRM: NVLE key refresh is disabled via regkey
NVRM: NVLE key refresh is enabled by default, key refresh interval is 0x%x seconds
NVRM: NVLE key refresh is disabled by default
call to knvlinkPostSetupNvlinkPeer_DISPATCH
src/kernel/gpu/nvlink/kernel_nvlinkstate.c
NVRM: Failed to perform NvLink post setup!
NVRM: Failed to program bufferready for the sysmem nvlinks!
NVRM: Failed to enable ATS functionality for NVLink sysmem!
call to _knvlinkPurgeState
call to knvlinkCoreDriverUnloadWar_IMPL
pVaspaceGetHostRmManagedSizeParams
pReleaseEntriesParams
pReserveEntriesParams
pPageLevelInfoParams
(linkId == -1) || ((linkId >= 0) && (linkId < 32))
pGmmuFormatParams
NVRM: Failed to get Local Nvlink info for linkId %d to update Degraded GPU%d status
NVRM: GPU%d marked Degraded. Error originated on linkId %d!
osQueueWorkItem(pGpu, knvlinkShutdownLinks_WORKITEM, NULL, (OsQueueWorkItemFlags){ .apiLock = WORKITEM_FLAGS_API_LOCK_READ_ONLY, .bLockGpuGroupSubdevice = NV_TRUE})
(pGpu != NULL && pKernelNvlink != NULL)
knvlinkCoreShutdownDeviceLinks(pGpu, pKernelNvlink, NV_TRUE) == NV_OK
NVRM: Skipping device/link un-registration in MIG enabled path
NVRM: Skipping device/link un-registration in RTD3 GC6 entry path
call to knvlinkCoreRemoveLink_IMPL
call to knvlinkCoreRemoveDevice_IMPL
call to kioctrlDestructEngine_IMPL
NVRM: Failed to disable DL interrupts for the links
NVRM: Failed to pseudo-clean shutdown the links for GPU%d
call to knvlinkClearEncryptionKeys_IMPL
pVgpuApi
NVRM: Failed to clear NVLE keys for the GPU
call to knvlinkPostSchedulingEnableCallbackUnregister_DISPATCH
call to knvlinkCoreUpdateDeviceUUID_IMPL
pVgpuConfigApi
pGpuMetadataStringParams
pMigrationCapParams
pEncoderParams
pGetCapabilityParams
pSetCapabilityParams
call to knvlinkGetEncryptionBits_DISPATCH
knvlinkGetEncryptionBits_HAL(pGpu, pKernelNvlink)
NVRM: NVLE not enabled on GPU%d
call to knvlinkStatePostLoadHal_DISPATCH
NVRM: failed for GPU 0x%x
call to knvlinkSetDirectConnectBaseAddress_DISPATCH
knvlinkSetDirectConnectBaseAddress_HAL(pGpu, pKernelNvlink)
call to _knvlinkFilterDiscoveredLinks
call to _knvlinkFilterIoctrls
call to knvlinkSetPowerFeatures_IMPL
call to knvlinkIsAliSupported_DISPATCH
arg_pVidmemAccessBitBuffer
NVRM: Failed to get ALI status
call to knvlinkCoreAddDevice_IMPL
NVRM: Failed to add GPU device to nvlink core
NVRM: MIG Enabled or NVLink L2 is supported on chip. Skip device registration in RTD3/FGC6 exit
convertMaskToBitVector(~((NvU64)pKernelNvlink->initDisabledLinksMask), &localBitVector)
preInitializedLinks
call to knvlinkProgramLinkSpeed_DISPATCH
call to knvlinkOverrideConfig_DISPATCH
pVidmemAccessBitBuffer
pVidmem
call to knvlinkCoreAddLink_IMPL
NVRM: Failed to register Link%d in nvlink core
NVRM: NVLink L2 is supported on the chip. Skip link registration in RTD3/FGC6 exit
NVRM: Failed to perform pre-topology setup on mask of enabled links
call to knvlinkDetectNvswitchProxy_IMPL
call to sysForceInitFabricManagerState_IMPL
call to sysSyncExternalFabricMgmtWAR_IMPL
call to _knvlinkProcessSysmemLinks
pVmRange
NVRM: Init FLA failed, status:0x%x
NVRM: Failed to create TmrEvent for Link %d
arg_pZbcApi
call to knvlinkDumpCallbackRegister_DISPATCH
pZbcApi
pGetZBCClearTableSizeParams
call to knvlinkPostSchedulingEnableCallbackRegister_DISPATCH
call to knvlinkGetHshubSupportedRbmModes_DISPATCH
call to knvlinkCopyNvlinkDeviceInfo_IMPL
ppKernelPtr
pHObject
pOs32Flags
pOs02Flags
pHSubDevice
pBMustFree
pClassId
call to knvlinkRemoveMissingIoctrlObjects_IMPL
call to gpuIsCCMultiGpuNvleModeEnabled_IMPL
call to knvlinkCopyIoctrlDeviceInfo_IMPL
call to knvlinkCoreDriverLoadWar_IMPL
pHeapSize
call to knvlinkCoreIsDriverSupported_IMPL
call to knvlinkApplyRegkeyOverrides_IMPL
pContextInternal
call to knvlinkConstructHal_DISPATCH
call to _knvlinkCreateIoctrl
call to kioctrlGetLocalDiscoveredLinks
call to kioctrlGetGlobalToLocalMask
api
call to kioctrlGetPublicId
bitVectorComplement(&localBitVector, &pKernelNvlink->vbiosDisabledLinkMask)
bitVectorAnd(&pKernelNvlink->discoveredLinks, &pKernelNvlink->discoveredLinks, &localBitVector)
NVRM: Discovered Links:
bitVectorComplement(&localBitVector, &pKernelNvlink->regkeyDisabledLinksMask)
NVRM: Links after applying disable links regkey
src/kernel/gpu/ofa/kernel_ofa_ctx.c
NVRM: ofactxDestruct for 0x%x
api_intf
sod
eob
eod
pBinStoragePvt
pRpcHal
NVRM: ofactxConstruct for 0x%x
pOfaAllocParams
(pOfaAllocParams != NULL)
src/kernel/gpu/ofa/kernel_ofa_engdesc.c
kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, RM_ENGINE_TYPE_OFA(engineInstance), &rmEngineType)
pbCudaLimit
(pbCudaLimit != NULL)
src/kernel/gpu/perf/kern_cuda_limit.c
call to kperfCudaLimitCliGet
kperfCudaLimitCliGet(pDevice, &bCudaLimitBefore)
call to kperfCudaLimitCliSet
kperfCudaLimitCliSet(pDevice, pParams->bCudaLimit)
kperfCudaLimitCliGet(pDevice, &bCudaLimitAfter)
pRpcstructurecopyHal
nCudaLimitRefCnt
src/kernel/gpu/perf/kern_perf.c
NVRM: Code reentered. function %02x, reentered %02x set %d
call to pfmreqhndlrStateDestroy_IMPL
call to pfmreqhndlrStateUnload_IMPL
call to kperfGpuBoostSyncStateInit_DISPATCH
call to pfmreqhndlrStateLoad_IMPL
call to pfmreqhndlrStateInit_IMPL
boostParams2x
call to rmcfg_IsGB10Y
src/kernel/gpu/perf/kern_perf_boost.c
ra
pGpuLockedMask
ppThreadNodeTime
pThreadStateDatabaseTimeoutMsecs
ppIsrlocklessThreadNode
pVMInstanceInfo
ppReturnedCommon
ppReturnedNocatEntry
ppCommon
pTdrReasonStr
ppRingBuffer
pGpu
pEncoder
NVRM: The specified duration exceeds maximum %d!
pDclRecord
call to osTegraiGpuPerfBoost
(pKernelPerf != NULL)
call to kperfBoostSet_IMPL
call to RmRpcPerfGetCurrentPstate
ppBuffer
pat
src/kernel/gpu/perf/kern_perf_ctrl.c
NVRM: Call not supported with SMC Enabled
pForce
ppDmaMappingInfo
pDmaMapping
ppDmaMapping
pErrorContSmcSettings
pErrorContSettings
pOldLevelInfoParams
pVenId
pDvsecLen
pCapBaseAddr
perfGetClkInfoListSize
perfGetClkInfoList
pRegAddr
pAllowList
pgc6VirtAddr
gfwBootProgressVal
pBarBaseAddress
pIs64BitBar
pRmApi->Control(pRmApi, RES_GET_CLIENT_HANDLE(pSubdevice), RES_GET_HANDLE(pSubdevice), NV2080_CTRL_CMD_PERF_GET_GPUMON_PERFMON_UTIL_SAMPLES_V2, pParams, sizeof(*pParams))
numEntries <= NV2080_CTRL_PERF_GPUMON_SAMPLE_COUNT_PERFMON_UTIL
pSample
nvjpg
nvofa
pBoostGroups
pEndStr
numFound
tempBuffer
pBitField
pBitField32
rsvd
c2cPeer
pSrcKernelBus
pKernelBusUnused
pToken
pLevelMem
pProgress
pEntryValue
thisHeap
pBusInfos
ppThirdPartyP2PInfo
ppPhysicalAddresses
ppWreqMbH
ppRreqMbH
src/kernel/gpu/perf/kern_perf_gpuboostsync.c
NVRM: Trying to activate an already active Sync GPU Boost Group = 0x%08x.
NULL != (pBoostMgr)
call to gpuboostmgrGpuItr_IMPL
pGpuItr
sliGpuBoostSync
call to kperfGpuBoostSyncActivate_IMPL
limits
NVRM: Failed to toggle Sync Gpu Boost state on Gpu 0x%08x
pGpuItr2
NVRM: OS Semaphore acquire failed
NVRM: GPU lock acquire failed
call to gpuboostmgrGetBoostGrpIdFromGpu_IMPL
call to gpuboostmgrIsBoostGrpActive_IMPL
pThirdPartyP2PInfo
perfGpuBoostSyncParamsSet
ppExtentInfo
pMappingStart
pMappingLength
pP2PWriteCapStatus
pP2PReadCapStatus
pP2PAtomicsCapStatus
pConnectivity
pP2PWriteCapable
pP2PReadCapable
pP2PAtomicsCapable
spdm_secured_message_context
last_spdm_error
secured_message
app_message_size
app_message
spdm_secured_message_callbacks
secured_message_size
out_bin
out_bin_size
session_keys
session_keys_size
export_master_secret
export_master_secret_size
dhe_secret
spdm_context
local_public_key_buffer
local_public_key_buffer_size
peer_public_key_buffer
peer_public_key_buffer_size
cert_chain_data
cert_chain_data_size
prevLimits
cert_chain_buffer
cert_chain_buffer_size
bUpdate
diffns
spdm_session_info
scratch_buffer
prevChangeTsns
secured_contexts
parameter
req_info
public_key
rand
endian
der_data
ikm
new_hmac_ctx
hash_context
hash_ctx
new_hash_ctx
file_name
destination_buffer
source_buffer
dst_buf
src_buf
spdm_mel
spdm_mel_size
measurement_summary_hash
opaque_data
opaque_data_size
content_changed
device_measurement_count
device_measurement
device_measurement_size
NVRM: Failed to read Sync Gpu Boost init state, status=0x%x
need_reset
capabilities
key_usage_capabilities
current_key_usage
asym_algo_capabilities
current_asym_algo
assoc_cert_slot_mask
public_key_info_len
hysteresisus
subscribe_list
supported_event_groups_list
supported_event_groups_list_len
bHystersisEnable
event_group_count
bSliGpuBoostSyncEnable
th2_hash_data
th1_hash_data
th_hmac_buffer_size
th_hmac_buffer
th_hash_buffer_size
th_hash_buffer
session_info
message
receiver_buffer
receiver_buffer_size
max_msg_size
msg_buf_ptr
sender_buffer
sender_buffer_size
common_version
req_ver_set
res_ver_set
ver_set
data_in
get_element_ptr
get_element_len
l1l2_hash_size
l1l2_hash
sign_data
public_key_hash
certificate_chain_hash
failedReservationHandle
trust_anchor
trust_anchor_size
m_buffer
pRmApi->Control(pRmApi, hClient, it.pResourceRef->hResource, NV2080_CTRL_CMD_INTERNAL_PERF_PERFMON_CLIENT_RESERVATION_SET, &params, sizeof(params))
src/kernel/gpu/perf/kern_perf_pm.c
dhe_context
msg_buffer
extended_error_data
response_size
response
session_id
request
psk_hint
heartbeat_period
measurement_hash
requester_random_in
requester_random
requester_random_size
responder_random
responder_random_size
requester_opaque_data
responder_opaque_data
responder_opaque_data_size
requester_context
number_of_blocks
measurement_record_length
measurement_record
requester_nonce_in
requester_nonce
responder_nonce
cert_chain_size
cert_chain
slot_mask
total_digest_buffer
der_cert
pem_cert
p_pem_size
p_der_size
ceIdx
pBFirstIter
pHshubId
cryptBundle
src/kernel/gpu/perf/kern_perf_pwr.c
keyId
NVRM: NV2080_CTRL_CMD_PERF_SET_POWERSTATE called in power down state.
NVRM: Returning NV_ERR_GPU_NOT_FULL_POWER.
kmb
pAllocParamSizeBytes
NVRM: NV2080_CTRL_CMD_PERF_SET_POWERSTATE RPC failed
call to _kperfSendPostPowerStateCallback
powerEventNotificationParams
bSwitchToAC
bGPUCapabilityChanged
displayMaskAffected
pPowerStateParams->powerState < NV2080_CTRL_PERF_AUX_POWER_STATE_COUNT
call to gpuGetPerfPostPowerStateFunc
NVRM: Error getting Aux Power State:0x%x
inOutData
NVRM: PostPState callback error:0x%x
pBAllowNull
pDeferredApiObject
ppCliDeferredApi
pDeferredApi
pbIsFirstDevice
pChnCtl
pChnStatus
NVRM: Non-Privileged clients are not allowed to access clock controls with SMC enabled.
pGpioFunction
pGpioPin
pGpioDirection
rgPacketMode
pChannelInstance
pHObjectBuffer
pInitialGetPutOffset
pAllowGrabWithinSameClient
pConnectPbAtGrab
channelPBSize
pSubDeviceId
NVRM: Non-Privileged clients are not allowed to use Turbo Boost clock controls.
pMuxStatus
pIsEmbeddedDisplayPort
call to perfbufferPrivilegeCheck_IMPL
perfbufferPrivilegeCheck(pResource)
bEnable
src/kernel/gpu/perf/kern_perfbuffer.c
pDisplayMasks
pRefresh
pDisplayMask
pPresence
pStartDelay
pSyncSkew
pNSync
pVideoMode
pSyncPolarity
iface
ppExtdevs
pPrintBuf
pKernelPmu->pPrintBuf != NULL
src/kernel/gpu/pmu/kern_pmu.c
workerThreadData
pKernelPmu->pPrintBuf == NULL
pVRR
pServerGpu
printBufSize
houseSyncMode
NVRISCV
pVrr
call to kpmuGetIsSelfInit_DISPATCH
pKernelFsp->pCotPayload->frtsVidmemOffset > 0U
delay
call to kpmuReservedMemoryBackingStoreSizeGet_IMPL
call to kpmuReservedMemorySurfacesSizeGet_IMPL
call to kpmuReservedMemoryMiscSizeGet_IMPL
skew
videoMode
polarity
src/kernel/gpu/rc/kernel_rc.c
NVRM: RC all %schannels for critical error %d.
user
call to kfifoStartChannelHalt_DISPATCH
call to kfifoCompleteChannelHalt_DISPATCH
krcErrorSetNotifier(pGpu, pKernelRc, pKernelChannel, exceptType, kchannelGetEngineType(pKernelChannel), RC_NOTIFIER_SCOPE_CHANNEL)
krcErrorSendEventNotifications_HAL(pGpu, pKernelRc, pKernelChannel, kchannelGetEngineType(pKernelChannel), 0, exceptType, RC_NOTIFIER_SCOPE_CHANNEL, 0, NV_FALSE)
RmSuppressXidDump
suppressXid
xidDumpSuppressed
call to _krcValidateAndDumpToKernelLog
pNumPluginChannels
pRequesterID
pIsolationID
pBootArgsGspSysmemOffset
pChildPresentList
gpuArch
gpuImpl
IS_GSP_CLIENT(ENG_GET_GPU(pKernelRc))
pAttachedGpu
pTransition
pEngDescriptorList
NVRM: PCI-E corelogic: Pending errors in DEV_CTRL_STATUS = %08X
clDevCtrlStatusFlags_Org
NVRM: PCI-E corelogic: CORR_ERROR_DETECTED
pEngDescriptor
NVRM: PCI-E corelogic: NON_FATAL_ERROR_DETECTED
ppChildPtr
ppClassInfo
NVRM: PCI-E corelogic: FATAL_ERROR_DETECTED
NVRM: PCI-E corelogic: UNSUPP_REQUEST_DETECTED
call to clPcieReadAerCapability_IMPL
clAer
NVRM: PCI-E Advanced Error Reporting Corelogic Info:
NVRM: Uncorr Error Status Register : %08X
NVRM: Uncorr Error Mask Register : %08X
NVRM: Uncorr Error Severity Register : %08X
NVRM: Corr Error Status Register : %08X
NVRM: Corr Error Mask Register : %08X
NVRM: Advanced Err Cap & Ctrl Register: %08X
NVRM: Header Log [0-3] : %08X
HeaderLogReg
Header
NVRM: Header Log [4-7] : %08X
NVRM: Header Log [8-B] : %08X
NVRM: Header Log [C-F] : %08X
NVRM: Root Error Command Register : %08X
NVRM: Root Error Status : %08X
NVRM: Error Source ID Register : %08X
rootCause
pidStr
pExternalClassId
pTargetedHeap
allocProcName
call to osGetCurrentProcessName
call to _krcLogUuidOnce
pNotifyType
pInfo32
call to krcGetMigAttributionForError_KERNEL
rootCauseXidStr
 caused by previous Xid %d
pPolledDataMask
pPollingIntervalMs
NVRM: Xid (PCI:%04x:%02x:%02x GPU-I:%02u GPU-CI:%02u): %d, pid=%s, name=%s, %s%s
pSeq
pHash
pDigest
pInfo
pNotifyRecord
ppFecsMemDesc
pFecsRecordSize
pGrIndex
pDebuggerRef
pGrResourceRef
pTargetGpu
pCpuVirtAddr
NVRM: Xid (PCI:%04x:%02x:%02x GPU-I:%02u GPU-CI:%02u): %d, %s%s
NVRM: Xid (PCI:%04x:%02x:%02x GPU-I:%02u): %d, pid=%s, name=%s, %s%s
pBooterUcode
NVRM: Xid (PCI:%04x:%02x:%02x GPU-I:%02u): %d, %s%s
pBiosSize
pExpansionRomOffset
pIfrSize
NVRM: Xid (PCI:%04x:%02x:%02x): %d, pid=%s, name=%s, %s%s
pElfData
pSectionName
ppSectionData
pSectionSize
pSectionNameBuf
pSectionPrefix
NVRM: Xid (PCI:%04x:%02x:%02x): %d, %s%s
pNameInMsg
pGpusLockedMask
pbRetry
pVbiosVersionStr
NVRM: GPU at PCI:%04x:%02x:%02x: %s
gidString
szMemoryId
szPrefix
pKernelGsp
NVRM: GPU Board Serial Number: %s
ppRpc
bGpuUuidLoggedOnce
RmBreakonRC
pDurationUnitsChar
NVRM: BreakOnRc set by regkey RmBreakonRC= 0x%08x
bBreakOnRc
NVRM: BreakOnRc overridden by NV_DEBUG_BREAK_FLAGS_RC
NVRM: BreakOnRc = %d
RmWatchDogTimeOut
watchdogPersistent
timeoutSecs
NVRM: RC Watchdog timeout forced to %d seconds.
eventData
pHistory
RmWatchDogInterval
data0
data1
ppFlcnUcode
intervalSecs
pFlcnUcodeDescFromBit
pDescV3
pDescV2
pFwsecUcodeDescFromBit
pBitAddr
pStructure
pMessageHeader
pPayloadSize
ppCurrentKernelChannel
ppCpuAddr
pOldContext
ppOldContext
bLogEvents
NVRM: RC Error Logging is enabled
bRcOnBar2Fault
RmRobustChannels
bRobustChannelsEnabled
UseUncachedPCIMappings
call to _krcInitRegistryOverrides
src/kernel/gpu/rc/kernel_rc_callback.c
pNumBlocks
NVRM: RcErrorCallback requested for an unsupported engine 0x%x (0x%x)
call to osCheckCallback_v2
call to osCheckCallback
bCheckCallback
pRcErrorContext
secChId
sechClient
EngineId
faultStr
NVRM: Gpu marked for reset. Triggering TDR.
call to osRCCallback_v2
clientAction
call to osRCCallback
fabricEgmAddr
fabricAddr
pVaMaxPageSize
NVRM: -- Driver tells RM to ignore
ppPtr
pMethodBuf
pCopyType
bReturn
pMemoryRef
bAllocedMemDesc
ppHwResource
ppBlockNew
ppBlockSplit
maxOffset
maxFree
ppBlock
pBlackList
ignoreBankPlacement
textureClientIndex
currentBankInfo
placement
pComprOffset
pComprKind
pLineMin
pLineMax
pGpaEntries
pSemaphoreWait
pSignal
pWait
pUnmap
pUnusedData
ppScrubList
call to _vgpuRcResetCallback
call to osCondAcquireRmSema
ppMap
evictStart
evictEnd
pLargestFreeOffset
freeList
numPagesAlloc
NVRM: -- No context, skipping reset of channel...
pBlacklistCount
pbClientManagedBlacklist
ppBlacklistChunks
pNumFree
evictPages
allocPages
pRegion
validRegionList
allocatedPages
allocatedCount
pGpaPhysAddr
pNumEvictablePages
delta2m
delta64k
pStartPos
pSec2Buf
rcEnable
pOffsetTableIndex
recordBuffer
outputRecordSize
pLocalKernelMemorySystem
src/kernel/gpu/rc/kernel_rc_ctrl.c
NVRM: NVRM-RC: unknown error element type: %d
pBootConfig
bFaultValid
pLevelBuffer
pBChanged
pOldMem
pSubLevels
pRootMem
pFmtEntry
pFmtPte
pFmtPde
pValid
pProtoBuf
dwSize
pDone
call to krcSubdeviceCtrlGetErrorInfoCheckPermissions_KERNEL
pLayout
pInitialFmtLevel
call to krcSubdeviceCtrlCmdRcGetErrorV2_IMPL
timeStampBuffer
call to krcSubdeviceCtrlCmdRcGetErrorCount_IMPL
call to krcReadVirtMem_IMPL
pageStartOffset
start4kPage
end4kPage
cursize
call to dmaXlateVAtoPAforChannel_DISPATCH
dmaXlateVAtoPAforChannel_HAL(pGpu, pDma, pKernelChannel, virtAddr, &physaddr, &memtype)
src/kernel/gpu/rc/kernel_rc_misc.c
memmgrMemRead(pMemoryManager, &surf, pMem, RM_PAGE_SIZE, TRANSFER_FLAGS_NONE)
nvlinkLinkAndClockInfoParams
bridgeSensableLinks
cur4kPage
call to krcErrorSendEventNotificationsCtxDma_FWCLIENT
krcErrorSendEventNotificationsCtxDma_HAL(pGpu, pKernelRc, pKernelChannel, scope)
src/kernel/gpu/rc/kernel_rc_notification.c
linkChangeData
pbCudaLimit
lastXidTimestamp
message_size
session_id
is_app_message
transport_message
transport_message_size
sequence_number_buffer
pSessionInfo
entryCount
pPlxCount
pBridgeId
bridgeIndex
fwVersion
oemVersion
siliconRevision
bcRes
domainId
busId
deviceId
funcId
pOpSmIds
pTimerScheduleParams
NVRM: failed to create notification list
bNewListCreated
NVRM: failed to insert channel into notification list
notifierStatus
pNextAlarmTime
kfifoChannelListCreate(pGpu, pKernelFifo, &pChanList)
kfifoChannelListAppend(pGpu, pKernelFifo, pKernelChannel, pChanList)
pChipId0
pChipId1
pSocChipId0
pRmVariant
pTegraType
pChipArch
pChipImpl
pHidrev
pIsVirtual
NVRM: Notifier requested for an unsupported engine 0x%x (0x%x)
pGpuId
pbUuidValid
call to krcErrorWriteNotifier_CPU
pNumRegions
pPdeCopyParams
krcErrorWriteNotifier_HAL(pGpu, pKernelRc, pKernelChannel, exceptType, localRmEngineType, 0xffff, &flushFlags)
pEnv
pContinue
pInvalCursor
ppGVAS
pFullPdeCoverage
pPartialPdeExpMax
call to _krcErrorWriteNotifierCpuMemHelper
NVRM: notified (ECC) channel 0x%08x
pHwAlloc
NVRM: No valid error
notifier found for ECC error context 0x%x Skipping notification update **NVRM: No valid error notifier found for ECC error context 0x%x Skipping notification update *pImpClient**pImpClient*pSrcParentInfo**pSrcParentInfo*pImpParentHandle**pImpParentHandle*phParentClient**phParentClient*phParentObject**phParentObject*pExportParams**pExportParams*ppGpuOsInfo**ppGpuOsInfo***ppGpuOsInfo*NVRM: notified channel 0x%08x **pSrcClient*pDestParentInfo**pDestParentInfo**NVRM: notified channel 0x%08x **pParentInfo*pPhysPageSize**pPhysPageSize*pAttachInfo**pAttachInfo*pOwnerGpu**pOwnerGpu*ppPhysMemDesc**ppPhysMemDesc***ppPhysMemDesc*ppPhysMemory**ppPhysMemory***ppPhysMemory*pAttrs**pAttrs*pPhysAttrs**pPhysAttrs**pPhysMemory**pMulticastFabricDesc*pNvlAttr**pNvlAttr*pMemoryInfo**pMemoryInfo**pDstVirtualMemory*NVRM: No valid error notifier found for error context 0x%x Skipping notification update **NVRM: No valid error notifier found for error context 0x%x Skipping notification update *rtnvalue**rtnvalue*pChipsetNameStr**pChipsetNameStr*pVendorNameStr**pVendorNameStr*pSliBondNameStr**pSliBondNameStr*pSubSysVendorNameStr**pSubSysVendorNameStr**subvendorID**subdeviceID*pWatchdogChannelInfo***pControlGPFifo*GPPut < WATCHDOG_GPFIFO_ENTRIES*src/kernel/gpu/rc/kernel_rc_watchdog.c**GPPut < WATCHDOG_GPFIFO_ENTRIES**src/kernel/gpu/rc/kernel_rc_watchdog.c*pWatchdogState*notifierToken*pbOffset*ptrbase**ptrbase*ptrbase1**ptrbase1*pScript**pScript*pDeviceHandle**pDeviceHandle***pDeviceHandle*pScriptEntry**pScriptEntry*pCapMap**pCapMap**deviceHandle***deviceHandle*pMcfgTable**pMcfgTable*ppMcfgTable**ppMcfgTable***ppMcfgTable*pRsdtAddr**pRsdtAddr*pXsdtAddr**pXsdtAddr*pbusRp**pbusRp*pdevRp**pdevRp*pfuncRp**pfuncRp*pvendorIDRp**pvendorIDRp*pdeviceIDRp**pdeviceIDRp*call to 
rmcfg_IsHOPPER_CLASSIC_GPUSorBetter*!IsHOPPERorBetter(pGpu)**!IsHOPPERorBetter(pGpu)*pbRoutingCap**pbRoutingCap*dataOut**dataOut*pCpuidInfo**pCpuidInfo*pPlx**pPlx*pBR04**pBR04*pBR03**pBR03*pNbsiElement**pNbsiElement*pActualGlobIdx**pActualGlobIdx***pRtnObj*pbFound**pbFound**inOutData*tmpBuffer**tmpBuffer*pNbsiDir**pNbsiDir***pNbsiDir*pNbsiDirSize**pNbsiDirSize*pbFreeDirMemRequired**pbFreeDirMemRequired*pAcpiMethod**pAcpiMethod*pRtnMethod**pRtnMethod*rtnNbsiDirSize**rtnNbsiDirSize*globTypeRtnStatus**globTypeRtnStatus*pWantedGlobIndex**pWantedGlobIndex*pCurTbl**pCurTbl**pTestObj**pNbsiGenObj*pDriverVersion**pDriverVersion*pRtnDirSize**pRtnDirSize*pNumGlobs**pNumGlobs**data_v***data_v*szStr**szStr*pbCommonPciSwitch**pbCommonPciSwitch*pPfmreqhndlrData**pPfmreqhndlrData*pGpuSliStatus**pGpuSliStatus*ppResDesc**ppResDesc***ppResDesc***pRightsRequested***notifiers*ppResourceList**ppResourceList***ppResourceList*numResources**numResources*NVRM: Busflush failed. *pUserAllocParams**pUserAllocParams***pUserAllocParams**NVRM: Busflush failed. 
*call to krcWatchdogWriteNotifierToGpfifo_IMPL*ppUidToken**ppUidToken***ppUidToken*ppUserInfo**ppUserInfo***ppUserInfo*busPeerIds**busPeerIds*controllerTableOffset**controllerTableOffset*pEntryCount**pEntryCount*pPackedData**pPackedData*pUnpackedData**pUnpackedData*pUnpackedSize**pUnpackedSize*pFieldsCount**pFieldsCount*pPackedSize**pPackedSize**_pContext**pWatchdogChannelInfo*NVRM: Unable to allocate a watchdog client **NVRM: Unable to allocate a watchdog client *bHandleValid*pNv0080**pNv0080***kernelAddr*kernelPriv**kernelPriv***kernelPriv***userAddr*userPriv**userPriv***userPriv*NVRM: Unable to allocate a watchdog device **NVRM: Unable to allocate a watchdog device *pNv2080**pNv2080*NVRM: Unable to allocate a watchdog subdevice **NVRM: Unable to allocate a watchdog subdevice **pKernelAddr***pKernelAddr*pKernelPriv**pKernelPriv***pKernelPriv*pUserAddr**pUserAddr***pUserAddr*pUserPriv**pUserPriv***pUserPriv*ppOldEvent**ppOldEvent***ppOldEvent*pRmEngineId**pRmEngineId**pEventNotify*pCpuVirtualAddress**pCpuVirtualAddress***pCpuVirtualAddress*pPrivateData**pPrivateData***pPrivateData**engineType**hostClass**ceClass**computeClass**faultBufferClass**accessCounterBufferClass**accessBitsBufferClass**sec2Class*rmCeCaps**rmCeCaps*pGpuConfComputeCaps**pGpuConfComputeCaps**virtMode*phVaSpace**phVaSpace**hSubdevice*hVaSpace**hVaSpace***desc*hwChannelId**hwChannelId**vaOffset*paOffset**paOffset**newKind**readOnly**atomic*pMemOwnerGpu**pMemOwnerGpu**isPeerSupported**isBar1Supported**peerId*pOwningGpu**pOwningGpu*nvlinkStatus1**nvlinkStatus1*nvlinkStatus2**nvlinkStatus2**nvlinkVersion*linkBandwidthMBps**linkBandwidthMBps*nvlinkStatus**nvlinkStatus*atomicSupported**atomicSupported*connectedToCpu**connectedToCpu*nvlinkStatusOut**nvlinkStatusOut***nvlinkStatusOut*grObj*gpfifoObj*class2dSubch*gpfifoMapping**gpfifoMapping*pushBufBytes*pVirtual**pVirtual*NVRM: Unable to allocate unified heap for watchdog **NVRM: Unable to allocate unified heap for watchdog 
*pbBytes*allocationSize*bCacheSnoop*p2pInfo**p2pInfo***p2pInfo*acquiredLocks**acquiredLocks**pDefaultSecInfo**pceMask*NVRM: Unable to allocate %s memory for watchdog *pbParamsAllocated**pbParamsAllocated**NVRM: Unable to allocate %s memory for watchdog *system**system*pGpuInst**pGpuInst*pCacheGpuFlags**pCacheGpuFlags**cachedEntry***cachedEntry**pProvidedParams***pProvidedParams*NVRM: Unable to map memory for watchdog **NVRM: Unable to map memory for watchdog *NVRM: Unable to map memory into watchdog's heap **NVRM: Unable to map memory into watchdog's heap *pGpuAddr*pCtxDma**pCtxDma*pObjRpcStructureCopy**pObjRpcStructureCopy*NVRM: Unable to set up watchdog's error context **NVRM: Unable to set up watchdog's error context *NVRM: Unable to set up watchdog's notifier **NVRM: Unable to set up watchdog's notifier *NVRM: Unable to allocate video memory for USERD **NVRM: Unable to allocate video memory for USERD *pChannelGPFifo**pChannelGPFifo*NVRM: Unable to alloc watchdog channel **NVRM: Unable to alloc watchdog channel *NVRM: Unable to create a watchdog GPFIFO mapping **NVRM: Unable to create a watchdog GPFIFO mapping *errorContext**errorContext**notifierToken*NVRM: Unable to allocate class %x **NVRM: Unable to allocate class %x *NVRM: Unable to obtain client object **NVRM: Unable to obtain client object *NVRM: CliGetKernelChannelWithDevice failed **NVRM: CliGetKernelChannelWithDevice failed *NVRM: Unable to get class engine ID %x **NVRM: Unable to get class engine ID %x *NVRM: Unable to schedule watchdog channel **NVRM: Unable to schedule watchdog channel *NVRM: Unable to get work submit token for watchdog **NVRM: Unable to get work submit token for watchdog *call to krcWatchdogInitPushbuffer_DISPATCH*pWatchdogPersistent*nextRunTime**pWatchdogState*deviceResetRd*bCurrentEnableRequest*bCurrentDisableRequest*bCurrentSoftDisableRequest*enable watchdog**enable watchdog*opstring**opstring*soft disable watchdog**soft disable watchdog*disable watchdog**disable 
watchdog*release all requests**release all requests*destroy RM client**destroy RM client*NVRM: Cannot %s on GPU 0x%x, due to another client's request (Enable requests: %d, Disable requests: %d) **NVRM: Cannot %s on GPU 0x%x, due to another client's request (Enable requests: %d, Disable requests: %d) *NVRM: (before) op: %s, GPU 0x%x, enableRefCt: %d, disableRefCt: %d, softDisableRefCt: %d, WDflags: 0x%x **NVRM: (before) op: %s, GPU 0x%x, enableRefCt: %d, disableRefCt: %d, softDisableRefCt: %d, WDflags: 0x%x **pInputParamStructPtr***pInputParamStructPtr*bRcWatchdogEnableRequested*bRcWatchdogDisableRequested*bRcWatchdogSoftDisableRequested*rpcToHost**rpcToHost**Params***Params**params_in***params_in**pCreateParms***pCreateParms*NVRM: (after) op: %s, GPU 0x%x, enableRefCt: %d, disableRefCt: %d, softDisableRefCt: %d, WDflags: 0x%x **pKernelCreateParams***pKernelCreateParams*pObjStructurecopy**pObjStructurecopy*pConsolidatedRpcPayload**pConsolidatedRpcPayload**bufferSize*guestPages**guestPages*pRPC**pRPC**NVRM: (after) op: %s, GPU 0x%x, enableRefCt: %d, disableRefCt: %d, softDisableRefCt: %d, WDflags: 0x%x *vgpu_static_info**vgpu_static_info*oldVblank**oldVblank*vblankFailureCount**vblankFailureCount*Head %08x Count %08x**Head %08x Count %08x*src/kernel/gpu/rc/kernel_rc_watchdog_callback.c*NVRM: NVRM-RC: RM has detected that %x Seconds without a Vblank Counter Update on head:%c%d **src/kernel/gpu/rc/kernel_rc_watchdog_callback.c**NVRM: NVRM-RC: RM has detected that %x Seconds without a Vblank Counter Update on head:%c%d *deviceReset**deviceReset*NVRM: krcWatchdogInit failed: %d **NVRM: krcWatchdogInit failed: %d *pCurrPstate**pCurrPstate***pCurrPstate*NVRM: RC watchdog: error on our channel (reinitializing). **NVRM: RC watchdog: error on our channel (reinitializing). 
*pEventEntry**pEventEntry*licenseInfo**licenseInfo*call to _krcTestChannelRecovery*channelTestCountdown*guestFbSegmentPageShift**guestFbSegmentPageShift*Spa**Spa*subdev_id**subdev_id*allNotifiersWritten*NVRM: RC watchdog: GPU is probably locked! Notify Timeout Seconds: %d **NVRM: RC watchdog: GPU is probably locked! Notify Timeout Seconds: %d *call to krcWatchdogRecovery_DISPATCH*NVRM: RC watchdog: Trying to recover. **NVRM: RC watchdog: Trying to recover. *NVRM: RC watchdog: GPU is possibly locked. Attempting to restart watchdog. **NVRM: RC watchdog: GPU is possibly locked. Attempting to restart watchdog. *notifyLimitTime*resetLimitTime*call to krcWatchdog_DISPATCH*call to krcWatchdogCallbackVblankRecovery_IMPL*call to krcWatchdogCallbackPerf_b3696a*call to _krcThwapChannel*pVgpuType**pVgpuType*ppVgpuType**ppVgpuType***ppVgpuType**bMatch**pUsed*bNodeIsRemoved**bNodeIsRemoved*pGzState**pGzState*oBuffer**oBuffer***t**parentOfX*blockFree**blockFree**pLargestFreeSize**pNumFreeBlocks*pNextValue**pNextValue***pNextValue*pTargetNode**pTargetNode*pTargetPosition**pTargetPosition*pPRoot**pPRoot***pPRoot**pClientData***pClientData*pEle**pEle*pMax**pMax***pAlloc*arr**arr***arr**pIo32State*pRiscv64Trace**pRiscv64Trace*pRiscv64GprState**pRiscv64GprState*pRiscv64CsrState**pRiscv64CsrState*pFormatVersion**pFormatVersion*pImplementerSig**pImplementerSig*pTraceV1**pTraceV1*pHdr**pHdr***pHdr***pStart***pEnd*recordHeader**recordHeader*m1m2_hash_size**m1m2_hash_size*m1m2_hash**m1m2_hash***m1m2_hash*NVRM: Unable to thwap channel 0x%02x, it's not in use **NVRM: Unable to thwap channel 0x%02x, it's not in use *NVRM: Thwapping channel channel 0x%08x. **NVRM: Thwapping channel channel 0x%08x. 
*!IS_MIG_ENABLED(pGpu)*src/kernel/gpu/rc/kernel_rc_watchdog_ctrl.c**!IS_MIG_ENABLED(pGpu)**src/kernel/gpu/rc/kernel_rc_watchdog_ctrl.c*call to krcWatchdogChangeState_IMPL*spdm_signing_context**spdm_signing_context***spdm_signing_context*context_size**context_size*oid_other**oid_other*hmac_data**hmac_data***hmac_data*hmac**hmac***hmac**req_slot_id_param*call to memdescAddrSpaceListToU32*version_number_entry_count**version_number_entry_count*version_number_entry**version_number_entry***data_out*encap_request**encap_request***encap_request*encap_response_size**encap_response_size*encap_response**encap_response***encap_response*call to ksec2PollForCanSend_IMPL*call to _ksec2ConfigEmemc_GB10B*call to _ksec2WriteToEmem_GB10B*key_updated**key_updated*handshake_secret**handshake_secret*finished_key**finished_key*major_secret**major_secret*pem_data**pem_data*password**password*rand_data**rand_data*pOpParams**pOpParams**pLevelInst*pSubLevelInsts**pSubLevelInsts***pSubLevelInsts*ppLevelInst**ppLevelInst***ppLevelInst*pPageFmtIn**pPageFmtIn*pPageFmtOut**pPageFmtOut*pIndexLoOut**pIndexLoOut*pIndexHiOut**pIndexHiOut**pOpCtx***pOpCtx*call to _ksec2UpdateQueueHeadTail_GB10B*pNewObject**pNewObject*pClassDef**pClassDef*pVendorIdLength**pVendorIdLength*pCpuTPMFeatures**pCpuTPMFeatures*pCpuArchPerfMonitor**pCpuArchPerfMonitor*pCpuFeatures**pCpuFeatures*pCpuVersion**pCpuVersion*pKernelSec2->pCotPayload != NULL*src/kernel/gpu/sec2/arch/blackwell/kernel_sec2_gb10b.c*vect**vect**pKernelSec2->pCotPayload != NULL**src/kernel/gpu/sec2/arch/blackwell/kernel_sec2_gb10b.c*call to ksec2SendMessage_IMPL*NVRM: Sent following content to SEC2: *pCounter**pCounter**NVRM: Sent following content to SEC2: *pPrereq**pPrereq*pRightsToRequest**pRightsToRequest*pParentRights**pParentRights*pSharePolicyA**pSharePolicyA*pSharePolicyB**pSharePolicyB*pRightsShared**pRightsShared*NVRM: Wait for SEC2 CMD handling: **NVRM: Wait for SEC2 CMD handling: *pDomain**pDomain*call to 
_ksec2WaitForCmdHandling*pClientEntry1**pClientEntry1*pClientEntry2**pClientEntry2*NVRM: Timed out waiting for SEC2 Cmd Handling. **NVRM: Timed out waiting for SEC2 Cmd Handling. *NVRM: After CMD timeout NV_PSEC_FALCON_MAILBOX0 = 0x%x *ppClientEntry1**ppClientEntry1***ppClientEntry1**NVRM: After CMD timeout NV_PSEC_FALCON_MAILBOX0 = 0x%x *ppClientEntry2**ppClientEntry2***ppClientEntry2*NVRM: After CMD timeout NV_PSEC_FALCON_MAILBOX1 = 0x%x **NVRM: After CMD timeout NV_PSEC_FALCON_MAILBOX1 = 0x%x *phClientOut**phClientOut*ppClientNext**ppClientNext***ppClientNext*call to ksec2CleanupBootState_IMPL*call to ksec2SafeToSendBootCommands_DISPATCH*pClientNext**pClientNext*NVRM: SEC2 not yet ready to accept GSP Boot command! **NVRM: SEC2 not yet ready to accept GSP Boot command! *(pKernelSec2->pCotPayload == NULL) ? NV_ERR_NO_MEMORY : NV_OK**(pKernelSec2->pCotPayload == NULL) ? NV_ERR_NO_MEMORY : NV_OK**pThreadEntry**pTlsEntry*call to ksec2SetupGspImages_DISPATCH***base*NVRM: GSP Ucode image preparation failed! **NVRM: GSP Ucode image preparation failed! ***pVidHeapControlParams*NVRM: Setup GSP FMC Images! **hObjectDest**NVRM: Setup GSP FMC Images! 
*call to _ksec2GetGspUcodeArchive*semaphoreSurface**semaphoreSurface**callbackHandle*pCallbackHandle**pCallbackHandle***pCallbackHandle**ss*pSemaphoreMap**pSemaphoreMap***pSemaphoreMap*pMaxSubmittedMap**pMaxSubmittedMap***pMaxSubmittedMap***arg1*minMinRefreshRate**minMinRefreshRate**maxMinRefreshRate**pOld**vrrActiveApiHeadMasks**B**crc32*kmsCrcs**kmsCrcs*requestedConfig**requestedConfig*replyConfig**replyConfig*headModeSetConfig**headModeSetConfig**tf*lutParams**lutParams*pLayerConfig**pLayerConfig**pSyncObject**kmsMode**preferredMode*pNumPlanes**pNumPlanes*pLog2GobsPerBlockY**pLog2GobsPerBlockY**pitch***dmaBuf***gem*srcDevice**srcDevice*srcMemory**srcMemory*handleOut**handleOut**numDisplays*displayHandles**displayHandles**notif*guidData**guidData*ppParsedEdid**ppParsedEdid***ppParsedEdid*NVRM: Setup GSP FMC SysMem Offset GPA 0x%llx, SPA 0x%llx ,Size = 0x%x! *pDynamicDpyCreated**pDynamicDpyCreated**NVRM: Setup GSP FMC SysMem Offset GPA 0x%llx, SPA 0x%llx ,Size = 0x%x! **pValidSyncs*bindataGetBufferSize(pGspImageSignature) == pKernelSec2->cotPayloadSignatureSize**bindataGetBufferSize(pGspImageSignature) == pKernelSec2->cotPayloadSignatureSize*NVRM: GSP FMC Signature Size 0x%08x **NVRM: GSP FMC Signature Size 0x%08x *bindataGetBufferSize(pGspImagePublicKey) == pKernelSec2->cotPayloadPublicKeySize**bindataGetBufferSize(pGspImagePublicKey) == pKernelSec2->cotPayloadPublicKeySize*NVRM: GSP FMC PK Size 0x%08x **NVRM: GSP FMC PK Size 0x%08x *call to _ksec2GetGspBootArgs*call to _ksec2GetQueueHeadTail_GB10B**pDpLibDevice**pNVDpLibConnector**pModesetUpdateState**dpCallback*NVRM: Loading Debug GSP FMC image for GSP RM using SEC2. **NVRM: Loading Debug GSP FMC image for GSP RM using SEC2. *NVRM: Loading Prod GSP FMC image for GSP RM using SEC2. **NVRM: Loading Prod GSP FMC image for GSP RM using SEC2. 
**pUpdateState**pModesetUpdate**pColorFormatsInfo*NVRM: About to send data to SEC, ememcOff=0x%x, size=0x%x **pHwState**updateState**crcOut*pCurrentColorSpace**pCurrentColorSpace*pCurrentColorBpc**pCurrentColorBpc*pCurrentColorRange**pCurrentColorRange*pColorRange**pColorRange**pMinIsoBandwidthKBPS**pMinDramFloorKBPS**pSupportedColorFormats**pColor**timings*pNumHeads**pNumHeads**pKmsMode**pViewPortSizeIn**pViewPortOut**pCurrDithering**pReqDithering**pScalerCaps*hTapsOut**hTapsOut*vTapsOut**vTapsOut**pScaling**NVRM: About to send data to SEC, ememcOff=0x%x, size=0x%x **pScanLine**pInBlankingPeriod**pInfoFrameCtrl**pVSInfoFrameCtrl*pPreModesetParams**pPreModesetParams*msaParams**msaParams***pParam2***subDeviceAddress**pEvoSyncpt**pSyncpt*stoppedBase**stoppedBase*src/kernel/gpu/sec2/arch/blackwell/kernel_sec2_gb20b.c**src/kernel/gpu/sec2/arch/blackwell/kernel_sec2_gb20b.c*call to ksec2SendAndReadMessage_IMPL***ppBase*NVRM: Sent following content to FWSEC: **NVRM: Sent following content to FWSEC: *call to _ksec2CheckGspBootStatus*PDB_PROP_KSEC2_BOOT_COMMAND_OK*NVRM: FWSEC failed to process boot command. RM cannot boot. **NVRM: FWSEC failed to process boot command. RM cannot boot. *call to ksec2DumpDebugState_DISPATCH*call to ksec2WaitForSecureBoot_DISPATCH*NVRM: FWSEC timed out processing COT command **NVRM: FWSEC timed out processing COT command *NVRM: FWSEC not ready to process COT command **NVRM: FWSEC not ready to process COT command *NVRM: RM cannot boot with SEC2 missing on silicon. **NVRM: RM cannot boot with SEC2 missing on silicon. *NVRM: Secure boot is disabled due to missing SEC2. **NVRM: Secure boot is disabled due to missing SEC2. 
*call to ksec2GspFmcIsEnforced_DISPATCH**pValidValues**pHwTimings**pValidationParams*scratchReg*NVRM: Devinit Boot Status = 0x%x **NVRM: Devinit Boot Status = 0x%x *NVRM: COT Polling Status = 0x%x **NVRM: COT Polling Status = 0x%x **pInfoFrameState*NVRM: FRTS Completion = 0x%x **NVRM: FRTS Completion = 0x%x *NVRM: FWSEC Completion = 0x%x **NVRM: FWSEC Completion = 0x%x *NVRM: FWSEC Error Code = 0x%x **NVRM: FWSEC Error Code = 0x%x *NVRM: FWSEC Offending PC = 0x%x **NVRM: FWSEC Offending PC = 0x%x *NVRM: FWSEC Offending Address = 0x%x **NVRM: FWSEC Offending Address = 0x%x *NVRM: FWSEC Additional info = 0x%x **NVRM: FWSEC Additional info = 0x%x *NVRM: FWSEC version = 0x%x **NVRM: FWSEC version = 0x%x *NVRM: Devinit version = 0x%x **NVRM: Devinit version = 0x%x *NVRM: Failing Register Addr = 0x%x **NVRM: Failing Register Addr = 0x%x *NVRM: Devinit PC = 0x%x **NVRM: Devinit PC = 0x%x *NVRM: PRI error code = 0x%x **NVRM: PRI error code = 0x%x *pNumHwHeadsUsed**pNumHwHeadsUsed*pDvc**pDvc*pCurrentDitheringDepth**pCurrentDitheringDepth*pCurrentDitheringMode**pCurrentDitheringMode*pCurrentDithering**pCurrentDithering*pDitheringDepth**pDitheringDepth*pDitheringMode**pDitheringMode*pDithering**pDithering*pBrightness**pBrightness**pInfoStr*keyhead**keyhead*keytail**keytail*pStoppedBase**pStoppedBase**apiHeadMaskPerSd*ppSurfaceEvo**ppSurfaceEvo***ppSurfaceEvo**y**reply*pTiledDisplayInfo**pTiledDisplayInfo*pOverrideMode**pOverrideMode*pOverrideViewPortSizeIn**pOverrideViewPortSizeIn*pOverrideViewPortPointIn**pOverrideViewPortPointIn*pModeOut**pModeOut*pCtxDmaHandle**pCtxDmaHandle**hDispCtxDma**pOpenDevice**pOpenDevSurfaceHandles***pBase**pCRC32Notifier*pImageParams**pImageParams*pSurfaceEvoNew**pSurfaceEvoNew**pCursorCompParams*pEvoCursorControl**pEvoCursorControl**dpy*call to ksec2GetGspBootArgs**passiveDpDongleMaxPclkKHz*pBpc**pBpc**pFlipHwState**postSyncpt**pHwSyncObject*NVRM: Loading GSP-RM image using SEC2. **NVRM: Loading GSP-RM image using SEC2. 
**pPossibleUsage**pHsOneHeadAllDisps**pDispEvo0**pDispEvo1**serverPin**clientPin**pHeadCaps*pFactor**pFactor*pTaps**pTaps*pTimingsProtocol**pTimingsProtocol**pGuaranteed*call to _ksec2GetMsgQueueHeadTail_GB20B**pFrameLockPin**pRasterLockPin**pFlipLockPin***pEventDataVoid*call to _ksec2ConfigEmemc_GB20B**pLutSurfEvo*NVRM: About to read data from SEC2, ememcOff=0, size=0x%x **NVRM: About to read data from SEC2, ememcOff=0, size=0x%x *pColorSpace**pColorSpace*pColorBpc**pColorBpc**pNumRasterLockGroups*call to _ksec2UpdateMsgQueueHeadTail_GB20B**pOverscanColor**pHeadState1**pHeadState2**pInfoFrameHeader*pDstType**pDstType*ocsc0MatrixOutput**ocsc0MatrixOutput**pOutputRoundingFix*numCRC32**numCRC32*NVRM: Expected SEC2 command response, but packet is not big enough for payload. Size: 0x%0x **NVRM: Expected SEC2 command response, but packet is not big enough for payload. Size: 0x%0x *pOldAccelerators**pOldAccelerators**idleChannelState**pChan*NVRM: Received SEC2 command response. Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x **NVRM: Received SEC2 command response. Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x *call to ksec2ErrorCode2NvStatusMap_DISPATCH*NVRM: Last command was processed by SEC2 successfully! **NVRM: Last command was processed by SEC2 successfully! *NVRM: SEC2 response reported error. Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x **NVRM: SEC2 response reported error. 
Task ID: 0x%0x Command type: 0x%0x Error code: 0x%0x *call to ksec2ProcessCommandResponse_DISPATCH*call to ksec2ValidateMctpPayloadHeader_DISPATCH**sourceFetchRect**hwFormatOut***pSurfaceDesc**lutSize*disableOcsc0**disableOcsc0**fpNormScale**isLutModeVss**pInput*pLeadingRasterLines**pLeadingRasterLines*pTrailingRasterLines**pTrailingRasterLines**pNullHwState**pViewPortMin**pViewPortMax**pEvoCaps*pStereoPin**pStereoPin**pOutputLut**lutState**pCompParams*confComputeDeriveSecrets_HAL(pConfCompute, MC_ENGINE_IDX_SEC2)*src/kernel/gpu/sec2/arch/hopper/kernel_sec2_gh100.c**confComputeDeriveSecrets_HAL(pConfCompute, MC_ENGINE_IDX_SEC2)**src/kernel/gpu/sec2/arch/hopper/kernel_sec2_gh100.c*ppDesc != NULL*src/kernel/gpu/sec2/arch/turing/kernel_sec2_tu102.c**ppDesc != NULL**src/kernel/gpu/sec2/arch/turing/kernel_sec2_tu102.c*ppImg != NULL**ppImg != NULL*pGenericBlUcodeDesc*pGenericBlUcodeImg*pKernelSec2->pGenericBlUcodeImg == NULL**pKernelSec2->pGenericBlUcodeImg == NULL*call to s_allocateGenericBlUcode*s_allocateGenericBlUcode(pGpu, pKernelSec2, &pKernelSec2->pGenericBlUcodeDesc, &pKernelSec2->pGenericBlUcodeImg)**s_allocateGenericBlUcode(pGpu, pKernelSec2, &pKernelSec2->pGenericBlUcodeDesc, &pKernelSec2->pGenericBlUcodeImg)*pKernelSec2->pGenericBlUcodeDesc != NULL**pKernelSec2->pGenericBlUcodeDesc != NULL*pKernelSec2->pGenericBlUcodeImg != NULL**pKernelSec2->pGenericBlUcodeImg != NULL**pImpWindow**pImpHead*call to ksec2GetBinArchiveBlUcode_DISPATCH*pBinDesc**pBinDesc*pBinDesc != NULL**pBinDesc != NULL*descSizeAligned**pGenericBlUcodeDesc**pSegmentSizes**pLUTEntries*bindataWriteToBuffer(pBinDesc, (NvU8 *) pGenericBlUcodeDesc, descSizeAligned)**bindataWriteToBuffer(pBinDesc, (NvU8 *) pGenericBlUcodeDesc, descSizeAligned)*pBinImg**pBinImg*imgSizeAligned**pCoreChannel**pGenericBlUcodeImg**pVscSdp*bindataWriteToBuffer(pBinImg, pGenericBlUcodeImg, imgSizeAligned)**bindataWriteToBuffer(pBinImg, pGenericBlUcodeImg, 
imgSizeAligned)*pAddressHi**pAddressHi**pAddressLo**pRequestParams**pReplyParams*call to _ksec2ReleaseGspBootImages*call to ksec2CanSendPacket_DISPATCH*src/kernel/gpu/sec2/kernel_sec2.c*NVRM: Timed out waiting for SEC2 command queue to be empty. **src/kernel/gpu/sec2/kernel_sec2.c*pBadFirmware**pBadFirmware**NVRM: Timed out waiting for SEC2 command queue to be empty. *NVRM: servicing nonstall intr for SEC2 engine **NVRM: servicing nonstall intr for SEC2 engine *expr**expr*pDispSfHandle**pDispSfHandle*hasSampleSize**hasSampleSize*hasMaxBitRate**hasMaxBitRate*pKernelFalcon->physEngDesc != ENG_INVALID**pKernelFalcon->physEngDesc != ENG_INVALID**pEld**pMaxFreqSupported*NVRM: Registering 0x%x/0x%x to handle SEC2 nonstall intr **NVRM: Registering 0x%x/0x%x to handle SEC2 nonstall intr *call to ksec2PollForResponse_IMPL*call to _ksec2ReadMessage*call to ksec2IsResponseAvailable_DISPATCH*NVRM: Tried to read SEC2 response but none is available **NVRM: Tried to read SEC2 response but none is available *call to ksec2GetMaxRecvPacketSize_DISPATCH*call to ksec2ReadPacket_DISPATCH*ksec2ReadPacket_HAL(pGpu, pKernelSec2, pPacketBuffer, recvBufferSize, &packetSize)**ksec2ReadPacket_HAL(pGpu, pKernelSec2, pPacketBuffer, recvBufferSize, &packetSize)*call to ksec2GetPacketInfo_DISPATCH*call to ksec2ProcessNvdmMessage_DISPATCH*call to ksec2GetMaxSendPacketSize_DISPATCH**pSrcPoint**pDstPoint*pNClips**pNClips*ppClipList**ppClipList***ppClipList*pOp**pOp*pVertexCount**pVertexCount**pChannelConfig**color**joinSwapGroupWorkArea**pImgParams**pChannelSyncObjects**pParamsNotif**pParamsNIso**pNIsoState**pHsChannelOld**pHsChannelConfigNew**pDevEvoHsConfig**pReplyHead**pFlipParams**pHSParams*ppSurface**ppSurface***ppSurface***pSurfaceEvoDst***pSurfaceEvoSrc**pFlipHeadOriginal**pViewPortPointIn**viewPortOut**viewPortIn**pRectIn**mat*pX**pX*pY**pY*pQ**pQ*pSrcEye**pSrcEye*pPixelShift**pPixelShift*call to ksec2NvdmToSeid_DISPATCH*call to ksec2CreateMctpHeader_DISPATCH*call to 
ksec2CreateNvdmHeader_DISPATCH*call to ksec2SendPacket_DISPATCH*call to _ksec2CheckResponseTimeout*NVRM: SEC2 command timed out **NVRM: SEC2 command timed out *call to _ksec2SetResponseTimeout*call to _ksec2WaitForResponse*call to ksec2ReleaseProxyImage_IMPL**pHsNotifiers*RMDevinitBySecureBoot**RMDevinitBySecureBoot*NVRM: RM to boot GSP due to regkey override. **pSemaSurface**NVRM: RM to boot GSP due to regkey override. *RmDisableSec2*pitchInBlocks**pitchInBlocks**sizeInBytes**RmDisableSec2*NVRM: SEC2 disabled due to regkey override. **NVRM: SEC2 disabled due to regkey override. *PDB_PROP_KSEC2_DISABLE_GSPFMC*call to ksec2ConfigureFalcon_DISPATCH*call to _ksec2InitRegistryOverrides*NVRM: KernelSec2 is disabled **NVRM: KernelSec2 is disabled *src/kernel/gpu/sec2/sec2_context.c**src/kernel/gpu/sec2/sec2_context.c**pFlipLutHwState**pLUTSurfaceParams**pLUTCaps**pGuaranteedUsage**pS*call to spdmGetBinArchiveL1Certificate_IMPL**pMaxS*call to spdmGetBinArchiveIndividualL2Certificate_DISPATCH*call to spdmGetBinArchiveIndividualL3Certificate_DISPATCH*call to spdmConvertCertificateToDer_DISPATCH**pVSCtrl**pModeUsage**failureReasonFormat*pLibspdmContext**pLibspdmContext**pCurrentModeIndex**hdmi3D**hdmi3DAvailable**pFlipRequest**pFlipReply**pProposedDisp**pProposedHead**pProposedSd***pParam1*gpuCertsSize*pGpuCerts**pGpuCerts**pFlip*call to libspdm_get_certificate*__spdmStatus**pCurrentModesetOpenDev*src/kernel/gpu/spdm/arch/hopper/spdm_certs_gh100.c*NVRM: SPDM failed with status 0x%0x **src/kernel/gpu/spdm/arch/hopper/spdm_certs_gh100.c**NVRM: SPDM failed with status 0x%0x *call to spdmSetupResponderCertCtx_IMPL*responderCertCtx**responderCertCtx***param1**pkt*NVRM: spdmRetrieveResponderCert() failed !!! **NVRM: spdmRetrieveResponderCert() failed !!! *call to spdmBuildCertChainDer_IMPL*NVRM: spdmBuildCertChainDer() failed !!! **NVRM: spdmBuildCertChainDer() failed !!! *call to spdmBuildCertChainPem_IMPL**pEvoSyncptOut*NVRM: spdmBuildCertChainPem() failed !!! 
**NVRM: spdmBuildCertChainPem() failed !!! **dpcdData*NVRM: SPDM failure most likely due to missing crypto implementation. **NVRM: SPDM failure most likely due to missing crypto implementation. **pAllocConnectorDispData*NVRM: Are the LKCA modules properly loaded? **NVRM: Are the LKCA modules properly loaded? *pemCertSize*pemCert**pemCert*derCertSize*call to libspdm_pem_to_der*NVRM: Decoded cert size is different from calculated! **NVRM: Decoded cert size is different from calculated! *src/kernel/gpu/spdm/arch/hopper/spdm_gh100.c**src/kernel/gpu/spdm/arch/hopper/spdm_gh100.c*pHeartbeatEvent*ccHeartbeatCtrl*NVRM: SPDM: Send/receive error in CC_HEARTBEAT_CTRL command! Status = 0x%0x **NVRM: SPDM: Send/receive error in CC_HEARTBEAT_CTRL command! Status = 0x%0x **enableApiHeadMasks*pEdidTimeoutMicroseconds**pEdidTimeoutMicroseconds***pParamsVoid**pModeset**pRevokingOpenDev**pRevokingPermissions***pExtraUserStateVoid**pCommonLutParams*ppRampsKernel**ppRampsKernel***ppRampsKernel**infoStringLenWritten**pExtra*ppInfoString**ppInfoString**pOpenDevExclude*NVRM: SPDM: RPC returned failure in CC_HEARTBEAT_CTRL command! status = 0x%0x **NVRM: SPDM: RPC returned failure in CC_HEARTBEAT_CTRL command! status = 0x%0x **pHeartbeatEvent*call to libspdm_key_update*NVRM: Key Update (single direction(0x%x)) failed, ret(0x%x), triggerId = (0x%x). **NVRM: Key Update (single direction(0x%x)) failed, ret(0x%x), triggerId = (0x%x). *call to nvspdm_check_and_clear_libspdm_assert*NVRM: SPDM: spdmCheckAndExecuteKeyUpdate() assert !! **NVRM: SPDM: spdmCheckAndExecuteKeyUpdate() assert !! *NVRM: SPDM: Key update successfully, triggerId = (0x%x)! **NVRM: SPDM: Key update successfully, triggerId = (0x%x)! *sessionMsgCount*NVRM: SPDM: Send/receive error in INIT_RM_DATA command **NVRM: SPDM: Send/receive error in INIT_RM_DATA command *NVRM: SPDM: RPC returned failure in INIT_RM_DATA command! status = 0x%0x **NVRM: SPDM: RPC returned failure in INIT_RM_DATA command! 
kmcGetMcBar0MapInfo_DISPATCH*pGpu->gpuUuid.isInitialized**pGpu->gpuUuid.isInitialized*call to gpudbGetGpuComputePolicyConfigs*numConfigs*configList**configList*timeslice*policyId*call to gpuIsComputePolicyTimesliceSupported*NVRM: Setting the timeslice policy is not supported for gpu with pci id 0x%llx **NVRM: Setting the timeslice policy is not supported for gpu with pci id 0x%llx *NVRM: Unsupported timeslice value %u specified for gpu with pci id 0x%llx **NVRM: Unsupported timeslice value %u specified for gpu with pci id 0x%llx *NVRM: Unsupported compute policy %u specified for gpu id 0x%llx **NVRM: Unsupported compute policy %u specified for gpu id 0x%llx *call to gpudbSetGpuComputePolicyConfig*(rmapiLockIsOwner() && rmDeviceGpuLockIsOwner(pGpu->gpuInstance)) || rmapiInRtd3PmPath()**(rmapiLockIsOwner() && rmDeviceGpuLockIsOwner(pGpu->gpuInstance)) || rmapiInRtd3PmPath()*maxSupportedPageSize*hwEngineID**hwEngineID*runlistPriBase**runlistPriBase**runlistId*(kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, pRef) == NV_OK)**(kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, pRef) == NV_OK)*pidInfoList**pidInfoList*pPidInfo**pPidInfo*pPidInfoData**pPidInfoData*call to gpuFindClientInfoWithPidIterator_IMPL*call to gpuGetProcWithObject_IMPL*rules*pRmApi->Control(pRmApi, pRmCtrlParams->hClient, pRmCtrlParams->hObject, pRmCtrlParams->cmd, pRmCtrlParams->pParams, pRmCtrlParams->paramsSize)**pRmApi->Control(pRmApi, pRmCtrlParams->hClient, pRmCtrlParams->hObject, pRmCtrlParams->cmd, pRmCtrlParams->pParams, pRmCtrlParams->paramsSize)*NVRM: Could not store compute mode rule in the registry, current setting may not persist if all clients disconnect! **NVRM: Could not store compute mode rule in the registry, current setting may not persist if all clients disconnect! 
*call to gpuGetLitterValues_KERNEL*maxGpcCount*numPesPerGpc**numPesPerGpc*numPesInGpc*activePesMask*maxTpcPerGpcCount*tpcToPesMap**tpcToPesMap*zcullMaskParams*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_ZCULL_MASK, &zcullMaskParams, sizeof(zcullMaskParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_ZCULL_MASK, &zcullMaskParams, sizeof(zcullMaskParams))*tpcMaskParams*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_TPC_MASK, &tpcMaskParams, sizeof(tpcMaskParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_TPC_MASK, &tpcMaskParams, sizeof(tpcMaskParams))*pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_GPC_MASK, &gpcMaskParams, sizeof(gpcMaskParams))**pRmApi->Control(pRmApi, hClient, hSubdevice, NV2080_CTRL_CMD_GR_GET_GPC_MASK, &gpcMaskParams, sizeof(gpcMaskParams))*gpcMaskParams*NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_INFO failed **NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_INFO failed *NVRM: Invalid engine ID 0x%x (0x%x) **NVRM: Invalid engine ID 0x%x (0x%x) *NVRM: Invalid class ID 0x%x **NVRM: Invalid class ID 0x%x *NVRM: Class 0x%x is not considered a partnership class. **NVRM: Class 0x%x is not considered a partnership class. *NVRM: partnerList space is too small, time to increase. This is fatal **NVRM: partnerList space is too small, time to increase. This is fatal *call to kmigmgrFilterEnginePartnerList_IMPL*NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_CLASSLIST Invalid engine ID 0x%x **NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_CLASSLIST Invalid engine ID 0x%x *NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_CLASSLIST Class List query failed **NVRM: NV2080_CTRL_CMD_GPU_GET_ENGINE_CLASSLIST Class List query failed *NVRM: The engine database's size (0x%x) exceeds NV2080_GPU_MAX_ENGINES_LIST_SIZE (0x%x)! **NVRM: The engine database's size (0x%x) exceeds NV2080_GPU_MAX_ENGINES_LIST_SIZE (0x%x)! 
*call to kmigmgrFilterEngineList_IMPL*rmEngineTypeList**rmEngineTypeList*call to gpuGetNv2080EngineTypeList_IMPL*NVRM: The engine count (0x%x) exceeds NV2080_GPU_MAX_ENGINES_LIST_SIZE (0x%x)! **NVRM: The engine count (0x%x) exceeds NV2080_GPU_MAX_ENGINES_LIST_SIZE (0x%x)! *NVRM: ================NV2080_ENGINE List================== **NVRM: ================NV2080_ENGINE List================== *NVRM: SwizzId = %d CiId = %d GrId = %d **NVRM: SwizzId = %d CiId = %d GrId = %d *NVRM: engine[%d] = 0x%x **NVRM: engine[%d] = 0x%x *NVRM: ============================================= **NVRM: ============================================= *call to subdeviceCtrlCmdGpuGetEnginesV2_IMPL*pKernelEngineList*getEngineParamsV2*pParams->engineCount >= getEngineParamsV2.engineCount**pParams->engineCount >= getEngineParamsV2.engineCount**pKernelEngineList*call to gpuGetSubdeviceMask**sessionInfoTbl*call to nvfbcIsSessionDataStale*NVRM: more entries in pGpu->nvencSessionList than NV2080_CTRL_GPU_NVENC_SESSION_INFO_MAX_COPYOUT_ENTRIES **NVRM: more entries in pGpu->nvencSessionList than NV2080_CTRL_GPU_NVENC_SESSION_INFO_MAX_COPYOUT_ENTRIES *sessionInfoCount*sessionCount*call to _subdeviceCtrlCmdGpuGetNvencSwSessionInfo*sessionInfoTblEntry == NV2080_CTRL_GPU_NVENC_SESSION_INFO_MAX_COPYOUT_ENTRIES**sessionInfoTblEntry == NV2080_CTRL_GPU_NVENC_SESSION_INFO_MAX_COPYOUT_ENTRIES*encoderSessionCount*bPersistentStandbyBuffer*call to getUpstreamBridgeIds*physicalBridgeIds**physicalBridgeIds**bridgeList*call to getPlxFirmwareAndBusInfo*pBridgeVersionParams*call to getBridgeData*hPhysicalBridges**hPhysicalBridges*call to getPlxFirmwareVersion*pGpuHWBCList->pHWBC != NULL**pGpuHWBCList->pHWBC != NULL*bridgeIndex < NV2080_CTRL_MAX_PHYSICAL_BRIDGE**bridgeIndex < NV2080_CTRL_MAX_PHYSICAL_BRIDGE*call to getBridgeCountAndId**pPlxCount < NV2080_CTRL_MAX_PHYSICAL_BRIDGE***pPlxCount < 
NV2080_CTRL_MAX_PHYSICAL_BRIDGE*ctrlDev**pBridgeObject*IS_VGPU_GSP_PLUGIN_OFFLOAD_ENABLED(pGpu)**IS_VGPU_GSP_PLUGIN_OFFLOAD_ENABLED(pGpu)*call to getGpuInfos*NVRM: invalid groupId **NVRM: invalid groupId *ecidInfo**ecidInfo*call to gpuGetChipMinExtRev**bCanAccessHw*NVRM: Unable to retrieve SM version! **NVRM: Unable to retrieve SM version! *call to osGpuSupportsAts*call to kmemsysIsNonPasidAtsSupported_DISPATCH*call to osDmabufIsSupported*bPhysicalForward*src/kernel/gpu/subdevice/subdevice_ctrl_gpu_regops.c*NVRM: Invalid regOpCount: %ud **src/kernel/gpu/subdevice/subdevice_ctrl_gpu_regops.c**NVRM: Invalid regOpCount: %ud *call to subdeviceCtrlCmdGpuExecRegOps_cmn*smIds**smIds*NVRM: client 0x%x channel 0x%x **NVRM: client 0x%x channel 0x%x *NVRM: regOpCount is 0 **NVRM: regOpCount is 0 *NVRM: regOps is NULL **NVRM: regOps is NULL *NVRM: hClientTarget and hChannelTarget must both be set or both be 0 **NVRM: hClientTarget and hChannelTarget must both be set or both be 0 *bUseMigratableOps*call to initRegStatus*call to gpuValidateRegOffset_IMPL*src/kernel/gpu/subdevice/subdevice_ctrl_gpu_smc.c**src/kernel/gpu/subdevice/subdevice_ctrl_gpu_smc.c*pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_GRMGR_GET_SKYLINE_INFO, &internalParams, sizeof(internalParams))**pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice, NV2080_CTRL_CMD_INTERNAL_STATIC_GRMGR_GET_SKYLINE_INFO, &internalParams, sizeof(internalParams))*internalParams.validEntries <= NV_ARRAY_ELEMENTS(pParams->skylineTable)**internalParams.validEntries <= NV_ARRAY_ELEMENTS(pParams->skylineTable)**call to __next_thread***call to ioremap_np*maxInstances*... < ...*... <= ...*singletonVgpcMask*... == ...*computeSizeFlag*vgpcId < NV_ARRAY_ELEMENTS(pParams->skylineTable[i].skylineVgpcSize)**vgpcId < NV_ARRAY_ELEMENTS(pParams->skylineTable[i].skylineVgpcSize)*skylineVgpcSize**skylineVgpcSize*vgpcId*... 
>= ...*call to gpuGetChipSubRev_DISPATCH*chipSubRev*call to gpuGetIsCmpSku_DISPATCH*isCmpSku*pciSubDeviceId*... > ...*nocatJournalRecord*call to rcdbSetNocatTdrReason*nocatJournalData*tagData*RCLOG**RCLOG*call to rcdbReportNextNocatJournalEntry*journalRecords**journalRecords*call to createTimestampFromTimer*nocatRecordCount*nocatOutstandingRecordCount*activityCounters**activityCounters*timeSinceBootNsec*timerFreq*timeValMsec*timeSinceBootMsec*currTimeMsec*timestampMs*src/kernel/gpu/subdevice/subdevice_ctrl_timer_kernel.c**src/kernel/gpu/subdevice/subdevice_ctrl_timer_kernel.c*cpuTime*NVRM: Could not get GPU time. status=0x%08x **NVRM: Could not get GPU time. status=0x%08x *call to tmrGetPtimerOffsetNs_DISPATCH*call to tmrReadTimeHiReg_DISPATCH*gpuTimeHiNew*gpuTimeHiOld**cpuTime*gpuTimeLo**gpuTimeLo*call to tmrReadTimeLoReg_DISPATCH*closestPairBeginIndex*gpuTime*NVRM: GPUTime = %llx CPUTime = %llx **NVRM: GPUTime = %llx CPUTime = %llx *call to tmrGetGpuAndCpuTimestampPair_DISPATCH*NVRM: Could not get CPU GPU time. status=0x%08x **NVRM: Could not get CPU GPU time. 
status=0x%08x *rmDeviceGpuLockIsOwner(GPU_RES_GET_GPU(pSubdevice)->gpuInstance)**rmDeviceGpuLockIsOwner(GPU_RES_GET_GPU(pSubdevice)->gpuInstance)*call to timerSchedule*NVRM: gpuControlTimerCallback: the timer is already scheduled for this subdevice **NVRM: gpuControlTimerCallback: the timer is already scheduled for this subdevice *NVRM: gpuControlTimerCallback: timer event is missing **NVRM: gpuControlTimerCallback: timer event is missing *NVRM: gpuControlTimer: cmd 0x%x: callback function is missing **NVRM: gpuControlTimer: cmd 0x%x: callback function is missing *tmrEventCreate(pTmr, &pSubdevice->pTimerEvent, gpuControlTimerCallback, pSubdevice, TMR_FLAGS_NONE)**tmrEventCreate(pTmr, &pSubdevice->pTimerEvent, gpuControlTimerCallback, pSubdevice, TMR_FLAGS_NONE)*call to tmrEventScheduleAbs_IMPL*NVRM: callback is called but the timer is not scheduled **NVRM: callback is called but the timer is not scheduled *NVRM: timer event is missing **NVRM: timer event is missing *NVRM: timer callback pointer is missing **NVRM: timer callback pointer is missing *src/kernel/gpu/subdevice/subdevice_ctrl_vgpu.c**src/kernel/gpu/subdevice/subdevice_ctrl_vgpu.c*pBiosInfos*!bMigEnabled**!bMigEnabled*eccStatus*units**units*sbe*dbe*vbiosPostTime*scrubberStatus*eccState*RMGuestECCState**RMGuestECCState*currentConfiguration*defaultConfiguration*src/kernel/gpu/subdevice/subdevice_diag.c*NVRM: gpuControlSubDevice: cmd 0x%x **src/kernel/gpu/subdevice/subdevice_diag.c**NVRM: gpuControlSubDevice: cmd 0x%x *src/kernel/gpu/timed_semaphore.c*NVRM: Semaphore fill failed, error 0x%x **src/kernel/gpu/timed_semaphore.c**NVRM: Semaphore fill failed, error 0x%x *overallStatus*NVRM: Notifier fill failed, error 0x%x **NVRM: Notifier fill failed, error 0x%x *pTimedSemEntry*pTimedSemEntryNext**pTimedSemEntryNext*call to _9074TimedSemReleaseNow*call to _9074TimedSemRelease**pTimedSemEntry*NVRM: Required methods were not written **NVRM: Required methods were not written *call to _9074TimedSemRequest*NVRM: 
WAIT_TIMESTAMP_HI not set **NVRM: WAIT_TIMESTAMP_HI not set *WaitTimestampLo*WaitTimestamp*WaitTimestampHi*NVRM: SEMAPHORE_HI not set **NVRM: SEMAPHORE_HI not set *NVRM: Mis-aligned address **NVRM: Mis-aligned address *SemaphoreLo*SemaphoreGPUVA*SemaphoreHi*NVRM: NOTIFIER_HI not set **NVRM: NOTIFIER_HI not set *NotifierLo*NotifierGPUVA*NotifierHi*call to _class9074TimerCallback*NVRM: Timed sem release failed, error 0x%x **NVRM: Timed sem release failed, error 0x%x *pTimedSemEntry != NULL**pTimedSemEntry != NULL*NotifyAction*NVRM: Event notify failed, error 0x%x **NVRM: Event notify failed, error 0x%x *src/kernel/gpu/timer/arch/ampere/timer_ga100.c**src/kernel/gpu/timer/arch/ampere/timer_ga100.c*call to tmrCallExpiredCallbacks_IMPL*call to tmrServiceSwrlCallbacks*call to tmrGetTmrBaseAddr_DISPATCH*constructor init of field m_forcedDscParams*call to gpuGetOneDeviceEntry_IMPL*src/kernel/gpu/timer/arch/blackwell/timer_gb100.c**src/kernel/gpu/timer/arch/blackwell/timer_gb100.c*osTimeNs*gpuTimerHi2*gpuTimerHi*gpuTimerLo*gpuTimerNs*gpuTimerNs < osTimeNs*src/kernel/gpu/timer/arch/blackwell/timer_gb10b.c**gpuTimerNs < osTimeNs**src/kernel/gpu/timer/arch/blackwell/timer_gb10b.c*sysTimerOffsetNs*src/kernel/gpu/timer/arch/hopper/timer_gh100.c*NVRM: NVRM-RC: Consistently Bad TimeLo value %x **src/kernel/gpu/timer/arch/hopper/timer_gh100.c**NVRM: NVRM-RC: Consistently Bad TimeLo value %x *TimeHi2*tmrId < NV_VIRTUAL_FUNCTION_PRIV_TIMER__SIZE_1**tmrId < NV_VIRTUAL_FUNCTION_PRIV_TIMER__SIZE_1*NVRM: osGetSystemTime returns 0x%x seconds, 0x%x useconds **NVRM: osGetSystemTime returns 0x%x seconds, 0x%x useconds *secTimerHi2*secTimerHi*secTimerLo*secTimerNs*secTimerNs < osTimeNs**secTimerNs < osTimeNs*call to kfspRequiresBug3957833WAR_DISPATCH*src/kernel/gpu/timer/arch/maxwell/timer_gm107.c**src/kernel/gpu/timer/arch/maxwell/timer_gm107.c*call to tmrServiceSwrlCallbacksPmcTree*call to tmrGetCallbackInterruptPending_IMPL*call to tmrResetCallbackInterrupt_IMPL*call to 
tmrGetTimeEx_DISPATCH*call to tmrGetNsecShiftMask_DISPATCH*NVRM: NVRM-RC: Bad TimeLo value %x, Let's see if it happens again. **NVRM: NVRM-RC: Bad TimeLo value %x, Let's see if it happens again. *grTickFreq*call to tmrGetGpuPtimerOffset_DISPATCH*call to portAtomicTimerBarrier*src/kernel/gpu/timer/arch/turing/timer_tu102.c**src/kernel/gpu/timer/arch/turing/timer_tu102.c*call to tmrServiceInterrupt_GM107*ptimerOffsetLo*ptimerOffsetHi*call to tmrGetGpuPtimerOffset_GV100*call to tmrGetGpuPtimerOffset_GM107*src/kernel/gpu/timer/arch/volta/timer_gv100.c**src/kernel/gpu/timer/arch/volta/timer_gv100.c*NVRM: ERROR: Write to PTIMER attempted even though Level 0 PLM is disabled. **NVRM: ERROR: Write to PTIMER attempted even though Level 0 PLM is disabled. *src/kernel/gpu/timer/timer.c**src/kernel/gpu/timer/timer.c*call to tmrClearSwrlCallbacksSemaphore*invalid EngineIdx**invalid EngineIdx*pRecords[MC_ENGINE_IDX_TMR].pInterruptService == NULL**pRecords[MC_ENGINE_IDX_TMR].pInterruptService == NULL*pRecords[MC_ENGINE_IDX_TMR_SWRL].pInterruptService == NULL**pRecords[MC_ENGINE_IDX_TMR_SWRL].pInterruptService == NULL*call to tmrIsOSTimer*call to _tmrScanCallbackOSTimer*call to tmrEventServiceOSTimerCallback_OSTIMER*pWrapper**pWrapper*pObj_Inner*pCallbackData_Outer**pCallbackData_Outer**pGrTickFreqRefcnt*call to tmrSetAlarmIntrDisable_56cd7a*call to tmrSetCountdownIntrDisable_DISPATCH*pScan*currentSysTime*call to tmrEventCancelOSTimer_OSTIMER**pScan*call to _tmrStateLoadCallbacks*call to tmrEventsExist*call to _tmrScanCallback*call to _tmrGetNextAlarmTime*call to _tmrScheduleCallbackInterrupt**pRmActiveEventList**pRmActiveOSTimerEventList*rmCallbackTable_OBSOLETE**rmCallbackTable_OBSOLETE*pRmCallbackFreeList_OBSOLETE**pRmCallbackFreeList_OBSOLETE*bLegacy*days*msecs*call to tmrSetCountdownIntrEnable_DISPATCH*call to tmrSetAlarmIntrEnable_56cd7a*startTimeNs*call to tmrEventScheduleRelOSTimer_OSTIMER*call to _tmrPullCallbackFromHead*bProccessedCallback*Attempting to execute callback 
with NULL procedure.**Attempting to execute callback with NULL procedure.*Attempting to execute callback with NULL timer event.**Attempting to execute callback with NULL timer event.**tmrScan*tmrNext**tmrNext*tmrCurrent**tmrCurrent*call to tmrCallbackOnList_IMPL*call to _tmrGetNextFreeCallback**tmrInsert*pTimeProc_OBSOLETE*NVRM: Proc %p Object %p already on tmrList **NVRM: Proc %p Object %p already on tmrList *Attempting to schedule callback with NULL procedure. Please update Bug 372159 with appropriate information.**Attempting to schedule callback with NULL procedure. Please update Bug 372159 with appropriate information.*call to _tmrInsertCallback*!pEvent->bInUse**!pEvent->bInUse*bAddedAsHead*nextAlarmTime*timens*call to _tmrInsertCallbackInList*!"Invalid call to insert, already in use"**!"Invalid call to insert, already in use"*pEvent->bLegacy**pEvent->bLegacy*tmrList**tmrList*onList*pEvent->bInUse**pEvent->bInUse*RelTimeNs*call to tmrScheduleCallbackRel_IMPL*portSafeAddU64(currentTime, RelTime, &AbsTime)**portSafeAddU64(currentTime, RelTime, &AbsTime)*call to tmrScheduleCallbackAbs_IMPL*tmrEventScheduleRelOSTimer_HAL(pTmr, pEvent, RelTime)**tmrEventScheduleRelOSTimer_HAL(pTmr, pEvent, RelTime)*pEventPvt*call to tmrGetCurrentTimeEx_IMPL*NVRM: failed to notify event in callback, status: 0x%08x **NVRM: failed to notify event in callback, status: 0x%08x *portSafeAddU64(pEvent->timens, pEvent->startTimeNs, &nextAlarmTime)**portSafeAddU64(pEvent->timens, pEvent->startTimeNs, &nextAlarmTime)*!pEvent->bLegacy**!pEvent->bLegacy*call to tmrEventDestroyOSTimer_OSTIMER*pChaser*NVRM: Failed in cancel of OS timer callback **NVRM: Failed in cancel of OS timer callback **pChaser*call to tmrGetCountdownPending_3dd2c9*call to tmrGetAlarmPending_3dd2c9*call to tmrSetCountdownIntrReset_DISPATCH*call to tmrSetAlarmIntrReset_56cd7a*countdownTime*call to tmrSetCountdown_DISPATCH*call to tmrSetAlarm_56cd7a*NVRM: Failed to allocate timer event **NVRM: Failed to allocate timer event 
*call to tmrEventCreateOSTimer_OSTIMER*NVRM: Failed to create OS timer **NVRM: Failed to create OS timer *pTmrSwrlLock**pTmrSwrlLock*call to osDestroy1HzCallbacks*call to tmrGrTickFreqChange_DISPATCH*call to _tmrGrTimeStampFreqRefcntInit*NVRM: Alloc spinlock failed **NVRM: Alloc spinlock failed *call to tmrInitCallbacks_IMPL*call to osInit1HzCallbacks*retryTimes*call to osDelayNs*pOSTmrCBdata**pOSTmrCBdata*call to osDestroyNanoTimer*src/kernel/gpu/timer/timer_ostimer.c*NVRM: No Timer event callback found, invalid timer SW state **src/kernel/gpu/timer/timer_ostimer.c**NVRM: No Timer event callback found, invalid timer SW state *call to osCancelNanoTimer*NVRM: ERROR No Timer event callback found, invalid timer SW state **NVRM: ERROR No Timer event callback found, invalid timer SW state *NVRM: OS Timer not created **NVRM: OS Timer not created *call to osStartNanoTimer*NVRM: OS Start timer FAILED! **NVRM: OS Start timer FAILED! *call to osCreateNanoTimer*lclMsk***pOSTmrCBdata*NVRM: OS create timer failed **NVRM: OS create timer failed *call to gpuIsReplayableTraceEnabled*src/kernel/gpu/timer/timer_ptimer.c*NVRM: Entered tmrDelay - %d **src/kernel/gpu/timer/timer_ptimer.c**NVRM: Entered tmrDelay - %d *NVRM: Too long delay w/o yield, use osDelay instead. **NVRM: Too long delay w/o yield, use osDelay instead. *call to tmrGetTimeLo_DISPATCH*NVRM: PTIMER may be stuck. Already at %d iterations for a delay of %d nsec **NVRM: PTIMER may be stuck. 
Already at %d iterations for a delay of %d nsec *numActiveHeads**pCurDisp**pDispEvoTmp*NVRM: Exiting tmrDelay **NVRM: Exiting tmrDelay *src/kernel/gpu/uvm/access_cntr_buffer.c**src/kernel/gpu/uvm/access_cntr_buffer.c*pAccessCounterBuffers*pUvm->pAccessCounterBuffers[pAccessCounterBuffer->accessCounterIndex].pAccessCounterBuffer == pAccessCounterBuffer**pUvm->pAccessCounterBuffers[pAccessCounterBuffer->accessCounterIndex].pAccessCounterBuffer == pAccessCounterBuffer*call to uvmTerminateAccessCntrBuffer_IMPL*uvmTerminateAccessCntrBuffer(pGpu, pUvm, pAccessCounterBuffer)**uvmTerminateAccessCntrBuffer(pGpu, pUvm, pAccessCounterBuffer)*pUvm != NULL**pUvm != NULL**pFrameLockEvoTmp**pDeferredRequestFifoTmp*pAccessCounterBuffer->accessCounterIndex < pUvm->accessCounterBufferCount**pAccessCounterBuffer->accessCounterIndex < pUvm->accessCounterBufferCount*pUvm->pAccessCounterBuffers[pAccessCounterBuffer->accessCounterIndex].pAccessCounterBuffer == NULL**pUvm->pAccessCounterBuffers[pAccessCounterBuffer->accessCounterIndex].pAccessCounterBuffer == NULL*call to uvmInitializeAccessCntrBuffer_IMPL*uvmInitializeAccessCntrBuffer(pGpu, pUvm, pAccessCounterBuffer)**uvmInitializeAccessCntrBuffer(pGpu, pUvm, pAccessCounterBuffer)*call to uvmAccessCntrBufferUnregister_DISPATCH*call to uvmAccessCntrBufferRegister_DISPATCH*bufferPteArray**bufferPteArray*call to uvmAccessCntrSetGranularity_DISPATCH*call to uvmAccessCntrSetCounterLimit_DISPATCH*call to uvmAccessCntrSetThreshold_DISPATCH*call to uvmResetAccessCntrBuffer_92bfc3*resetFlag*call to uvmReadAccessCntrBufferFullPtr_DISPATCH*call to uvmGetAccessCntrRegisterMappings_DISPATCH*call to uvmEnableAccessCntrIntr_DISPATCH*src/kernel/gpu/uvm/access_cntr_buffer_ctrl.c**src/kernel/gpu/uvm/access_cntr_buffer_ctrl.c*call to kgmmuAccessCntrChangeIntrOwnership_IMPL*accessCntrBufferSize*call to uvmReadAccessCntrBufferPutPtr_DISPATCH*call to uvmWriteAccessCntrBufferGetPtr_DISPATCH*call to uvmReadAccessCntrBufferGetPtr_DISPATCH*accessCounterIndex < 
NV_ARRAY_ELEMENTS(offsets)*src/kernel/gpu/uvm/arch/blackwell/uvm_gb100.c**accessCounterIndex < NV_ARRAY_ELEMENTS(offsets)**src/kernel/gpu/uvm/arch/blackwell/uvm_gb100.c**pVBlankCallbackTmp*secondaryMergeHeadSection*gpuIdIndex**pDpyEvoTmp**pConnectorEvoNext*crtIndex*dfpIndex**pDevEvo_tmp**pEntryTmp*uvmReadAccessCntrBufferGetPtr(pGpu, pUvm, pAccessCounterBuffer->accessCounterIndex, &get)*src/kernel/gpu/uvm/arch/turing/uvm_tu102.c**uvmReadAccessCntrBufferGetPtr(pGpu, pUvm, pAccessCounterBuffer->accessCounterIndex, &get)**src/kernel/gpu/uvm/arch/turing/uvm_tu102.c*uvmReadAccessCntrBufferPutPtr(pGpu, pUvm, pAccessCounterBuffer->accessCounterIndex, &put)**uvmReadAccessCntrBufferPutPtr(pGpu, pUvm, pAccessCounterBuffer->accessCounterIndex, &put)*notifyEvents(pGpu, *ppEventNotification, NVC365_NOTIFIERS_ACCESS_COUNTER, 0, 0, NV_OK, NV_OS_WRITE_THEN_AWAKEN)**notifyEvents(pGpu, *ppEventNotification, NVC365_NOTIFIERS_ACCESS_COUNTER, 0, 0, NV_OK, NV_OS_WRITE_THEN_AWAKEN)*call to uvmGetRegOffsetAccessCntrBufferInfo_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferLo_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferHi_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferConfig_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferSize_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferGet_DISPATCH*call to uvmGetRegOffsetAccessCntrBufferPut_DISPATCH*ber_len*bin_value**p_write*sg_len*sg_off**submap*__ms*call to uvmDisableAccessCntrIntr_DISPATCH*call to uvmProgramAccessCntrBufferEnabled_DISPATCH*accessCounterIndex == 0**accessCounterIndex == 0*call to uvmGetAccessCounterBufferSize_DISPATCH*accessCntrBufferAperture*accessCntrBufferAttr*UVM access counter**UVM access 
counter*pUvmAccessCntrBufferDesc*rm_access_counter_buffer_surface**rm_access_counter_buffer_surface**pUvmAccessCntrAllocMemDesc**connector_state**conn_state**nv_next_deferred_flip**new_crtc_state**tmp_timer*current_long_idx**managed_range_next*pUvmAccessCntrMemDesc*bar2UvmAccessCntrBufferAddr*src/kernel/gpu/uvm/arch/volta/uvm_gv100.c*NVRM: Forcing bIsErrorRecovery = NV_TRUE because of WRITE_NACK. **src/kernel/gpu/uvm/arch/volta/uvm_gv100.c**NVRM: Forcing bIsErrorRecovery = NV_TRUE because of WRITE_NACK. *call to uvmIsAccessCntrBufferPushed_DISPATCH*NVRM: Timeout waiting for HW to write notification buffer. **NVRM: Timeout waiting for HW to write notification buffer. *pageSizeModBufSize == 0**pageSizeModBufSize == 0*gpu_addresses_length*NVRM: Failed to map access counter buffer while disabling it: %d **NVRM: Failed to map access counter buffer while disabling it: %d *pAccessCntrBufferPage**pAccessCntrBufferPage*inPageGetPtr*getPtr*NVRM: Failed disabling notification buffer. **NVRM: Failed disabling notification buffer. *call to uvmProgramWriteAccessCntrBufferAddress_DISPATCH*src/kernel/gpu/uvm/uvm.c**src/kernel/gpu/uvm/uvm.c*pParams->engineIdx == MC_ENGINE_IDX_ACCESS_CNTR**pParams->engineIdx == MC_ENGINE_IDX_ACCESS_CNTR*call to uvmAccessCntrService_DISPATCH**next_buff*uvmAccessCntrService_HAL(pGpu, pUvm)**uvmAccessCntrService_HAL(pGpu, pUvm)*call to uvmUnloadAccessCntrBuffer_DISPATCH*NVRM: Unloading UVM Access counters failed (status=0x%08x), proceeding... **NVRM: Unloading UVM Access counters failed (status=0x%08x), proceeding... *region_end**pUvmAccessCntrMemDesc**next_page*current_semaphore**next_chunk*call to uvmSetupAccessCntrBuffer_DISPATCH*NVRM: Setup of UVM Access counters failed (status=0x%08x) **NVRM: Setup of UVM Access counters failed (status=0x%08x) *call to _uvmUnloadAccessCntrBuffer*call to uvmDestroyAccessCntrBuffer_DISPATCH*NVRM: Freeing UVM Access counters failed (status=0x%08x), proceeding... 
**NVRM: Freeing UVM Access counters failed (status=0x%08x), proceeding... *call to uvmInitAccessCntrBuffer_DISPATCH*call to _uvmSetupAccessCntrBuffer**pAccessCounterBuffers*pUvm->pAccessCounterBuffers != NULL**pUvm->pAccessCounterBuffers != NULL*pTraceBuffer != NULL*src/kernel/gpu/video/kernel_video_engine.c**pTraceBuffer != NULL**src/kernel/gpu/video/kernel_video_engine.c*hasSize*pRecord != NULL**pRecord != NULL*pKernelVideoEngine->videoTraceInfo.pTraceBufferVariableData != NULL**pKernelVideoEngine->videoTraceInfo.pTraceBufferVariableData != NULL*call to _eventbufferGotoNextRecord*call to kvidengRingbufferGet_IMPL*current_alloc*src_cpu_node_count*src_gpu_proc_count*free_count**callback_desc_tmp**devmem_next*val64bits*skipSize*call to kvidengRingbufferMakeSpace_IMPL*pDataOut != NULL**pDataOut != NULL*num_pages_evicted_so_far*num_written*usedReadPtr*size2Top*readPtr*oldWritePtr*adjustedReadPtr*newReadPtr**pTraceBufferEngine**pTraceBufferEngineMemDesc**pTraceBufferVariableData*bVideoTraceEnabled*kvidengIsVideoTraceLogSupported(pGpu)**kvidengIsVideoTraceLogSupported(pGpu)*bAlwaysLogging**p8**p_native*eventBufferSize**next_channel*memdescCreate(&pKernelVideoEngine->videoTraceInfo.pTraceBufferEngineMemDesc, pGpu, eventBufferSize, 0, NV_TRUE, addressSpace, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)**memdescCreate(&pKernelVideoEngine->videoTraceInfo.pTraceBufferEngineMemDesc, pGpu, eventBufferSize, 0, NV_TRUE, addressSpace, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)*memdescAlloc(pKernelVideoEngine->videoTraceInfo.pTraceBufferEngineMemDesc)**memdescAlloc(pKernelVideoEngine->videoTraceInfo.pTraceBufferEngineMemDesc)*next_page_index*pTraceBuf**pTraceBuf*pTraceBuf != NULL**pTraceBuf != NULL*NVRM: video engine event tracing is %s. **NVRM: video engine event tracing is %s. 
[Extracted string-table dump — debug/error message literals, assertion expressions, and identifier names from the NVIDIA resource-manager kernel modules; no readable content survives. The strings reference the following source files: src/kernel/gpu_mgr/gpu_db.c, src/kernel/gpu_mgr/gpu_group.c, src/kernel/gpu_mgr/gpu_mgr.c, src/kernel/gpu_mgr/gpu_mgr_sli.c, src/kernel/mem_mgr/console_mem.c, src/kernel/mem_mgr/ctx_buf_pool.c, src/kernel/mem_mgr/egm_mem.c, src/kernel/mem_mgr/fabric_vaspace.c, src/kernel/mem_mgr/gpu_vaspace.c.]
TRANSFER_FLAGS_DEFER_FLUSH)*pteBlockIndex < NV0080_CTRL_DMA_PDE_INFO_PTE_BLOCKS**pteBlockIndex < NV0080_CTRL_DMA_PDE_INFO_PTE_BLOCKS*pteEntrySize*call to kgmmuExtractPteInfo_IMPL*(pteBlockIndex > 0) || pParams->skipVASpaceInit**(pteBlockIndex > 0) || pParams->skipVASpaceInit*call to _gvaspacePinLazyPageTables*pParentFmt**pParentFmt*NULL != pParentFmt**NULL != pParentFmt*pdeVirtAddr*pdeEntrySize*pteBlockIdx < NV0080_CTRL_DMA_PDE_INFO_PTE_BLOCKS**pteBlockIdx < NV0080_CTRL_DMA_PDE_INFO_PTE_BLOCKS*ptePhysAddr*pdeVASpaceSize*pteCacheAttrib*pteAddrSpace*mmuWalkGetPageLevelInfo(pWalk, pRootFmt, 0, (const MMU_WALK_MEMDESC**)&pRootMem, &rootSize)**mmuWalkGetPageLevelInfo(pWalk, pRootFmt, 0, (const MMU_WALK_MEMDESC**)&pRootMem, &rootSize)*pdbAddr*pdeAddrSpace*vaBitCount*pSmallPageTable**pSmallPageTable*call to nvLogBase2*pBigPageTable**pBigPageTable*supportedPageSizeMask*dualPageTableSupported*pdeCoverageBitCount*pageTableBigFormat*pageTableCoverage*num4KPageTableFormats*pageTable4KFormat**pageTable4KFormat*idealVRAMPageSize*0 != (NVBIT(pGpu->gpuInstance) & pVAS->gpuMask)**0 != (NVBIT(pGpu->gpuInstance) & pVAS->gpuMask)*gvaspaceWalkUserCtxAcquire(pGVAS, pGpu, pVASBlock, &userCtx) == NV_OK**gvaspaceWalkUserCtxAcquire(pGVAS, pGpu, pVASBlock, &userCtx) == NV_OK*mmuWalkSparsify(userCtx.pGpuState->pWalk, vaLo, vaHi, NV_FALSE)**mmuWalkSparsify(userCtx.pGpuState->pWalk, vaLo, vaHi, NV_FALSE)*mmuWalkUnmap(userCtx.pGpuState->pWalk, vaLo, vaHi)**mmuWalkUnmap(userCtx.pGpuState->pWalk, vaLo, vaHi)*0 == (vaLo & (pageSize - 1))**0 == (vaLo & (pageSize - 1))*0 == ((vaHi + 1) & (pageSize - 1))**0 == ((vaHi + 1) & (pageSize - 1))*vaHi <= pMemBlock->end**vaHi <= pMemBlock->end*_gvaspaceMappingInsert(pGVAS, pGpu, pVASBlock, vaLo, vaHi, flags)**_gvaspaceMappingInsert(pGVAS, pGpu, pVASBlock, vaLo, vaHi, flags)*NVRM: Failed to acquire walk user context **NVRM: Failed to acquire walk user context *rootPdeCoverage*call to gvaspaceGetVaLimit_DISPATCH*NULL != pGVAS->pGpuStates**NULL != 
pGVAS->pGpuStates*pVAS->gpuMask & NVBIT32(pGpu->gpuInstance)**pVAS->gpuMask & NVBIT32(pGpu->gpuInstance)*call to nvMaskPos32*pPasid != NULL**pPasid != NULL*NVRM: ATS enabled: %u PASID: %u **NVRM: ATS enabled: %u PASID: %u *pGVAS->bIsAtsEnabled**pGVAS->bIsAtsEnabled*call to gpuIsAtsSupportedWithSmcMemPartitioning_DISPATCH*call to mmuFmtAllPageSizes*pRootFmtLvl*pVASpaceBlock**pVASpaceBlock*NVRM: Overriding page size to 4k in Cache only Mode **NVRM: Overriding page size to 4k in Cache only Mode *maxPageSize*NVRM: Invalid page size attr **NVRM: Invalid page size attr *NVRM: Cannot reserve VA on an externally owned VASPACE **NVRM: Cannot reserve VA on an externally owned VASPACE *NVRM: FabricVAS and FlaVAS cannot be used simultaneously! FlaVAS Alloc failed **NVRM: FabricVAS and FlaVAS cannot be used simultaneously! FlaVAS Alloc failed *size <= (rangeHi - rangeLo + 1)**size <= (rangeHi - rangeLo + 1)*origRangeLo <= rangeLo**origRangeLo <= rangeLo*rangeLo <= rangeHi**rangeLo <= rangeHi*rangeHi <= origRangeHi**rangeHi <= origRangeHi*pHeap->eheapSetAllocRange(pHeap, rangeLo, rangeHi)**pHeap->eheapSetAllocRange(pHeap, rangeLo, rangeHi)*management*acquireStatus == NV_OK**acquireStatus == NV_OK*call to _gvaspaceForceFreePageLevelInstances*NULL == listHead(&pGpuState->reservedPageTableEntries)**NULL == listHead(&pGpuState->reservedPageTableEntries)*refFindAncestorOfType(pDeviceRef, classId(Device), &pDeviceRef)**refFindAncestorOfType(pDeviceRef, classId(Device), &pDeviceRef)*memmgrPageLevelPoolsGetInfo(pGpu, pMemoryManager, pDevice, &pGpuState->pPageTableMemPool)**memmgrPageLevelPoolsGetInfo(pGpu, pMemoryManager, pDevice, &pGpuState->pPageTableMemPool)*compPageSize*pBigPT**pBigPT*NULL != pBigPT**NULL != pBigPT*extManagedAlign*fullPdeCoverage*partialPdeExpMax*call to gvaspaceGetReservedVaspaceBase*vaStartMin*vaLimitMax*vaLimitExt*vaLimit <= vaLimitMax**vaLimit <= vaLimitMax*vaLimitInternal <= vaLimitMax**vaLimitInternal <= vaLimitMax*vaStartInternal <= 
vaLimitInternal**vaStartInternal <= vaLimitInternal*vaStartInternal >= vaStartMin**vaStartInternal >= vaStartMin*vaStartInt*vaLimitInt*bigPageSize == pGVAS->bigPageSize**bigPageSize == pGVAS->bigPageSize*compPageSize == pGVAS->compPageSize**compPageSize == pGVAS->compPageSize*extManagedAlign == pGVAS->extManagedAlign**extManagedAlign == pGVAS->extManagedAlign**pFullPdeCoverage == fullPdeCoverage***pFullPdeCoverage == fullPdeCoverage**pPartialPdeExpMax == partialPdeExpMax***pPartialPdeExpMax == partialPdeExpMax*walkFlags*bAtsEnabled*mmuWalkCreate(pFmt->pRoot, NULL, &g_gmmuWalkCallbacks, walkFlags, &pGpuState->pWalk, NULL)**mmuWalkCreate(pFmt->pRoot, NULL, &g_gmmuWalkCallbacks, walkFlags, &pGpuState->pWalk, NULL)*pGVAS->numPartialPtRanges < GVAS_MAX_PARTIAL_PAGE_TABLE_RANGES**pGVAS->numPartialPtRanges < GVAS_MAX_PARTIAL_PAGE_TABLE_RANGES*NVRM: GVAS is still used by some channel group(s) **NVRM: GVAS is still used by some channel group(s) *call to _gvaspaceBar1VaSpaceDestruct*call to _gvaspaceFlaVaspaceDestruct*call to _gvaspaceReleaseVaForServerRm*NV_OK == _gvaspaceReleaseVaForServerRm(pGVAS, pGpu)**NV_OK == _gvaspaceReleaseVaForServerRm(pGVAS, pGpu)*call to _gvaspaceGpuStateDestruct*call to gmmuMemDescCacheFree**pGpuStates*mmuWalkUnmap(userCtx.pGpuState->pWalk, vaspaceGetVaStart(pVAS), vaspaceGetVaLimit(pVAS))**mmuWalkUnmap(userCtx.pGpuState->pWalk, vaspaceGetVaStart(pVAS), vaspaceGetVaLimit(pVAS))*NVRM: Releasing legacy FLA VASPACE, gpu: %x **NVRM: Releasing legacy FLA VASPACE, gpu: %x *call to _gvaspaceBar1VaSpaceDestructFW*call to _gvaspaceBar1VaSpaceDestructClient*FERMI_VASPACE_A == classId**FERMI_VASPACE_A == classId*bIsFaultCapable*bIsExternallyOwned*bIsAtsEnabled*NVRM: ATS Enabled VaSpace **NVRM: ATS Enabled VaSpace *highestBitIdx*call to _gvaspaceGpuStateConstruct*pVAS->vasStart <= pVAS->vasLimit**pVAS->vasStart <= pVAS->vasLimit*pVAS->vasLimit >= pGVAS->vaLimitInternal**pVAS->vasLimit >= pGVAS->vaLimitInternal*NULL != pGVAS->pHeap**NULL != pGVAS->pHeap*call 
to gvaspaceReserveSplitVaSpace_IMPL*bRMInternalRestrictedVaRange*call to _gvaspaceReserveRange*!(flags & VASPACE_FLAGS_RESTRICTED_RM_INTERNAL_VALIMITS)**!(flags & VASPACE_FLAGS_RESTRICTED_RM_INTERNAL_VALIMITS)*partialPtVaRangeSize*call to _gvaspaceAddPartialPtRange*call to _gvaspaceBar1VaSpaceConstruct*bClientRm*bServerRm*vaStartServerRMOwned*vaLimitServerRMOwned*NVRM: vaLimitServerRMOwned (0x%llx)> vaLimitInternal (0x%llx) **NVRM: vaLimitServerRMOwned (0x%llx)> vaLimitInternal (0x%llx) *call to _gvaspaceReserveVaForServerRm*call to _gvaspaceReserveVaForClientRm*call to gvaspaceCopyServerRmReservedPdesToServerRm_IMPL*call to _gvaspaceBar1VaSpaceConstructFW*call to _gvaspaceBar1VaSpaceConstructClient*src/kernel/mem_mgr/hw_resources.c**src/kernel/mem_mgr/hw_resources.c*bVidmem*call to _hwresHwAlloc*call to memmgrGetInvalidOffset_DISPATCH*pMemory->pMemDesc**pMemory->pMemDesc*NVRM: No memory for Resource %p **NVRM: No memory for Resource %p *NVRM: memmgrDeterminePageSize failed **NVRM: memmgrDeterminePageSize failed *NVRM: memmgrAllocDetermineAlignment failed **NVRM: memmgrAllocDetermineAlignment failed *isVgpuHostAllocated*call to NV_RM_RPC_MANAGE_HW_RESOURCE_ALLOC*NVRM: nvHalFbAlloc failure status = 0x%x Requested Attr 0x%x! **NVRM: nvHalFbAlloc failure status = 0x%x Requested Attr 0x%x! *NVRM: nvHalFbAlloc Out of Resources Requested=%x Returned=%x ! **NVRM: nvHalFbAlloc Out of Resources Requested=%x Returned=%x ! 
*pIOVAS != NULL*src/kernel/mem_mgr/io_vaspace.c**pIOVAS != NULL**src/kernel/mem_mgr/io_vaspace.c*call to iovaspaceDestroyMapping_IMPL*pIovaMapping->refcount > 0**pIovaMapping->refcount > 0*call to _iovaspaceDestroySubmapping*call to _iovaspaceDestroyRootMapping*pNextIovaMapping**pNextIovaMapping**pTmpIovaMapping*call to memdescRemoveIommuMap*call to osIovaUnmap*pIovaMapping->refcount != 0**pIovaMapping->refcount != 0*call to _iovaspaceCreateSubmapping*call to _iovaspaceCreateMapping*call to _iovaspaceCreateMappingDataFromMemDesc*call to osIovaMap*NVRM: failed to map memdesc into I/O VA space 0x%x (status = 0x%x) **NVRM: failed to map memdesc into I/O VA space 0x%x (status = 0x%x) *call to memdescAddIommuMap*pRootIovaMapping**pChildren*pTmpIovaMapping != NULL**pTmpIovaMapping != NULL*pRootMemDesc != pPhysMemDesc**pRootMemDesc != pPhysMemDesc**pRootIovaMapping*pRootIovaMapping != NULL**pRootIovaMapping != NULL*(rootOffset & RM_PAGE_MASK) == 0**(rootOffset & RM_PAGE_MASK) == 0*pSubMapping**pSubMapping*((rootOffset >> RM_PAGE_SHIFT) + pPhysMemDesc->PageCount) <= pRootMemDesc->PageCount**((rootOffset >> RM_PAGE_SHIFT) + pPhysMemDesc->PageCount) <= pRootMemDesc->PageCount*mappingDataSize*NVRM: too much memory to map! (0x%llx bytes) **NVRM: too much memory to map! 
(0x%llx bytes) *NVRM: failed to allocate 0x%x bytes for IOVA mapping metadata **NVRM: failed to allocate 0x%x bytes for IOVA mapping metadata *NULL != pKernelGmmu**NULL != pKernelGmmu*NVRM: %lld left-over mappings in IOVAS 0x%x **NVRM: %lld left-over mappings in IOVAS 0x%x *IO_VASPACE_A == classId**IO_VASPACE_A == classId*mappingCount*(memdescGetAddressSpace(pMemDesc) == ADDR_EGM) || (memdescGetAddressSpace(pMemDesc) == ADDR_SYSMEM)*src/kernel/mem_mgr/mem.c**(memdescGetAddressSpace(pMemDesc) == ADDR_EGM) || (memdescGetAddressSpace(pMemDesc) == ADDR_SYSMEM)**src/kernel/mem_mgr/mem.c*gpuCacheAttrib*call to memmgrIsMemoryIoCoherent_DISPATCH*pMemory1*memIsReady(pMemory, NV_FALSE)**memIsReady(pMemory, NV_FALSE)*hMemoryDevice*pSrcParentRef*pSrcParentRef != NULL**pSrcParentRef != NULL*pDstParentRef*pDstParentRef != NULL**pDstParentRef != NULL*pMemorySrc != NULL**pMemorySrc != NULL*memIsReady(pMemorySrc, NV_TRUE)**memIsReady(pMemorySrc, NV_TRUE)*RES_GET_CLIENT_HANDLE(pMemorySrc) == RES_GET_PARENT_HANDLE(pMemorySrc)**RES_GET_CLIENT_HANDLE(pMemorySrc) == RES_GET_PARENT_HANDLE(pMemorySrc)*pSrcDevice**pSrcDevice*pSrcSubDevice**pSrcSubDevice*pDstSubDevice**pDstSubDevice*pSrcDevice != NULL**pSrcDevice != NULL*pDstDevice != NULL**pDstDevice != NULL*NVRM: Parent type mismatch between Src and Dst objectsBoth should be either device or subDevice **NVRM: Parent type mismatch between Src and Dst objectsBoth should be either device or subDevice *NVRM: Failed to acquire GPU locks, error 0x%x **NVRM: Failed to acquire GPU locks, error 0x%x *bReleaseGpuLock*memCheckCopyPermissions(pMemorySrc, pDstGpu, pDstDevice)**memCheckCopyPermissions(pMemorySrc, pDstGpu, pDstDevice)*HeapOwner*KernelMapPriv**KernelMapPriv***KernelMapPriv*Attr*Attr2*isMemDescOwner**pMemoryDst*dupListItem*memIsReady(*ppMemory, NV_FALSE)**memIsReady(*ppMemory, NV_FALSE)*call to _memUnregisterFromGsp*call to _memDestructCommonWithDevice*pSubDeviceInfo**pSubDeviceInfo*pMemory->pMemDesc->_subDeviceAllocCount == 
1**pMemory->pMemDesc->_subDeviceAllocCount == 1*call to NV_RM_RPC_MANAGE_HW_RESOURCE_FREE*btreeUnlink(&pMemory->Node, &pDevice->DevMemoryTable)**btreeUnlink(&pMemory->Node, &pDevice->DevMemoryTable)*bRegisteredWithGsp*NVRM: Failed to unregister hMemory 0x%08x from GSP, status 0x%08x **NVRM: Failed to unregister hMemory 0x%08x from GSP, status 0x%08x *clientGetResourceRef(pClient, hMemory, &pMemoryRef)**clientGetResourceRef(pClient, hMemory, &pMemoryRef)*RmDeprecatedConvertOs32ToOs02Flags(pMemory->Attr, pMemory->Attr2, pMemory->Flags, &os02Flags)**RmDeprecatedConvertOs32ToOs02Flags(pMemory->Attr, pMemory->Attr2, pMemory->Flags, &os02Flags)*NVRM: Unable to allocate HWRESOURCE_INFO tracking structure **NVRM: Unable to allocate HWRESOURCE_INFO tracking structure *call to memdescSetGpuP2PCacheAttrib*Invalid attr and attr2 page size arguments**Invalid attr and attr2 page size arguments*FLD_TEST_DRF(OS32, _ATTR, _PHYSICALITY, _CONTIGUOUS, attr)**FLD_TEST_DRF(OS32, _ATTR, _PHYSICALITY, _CONTIGUOUS, attr)**pMemDescNext*pGrandParentRef*pMemory->pGpu != NULL**pMemory->pGpu != NULL*call to memCopyConstruct_IMPL*call to portSafeAddU16*cachedParams*src/kernel/mem_mgr/mem_export.c**src/kernel/mem_mgr/mem_export.c*bAllGpuLockAcquired*memInfos**memInfos*NVRM: Invalid handle **NVRM: Invalid handle *call to _memoryexportFindImporterParent*NVRM: Failed to find parent: 0x%x **NVRM: Failed to find parent: 0x%x *NVRM: Failed to duping 0x%x **NVRM: Failed to duping 0x%x *call to _memoryexportFindImporterMIGParent*bDevice*pSrcGpu != NULL**pSrcGpu != NULL*pSrcKernelMIGGpuInstance**pSrcKernelMIGGpuInstance*pSrcKernelMIGGpuInstance != NULL**pSrcKernelMIGGpuInstance != NULL*impRef*call to _memoryexportValidateParent*bModuleLockAcquired*pGpuOsInfo**pGpuOsInfo***pGpuOsInfo*call to _memoryexportGetParentHandles*stashMemInfos**stashMemInfos*call to _memoryexportValidateAndDupMem*call to _memoryexportDetachMemAndParent*pMemInfo->hDupedMem == 0**pMemInfo->hDupedMem == 
0*hDupedMem*attachedUsageCount**attachedUsageCount*migGi**migGi*giIdMasks**giIdMasks*call to _memoryexportDetachParent*call to _memoryexportUndupMem*kmigmgrDecRefCount(pParentInfo->pKernelMIGGpuInstance->pShare)**kmigmgrDecRefCount(pParentInfo->pKernelMIGGpuInstance->pShare)*pTempDevice*call to osMatchGpuOsInfo*call to _memoryexportVerifyMem*NVRM: Failed to query address space: 0x%x **NVRM: Failed to query address space: 0x%x *rmGpuGroupLockAcquire(0, GPU_LOCK_GRP_MASK, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM, &gpuMask)**rmGpuGroupLockAcquire(0, GPU_LOCK_GRP_MASK, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM, &gpuMask)*pExportInfo->attachedUsageCount[j].gpu == 0**pExportInfo->attachedUsageCount[j].gpu == 0*pExportInfo->attachedUsageCount[j].migGi[i] == 0**pExportInfo->attachedUsageCount[j].migGi[i] == 0*pExportInfo->cachedParams.giIdMasks[j] == 0**pExportInfo->cachedParams.giIdMasks[j] == 0*pExportInfo->cachedParams.numCurHandles == 0**pExportInfo->cachedParams.numCurHandles == 0*pExportInfo->cachedParams.deviceInstanceMask == 0**pExportInfo->cachedParams.deviceInstanceMask == 0*listCount(&pExportInfo->parentInfoList) == 0**listCount(&pExportInfo->parentInfoList) == 0*pRmApi->Free(pRmApi, pParentInfo->hClient, pMemInfo->hDupedMem)**pRmApi->Free(pRmApi, pParentInfo->hClient, pMemInfo->hDupedMem)*pParentInfo->refCount == 0**pParentInfo->refCount == 0*threadStateGetCurrent(&pThreadNode, NULL) == NV_OK**threadStateGetCurrent(&pThreadNode, NULL) == NV_OK*freeCallback*threadStateEnqueueCallbackOnFree(pThreadNode, &freeCallback)**threadStateEnqueueCallbackOnFree(pThreadNode, &freeCallback)*call to _memoryexportDup*call to _memoryexportConstruct*call to _memoryexportGenerateUuid*numMaxHandles*expId*bImexDaemon*rmGpuGroupLockIsOwner(0, GPU_LOCK_GRP_MASK, &gpuMask)*src/kernel/mem_mgr/mem_fabric.c**rmGpuGroupLockIsOwner(0, GPU_LOCK_GRP_MASK, &gpuMask)**src/kernel/mem_mgr/mem_fabric.c*call to _memoryFabricDetachMem*_memoryFabricDetachMem(pMemoryFabric, pParams->dmaOffset, 
NV_FALSE)**_memoryFabricDetachMem(pMemoryFabric, pParams->dmaOffset, NV_FALSE)*call to _memoryfabricGetPhysAttrsUsingFabricMemdesc*_memoryfabricGetPhysAttrsUsingFabricMemdesc(pGpu, pFabricVAS, pFabricMemDesc, pParams->offset, &physPageSize)**_memoryfabricGetPhysAttrsUsingFabricMemdesc(pGpu, pFabricVAS, pFabricMemDesc, pParams->offset, &physPageSize)*NV_IS_ALIGNED64(pParams->offset, mappingPageSize)**NV_IS_ALIGNED64(pParams->offset, mappingPageSize)*pParams->offset < memdescGetSize(pFabricMemDesc)**pParams->offset < memdescGetSize(pFabricMemDesc)*pageLevelInfoParams*call to fabricvaspaceGetPageLevelInfo_IMPL*fabricvaspaceGetPageLevelInfo(pFabricVAS, pGpu, &pageLevelInfoParams)**fabricvaspaceGetPageLevelInfo(pFabricVAS, pGpu, &pageLevelInfoParams)*pAttachMemInfoTree*btreeSearch(offset, &pNode, pMemdescData->pAttachMemInfoTree)**btreeSearch(offset, &pNode, pMemdescData->pAttachMemInfoTree)*pAttachMemInfoNode**pAttachMemInfoNode*numMemInfos*numDetached*numAttached*call to _memoryFabricAttachMem*physAttrs*NVRM: unable to query cliqueId 0x%x **NVRM: unable to query cliqueId 0x%x *call to knvlinkGetBWModeEpoch_IMPL*totalPfns*NVRM: offset: 0x%llx is out of range: 0x%llx **NVRM: offset: 0x%llx is out of range: 0x%llx *numPfns*pFabricArray**pFabricArray*pfnArray**pfnArray*call to memoryfabricCopyConstruct_IMPL*NVRM: Fabric vaspace object not available **NVRM: Fabric vaspace object not available *NVRM: UC FLA ranges should be initialized by this time! **NVRM: UC FLA ranges should be initialized by this time! 
*call to memmgrIsValidFlaPageSize_DISPATCH*NVRM: Unsupported pageSize: 0x%llx **NVRM: Unsupported pageSize: 0x%llx *NVRM: Alignment should be pageSize aligned **NVRM: Alignment should be pageSize aligned *NVRM: AllocSize should be pageSize aligned **NVRM: AllocSize should be pageSize aligned *bFlexible*NVRM: RO mappings are only supported on non-release builds **NVRM: RO mappings are only supported on non-release builds *NVRM: Physmem can't be provided during flexible object alloc **NVRM: Physmem can't be provided during flexible object alloc *call to _memoryfabricValidatePhysMem*bSkipTlbInvalidateOnFree*bForceContig*bForceNonContig*call to _memoryfabricAllocFabricVa*NVRM: VA Space alloc failed! Status Code: 0x%x Size: 0x%llx RangeLo: 0x%llx, RangeHi: 0x%llx, page size: 0x%llx **NVRM: VA Space alloc failed! Status Code: 0x%x Size: 0x%llx RangeLo: 0x%llx, RangeHi: 0x%llx, page size: 0x%llx *NVRM: Failed to allocate memory descriptor **NVRM: Failed to allocate memory descriptor *NVRM: MemoryFabric memConstructCommon failed **NVRM: MemoryFabric memConstructCommon failed *call to memmgrGetFlaKind_DISPATCH*NVRM: Error getting kind attr for fabric memory **NVRM: Error getting kind attr for fabric memory *NVRM: Failed to dup physmem handle **NVRM: Failed to dup physmem handle *surfaceInfoParam*NVRM: Failed to query physmem info **NVRM: Failed to query physmem info *compressionCoverage*call to fabricvaspaceMapPhysMemdesc_IMPL*NVRM: Failed to map FLA at the given physmem offset **NVRM: Failed to map FLA at the given physmem offset *call to fabricvaspaceVaToGpaMapInsert_IMPL*pRmApi->Free(pRmApi, pFabricVAS->hClient, pMemdescData->hDupedPhysMem) == NV_OK**pRmApi->Free(pRmApi, pFabricVAS->hClient, pMemdescData->hDupedPhysMem) == NV_OK*call to _memoryfabricFreeFabricVa*call to _memoryfabricAllocFabricVa_VGPU*call to fabricvaspaceAllocNonContiguous_IMPL*NVRM: Alloc NV_MEMORY_FABRIC RPC failed, status: %x **NVRM: Alloc NV_MEMORY_FABRIC RPC failed, status: %x 
*pDescribeParams**pDescribeParams*NVRM: CTRL_CMD_DESCRIBE failed, status: 0x%x, numPfns: 0x%x, totalPfns: 0x%llx, readSoFar: 0x%x **NVRM: CTRL_CMD_DESCRIBE failed, status: 0x%x, numPfns: 0x%x, totalPfns: 0x%llx, readSoFar: 0x%x *call to _memoryfabricFreeFabricVa_VGPU*pGpu->pFabricVAS != NULL**pGpu->pFabricVAS != NULL*call to _memoryfabricMemDescGetNumAddr*pageGranularityShift*call to fabricvaspaceVaToGpaMapRemove_IMPL*pNode == NULL**pNode == NULL*NVRM: Unsupported fabric memory type **NVRM: Unsupported fabric memory type *call to refAddInterMapping*NVRM: Failed to setup inter mapping **NVRM: Failed to setup inter mapping *NVRM: Failed to map FLA **NVRM: Failed to map FLA *pInterMapping**pInterMapping*NVRM: Failed to track attach mem info **NVRM: Failed to track attach mem info *call to refRemoveInterMapping*NVRM: Invalid object handle passed **NVRM: Invalid object handle passed *NVRM: Device-less memory isn't supported yet **NVRM: Device-less memory isn't supported yet *NVRM: Physmem handle's owner GPU does not match **NVRM: Physmem handle's owner GPU does not match *call to memmgrIsMemDescSupportedByFla_DISPATCH*NVRM: Invalid physmem handle passed **NVRM: Invalid physmem handle passed *NVRM: Physmem page size should be 2MB **NVRM: Physmem page size should be 2MB *pPfnArray**pPfnArray*call to _createTempMemDesc*call to importDescriptorInstallMemDesc*call to importDescriptorPutNonBlocking*src/kernel/mem_mgr/mem_fabric_import_ref.c**src/kernel/mem_mgr/mem_fabric_import_ref.c*call to importDescriptorGetUnused*exportUuid**exportUuid*src/kernel/mem_mgr/mem_fabric_import_v2.c**src/kernel/mem_mgr/mem_fabric_import_v2.c*NVRM: cliqueId does not match: owner %u, mapper %u **NVRM: cliqueId does not match: owner %u, mapper %u *NVRM: bwMode does not match: owner %u, mapper %u **NVRM: bwMode does not match: owner %u, mapper %u *NVRM: bwModeEpoch does not match: owner %llu, mapper %llu **NVRM: bwModeEpoch does not match: owner %llu, mapper %llu *call to 
fabricImportCacheDelete_IMPL*memoryfabricimportv2IsReady(pMemoryFabricImportV2, NV_FALSE)**memoryfabricimportv2IsReady(pMemoryFabricImportV2, NV_FALSE)*pSourceMemoryFabricImportV2*call to _importDescriptorEnqueueWait*expNodeId*call to _importDescriptorDequeueWait*call to _importDescriptorPutAndLockReleaseWrite*call to memoryfabricimportv2CopyConstruct_IMPL*call to _memoryfabricimportv2Construct*call to memoryExportGetNodeId*call to _importDescriptorAlloc*call to _initImportFabricEvent*importEvent*NVRM: Failed to notify IMEX daemon of import event **NVRM: Failed to notify IMEX daemon of import event *call to _importDescriptorGetAndLockAcquireWrite*call to fabricImportCacheInsert_IMPL*call to _importDescriptorCleanup*pValidatedOsEvent**pValidatedOsEvent*call to _importDescriptorFlushImporters*osDereferenceObjectCount(pNode->pOsEvent)**osDereferenceObjectCount(pNode->pOsEvent)*bMemdescInstalled*memConstructCommon(pNode->pMemory, NV_MEMORY_FABRIC_IMPORT_V2, 0, pFabricImportDesc->pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)**memConstructCommon(pNode->pMemory, NV_MEMORY_FABRIC_IMPORT_V2, 0, pFabricImportDesc->pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)*pFabricImportDesc->extUsageCount > 0**pFabricImportDesc->extUsageCount > 0*pFabricImportDesc->extUsageCount == 1**pFabricImportDesc->extUsageCount == 1*call to _initUnimportFabricEvent*fabricPostEventsV2(pFabric, &unimportEvent, 1) == NV_OK**fabricPostEventsV2(pFabric, &unimportEvent, 1) == NV_OK*bUnimported*listCount(&pFabricImportDesc->waitingImportersList) == 0**listCount(&pFabricImportDesc->waitingImportersList) == 0*pFabricImportDesc->extUsageCount == 0**pFabricImportDesc->extUsageCount == 0*unimport*call to fabricImportCacheGet_IMPL**call to fabricImportCacheGet_IMPL*Cache*src_hClient*src_hParent*src_hHwResClient*src_hHwResDevice*src_hHwResHandle*src_pGpu**src_pGpu*!(pAllocParams->flags & NVOS32_ALLOC_FLAGS_VIRTUAL)*src/kernel/mem_mgr/mem_list.c**!(pAllocParams->flags & 
NVOS32_ALLOC_FLAGS_VIRTUAL)**src/kernel/mem_mgr/mem_list.c*pHwResClient*src_pMemDesc**src_pMemDesc*src_pMemDesc == NULL**src_pMemDesc == NULL*call to memdescSetGuestId*call to rmapiParamsCopyIn*tmpActualSize*NVRM: *** Cannot fake guest sysmem allocation. status =0x%x **NVRM: *** Cannot fake guest sysmem allocation. status =0x%x *(pHeap != NULL)**(pHeap != NULL)*trueLength*(pAllocParams->hHwResHandle == 0) || !(pAllocParams->attr & (DRF_SHIFTMASK(NVOS32_ATTR_COMPR) | DRF_SHIFTMASK(NVOS32_ATTR_ZCULL)))**(pAllocParams->hHwResHandle == 0) || !(pAllocParams->attr & (DRF_SHIFTMASK(NVOS32_ATTR_COMPR) | DRF_SHIFTMASK(NVOS32_ATTR_ZCULL)))*newBase*NVRM: Out of range contig memory at 0x%016llx of size 0x%016llx **NVRM: Out of range contig memory at 0x%016llx of size 0x%016llx *NVRM: Out of range page address 0x%016llx **NVRM: Out of range page address 0x%016llx *vgpuIsCallingContextPlugin(pMemDesc->pGpu, &bCallingContextPlugin)**vgpuIsCallingContextPlugin(pMemDesc->pGpu, &bCallingContextPlugin)*NVRM: memmgrAllocHwResources failure! **NVRM: memmgrAllocHwResources failure! *pFbAllocInfo->format == pAllocParams->format**pFbAllocInfo->format == pAllocParams->format*NVRM: fbAlloc for comptag successful! **NVRM: fbAlloc for comptag successful! 
*NVRM: memmgrAllocHwResources result **NVRM: memmgrAllocHwResources result *NVRM: Attr:0x%x **NVRM: Attr:0x%x *NVRM: Attr2:0x%x **NVRM: Attr2:0x%x *NVRM: comprCovg:0x%x **NVRM: comprCovg:0x%x *NVRM: zcullCovg:0x%x **NVRM: zcullCovg:0x%x *NVRM: ctagOffset:0x%x **NVRM: ctagOffset:0x%x *NVRM: hwResId:0x%x **NVRM: hwResId:0x%x *NVRM: page size default doesn't have any impact **NVRM: page size default doesn't have any impact *NVRM: unexpected pageSizeAttr = 0x%x **NVRM: unexpected pageSizeAttr = 0x%x *localAttachedGpusMask*call to _memMulticastFabricIsPrime*bAllowed*src/kernel/mem_mgr/mem_multicast_fabric.c**src/kernel/mem_mgr/mem_multicast_fabric.c*call to _memorymulticastfabricCtrlRegisterEvent*call to _memMulticastFabricDescriptorEnqueueWait*memConstructCommon(pMemory, NV_MEMORY_MULTICAST_FABRIC, 0, pMulticastFabricDesc->pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)**memConstructCommon(pMemory, NV_MEMORY_MULTICAST_FABRIC, 0, pMulticastFabricDesc->pMemDesc, 0, NULL, 0, 0, 0, 0, NVOS32_MEM_TAG_NONE, NULL)*call to _memorymulticastfabricCtrlGetInfo*numMaxGpus*numAttachedGpus*pSourceMemoryMulticastFabric*bImported*call to _memorymulticastfabricCtrlDetachMem*call to _memMulticastFabricDescriptorDequeueWait*call to _memMulticastFabricDescriptorFree*call to _memorymulticastfabricValidatePhysMem*NVRM: Failed to validate physmem handle **NVRM: Failed to validate physmem handle *call to _memorymulticastfabricCtrlAttachMem*call to _memorymulticastfabricGetAttchedGpuInfo*pFabricVAS != NULL**pFabricVAS != NULL*pGpuInfo->bMcflaAlloc**pGpuInfo->bMcflaAlloc*call to _memorymulticastfabricDetachMem**pNodeItr*call to _memMulticastfabricRemoteAttachResolveDefaults*bLastAttach*bLastAttachRecheck*call to _memorymulticastfabricCtrlAttachRemoteGpu*call to _memMulticastfabricResolvePageSize*NVRM: Only supported on prime MCLFA object **NVRM: Only supported on prime MCLFA object *NVRM: Max no. of GPUs have already attached! **NVRM: Max no. of GPUs have already attached! 
*call to _memorymulticastfabricValidateFabricAttr*NVRM: Page size of remote GPU does not match prime MCLFA object **NVRM: Page size of remote GPU does not match prime MCLFA object *call to _memorymulticastfabricValidateNvlAttr*NVRM: Clique ID etc. validation failed **NVRM: Clique ID etc. validation failed *NVRM: Invalid node ID **NVRM: Invalid node ID *NVRM: GPU is already attached **NVRM: GPU is already attached *NVRM: Failed to track remote GPU info **NVRM: Failed to track remote GPU info *nvlAttr*call to _memMulticastFabricSendInbandRequest*NVRM: Inband request Multicast Team Setup failed! **NVRM: Inband request Multicast Team Setup failed! *call to _memorymulticastfabricCtrlSetFailure*_memorymulticastfabricCtrlSetFailure(pMemoryMulticastFabric, ¶ms)**_memorymulticastfabricCtrlSetFailure(pMemoryMulticastFabric, ¶ms)*NVRM: Fabric validation failed. Prime and non-prime object page sizes do not match. **NVRM: Fabric validation failed. Prime and non-prime object page sizes do not match. *call to _memMulticastFabricDescriptorFlushClients*call to _memorymulticastfabricCtrlAttachGpu*NVRM: flags passed for attach mem must be zero **NVRM: flags passed for attach mem must be zero *NVRM: The object is already ready. **NVRM: The object is already ready. *NVRM: Multicast attach not supported on Windows/CC modes **NVRM: Multicast attach not supported on Windows/CC modes *NVRM: Unsupported pageSize: 0x%llx. **NVRM: Unsupported pageSize: 0x%llx. 
call to _memMulticastFabricGpuInfoAdd
NVRM: Failed to populate GPU info
call to gpuFabricProbeGetGpuFabricHandle
NVRM: Attaching GPU does not have a valid probe handle
NVRM: Attaching GPU does not have a valid clique ID
NVRM: Fabric vaspace object not available for GPU %x
_memMulticastFabricIsPrime(pMulticastFabricDesc->allocFlags)
NVRM: Failed to query IMEX FM caps
NVRM: Remote attach is supported from V2+
NVRM: IMEX channel subscription is not available
call to _memMulticastFabricInitAttachEvent
pRemoteHead
call to _memorymulticastfabricValidateNvlAttrCommon
_memorymulticastfabricValidateNvlAttrCommon(&pHead->nvlAttr, cliqueId, bwMode, bwModeEpoch)
_memorymulticastfabricValidateNvlAttrCommon(&pRemoteHead->nvlAttr, cliqueId, bwMode, bwModeEpoch)
NVRM: Clique ID mismatch %u:%u
NVRM: bwModeEpoch mismatch %llu:%llu
NVRM: bwMode mismatch %u:%u
call to memorymulticastfabricCopyConstruct_DISPATCH
call to _memMulticastFabricConstructResolveDefaults
call to _memMulticastFabricValidateAllocParams
_memMulticastFabricValidateAllocParams(pAllocParams)
call to _memMulticastFabricConstruct
call to _memMulticastfabricResolveAlignment
pMcTeamSetupRspMsg
pMcTeamSetupRsp
call to fabricMulticastSetupCacheGet_IMPL
bInbandReqInProgress
call to _memMulticastFabricAttachGpuPostProcessor
bResponseReceived
call to _memMulticastFabricSendInbandTeamReleaseRequest
call to fabricMulticastCleanupCacheInvokeCallback_IMPL
mcTeamStatus != NV_ERR_NOT_READY
(status != NV_OK)
NVRM: MCFLA retry failed %x
NVRM: Insufficient mcAddressSize returned from Fabric Manager
call to _memMulticastFabricCreateMemDesc
NVRM: Failed to allocate fabric memdesc
call to _memorymulticastFabricAllocVas
NVRM: Failed to allocate fabric VAS
call to _memMulticastFabricInstallMemDesc
NVRM: Attached GPU's probe handle is stale
call to fabricvaspaceAllocMulticast_IMPL
NVRM: Fabric VA space alloc failed for GPU %d
bMcflaAlloc
pMulticastFabricDesc->pMemDesc == NULL
NVRM: Failed to allocate memory descriptor for multicast object
call to _memMulticastFabricDescriptorAllocUsingExpPacket
call to _memMulticastFabricDescriptorAlloc
expUuid
cacheKey
!_memMulticastFabricIsPrime(pMulticastFabricDesc->allocFlags)
pMulticastFabricDesc->cacheKey != 0
call to fabricMulticastSetupCacheDelete_IMPL
pMulticastFabricDesc->bMemdescInstalled
pNode != NULL
call to _memorymulticastfabricBatchFreeVas
threadStateGetCurrent(&pThreadNode, NULL)
osAllocWaitQueue(&pWq)
call to fabricMulticastCleanupCacheInsert_IMPL
fabricMulticastCleanupCacheInsert(pFabric, pMulticastFabricDesc->inbandReqId, pWq)
inbandReqId
call to _memMulticastFabricGpuInfoRemove
call to _memMulticastFabricDescriptorCleanup
pGpuNode
pMemNode == NULL
call to _memMulticastFabricSendInbandTeamSetupRequest
call to _memMulticastFabricSendInbandTeamReleaseRequestV1
call to _memMulticastFabricSendInbandTeamSetupRequestV2
call to _memMulticastFabricSendInbandTeamSetupRequestV1
sendDataParams
pMcTeamReleaseReqMsg
pMcTeamReleaseReq
sendDataSize
call to nvlinkInitInbandMsgHdr
call to knvlinkSendInbandData_IMPL
pMcTeamSetupReqMsg
pMcTeamSetupReq
(NvU32)sendDataSize <= sizeof(sendDataParams->buffer)
mcAllocSize
gpuHandles
idx == pMcTeamSetupReq->numGpuHandles
numKeys
gpuHandlesAndKeys
smIter
call to multimapValueToNode
submapNode
pRemoteNode
pMulticastFabricDesc->imexChannel != -1
call to _memMulticastFabricInitUnimportEvent
subdeviceGetByHandle(RES_GET_CLIENT(pMemoryMulticastFabric), pAttachParams->hSubdevice, &pSubdevice)
NVRM: GPU %x has already attached
pAttachMemInfoTree
pNodeNext
call to fabricMulticastSetupCacheInsert_IMPL
NVRM: Failed to track memdesc 0x%x
listCount(&pMulticastFabricDesc->gpuInfoList) == 0
pMulticastFabricDesc->numAttachedGpus == 0
pMulticastFabricDesc->localAttachedGpusMask == 0
listCount(&pMulticastFabricDesc->waitingClientsList) == 0
NVRM: Alignment must be pageSize for now
NVRM: Invalid number of GPUs to attach
bPrime
coherency
call to memSetGpuCacheSnoop_IMPL
memSetGpuCacheSnoop(NULL, attr, pMemDesc)
src/kernel/mem_mgr/no_device_mem.c
Destroying memdesc but not all refs destroyed!
src/kernel/mem_mgr/os_desc_mem.c
RmDeprecatedConvertOs32ToOs02Flags(pUserParams->attr, pUserParams->attr2, pUserParams->flags, &os02Flags)
osCreateMemFromOsDescriptor(pGpu, pUserParams->descriptor, hClient, os02Flags, &limit, &pMemDesc, pUserParams->descriptorType, pRmAllocParams->pSecInfo->privLevel)
memSetGpuCacheSnoop(pGpu, pUserParams->attr, pMemDesc)
memConstructCommon(pMemory, NV01_MEMORY_SYSTEM_OS_DESCRIPTOR, pUserParams->flags, pMemDesc, 0, NULL, pUserParams->attr, pUserParams->attr2, 0, 0, pUserParams->tag, (HWRESOURCE_INFO *)NULL)
refAddMapping(pResourceRef, &dummyParams, pResourceRef->pParentRef, &pCpuMapping)
call to CliUpdateMemoryMappingInfo
CliUpdateMemoryMappingInfo(pCpuMapping, pCallContext->secInfo.privLevel >= RS_PRIV_LEVEL_KERNEL, pUserParams->descriptor, NvP64_NULL, limit+1, flags)
fbAllocPageFormat
pMemorySystemConfig->bUseRawModeComptaglineAllocation
!FLD_TEST_DRF(OS32, _ATTR, _COMPR, _REQUIRED, fbAllocPageFormat.attr)
memmgrChooseKind_HAL(pGpu, pMemoryManager, &fbAllocPageFormat, DRF_VAL(OS32, _ATTR, _COMPR, fbAllocPageFormat.attr), &kind)
NVRM: memmgrChooseKind_HAL() return (%d) kind(%x).
bCompressedKind
src/kernel/mem_mgr/phys_mem.c
FbAllocInfo
memmgrDeterminePageSize(pMemoryManager, FbAllocInfo.hClient, FbAllocInfo.size, FbAllocInfo.format, FbAllocInfo.pageFormat->flags, &FbAllocInfo.retAttr, &FbAllocInfo.retAttr2) != 0
memmgrAllocDetermineAlignment_HAL(pGpu, pMemoryManager, &FbAllocInfo.size, &FbAllocInfo.align, FbAllocInfo.alignPad, FbAllocInfo.pageFormat->flags, FbAllocInfo.retAttr, FbAllocInfo.retAttr2, 0)
memmgrAllocHwResources(pGpu, pMemoryManager, &FbAllocInfo)
FbAllocInfo.format == pAllocParams->format
memConstructCommon(pMemory, NV01_MEMORY_LOCAL_USER, 0, pMemDesc, 0, NULL, attr, attr2, 0, 0, NVOS32_MEM_TAG_NONE, bCompressedKind ? &hwResource : NULL)
call to NV_RM_RPC_ALLOC_LOCAL_USER
pMemReserveInfo != NULL
src/kernel/mem_mgr/pool_alloc.c
(pChunkSize != NULL) && (pPageSize != NULL)
NULL != pMemReserveInfo
call to rmMemPoolGetRef
rmMemPoolGetRef(pMemReserveInfo) == 0
*pPool
call to poolGetListLength
freeListLength == 0
partialListLength == 0
fullListLenght == 0
call to poolDestroy
poolIndex
pPoolLock
bPrevSkipScrubState
call to poolTrim
(pMemDesc->pPageHandleList != NULL) && (listCount(pMemDesc->pPageHandleList) != 0)
topPool
(poolIndex >= 0)
call to poolFree
call to rmMemPoolRemoveRef
(NULL != pMemReserveInfo)
pPageHandleList != NULL
NVRM: Total size of memory reserved for allocation = 0x%llx Bytes
NVRM: Allocating from pool with alloc size = 0x%llx Bytes
call to poolAllocateContig
(pPageHandle != NULL)
call to poolAllocate
(NULL != pPageHandle)
call to rmMemPoolAddRef
numChunks
call to poolReserve
(status == NV_OK) || ((status == NV_ERR_NO_MEMORY) && (flags & VASPACE_FLAGS_RETRY_PTE_ALLOC_IN_SYS))
topmostPoolIndex
pmaChunkSize
call to poolInitialize
bTrimOnFree
pMemReserveInfo->validAllocCount > 0
NULL != pCtx
NULL != pPage
call to pmaFreePages
poolAllocate((POOLALLOC *)pCtx, &pPage[i])
pPageStore
pmaAllocatePages(pMemReserveInfo->pPma, numPages, pageSize, &allocOptions, pPageStore)
*pMetadata
*pPageStore
pmaAllocatePages(pMemReserveInfo->pPma, pageSize / PMA_CHUNK_SIZE_64K, PMA_CHUNK_SIZE_64K, &allocOptions, &pageBegin)
src/kernel/mem_mgr/standard_mem.c
NVRM: stdmemConstruct output
NVRM: Height: 0x%x
NVRM: Width: 0x%x
NVRM: Pitch: 0x%x
NVRM: Size: 0x%08llx
NVRM: Alignment: 0x%08llx
NVRM: Offset: 0x%08llx
NVRM: Attr: 0x%x
NVRM: Attr2: 0x%x
NVRM: Format: 0x%x
NVRM: ComprCovg: 0x%x
NVRM: ZCullCovg: 0x%x
NVRM: stdmemConstruct input
NVRM: Owner: 0x%x
NVRM: hMemory: 0x%x
NVRM: Type: 0x%x
NVRM: Flags: 0x%x
NVRM: Begin: 0x%08llx
NVRM: End: 0x%08llx
NVRM: CtagOffset: 0x%x
NVRM: hVASpace: 0x%x
NVRM: tag: 0x%x
(pAllocData->rangeLo == 0) && (pAllocData->rangeHi == 0)
NVRM: MMU_PROFILER Attr 0x%x Type 0x%x Attr2 0x%x
NVRM: Encryption requested for video memory on a non-0FB chip;
FLD_TEST_DRF(OS32, _ATTR, _LOCATION, _VIDMEM, pAllocData->attr)
call to osGetSyncpointAperture
src/kernel/mem_mgr/syncpoint_mem.c
NVRM: failed to get syncpoint aperture %x
*physAddr
NVRM: failed to import syncpoint memory %x
pMemoryManager->pSysmemScrubber != NULL
src/kernel/mem_mgr/system_mem.c
call to sysmemscrubScrubAndFree_IMPL
memUtilsAllocMemDesc(pGpu, pAllocRequest, pFbAllocInfo, &pMemDesc, NULL, ADDR_SYSMEM, bContig, &bAllocedMemDesc)
call to sysmemAllocResources
sysmemAllocResources(pGpu, pMemoryManager, pAllocRequest, pFbAllocInfo, pSystemMemory)
pAllocParams->size != 0
(pAllocParams->type < NVOS32_NUM_MEM_TYPES)
localAttr
NVRM: Fixed allocation on sysmem not allowed.
localAttr2
memdescCreate(&pAllocRequest->pMemDesc, pGpu, pAllocParams->size, 0, bContig, ADDR_SYSMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_SKIP_RESOURCE_COMPUTE)
memdescGetAddressSpace(pMemory->pMemDesc) == ADDR_SYSMEM
call to osGetMemoryPages
NVRM: Failed to get sysmem pages
call to osGetNumMemoryPages
call to memmgrDuplicateFromScanoutCarveoutRegion_DISPATCH
DRF_VAL(OS32, _ATTR, _LOCATION, pAllocData->attr) != NVOS32_ATTR_LOCATION_VIDMEM && !(pAllocData->flags & NVOS32_ALLOC_FLAGS_VIRTUAL)
call to sysmemInitAllocRequest_DISPATCH
sysmemInitAllocRequest(pGpu, pSystemMemory, pAllocRequest)
memSetGpuCacheSnoop(pGpu, pAllocData->attr, pMemDesc)
NVRM: Cannot specify NUMA node in unprotected memory.
NVRM: NUMA node mismatch. Requested node: %u CPU node: %u
call to memdescSetNumaNode
call to _sysmemGetNextSmallestPageSize
NVRM: Sysmem alloc failed, retrying with page size 0x%llx.
!FLD_TEST_DRF(OS32, _ATTR2, _SMMU_ON_GPU, _ENABLE, pAllocData->attr2)
bLargePageNonContigAllocSupported
call to memCreateKernelMapping_IMPL
memCreateKernelMapping(pMemory, NV_PROTECT_READ_WRITE, NV_FALSE)
NVRM: Invalid page size attribute: 0x%x
nextPageSize
NVRM: Invalid page size: 0x%llx
pVaList != NULL
src/kernel/mem_mgr/vaddr_list.c
refCount != 0
pRefCount != NULL
pVaList->impl.simple.entries[0].refCnt > 0
pVaList->impl.simple.entries[1].refCnt > 0
pVaInfo->refCnt > 0
pVaddr != NULL
pVaList->impl.simple.entries[0].refCnt < NV_U64_MAX
pVaList->impl.simple.entries[0].vAddr == vAddr
pVaList->impl.simple.entries[1].refCnt < NV_U64_MAX
pVaList->impl.simple.entries[1].vAddr == vAddr
call to vaListInitMap
Simple va list full
pVaInfo->refCnt < NV_U64_MAX
pVaInfo->vAddr == vAddr
*pVaInfo
pVaInfo != NULL
pVaListInfo
(vaListMapCount(pVaList) == 0) || (vaListGetManaged(pVaList) == bManaged)
NVRM: non-zero mapCount(pVaList): 0x%x
pVaInfo->vAddr == 0
pVaInfo->refCnt == 0
pVaInfo->pVaListInfo
*pVaListInfo
vaCache
_hDeviceOrSubDevice
src/kernel/mem_mgr/vaspace.c
NVRM: Invalid object handle 0x%x
NVRM: Invalid parent handle!
NVRM: VA mode %d (PRIVATE) doesn't support allocating an implicit VA space.
call to deviceGetDefaultVASpace_IMPL
NVRM: VA mode %d (GLOBAL) doesn't support allocating private VA spaces.
bRestrictedVaRange
bEnforce32bitPtr
call to vaspaceApplyDefaultAlignment_DISPATCH
vaspaceApplyDefaultAlignment(pVAS, pAllocInfo, pAlign, pSize, pPageSizeLockMask)
NVRM: Requested size 0x%llx more than available range. RangeLo=0x%llx, RangeHi=0x%llx
bPreferSysmemPageTables
bExternallyManaged
pVAS->refCnt != 0
pDstKernelMIGManager
kmigmgrGetMemoryPartitionHeapFromDevice(pDstGpu, pDstKernelMIGManager, pDstDevice, &pDstClientHeap)
src/kernel/mem_mgr/video_mem.c
pDstClientHeap
NVRM: Duping outside of GPU instance not allowed with MIG
refFindAncestorOfType(RES_GET_REF(pMemory), classId(Device), &pSrcDeviceRef)
pSrcDeviceRef
pSrcKernelMIGManager
kmigmgrGetInstanceRefFromDevice(pSrcGpu, pSrcKernelMIGManager, pSrcDevice, &srcInstRef)
kmigmgrGetInstanceRefFromDevice(pDstGpu, pDstKernelMIGManager, pDstDevice, &dstInstRef)
srcInstRef
dstInstRef
NVRM: GPU instance subscription differ between Source and Destination clients
bIsPmaOwned
NVRM: NVOS32_ALLOC_FLAGS_FIXED_ADDRESS_ALLOCATE for PMA cannot be accommodated for NUMA systems
call to memmgrAllocFromScanoutCarveoutRegion_DISPATCH
call to memdescSetCustomHeap
NVRM: Allocated surface in scanout region
NVRM: Failed to allocate surface in scanout region
memUtilsAllocMemDesc(pGpu, pAllocRequest, pFbAllocInfo, &pMemDesc, pHeap, ADDR_FBMEM, bContig, &bAllocedMemDesc)
NVRM: ---> PMA Path taken contiguous
*pageArray
NVRM: ---> PMA Path taken discontiguous
!bContig && bNoncontigAllowed
NVRM: Localized memory requested when localized memory not enabled
bNoncontigAllocation
call to heapAlloc_IMPL
bAllocedMemory
preFillStatus
preFillStatus == NV_OK
call to memmgrScrubMemory_b3696a
call to CliUnregisterMemoryFromThirdPartyP2P
customHeap
NVRM: Function: FREE
NVRM: Owner: 0x%x
NVRM: hMemory: 0x%x
pMemDesc->_subDeviceAllocCount == 1
call to memmgrFreeFromScanoutCarveoutRegion_DISPATCH
rmDeviceGpuLocksAcquire(pGpu, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM)
call to vidmemCopyConstruct
DRF_VAL(OS32, _ATTR, _LOCATION, pAllocData->attr) == NVOS32_ATTR_LOCATION_VIDMEM && !(pAllocData->flags & NVOS32_ALLOC_FLAGS_VIRTUAL)
bRsvdHeap
NVRM: Non-CPR region not yet created
NVRM: Protected memory not enabled but PROTECTED flag is set by client
bIsPmaAlloc
!memmgrIsScrubOnFreeEnabled(pMemoryManager) || bIsPmaAlloc || bSubheap || bRsvdHeap
call to _vidmemPmaAllocate
NV_OK == rmStatus
Video memory requested despite BROKEN FB
pTopLevelMemDesc
SMMU mapping allocation is not supported for ARMv7
pMemory->pHeap
call to NV_RM_RPC_ALLOC_VIDMEM
*pTopLevelMemDesc
smcInfo
kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, pDevice, &partitionRef)
pidInfoData
vidMemUsage
call to gpuacctUpdateProcPeakFbUsage_IMPL
call to heapReference_IMPL
pHeap != NULL && pHeap->heapType == HEAP_TYPE_PHYS_MEM_SUBALLOCATOR
NVRM: failed to get memory partition heap for hClient = 0x%x, hDevice = 0x%x
NULL != pPmaAllocInfo
NVRM: PMA input
call to stdmemQueryPageSize_IMPL
call to _vidmemQueryAlignment
allocOptions.physBegin <= allocOptions.physEnd
(NV_DIV_AND_CEIL(size, pageSize) <= NV_U32_MAX)
pmaInfoSize
portSafeMulU32((pageCount - 1), (sizeof(NvU64)), &pmaInfoSize)
portSafeAddU32(pmaInfoSize, (sizeof(PMA_ALLOC_INFO)), &pmaInfoSize)
NULL != pAllocRequest->pPmaAllocInfo[subdevInst]
NVRM: NVRM: Size requested: 0x%llx bytes
NVRM: PageSize: 0x%llx bytes
NVRM: PageCount: 0x%x
NVRM: Actual Size: 0x%llx
NVRM: Contiguous: %s
NVRM: pmaAllocatePages failed -- retrying as noncontiguous
NVRM: pmaAllocatePages failed (%x)
(NULL != pSize) && (NULL != pAlign)
memmgrDeterminePageSize
memmgrAllocDetermineAlignment_HAL(pGpu, pMemoryManager, &size, &align, 0, pAllocData->flags, retAttr, retAttr2, 0)
src/kernel/mem_mgr/virt_mem_mgr.c
call to vaspaceDecRefCnt_IMPL
*pTargetVAS
ppVAS != NULL
call to vaspaceConstruct__DISPATCH
vasUniqueId
kgmmuGetMaxVASize(pKernelGmmu)
src/kernel/mem_mgr/virt_mem_range.c
maxVA
pVAS->pVASpace
pAllocData->limit < maxVA
pAllocData->offset < maxVA
bAllowUnicastMapping
*pMemorySrc
bIsIndirectPeer
src/kernel/mem_mgr/virtual_mem.c
NVRM: Unicast DMA mappings into virtual memory object not supported.
call to intermapGetDmaMapping
pDmaMappingInfo != NULL
!bPartialUnmap || (gpuMask & (gpuMask - 1)) == 0
!bPartialUnmap || !bIsIndirectPeer
!bPartialUnmap
call to _virtmemFreeKernelMapping
call to dmaFreeBar1P2PMapping_DISPATCH
call to intermapCreateDmaMapping
intermapCreateDmaMapping(pClient, pVirtualMemory, &pDmaMappingInfoLeft, pDmaMappingInfo->Flags, pDmaMappingInfo->Flags2)
pDmaMappingInfoLeft
bP2P
memdescCreateSubMem(&pDmaMappingInfoLeft->pMemDesc, pDmaMappingInfo->pMemDesc, pGpu, pDmaMappingInfoLeft->DmaOffset - pDmaMappingInfo->DmaOffset, dmaOffset - pDmaMappingInfoLeft->DmaOffset)
intermapCreateDmaMapping(pClient, pVirtualMemory, &pDmaMappingInfoRight, pDmaMappingInfo->Flags, pDmaMappingInfo->Flags2)
pDmaMappingInfoRight
memdescCreateSubMem(&pDmaMappingInfoRight->pMemDesc, pDmaMappingInfo->pMemDesc, pGpu, pDmaMappingInfoRight->DmaOffset - pDmaMappingInfo->DmaOffset, pDmaMappingInfo->DmaOffset + pDmaMappingInfo->pMemDesc->Size - pDmaMappingInfoRight->DmaOffset)
pDmaMappingInfoUnmap
intermapCreateDmaMapping(pClient, pVirtualMemory, &pDmaMappingInfoUnmap, pDmaMappingInfo->Flags, pDmaMappingInfo->Flags2)
memdescCreateSubMem(&pDmaMappingInfoUnmap->pMemDesc, pDmaMappingInfo->pMemDesc, pGpu, pDmaMappingInfoUnmap->DmaOffset - pDmaMappingInfo->DmaOffset, pParams->size)
call to dmaFreeMap_IMPL
call to intermapDelDmaMapping
call to intermapRegisterDmaMapping
intermapRegisterDmaMapping(pClient, pVirtualMemory, pDmaMappingInfoLeft, pDmaMappingInfoLeft->DmaOffset, gpuMask)
bDmaMappingInfoLeftRegistered
intermapRegisterDmaMapping(pClient, pVirtualMemory, pDmaMappingInfoRight, pDmaMappingInfoRight->DmaOffset, gpuMask)
bDmaMappingInfoRightRegistered
call to intermapFreeDmaMapping
(!IS_VIRTUAL(pGpu) && !IS_GSP_CLIENT(pGpu)) || !IsSLIEnabled(pGpu)
call to kbusHasPcieBar1P2PMapping_DISPATCH
NVRM: Unicast mappings into virtual memory object not supported.
*pPeerMemDesc
call to dmaAllocBar1P2PMapping_DISPATCH
tgtAddressSpace
bIsSysmem
NVRM: FLAGS_PAGE_KIND_VIRTUAL and FLAGS_PAGE_KIND_OVERRIDE_YES cannot both be set
NVRM: PTE kind override of %d is not supported
bSetPteKind
memdescGetFlag(pMemory->pMemDesc, MEMDESC_FLAGS_SET_KIND)
call to memmgrGetCompressedKind_DISPATCH
perGpuKind
NVRM: DMA map pages failed for requested GPU!
call to dmaAllocMap_IMPL
bDmaMappingRegistered
call to _virtmemAllocKernelMapping
intermapDelDmaMapping(pClient, pVirtualMemory, *pDmaOffset, gpuMask)
NVRM: Allocating coherent link mapping. length=%lld, memDesc->size=%lld
mempoolFlags
vaspaceGetByHandleOrDeviceDefault(pClient, RES_GET_HANDLE(pDevice), pVirtualMemory->hVASpace, &pVAS)
pFbAllocInfoClient != NULL
memUtilsAllocMemDesc(pGpu, pAllocRequest, pFbAllocInfo, &pMemDesc, NULL, ADDR_VIRTUAL, NV_TRUE, &bAllocedMemDesc)
call to vaspaceFillAllocParams_IMPL
NVRM: FillAllocParams failed.
NVRM: VA Space alloc failed! Status Code: 0x%x Size: 0x%llx RangeLo: 0x%llx, RangeHi: 0x%llx, pageSzLockMask: 0x%llx
largestSupportedPageSizeBitIdx
largestSupportedPageSize
heapOwner != 0
NVRM: VirtualMemory memmgrFree failed, client: %x, hVASpace: %x, gpu: %x
call to _virtmemCopyConstruct
bReserveVaOnAlloc
bFlaVAS
pDmaMappingList
pAllocData->flags & NVOS32_ALLOC_FLAGS_VIRTUAL
deviceGetByHandle(pRsClient, hParent, &pDevice)
call to _virtmemQueryVirtAllocParams
bOptimizePageTableMempoolUsage
call to virtmemAllocResources
bResAllocated
pAllocRequest->pMemDesc != NULL
call to NV_RM_RPC_ALLOC_VIRTMEM
memdescGetAddressSpace(pMemory->pMemDesc) == ADDR_VIRTUAL
serverGetClientUnderLock(&g_resServ, GPU_GET_KERNEL_BUS(pSrcGpu)->flaInfo.hClient, &pFlaClient)
pFlaClient
vaspaceGetByHandleOrDeviceDefault(pFlaClient, RES_GET_HANDLE(pSrcMemory->pDevice), GPU_GET_KERNEL_BUS(pSrcGpu)->flaInfo.hFlaVASpace, &pVASSrc)
deviceGetByGpu(pDstClient, pSrcGpu, NV_TRUE, &pDstDevice)
pRmApi->DupObject(pRmApi, pDstClient->hClient, RES_GET_HANDLE(pDstDevice), &hImportedVASpace, GPU_GET_KERNEL_BUS(pSrcGpu)->flaInfo.hClient, GPU_GET_KERNEL_BUS(pSrcGpu)->flaInfo.hFlaVASpace, 0)
pDupedVasRef
bIncAllocRefCnt
pDstMemory
pVASDst
pVASSrc
vaspaceIncAllocRefCnt(pVASSrc, vaddr)
rmDeviceGpuLocksAcquire(pGpu, GPUS_LOCK_FLAGS_NONE, RM_LOCK_MODULES_MEM_PMA)
vaspaceGetByHandleOrDeviceDefault(pClient, hDevice, pAllocData->hVASpace, ppVAS)
vaspaceApplyDefaultAlignment(*ppVAS, pFbAllocInfo, pAlign, pSize, pPageSizeLockMask)
call to nvErrorLog2
arglistCpy
call to gpuLogOobXidMessage_KERNEL
call to krcReportXid_IMPL
src/kernel/os/os_init.c
pCbLen != NULL
!(*pCbLen != 0 && pData == NULL)
call to osReadRegistryStringBase
call to osReadRegistryDwordBase
call to nbsiReadRegistryDword
Paged memory access is prohibited
call to osGetMaxUserVa
call to kbifGetPciConfigSpacePriMirror_DISPATCH
configSpaceSize
isPCIConfigAccess
offAddr
call to gpuIsBar1Size64Bit
call to gpuIsBar2MovedByVtd
call to osInitObjOS
Unknown Error
Nv Internal Testing
Bus Error
Reserved
Invalid Bindata Access
BSOD on Assert or Breakpoint
Display Underflow
call to _osVerifyInterrupts
src/kernel/os/os_sanity.c
NVRM: called with gpuInstance: 0x%x
pIntrEn0
NVRM: pIntrEn0 portMemAllocNonPaged failed!
pIntrEn1
NVRM: pIntrEn1 portMemAllocNonPaged failed!
*call to intrGetNonStallEnable_DISPATCH*call to intrDisableNonStall_DISPATCH*call to intrClearStallSWIntr_DISPATCH*call to intrEnableStallSWIntr_DISPATCH*call to intrSetStallSWIntr_DISPATCH*Bailout*call to intrDisableStallSWIntr_DISPATCH*NVRM: Finishing with %d **NVRM: Finishing with %d *call to osWaitForInterrupt*call to intrGetStallInterruptMode_DISPATCH*NVRM: INTR_EN_0_INTA_SOFTWARE was not set on gpuInstance: 0x%x **NVRM: INTR_EN_0_INTA_SOFTWARE was not set on gpuInstance: 0x%x *NVRM: Triggered for gpuInstance: 0x%x **NVRM: Triggered for gpuInstance: 0x%x *src/kernel/os/os_stubs.c**src/kernel/os/os_stubs.c*pOs1HzCallbackList*call to _osRunAll1HzCallbacks*bAcquired*ppEntryPtr**ppEntryPtr***ppEntryPtr*pOs1HzCallbackFreeList**pOs1HzCallbackFreeList*lastCallbackTime*pOs1HzEvent*call to _os1HzCallbackIsOnList*tmrEventScheduleRelSec(pTmr, pTmr->pOs1HzEvent, 1)*src/kernel/os/os_timer.c**tmrEventScheduleRelSec(pTmr, pTmr->pOs1HzEvent, 1)**src/kernel/os/os_timer.c**pOs1HzCallbackList*NVRM: Callback registration FAILED! **NVRM: Callback registration FAILED! **pOs1HzEvent*os1HzCallbackTable**os1HzCallbackTable*tmrEventCreate(pTmr, &pTmr->pOs1HzEvent, _os1HzCallback, NULL, TMR_FLAG_RECUR)**tmrEventCreate(pTmr, &pTmr->pOs1HzEvent, _os1HzCallback, NULL, TMR_FLAG_RECUR)*suppFuncsLen*suppFuncStatus*src/kernel/platform/acpi_common.c**src/kernel/platform/acpi_common.c*globIdx*globSrc*call to getNbsiObjByType*NVRM: ACPI DSM object (type=0x%x) signature check failed! **NVRM: ACPI DSM object (type=0x%x) signature check failed! 
*call to _acpiDsmSupportedFuncCacheInit*call to _acpiGenFuncCacheInit*call to _acpiDsmCallbackInit*call to _acpiDsmCapsInit*call to _acpiDsmFeatureInit*call to _acpiCacheMethodData*acpiIdListLen*jtCaps*jtRevId*call to gpuSetGC6SBIOSCapabilities_IMPL*acpiIdMuxPartTable**acpiIdMuxPartTable*acpiIdMuxStateTable**acpiIdMuxStateTable*call to testIfDsmSubFunctionEnabled*MDTLFeatureSupport*dsmCurrentFuncSupport*dsmIndex*call to remapDsmFunctionAndSubFunction*NVRM: DSM Generic subfunction 0x%x is not supported. Leaving entry at func %s subfunction 0x%x. **NVRM: DSM Generic subfunction 0x%x is not supported. Leaving entry at func %s subfunction 0x%x. *NVRM: DSM Generic subfunction 0x%x supported. Mapping to func %s subfunction 0x%x **NVRM: DSM Generic subfunction 0x%x supported. Mapping to func %s subfunction 0x%x *NVRM: DSM Test generic subfunction 0x%x is not supported. Indicates possible table corruption. **NVRM: DSM Test generic subfunction 0x%x is not supported. Indicates possible table corruption. *testGenSubFunc*asmDsmSubFunction*call to _isDsmError*NVRM: SBIOS suggested %s supports function %d, but the call failed! **NVRM: SBIOS suggested %s supports function %d, but the call failed! *dsmPlatCapsCache**dsmPlatCapsCache*dispStatusHotplugFunc*dispStatusConfigFunc*perfPostPowerStateFunc*stereo3dStateActiveFunc*callbackOrderOfPrecedenceList**callbackOrderOfPrecedenceList*callbackStatus*testDSMfuncIndex*supportFuncs**supportFuncs*bArg3isInteger*NVRM: %s DSM function not present in ASL. **NVRM: %s DSM function not present in ASL. 
**pAcpiDsmFunction < ACPI_DSM_FUNCTION_COUNT***pAcpiDsmFunction < ACPI_DSM_FUNCTION_COUNT**pAcpiDsmSubFunction < NV_ACPI_GENERIC_FUNC_START***pAcpiDsmSubFunction < NV_ACPI_GENERIC_FUNC_START*suppFuncs**suppFuncs**pAcpiDsmSubFunction == NV_ACPI_ALL_FUNC_SUPPORT***pAcpiDsmSubFunction == NV_ACPI_ALL_FUNC_SUPPORT*NVRM: entry *pAcpiDsmFunction = %x **NVRM: entry *pAcpiDsmFunction = %x *curFuncForGetObjByType*dummySubFunc*curFuncForGetAllObjects*NVRM: exit *pAcpiDsmFunction = 0x%x *pGetObjByTypeSubFunction=0x%x, status=%x **NVRM: exit *pAcpiDsmFunction = 0x%x *pGetObjByTypeSubFunction=0x%x, status=%x *NVRM: ACPI DSM remapping function = %x Subfunction = %x **NVRM: ACPI DSM remapping function = %x Subfunction = %x *call to _getRemappedDsmSubfunction*isGenericDsmSubFunction(*pRemappedDsmSubFunction)**isGenericDsmSubFunction(*pRemappedDsmSubFunction)*NVRM: ACPI DSM remap (func=%s/subfunc=0x%x) remapped to (func=%s/subfunc=0x%x). **NVRM: ACPI DSM remap (func=%s/subfunc=0x%x) remapped to (func=%s/subfunc=0x%x). 
*!isGenericDsmFunction(acpiDsmFunction)**!isGenericDsmFunction(acpiDsmFunction)*isGenericDsmSubFunction(acpiDsmSubFunction)**isGenericDsmSubFunction(acpiDsmSubFunction)*acpiDsmFunction < ACPI_DSM_FUNCTION_COUNT**acpiDsmFunction < ACPI_DSM_FUNCTION_COUNT*(pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_SUCCESS) || (pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_OVERRIDE) || (pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_FAILED)**(pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_SUCCESS) || (pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_OVERRIDE) || (pGpu->acpi.dsm[acpiDsmFunction].suppFuncStatus == DSM_FUNC_STATUS_FAILED)*(acpiDsmFunction < ACPI_DSM_FUNCTION_COUNT) || (acpiDsmFunction == ACPI_DSM_FUNCTION_CURRENT)**(acpiDsmFunction < ACPI_DSM_FUNCTION_COUNT) || (acpiDsmFunction == ACPI_DSM_FUNCTION_CURRENT)*bitToTest*NVRM: %s ACPI DSM called before _acpiDsmSupportedFuncCacheInit subfunction = %x. **NVRM: %s ACPI DSM called before _acpiDsmSupportedFuncCacheInit subfunction = %x. *NVRM: %s DSM functions not available. **NVRM: %s DSM functions not available. 
*NVRM: %s DSM get supported subfunction returned 0x%08x size=%d suppStatus=%d **NVRM: %s DSM get supported subfunction returned 0x%08x size=%d suppStatus=%d *NVRM: %s DSM get supported subfunction returned 0x%04x size=%d suppStatus=%d **NVRM: %s DSM get supported subfunction returned 0x%04x size=%d suppStatus=%d *idx < (sizeof(pGSI->clPdbProperties) * 8)*src/kernel/platform/chipset/chipset.c**idx < (sizeof(pGSI->clPdbProperties) * 8)**src/kernel/platform/chipset/chipset.c*Chipset*chipsetIDInfo*call to clFreeBusTopologyCache_IMPL*call to clInitMappingPciBusDevice_IMPL*call to clUpdatePcieConfig_IMPL*chipsetIDBusAddr**pBusTopologyInfo*matchFound*FHBAddr*revisionID*call to getSubsystemFromPCIECapabilities*chipsetInfo[i].vendorID == 0**chipsetInfo[i].vendorID == 0*pciSubBaseClass**pciSubBaseClass*NVRM: DeviceId[%x] VendorID[%x] BDF[%x:%x:%x] SubClassId[%x] device found. **NVRM: DeviceId[%x] VendorID[%x] BDF[%x:%x:%x] SubClassId[%x] device found. *NVRM: NVRM : This is Bad. FHB/P2P/3DCTRL not found in cached bus topology!!! **NVRM: NVRM : This is Bad. FHB/P2P/3DCTRL not found in cached bus topology!!! *call to osPciReadByte*PCIECapNext*PCIECap*bFoundDevice*NVRM: NVRM initMappingPciBusDevice: can't find a device! **NVRM: NVRM initMappingPciBusDevice: can't find a device! 
*call to clDestructHWBC*call to clFreePcieConfigSpaceBase_IMPL**pSibling**pFirstChild*MB_DisableBr03FlowControl**MB_DisableBr03FlowControl*PDB_PROP_CL_DISABLE_BR03_FLOW_CONTROL*RmForceEnableGen2**RmForceEnableGen2*PDB_PROP_CL_PCIE_FORCE_GEN2_ENABLE*RmForceDisableIomapWC**RmForceDisableIomapWC*PDB_PROP_CL_DISABLE_IOMAP_WC*call to osQADbgRegistryInit**pPcieConfigSpaceBase*PDB_PROP_CL_IS_CHIPSET_IO_COHERENT**Intel**VIA**ServerWorks**Micron**Apple**SiS**ATI**Transmeta**HP**AMD**ALi**AppliedMicro**IBM**MarvellThunderX2**QemuRedhat**AmpereComputing**Huawei**Mellanox**Amazon**Fujitsu**Cadence**ARM**Alibaba**Qualcomm**SiFive**PLDA**Phytium**Grantsdale**Alderwood**Intel2588**Alviso**Greencreek**IntelQ35**BearlakeB**IntelQ33**BearlakeX**Tumwater**Stoakley**SkullTrail**IntelX58**Tylersburg**Lakeport**Glenwood**Montevina**Eaglelake**Arrandale/Auburndale**Clarksfield**P55/PM55/H57**IntelP67-CougarPoint**HuronRiver-HM67**HuronRiver-QM67**HuronRiver-HM65**IntelZ68**IntelP67**IntelX79**IntelZ75**IntelZ77A-GD55**SharkBay-HM87**SharkBay-Z87**SharkBay-H8x/P8x**SharkBay-HM86**SharkBay-E3**IntelZ97**IntelHM97**IntelZ170**IntelHM170**SkyLake C236**SkyLake C232**SkyLake-H**SkyLake C620**IntelX99**IntelC612**IntelZ270**IntelRX9S**IntelC422**IntelX299**IntelC621**IntelC622**IntelC624**IntelC625**IntelC626**IntelC627**IntelC628**IntelZ370**IntelZ390**IntelH370**Intel-CannonLake**Intel-CometLake**Intel-IceLake**Intel-RocketLake**Intel-AlderLake**Intel-SapphireRapids**Intel-RaptorLake**Intel-GraniteRapids**Intel-B660**Intel-Arrowlake**T210**T186**T194**T234**T23x**TH500**T264**649**656**RS400**RS480**FX790**FX890**RD850**RD870**RD890**RX780**FX990/X990/970**GX890**RS780**X370/X399/X470/ TRX40/X570/WRX80**AMD-Raphael**VT8369B**VX900**X-Gene Storm**Venice**Marvell ThunderX2**QEMU Redhat**AMPERE eMag**Huawei Kunpeng920**Mellanox BlueField**Mellanox BlueField 2**Mellanox BlueField 2 Crypto disabled**Mellanox BlueField 3 Crypto enabled**Mellanox BlueField 3 Crypto disabled**Amazon 
Gravitron2**Fujitsu A64FX**Phytium S2500**Ampere Altra**Arm Neoverse N1**Hygon-C86-7151**Marvell Octeon CN96xx**Marvell Octeon CN98xx**Qualcomm Snapdragon 8cx Gen3**Qualcomm Snapdragon 8cx Gen4**SiFive FU740-000**XpressRich-AXI Ref Design**Ampere AmpereOne-160**Phytium S5000**Ampere AmpereOne-192**T254*PDB_PROP_CL_IS_CHIPSET_IN_ASPM_POR_LIST*PDB_PROP_CL_ASPM_L0S_CHIPSET_DISABLED*PDB_PROP_CL_ASPM_L1_CHIPSET_DISABLED*pszUnknown*PDB_PROP_CL_PCIE_NON_COHERENT_USE_TC0_ONLY*PDB_PROP_CL_PCIE_GEN1_GEN2_SWITCH_CHIPSET_DISABLED*PDB_PROP_CL_WAR_AMD_5107271*call to _Set_ASPM_L0S_L1*PDB_PROP_CL_BUG_999673_P2P_ARBITRARY_SPLIT_WAR*PDB_PROP_CL_RELAXED_ORDERING_NOT_CAPABLE*nbcfg*PcieConfigBaseReg*call to clInsertPcieConfigSpaceBase_IMPL*PDB_PROP_CL_PCIE_CONFIG_ACCESSIBLE*PDB_PROP_CL_PCIE_CONFIG_SKIP_MCFG_READ*PDB_PROP_CL_ASLM_SUPPORTS_GEN2_LINK_UPGRADE*PDB_PROP_CL_BUG_3562968_WAR_ALLOW_PCIE_ATOMICS*call to Nvidia_T210_setupFunc*PDB_PROP_CL_ASPM_L0S_CHIPSET_ENABLED_MOBILE_ONLY*PDB_PROP_CL_BUG_1340801_DISABLE_GEN3_ON_GIGABYTE_SNIPER_3*PDB_PROP_CL_BUG_1681803_WAR_DISABLE_MSCG*PDB_PROP_CL_ON_PCIE_GEN3_PATSBURG*PDB_PROP_CL_ALLOW_PCIE_GEN3_ON_PATSBURG_WITH_IVBE_CPU*call to Intel_Huron_River_setupFunc*PDB_PROP_CL_ASPM_L1_CHIPSET_ENABLED_MOBILE_ONLY*PDB_PROP_CL_INTEL_CPU_ROOTPORT1_NEEDS_H57_WAR*call to Intel_Core_Nehalem_Processor_setupFunc*hecbase*src/kernel/platform/chipset/chipset_info.c*NVRM: Can't read HECBASE register on Intel SkullTrail! **src/kernel/platform/chipset/chipset_info.c**NVRM: Can't read HECBASE register on Intel SkullTrail! 
*call to clPcieReadDword_IMPL*call to Intel_29XX_setupFunc*PDB_PROP_CL_PCIE_GEN1_GEN2_SWITCH_CHIPSET_DISABLED_GEFORCE*PDB_PROP_CL_PCIE_GEN2_AT_LESS_THAN_X16_DISABLED*PDB_PROP_CL_ASLM_SUPPORTS_NV_LINK_UPGRADE*PDB_PROP_CL_WAR_4802761_ENABLED*portData*call to objClSetPortPcieEnhancedCapsOffsets*objClSetPortPcieEnhancedCapsOffsets(pCl, &portData)*src/kernel/platform/chipset/chipset_pcie.c**objClSetPortPcieEnhancedCapsOffsets(pCl, &portData)**src/kernel/platform/chipset/chipset_pcie.c*call to _clPcieGetDiagnosticData*pPCIeHandles**pPCIeHandles***pPCIeHandles**pUpstreamPort*blkHeader*blkHeader.action == RM_PCIE_ACTION_EOS**blkHeader.action == RM_PCIE_ACTION_EOS*call to _clPcieSavePcieDiagnosticBlock*call to _clPciePopulateCapMap*idx2*pActiveMap**pActiveMap*bCollectAll*locator*call to _clPcieGetPcieCapSize*pBlkHeader**pBlkHeader*call to _clPcieCopyConfigSpaceDiagData*tempDword*pSCI*pSCI != NULL**pSCI != NULL*linkCaps*clockPmSupport*NVRM: Invalid ASPM state passed. **NVRM: Invalid ASPM state passed. 
*NVRM: Link Control register read failed for upstream port **NVRM: Link Control register read failed for upstream port *NVRM: Skipping non-pass-through GPU%u **NVRM: Skipping non-pass-through GPU%u *virtualConfigBits*NVRM: Hypervisor has specified config bits %u for GPU%u **NVRM: Hypervisor has specified config bits %u for GPU%u *NVRM: No virtual P2P approval capability found in GPU%u's capability list **NVRM: No virtual P2P approval capability found in GPU%u's capability list *NVRM: Unable to handle virtual P2P approval capability version %u on GPU%u **NVRM: Unable to handle virtual P2P approval capability version %u on GPU%u *pciePeerClique*NVRM: Hypervisor has assigned GPU%u to peer clique %u **NVRM: Hypervisor has assigned GPU%u to peer clique %u *pPcieConfigSpaceBaseNext**pPcieConfigSpaceBaseNext*NVRM: PCIe Config BaseAddress 0x%llx Domain %x startBusNumber %x endBusNumber %x **NVRM: PCIe Config BaseAddress 0x%llx Domain %x startBusNumber %x endBusNumber %x *!pOS->getProperty(pOS, PDB_PROP_OS_DOES_NOT_ALLOW_DIRECT_PCIE_MAPPINGS)**!pOS->getProperty(pOS, PDB_PROP_OS_DOES_NOT_ALLOW_DIRECT_PCIE_MAPPINGS)*call to GetMcfgTableFromOS*call to storePcieGetConfigSpaceBaseFromMcfgTable*call to GetRsdtXsdtTablesAddr*call to ScanForTable*mcfgAddr*pMcfgAddressAllocationStructure**pMcfgAddressAllocationStructure*EndBusNumber*sdtAddr*current_sig*NVRM: Checksum mismatch **NVRM: Checksum mismatch *tableAddr*bTableFound*call to osInitGetAcpiTable*call to osGetAcpiTable*call to osGetAcpiRsdpFromUefi*call to scanForRsdtXsdtTables*edbaSeg**edbaSeg*rsdpRev*call to gpuDevIdIsMultiGpuBoard*NVRM: GPU Config Space not accessible **NVRM: GPU Config Space not accessible *PDB_PROP_GPU_3D_CONTROLLER*PDB_PROP_GPU_UPSTREAM_PORT_L1_POR_SUPPORTED*PDB_PROP_GPU_UPSTREAM_PORT_L1_POR_MOBILE_ONLY*call to osNv_cpuid*bC0orC1CPUID*PDB_PROP_SYS_HASWELL_CPU_C0_STEPPING*PDB_PROP_CL_NOSNOOP_NOT_CAPABLE*PCIEErrorCapPtr*PDB_PROP_CL_EXTENDED_TAG_FIELD_NOT_CAPABLE*offset + sizeof(value) <= 0x1000**offset + 
sizeof(value) <= 0x1000*device < PCI_MAX_DEVICES**device < PCI_MAX_DEVICES*func < PCI_MAX_FUNCTIONS**func < PCI_MAX_FUNCTIONS*clPcieWriteDword() failed!**clPcieWriteDword() failed!*call to osTestPcieExtendedConfigAccess*call to objClPcieMapEnhCfgSpace**call to objClPcieMapEnhCfgSpace*call to objClPcieUnmapEnhCfgSpace*clPcieWriteWord() failed!**clPcieWriteWord() failed!*call to osPciWriteWord*clPcieReadDword() failed!**clPcieReadDword() failed!*clPcieReadWord() failed!**clPcieReadWord() failed!*call to clFindPcieConfigSpaceBase_IMPL*pcieConfigSpaceBase*call to clPcieReadWord_IMPL*bridgeCtl*clDevCtrlStatus*call to clPcieWriteRootPortConfigReg_IMPL*pBusTopologyInfoLast**pBusTopologyInfoLast*NVRM: Buffer Allocation for clStoreBusTopologyCache FAILED **NVRM: Buffer Allocation for clStoreBusTopologyCache FAILED *bVgaAdapter*pBusTopologyInfoNext**pBusTopologyInfoNext*bGpuIsMultiGpuBoard*br04handle**br04handle***br04handle*BR04Rev*BR04Bus*BR03Bus*bNoUnsupportedBRFound*PLXBus*bNoOnboardBR04*brNot3rdParty*pGpu1 != pGpu2**pGpu1 != pGpu2*domain1*domain2***handleUp1***handleUp2*pciSwitchBus*bus2*bus1**pBusTopologyInfoBR04DS*BR04DSPorts**pBusTopologyInfoBR04GPU*handleBR04**handleBR04***handleBR04*gpuBus*handleBrdg**handleBrdg***handleBrdg***handleUpstream*gpuDomain*cap_next*cap_type*PCIEVCCapPtr*PCIEL1SsCapPtr*PCIEAcsCapPtr*NVRM: NVPCIE: Upstream port doesn't support PCI Express Capability structure. This is a violation of PCIE spec **NVRM: NVPCIE: Upstream port doesn't support PCI Express Capability structure. This is a violation of PCIE spec *pcie_caps*clSetPortPcieCapOffset(pCl, pPort->addr.handle, &pPort->PCIECapPtr)**clSetPortPcieCapOffset(pCl, pPort->addr.handle, &pPort->PCIECapPtr)*objClSetPortPcieEnhancedCapsOffsets(pCl, pPort)**objClSetPortPcieEnhancedCapsOffsets(pCl, pPort)***gpuCfgAddr*NVRM: unable to map GPU's PCI-E configuration space. **NVRM: unable to map GPU's PCI-E configuration space. ***vAddr*NVRM: NVPCIE: unable to map root port PCIE config space. 
**NVRM: NVPCIE: unable to map root port PCIE config space. *secBus16*call to clStoreBusTopologyCache_IMPL*gpuIsDBDFValid(pGpu)**gpuIsDBDFValid(pGpu)*DeviceID*VendorID*call to objClSetPortCapsOffsets*call to objClFindRootPort**call to objClFindRootPort*call to objClBR03Exists*PDB_PROP_GPU_IS_BR03_PRESENT*call to objClBR04Exists*PDB_PROP_GPU_IS_BR04_PRESENT*boardDownstreamPort*boardUpstreamPort*call to objClSetPcieHWBC*PDB_PROP_GPU_BEHIND_BRIDGE*PDB_PROP_GPU_UPSTREAM_PORT_L0S_UNSUPPORTED*PDB_PROP_GPU_UPSTREAM_PORT_L1_UNSUPPORTED*NVRM: Error reading pcie link control status of upstream port **NVRM: Error reading pcie link control status of upstream port *call to objClGpuUnmapRootPort*call to objClGpuUnmapEnhCfgSpace*busIntfType*NVRM: GPU Domain %X Bus %X Device %X Func %X **NVRM: GPU Domain %X Bus %X Device %X Func %X *call to objClGpuIs3DController*call to objClLoadPcieVirtualP2PApproval*call to objClLoadPcieVirtualConfigBits*call to objClBuildPcieAtomicsAllowList*call to objClInitPcieChipset*call to _objClIsPciePowerControlPresent*call to kbifInitPcieDeviceControlStatus_IMPL*call to kbifProbePcieReqAtomicCaps_DISPATCH*call to kbifProbePcieCplAtomicCaps_DISPATCH*bIsMultiGpu*call to gpumgrUpdateBoardId_IMPL*PDB_PROP_GPU_IS_GEMINI*call to clIsL1SupportedForUpstreamPort_IMPL*PDB_PROP_CL_ASPM_L1_UPSTREAM_PORT_SUPPORTED*NVRM: Chipset %X Domain %X Bus %X Device %X Func %X PCIE PTR %X **NVRM: Chipset %X Domain %X Bus %X Device %X Func %X PCIE PTR %X *NVRM: Chipset %X Root Port Domain %X Bus %X Device %X Func %X PCIE PTR %X **NVRM: Chipset %X Root Port Domain %X Bus %X Device %X Func %X PCIE PTR %X *NVRM: Chipset %X Board Upstream Port Domain %X Bus %X Device %X Func %X PCIE PTR %X **NVRM: Chipset %X Board Upstream Port Domain %X Bus %X Device %X Func %X PCIE PTR %X *NVRM: Chipset %X Board Downstream Port Domain %X Bus %X Device %X Func %X PCIE PTR %X **NVRM: Chipset %X Board Downstream Port Domain %X Bus %X Device %X Func %X PCIE PTR %X *NVRM: FHB Domain %X Bus %X Device %X 
Func %X VendorID %X DeviceID %X **NVRM: FHB Domain %X Bus %X Device %X Func %X VendorID %X DeviceID %X *call to _objClAdjustTcVcMap*bInvalidSubIds*gpuSubVenIds**gpuSubVenIds*gpuSubDevIds**gpuSubDevIds*gpuDevIds**gpuDevIds*NVRM: NVPCIE: Can not read VC resource control 0 on port %04x:%02x:%02x.%1x (bug 1048498). **NVRM: NVPCIE: Can not read VC resource control 0 on port %04x:%02x:%02x.%1x (bug 1048498). *upTcVcMap*NVRM: Cannot read NV_XVE_VCCAP_CTRL0 **NVRM: Cannot read NV_XVE_VCCAP_CTRL0 *epTcVcMap*subsetTcVcMap*NVRM: NVPCIE: TC/VC map is inconsistent (Port %04x:%02x:%02x.%1x 0x%02x, GPU 0x%02x)! **NVRM: NVPCIE: TC/VC map is inconsistent (Port %04x:%02x:%02x.%1x 0x%02x, GPU 0x%02x)! *NVRM: NVPCIE: Fixing TC/VC map to common subset 0x%02x. **NVRM: NVPCIE: Fixing TC/VC map to common subset 0x%02x. *epVcCtrl0*devCap2*devCtrl2*pHandleUp**pHandleUp***pHandleUp*NVRM: Capability pointer not found. **NVRM: Capability pointer not found. *call to _objClGetDownstreamAtomicsEnabledMask*call to _objClGetUpstreamAtomicRoutingCap*call to _objClGetDownstreamAtomicRoutingCap*NVRM: PCIE config space is inaccessible! **NVRM: PCIE config space is inaccessible! *call to kbifGetPciePowerControlValue_IMPL*bPciePowerControlPresent*pciePowerControlValue*NVRM: None of the PCIe Power Control for ASPM override are available **NVRM: None of the PCIe Power Control for ASPM override are available *call to clRootportNeedsNosnoopWAR_FWCLIENT*needsNosnoopWAR*call to clFindFHBAndGetChipsetInfoIndex_IMPL*NVRM: *** Chipset Setup Function Error! **NVRM: *** Chipset Setup Function Error! *NVRM: *** Chipset has no definition! (vendor ID 0x%04x, device ID 0x%04x) **NVRM: *** Chipset has no definition! (vendor ID 0x%04x, device ID 0x%04x) *call to clStorePcieConfigSpaceBaseFromMcfg_IMPL*call to _objClPostSetupFuncRegkeyOverrides*ChipsetInitialized*NVRM: *** PCI-E config space not consistent with PCI config space, FHB vendor ID and device ID not equal! 
**NVRM: *** PCI-E config space not consistent with PCI config space, FHB vendor ID and device ID not equal! *NVRM: *** Setting PCI-E config space inaccessible! **NVRM: *** Setting PCI-E config space inaccessible! *NVRM: Skipping PCI Express host bridge initialization **NVRM: Skipping PCI Express host bridge initialization *call to objClInitGpuPortData*NVRM: *** Unable to get PCI port handles **NVRM: *** Unable to get PCI port handles *call to objClGpuMapEnhCfgSpace*call to objClGpuMapRootPort*call to plxPex8747GetFirmwareInfo*PDB_PROP_GPU_IS_PLX_PRESENT*call to addHwbcToList*addHwbcToList(pGpu, pHWBC)**addHwbcToList(pGpu, pHWBC)*NVRM: *** Root Port Setup Function Error **NVRM: *** Root Port Setup Function Error *NVRM: Upstream port Setup Function Error **NVRM: Upstream port Setup Function Error *NVRM: *** PCI-E config space not consistent with PCI config space, root port vendor ID and device ID not equal! **NVRM: *** PCI-E config space not consistent with PCI config space, root port vendor ID and device ID not equal! *rootPortLtrSupported*PDB_PROP_CL_UPSTREAM_LTR_SUPPORTED*call to clCheckUpstreamLtrSupport_IMPL*NVRM: LTR capability not supported. **NVRM: LTR capability not supported. 
*call to kbifCacheChipsetL1SubstatesEnable_IMPL*AslmCfg**AslmCfg*PDB_PROP_CL_ASLM_SUPPORTS_HOT_RESET*PDB_PROP_CL_ASLM_SUPPORTS_FAST_LINK_UPGRADE*call to pciPbiFindCapability*call to pciPbiCheck*src/kernel/platform/chipset/pci_pbi.c*NVRM: Device does not support PBI **src/kernel/platform/chipset/pci_pbi.c**NVRM: Device does not support PBI *call to pciPbiAcquireMutex*NVRM: Could not acquire pciPbi mutex **NVRM: Could not acquire pciPbi mutex *call to pciPbiSendCommandWait*NVRM: Device did not provide PBI GET FEATURE, %0x **NVRM: Device did not provide PBI GET FEATURE, %0x *call to pciPbiReleaseMutex*NVRM: Device did not respond to PBI GET_CAPABILITIES **NVRM: Device did not respond to PBI GET_CAPABILITIES *NVRM: Device does not support PBI Execute Routine **NVRM: Device does not support PBI Execute Routine **gid*NVRM: Failure reading GID **NVRM: Failure reading GID *call to pciPbiCheckStatusWait*poll_limit*cmdStatus*call to pciPbiError*NVRM: Attempted to release PBI mutex that does not match client ID **NVRM: Attempted to release PBI mutex that does not match client ID *Family*Model*src/kernel/platform/cpu.c*NVRM: Unrecognized AMD processor 0x%x in cpuidInfoAMD. Assuming new Ryzen **src/kernel/platform/cpu.c**NVRM: Unrecognized AMD processor 0x%x in cpuidInfoAMD. 
Assuming new Ryzen *largestExtendedFunctionNumberSupported*dataCacheLineSize*l1DataCacheSize*l2DataCacheSize*bSEVCapable*maxEncryptedGuests*platformID*call to DecodePrescottCache*coresOnDie*uLevel*uLineSize*uCacheSize*size >= maxSize**size >= maxSize*call to osGetCpuCount*numLogicalCpus*numPhysicalCpus*maxLogicalCpus*Foundry*StrID**StrID*cpuHasLeafB*CpuHT*NVRM: RmInitCpuCounts: physical 0x%x logical 0x%x **NVRM: RmInitCpuCounts: physical 0x%x logical 0x%x *ExtFamily*DisplayedFamily*ExtModel*DisplayedModel*Stepping*StandardFeatures*BrandId*call to osNv_rdcr4*check_osfxsr*call to osNv_rdxcr0*check_osxsave*call to osGetCpuFrequency*ExtendedFeatures*call to getEmbeddedProcessorName*call to cpuidInfoIntel*call to cpuidInfoAMD*String**String*family*stepping*brandId*call to getCpuCounts*call to Plx_Pex8747_GetBar0*bar0*call to Plx_Pex8747_ChangeUpstreamBusSpeed*src/kernel/platform/hwbc.c*NVRM: Not a PLX PEX8747! **src/kernel/platform/hwbc.c**NVRM: Not a PLX PEX8747! *NVRM: Already at Gen%u speed. No need to transition. **NVRM: Already at Gen%u speed. No need to transition. *NVRM: Failed to train to Gen%u speed. **NVRM: Failed to train to Gen%u speed. *NVRM: Device has no BAR0! **NVRM: Device has no BAR0! *call to Nvidia_BR04_GetBar0*pDpData**pDpData***pDpData*NVRM: *** Set clock trims for BR04 A01. **NVRM: *** Set clock trims for BR04 A01. *regValue2*NVRM: *** Enabling BR04 Gen2 features. **NVRM: *** Enabling BR04 Gen2 features. *NVRM: *** Setup BR04 upstream link speed. **NVRM: *** Setup BR04 upstream link speed. *enableCorrErrors*NVRM: *** BR04 has fallen off the bus after we tried to train it to Gen2! **NVRM: *** BR04 has fallen off the bus after we tried to train it to Gen2! *NVRM: Verified we are at Gen2 speed. **NVRM: Verified we are at Gen2 speed. *NVRM: Failed to train to Gen2 speed. **NVRM: Failed to train to Gen2 speed. *NVRM: Already in Gen2 speed. No need to transition. **NVRM: Already in Gen2 speed. No need to transition. 
*NVRM: *** Gen2 not supported by other side of link. **NVRM: *** Gen2 not supported by other side of link. *NVRM: *** BR04 WAR for bug 779279 is not working! **NVRM: *** BR04 WAR for bug 779279 is not working! *NVRM: *** BR04 registers have already been programmed. **NVRM: *** BR04 registers have already been programmed. *NVRM: *** Setup BR04 registers. **NVRM: *** Setup BR04 registers. *CreditSet*call to Nvidia_BR04_ShiftAliasingRegisters*minBus*maxBus*Shifted*SecBus*SubBus*call to Plx_Pex8747_setupFunc*call to objClResumeBridgeHWBC*bFirstGpuResuming*call to objClGetBr03Bar0*NVRM: *** BR03 registers has already been programmed! **NVRM: *** BR03 registers has already been programmed! *call to objClFreeBr03Bar0*laneWidth**laneWidth*needRes**needRes**total*NVRM: *** BR03 registers has already been programmed (one device workaround)! **NVRM: *** BR03 registers has already been programmed (one device workaround)! ***bufferSize*NVRM: *** Setup BR03 registers! **NVRM: *** Setup BR03 registers! 
*theader*theader >= 0 && tdata >= 0**theader >= 0 && tdata >= 0*dport*Rev*PDB_PROP_GPU_BEHIND_BR03*PDB_PROP_GPU_BEHIND_BR04***root**father*call to objClFindUpperHWBC**now*call to Nvidia_BR04_setupFunc*(pHWBC != NULL)**(pHWBC != NULL)*pHWBC->ctrlDev.handle**pHWBC->ctrlDev.handle*vendorID == PCI_VENDOR_ID_PLX && deviceID == PLX_DEVICE_ID_PEX8747**vendorID == PCI_VENDOR_ID_PLX && deviceID == PLX_DEVICE_ID_PEX8747*osPciReadWord(pHWBC->ctrlDev.handle, PCI_COMMON_CLASS_SUBCLASS) == PCI_COMMON_CLASS_SUBBASECLASS_P2P**osPciReadWord(pHWBC->ctrlDev.handle, PCI_COMMON_CLASS_SUBCLASS) == PCI_COMMON_CLASS_SUBBASECLASS_P2P*osPciReadByte(pHWBC->ctrlDev.handle, PCI_TYPE_1_SECONDARY_BUS_NUMBER) == port.bus**osPciReadByte(pHWBC->ctrlDev.handle, PCI_TYPE_1_SECONDARY_BUS_NUMBER) == port.bus*hasPlxFirmwareInfo*vendorID == PCI_VENDOR_ID_NVIDIA && deviceID == NV_BR03_XVU_DEV_ID_DEVICE_ID_BR03**vendorID == PCI_VENDOR_ID_NVIDIA && deviceID == NV_BR03_XVU_DEV_ID_DEVICE_ID_BR03*call to objClSetupBR03*vendorID == PCI_VENDOR_ID_NVIDIA && deviceID >= NV_BR04_XVU_DEV_ID_DEVICE_ID_BR04_0 && deviceID <= NV_BR04_XVU_DEV_ID_DEVICE_ID_BR04_15**vendorID == PCI_VENDOR_ID_NVIDIA && deviceID >= NV_BR04_XVU_DEV_ID_DEVICE_ID_BR04_0 && deviceID <= NV_BR04_XVU_DEV_ID_DEVICE_ID_BR04_15*pHWBC == NULL**pHWBC == NULL*src/kernel/platform/nbsi/nbsi_getv.c**src/kernel/platform/nbsi/nbsi_getv.c*!(*pRetSize != 0 && pRetBuf == NULL)**!(*pRetSize != 0 && pRetBuf == NULL)*NVRM: Invalid gpu index %d. Aborting nbsi get value. **NVRM: Invalid gpu index %d. Aborting nbsi get value. *nbsiDrvrTable**nbsiDrvrTable***nbsiDrvrTable*pNbsiDriverObj**pNbsiDriverObj*objHdr*pNbsiScopes*modules**modules*pNbsiModule**pNbsiModule*tPtr**tPtr**pNbsiScopes*pNbsiElements**paths**pNbsiElements*pNbsiElementFound**pNbsiElementFound***pNbsiElementFound*hdrPartA*hdrPartB*valueID*ba*bax*NVRM: Invalid NBSI table entry (1). **NVRM: Invalid NBSI table entry (1). *NVRM: Mod/Path/El=%x/%x/%x n/hash/typ=%d/%x/%d. 
**NVRM: Mod/Path/El=%x/%x/%x n/hash/typ=%d/%x/%d. *call to rtnNbsiElement*pRetBufEnd**pRetBufEnd*rtnData*NVRM: Invalid NBSI table entry (2) **NVRM: Invalid NBSI table entry (2) *NVRM: Mod/Path/Elem=%x/%x/%x, ndx=%d, type = %d. **NVRM: Mod/Path/Elem=%x/%x/%x, ndx=%d, type = %d. *nbsiOSstr**nbsiOSstr***nbsiOSstr*nbsiOSstrLen**nbsiOSstrLen*nbsiOSstrHash**nbsiOSstrHash*curMaxNbsiOSes*DriverVer*DX*availDirLoc**availDirLoc*pTblCache**pTblCache***pTblCache*regOverrideList**regOverrideList***regOverrideList*src/kernel/platform/nbsi/nbsi_init.c*NVRM: Invalid gpu index %d. Aborting free NBSI table. **src/kernel/platform/nbsi/nbsi_init.c**NVRM: Invalid gpu index %d. Aborting free NBSI table. *call to freeNbsiCache*NVRM: Invalid gpu index %d. Aborting NBSI init. **NVRM: Invalid gpu index %d. Aborting NBSI init. *NVRM: NBSI table already initialized for GPU index %d. Aborting NBSI init. **NVRM: NBSI table already initialized for GPU index %d. Aborting NBSI init. *NVRM: Initializing NBSI tables for gpu %d **NVRM: Initializing NBSI tables for gpu %d *call to allocNbsiCache*call to setNbsiOSstring*call to checkUidMatch*nbsiDriverSource*nbsiDriverIndex*call to getNbsiCacheInfoForGlobType*NVRM: No NBSI table for gpu %d found. **NVRM: No NBSI table for gpu %d found. *NVRM: Using NBSI driver object for gpu %d from registry. **NVRM: Using NBSI driver object for gpu %d from registry. *NVRM: Using NBSI driver object for gpu %d from VBIOS. **NVRM: Using NBSI driver object for gpu %d from VBIOS. *NVRM: Using NBSI driver object for gpu %d from SBIOS. **NVRM: Using NBSI driver object for gpu %d from SBIOS. *NVRM: Using NBSI driver object for gpu %d from ACPI table. **NVRM: Using NBSI driver object for gpu %d from ACPI table. *NVRM: Using NBSI driver object for gpu %d from unknown source. **NVRM: Using NBSI driver object for gpu %d from unknown source. *!(*pRtnObjSize != 0 && pRtnObj == NULL)**!(*pRtnObjSize != 0 && pRtnObj == NULL)*NVRM: Invalid gpu index %d. Aborting NBSI get object. 
[Extracted string-table fragment: duplicated NVRM debug/assertion messages and symbol names from NVIDIA kernel-driver sources, covering the NBSI object cache and directory code (src/kernel/platform/nbsi/nbsi_osrg.c), P2P capability detection (src/kernel/platform/p2p/p2p_caps.c), the platform request handler (src/kernel/platform/platform_request_handler.c, src/kernel/platform/platform_request_handler_ctrl.c), SLI support checks (src/kernel/platform/sli/sli.c), the Sync GPU Boost manager (src/kernel/power/gpu_boost_mgr.c), and the RM API layer (src/kernel/rmapi/alloc_free.c, binary_api.c, client.c, client_resource.c, client_resource_sli.c, control.c). No prose content is recoverable from this span.]
*RmAlloc*RmControl*RmFree*RmMapMemory*CopyUser*AllocMem*FreeMem*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_NVENC_SW_SESSION_INFO_PARAMS))*src/kernel/rmapi/embedded_param_copy.c**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_NVENC_SW_SESSION_INFO_PARAMS))**src/kernel/rmapi/embedded_param_copy.c***sessionInfoTbl*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_ENGINES_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_ENGINES_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_CE_GET_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_CE_GET_CAPS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_FIFO_GET_CHANNELLIST_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_FIFO_GET_CHANNELLIST_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_SYSTEM_EXECUTE_ACPI_METHOD_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_SYSTEM_EXECUTE_ACPI_METHOD_PARAMS))***inData*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0073_CTRL_SYSTEM_EXECUTE_ACPI_METHOD_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0073_CTRL_SYSTEM_EXECUTE_ACPI_METHOD_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV83DE_CTRL_DEBUG_READ_MEMORY_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV83DE_CTRL_DEBUG_READ_MEMORY_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == 
sizeof(NV83DE_CTRL_DEBUG_WRITE_MEMORY_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV83DE_CTRL_DEBUG_WRITE_MEMORY_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV83DE_CTRL_DEBUG_ACCESS_MEMORY_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV83DE_CTRL_DEBUG_ACCESS_MEMORY_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_HOST_GET_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_HOST_GET_CAPS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_INFO_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_INFO_PARAMS))***biosInfoList*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_NBSI_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_NBSI_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_NBSI_OBJ_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_BIOS_GET_NBSI_OBJ_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GR_GET_INFO_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GR_GET_INFO_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_FIFO_GET_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_FIFO_GET_CAPS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NVA0BC_CTRL_NVENC_SW_SESSION_UPDATE_INFO_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) 
== sizeof(NVA0BC_CTRL_NVENC_SW_SESSION_UPDATE_INFO_PARAMS))***timestampBuffer*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV402C_CTRL_I2C_INDEXED_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV402C_CTRL_I2C_INDEXED_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV402C_CTRL_I2C_TRANSACTION_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV402C_CTRL_I2C_TRANSACTION_PARAMS))*call to i2cTransactionCopyOut*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_EXEC_REG_OPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_EXEC_REG_OPS_PARAMS))***regOps*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_NVD_GET_DUMP_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_NVD_GET_DUMP_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_NVD_GET_DUMP_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_NVD_GET_DUMP_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0041_CTRL_GET_SURFACE_INFO_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0041_CTRL_GET_SURFACE_INFO_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_SYSTEM_GET_P2P_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0000_CTRL_SYSTEM_GET_P2P_CAPS_PARAMS))*peerIdsStatus***busPeerIds***busEgmPeerIds*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_FB_GET_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == 
sizeof(NV0080_CTRL_FB_GET_CAPS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GPU_GET_CLASSLIST_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GPU_GET_CLASSLIST_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_ENGINE_CLASSLIST_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_GET_ENGINE_CLASSLIST_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GR_GET_CAPS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_GR_GET_CAPS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_I2C_ACCESS_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_I2C_ACCESS_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GR_GET_INFO_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GR_GET_INFO_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NVB06F_CTRL_MIGRATE_ENGINE_CTX_DATA_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NVB06F_CTRL_MIGRATE_ENGINE_CTX_DATA_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NVB06F_CTRL_GET_ENGINE_CTX_DATA_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NVB06F_CTRL_GET_ENGINE_CTX_DATA_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_RC_READ_VIRTUAL_MEM_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_RC_READ_VIRTUAL_MEM_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && 
((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_DMA_UPDATE_PDE_2_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV0080_CTRL_DMA_UPDATE_PDE_2_PARAMS))*(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_RPC_GSP_TEST_PARAMS))**(((pRmCtrlParams)->pParams != NULL) && ((pRmCtrlParams)->paramsSize) == sizeof(NV2080_CTRL_GPU_RPC_GSP_TEST_PARAMS))*call to _i2cTransactionCopyIn*smbusMultibyteRegisterData*edidData*bCopyInitDone*call to XlateUserModeArgsToSecInfo*call to _nv04ControlWithSecInfo*call to RmDeprecatedGetControlHandler*call to rmapiInitDeprecatedContext*ctxGraveyard*call to _nv04ShareWithSecInfo*call to _nv04DupObjectWithSecInfo*call to _nv04UnmapMemoryDmaWithSecInfo*call to _nv04MapMemoryDmaWithSecInfo*call to RmDeprecatedBindContextDma*call to RmDeprecatedAllocContextDma*call to RmDeprecatedI2CAccess*call to _nv04UnmapMemoryWithSecInfo*call to _nv04MapMemoryWithSecInfo*call to RmDeprecatedIdleChannels*call to RmDeprecatedVidHeapControl*call to _nv01FreeWithSecInfo*call to _nv04AllocWithAccessSecInfo*call to _nv04AllocWithSecInfo*call to RmDeprecatedAllocObject*call to RmDeprecatedAllocMemory*call to _nv04VidHeapControl*call to _nv04UnmapMemoryDma*call to _nv04UnmapMemory*call to _nv04MapMemoryDma*call to _nv04MapMemory*call to _nv04IdleChannels*call to _nv04I2CAccess*call to _nv04Share*call to _nv04DupObject*call to _nv04Control*call to _nv04BindContextDma*call to _nv04AllocContextDma*call to _nv04AllocWithAccess*call to _nv04Alloc*call to _nv04AddVblankCallback*call to _nv01Free*call to _nv01AllocObject*call to _nv01AllocMemory*call to RmDeprecatedAddVblankCallback*pFirstNode*pLastNode*vgpuUnbind*vgpuBind*gpuBindUnbind*pEventResourceRef*src/kernel/rmapi/event.c*NVRM: Event is null **src/kernel/rmapi/event.c**NVRM: Event is null *NVRM: pNotifierShare or pNotifier is NULL **NVRM: pNotifierShare or pNotifier is NULL *NVRM: pEventNotification is NULL **NVRM: pEventNotification 
is NULL *NVRM: Failed to look up resource reference handle: 0x%x **NVRM: Failed to look up resource reference handle: 0x%x *pCliResRef*call to eventSystemEnqueueEvent*NVRM: fails to add event=%d **NVRM: fails to add event=%d *call to eventSystemDequeueEventLatest*NVRM: failed to deliver event 0x%x**NVRM: failed to deliver event 0x%x*call to inotifyUnregisterEvent_DISPATCH*pEventNotif*call to inotifySetNotificationShare_DISPATCH*pNotifierClient**pNotifierClient*pNotifierRef*call to inotifyGetOrAllocNotifShare_DISPATCH*call to _eventRpcForType*serverutilGetResourceRef(hNotifierClient, hNotifierResource, &pNotifierRef)**serverutilGetResourceRef(hNotifierClient, hNotifierResource, &pNotifierRef)*NVRM: RmFreeEvent could not set pGpu. hClient=0x%x, hObject=0x%x **NVRM: RmFreeEvent could not set pGpu. hClient=0x%x, hObject=0x%x *call to unregisterEventNotification*pNv0050AllocParams*NVRM: RmAllocEvent could not set pGpu. hClient=0x%x, hObject=0x%x **NVRM: RmAllocEvent could not set pGpu. hClient=0x%x, hObject=0x%x *clientGetResourceRef(pRsClient, pRsClient->hClient, &pClientRef)**clientGetResourceRef(pRsClient, pRsClient->hClient, &pClientRef)*call to eventInit_IMPL*serverutilGetResourceRef(hParentClient, pNv0050AllocParams->hSrcResource, &pSourceRef)**serverutilGetResourceRef(hParentClient, pNv0050AllocParams->hSrcResource, &pSourceRef)*pSourceRef*pRBI**pRBI*pHeader->recordPut < pRBI->totalRecordCount*src/kernel/rmapi/event_buffer.c**pHeader->recordPut < pRBI->totalRecordCount**src/kernel/rmapi/event_buffer.c*call to eventBufferProducerAddEvent*pProducerData*call to eventBufferIsNotifyThresholdMet*kernelMapInfo*call to eventBufferGetRecordBufferCount*pProducerInfo*call to eventBufferGetVardataBufferCount*call to eventBufferUpdateRecordBufferGet*call to eventBufferUpdateVardataBufferGet*updateTelemetry*call to eventBufferSetKeepNewest*pAddress == NvP64_NULL**pAddress == NvP64_NULL*pSubdevice != NULL && pGpu != NULL**pSubdevice != NULL && pGpu != NULL*call to 
_unmapAndFreeMemory*call to videoRemoveAllBindpoints*call to fecsRemoveAllBindpoints*call to gspTraceRemoveAllBindpoints*pHeaderDesc*headerAddr**headerAddr*headerPriv**headerPriv*pKernelMap*pClientMap*recordBuffPriv**recordBuffPriv*pVardataBufDesc*vardataBuffPriv**vardataBuffPriv*bufferHeader**bufferHeader***bufferHeader***recordBuffer*vardataBuffer**vardataBuffer***vardataBuffer*(pAllocParams->hRecordBuffer != 0)**(pAllocParams->hRecordBuffer != 0)*((pAllocParams->vardataBufferSize == 0) ^ (pAllocParams->hVardataBuffer != 0))**((pAllocParams->vardataBufferSize == 0) ^ (pAllocParams->hVardataBuffer != 0))*pRecordRef*pHeaderRef*pVardataRef*bNoDeviceMem*NVRM: hSubDevice must be provided. **NVRM: hSubDevice must be provided. *bUsingVgpuStagingBuffer*hMapperClient*hMapperDevice***headerAddr*call to _allocAndMapMemory*call to eventBufferInitRecordBuffer*call to eventBufferInitVardataBuffer*kernelNotificationhandle**kernelNotificationhandle***kernelNotificationhandle*call to eventBufferInitNotificationHandle*src/kernel/rmapi/event_notification.c*NVRM: Method = 0x%x **src/kernel/rmapi/event_notification.c**NVRM: Method = 0x%x *NVRM: Data = 0x%x **NVRM: Data = 0x%x *NVRM: Status = 0x%x **NVRM: Status = 0x%x *NVRM: Action = 0x%x **NVRM: Action = 0x%x *NotifyTriggerCount**nextEvent*lastEvent**lastEvent*(nextEvent->pGpu != NULL)**(nextEvent->pGpu != NULL)*call to _removeEventNotification*pTargetEvent*call to eventGetEngineTypeFromSubNotifyIndex*kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pSubDevice), &ref)**kmigmgrGetInstanceRefFromDevice(pGpu, pKernelMIGManager, GPU_RES_GET_DEVICE(pSubDevice), &ref)*kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, rmEngineId, &globalRmEngineId)**kmigmgrGetLocalToGlobalEngineType(pGpu, pKernelMIGManager, ref, rmEngineId, &globalRmEngineId)*call to 
_gpuEngineEventNotificationRemove**pTargetEvent*EventNotify**EventNotify*bBroadcastEvent*bSubdeviceSpecificEvent*SubdeviceSpecificValue*bEventDataRequired*bClientRM*bNonStallIntrEvent*call to _insertEventNotification*call to _gpuEngineEventNotificationInsert*rmTmpStatus*NVRM: notifier 0x%x doesn't use the fast non-stall interrupt path! **NVRM: notifier 0x%x doesn't use the fast non-stall interrupt path! *rmEngineId < NV_ARRAY_ELEMENTS(pGpu->engineNonstallIntrEventNotifications)**rmEngineId < NV_ARRAY_ELEMENTS(pGpu->engineNonstallIntrEventNotifications)*call to _gpuEngineEventNotificationListNotify*pEngineEventNotification*pTempKernelMapping*NVRM: Per-vGPU semaphore location mapping is NULL. Skipping the current node. **NVRM: Per-vGPU semaphore location mapping is NULL. Skipping the current node. *call to hypervisorInjectInterrupt_IMPL*bInPendingNotifyList*osNotifyEvent(pGpu, pIter->pEventNotify, 0, 0, NV_OK)**osNotifyEvent(pGpu, pIter->pEventNotify, 0, 0, NV_OK)*pIter->pendingNotifyCount == 0**pIter->pendingNotifyCount == 0*call to _gpuEngineEventNotificationListLockPreemptible**pEngineEventNotification*call to _gpuEngineEventNotificationListUnlockPreemptible*pEngineEventNotification != NULL**pEngineEventNotification != NULL*pEventNotify != NULL**pEventNotify != NULL*pEventNotificationList->activeNotifyThreads == 0**pEventNotificationList->activeNotifyThreads == 0*listCount(&pEventNotificationList->pendingEventNotifyList) == 0**listCount(&pEventNotificationList->pendingEventNotifyList) == 0*listCount(&pEventNotificationList->eventNotificationList) == 0**listCount(&pEventNotificationList->eventNotificationList) == 0*pEventNotificationList != NULL**pEventNotificationList != NULL*pEventNotificationList->pSpinlock != NULL**pEventNotificationList->pSpinlock != NULL*activeNotifyThreads*pRmInternalClient**pRmInternalClient*globalLockStressCounter*gpuLockStressCounter*clientLockStressCounter*internalClientLockStressCounter*call to osGetRandomBytes*call to 
updateLockStressCountersInternal*call to serverIsClientLocked*serverIsClientLocked(&g_resServ, pClient->hClient)*src/kernel/rmapi/lock_stress.c**serverIsClientLocked(&g_resServ, pClient->hClient)**src/kernel/rmapi/lock_stress.c*pRmApi->Control(pRmApi, pResource->hInternalClient, pResource->hInternalLockStressObject, internalCmd, &internalParams, sizeof(internalParams))**pRmApi->Control(pRmApi, pResource->hInternalClient, pResource->hInternalLockStressObject, internalCmd, &internalParams, sizeof(internalParams))*call to updateLockStressCounters*lockStressCounter*pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &pResource->hInternalClient, sizeof(pResource->hInternalClient))**pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &pResource->hInternalClient, sizeof(pResource->hInternalClient))*pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalClient, &pResource->hInternalDevice, NV01_DEVICE_0, &nv0080AllocParams, sizeof(nv0080AllocParams))**pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalClient, &pResource->hInternalDevice, NV01_DEVICE_0, &nv0080AllocParams, sizeof(nv0080AllocParams))*pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalDevice, &pResource->hInternalSubdevice, NV20_SUBDEVICE_0, &nv2080AllocParams, sizeof(nv2080AllocParams))**pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalDevice, &pResource->hInternalSubdevice, NV20_SUBDEVICE_0, &nv2080AllocParams, sizeof(nv2080AllocParams))*pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalSubdevice, &pResource->hInternalLockStressObject, LOCK_STRESS_OBJECT, NULL, 0)**pRmApi->Alloc(pRmApi, pResource->hInternalClient, pResource->hInternalSubdevice, &pResource->hInternalLockStressObject, LOCK_STRESS_OBJECT, NULL, 0)*pParams->pClient != NULL*src/kernel/rmapi/lock_test.c**pParams->pClient != 
NULL**src/kernel/rmapi/lock_test.c*clientGetResourceRef(pParams->pClient, pParams->hParent, &pParentRef)**clientGetResourceRef(pParams->pClient, pParams->hParent, &pParentRef)*pParentGpuRes**pParentGpuRes*pParentGpuRes != NULL**pParentGpuRes != NULL*pSrcObj*NVRM: Invalid source object **NVRM: Invalid source object *rmGpuGroupLockIsOwner(pParentGpu->gpuInstance, GPU_LOCK_GRP_DEVICE, &gpuMask)**rmGpuGroupLockIsOwner(pParentGpu->gpuInstance, GPU_LOCK_GRP_DEVICE, &gpuMask)*rmGpuGroupLockIsOwner(0, GPU_LOCK_GRP_ALL, &gpuMask)**rmGpuGroupLockIsOwner(0, GPU_LOCK_GRP_ALL, &gpuMask)*gpuMask == rmGpuLocksGetOwnedMask()**gpuMask == rmGpuLocksGetOwnedMask()*src/kernel/rmapi/mapping.c**src/kernel/rmapi/mapping.c*call to rmapiUnmapWithSecInfo*NVRM: Nv04Unmap: client:0x%x device:0x%x context:0x%x **NVRM: Nv04Unmap: client:0x%x device:0x%x context:0x%x *NVRM: Nv04Unmap: flags:0x%x dmaOffset:0x%08llx size:0x%llx **NVRM: Nv04Unmap: flags:0x%x dmaOffset:0x%08llx size:0x%llx *call to _rmapiRmUnmapMemoryDma*NVRM: Nv04Unmap: Unmap complete **NVRM: Nv04Unmap: Unmap complete *NVRM: Nv04Unmap: ummap failed; status: %s (0x%08x) **NVRM: Nv04Unmap: ummap failed; status: %s (0x%08x) *call to rmapiMapWithSecInfo*NVRM: Nv04Map: client:0x%x device:0x%x context:0x%x memory:0x%x flags:0x%x flags2:0x%x **NVRM: Nv04Map: client:0x%x device:0x%x context:0x%x memory:0x%x flags:0x%x flags2:0x%x *NVRM: Nv04Map: offset:0x%llx length:0x%llx dmaOffset:0x%08llx **NVRM: Nv04Map: offset:0x%llx length:0x%llx dmaOffset:0x%08llx *NVRM: MMU_PROFILER Nv04Map 0x%x **NVRM: MMU_PROFILER Nv04Map 0x%x *hMapper*call to serverInterMap*NVRM: Nv04Map: map complete **NVRM: Nv04Map: map complete *NVRM: Nv04Map: dmaOffset: 0x%08llx **NVRM: Nv04Map: dmaOffset: 0x%08llx *NVRM: Nv04Map: map failed; status: %s (0x%08x) **NVRM: Nv04Map: map failed; status: %s (0x%08x) *serverGetClientUnderLock(&g_resServ, pParms->hClient, &pRsClient)**serverGetClientUnderLock(&g_resServ, pParms->hClient, &pRsClient)*call to serverInterUnmap*call to 
_getMappingPageSize*call to virtmemReserveMempool_IMPL*virtmemReserveMempool(pVirtualMemory, pGpu, pDevice, pParams->length, pageSize)**virtmemReserveMempool(pVirtualMemory, pGpu, pDevice, pParams->length, pageSize)*memInterMapParams*pSrcMemDesc != NULL**pSrcMemDesc != NULL*NVRM: Mapping offset 0x%llX or length 0x%llX out of bounds! **NVRM: Mapping offset 0x%llX or length 0x%llX out of bounds! *NVRM: Attempting to map READ_ONLY surface as READ_WRITE / WRITE_ONLY! **NVRM: Attempting to map READ_ONLY surface as READ_WRITE / WRITE_ONLY! *src/kernel/rmapi/mapping_cpu.c**src/kernel/rmapi/mapping_cpu.c*call to rmapiUnmapFromCpuWithSecInfo*NVRM: Nv04UnmapMemory: client:0x%x device:0x%x memory:0x%x pLinearAddr:%p flags:0x%x **NVRM: Nv04UnmapMemory: client:0x%x device:0x%x memory:0x%x pLinearAddr:%p flags:0x%x *rmUnmapParams*call to serverUnmap*NVRM: Nv04UnmapMemory: unmap complete **NVRM: Nv04UnmapMemory: unmap complete *NVRM: Nv04UnmapMemory: unmap failed; status: %s (0x%08x) **NVRM: Nv04UnmapMemory: unmap failed; status: %s (0x%08x) *call to rmapiMapToCpuWithSecInfoV2*NVRM: Nv04MapMemory: client:0x%x device:0x%x memory:0x%x **NVRM: Nv04MapMemory: client:0x%x device:0x%x memory:0x%x *NVRM: Nv04MapMemory: offset: %llx length: %llx flags:0x%x **NVRM: Nv04MapMemory: offset: %llx length: %llx flags:0x%x *NVRM: MMU_PROFILER Nv04MapMemory 0x%x **NVRM: MMU_PROFILER Nv04MapMemory 0x%x *rmMapParams*call to serverMap*NVRM: Nv04MapMemory: complete **NVRM: Nv04MapMemory: complete *NVRM: Nv04MapMemory: *ppCpuVirtAddr:%p **NVRM: Nv04MapMemory: *ppCpuVirtAddr:%p *NVRM: Nv04MapMemory: map failed; status: %s (0x%08x) **NVRM: Nv04MapMemory: map failed; status: %s (0x%08x) *pCpuVirtAddrNvP64**pCpuVirtAddrNvP64*pProcessHandle**pProcessHandle*call to osDetachFromProcess***pProcessHandle*clientGetResourceRef(staticCast(pClient, RsClient), hMemory, &pMemoryRef)**clientGetResourceRef(staticCast(pClient, RsClient), hMemory, &pMemoryRef)*hParent != hClient**hParent != 
hClient*gpuGetByRef(pUnmapParams->pLockInfo->pContextRef, &bBroadcast, &pGpu)**gpuGetByRef(pUnmapParams->pLockInfo->pContextRef, &bBroadcast, &pGpu)*hParent == hClient**hParent == hClient*call to osAttachToProcess*clientGetResourceRef(staticCast(pClient, RsClient), pMapParams->hMemory, &pMemoryRef)**clientGetResourceRef(staticCast(pClient, RsClient), pMapParams->hMemory, &pMemoryRef)*hContext*(memdescGetPteKind(pMemDesc) == memmgrGetHwPteKindFromSwPteKind_HAL(pGpu, pMemoryManager, RM_DEFAULT_PTE_KIND)) && (!memdescGetFlag(pMemDesc, MEMDESC_FLAGS_ENCRYPTED))**(memdescGetPteKind(pMemDesc) == memmgrGetHwPteKindFromSwPteKind_HAL(pGpu, pMemoryManager, RM_DEFAULT_PTE_KIND)) && (!memdescGetFlag(pMemDesc, MEMDESC_FLAGS_ENCRYPTED))*call to RmUnmapBusAperture*NVRM: Unmapping from NVLINK handle = 0x%x, addr= 0x%llx **NVRM: Unmapping from NVLINK handle = 0x%x, addr= 0x%llx *pMapParams->pLockInfo != NULL**pMapParams->pLockInfo != NULL*gpuGetByRef(pContextRef, &bBroadcast, &pGpu)**gpuGetByRef(pContextRef, &bBroadcast, &pGpu)*pMemoryInfo != NULL**pMemoryInfo != NULL*NVRM: CPU mapping not supported for addressSpace: 0x%x **NVRM: CPU mapping not supported for addressSpace: 0x%x *NVRM: BAR1 mapping to CPR vidmem not supported **NVRM: BAR1 mapping to CPR vidmem not supported *rmapiGetEffectiveAddrSpace(pGpu, memdescGetMemDescFromGpu(pMemDesc, pGpu), pMapParams->flags, &effectiveAddrSpace)**rmapiGetEffectiveAddrSpace(pGpu, memdescGetMemDescFromGpu(pMemDesc, pGpu), pMapParams->flags, &effectiveAddrSpace)*effectiveAddrSpace*memArea.pRanges != NULL**memArea.pRanges != NULL*NVRM: NVLINK mapping allocated: AtsBase=0x%llx, _pteArray[0]=0x%llx, mappedCpuAddr=0x%llx, length=%d **NVRM: NVLINK mapping allocated: AtsBase=0x%llx, _pteArray[0]=0x%llx, mappedCpuAddr=0x%llx, length=%d *NVRM: Need BAR mapping on coherent link! FAIL!! **NVRM: Need BAR mapping on coherent link! FAIL!! 
*pGpu->busInfo.gpuPhysFbAddr**pGpu->busInfo.gpuPhysFbAddr*bUseMemArea*call to memdescGetPteAdjust*fbAddr*NVRM: %s created. CPU Virtual Address: %p **NVRM: %s created. CPU Virtual Address: %p *Direct mapping**Direct mapping*Mapping**Mapping*call to kbusGetEffectiveAddressSpace_STATIC_DISPATCH*pMemCtxRef*DevMemoryTable*call to virtmemMatchesVASpace_IMPL*pDmaMapping != NULL*src/kernel/rmapi/mapping_list.c**pDmaMapping != NULL**src/kernel/rmapi/mapping_list.c*btreeSearch(dmaOffset, &pNode, pVirtualMemory->pDmaMappingList)**btreeSearch(dmaOffset, &pNode, pVirtualMemory->pDmaMappingList)*pDmaMapping->gpuMask == gpuMask**pDmaMapping->gpuMask == gpuMask*pDmaMappingPrev*pDmaMappingNext**pDmaMappingNext*btreeUnlink(pNode, &pVirtualMemory->pDmaMappingList)**btreeUnlink(pNode, &pVirtualMemory->pDmaMappingList)**pDmaMappingPrev*pDmaMappingFirst**pDmaMappingFirst*pDmaMappingFirst->DmaOffset == pDmaMapping->DmaOffset && pDmaMappingFirst->pMemDesc->Size == pDmaMapping->pMemDesc->Size**pDmaMappingFirst->DmaOffset == pDmaMapping->DmaOffset && pDmaMappingFirst->pMemDesc->Size == pDmaMapping->pMemDesc->Size*(pDmaMappingFirst->gpuMask & (pDmaMappingFirst->gpuMask - 1)) == 0**(pDmaMappingFirst->gpuMask & (pDmaMappingFirst->gpuMask - 1)) == 0*(pDmaMapping->gpuMask & (pDmaMapping->gpuMask - 1)) == 0**(pDmaMapping->gpuMask & (pDmaMapping->gpuMask - 1)) == 0*pDmaMappingCurrent**pDmaMappingCurrent*(pDmaMapping->gpuMask & pDmaMappingCurrent->gpuMask) == 0**(pDmaMapping->gpuMask & pDmaMappingCurrent->gpuMask) == 0*NVRM: Failed to insert new mapping node for range 0x%llX-0x%llX! **NVRM: Failed to insert new mapping node for range 0x%llX-0x%llX! *Flags2*mapIt*call to ccslLogEncryption_IMPL*call to ccslIncrementIv_IMPL*call to ccslEncryptWithIv_IMPL*call to ccslRotateIv_IMPL*src/kernel/rmapi/nv_gpu_ops.c*NVRM: Attempting to synchronously rotate %u keys. **src/kernel/rmapi/nv_gpu_ops.c**NVRM: Attempting to synchronously rotate %u keys. 
*serverGetClientUnderLock(&g_resServ, contextList[0]->ctx->hClient, &pChannelClient)**serverGetClientUnderLock(&g_resServ, contextList[0]->ctx->hClient, &pChannelClient)*CliGetKernelChannel(pChannelClient, contextList[0]->ctx->hChannel, &pKernelChannel)**CliGetKernelChannel(pChannelClient, contextList[0]->ctx->hChannel, &pKernelChannel)*NVRM: Unable to acquire GPU lock for key rotation. Returning early. **NVRM: Unable to acquire GPU lock for key rotation. Returning early. *call to nvGpuOpsKeyRotationChannelDisable*nvGpuOpsKeyRotationChannelDisable(contextList, contextListCount)**nvGpuOpsKeyRotationChannelDisable(contextList, contextListCount)*call to ccslContextUpdate_KERNEL*ccslContextUpdate(contextList[index]->ctx)**ccslContextUpdate(contextList[index]->ctx)*contextList != NULL**contextList != NULL*contextListCount != 0**contextListCount != 0*error != NV_OK**error != NV_OK*NVRM: uvm encountered global fatal error 0x%x, requiring os reboot to recover. **NVRM: uvm encountered global fatal error 0x%x, requiring os reboot to recover. *pPagingChannelRpcMutex*srcVaSpace && device**srcVaSpace && device*call to getHandleForVirtualAddr*NVRM: %s: getHandleForVirtualAddr returned error %s! **NVRM: %s: getHandleForVirtualAddr returned error %s! *call to _nvGpuOpsLocksAcquireAll*NVRM: %s: _nvGpuOpsLocksAcquire returned error %s! **NVRM: %s: _nvGpuOpsLocksAcquire returned error %s! *NVRM: %s: serverGetClientUnderLock returned error %s! **NVRM: %s: serverGetClientUnderLock returned error %s! *NVRM: %s: deviceGetByHandle returned error %s! **NVRM: %s: deviceGetByHandle returned error %s! *call to _nvGpuOpsLocksRelease*call to _nvGpuOpsLocksAcquire*NVRM: %s: rmapiLockAcquire returned error %s! **NVRM: %s: rmapiLockAcquire returned error %s! **errorNotifier*NVRM: %s: UnmapFromCpu returned error %s! **NVRM: %s: UnmapFromCpu returned error %s! 
[Strings/symbol dump: assert expressions, NV_PRINTF log messages, and identifier names from the NVIDIA open GPU kernel modules Resource Manager — nvGpuOps/UVM interop (channel retention, fault buffers, P2P/NVLink caps, external allocation PTEs, VA space and memory descriptors), src/kernel/rmapi/param_copy.c, src/kernel/rmapi/resource.c, src/kernel/rmapi/rmapi.c, and src/kernel/rmapi/rmapi_cache.c.]
0x%x *NVRM: cached gpu attr for 0x%x 0x%x: 0x%llx **NVRM: cached gpu attr for 0x%x 0x%x: 0x%llx *call to _getGpuInstFromGpuAttr*call to _getCacheGpuFlagsFromGpuAttr*call to _getGpuAttrFromGpu*NVRM: gpu attr set for 0x%x 0x%x: 0x%llx **NVRM: gpu attr set for 0x%x 0x%x: 0x%llx *NVRM: set existing gpu attr 0x%x 0x%x was 0x%llx is 0x%llx **NVRM: set existing gpu attr 0x%x 0x%x was 0x%llx is 0x%llx *RmEnableCacheableControls**RmEnableCacheableControls*NVRM: using cache mode %d **NVRM: using cache mode %d *NVRM: failed to create rw lock **NVRM: failed to create rw lock *cacheEntryIdx*pCacheTable*cachedEntries**cachedEntries*pCacheTable->cachedEntries[cacheEntryIdx].displayType == pParams->displayType*src/kernel/rmapi/rmapi_cache_handlers.c**pCacheTable->cachedEntries[cacheEntryIdx].displayType == pParams->displayType**src/kernel/rmapi/rmapi_cache_handlers.c*displayType*(src->sorIndex == dst->sorIndex) && (src->maxLinkRate == dst->maxLinkRate) && (src->dpVersionsSupported == dst->dpVersionsSupported) && (src->UHBRSupportedByGpu == dst->UHBRSupportedByGpu) && (src->bIsMultistreamSupported == dst->bIsMultistreamSupported) && (src->bIsSCEnabled == dst->bIsSCEnabled) && (src->bHasIncreasedWatermarkLimits == dst->bHasIncreasedWatermarkLimits) && (src->isSingleHeadMSTSupported == dst->isSingleHeadMSTSupported) && (src->bFECSupported == dst->bFECSupported) && (src->bIsTrainPhyRepeater == dst->bIsTrainPhyRepeater) && (src->bOverrideLinkBw == dst->bOverrideLinkBw) && (src->bUseRgFlushSequence == dst->bUseRgFlushSequence) && (src->bSupportDPDownSpread == dst->bSupportDPDownSpread) && (src->bAvoidHBR3 == dst->bAvoidHBR3) && (src->DSC.bDscSupported == dst->DSC.bDscSupported) && (src->DSC.encoderColorFormatMask == dst->DSC.encoderColorFormatMask) && (src->DSC.lineBufferSizeKB == dst->DSC.lineBufferSizeKB) && (src->DSC.rateBufferSizeKB == dst->DSC.rateBufferSizeKB) && (src->DSC.bitsPerPixelPrecision == dst->DSC.bitsPerPixelPrecision) && (src->DSC.maxNumHztSlices == 
dst->DSC.maxNumHztSlices) && (src->DSC.lineBufferBitDepth == dst->DSC.lineBufferBitDepth)**(src->sorIndex == dst->sorIndex) && (src->maxLinkRate == dst->maxLinkRate) && (src->dpVersionsSupported == dst->dpVersionsSupported) && (src->UHBRSupportedByGpu == dst->UHBRSupportedByGpu) && (src->bIsMultistreamSupported == dst->bIsMultistreamSupported) && (src->bIsSCEnabled == dst->bIsSCEnabled) && (src->bHasIncreasedWatermarkLimits == dst->bHasIncreasedWatermarkLimits) && (src->isSingleHeadMSTSupported == dst->isSingleHeadMSTSupported) && (src->bFECSupported == dst->bFECSupported) && (src->bIsTrainPhyRepeater == dst->bIsTrainPhyRepeater) && (src->bOverrideLinkBw == dst->bOverrideLinkBw) && (src->bUseRgFlushSequence == dst->bUseRgFlushSequence) && (src->bSupportDPDownSpread == dst->bSupportDPDownSpread) && (src->bAvoidHBR3 == dst->bAvoidHBR3) && (src->DSC.bDscSupported == dst->DSC.bDscSupported) && (src->DSC.encoderColorFormatMask == dst->DSC.encoderColorFormatMask) && (src->DSC.lineBufferSizeKB == dst->DSC.lineBufferSizeKB) && (src->DSC.rateBufferSizeKB == dst->DSC.rateBufferSizeKB) && (src->DSC.bitsPerPixelPrecision == dst->DSC.bitsPerPixelPrecision) && (src->DSC.maxNumHztSlices == dst->DSC.maxNumHztSlices) && (src->DSC.lineBufferBitDepth == dst->DSC.lineBufferBitDepth)*cacheEntry*internalDisplaysMask*availableInternalDisplaysMask*displayMaskDDC*pSerializedParams**pSerializedParams*pDeserializedParams**pDeserializedParams***pSerializedParams***pDeserializedParams*serializedSize*deserializedSize*call to FinnRmApiGetUnserializedSize*pDeserBuffer**pDeserBuffer***pDeserBuffer*pSerBuffer**pSerBuffer*call to FinnRmApiDeserializeUp*src/kernel/rmapi/rmapi_finn.c*NVRM: Deserialization failed for classId 0x%06x with status %s (0x%02x) **src/kernel/rmapi/rmapi_finn.c**NVRM: Deserialization failed for classId 0x%06x with status %s (0x%02x) *call to FinnRmApiSerializeUp*NVRM: Serialization failed for classId 0x%06x with status %s (0x%02x) **NVRM: Serialization failed for classId 
0x%06x with status %s (0x%02x) *pCallContext->deserializedSize == unserializedSize**pCallContext->deserializedSize == unserializedSize*pDeserParams**pDeserParams***pDeserParams*call to FinnRmApiDeserializeDown*bReserialize*call to FinnRmApiGetSerializedSize*pCallContext->serializedSize == serializedSize**pCallContext->serializedSize == serializedSize*call to FinnRmApiSerializeDown*bLocalSerialization*NVRM: Deserialization failed for cmd 0x%06x with status %s (0x%02x) **NVRM: Deserialization failed for cmd 0x%06x with status %s (0x%02x) *NVRM: Serialization failed for cmd 0x%06x with status %s (0x%02x) **NVRM: Serialization failed for cmd 0x%06x with status %s (0x%02x) *src/kernel/rmapi/rmapi_specific.c**src/kernel/rmapi/rmapi_specific.c*pNv0005Params*hSrcResource*ControlPrefetch*pMethodDef*pResDesc != NULL*src/kernel/rmapi/rmapi_utils.c**pResDesc != NULL**src/kernel/rmapi/rmapi_utils.c**phClient != NV01_NULL_OBJECT***phClient != NV01_NULL_OBJECT*pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &hClient, sizeof(hClient))**pRmApi->AllocWithHandle(pRmApi, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_NULL_OBJECT, NV01_ROOT, &hClient, sizeof(hClient))*serverutilGenResourceHandle(hClient, &hDevice)**serverutilGenResourceHandle(hClient, &hDevice)*pRmApi->AllocWithHandle(pRmApi, hClient, hClient, hDevice, NV01_DEVICE_0, &nv0080AllocParams, sizeof(nv0080AllocParams))**pRmApi->AllocWithHandle(pRmApi, hClient, hClient, hDevice, NV01_DEVICE_0, &nv0080AllocParams, sizeof(nv0080AllocParams))*serverutilGenResourceHandle(hClient, &hSubDevice)**serverutilGenResourceHandle(hClient, &hSubDevice)*pRmApi->AllocWithHandle(pRmApi, hClient, hDevice, hSubDevice, NV20_SUBDEVICE_0, &nv2080AllocParams, sizeof(nv2080AllocParams))**pRmApi->AllocWithHandle(pRmApi, hClient, hDevice, hSubDevice, NV20_SUBDEVICE_0, &nv2080AllocParams, sizeof(nv2080AllocParams))*src/kernel/rmapi/rpc_common.c**src/kernel/rmapi/rpc_common.c*NVRM: NVRM_RPC: RPC buffer not 
initialized. Function %d **NVRM: NVRM_RPC: RPC buffer not initialized. Function %d *NVRM: NVRM_RPC: called with NULL pRpc. Function %d. **NVRM: NVRM_RPC: called with NULL pRpc. Function %d. *header_version*rpc_result*rpc_result_private*cpuRmGfid*pDevice->pKernelHostVgpuDevice->bGspPluginTaskInitialized**pDevice->pKernelHostVgpuDevice->bGspPluginTaskInitialized*spare*NVRM: cannot allocate memory for OBJRPC (instance %d) **NVRM: cannot allocate memory for OBJRPC (instance %d) *call to rpcSetIpVersion*call to rpcRmApiSetup*NVRM: rpcConstruct failed **NVRM: rpcConstruct failed *src/kernel/rmapi/rs_utils.c**src/kernel/rmapi/rs_utils.c*call to serverShareIterNext*call to serverShareIter*pScopedRef*hFoundParent*pGpuResSrc*pGpuResDst*src/kernel/rmapi/sharing.c**src/kernel/rmapi/sharing.c*call to rmapiShareWithSecInfo*NVRM: Nv04Share: hClient:0x%x hObject:0x%x pSharePolicy:%p **NVRM: Nv04Share: hClient:0x%x hObject:0x%x pSharePolicy:%p *lockInfo.pClient == NULL**lockInfo.pClient == NULL*call to _RmShare*NVRM: ...resource share complete **NVRM: ...resource share complete *NVRM: Nv04Share: share failed; status: %s (0x%08x) **NVRM: Nv04Share: share failed; status: %s (0x%08x) *NVRM: Nv04Share: hClient:0x%x hObject:0x%x pSharePolicy:%p **NVRM: Nv04Share: hClient:0x%x hObject:0x%x pSharePolicy:%p *call to serverShareResourceAccess*call to rmapiDupObjectWithSecInfo*NVRM: Nv04DupObject: hClient:0x%x hParent:0x%x hObject:0x%x **NVRM: Nv04DupObject: hClient:0x%x hParent:0x%x hObject:0x%x *NVRM: Nv04DupObject: hClientSrc:0x%x hObjectSrc:0x%x flags:0x%x **NVRM: Nv04DupObject: hClientSrc:0x%x hObjectSrc:0x%x flags:0x%x *call to _RmDupObject*NVRM: ...handle dup complete **NVRM: ...handle dup complete *NVRM: Nv04DupObject: dup failed; status: %s (0x%08x) **NVRM: Nv04DupObject: dup failed; status: %s (0x%08x) *NVRM: Nv04DupObject: hClient:0x%x hParent:0x%x hObject:0x%x **NVRM: Nv04DupObject: hClient:0x%x hParent:0x%x hObject:0x%x *hResourceSrc*hClientDst*hParentDst*hResourceDst*call to 
serverCopyResource*!gpuIsSriovEnabled(pGpu) || IS_SRIOV_HEAVY(pGpu)*src/kernel/vgpu/objvgpu.c**!gpuIsSriovEnabled(pGpu) || IS_SRIOV_HEAVY(pGpu)**src/kernel/vgpu/objvgpu.c*pDevice->pKernelHostVgpuDevice**pDevice->pKernelHostVgpuDevice*(pKernelHostVgpuDevice != NULL) == !!(pDevice->deviceAllocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE)**(pKernelHostVgpuDevice != NULL) == !!(pDevice->deviceAllocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE)*call to gpuGetGfidState_IMPL*gpuGetGfidState(pGpu, *pGfid, &gfidState)**gpuGetGfidState(pGpu, *pGfid, &gfidState)*(gfidState != GFID_FREE)**(gfidState != GFID_FREE)*(pDevice->pKernelHostVgpuDevice != NULL) == !!(pDevice->deviceAllocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE)**(pDevice->pKernelHostVgpuDevice != NULL) == !!(pDevice->deviceAllocFlags & NV_DEVICE_ALLOCATION_FLAGS_HOST_VGPU_DEVICE)*pHostVgpuDevice*call to resservGetContextRefByType*NVRM: Overwriting big page size to 64K **NVRM: Overwriting big page size to 64K *PDB_PROP_GPU_VGPU_BIG_PAGE_SIZE_64K*RMSetVGPUVersionMax**RMSetVGPUVersionMax*RMSetVGPUVersionMin**RMSetVGPUVersionMin*call to freeRpcInfrastructure_VGPU*NVRM: NVRM_RPC: _freeRpcInfrastructure: failed. **NVRM: NVRM_RPC: _freeRpcInfrastructure: failed. *call to teardownSysmemPfnBitMap*NVRM: vGPU instances more than %d are not supported **NVRM: vGPU instances more than %d are not supported *NVRM: : %d vGPU instance already allocated **NVRM: : %d vGPU instance already allocated *NVRM: cannot allocate memory for OBJVGPU (instance %d) **NVRM: cannot allocate memory for OBJVGPU (instance %d) *call to initRpcInfrastructure_VGPU*NVRM: NVRM_RPC: _initRpcInfrastructure: failed. **NVRM: NVRM_RPC: _initRpcInfrastructure: failed. *NVRM: NVRM_RPC: GET_STATIC_DATA : failed. **NVRM: NVRM_RPC: GET_STATIC_DATA : failed. 
*call to vgpuGetUsmType*gspResponseBuf*vgpuConfigUsmType*numQueries*statusInfo*queries**queries*chipletType*chipletIndex*poolInfosCount*poolInfos**poolInfos*numCredits*poolCount*totalHeapSize*poolStats**poolStats*allocatedSize*peakAllocatedSize*managedSize*peakAllocationCount*largestFreeChunkSize*reserveParams*cwd*creditInfo**creditInfo*bMembytesPollingRequired*hChannelGroup*bEnableAllTpcs*smID*bSingleStep*bSetMaxFreq*samplingMode*queryType*queryParams*fbp*ltc*fbpIndex*lts*rop*dmLtc*dmLts*dmFbpa*dmRop*dmFbpaSubp*fbpaSubp*fbpLogicalMap*sysl2Ltc*sysIdx*pac*logicalLtc*dmLogicalLtc*sysl2Lts*queryData*chipletGpcMapData*tpcMaskData*ppcMaskData*partitionGpcMapData*partitionChipletSyspipeData*dmGpcMaskData*ropMaskData*gfxGpcMaskData*regOp*regType*regQuad*regGroupMask*regSubGroupMask*regAndNMaskLo*regAndNMaskHi*regValueLo*regValueHi*bytesConsumed*bUpdateAvailableBytes*bReturnPut*grpACount*grpBCount*tableType*stopTriggerType*hTargetChannel*hVirtMemory*exceptionMask*numSMsToClear*hDmaHandle*vMemPtr*zcullMode*gfxpPreemptMode*cilpPreemptMode*vMemPtrs**vMemPtrs*bManualTimeout*bOnlyDisableScheduling*bRewindGpPut***pRunlistPreemptEvent*stencil*bSkipL2Table*colorFB**colorFB*colorDS**colorDS*indexSize*indexUsed*valType*bCtsIdValid*currentFreq*defaultFreq*minFreq*maxFreq*bMode*clientActiveMask*bRegkeyLimitRatedTdp*inv*fbpEnMask*ltcEnMask*ltsEnMask*fbpaEnMask*ropEnMask*fbpaSubpEnMask*fbpLogicalIndex*sysl2LtcEnMask*pacEnMask*logicalLtcEnMask*sysl2LtsEnMask*sysEnMask*gpcCountData*chipletGpcMap*syspipeMaskData*chipletSyspipeMask*physSyspipeIdCount*physSyspipeId**physSyspipeId*gpcEnMask*partitionSyspipeIdData*ropMask*gfxSyspipeMaskData*putPtr*bOverflowStatus*bPassed*bDirect*enabledLinks_s**enabledLinks_s*enabledLinks_d**enabledLinks_d*deviceInfo_d**deviceInfo_d*deviceInfo_s**deviceInfo_s*bIndexValid*smErrorState*hwwGlobalEsr*hwwWarpEsr*hwwWarpEsrPc*hwwGlobalEsrReportMask*hwwWarpEsrReportMask*hwwEsrAddr*hwwWarpEsrPc64*hwwCgaEsr*hwwCgaEsrReportMask*waitForEvent*hResidentChannel*smIdDest*smEr
rorStateArray**smErrorStateArray*brands*call to serialize_NV2080_CTRL_INTERNAL_GPU_CHECK_CTS_ID_VALID_PARAMS_v2B_12*call to _issueRpcAndWait*src/kernel/vgpu/rpc.c*NVRM: RPC to check CTS ID validity failed with error 0x%x **src/kernel/vgpu/rpc.c**NVRM: RPC to check CTS ID validity failed with error 0x%x *call to deserialize_NV2080_CTRL_INTERNAL_GPU_CHECK_CTS_ID_VALID_PARAMS_v2B_12*call to serialize_NV2080_CTRL_CMD_GSP_GET_LIBOS_HEAP_STATS_PARAMS_v29_02*call to deserialize_NV2080_CTRL_CMD_GSP_GET_LIBOS_HEAP_STATS_PARAMS_v29_02*call to serialize_NV2080_CTRL_CMD_GSP_GET_VGPU_HEAP_STATS_PARAMS_v28_06*call to deserialize_NV2080_CTRL_CMD_GSP_GET_VGPU_HEAP_STATS_PARAMS_v28_06*call to serialize_NV2080_CTRL_CMD_GSP_GET_VGPU_HEAP_STATS_PARAMS_v28_03*call to deserialize_NV2080_CTRL_CMD_GSP_GET_VGPU_HEAP_STATS_PARAMS_v28_03*ctrl_cmd_internal_control_gsp_trace_v28_00*ctrl_cmd_internal_gpu_start_fabric_probe_v25_09*call to _issueRpcAsync*free_v03_00*NVRM: Calling RPC RmFree without adequate locks! **NVRM: Calling RPC RmFree without adequate locks! *RPC locking violation - see kernel_log.txt**RPC locking violation - see kernel_log.txt*NVRM: GspRmFree failed: hClient=0x%08x; hObject=0x%08x; paramsStatus=0x%08x; status=0x%08x **NVRM: GspRmFree failed: hClient=0x%08x; hObject=0x%08x; paramsStatus=0x%08x; status=0x%08x *dup_object_v03_00*NVRM: Calling RPC RmDupObject without adequate locks! **NVRM: Calling RPC RmDupObject without adequate locks! *NVRM: GspRmDupObject failed: hClient=0x%08x; hParent=0x%08x; hObject=0x%08x; hClientSrc=0x%08x; hObjectSrc=0x%08x; flags=0x%08x; paramsStatus=0x%08x; status=0x%08x **NVRM: GspRmDupObject failed: hClient=0x%08x; hParent=0x%08x; hObject=0x%08x; hClientSrc=0x%08x; hObjectSrc=0x%08x; flags=0x%08x; paramsStatus=0x%08x; status=0x%08x *NVRM: Calling RPC RmAlloc 0x%04x without adequate locks! **NVRM: Calling RPC RmAlloc 0x%04x without adequate locks! 
*rmapiGetClassAllocParamSize(&paramsSize, NV_PTR_TO_NvP64(pAllocParams), &bNullAllowed, hClass)**rmapiGetClassAllocParamSize(&paramsSize, NV_PTR_TO_NvP64(pAllocParams), &bNullAllowed, hClass)*NVRM: NULL allocation params not allowed for class 0x%x **NVRM: NULL allocation params not allowed for class 0x%x *rpcWriteCommonHeader(pGpu, pRpc, NV_VGPU_MSG_FUNCTION_GSP_RM_ALLOC, sizeof(rpc_gsp_rm_alloc_v03_00))**rpcWriteCommonHeader(pGpu, pRpc, NV_VGPU_MSG_FUNCTION_GSP_RM_ALLOC, sizeof(rpc_gsp_rm_alloc_v03_00))*call to serverSerializeAllocDown*serverSerializeAllocDown(&callContext, hClass, &pAllocParams, &paramsSize, &flags)**serverSerializeAllocDown(&callContext, hClass, &pAllocParams, &paramsSize, &flags)*memCopyResult**memCopyResult***memCopyResult*call to serverDeserializeAllocUp*serverDeserializeAllocUp(&callContext, hClass, &pAllocParams, &paramsSize, &flags)**serverDeserializeAllocUp(&callContext, hClass, &pAllocParams, &paramsSize, &flags)*NVRM: GspRmAlloc failed: hClient=0x%08x; hParent=0x%08x; hObject=0x%08x; hClass=0x%08x; paramsSize=0x%08x; paramsStatus=0x%08x; status=0x%08x **NVRM: GspRmAlloc failed: hClient=0x%08x; hParent=0x%08x; hObject=0x%08x; hClass=0x%08x; paramsSize=0x%08x; paramsStatus=0x%08x; status=0x%08x *pOriginalParams**pOriginalParams*NVRM: Calling RPC RmControl 0x%08x without adequate locks! **NVRM: Calling RPC RmControl 0x%08x without adequate locks! 
*rmctrlInfoStatus*bCacheable*pRmApi == GPU_GET_PHYSICAL_RMAPI(pGpu)**pRmApi == GPU_GET_PHYSICAL_RMAPI(pGpu)*resCtrlFlags*bPreSerialized*!(bCacheable && (resCtrlFlags & NVOS54_FLAGS_FINN_SERIALIZED))**!(bCacheable && (resCtrlFlags & NVOS54_FLAGS_FINN_SERIALIZED))*rmctrlCacheStatus*call to rmapiControlCacheGetUnchecked*rpc_params_size*rpcWriteCommonHeader(pGpu, pRpc, NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL, rpc_params_size)**rpcWriteCommonHeader(pGpu, pRpc, NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL, rpc_params_size)*rmctrlAccessRight**large_message_copy*large_message_copy != NULL**large_message_copy != NULL*message_buffer_remaining*NVRM: Bad params: ptr %p size: 0x%x **NVRM: Bad params: ptr %p size: 0x%x *call to _issueRpcAndWaitLarge*NVRM: GspRmControl failed: hClient=0x%08x; hObject=0x%08x; cmd=0x%08x; paramsSize=0x%08x; paramsStatus=0x%08x; status=0x%08x **NVRM: GspRmControl failed: hClient=0x%08x; hObject=0x%08x; cmd=0x%08x; paramsSize=0x%08x; paramsStatus=0x%08x; status=0x%08x *NVRM: NVRM_RPC: Get FB usage info failed : %x **NVRM: NVRM_RPC: Get FB usage info failed : %x *fbFree*fbUsageParams*NVRM: NVRM_RPC: Host vGPU FB usage update failed : %x **NVRM: NVRM_RPC: Host vGPU FB usage update failed : %x *fixed_param_size <= pRpc->maxRpcSize**fixed_param_size <= pRpc->maxRpcSize*remainingMessageSize*call to osPackageRegistry*totalSize < pRpc->pMessageQueueInfo->commandQueueSize**totalSize < pRpc->pMessageQueueInfo->commandQueueSize*largeRpcBuffer**largeRpcBuffer*call to _issueRpcAsyncLarge*gsp_set_system_info_v17_00*NVRM: GSP_SET_SYSTEM_INFO parameters size (0x%x) exceed message_buffer size (0x%x) **NVRM: GSP_SET_SYSTEM_INFO parameters size (0x%x) exceed message_buffer size (0x%x) *rpcInfo*status == NV_OK || status == NV_ERR_NO_SUCH_DOMAIN**status == NV_OK || status == NV_ERR_NO_SUCH_DOMAIN*bFlrSupported*b64bBar0Supported*simAccessBufPhysAddr*notifyOpSharedSurfacePhysAddr*pcieAtomicsOpMask*pcieAtomicsCplDeviceCapMask*consoleMemSize*maxUserVa*call to 
clSyncWithGsp_IMPL*bSystemHasMux*bGpuBehindBridge*bUpstreamL0sUnsupported*bUpstreamL1Unsupported*bUpstreamL1PorSupported*bUpstreamL1PorMobileOnly*upstreamAddressValid*hypervisorType*gspVFInfo*bIsPrimary*bIsUnixHdmiFrlComplianceEnabled*bS0ixSupport*call to osGetGridCspSupport*gridBuildCsp*bPreserveVideoMemoryAllocations*bTdrEventSupported*bFeatureStretchVblankCapable*bWindowChannelAlwaysMapped*invalidate_tlb_v23_03*NVRM: Failed to invaldiate TLB rpc 0x%x **NVRM: Failed to invaldiate TLB rpc 0x%x *disable_channels_v1E_0B*get_encoder_capacity_v07_00*pParamStructPtr != NULL**pParamStructPtr != NULL*call to rpcGetEngineUtilization_v09_0C_GetPidList*rpc_param_header_size*NVRM: NVRM_RPC: SendDmaControl: requested %u bytes (but only room for %u) **NVRM: NVRM_RPC: SendDmaControl: requested %u bytes (but only room for %u) *get_engine_utilization_v1F_0E*call to engine_utilization_copy_params_to_rpc_buffer_v1F_0E*call to engine_utilization_copy_params_from_rpc_buffer_v1F_0E*get_engine_utilization_v*call to engine_utilization_copy_params_to_rpc_buffer_v09_0C*call to engine_utilization_copy_params_from_rpc_buffer_v09_0C*rpcPidStruct*rmPidStruct*passIndex*pidTable**pidTable*call to vgpuGspMakeBufferAddress*gspHibernateShrdBufInfo*addrBuf*setup_hibernation_buffer_v2A_06*NVRM: SetupHibernationBuffer RPC FAILURE **NVRM: SetupHibernationBuffer RPC FAILURE *NVRM: SetupHibernationBuffer RPC SUCCESS **NVRM: SetupHibernationBuffer RPC SUCCESS *call to _restoreHibernationDataNonGsp*call to _restoreHibernationDataGsp*hibernationData*headerLength*maxPayloadSize*rhd**pHibernationData*call to _saveHibernationDataNonGsp*call to _saveHibernationDataGsp*shd*totalHibernationDataSize*NVRM: No memory for hibernation buffer **NVRM: No memory for hibernation buffer *NVRM: RPC SETUP_HIBERNATION_BUFFER failed, status 0x%x **NVRM: RPC SETUP_HIBERNATION_BUFFER failed, status 0x%x *gspCtrlBuf*IsMoreHibernateDataRestore*NVRM: RPC RESTORE_HIBERNATION_DATA FAILURE, status 0x%x **NVRM: RPC 
RESTORE_HIBERNATION_DATA FAILURE, status 0x%x *call to _readGspHibernationBufPutDuringRestore*base_dst*write_bytes*call to _writeGspHibernationBufPutDuringRestore*call to _readGspHibernationBufGetDuringRestore*NVRM: RPC SAVE_HIBERNATION_DATA FAILURE, status 0x%x **NVRM: RPC SAVE_HIBERNATION_DATA FAILURE, status 0x%x *call to _readGspHibernationBufGetDuringSave*NVRM: Hibernation data size is more than MAX limit **NVRM: Hibernation data size is more than MAX limit *pTempBuffer**pTempBuffer*NVRM: No memory for hibernation buffer reallocation **NVRM: No memory for hibernation buffer reallocation *base_src*call to _writeGspHibernationBufGetDuringSave*call to _readGspHibernationBufPutDuringSave*unset_page_directory_v1E_05*set_page_directory_v1E_05*update_bar_pde_v15_00*get_gsp_static_info_v14_00**pSCI*NVRM: Gsp static info parameters size (0x%x) exceed message_buffer size (0x%x) **NVRM: Gsp static info parameters size (0x%x) exceed message_buffer size (0x%x) **rpcInfo*NVRM: vGPU consolidated static information parameters size (0x%x) exceed message_buffer size (0x%x) **NVRM: vGPU consolidated static information parameters size (0x%x) exceed message_buffer size (0x%x) *call to getConsolidatedGrRpcBufferSize*NVRM: no memory for temporary buffer **NVRM: no memory for temporary buffer *get_consolidated_gr_static_info_v1B_04*call to copyPayloadToGrStaticInfo*NVRM: Failed to copy the data from RPC to GR Static Info buffer. Status :%x **NVRM: Failed to copy the data from RPC to GR Static Info buffer. Status :%x *NVRM: vGPU static data parameters size (0x%x) exceed message_buffer size (0x%x) **NVRM: vGPU static data parameters size (0x%x) exceed message_buffer size (0x%x) *call to getStaticDataRpcBufferSize*NVRM: getStaticDataRpcBufferSize is failed. status: 0x%x **NVRM: getStaticDataRpcBufferSize is failed. status: 0x%x *pRpcPayload**pRpcPayload*get_static_data_v27_01*call to copyPayloadToStaticData*NVRM: Failed to copy the data from RPC to Static Info buffer. 
Status :%x **NVRM: Failed to copy the data from RPC to Static Info buffer. Status :%x *NVRM: NVRM_RPC: GET_CONSOLIDATED_GR_STATIC_INFO failed. **NVRM: NVRM_RPC: GET_CONSOLIDATED_GR_STATIC_INFO failed. *get_static_data_v25_0E*NVRM: RegOps RPC failed: Invalid regOp count - requested 0x%x regOps **NVRM: RegOps RPC failed: Invalid regOp count - requested 0x%x regOps *NVRM: NVRM_RPC: rpcGpuExecRegOps_v12_01: Insufficient space on message buffer **NVRM: NVRM_RPC: rpcGpuExecRegOps_v12_01: Insufficient space on message buffer *gpu_exec_reg_ops_v12_01*reg_op_params*operations**operations*NVRM: RegOps RPC failed: skipping 0x%x regOps **NVRM: RegOps RPC failed: skipping 0x%x regOps *regOpsExecuted*!bProfileRPC**!bProfileRPC*rpcDumpRec.pHead**rpcDumpRec.pHead*rpcProfilerBuffer**rpcProfilerBuffer*remainingEntryCount*outputEntryCount*NVRM: Unloading guest driver parameters size (0x%x) exceed message_buffer size (0x%x) **NVRM: Unloading guest driver parameters size (0x%x) exceed message_buffer size (0x%x) *unloading_guest_driver_v1F_07*cleanup_surface_v03_00*last_surface_info*last_surface*blankingEnabled*set_surface_properties_v07_07*isPrimary*surfaceType*surfaceBlockHeight*surfacePitch*rectX*rectY*rectWidth*rectHeight*surfaceKind*effectiveFbPageSize*NVRM: NVRM_RPC: PerfGetLevelInfo : List Size Exceeded for perfGetClkInfoList, currentSize: %u (maxAllowedSize: %u) **NVRM: NVRM_RPC: PerfGetLevelInfo : List Size Exceeded for perfGetClkInfoList, currentSize: %u (maxAllowedSize: %u) *rpc_buffer_params*call to serialize_NV2080_CTRL_PERF_GET_LEVEL_INFO_V2_PARAMS_v2B_0D*call to deserialize_NV2080_CTRL_PERF_GET_LEVEL_INFO_V2_PARAMS_v2B_0D*NVRM: NVRM_RPC: PerfGetLevelInfo : requested %u bytes (but only room for %u) **NVRM: NVRM_RPC: PerfGetLevelInfo : requested %u bytes (but only room for %u) *perf_get_level_info_v03_00*shared_memory*currPstate*guestDriverBranch*set_guest_system_info_ext_v25_1B**guestDriverBranch*set_guest_system_info_ext_v15_02*majorNum*minorNum*NVRM: NVRM_RPC: Skipping 
RPC version handshake for instance 0x%x **NVRM: NVRM_RPC: Skipping RPC version handshake for instance 0x%x *NVRM: NVRM_RPC: RPC version handshake already failed. Bailing out for device instance 0x%x **NVRM: NVRM_RPC: RPC version handshake already failed. Bailing out for device instance 0x%x *NVRM: NVRM_RPC: SetGuestSystemInfo: Insufficient space on message buffer **NVRM: NVRM_RPC: SetGuestSystemInfo: Insufficient space on message buffer *set_guest_system_info_v*guestDriverVersionBufferLength*guestDriverVersion**guestDriverVersion*guestVersionBufferLength*guestVersion**guestVersion*guestTitleBufferLength*guestTitle**guestTitle*guestClNum*vgxVersionMajorNum*vgxVersionMinorNum*NVRM: NVRM_RPC: SetGuestSystemInfo: Guest VGX version (%d.%d) is newer than the host VGX version (%d.%d) NVRM_RPC: SetGuestSystemInfo: Retrying with the VGX version requested by the host. **NVRM: NVRM_RPC: SetGuestSystemInfo: Guest VGX version (%d.%d) is newer than the host VGX version (%d.%d) NVRM_RPC: SetGuestSystemInfo: Retrying with the VGX version requested by the host. *NVRM: NVRM_RPC: SetGuestSystemInfo: The host version (%d.%d) is too old. NVRM_RPC: SetGuestSystemInfo: Minimum required host version is %d.%d. **NVRM: NVRM_RPC: SetGuestSystemInfo: The host version (%d.%d) is too old. NVRM_RPC: SetGuestSystemInfo: Minimum required host version is %d.%d. *######## Guest NVIDIA Driver Information: ########**######## Guest NVIDIA Driver Information: ########*Driver Version: 590.48.01**Driver Version: 590.48.01*Incompatible Guest/Host drivers: Host VGX version is older than the minimum version supported by the Guest. Disabling vGPU.**Incompatible Guest/Host drivers: Host VGX version is older than the minimum version supported by the Guest. Disabling vGPU.*NVRM: SET_GUEST_SYSTEM_INFO_EXT : failed. **NVRM: SET_GUEST_SYSTEM_INFO_EXT : failed. 
*call to serialize_NV83DE_CTRL_DEBUG_GET_MODE_MMU_GCC_DEBUG_PARAMS_v2A_05*call to deserialize_NV83DE_CTRL_DEBUG_GET_MODE_MMU_GCC_DEBUG_PARAMS_v2A_05*call to serialize_NV83DE_CTRL_DEBUG_GET_MODE_MMU_DEBUG_PARAMS_v25_04*call to deserialize_NV83DE_CTRL_DEBUG_GET_MODE_MMU_DEBUG_PARAMS_v25_04*call to serialize_NV2080_CTRL_GPU_MIGRATABLE_OPS_PARAMS_v21_07*call to deserialize_NV2080_CTRL_GPU_MIGRATABLE_OPS_PARAMS_v21_07*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_13*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_13*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_0C*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_0C*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_05*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_05*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_03*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2B_03*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2A_04*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v2A_04*call to serialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v25_11*call to deserialize_NV2080_CTRL_GPU_GET_INFO_V2_PARAMS_v25_11*call to serialize_NV2080_CTRL_BUS_UNSET_P2P_MAPPING_PARAMS_v21_03*call to deserialize_NV2080_CTRL_BUS_UNSET_P2P_MAPPING_PARAMS_v21_03*call to serialize_NV2080_CTRL_BUS_SET_P2P_MAPPING_PARAMS_v21_03*call to deserialize_NV2080_CTRL_BUS_SET_P2P_MAPPING_PARAMS_v21_03*call to serialize_NV2080_CTRL_BUS_SET_P2P_MAPPING_PARAMS_v29_08*call to deserialize_NV2080_CTRL_BUS_SET_P2P_MAPPING_PARAMS_v29_08*call to serialize_NV0080_CTRL_FIFO_SET_CHANNEL_PROPERTIES_PARAMS_v03_00*call to deserialize_NV0080_CTRL_FIFO_SET_CHANNEL_PROPERTIES_PARAMS_v03_00*call to serialize_NV0090_CTRL_GET_MMU_DEBUG_MODE_PARAMS_v1E_06*NVRM: RPC to get MMU debug mode failed with error 0x%x **NVRM: RPC to get MMU debug mode failed with error 0x%x *call to 
deserialize_NV0090_CTRL_GET_MMU_DEBUG_MODE_PARAMS_v1E_06*update_gpm_guest_buffer_info_v2B_07*update_gpm_guest_buffer_info_v27_01*call to serialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v2B_00*call to deserialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v2B_00*call to serialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v27_00*call to deserialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v27_00*call to serialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v25_0A*call to deserialize_NV2080_CTRL_FB_GET_INFO_V2_PARAMS_v25_0A*call to serialize_NV0000_CTRL_SYSTEM_GET_P2P_CAPS_MATRIX_PARAMS_v18_0A*call to deserialize_NV0000_CTRL_SYSTEM_GET_P2P_CAPS_MATRIX_PARAMS_v18_0A*call to serialize_NV0000_CTRL_SYSTEM_GET_P2P_CAPS_PARAMS_v1F_0D*call to deserialize_NV0000_CTRL_SYSTEM_GET_P2P_CAPS_PARAMS_v1F_0D*call to serialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v2B_11*call to deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v2B_11*call to serialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v28_09*call to deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v28_09*call to serialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v23_04*call to deserialize_NV2080_CTRL_CMD_NVLINK_GET_NVLINK_STATUS_PARAMS_v23_04*call to serialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_ENTRY_PARAMS_v1A_07*call to deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_ENTRY_PARAMS_v1A_07*call to serialize_NV2080_CTRL_CE_GET_CE_PCE_MASK_PARAMS_v1A_07*call to deserialize_NV2080_CTRL_CE_GET_CE_PCE_MASK_PARAMS_v1A_07*call to serialize_NV0080_CTRL_DMA_SET_DEFAULT_VASPACE_PARAMS_v03_00*call to deserialize_NV0080_CTRL_DMA_SET_DEFAULT_VASPACE_PARAMS_v03_00*call to serialize_NV83DE_CTRL_DEBUG_SET_NEXT_STOP_TRIGGER_TYPE_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_DEBUG_SET_NEXT_STOP_TRIGGER_TYPE_PARAMS_v1A_06*call to serialize_NV83DE_CTRL_DEBUG_SET_MODE_ERRBAR_DEBUG_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_DEBUG_SET_MODE_ERRBAR_DEBUG_PARAMS_v1A_06*call to serialize_NV83DE_CTRL_DEBUG_READ_SINGLE_SM_ERROR_STATE_PARAMS_v21_06*call 
to deserialize_NV83DE_CTRL_DEBUG_READ_SINGLE_SM_ERROR_STATE_PARAMS_v21_06*call to serialize_NV83DE_CTRL_DEBUG_CLEAR_SINGLE_SM_ERROR_STATE_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_DEBUG_CLEAR_SINGLE_SM_ERROR_STATE_PARAMS_v1A_06*call to serialize_NV83DE_CTRL_DEBUG_SET_MODE_MMU_GCC_DEBUG_PARAMS_v2A_05*call to deserialize_NV83DE_CTRL_DEBUG_SET_MODE_MMU_GCC_DEBUG_PARAMS_v2A_05*call to serialize_NV83DE_CTRL_DEBUG_SET_MODE_MMU_DEBUG_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_DEBUG_SET_MODE_MMU_DEBUG_PARAMS_v1A_06*call to serialize_NV83DE_CTRL_DEBUG_EXEC_REG_OPS_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_DEBUG_EXEC_REG_OPS_PARAMS_v1A_06*call to serialize_NV83DE_CTRL_CMD_DEBUG_SUSPEND_CONTEXT_PARAMS_v1A_06*call to deserialize_NV83DE_CTRL_CMD_DEBUG_SUSPEND_CONTEXT_PARAMS_v1A_06*masterGetVfErrCntIntMsk*call to serialize_NV90E6_CTRL_CMD_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_v1F_0D*NVRM: RPC to get vf error cont intr mask failed with error 0x%x **NVRM: RPC to get vf error cont intr mask failed with error 0x%x *call to deserialize_NV90E6_CTRL_CMD_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK_v1F_0D*call to serialize_NVC36F_CTRL_CMD_GPFIFO_SET_WORK_SUBMIT_TOKEN_NOTIF_INDEX_v1F_0A*NVRM: RPC to set work submit token notify index failed with error 0x%x **NVRM: RPC to set work submit token notify index failed with error 0x%x *call to deserialize_NVC36F_CTRL_CMD_GPFIFO_SET_WORK_SUBMIT_TOKEN_NOTIF_INDEX_v1F_0A*call to serialize_NVC36F_CTRL_CMD_GPFIFO_GET_WORK_SUBMIT_TOKEN_v1F_0A*NVRM: RPC to get work submit token failed with error 0x%x **NVRM: RPC to get work submit token failed with error 0x%x *call to deserialize_NVC36F_CTRL_CMD_GPFIFO_GET_WORK_SUBMIT_TOKEN_v1F_0A*call to serialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_EXPORT_v29_0C*NVRM: RPC to export exec partitions failed with error 0x%x **NVRM: RPC to export exec partitions failed with error 0x%x *call to deserialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_EXPORT_v29_0C*call to 
serialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE_v1F_0A*NVRM: RPC to delete exec partitions failed with error 0x%x **NVRM: RPC to delete exec partitions failed with error 0x%x *call to deserialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_DELETE_v1F_0A*call to serialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_CREATE_v24_05*call to deserialize_NVC637_CTRL_CMD_EXEC_PARTITIONS_CREATE_v24_05*NVRM: NVRM_RPC: deserialize exec partition creation params : failed. **NVRM: NVRM_RPC: deserialize exec partition creation params : failed. *NVRM: NVRM_RPC: GET_CONSOLIDATED_GR_STATIC_INFO : failed. **NVRM: NVRM_RPC: GET_CONSOLIDATED_GR_STATIC_INFO : failed. *NVRM: RPC to create exec partitions failed with error 0x%x **NVRM: RPC to create exec partitions failed with error 0x%x *NVRM: NVRM_RPC: Insufficient space in message buffer to copy all SMs error states **NVRM: NVRM_RPC: Insufficient space in message buffer to copy all SMs error states *startingSM*numSMsToRead*call to serialize_NV83DE_CTRL_DEBUG_READ_ALL_SM_ERROR_STATES_PARAMS_v21_06*call to deserialize_NV83DE_CTRL_DEBUG_READ_ALL_SM_ERROR_STATES_PARAMS_v21_06*call to serialize_NV83DE_CTRL_DEBUG_SET_EXCEPTION_MASK_PARAMS_v03_00*call to deserialize_NV83DE_CTRL_DEBUG_SET_EXCEPTION_MASK_PARAMS_v03_00*call to serialize_NV83DE_CTRL_DEBUG_CLEAR_ALL_SM_ERROR_STATES_PARAMS_v03_00*call to deserialize_NV83DE_CTRL_DEBUG_CLEAR_ALL_SM_ERROR_STATES_PARAMS_v03_00*call to serialize_NVB0CC_CTRL_INTERNAL_SRIOV_PROMOTE_PMA_STREAM_PARAMS_v1C_0C*NVRM: RPC to promote PMA stream for full SRIOV failed with error 0x%x **NVRM: RPC to promote PMA stream for full SRIOV failed with error 0x%x *call to deserialize_NVB0CC_CTRL_INTERNAL_SRIOV_PROMOTE_PMA_STREAM_PARAMS_v1C_0C*call to serialize_NVB0CC_CTRL_INTERNAL_QUIESCE_PMA_CHANNEL_PARAMS_v1C_08*NVRM: Quiesce PMA channel RPC failed with error 0x%x **NVRM: Quiesce PMA channel RPC failed with error 0x%x *call to deserialize_NVB0CC_CTRL_INTERNAL_QUIESCE_PMA_CHANNEL_PARAMS_v1C_08*call to 
serialize_NVA06C_CTRL_INTERNAL_PROMOTE_FAULT_METHOD_BUFFERS_PARAMS_v1E_07*NVRM: RPC to promote fault method buffers failed with error 0x%x **NVRM: RPC to promote fault method buffers failed with error 0x%x *call to deserialize_NVA06C_CTRL_INTERNAL_PROMOTE_FAULT_METHOD_BUFFERS_PARAMS_v1E_07*call to serialize_NV2080_CTRL_FLA_GET_FABRIC_MEM_STATS_PARAMS_v1E_0C*NVRM: RPC to fabric memory stats failed with error 0x%x **NVRM: RPC to fabric memory stats failed with error 0x%x *call to deserialize_NV2080_CTRL_FLA_GET_FABRIC_MEM_STATS_PARAMS_v1E_0C*call to serialize_NV00F8_CTRL_DESCRIBE_PARAMS_v1E_0C*NVRM: RPC to 00f8 describe failed with error 0x%x **NVRM: RPC to 00f8 describe failed with error 0x%x *call to deserialize_NV00F8_CTRL_DESCRIBE_PARAMS_v1E_0C*call to serialize_NV0080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v2B_0E*NVRM: RPC to internal memsys set zbc referenced failed with error 0x%x **NVRM: RPC to internal memsys set zbc referenced failed with error 0x%x *call to deserialize_NV0080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v2B_0E*call to serialize_NV0080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v2A_00*call to deserialize_NV0080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v2A_00*call to serialize_NV2080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v1F_05*call to deserialize_NV2080_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED_PARAMS_v1F_05*call to serialize_NV0080_CTRL_GR_TPC_PARTITION_MODE_PARAMS_v1C_04*NVRM: RPC to set gr tpc partition mode failed with error 0x%x **NVRM: RPC to set gr tpc partition mode failed with error 0x%x *call to deserialize_NV0080_CTRL_GR_TPC_PARTITION_MODE_PARAMS_v1C_04*NVRM: RPC to get gr tpc partition mode failed with error 0x%x **NVRM: RPC to get gr tpc partition mode failed with error 0x%x *call to serialize_NV83DE_CTRL_DEBUG_SET_SINGLE_SM_SINGLE_STEP_PARAMS_v1C_02*NVRM: RPC to set single SM single step failed with error 0x%x **NVRM: RPC to set single SM single step failed with error 0x%x *call to 
deserialize_NV83DE_CTRL_DEBUG_SET_SINGLE_SM_SINGLE_STEP_PARAMS_v1C_02*call to serialize_NV2080_CTRL_FIFO_SETUP_VF_ZOMBIE_SUBCTX_PDB_PARAMS_v1A_23*call to deserialize_NV2080_CTRL_FIFO_SETUP_VF_ZOMBIE_SUBCTX_PDB_PARAMS_v1A_23*call to serialize_NV2080_CTRL_CMD_TIMER_SET_GR_TICK_FREQ_PARAMS_v1A_1F*NVRM: RPC to set GR update/tick frequency failed with error 0x%x **NVRM: RPC to set GR update/tick frequency failed with error 0x%x *call to deserialize_NV2080_CTRL_CMD_TIMER_SET_GR_TICK_FREQ_PARAMS_v1A_1F*call to serialize_NVB0CC_CTRL_FREE_PMA_STREAM_PARAMS_v1A_1F*NVRM: PMA Stream free RPC failed with error 0x%x **NVRM: PMA Stream free RPC failed with error 0x%x *call to deserialize_NVB0CC_CTRL_FREE_PMA_STREAM_PARAMS_v1A_1F*call to serialize_NV2080_CTRL_PERF_RATED_TDP_CONTROL_PARAMS_v1A_1F*NVRM: RPC to set RATED_TDP control action failed with error 0x%x **NVRM: RPC to set RATED_TDP control action failed with error 0x%x *call to deserialize_NV2080_CTRL_PERF_RATED_TDP_CONTROL_PARAMS_v1A_1F*call to serialize_NV2080_CTRL_PERF_RATED_TDP_STATUS_PARAMS_v1A_1F*NVRM: RPC to fetch RATED_TDP status failed with error 0x%x **NVRM: RPC to fetch RATED_TDP status failed with error 0x%x *call to deserialize_NV2080_CTRL_PERF_RATED_TDP_STATUS_PARAMS_v1A_1F*call to serialize_NV2080_CTRL_GR_PC_SAMPLING_MODE_PARAMS_v1A_1F*NVRM: Set PC sampling mode RPC failed with error 0x%x **NVRM: Set PC sampling mode RPC failed with error 0x%x *call to deserialize_NV2080_CTRL_GR_PC_SAMPLING_MODE_PARAMS_v1A_1F*call to serialize_NVB0CC_CTRL_PMA_STREAM_UPDATE_GET_PUT_PARAMS_v29_0B*NVRM: PMA Stream update get/put RPC failed with error 0x%x **NVRM: PMA Stream update get/put RPC failed with error 0x%x *call to deserialize_NVB0CC_CTRL_PMA_STREAM_UPDATE_GET_PUT_PARAMS_v29_0B*call to serialize_NVB0CC_CTRL_PMA_STREAM_UPDATE_GET_PUT_PARAMS_v1A_14*call to deserialize_NVB0CC_CTRL_PMA_STREAM_UPDATE_GET_PUT_PARAMS_v1A_14*call to serialize_NVB0CC_CTRL_ALLOC_PMA_STREAM_PARAMS_v1A_14*NVRM: PMA Stream allocation RPC failed with 
error 0x%x **NVRM: PMA Stream allocation RPC failed with error 0x%x *call to deserialize_NVB0CC_CTRL_ALLOC_PMA_STREAM_PARAMS_v1A_14*NVRM: RPC to bind PM resources failed with error 0x%x **NVRM: RPC to bind PM resources failed with error 0x%x *call to serialize_NVB0CC_CTRL_EXEC_REG_OPS_PARAMS_v1A_0F*NVRM: Profiler RegOps RPC failed with error 0x%x **NVRM: Profiler RegOps RPC failed with error 0x%x *call to deserialize_NVB0CC_CTRL_EXEC_REG_OPS_PARAMS_v1A_0F*call to serialize_NVB0CC_CTRL_RESERVE_HWPM_LEGACY_PARAMS_v1A_0F*NVRM: RPC to acquire HWPM reservation failed with error 0x%x **NVRM: RPC to acquire HWPM reservation failed with error 0x%x *call to deserialize_NVB0CC_CTRL_RESERVE_HWPM_LEGACY_PARAMS_v1A_0F*call to serialize_NVB0CC_CTRL_RESERVE_PM_AREA_SMPC_PARAMS_v1A_0F*NVRM: RPC to acquire SMPC reservation failed with error 0x%x **NVRM: RPC to acquire SMPC reservation failed with error 0x%x *call to deserialize_NVB0CC_CTRL_RESERVE_PM_AREA_SMPC_PARAMS_v1A_0F*ctrl_gpu_promote_ctx_v1A_20*call to serialize_NV2080_CTRL_GPU_PROMOTE_CTX_PARAMS_v1A_20*rpc_ctrl_params*call to deserialize_NV2080_CTRL_GPU_PROMOTE_CTX_PARAMS_v1A_20*call to serialize_NV2080_CTRL_GET_P2P_CAPS_PARAMS_v21_02*call to deserialize_NV2080_CTRL_GET_P2P_CAPS_PARAMS_v21_02*NVRM: vGPU P2P_V2 parameters size (0x%x) exceeds message_buffer size (0x%x) **NVRM: vGPU P2P_V2 parameters size (0x%x) exceeds message_buffer size (0x%x) *ctrl_get_p2p_caps_v2_v1F_0D*NVRM: RPC to fetch P2P caps failed with error 0x%x **NVRM: RPC to fetch P2P caps failed with error 0x%x *call to serialize_NV2080_CTRL_MC_SERVICE_INTERRUPTS_PARAMS_v15_01*call to deserialize_NV2080_CTRL_MC_SERVICE_INTERRUPTS_PARAMS_v15_01*call to serialize_NV90F1_CTRL_VASPACE_COPY_SERVER_RESERVED_PDES_PARAMS_v1E_04*call to deserialize_NV90F1_CTRL_VASPACE_COPY_SERVER_RESERVED_PDES_PARAMS_v1E_04*call to serialize_NV2080_CTRL_GPU_INITIALIZE_CTX_PARAMS_v03_00*call to deserialize_NV2080_CTRL_GPU_INITIALIZE_CTX_PARAMS_v03_00*call to 
serialize_NV2080_CTRL_GR_CTXSW_ZCULL_BIND_PARAMS_v03_00*call to deserialize_NV2080_CTRL_GR_CTXSW_ZCULL_BIND_PARAMS_v03_00*call to serialize_NV2080_CTRL_GR_SET_CTXSW_PREEMPTION_MODE_PARAMS_v12_01*call to deserialize_NV2080_CTRL_GR_SET_CTXSW_PREEMPTION_MODE_PARAMS_v12_01*call to serialize_NV2080_CTRL_GR_CTXSW_PREEMPTION_BIND_PARAMS_v28_07*call to deserialize_NV2080_CTRL_GR_CTXSW_PREEMPTION_BIND_PARAMS_v28_07*call to serialize_NV2080_CTRL_GR_CTXSW_PREEMPTION_BIND_PARAMS_v12_01*call to deserialize_NV2080_CTRL_GR_CTXSW_PREEMPTION_BIND_PARAMS_v12_01*ctrl_set_channel_interleave_level_v1A_0A*call to serialize_NVA06F_CTRL_INTERLEAVE_LEVEL_PARAMS_v17_02*call to deserialize_NVA06F_CTRL_INTERLEAVE_LEVEL_PARAMS_v17_02*ctrl_set_tsg_interleave_level_v1A_0A*call to serialize_NVA06C_CTRL_INTERLEAVE_LEVEL_PARAMS_v17_02*call to deserialize_NVA06C_CTRL_INTERLEAVE_LEVEL_PARAMS_v17_02*ctrl_preempt_v1A_0A*call to serialize_NVA06C_CTRL_PREEMPT_PARAMS_v09_0A*call to deserialize_NVA06C_CTRL_PREEMPT_PARAMS_v09_0A*ctrl_fifo_disable_channels_v1A_0A*call to serialize_NV2080_CTRL_FIFO_DISABLE_CHANNELS_PARAMS_v06_00*call to deserialize_NV2080_CTRL_FIFO_DISABLE_CHANNELS_PARAMS_v06_00*ctrl_set_timeslice_v1A_0A*call to serialize_NVA06C_CTRL_TIMESLICE_PARAMS_v06_00*call to deserialize_NVA06C_CTRL_TIMESLICE_PARAMS_v06_00*ctrl_gpfifo_schedule_v1A_0A*call to serialize_NVA06F_CTRL_GPFIFO_SCHEDULE_PARAMS_v03_00*call to deserialize_NVA06F_CTRL_GPFIFO_SCHEDULE_PARAMS_v03_00*ctrl_set_zbc_stencil_clear_v27_06*call to serialize_NV9096_CTRL_SET_ZBC_STENCIL_CLEAR_PARAMS_v27_06*call to deserialize_NV9096_CTRL_SET_ZBC_STENCIL_CLEAR_PARAMS_v27_06*ctrl_set_zbc_depth_clear_v1A_09*call to serialize_NV9096_CTRL_SET_ZBC_DEPTH_CLEAR_PARAMS_v03_00*call to deserialize_NV9096_CTRL_SET_ZBC_DEPTH_CLEAR_PARAMS_v03_00*ctrl_set_zbc_color_clear_v1A_09*call to serialize_NV9096_CTRL_SET_ZBC_COLOR_CLEAR_PARAMS_v03_00*call to deserialize_NV9096_CTRL_SET_ZBC_COLOR_CLEAR_PARAMS_v03_00*ctrl_get_zbc_clear_table_v1A_09*call to 
serialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_PARAMS_v04_00*call to deserialize_NV9096_CTRL_GET_ZBC_CLEAR_TABLE_PARAMS_v04_00*ctrl_perf_boost_v1A_09*call to serialize_NV2080_CTRL_PERF_BOOST_PARAMS_v03_00*call to deserialize_NV2080_CTRL_PERF_BOOST_PARAMS_v03_00*ctrl_gpu_handle_vf_pri_fault_v1A_09*call to serialize_NV2080_CTRL_CMD_GPU_HANDLE_VF_PRI_FAULT_PARAMS_v18_09*call to deserialize_NV2080_CTRL_CMD_GPU_HANDLE_VF_PRI_FAULT_PARAMS_v18_09*ctrl_reset_isolated_channel_v1A_09*call to serialize_NV506F_CTRL_CMD_RESET_ISOLATED_CHANNEL_PARAMS_v03_00*call to deserialize_NV506F_CTRL_CMD_RESET_ISOLATED_CHANNEL_PARAMS_v03_00*ctrl_reset_channel_v1A_09*call to serialize_NV906F_CTRL_CMD_RESET_CHANNEL_PARAMS_v10_01*call to deserialize_NV906F_CTRL_CMD_RESET_CHANNEL_PARAMS_v10_01*ctrl_nvenc_sw_session_update_info_v1A_09*call to serialize_NVA0BC_CTRL_NVENC_SW_SESSION_UPDATE_INFO_PARAMS_v06_01*call to deserialize_NVA0BC_CTRL_NVENC_SW_SESSION_UPDATE_INFO_PARAMS_v06_01*ctrl_set_vgpu_fb_usage_v1A_08*call to isFbUsageUpdateRequired*call to serialize_NVA080_CTRL_SET_FB_USAGE_PARAMS_v07_02*rpc_fb_params*call to deserialize_NVA080_CTRL_SET_FB_USAGE_PARAMS_v07_02*last_fb_update_timestamp*last_fb_used_value*call to serialize_GET_BRAND_CAPS_v25_12*NVRM: RPC to get brand caps failed with error 0x%x **NVRM: RPC to get brand caps failed with error 0x%x *call to deserialize_GET_BRAND_CAPS_v25_12*call to rpcRmApiControl_wrapper*rm_api_control_v*NVRM: API control 0x%x failed with status 0x%x **NVRM: API control 0x%x failed with status 0x%x *call to serialize_NVB0CC_CTRL_SET_HS_CREDITS_PARAMS_v21_08*NVRM: RPC to set hs credits failed with error 0x%x **NVRM: RPC to set hs credits failed with error 0x%x *call to deserialize_NVB0CC_CTRL_SET_HS_CREDITS_PARAMS_v21_08*pParams_buf**pParams_buf*call to serialize_NVB0CC_CTRL_GET_HS_CREDITS_POOL_MAPPING_PARAMS_v29_0A*NVRM: RPC to get hs credits mapping failed with error 0x%x **NVRM: RPC to get hs credits mapping failed with error 0x%x *call to 
deserialize_NVB0CC_CTRL_GET_HS_CREDITS_POOL_MAPPING_PARAMS_v29_0A*call to serialize_NVB0CC_CTRL_GET_CHIPLET_HS_CREDIT_POOL_v29_0A*NVRM: RPC to get chiplet hs credit pool failed with error 0x%x **NVRM: RPC to get chiplet hs credit pool failed with error 0x%x *call to deserialize_NVB0CC_CTRL_GET_CHIPLET_HS_CREDIT_POOL_v29_0A*NVRM: RPC rpcCtrlReleaseCcuProf_v29_07 failed with error 0x%x **NVRM: RPC rpcCtrlReleaseCcuProf_v29_07 failed with error 0x%x *call to serialize_NVB0CC_CTRL_RESERVE_CCUPROF_PARAMS_v29_07*NVRM: RPC rpcCtrlReserveCcuProf_v29_07 failed with error 0x%x **NVRM: RPC rpcCtrlReserveCcuProf_v29_07 failed with error 0x%x *call to deserialize_NVB0CC_CTRL_RESERVE_CCUPROF_PARAMS_v29_07*call to serialize_NVB0CC_CTRL_RELEASE_HES_PARAMS_v29_07*NVRM: RPC rpcCtrlReleaseHes_v29_07 failed with error 0x%x **NVRM: RPC rpcCtrlReleaseHes_v29_07 failed with error 0x%x *call to deserialize_NVB0CC_CTRL_RELEASE_HES_PARAMS_v29_07*call to serialize_NVB0CC_CTRL_RESERVE_HES_PARAMS_v29_07*NVRM: RPC rpcCtrlReserveHes_v29_07 failed with error 0x%x **NVRM: RPC rpcCtrlReserveHes_v29_07 failed with error 0x%x *call to deserialize_NVB0CC_CTRL_RESERVE_HES_PARAMS_v29_07*call to serialize_NVB0CC_CTRL_GET_HS_CREDITS_PARAMS_v21_08*NVRM: RPC to get hs credits failed with error 0x%x **NVRM: RPC to get hs credits failed with error 0x%x *call to deserialize_NVB0CC_CTRL_GET_HS_CREDITS_PARAMS_v21_08*call to serialize_NVB0CC_CTRL_GET_TOTAL_HS_CREDITS_PARAMS_v21_08*NVRM: RPC to get total hs credits failed with error 0x%x **NVRM: RPC to get total hs credits failed with error 0x%x *call to deserialize_NVB0CC_CTRL_GET_TOTAL_HS_CREDITS_PARAMS_v21_08*call to serialize_NV2080_CTRL_CMD_FLA_SETUP_INSTANCE_MEM_BLOCK_v21_05*call to deserialize_NV2080_CTRL_CMD_FLA_SETUP_INSTANCE_MEM_BLOCK_v21_05*call to serialize_NV2080_CTRL_GRMGR_GET_GR_FS_INFO_PARAMS_v2B_09*NVRM: Get GR FS Info RPC failed with error 0x%x **NVRM: Get GR FS Info RPC failed with error 0x%x *call to 
deserialize_NV2080_CTRL_GRMGR_GET_GR_FS_INFO_PARAMS_v2B_09*call to serialize_NV2080_CTRL_GRMGR_GET_GR_FS_INFO_PARAMS_v1A_1D*call to deserialize_NV2080_CTRL_GRMGR_GET_GR_FS_INFO_PARAMS_v1A_1D*call to serialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v2B_07*NVRM: Get FB FS Info RPC failed with error 0x%x **NVRM: Get FB FS Info RPC failed with error 0x%x *call to deserialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v2B_07*call to serialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v26_04*call to deserialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v26_04*call to serialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v24_00*call to deserialize_NV2080_CTRL_FB_GET_FS_INFO_PARAMS_v24_00*call to serialize_NV2080_CTRL_GPU_EVICT_CTX_PARAMS_v1A_1C*call to deserialize_NV2080_CTRL_GPU_EVICT_CTX_PARAMS_v1A_1C*call to serialize_NVA06F_CTRL_STOP_CHANNEL_PARAMS_v1A_1E*call to deserialize_NVA06F_CTRL_STOP_CHANNEL_PARAMS_v1A_1E*NVRM: NVRM_RPC: Bad pParamStructPtr/paramSize for cmd:0x%x **NVRM: NVRM_RPC: Bad pParamStructPtr/paramSize for cmd:0x%x *NVRM: DMA Control Command cmd 0x%x NOT supported **NVRM: DMA Control Command cmd 0x%x NOT supported *NVRM: Failed to get client(0x%x) under lock: 0x%x! **NVRM: Failed to get client(0x%x) under lock: 0x%x! *NVRM: Failed to get resource ref: 0x%x! hObject: 0x%x **NVRM: Failed to get resource ref: 0x%x! hObject: 0x%x *alloc_event_v03_00*NVRM: NVRM_RPC: IdleChannels: requested %u entries (but only room for %u) **NVRM: NVRM_RPC: IdleChannels: requested %u entries (but only room for %u) *idle_channels_v03_00*nchannels*channel_list**channel_list*phChannel*NVRM: Failed to get client under lock: 0x%x! hClient: 0x%x **NVRM: Failed to get client under lock: 0x%x! 
hClient: 0x%x *alloc_subdevice_v08_01*unmap_memory_dma_v2C_05*unmap_memory_dma_v03_00*map_memory_dma_v2C_05*map_memory_dma_v03_00*call to _rpcAllocObjectPrologue*NVRM: Alloc object RPC prologue failed (status: 0x%x) for hObject: 0x%x, hClass: 0x%x, hChannel: 0x%x, hClient: 0x%x **NVRM: Alloc object RPC prologue failed (status: 0x%x) for hObject: 0x%x, hClass: 0x%x, hChannel: 0x%x, hClient: 0x%x *alloc_object_v*param_len*call to _serializeClassParams_v2C_01*NVRM: RmAllocObjectEx: vGPU: object RPC skipped (handle = 0x%08x, class = 0x%x) with non-NULL params ptr **NVRM: RmAllocObjectEx: vGPU: object RPC skipped (handle = 0x%08x, class = 0x%x) with non-NULL params ptr *call to _setRmReturnParams_v*call to _serializeClassParams_v2B_04*call to _serializeClassParams_v29_06*call to _serializeClassParams_v27_00*call to _serializeClassParams_v26_00*call to _serializeClassParams_v25_08*call to _serializeClassParams_v*call to validateRpcForSriov*NVRM: UVM (0x%x) object allocation is not supported **NVRM: UVM (0x%x) object allocation is not supported *NVRM: Display Class (0x%x) object allocation is not supported **NVRM: Display Class (0x%x) object allocation is not supported *call to _rpcAllocBuffersForKGrObj*_rpcAllocBuffersForKGrObj(pGpu, hClient, hObject)**_rpcAllocBuffersForKGrObj(pGpu, hClient, hObject)*clientGetResourceRef(pRsClient, hObject, &pResourceRef)**clientGetResourceRef(pRsClient, hObject, &pResourceRef)*pKernelGraphicsObject != NULL**pKernelGraphicsObject != NULL*pChannelParentRef**pChannelParentRef*pChannelParentRef != NULL**pChannelParentRef != NULL*call to kgrctxSetVgpuGfxpBuffers*call to _allocateGfxpBuffer*kgrctxGetVgpuGfxpBuffers(pGpu, pKernelGraphicsContext) == NULL**kgrctxGetVgpuGfxpBuffers(pGpu, pKernelGraphicsContext) == NULL*NVRM: failed to allocate memory for pKernelGraphicsContext->pVgpuGfxpBuffers! **NVRM: failed to allocate memory for pKernelGraphicsContext->pVgpuGfxpBuffers! 
*NVRM: cannot get unique memory handle for vidmem : %x **NVRM: cannot get unique memory handle for vidmem : %x *NVRM: failed to acquire lock **NVRM: failed to acquire lock *NVRM: Failed to allocate vidmem for gfxp: 0x%x **NVRM: Failed to allocate vidmem for gfxp: 0x%x **hMemory*NVRM: cannot get unique memory handle for virtmem : %x **NVRM: cannot get unique memory handle for virtmem : %x *NVRM: Call to allocate virtmem for gfxp failed : %x **NVRM: Call to allocate virtmem for gfxp failed : %x **hDma*NVRM: Call to map gfxp buffer to gpu va failed : %x **NVRM: Call to map gfxp buffer to gpu va failed : %x **dmaOffset*NVRM: cannot get subdevice handle **NVRM: cannot get subdevice handle *NVRM: NVRM_RPC: rpc call to bind gfxp buffer failed : %x **NVRM: NVRM_RPC: rpc call to bind gfxp buffer failed : %x *bIsBufferAllocated*call to _freeGfxpBuffer*pGpfifoAllocParams != NULL**pGpfifoAllocParams != NULL*pChID != NULL**pChID != NULL*alloc_channel_dma_v1F_04*pChannelGPFIFOAllocParms**pChannelGPFIFOAllocParms*NVRM: NVRM_RPC: AllocMemory: pMemDesc arg was NULL **NVRM: NVRM_RPC: AllocMemory: pMemDesc arg was NULL *alloc_memory_v13_01*call to _issuePteDescRpc*alloc_share_device_v03_00*alloc_root_v07_00*processName**processName*processID == osGetCurrentProcess()**processID == osGetCurrentProcess()*NVRM: NVRM_RPC: Failed to set guest client resource handle range %x **NVRM: NVRM_RPC: Failed to set guest client resource handle range %x *call to _rpcFreePrologue*NVRM: RPC Free prologue failed: 0x%x! **NVRM: RPC Free prologue failed: 0x%x! 
*pMemory->pDevice != NULL**pMemory->pDevice != NULL*hClientLocal*subdeviceGetByInstance(pRsClient, hDeviceLocal, 0, &pSubdeviceLocal)**subdeviceGetByInstance(pRsClient, hDeviceLocal, 0, &pSubdeviceLocal)*hSubdeviceLocal*call to updateHostVgpuFbUsage*NVRM: Failed to update FB usage to host : 0x%x **NVRM: Failed to update FB usage to host : 0x%x *call to _rpcFreeBuffersForKGrObj*_rpcFreeBuffersForKGrObj(pGpu, pRsClient, pResourceRef)**_rpcFreeBuffersForKGrObj(pGpu, pRsClient, pResourceRef)*pVgpuGfxpBuffers->refCountChannel != 0**pVgpuGfxpBuffers->refCountChannel != 0*NVRM: No vGPU GFxP buffers associated! hChannel: 0x%x **NVRM: No vGPU GFxP buffers associated! hChannel: 0x%x *NVRM: NVRM_RPC: rpc call to bind gfxp buffer for WFI mode failed : %x **NVRM: NVRM_RPC: rpc call to bind gfxp buffer for WFI mode failed : %x *pRmApi->Free(pRmApi, hClient, pVgpuGfxpBuffers->hDma[i])**pRmApi->Free(pRmApi, hClient, pVgpuGfxpBuffers->hDma[i])*pRmApi->Free(pRmApi, hClient, pVgpuGfxpBuffers->hMemory[i])**pRmApi->Free(pRmApi, hClient, pVgpuGfxpBuffers->hMemory[i])*NVRM: LOG RPC - string too long **NVRM: LOG RPC - string too long *log_v03_00*log_len*log_msg**log_msg*call to engine_utilization_copy_params_to_rpc_buffer_v1E_0D**gpumonPerfmonsampleV2*call to engine_utilization_copy_params_from_rpc_buffer_v1E_0D*NVRM: Unknown Engine Utilization Control Command 0x%x **NVRM: Unknown Engine Utilization Control Command 0x%x 
*clkPercentBusy*samplingPeriodUs*alloc_object_v29_06*param_NVC9FA_VIDEO_OFA*prohibitMultipleInstances*param_length*param_NV50_TESLA*pGrAllocParam*param_GT212_DMA_COPY*pNv85b5CreateParms*param_GF100_DISP_SW*_reserved1*_reserved2*logicalHeadId*param_FERMI_CONTEXT_SHARE_A*param_NVD0B7_VIDEO_ENCODER*param_FERMI_VASPACE_A*param_NV83DE_ALLOC_PARAMETERS*param_NVENC_SW_SESSION*param_NVC4B0_VIDEO_DECODER*param_NVFBC_SW_SESSION*param_KEPLER_CHANNEL_GROUP_A*param_NVC637_ALLOCATION_PARAMETERS*param_NVC638_ALLOCATION_PARAMETERS*param_NV503C_ALLOC_PARAMETERS*param_NVB1CC_ALLOC_PARAMETERS*param_NVB2CC_ALLOC_PARAMETERS*hContextTarget*param_NV_GR_ALLOCATION_PARAMETERS*param_NV_UVM_CHANNEL_RETAINER_ALLOC_PARAMS*param_NV503B_ALLOC_PARAMETERS*param_NV00F8_ALLOCATION_PARAMETERS*hVidMem*param_NV_NVJPG_ALLOCATION_PARAMETERS*pContextShareParams*pConsolidatedRpcPayload != NULL**pConsolidatedRpcPayload != NULL*align_offset*call to consolidated_gr_static_info_copy*NVRM: NVRM_RPC: copyPayloadToGrStaticInfo: failed. **NVRM: NVRM_RPC: copyPayloadToGrStaticInfo: failed. *bufferSize != NULL**bufferSize != NULL*NVRM: NVRM_RPC: getConsolidatedGrRpcBufferSize: failed. **NVRM: NVRM_RPC: getConsolidatedGrRpcBufferSize: failed. *pPayload != NULL**pPayload != NULL*call to static_data_copy*NVRM: NVRM_RPC: copyPayloadToStaticData: failed. **NVRM: NVRM_RPC: copyPayloadToStaticData: failed. *NVRM: NVRM_RPC: Get static data RPC bufferSize: failed. **NVRM: NVRM_RPC: Get static data RPC bufferSize: failed. *guestPages != NULL**guestPages != NULL*pHdr != NULL**pHdr != NULL*pAllocatedRecord**pAllocatedRecord***pAllocatedRecord*NVRM: no memory for allocated record **NVRM: no memory for allocated record *pPteDesc**pPteDesc*pte_pde**pte_pde*call to _issueRpcLarge*NVRM: rpcSendMessage failed with status 0x%08x for fn %d! **NVRM: rpcSendMessage failed with status 0x%08x for fn %d! *NVRM: rpcSendMessage failed with status 0x%08x for fn %d continuation record (remainingSize=0x%x)! 
**NVRM: rpcSendMessage failed with status 0x%08x for fn %d continuation record (remainingSize=0x%x)! *lastSequence == (firstSequence + recordCount)**lastSequence == (firstSequence + recordCount)*NVRM: rpcRecvPoll timedout for fn %d sequence %d! **NVRM: rpcRecvPoll timedout for fn %d sequence %d! *NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %d! **NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %d! *entryLength <= pRpc->maxRpcSize**entryLength <= pRpc->maxRpcSize*NVRM: rpcRecvPoll timedout for fn %d sequence %d continuation record (remainingSize=0x%x)! **NVRM: rpcRecvPoll timedout for fn %d sequence %d continuation record (remainingSize=0x%x)! *NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %d continuation record! (remainingSize=0x%x) **NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %d continuation record! (remainingSize=0x%x) *entryLength >= sizeof(rpc_message_header_v)**entryLength >= sizeof(rpc_message_header_v)*recordCount == 0**recordCount == 0*waitSequence - 1 == lastSequence**waitSequence - 1 == lastSequence*NVRM: RPC failed with status 0x%08x for fn %d! **NVRM: RPC failed with status 0x%08x for fn %d! *NVRM: rpcSendMessage async failed with status 0x%08x for fn %d! **NVRM: rpcSendMessage async failed with status 0x%08x for fn %d! *NVRM: failed to allocate RPC meter memory! **NVRM: failed to allocate RPC meter memory! *rpcData*rpcDataTag*rpcExtraData*NVRM: rpcSendMessage failed with status 0x%08x for fn %d sequence %d! **NVRM: rpcSendMessage failed with status 0x%08x for fn %d sequence %d! 
*call to _gspHibernationBufAvailableData*NVRM: Timeout while waiting for available data in the hibernation buffer **NVRM: Timeout while waiting for available data in the hibernation buffer *available_data*call to _transferDataFromGspHibernationBuf*NVRM: _transferDataFromGspHibernationBuf failed with status 0x%08x **NVRM: _transferDataFromGspHibernationBuf failed with status 0x%08x *NVRM: Hibernation Data Buffer is NULL **NVRM: Hibernation Data Buffer is NULL *bytes_to_transfer*call to _gspHibernationBufFreeSpace*NVRM: Timeout while waiting for free space in the hibernation buffer **NVRM: Timeout while waiting for free space in the hibernation buffer *bytes_written*call to _transferDataToGspHibernationBuf*NVRM: Complete data not restored to GSP plugin **NVRM: Complete data not restored to GSP plugin *NVRM: rpcRecvPoll timedout for fn %d sequence %u! **NVRM: rpcRecvPoll timedout for fn %d sequence %u! *NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %u! **NVRM: rpcRecvPoll failed with status 0x%08x for fn %d sequence %u! *call to _vgpuGspWaitForResponse*pSequence != NULL**pSequence != NULL*call to _vgpuGspSendRpcRequest*NVRM: virtual function not implemented. **NVRM: virtual function not implemented. *bRpcInitialized*bGspPlugin*NVRM: NVRM_RPC: SET_GUEST_SYSTEM_INFO : failed. **NVRM: NVRM_RPC: SET_GUEST_SYSTEM_INFO : failed. *vGPU type is not supported**vGPU type is not supported*call to _setupSysmemPfnBitMap*NVRM: RPC: Sysmem PFN bitmap setup failed: 0x%x **NVRM: RPC: Sysmem PFN bitmap setup failed: 0x%x *call to updateSharedBufferInfoInSysmemPfnBitMap*NVRM: RPC: Sysmem PFN bitmap update for shared buffer sysmem pages failed: 0x%x **NVRM: RPC: Sysmem PFN bitmap update for shared buffer sysmem pages failed: 0x%x *bVncSupported*bVncConnected*bECCSupported*bECCEnabled*Guest explicitly disabled ECC support. **Guest explicitly disabled ECC support. *Error: Guest trying to enable ECC on unsupported configuration. 
**Error: Guest trying to enable ECC on unsupported configuration. *guestEccStatus*call to _setupGspControlBuffer*NVRM: RPC: GSP Shared memory setup failed: 0x%x **NVRM: RPC: GSP Shared memory setup failed: 0x%x *call to _setupGspResponseBuffer*NVRM: RPC: GSP Response memory setup failed: 0x%x **NVRM: RPC: GSP Response memory setup failed: 0x%x *call to _setupGspMessageBuffer*NVRM: RPC: GSP Message buffer setup failed: 0x%x **NVRM: RPC: GSP Message buffer setup failed: 0x%x *gspMessageBuf*largeRpcSize*call to _setupGspEventInfrastructure*NVRM: RPC: Event setup failed: 0x%x **NVRM: RPC: Event setup failed: 0x%x *call to _setupGspSharedMemory*NVRM: RPC: Shared memory setup failed: 0x%x **NVRM: RPC: Shared memory setup failed: 0x%x *call to _setupGspDebugBuff*NVRM: RPC: Debug memory setup failed: 0x%x **NVRM: RPC: Debug memory setup failed: 0x%x *pVSInfo*call to _setupGspHibernateShrdBuff*NVRM: RPC: Hibernate memory setup failed: 0x%x **NVRM: RPC: Hibernate memory setup failed: 0x%x *call to vgpuUpdateGuestOsType*call to _vgpuGspSetupCommunicationWithPlugin*NVRM: RPC: GSP Setup failed: 0x%x **NVRM: RPC: GSP Setup failed: 0x%x *call to setGuestEccStatus*call to _tryEnableGspDebugBuff*NVRM: RPC: Enable debug buffer failed: 0x%x **NVRM: RPC: Enable debug buffer failed: 0x%x *bGspBuffersInitialized*call to _vgpuGspTeardownCommunicationWithPlugin*call to _teardownGspHibernateShrdBuff*call to _teardownGspDebugBuff*call to _teardownGspSharedMemory*call to _teardownGspEventInfrastructure*call to _teardownGspMessageBuffer*call to _teardownGspResponseBuffer*call to _teardownGspControlBuffer*responseBuf*gspResponseBufInfo*msgBuf*sharedMem*sharedMemory*eventBuf*eventRing*bar2Offset*sysmemBitMapTablePfn*call to vgpuGspSysmemPfnMakeBufferAddress*gspCtrlBufInfo*addrCtrlBuf*call to _vgpuGspSendSetupRequest*NVRM: Communication setup with GSP plugin failed 0x%x **NVRM: Communication setup with GSP plugin failed 0x%x *NVRM: RPC: Response buf addr IOVA 0x%llx **NVRM: RPC: Response buf 
addr IOVA 0x%llx *gfn*NVRM: RPC: Control buf addr IOVA 0x%llx **NVRM: RPC: Control buf addr IOVA 0x%llx *NVRM: RPC: Version 0x%x **NVRM: RPC: Version 0x%x *NVRM: RPC: Requested GSP caps 0x%x **NVRM: RPC: Requested GSP caps 0x%x *NVRM: RPC: Enabled GSP caps 0x%x **NVRM: RPC: Enabled GSP caps 0x%x *NVRM: RPC: Control buf addr 0x%llx **NVRM: RPC: Control buf addr 0x%llx *NVRM: RPC: Response buf addr 0x%llx **NVRM: RPC: Response buf addr 0x%llx *NVRM: RPC: Message buf addr 0x%llx **NVRM: RPC: Message buf addr 0x%llx *NVRM: RPC: Message buf BAR2 offset 0x%llx **NVRM: RPC: Message buf BAR2 offset 0x%llx *NVRM: RPC: Shared buf addr 0x%llx **NVRM: RPC: Shared buf addr 0x%llx *NVRM: RPC: Shared buf BAR2 offset 0x%llx **NVRM: RPC: Shared buf BAR2 offset 0x%llx *NVRM: RPC: Event buf addr 0x%llx **NVRM: RPC: Event buf addr 0x%llx *NVRM: RPC: Event buf BAR2 offset 0x%llx **NVRM: RPC: Event buf BAR2 offset 0x%llx *NVRM: RPC: Debug buf addr 0x%llx **NVRM: RPC: Debug buf addr 0x%llx *debugBuf*NVRM: Communication teardown with GSP Plugin failed 0x%x **NVRM: Communication teardown with GSP Plugin failed 0x%x *call to _vgpuGspSendRequest*NVRM: RPC: Invalid address space %d **NVRM: RPC: Invalid address space %d *NVRM: RPC: Invalid buffer size %lld **NVRM: RPC: Invalid buffer size %lld *call to _freeRpcMemDesc*call to kbusIsBar2Initialized*call to _allocRpcMemDesc*NVRM: RPC: GSP Message memory setup failed: 0x%x **NVRM: RPC: GSP Message memory setup failed: 0x%x **gspResponseBuf**gspCtrlBuf*bAllocGspBufferInSysmem*NVRM: vGPU type is not supported**NVRM: vGPU type is not supported*(pVGpu->eventRing.mem.pMemory != NULL)**(pVGpu->eventRing.mem.pMemory != NULL)*NVRM: RPC: Failed to set GUEST_SYSTEM_INFO on resume from hibernate:0x%x **NVRM: RPC: Failed to set GUEST_SYSTEM_INFO on resume from hibernate:0x%x *call to _objrpcAssignIpVersion*call to rpcStructureCopySetIpVersion*call to _objrpcStructureCopyAssignIpVersion*bSysmemPfnInfoInitialized*call to 
_freeSysmemPfnRing*nodeNext**nodeNext*call to vgpuFreeSysmemPfnBitMapNode*sysmemPfnRefCount**sysmemPfnRefCount*guestMaxPfn*call to vgpuUpdateSysmemPfnBitMap*NVRM: Failed to update sysmemPfnMap info in PFN bitmap, error 0x%x **NVRM: Failed to update sysmemPfnMap info in PFN bitmap, error 0x%x *bAddedToBitmap*NVRM: Failed to update ctrl buff sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update ctrl buff sysmem info in PFN bitmap, error 0x%x *NVRM: Failed to update response buff sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update response buff sysmem info in PFN bitmap, error 0x%x *NVRM: Failed to update message buff sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update message buff sysmem info in PFN bitmap, error 0x%x *NVRM: Failed to update event mem sysmem in PFN bitmap, error 0x%x **NVRM: Failed to update event mem sysmem in PFN bitmap, error 0x%x *NVRM: Failed to update shared mem sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update shared mem sysmem info in PFN bitmap, error 0x%x *debugBuff*NVRM: Failed to update debug memory sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update debug memory sysmem info in PFN bitmap, error 0x%x *NVRM: Failed to update PFN bitmap sysmem info in PFN bitmap, error 0x%x **NVRM: Failed to update PFN bitmap sysmem info in PFN bitmap, error 0x%x *call to _initSysmemPfnRing*NVRM: NVRM_RPC: Failed to init sysmem pfn ring **NVRM: NVRM_RPC: Failed to init sysmem pfn ring *call to vgpuAllocSysmemPfnBitMapNode*NVRM: NVRM_RPC: Failed to alloc sysmem pfn bitmap node **NVRM: NVRM_RPC: Failed to alloc sysmem pfn bitmap node *NVRM: failed to allocate sysmem pfn refcount array **NVRM: failed to allocate sysmem pfn refcount array *NVRM: RPC: PFN ring setup failed: 0x%x **NVRM: RPC: PFN ring setup failed: 0x%x *sysmemPfnRing_pfn*Alloc shared hibernation buffer for vGPU GSP **Alloc shared hibernation buffer for vGPU GSP *NVRM: RPC: GSP hibernate buffer setup failed: 0x%x **NVRM: RPC: GSP hibernate 
buffer setup failed: 0x%x *shmInterruptActive*call to _freeSharedMemory*call to _allocSharedMemory*_allocSharedMemory(pGpu, pVGpu, addressSpace, memFlags)**_allocSharedMemory(pGpu, pVGpu, addressSpace, memFlags)**shared_memory*call to _freeRpcMemDescSysmem*call to _freeRpcMemDescFb*NVRM: RPC: unknown memory address space %d **NVRM: RPC: unknown memory address space %d *call to _allocRpcMemDescSysmem*call to _allocRpcMemDescFb*call to _freeRpcMemDescFbBar2Virtual*call to _allocRpcMemDescFbBar2Virtual*memdescCreate(ppMemDesc, pGpu, size, RM_PAGE_SIZE, bContig, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)**memdescCreate(ppMemDesc, pGpu, size, RM_PAGE_SIZE, bContig, ADDR_FBMEM, NV_MEMORY_UNCACHED, MEMDESC_FLAGS_NONE)*NVRM: RPC: BAR2 map failed **NVRM: RPC: BAR2 map failed *memdescCreate(ppMemDesc, pGpu, size, 0, bContig, ADDR_SYSMEM, NV_MEMORY_CACHED, memdescFlag)**memdescCreate(ppMemDesc, pGpu, size, 0, bContig, ADDR_SYSMEM, NV_MEMORY_CACHED, memdescFlag)*memdescMapOld(*ppMemDesc, 0, size, memdescGetFlag(*ppMemDesc, MEMDESC_FLAGS_KERNEL_MODE), NV_PROTECT_READ_WRITE, ppMemBuffer, ppMemPriv)**memdescMapOld(*ppMemDesc, 0, size, memdescGetFlag(*ppMemDesc, MEMDESC_FLAGS_KERNEL_MODE), NV_PROTECT_READ_WRITE, ppMemBuffer, 
ppMemPriv)*guestOsType*putRestoreHibernateBuf*getSaveHibernateBuf*pParams_v25_13**pParams_v25_13*pParams_v25_06**pParams_v25_06*bOneToOneComptagLineAllocation*bUseOneToFourComptagLineAllocation*bUseRawModeComptaglineAllocation*bDisableCompbitBacking*bDisablePostL2Compression*bEnabledEccFBPA*bL2PreFill*bFbpaPresent*comprPageSize*comprPageShift*ramType*ltsPerLtcCount*pParams_v28_04**pParams_v28_04*faultId*typeEnum*resetId*devicePriBase*isEngine*rlEngId*groupId*ginTargetId*deviceBroadcastPriBase*groupLocalInstanceId*pParams_v27_05**pParams_v27_05*pParams_v25_05**pParams_v25_05*pParams_v**pParams_v*bGpuSupportsFabricProbe*pParams_v25_01**pParams_v25_01*pParams_v2C_07**pParams_v2C_07*pParams_v25_00**pParams_v25_00*c2c_info_v22_01**c2c_info_v22_01*ce_get_all_caps_v21_0A**ce_get_all_caps_v21_0A*pcie_supported_gpu_atomics_v1F_08**pcie_supported_gpu_atomics_v1F_08*vgpu_get_config_params_v21_0C**vgpu_get_config_params_v21_0C*frameRateLimiter*swVSyncEnabled*cudaEnabled*pluginPteBlitEnabled*disableWddm1xPreemption*debugBufferSize*debugBuffer**debugBuffer***debugBuffer*mappableCpuHostAperture*linuxInterruptOptimization*vgpuDeviceCapsBits*maxPixels*uvmEnabledFeatures*enableKmdSysmemScratch*range_params_v1A_18**range_params_v1A_18*get_nvlink_caps_v2B_11**get_nvlink_caps_v2B_11*discoveredLinks_s**discoveredLinks_s*discoveredLinks_d**discoveredLinks_d*get_nvlink_caps_v15_02**get_nvlink_caps_v15_02*vgpu_ce_get_caps_v2_v24_09**vgpu_ce_get_caps_v2_v24_09*vgpu_get_latency_buffer_size_v27_02**vgpu_get_latency_buffer_size_v27_02*vgpu_get_latency_buffer_size_v1C_09**vgpu_get_latency_buffer_size_v1C_09*vgpu_bsp_get_caps_v25_00**vgpu_bsp_get_caps_v25_00*bspCaps**bspCaps*vgpu_static_properties_v29_03**vgpu_static_properties_v29_03*bProfilingTracingEnabled*bDebuggingEnabled*channelCount*bPblObjNotPresent*firstAsyncCEIdx*vgpu_static_properties_v26_03**vgpu_static_properties_v26_03*vgpu_static_properties_v1B_01**vgpu_static_properties_v1B_01*get_zcull_info_params_12_01**get_zcull_info_params_12_
01*widthAlignPixels*heightAlignPixels*pixelSquaresByAliquots*aliquotTotal*zcullRegionByteMultiplier*zcullRegionHeaderSize*zcullSubregionHeaderSize*subregionCount*subregionWidthAlignPixels*subregionHeightAlignPixels*ccuSampleInfoParams_v29_05**ccuSampleInfoParams_v29_05*ccuSampleSize*execSyspipeInfo_v26_01**execSyspipeInfo_v26_01*eccStatusParams_v2C_02**eccStatusParams_v2C_02*bFatalPoisonError*scrubComplete*dbeNonResettable*sbeNonResettable*eccStatusParams_v28_08**eccStatusParams_v28_08*eccStatusParams_v28_01**eccStatusParams_v28_01*eccStatusParams_v27_04**eccStatusParams_v27_04*eccStatusParams_v26_02**eccStatusParams_v26_02*eccStatusParams_v24_06**eccStatusParams_v24_06*gpu_partition_info_v28_02**gpu_partition_info_v28_02*gpu_partition_info_v24_05**gpu_partition_info_v24_05*zbcTableSizes_v1A_07**zbcTableSizes_v1A_07*execPartitionInfo_v24_05**execPartitionInfo_v24_05*ciProfiles_v20_04**ciProfiles_v20_04*ciProfiles_v20_04->profileCount <= NV_ARRAY_ELEMENTS(ciProfiles->profiles)*src/kernel/vgpu/rpcstructurecopy.c**ciProfiles_v20_04->profileCount <= 
NV_ARRAY_ELEMENTS(ciProfiles->profiles)**src/kernel/vgpu/rpcstructurecopy.c*fbRegionInfoParams_v2B_02**fbRegionInfoParams_v2B_02*fbRegionInfoParams_v03_00**fbRegionInfoParams_v03_00*sku_info_v25_0E**sku_info_v25_0E*BoardID*skuConfigVersion*chipSKUMod**chipSKUMod*CDP**CDP*projectSKUMod**projectSKUMod*businessCycle*gid_info_v03_00**gid_info_v03_00*vgx_system_info_v03_00**vgx_system_info_v03_00*szHostDriverVersionBuffer**szHostDriverVersionBuffer*szHostVersionBuffer**szHostVersionBuffer*szHostTitleBuffer**szHostTitleBuffer*szPluginTitleBuffer**szPluginTitleBuffer*szHostUnameBuffer**szHostUnameBuffer*iHostChangelistNumber*iPluginChangelistNumber*vgpu_static_data_v2B_08**vgpu_static_data_v2B_08*fbTaxLength*fbBusWidth*fbioMask*fbpMask*ltsCount*sizeL2Cache*poisonFuseEnabled*guestManagedHwAlloc*gpuName*bSplitVasBetweenServerClientRm*bPerRunlistChannelRamEnabled*bAtsSupported*bC2CLinkUp*bSelfHostedMode*ceFaultMethodBufferDepth*pcieGpuLinkCaps*vgpu_static_data_v2A_07**vgpu_static_data_v2A_07*vgpu_static_data_v27_01**vgpu_static_data_v27_01*vgpu_static_data_v27_00**vgpu_static_data_v27_00*vgpu_static_data_v25_0E**vgpu_static_data_v25_0E*gr_pdb_properties_v1E_02**gr_pdb_properties_v1E_02*fecs_trace_defines_v1D_04**fecs_trace_defines_v1D_04*timestampHiTagMask*timestampHiTagShift*timestampVMask*numLowerBitsZeroShift*fecs_record_size_v1B_05**fecs_record_size_v1B_05*zcull_info_v1B_05**zcull_info_v1B_05*floorsweep_mask_params_v2B_01**floorsweep_mask_params_v2B_01*mmuPerGpc**mmuPerGpc*physGpcMask*floorsweep_mask_params_v1D_03**floorsweep_mask_params_v1D_03*throttle_ctrl_v2B_10**throttle_ctrl_v2B_10*rate_modifier_v2B_06**rate_modifier_v2B_06*rate_modifier_v1B_05**rate_modifier_v1B_05*pParams_v25_0B**pParams_v25_0B*ctx_buff_info_v25_07**ctx_buff_info_v25_07*ppc_mask_v1C_06**ppc_mask_v1C_06*rop_info_v1B_05**rop_info_v1B_05*ropUnitCount*ropOperationsFactor*ropOperationsCount*sm_order_v2B_0B**sm_order_v2B_0B*sm_order_v2A_02**sm_order_v2A_02*sm_order_v1F_01**sm_order_v1F_01*gr_info_v2C_03*
*gr_info_v2C_03*gr_info_v29_00**gr_info_v29_00*gr_info_v24_07**gr_info_v24_07*gr_get_sm_issue_throttle_ctrl_v2B_10**gr_get_sm_issue_throttle_ctrl_v2B_10*gr_get_sm_issue_rate_modifier_v2B_06**gr_get_sm_issue_rate_modifier_v2B_06*gr_get_sm_issue_rate_modifier_v1A_1F**gr_get_sm_issue_rate_modifier_v1A_1F*bus_get_info_v2_v1C_09**bus_get_info_v2_v1C_09*vgpu_fb_get_dynamic_blacklisted_pages_v1A_07**vgpu_fb_get_dynamic_blacklisted_pages_v1A_07*vgpu_fifo_get_device_info_table_v1A_07**vgpu_fifo_get_device_info_table_v1A_07*mc_get_static_intr_table_v1E_09**mc_get_static_intr_table_v1E_09*nv2080IntrType*intrVectorStall*mc_get_engine_notification_intr_vectors_v16_00**mc_get_engine_notification_intr_vectors_v16_00*notificationIntrVector*vgpu_fb_get_ltc_info_for_fbp_v1A_0D**vgpu_fb_get_ltc_info_for_fbp_v1A_0D*pstateParams*NewPstate*src/kernel/vgpu/vgpu_events.c*NVRM: GPU sanity check failed! gpuInstance = 0x%x. **src/kernel/vgpu/vgpu_events.c**NVRM: GPU sanity check failed! gpuInstance = 0x%x. *call to vgpuServiceGspPlugin*call to vgpuServiceEvents*call to _readEventBufGet*call to _readEventBufPut*call to vgpuServiceEventGuestAllocated*call to vgpuServiceEventRC*call to vgpuServiceEventVnc*call to vgpuServiceEventPstate*call to vgpuServiceEventEcc*call to vgpuServiceEventNvencReportingState*call to vgpuServiceEventInbandResponse*call to vgpuServiceEventTracing*NVRM: Unsupported vgpu event type %d **NVRM: Unsupported vgpu event type %d *call to _writeEventBufGet*call to gspTraceServiceVgpuEventTracing*NVRM: Failed to schedule Pstate callback! 0x%x **NVRM: Failed to schedule Pstate callback! 0x%x *NVRM: SET_SURFACE_PROPERTY RPC failed with error : 0x%x **NVRM: SET_SURFACE_PROPERTY RPC failed with error : 0x%x *NVRM: ROBUST_CHANNEL error occurred (hClient = 0x%x hFifo = 0x%x chID = %d exceptType = %d engineID = 0x%x (0x%x)) ... **NVRM: ROBUST_CHANNEL error occurred (hClient = 0x%x hFifo = 0x%x chID = %d exceptType = %d engineID = 0x%x (0x%x)) ... 
*call to vgpuRcErrorRecovery*call to krcErrorInvokeCallback_IMPL*NVRM: _setupGspEventInfrastructure: GSP Event buf memory setup failed: 0x%x **NVRM: _setupGspEventInfrastructure: GSP Event buf memory setup failed: 0x%x *getEventBuf*listCount(&(vgpuSysmemPfnInfo.listVgpuSysmemPfnBitmapHead)) > 0*src/kernel/vgpu/vgpu_util.c**listCount(&(vgpuSysmemPfnInfo.listVgpuSysmemPfnBitmapHead)) > 0**src/kernel/vgpu/vgpu_util.c*(vgpuSysmemPfnInfo.sysmemPfnRefCount != NULL)**(vgpuSysmemPfnInfo.sysmemPfnRefCount != NULL)*NVRM: Update sysmem pfn bitmap for pfn: 0x%llx > guestMaxPfn: 0x%llx **NVRM: Update sysmem pfn bitmap for pfn: 0x%llx > guestMaxPfn: 0x%llx *call to vgpuExpandSysmemPfnBitMapList*NVRM: Cannot re-allocate sysmem PFN bitmap :%x **NVRM: Cannot re-allocate sysmem PFN bitmap :%x *call to _updateSysmemPfnBitMap*NVRM: Sysmem PFN bitmap update failed :%x **NVRM: Sysmem PFN bitmap update failed :%x *bitmapNodeIndex*nodeBitIndex*bitmapNodes**bitmapNodes***bitmapNodes*sysmemPfnMap*vgpuSysmemPfnInfo.sysmemPfnRefCount[pfn] > 0**vgpuSysmemPfnInfo.sysmemPfnRefCount[pfn] > 0*NVRM: Failed to alloc sysmem pfn bitmap node **NVRM: Failed to alloc sysmem pfn bitmap node *temp_pfn_ref_count**temp_pfn_ref_count*NVRM: Invalid argumets passed while allocating sysmem pfn bitmap node **NVRM: Invalid argumets passed while allocating sysmem pfn bitmap node *NVRM: failed to allocate memory for sysmem pfn bitmap node **NVRM: failed to allocate memory for sysmem pfn bitmap node **pMemDesc_sysmemPfnMap**sysmemPfnMap*sysmemPfnMap_priv**sysmemPfnMap_priv*nodeStartPfn*nodeEndPfn*NVRM: Cannot alloc memory descriptor for sysmem pfn bitmap node (size = 0x%llx) **NVRM: Cannot alloc memory descriptor for sysmem pfn bitmap node (size = 0x%llx) *NVRM: Cannot alloc sysmem pfn bitmap node buffer **NVRM: Cannot alloc sysmem pfn bitmap node buffer *NVRM: Cannot map sysmem pfn bitmap node buffer (size = 0x%llx) **NVRM: Cannot map sysmem pfn bitmap node buffer (size = 0x%llx) 
*sysmemPfnRing*sysmemBitmapRootNode**sysmemBitmapRootNode*nodePfns**nodePfns*NVRM: Exhausted limit of dirty sysmem tracking. Migration will not work correctly. **NVRM: Exhausted limit of dirty sysmem tracking. Migration will not work correctly. *nodeCount*NVRM: Dirty sysmem pfn: Invlid address space %d **NVRM: Dirty sysmem pfn: Invlid address space %d *num_swrl*engineFifoListNumEntries != 0*src/kernel/virtualization/common_vgpu_mgr.c**engineFifoListNumEntries != 0**src/kernel/virtualization/common_vgpu_mgr.c*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_FIFO_TAG, engineFifoList[i].engineData[ENGINE_INFO_TYPE_FIFO_TAG], ENGINE_INFO_TYPE_RUNLIST, &runlistId) == NV_OK**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_FIFO_TAG, engineFifoList[i].engineData[ENGINE_INFO_TYPE_FIFO_TAG], ENGINE_INFO_TYPE_RUNLIST, &runlistId) == NV_OK*call to kfifoChidMgrFreeSystemChids_IMPL*kfifoChidMgrFreeSystemChids(pGpu, pKernelFifo, pChidMgr, gfid, pChidOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, engineFifoList) == NV_OK**kfifoChidMgrFreeSystemChids(pGpu, pKernelFifo, pChidMgr, gfid, pChidOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, engineFifoList) == NV_OK*call to kvgpuMgrGetSwizzIdFromDevice*call to kvgpuMgrGetHeterogeneousModePerGI*kvgpuMgrGetHeterogeneousModePerGI(pGpu, swizzId, &bHeterogeneousModeEnabled)**kvgpuMgrGetHeterogeneousModePerGI(pGpu, swizzId, &bHeterogeneousModeEnabled)*call to kvgpumgrHeterogeneousGetChidOffset*kvgpumgrHeterogeneousGetChidOffset(vgpuTypeInfo->vgpuTypeId, placementId, numChannels, &heapOffset)**kvgpumgrHeterogeneousGetChidOffset(vgpuTypeInfo->vgpuTypeId, placementId, numChannels, &heapOffset)*call to kvgpumgrHomogeneousGetChidOffset*kvgpumgrHomogeneousGetChidOffset(vgpuTypeInfo->vgpuTypeId, placementId, numChannels, &heapOffset)**kvgpumgrHomogeneousGetChidOffset(vgpuTypeInfo->vgpuTypeId, placementId, numChannels, &heapOffset)*kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, 
ENGINE_INFO_TYPE_FIFO_TAG, engineFifoList[i].engineData[ENGINE_INFO_TYPE_FIFO_TAG], ENGINE_INFO_TYPE_RUNLIST, &runlistId)**kfifoEngineInfoXlate_HAL(pGpu, pKernelFifo, ENGINE_INFO_TYPE_FIFO_TAG, engineFifoList[i].engineData[ENGINE_INFO_TYPE_FIFO_TAG], ENGINE_INFO_TYPE_RUNLIST, &runlistId)*call to kfifoChidMgrReserveSystemChids_IMPL*kfifoChidMgrReserveSystemChids(pGpu, pKernelFifo, pChidMgr, currentNumChannels, flags, gfid, pChidOffset, heapOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, engineFifoList)**kfifoChidMgrReserveSystemChids(pGpu, pKernelFifo, pChidMgr, currentNumChannels, flags, gfid, pChidOffset, heapOffset, pChannelCount, pMigDevice, engineFifoListNumEntries, engineFifoList)*maxInstance*maxResolutionX*maxResolutionY*frlConfig*eccSupported*gpuInstanceSize*multiVgpuSupported*vdevId*pdevId*gspHeapSize*fbReservation*mappableVideoSize*bar1Length*gpuDirectSupported*nvlinkP2PSupported*maxInstancePerGI*multiVgpuExclusive*frlEnable*vgpuName**vgpuName*license**license*vgpuSignature**vgpuSignature*call to getGridLicenseProductName*licenseProductNameBuffer**licenseProductNameBuffer*licensedProductName**licensedProductName*GRID-Virtual-PC,2.0;Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0**GRID-Virtual-PC,2.0;Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0**NVIDIA Virtual PC*GRID-Virtual-Apps,3.0**GRID-Virtual-Apps,3.0**NVIDIA Virtual Applications*Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0**Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0**NVIDIA RTX Virtual Workstation*GRID-vGaming,8.0**GRID-vGaming,8.0**NVIDIA Cloud Gaming*NVIDIA-vComputeServer,9.0**NVIDIA-vComputeServer,9.0**NVIDIA Virtual Compute Server*result != NULL*src/kernel/virtualization/hypervisor/hyperv/hyperv.c**result != NULL**src/kernel/virtualization/hypervisor/hyperv/hyperv.c*NVRM: CPUID is NOT supported! **NVRM: CPUID is NOT supported! 
**HyperV**Microsoft Hv*peerCliqueId*bDetected*vmmSignature**vmmSignature*src/kernel/virtualization/hypervisor/hypervisor.c**src/kernel/virtualization/hypervisor/hypervisor.c*hypervisorSig**hypervisorSig*call to _hypervisorDetection_HVM*bIsHypervHost*NVRM: Found HVM kernel running on hypervisor: %s. *hypervisorName**NVRM: Found HVM kernel running on hypervisor: %s. *NVRM: Found PV kernel running with vGPU hypervisor. **NVRM: Found PV kernel running with vGPU hypervisor. *call to _hypervisorCheckVirtualPcieP2PApproval*call to _hypervisorCheckVirtualPcieP2PGeneralApproval*call to _hypervisorLoad**KVM**KVMKVMKVM
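The sysmem PFN strings (sysmemPfnRefCount, guestMaxPfn, the "pfn > guestMaxPfn" error, and the `sysmemPfnRefCount[pfn] > 0` assert) suggest a refcounted bitmap of guest page frames backing RPC buffers. The following is a minimal illustrative sketch of that bookkeeping pattern, not the driver's implementation; all names and the fixed GUEST_MAX_PFN bound are assumptions.

```c
#include <stdint.h>

/* Illustrative sketch of a refcounted sysmem PFN bitmap: a bit marks
 * each tracked guest PFN, and a per-PFN refcount lets several buffers
 * register the same PFN. GUEST_MAX_PFN is a stand-in for guestMaxPfn. */
#define GUEST_MAX_PFN 4096ULL

static uint64_t bitmap[GUEST_MAX_PFN / 64];
static uint32_t refCount[GUEST_MAX_PFN];

int pfnBitmapAdd(uint64_t pfn)
{
    if (pfn >= GUEST_MAX_PFN)
        return -1;                      /* mirrors "pfn > guestMaxPfn" error */
    if (refCount[pfn]++ == 0)
        bitmap[pfn / 64] |= 1ULL << (pfn % 64);
    return 0;
}

int pfnBitmapRemove(uint64_t pfn)
{
    if (pfn >= GUEST_MAX_PFN || refCount[pfn] == 0)
        return -1;                      /* mirrors refCount[pfn] > 0 assert */
    if (--refCount[pfn] == 0)
        bitmap[pfn / 64] &= ~(1ULL << (pfn % 64));
    return 0;
}
```

The bit is set on the first reference and cleared only when the last reference drops, so overlapping buffer registrations stay consistent.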
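The GRID license feature strings appear paired with marketed product names near getGridLicenseProductName. A table-driven lookup like the sketch below would reproduce those pairs; the pairs themselves come from the extracted strings, while the function name and table layout are assumptions about how the real lookup works.

```c
#include <string.h>

/* Hypothetical sketch of a getGridLicenseProductName()-style lookup:
 * maps a GRID license feature string to its product name. The pairs
 * are exactly those present in the extracted string table. */
static const struct {
    const char *featureString;
    const char *productName;
} gridLicenseTable[] = {
    { "GRID-Virtual-PC,2.0;Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0",
      "NVIDIA Virtual PC" },
    { "GRID-Virtual-Apps,3.0", "NVIDIA Virtual Applications" },
    { "Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0",
      "NVIDIA RTX Virtual Workstation" },
    { "GRID-vGaming,8.0", "NVIDIA Cloud Gaming" },
    { "NVIDIA-vComputeServer,9.0", "NVIDIA Virtual Compute Server" },
};

const char *gridLicenseProductName(const char *featureString)
{
    for (size_t i = 0; i < sizeof(gridLicenseTable) / sizeof(gridLicenseTable[0]); i++) {
        if (strcmp(gridLicenseTable[i].featureString, featureString) == 0)
            return gridLicenseTable[i].productName;
    }
    return NULL; /* unknown license string */
}
```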
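The hypervisor-detection strings ("NVRM: CPUID is NOT supported!", vmmSignature, and the vendor signatures "KVMKVMKVM" and "Microsoft Hv") point at the standard CPUID-based detection scheme: the hypervisor vendor leaf 0x40000000 returns a 12-byte signature in EBX:ECX:EDX. The sketch below only classifies an already-read signature; actually issuing CPUID (and first checking the hypervisor-present bit, CPUID.1:ECX bit 31) is deliberately left out, and the type names are assumptions.

```c
#include <string.h>

/* Classify a 12-byte CPUID leaf 0x40000000 vendor signature.
 * "KVMKVMKVM" (NUL-padded) and "Microsoft Hv" are the two signatures
 * present in the extracted string table. */
typedef enum { HV_UNKNOWN, HV_KVM, HV_HYPERV } HypervisorType;

HypervisorType classifyHvSignature(const char sig[12])
{
    if (memcmp(sig, "KVMKVMKVM\0\0\0", 12) == 0)
        return HV_KVM;
    if (memcmp(sig, "Microsoft Hv", 12) == 0)
        return HV_HYPERV;
    return HV_UNKNOWN;
}
```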