ax_position_embeddings]`. Use the same value as `max_position_embeddings`.
pad_token_id (`int`, *optional*, defaults to 0):
    The value used to pad `input_ids`.
position_biased_input (`bool`, *optional*, defaults to `False`):
    Whether to add absolute position embeddings to the content embeddings.
pos_att_type (`List[str]`, *optional*):
    The type of relative position attention. It can be a combination of `"p2c"` and `"c2p"`, e.g. `["p2c"]` or `["p2c", "c2p"]`.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
    The epsilon used by the layer normalization layers.

Example:

```python
>>> from transformers import DebertaV2Config, DebertaV2Model

>>> # Initializing a DeBERTa-v2 microsoft/deberta-v2-xlarge style configuration
>>> configuration = DebertaV2Config()

>>> # Initializing a model (with random weights) from the microsoft/deberta-v2-xlarge style configuration
>>> model = DebertaV2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
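As a brief sketch (separate from the reference example above), the parameters documented here can also be overridden directly when constructing a configuration; the values below are purely illustrative, not recommended settings:

```python
>>> from transformers import DebertaV2Config

>>> # Sketch: overriding a few of the parameters documented above with illustrative values
>>> configuration = DebertaV2Config(
...     pos_att_type=["p2c", "c2p"],  # combine p2c and c2p relative position attention
...     position_biased_input=False,  # do not add absolute position embeddings to content embeddings
...     pad_token_id=0,
...     layer_norm_eps=1e-12,
... )

>>> # The overridden values are stored on the configuration object
>>> configuration.pos_att_type
['p2c', 'c2p']
```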