trained('…')` or `AutoTokenizer.from_pretrained('…', use_fast=False)` so that the fast tokenizer works correctly. This issue will be fixed soon; see https://github.com/huggingface/tokenizers/pull/1005.