are used.

Return:
    Tensor with BLEU Score

Raises:
    ValueError: If ``preds`` and ``target`` corpus have different lengths.
    ValueError: If the list of weights is not ``None`` and its length is not equal to ``n_gram``.

Example:
    >>> from torchmetrics.functional.text import sacre_bleu_score
    >>> preds = ['the cat is on the mat']
    >>> target = [['there is a cat on the mat', 'a cat is on the mat']]
    >>> sacre_bleu_score(preds, target)
    tensor(0.7598)

References:
    [1] BLEU: a Method for Automatic Evaluation of Machine Translation by Papineni, Kishore,
    Salim Roukos, Todd Ward, and Wei-Jing Zhu `BLEU`_

    [2] A Call for Clarity in Reporting BLEU Scores by Matt Post.

    [3] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence
    and Skip-Bigram Statistics by Chin-Yew Lin and Franz Josef Och `Machine Translation Evolution`_