            of containing ``"input_ids"`` and ``"attention_mask"`` represented by :class:`~torch.Tensor`
            as an input and return the model's output represented by a single :class:`~torch.Tensor`.
        verbose: An indication of whether a progress bar should be displayed during the embeddings' calculation.
        idf: An indication of whether normalization using inverse document frequencies should be used.
        device: A device to be used for calculation.
        max_length: A maximum length of input sequences. Sequences longer than ``max_length`` are to be trimmed.
        batch_size: A batch size used for model processing.
        num_threads: A number of threads to use for a dataloader.
        return_hash: An indication of whether the corresponding ``hash_code`` should be returned.
        lang: A language of input sentences.
        rescale_with_baseline: An indication of whether bertscore should be rescaled with a pre-computed baseline.
            When a pretrained model from ``transformers`` is used, the corresponding baseline is downloaded
            from the original ``bert-score`` package from `BERT_score`_ if available.
            In other cases, please specify a path to the baseline csv/tsv file, which must follow the formatting
            of the files from `BERT_score`_.
        baseline_path: A path to the user's own local csv/tsv file with the baseline scale.
        baseline_url: A URL of the user's own csv/tsv file with the baseline scale.
        kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.

    Example:
        >>> from pprint import pprint
        >>> from torchmetrics.text.bert import BERTScore
        >>> preds = ["hello there", "general kenobi"]
        >>> target = ["hello there", "master kenobi"]
        >>> bertscore = BERTScore()
        >>> pprint(bertscore(preds, target))
        {'f1': tensor([1.0000, 0.9961]),
         'precision': tensor([1.0000, 0.9961]),
         'recall': tensor([1.0000, 0.9961])}