penai/clip-vit-large-patch14"`
    data_range: The maximum value of the input tensor. For example, if the input images are in the range
        [0, 255], `data_range` should be 255. The images are normalized by this value.
    prompts: A string, a tuple of strings, or a nested tuple of strings. If a single string is provided, it
        must be one of the available prompts (see above). Otherwise, the input is expected to be a tuple in
        which each element is either a string or a tuple of strings. If a string is provided, it must be one
        of the available prompts (see above). If a tuple is provided, it must have length 2, where the first
        string is a positive prompt and the second string is a negative prompt.
    kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.

.. note:: If using the default `clip_iqa` model, the package `piq` must be installed. Install it with
    `pip install piq` or `pip install torchmetrics[image]`.

Raises:
    ModuleNotFoundError:
        If the `transformers` package is not installed or its version is lower than 4.10.0.
    ValueError:
        If `prompts` is a tuple and it is not of length 2.
    ValueError:
        If `prompts` is a string and it is not one of the available prompts.
    ValueError:
        If `prompts` is a list of strings and not all strings are one of the available prompts.

Example::
    Single prompt:

    >>> from torchmetrics.multimodal import CLIPImageQualityAssessment
    >>> import torch
    >>> _ = torch.manual_seed(42)
    >>> imgs = torch.randint(255, (2, 3, 224, 224)).float()
    >>> metric = CLIPImageQualityAssessment()
    >>> metric(imgs)
    tensor([0.8894, 0.8902])

Example::
    Multiple prompts:

    >>> from torchmetrics.multimodal import CLIPImageQualityAssessment
    >>> import torch
    >>> _ = torch.manual_seed(42)
    >>> imgs = torch.randint(255, (2, 3, 224, 224)).float()
    >>> metric = CLIPImageQualityAssessment(prompts=("quality", "brightness"))
    >>> metric(imgs)
    {'quality': tensor([0.8894, 0.8902]), 'brightness': tensor([0.5507, 0.5208])}

Example::
    Custom prompts. Must always be a tuple of length 2, with a positive and negative prompt.

    >>> from torchmetrics.multimodal import CLIPImageQualityAssessment
    >>> import torch
    >>> _ = torch.manual_seed(42)
    >>> imgs = torch.randint(255, (2, 3, 224, 224)).float()
    >>> metric = CLIPImageQualityAssessment(prompts=(("Super good photo.", "Super bad photo."), "brightness"))
    >>> metric(imgs)
    {'user_defined_0': tensor([0.9652, 0.9629]), 'brightness': tensor([0.5507, 0.5208])}
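The acceptance rules for the `prompts` argument described above can be sketched in plain Python. This is an illustrative sketch only: `normalize_prompts` and `_PROMPTS` are hypothetical names, and the prompt table below is a made-up subset, not the library's actual internals or its full list of available prompts.

```python
# Hypothetical sketch of how the `prompts` argument is normalized into
# metric names and (positive, negative) prompt pairs. `_PROMPTS` is an
# illustrative subset, not the library's full table of available prompts.
_PROMPTS = {
    "quality": ("Good photo.", "Bad photo."),
    "brightness": ("Bright photo.", "Dark photo."),
}

def normalize_prompts(prompts):
    """Map the user-facing `prompts` argument to names and (positive, negative) pairs."""
    if isinstance(prompts, str):
        prompts = (prompts,)  # a single string behaves like a 1-tuple
    names, pairs = [], []
    count = 0  # counter used to name custom (user-defined) prompt pairs
    for p in prompts:
        if isinstance(p, str):
            # Bare strings must name one of the built-in prompts.
            if p not in _PROMPTS:
                raise ValueError(f"Unknown prompt: {p!r}")
            names.append(p)
            pairs.append(_PROMPTS[p])
        elif isinstance(p, tuple) and len(p) == 2:
            # Custom pairs are (positive, negative) and get generated names.
            names.append(f"user_defined_{count}")
            pairs.append(p)
            count += 1
        else:
            raise ValueError("Custom prompts must be tuples of length 2 (positive, negative)")
    return names, pairs
```

Feeding it the argument from the custom-prompt example above yields the same keys as the example output, `user_defined_0` and `brightness`.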