- test_faketensor: If the operator has a FakeTensor implementation
  (and if it is correct).
- test_aot_dispatch_static: If the operator works with
  AOTAutograd/AOTDispatch, which is one of the components of the PT2
  stack. Checks that the outputs (and gradients, if they are
  computable) of the operator are the same under eager-mode PyTorch
  and ``torch.compile``.
- test_aot_dispatch_dynamic: Same as test_aot_dispatch_static, but
  tests dynamic shapes instead of static shapes.

For best results, please call ``opcheck`` multiple times with a
representative set of inputs. For example, if your operator supports
autograd, please use ``opcheck`` with inputs that ``require_grad``.

Args:
    op: The operator. Should look like ``torch.ops.aten.foo``. Must be
        an instance of ``torch._ops.OpOverload`` (e.g.
        ``torch.ops.aten.sin.default``); anything else is rejected with
        an error like ``opcheck(op, ...): op must be instance of
        torch._ops.OpOverload, e.g. torch.ops.aten.sin.default, got ...``.
    args: The args to the operator.
    kwargs: The kwargs to the operator.
    test_utils: Tests that we should run. Default: all of them.
        Example: ``["test_schema", "test_faketensor"]``
    raise_exception: If we should raise an exception on the first
        error. If False, we will return a dict with information on
        whether each test passed or not.
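For example, a minimal end-to-end sketch (assuming a recent PyTorch
where ``torch.library.custom_op`` and ``torch.library.opcheck`` are
available; the ``mylib::my_sin`` operator and its registrations are
hypothetical, included only to show what each test exercises)::

    import torch
    from torch import Tensor

    # Hypothetical custom operator, defined only for illustration.
    @torch.library.custom_op("mylib::my_sin", mutates_args=())
    def my_sin(x: Tensor) -> Tensor:
        return torch.sin(x)

    # FakeTensor implementation, exercised by test_faketensor.
    @my_sin.register_fake
    def _(x):
        return torch.empty_like(x)

    # Autograd registration, exercised by test_autograd_registration
    # (and by the test_aot_dispatch_* tests when inputs require grad).
    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)

    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        return grad * x.cos()

    my_sin.register_autograd(backward, setup_context=setup_context)

    # Call ``opcheck`` on a representative set of inputs, including one
    # that requires grad, as recommended above.
    for x in [torch.randn(3), torch.randn(4, requires_grad=True)]:
        torch.library.opcheck(torch.ops.mylib.my_sin.default, (x,))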
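Similarly, a hedged sketch of the ``test_utils`` and
``raise_exception`` arguments (``torch.ops.aten.sin.default`` is simply
the overload the error message above cites; the exact contents of the
returned dict are an assumption)::

    import torch

    # Run only a subset of the tests and collect per-test results
    # instead of raising on the first failure.
    results = torch.library.opcheck(
        torch.ops.aten.sin.default,
        (torch.randn(3),),
        test_utils=["test_schema", "test_faketensor"],
        raise_exception=False,
    )
    # ``results`` maps each requested test to its outcome, e.g.
    # {"test_schema": "SUCCESS", "test_faketensor": "SUCCESS"} (assumed).
    print(results)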