ust not mutate the array ``xi``. See Notes regarding vectorization
    and the dimensionality of the input and output.
x : float array_like
    Points at which to evaluate the Jacobian. Must have at least one
    dimension. See Notes regarding dimensionality and vectorization.
tolerances : dictionary of floats, optional
    Absolute and relative tolerances. Valid keys of the dictionary are:

    - ``atol`` - absolute tolerance on the derivative
    - ``rtol`` - relative tolerance on the derivative

    Iteration will stop when ``res.error < atol + rtol * abs(res.df)``.
    The default `atol` is the smallest normal number of the appropriate
    dtype, and the default `rtol` is the square root of the precision
    of the appropriate dtype.
maxiter : int, default: 10
    The maximum number of iterations of the algorithm to perform.
    See Notes.
order : int, default: 8
    The (positive integer) order of the finite difference formula to be
    used. Odd integers will be rounded up to the next even integer.
initial_step : float array_like, default: 0.5
    The (absolute) initial step size for the finite difference derivative
    approximation. Must be broadcastable with `x` and `step_direction`.
step_factor : float, default: 2.0
    The factor by which the step size is *reduced* in each iteration;
    i.e., the step size in iteration 1 is ``initial_step/step_factor``.
    If ``step_factor < 1``, subsequent steps will be greater than the
    initial step; this may be useful if steps smaller than some threshold
    are undesirable (e.g., due to subtractive cancellation error).
step_direction : integer array_like
    An array representing the direction of the finite difference steps
    (e.g., for use when `x` lies near the boundary of the domain of the
    function). Must be broadcastable with `x` and `initial_step`. Where
    0 (default), central differences are used; where negative (e.g., -1),
    all steps are non-positive; and where positive (e.g., 1), all steps
    are non-negative.
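The interplay of `initial_step`, `step_factor`, `step_direction`, and the
stopping criterion above can be illustrated with a minimal pure-NumPy
sketch. This is *not* SciPy's implementation (which uses higher-order
stencils and array batching); the function name and defaults here are our
own, chosen only to mirror the parameter names documented above.

```python
import numpy as np

def first_derivative(f, x, initial_step=0.5, step_factor=2.0,
                     step_direction=0, atol=1e-12, rtol=1e-8, maxiter=15):
    """Illustrative sketch only, not SciPy's algorithm.

    step_direction=0 uses central differences; +1/-1 keeps all evaluation
    points on one side of x, e.g. near a domain boundary.
    """
    h, df_prev, error = initial_step, None, np.inf
    for nit in range(1, maxiter + 1):
        if step_direction == 0:
            df = (f(x + h) - f(x - h)) / (2 * h)    # central difference
        else:
            s = 1 if step_direction > 0 else -1
            df = s * (f(x + s * h) - f(x)) / h      # one-sided difference
        if df_prev is not None:
            error = abs(df - df_prev)               # error estimate
            if error < atol + rtol * abs(df):       # stopping criterion
                return df, error, nit               # converged
        df_prev = df
        h /= step_factor                            # reduce the step
    return df, error, maxiter                       # iteration limit reached

df, err, nit = first_derivative(np.sin, 1.0)        # df approximates cos(1)
```

Calling, say, ``first_derivative(np.sqrt, 0.25, step_direction=1)`` keeps
every evaluation point within the domain ``x >= 0``, which is the kind of
situation `step_direction` is intended for.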
Returns
-------
res : _RichResult
    An object similar to an instance of `scipy.optimize.OptimizeResult`
    with the following attributes. The descriptions are written as though
    the values will be scalars; however, if `f` returns an array, the
    outputs will be arrays of the same shape.

    success : bool array
        ``True`` where the algorithm terminated successfully (status
        ``0``); ``False`` otherwise.
    status : int array
        An integer representing the exit status of the algorithm.

        - ``0`` : The algorithm converged to the specified tolerances.
        - ``-1`` : The error estimate increased, so iteration was
          terminated.
        - ``-2`` : The maximum number of iterations was reached.
        - ``-3`` : A non-finite value was encountered.

    df : float array
        The Jacobian of `f` at `x`, if the algorithm terminated
        successfully.
    error : float array
        An estimate of the error: the magnitude of the difference between
        the current estimate of the Jacobian and the estimate in the
        previous iteration.
    nit : int array
        The number of iterations of the algorithm that were performed.
    nfev : int array
        The number of points at which `f` was evaluated.

    Each element of an attribute is associated with the corresponding
    element of `df`. For instance, element ``i`` of `nfev` is the number
    of points at which `f` was evaluated for the sake of computing
    element ``i`` of `df`.

See Also
--------
derivative, hessian

Notes
-----
Suppose we wish to evaluate the Jacobian of a function
:math:`f: \mathbf{R}^m \rightarrow \mathbf{R}^n`. Assign to variables
``m`` and ``n`` the positive integer values of :math:`m` and :math:`n`,
respectively, and let ``...`` represent an arbitrary tuple of integers.
If we wish to evaluate the Jacobian at a single point, then:

- argument `x` must be an array of shape ``(m,)``
- argument `f` must be vectorized to accept an array of shape
  ``(m, ...)``. The first axis represents the :math:`m` inputs of
  :math:`f`; the remainder are for evaluating the function at multiple
  points in a single call.
- argument `f` must return an array of shape ``(n, ...)``. The first axis
  represents the :math:`n` outputs of :math:`f`; the remainder are for
  the result of evaluating the function at multiple points.
- attribute ``df`` of the result object will be an array of shape
  ``(n, m)``, the Jacobian.

This function is also vectorized in the sense that the Jacobian can be
evaluated at ``k`` points in a single call. In this case, `x` would be an
array of shape ``(m, k)``, `f` would accept an array of shape
``(m, k, ...)`` and return an array of shape ``(n, k, ...)``, and the
``df`` attribute of the result would have shape ``(n, m, k)``.

Suppose the desired callable ``f_not_vectorized`` is not vectorized; it
can only accept an array of shape ``(m,)``. A simple solution to satisfy
the required interface is to wrap ``f_not_vectorized`` as follows::

    def f(x):
        return np.apply_along_axis(f_not_vectorized, axis=0, arr=x)

Alternatively, suppose the desired callable ``f_vec_q`` is vectorized,
but only for 2-D arrays of shape ``(m, q)``. To satisfy the required
interface, consider::

    def f(x):
        m, batch = x.shape[0], x.shape[1:]  # x.shape is (m, ...)
        x = np.reshape(x, (m, -1))  # `-1` is short for q = prod(batch)
        res = f_vec_q(x)  # pass shape (m, q) to function
        n = res.shape[0]
        return np.reshape(res, (n,) + batch)  # return shape (n, ...)

Then pass the wrapped callable ``f`` as the first argument of `jacobian`.

References
----------
.. [1] Jacobian matrix and determinant, *Wikipedia*,
       https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant

Examples
--------
The Rosenbrock function maps from
:math:`\mathbf{R}^m \rightarrow \mathbf{R}`; the SciPy implementation
`scipy.optimize.rosen` is vectorized to accept an array of shape
``(m, p)`` and return an array of shape ``(p,)``. Suppose we wish to
evaluate the Jacobian (AKA the gradient, because the function returns a
scalar) at ``[0.5, 0.5, 0.5]``.
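To make the first wrapper pattern concrete, here is a self-contained
check with a hypothetical non-vectorized callable. The function
``f_not_vectorized`` and its shapes are our own illustration, not part of
SciPy; the point is that ``np.apply_along_axis`` preserves the batch axes
``...`` of an ``(m, ...)`` input in the ``(n, ...)`` output.

```python
import numpy as np

# A hypothetical callable that only accepts a single point of shape (m,).
def f_not_vectorized(x):
    x1, x2, x3 = x                            # requires exactly shape (3,)
    return np.array([x1 * x2, np.sin(x3)])    # n = 2 outputs

def f(x):
    # Vectorized wrapper: applies f_not_vectorized along axis 0, so any
    # trailing batch axes of the (m, ...) input survive in the output.
    return np.apply_along_axis(f_not_vectorized, axis=0, arr=x)

x = np.ones((3, 10))   # m = 3 inputs, batch of 10 points
out = f(x)             # shape (2, 10): n = 2 outputs per point
```

Note that ``np.apply_along_axis`` loops in Python, so this wrapper trades
speed for convenience; the reshape-based wrapper above avoids that cost
when a partially vectorized callable is available.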
>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> from scipy.optimize import rosen, rosen_der
>>> m = 3
>>> x = np.full(m, 0.5)
>>> res = jacobian(rosen, x)
>>> ref = rosen_der(x)  # reference value of the gradient
>>> res.df, ref
(array([-51., -1., 50.]), array([-51., -1., 50.]))

As an example of a function with multiple outputs, consider Example 4
from [1]_.

>>> def f(x):
...     x1, x2, x3 = x
...     return [x1, 5*x3, 4*x2**2 - 2*x3, x3*np.sin(x1)]

The true Jacobian is given by:

>>> def df(x):
...     x1, x2, x3 = x
...     one = np.ones_like(x1)
...     return [[one, 0*one, 0*one],
...             [0*one, 0*one, 5*one],
...             [0*one, 8*x2, -2*one],
...             [x3*np.cos(x1), 0*one, np.sin(x1)]]

Evaluate the Jacobian at an arbitrary point.

>>> rng = np.random.default_rng(389252938452)
>>> x = rng.random(size=3)
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3)
True
>>> np.allclose(res.df, ref)
True

Evaluate the Jacobian at 10 arbitrary points in a single call.

>>> x = rng.random(size=(3, 10))
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3, 10)
True
>>> np.allclose(res.df, ref)
True