    information on optional return values.
disp : bool, optional
    If True, return a convergence message, followed by `xopt`.
retall : bool, optional
    If True, add to the returned values the results of each iteration.
callback : callable, optional
    An optional user-supplied function, called after each iteration.
    Called as ``callback(xk)``, where ``xk`` is the current value of `x0`.
c1 : float, default: 1e-4
    Parameter for Armijo condition rule.
c2 : float, default: 0.4
    Parameter for curvature condition rule.

Returns
-------
xopt : ndarray
    Parameters which minimize f, i.e., ``f(xopt) == fopt``.
fopt : float, optional
    Minimum value found, ``f(xopt)``. Only returned if `full_output` is True.
func_calls : int, optional
    The number of function calls made. Only returned if `full_output` is True.
grad_calls : int, optional
    The number of gradient calls made. Only returned if `full_output` is True.
warnflag : int, optional
    Integer value with warning status. Only returned if `full_output` is True.

    0 : Success.

    1 : The maximum number of iterations was exceeded.

    2 : Gradient and/or function calls were not changing. May indicate
        that precision was lost, i.e., the routine did not converge.

    3 : NaN result encountered.

allvecs : list of ndarray, optional
    List of arrays, containing the results at each iteration.
    Only returned if `retall` is True.

See Also
--------
minimize : common interface to all `scipy.optimize` algorithms for
           unconstrained and constrained minimization of multivariate
           functions. It provides an alternative way to call ``fmin_cg``,
           by specifying ``method='CG'``.

Notes
-----
This conjugate gradient algorithm is based on that of Polak and Ribiere
[1]_.

Conjugate gradient methods tend to work better when:

1. `f` has a unique global minimizing point, and no local minima or
   other stationary points,
2. `f` is, at least locally, reasonably well approximated by a quadratic
   function of the variables,
3. `f` is continuous and has a continuous gradient,
4. `fprime` is not too large, e.g., has a norm less than 1000,
5. the initial guess, `x0`, is reasonably close to `f`'s global
   minimizing point, `xopt`.

Parameters `c1` and `c2` must satisfy ``0 < c1 < c2 < 1``.

References
----------
.. [1] Wright & Nocedal, "Numerical Optimization", 1999, pp. 120-122.

Examples
--------
Example 1: seek the minimum value of the expression
``a*u**2 + b*u*v + c*v**2 + d*u + e*v + f`` for given values of the
parameters and an initial guess ``(u, v) = (0, 0)``.

>>> import numpy as np
>>> args = (2, 3, 7, 8, 9, 10)  # parameter values
>>> def f(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f
>>> def gradf(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     gu = 2*a*u + b*v + d     # u-component of the gradient
...     gv = b*u + 2*c*v + e     # v-component of the gradient
...     return np.asarray((gu, gv))
>>> x0 = np.asarray((0, 0))  # Initial guess.
>>> from scipy import optimize
>>> res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res1
array([-1.80851064, -0.25531915])
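As a short illustration of the optional outputs listed under Returns
(a minimal sketch, not part of the original examples, reusing `f`,
`gradf`, `args`, and `x0` from Example 1), passing ``full_output=True``
yields the extra values; the final integer is `warnflag`, which is 0 on
success, and `fopt` matches the function value reported above:

>>> xopt, fopt, func_calls, grad_calls, warnflag = optimize.fmin_cg(
...     f, x0, fprime=gradf, args=args, full_output=True, disp=False)
>>> warnflag  # 0 : Success (see the Returns section)
0
>>> round(fopt, 6)
1.617021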
Example 2: solve the same problem using the `minimize` function.
(This `opts` dictionary shows all of the available options, although
in practice only non-default values would be needed. The returned
value is an ``OptimizeResult`` object, a ``dict`` subclass.)

>>> opts = {'maxiter' : None,    # default value.
...         'disp' : True,    # non-default value.
...         'gtol' : 1e-5,    # default value.
...         'norm' : np.inf,  # default value.
...         'eps' : 1.4901161193847656e-08}  # default value.
>>> res2 = optimize.minimize(f, x0, jac=gradf, args=args,
...                          method='CG', options=opts)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res2.x  # minimum found
array([-1.80851064, -0.25531915])
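As a final sketch (again not part of the original examples), the
`callback` hook documented in the Parameters section can be used to
record each iterate; the list name ``iterates`` is introduced here
purely for illustration:

>>> iterates = []  # will hold the current x after each iteration
>>> res3 = optimize.fmin_cg(f, x0, fprime=gradf, args=args,
...                         callback=iterates.append, disp=False)
>>> len(iterates)  # one callback per iteration; Example 1 reported 4
4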