# solver

The solver module defines solvers for problems of the kind res = 0 or ∂inertia/∂t + res = 0, where res is a nutils.sample.Integral. To demonstrate this, consider the following setup:

>>> from nutils import mesh, function, solver
>>> ns = function.Namespace()
>>> domain, ns.x = mesh.rectilinear([4,4])
>>> ns.basis = domain.basis('spline', degree=2)
>>> cons = domain.boundary['left,top'].project(0, onto=ns.basis, geometry=ns.x, ischeme='gauss4')
project > constrained 11/36 dofs, error 0.00e+00/area
>>> ns.u = 'basis_n ?lhs_n'


The function u represents an element from the discrete space, but it cannot be evaluated yet because we have not yet established values for ?lhs. It can, however, be used to construct a residual functional res. Aiming to solve the Poisson problem u_,kk = f, we define the residual functional res = v_,k u_,k + v f and solve for res == 0 using solve_linear:

>>> res = domain.integral('(basis_n,i u_,i + basis_n) d:x' @ ns, degree=2)
>>> lhs = solver.solve_linear('lhs', residual=res, constrain=cons)
solve > solver returned with residual ...


The coefficients lhs represent the solution to the Poisson problem.

In addition to solve_linear, the solver module defines newton and pseudotime for solving nonlinear problems, as well as impliciteuler for time-dependent problems.

nutils.solver.solve_linear(target, residual, constrain=None, *, arguments={}, solveargs={})

solve linear problem

Returns: Array of target values for which residual == 0

Return type: numpy.ndarray
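Since the residual passed to solve_linear is affine in the target, the call reduces to a single constrained linear solve. The helper below is a hypothetical numpy sketch of that reduction, not the nutils implementation; it assumes the nutils convention that NaN entries in the constraints vector mark unconstrained dofs:

```python
import numpy as np

def solve_linear_sketch(jac, res0, cons):
    """Toy analogue of solver.solve_linear: the residual is affine in the
    target, res(lhs) = jac @ lhs + res0, so res == 0 reduces to a linear
    solve on the unconstrained dofs.  NaN entries of `cons` mark free dofs."""
    free = np.isnan(cons)                 # unconstrained dofs
    lhs = np.where(free, 0.0, cons)       # start from the constrained values
    rhs = -(res0 + jac @ lhs)             # move known contributions to the rhs
    lhs[free] = np.linalg.solve(jac[np.ix_(free, free)], rhs[free])
    return lhs
```

After the solve, the residual vanishes on the free dofs while the constrained dofs retain their prescribed values.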

nutils.solver.solve(gen_lhs_resnorm, tol=0.0, maxiter=inf)

execute nonlinear solver, return lhs

Iterates over nonlinear solver until tolerance is reached. Example:

lhs = solve(newton(target, residual), tol=1e-5)

Returns: Coefficient vector whose residual norm is smaller than tol.

Return type: numpy.ndarray
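The driver loop can be sketched as follows, assuming (as the parameter name gen_lhs_resnorm suggests) that the generator yields (lhs, resnorm) pairs; this is an illustration of the idea, not the actual nutils implementation:

```python
import math

def solve_sketch(gen_lhs_resnorm, tol=0.0, maxiter=math.inf):
    """Minimal sketch of the solve() driver: pull (lhs, resnorm) pairs
    from the solver generator until the residual norm drops to tol,
    raising if maxiter iterations pass without convergence."""
    for iiter, (lhs, resnorm) in enumerate(gen_lhs_resnorm):
        if resnorm <= tol:
            return lhs
        if iiter >= maxiter:
            raise RuntimeError('tolerance not reached in {} iterations'.format(maxiter))
    raise RuntimeError('generator exhausted before reaching tolerance')
```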

nutils.solver.solve_withinfo(gen_lhs_resnorm, tol=0.0, maxiter=inf)

execute nonlinear solver, return lhs and info

Like solve(), but return a 2-tuple of the solution and the corresponding info object which holds information about the final residual norm and other generator-dependent information.

class nutils.solver.RecursionWithSolve(*args, **kwargs)

add a .solve method to (lhs,resnorm) iterators

Introduces the convenient form:

newton(target, residual).solve(tol)


Shorthand for:

solve(newton(target, residual), tol)

solve_withinfo(gen_lhs_resnorm, tol=0.0, maxiter=inf)

execute nonlinear solver, return lhs and info

Like solve(), but return a 2-tuple of the solution and the corresponding info object which holds information about the final residual norm and other generator-dependent information.

solve(gen_lhs_resnorm, tol=0.0, maxiter=inf)

execute nonlinear solver, return lhs

Iterates over nonlinear solver until tolerance is reached. Example:

lhs = solve(newton(target, residual), tol=1e-5)

Returns: Coefficient vector whose residual norm is smaller than tol.

Return type: numpy.ndarray

class nutils.solver.newton(target, residual, jacobian=None, lhs0=None, constrain=None, searchrange=(0.01, 0.6666666666666666), droptol=None, rebound=2.0, failrelax=1e-06, arguments={}, solveargs={})

iteratively solve nonlinear problem by gradient descent

Generates targets such that residual approaches 0 using a Newton procedure with line search based on the residual norm. Suitable to be used inside solve.

An optimal relaxation value is computed based on the following cubic assumption:

|res(lhs + r * dlhs)|^2 = A + B * r + C * r^2 + D * r^3


where A, B, C and D are determined from the current residual and tangent, the new residual, and the new tangent. If the resulting optimal relaxation value is close to 1, the Newton update is accepted.
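The fit can be illustrated on a scalar function: given the squared residual norm and its slope at r = 0 and at the trial point r = 1, the four coefficients follow from the Hermite interpolation conditions. The helper below is a hypothetical sketch of this line-search idea, not the exact nutils heuristic:

```python
import math

def cubic_relaxation(f0, df0, f1, df1):
    """Fit A + B*r + C*r^2 + D*r^3 to the squared residual norm along the
    update direction, given its value and slope at r = 0 (f0, df0) and at
    the trial point r = 1 (f1, df1), and return the r minimizing the cubic."""
    A, B = f0, df0                        # conditions at r = 0
    C = 3*(f1 - f0) - 2*df0 - df1         # from value and slope at r = 1
    D = df0 + df1 - 2*(f1 - f0)
    disc = C*C - 3*D*B                    # discriminant of B + 2C*r + 3D*r^2
    if disc < 0 or abs(D) < 1e-12:
        return 1.0                        # no interior minimum: take a full step
    return (-C + math.sqrt(disc)) / (3*D)  # root with positive curvature
```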

Yields: numpy.ndarray – Coefficient vector that approximates residual == 0 with increasing accuracy.


class nutils.solver.minimize(target, energy, lhs0=None, constrain=None, searchrange=(0.01, 0.5), rebound=2.0, droptol=None, failrelax=1e-06, arguments={}, solveargs={})

iteratively minimize nonlinear functional by gradient descent

Generates targets such that residual approaches 0 using a Newton procedure with line search based on the energy. Suitable to be used inside solve.

An optimal relaxation value is computed based on the following assumption:

energy(lhs + r * dlhs) = A + B * r + C * r^2 + D * r^3 + E * r^4 + F * r^5


where A, B, C, D, E and F are determined from the current and new energy, residual and tangent. If the resulting optimal relaxation value is close to 1, the Newton update is accepted.

Yields: numpy.ndarray – Coefficient vector that approximates residual == 0 with increasing accuracy.


class nutils.solver.pseudotime(target, residual, inertia, timestep, lhs0=None, constrain=None, arguments={}, solveargs={})

iteratively solve nonlinear problem by pseudo time stepping

Generates targets such that residual approaches 0 using a hybrid of Newton iteration and time stepping. Requires an inertia term and an initial timestep. Suitable to be used inside solve.
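The idea can be illustrated on a scalar equation: each pseudo timestep solves (inertia/Δt + res′(u)) du = -res(u), a damped Newton step that approaches plain Newton as the timestep grows. The helper below is a hypothetical sketch; in particular the timestep growth rule is a guess, not the nutils heuristic:

```python
def pseudotime_sketch(res, dres, inertia, timestep, u0, tol=1e-10, maxiter=100):
    """Scalar sketch of pseudo time stepping: for small dt the update is
    heavily damped (stable); as dt grows the iteration approaches Newton."""
    u, dt = u0, timestep
    for _ in range(maxiter):
        r = res(u)
        if abs(r) < tol:
            return u
        u -= r / (inertia/dt + dres(u))  # damped Newton step
        dt *= 2                          # grow the pseudo timestep
    raise RuntimeError('no convergence')
```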

Yields: Tuple of coefficient vector (numpy.ndarray with dtype float) and residual norm.


class nutils.solver.thetamethod(target, residual, inertia, timestep, lhs0, theta, target0='_thetamethod_target0', constrain=None, newtontol=1e-10, arguments={}, newtonargs={})

solve time dependent problem using the theta method

Yields: numpy.ndarray – Coefficient vector for all timesteps after the initial condition.

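For the scalar model problem du/dt = λu the theta method reads u_new = u + Δt((1-θ)λu + θλu_new), with θ=0 explicit Euler, θ=1 implicit Euler and θ=1/2 Crank-Nicolson. A minimal sketch of a single step, solved in closed form for this linear case (note that nutils states the problem as ∂inertia/∂t + res = 0, so signs differ from this toy form):

```python
def theta_step(u, dt, lam, theta):
    """One theta-method step for du/dt = lam*u: the implicit relation
    u_new = u + dt*((1-theta)*lam*u + theta*lam*u_new) solved for u_new."""
    return u * (1 + (1 - theta)*dt*lam) / (1 - theta*dt*lam)
```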

nutils.solver.optimize(target, functional, *, newtontol=0.0, arguments={}, **kwargs)

find the minimizer of a given functional

Returns: numpy.ndarray – Coefficient vector corresponding to the functional optimum.
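In spirit, optimize finds a stationary point of the functional by Newton iteration on its gradient. The scalar helper below is a hypothetical sketch of that idea; nutils assembles the full gradient vector and Hessian matrix of the functional instead:

```python
def optimize_sketch(grad, hess, u0, tol=1e-10, maxiter=50):
    """Toy analogue of solver.optimize: Newton iteration on the gradient
    of a scalar functional until the gradient (residual) vanishes."""
    u = u0
    for _ in range(maxiter):
        g = grad(u)
        if abs(g) < tol:
            return u
        u -= g / hess(u)  # Newton step toward grad == 0
    raise RuntimeError('no convergence')
```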