API

Argument wrappers

DifferentiationInterface.ConstantType
Constant

Concrete type of Context argument which is kept constant during differentiation.

Note that an operator can be prepared with an arbitrary value of the constant. However, same-point preparation must occur with the exact value that will be reused later.

Warning

Some backends require any Constant context to be a Number or an AbstractArray.

Example

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x, c) = c * sum(abs2, x);

julia> gradient(f, AutoForwardDiff(), [1.0, 2.0], Constant(10))
2-element Vector{Float64}:
 20.0
 40.0

julia> gradient(f, AutoForwardDiff(), [1.0, 2.0], Constant(100))
2-element Vector{Float64}:
 200.0
 400.0
source
DifferentiationInterface.CacheType
Cache

Concrete type of Context argument which can be mutated with active values during differentiation.

The initial values present inside the cache do not matter.

For some backends, preparation allocates the required memory for Cache contexts with the right element type, similar to PreallocationTools.jl.

Warning

Some backends require any Cache context to be an AbstractArray, others accept nested (named) tuples of AbstractArrays.

Example

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x, c) = sum(copyto!(c, x));

julia> prep = prepare_gradient(f, AutoForwardDiff(), [1.0, 2.0], Cache(zeros(2)));

julia> gradient(f, prep, AutoForwardDiff(), [3.0, 4.0], Cache(zeros(2)))
2-element Vector{Float64}:
 1.0
 1.0
source

First order

Pushforward

DifferentiationInterface.prepare_pushforwardFunction
prepare_pushforward(f,     backend, x, tx, [contexts...]; strict=Val(false)) -> prep
prepare_pushforward(f!, y, backend, x, tx, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to pushforward and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to pushforward do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source
DifferentiationInterface.prepare_pushforward_same_pointFunction
prepare_pushforward_same_point(f,     backend, x, tx, [contexts...]; strict=Val(false)) -> prep_same
prepare_pushforward_same_point(f!, y, backend, x, tx, [contexts...]; strict=Val(false)) -> prep_same

Create a prep object that can be given to pushforward and its variants to speed them up, if they are applied at the same point x and with the same contexts.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to pushforward do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).
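
For instance, a prep_same object created at a point x can be reused with different tangent values (of the same type and size), as long as x and the contexts stay the same. A minimal sketch with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = abs2.(x);

julia> x = [1.0, 2.0];

julia> prep_same = prepare_pushforward_same_point(f, AutoForwardDiff(), x, ([1.0, 0.0],));

julia> pushforward(f, prep_same, AutoForwardDiff(), x, ([0.0, 1.0],))
([0.0, 4.0],)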

source
DifferentiationInterface.pushforwardFunction
pushforward(f,     [prep,] backend, x, tx, [contexts...]) -> ty
pushforward(f!, y, [prep,] backend, x, tx, [contexts...]) -> ty

Compute the pushforward of the function f at point x with a tuple of tangents tx.

To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.

Tip

Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named jvp.
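
For a single tangent, tx is passed as a 1-tuple and the result ty is returned as a 1-tuple. A minimal sketch with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = abs2.(x);

julia> pushforward(f, AutoForwardDiff(), [1.0, 2.0], ([1.0, 0.0],))
([2.0, 0.0],)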

source
DifferentiationInterface.pushforward!Function
pushforward!(f,     dy, [prep,] backend, x, tx, [contexts...]) -> ty
pushforward!(f!, y, dy, [prep,] backend, x, tx, [contexts...]) -> ty

Compute the pushforward of the function f at point x with a tuple of tangents tx, overwriting ty.

To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.

Tip

Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named jvp!.

source
DifferentiationInterface.value_and_pushforwardFunction
value_and_pushforward(f,     [prep,] backend, x, tx, [contexts...]) -> (y, ty)
value_and_pushforward(f!, y, [prep,] backend, x, tx, [contexts...]) -> (y, ty)

Compute the value and the pushforward of the function f at point x with a tuple of tangents tx.

To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.

Tip

Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named value_and_jvp.

Info

Required primitive for forward mode backends.

source
DifferentiationInterface.value_and_pushforward!Function
value_and_pushforward!(f,     dy, [prep,] backend, x, tx, [contexts...]) -> (y, ty)
value_and_pushforward!(f!, y, dy, [prep,] backend, x, tx, [contexts...]) -> (y, ty)

Compute the value and the pushforward of the function f at point x with a tuple of tangents tx, overwriting ty.

To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.

Tip

Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named value_and_jvp!.

source

Pullback

DifferentiationInterface.prepare_pullbackFunction
prepare_pullback(f,     backend, x, ty, [contexts...]; strict=Val(false)) -> prep
prepare_pullback(f!, y, backend, x, ty, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to pullback and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to pullback do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source
DifferentiationInterface.prepare_pullback_same_pointFunction
prepare_pullback_same_point(f,     backend, x, ty, [contexts...]; strict=Val(false)) -> prep_same
prepare_pullback_same_point(f!, y, backend, x, ty, [contexts...]; strict=Val(false)) -> prep_same

Create a prep object that can be given to pullback and its variants to speed them up, if they are applied at the same point x and with the same contexts.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to pullback do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source
DifferentiationInterface.pullbackFunction
pullback(f,     [prep,] backend, x, ty, [contexts...]) -> tx
pullback(f!, y, [prep,] backend, x, ty, [contexts...]) -> tx

Compute the pullback of the function f at point x with a tuple of tangents ty.

To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.

Tip

Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named vjp.
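
For a scalar-valued function, seeding the pullback with ty = (1.0,) recovers the gradient. A minimal sketch with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = sum(abs2, x);

julia> pullback(f, AutoForwardDiff(), [1.0, 2.0], (1.0,))
([2.0, 4.0],)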

source
DifferentiationInterface.pullback!Function
pullback!(f,     dx, [prep,] backend, x, ty, [contexts...]) -> tx
pullback!(f!, y, dx, [prep,] backend, x, ty, [contexts...]) -> tx

Compute the pullback of the function f at point x with a tuple of tangents ty, overwriting dx.

To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.

Tip

Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named vjp!.

source
DifferentiationInterface.value_and_pullbackFunction
value_and_pullback(f,     [prep,] backend, x, ty, [contexts...]) -> (y, tx)
value_and_pullback(f!, y, [prep,] backend, x, ty, [contexts...]) -> (y, tx)

Compute the value and the pullback of the function f at point x with a tuple of tangents ty.

To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.

Tip

Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named value_and_vjp.

Info

Required primitive for reverse mode backends.

source
DifferentiationInterface.value_and_pullback!Function
value_and_pullback!(f,     dx, [prep,] backend, x, ty, [contexts...]) -> (y, tx)
value_and_pullback!(f!, y, dx, [prep,] backend, x, ty, [contexts...]) -> (y, tx)

Compute the value and the pullback of the function f at point x with a tuple of tangents ty, overwriting dx.

To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.

Tip

Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named value_and_vjp!.

source

Derivative

DifferentiationInterface.prepare_derivativeFunction
prepare_derivative(f,     backend, x, [contexts...]; strict=Val(false)) -> prep
prepare_derivative(f!, y, backend, x, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to derivative and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to derivative do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source
DifferentiationInterface.derivativeFunction
derivative(f,     [prep,] backend, x, [contexts...]) -> der
derivative(f!, y, [prep,] backend, x, [contexts...]) -> der

Compute the derivative of the function f at point x.

To improve performance via operator preparation, refer to prepare_derivative.
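
A minimal sketch with ForwardDiff, for a function of a scalar input:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = x^2;

julia> derivative(f, AutoForwardDiff(), 3.0)
6.0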

source
DifferentiationInterface.derivative!Function
derivative!(f,     der, [prep,] backend, x, [contexts...]) -> der
derivative!(f!, y, der, [prep,] backend, x, [contexts...]) -> der

Compute the derivative of the function f at point x, overwriting der.

To improve performance via operator preparation, refer to prepare_derivative.

source
DifferentiationInterface.value_and_derivativeFunction
value_and_derivative(f,     [prep,] backend, x, [contexts...]) -> (y, der)
value_and_derivative(f!, y, [prep,] backend, x, [contexts...]) -> (y, der)

Compute the value and the derivative of the function f at point x.

To improve performance via operator preparation, refer to prepare_derivative.

source
DifferentiationInterface.value_and_derivative!Function
value_and_derivative!(f,     der, [prep,] backend, x, [contexts...]) -> (y, der)
value_and_derivative!(f!, y, der, [prep,] backend, x, [contexts...]) -> (y, der)

Compute the value and the derivative of the function f at point x, overwriting der.

To improve performance via operator preparation, refer to prepare_derivative.

source

Gradient

DifferentiationInterface.prepare_gradientFunction
prepare_gradient(f, backend, x, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to gradient and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

Warning

The preparation result prep is only reusable as long as the arguments to gradient do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).
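
A minimal sketch with ForwardDiff, where the prep object is created once and then reused at another point of the same size and type:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = sum(abs2, x);

julia> prep = prepare_gradient(f, AutoForwardDiff(), zeros(2));

julia> gradient(f, prep, AutoForwardDiff(), [1.0, 2.0])
2-element Vector{Float64}:
 2.0
 4.0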

source

Jacobian

DifferentiationInterface.prepare_jacobianFunction
prepare_jacobian(f,     backend, x, [contexts...]; strict=Val(false)) -> prep
prepare_jacobian(f!, y, backend, x, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to jacobian and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

For in-place functions, y is mutated by f! during preparation.

Warning

The preparation result prep is only reusable as long as the arguments to jacobian do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source
DifferentiationInterface.jacobianFunction
jacobian(f,     [prep,] backend, x, [contexts...]) -> jac
jacobian(f!, y, [prep,] backend, x, [contexts...]) -> jac

Compute the Jacobian matrix of the function f at point x.

To improve performance via operator preparation, refer to prepare_jacobian.
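
A minimal sketch with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = abs2.(x);

julia> jacobian(f, AutoForwardDiff(), [1.0, 2.0])
2×2 Matrix{Float64}:
 2.0  0.0
 0.0  4.0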

source
DifferentiationInterface.jacobian!Function
jacobian!(f,     jac, [prep,] backend, x, [contexts...]) -> jac
jacobian!(f!, y, jac, [prep,] backend, x, [contexts...]) -> jac

Compute the Jacobian matrix of the function f at point x, overwriting jac.

To improve performance via operator preparation, refer to prepare_jacobian.

source
DifferentiationInterface.value_and_jacobianFunction
value_and_jacobian(f,     [prep,] backend, x, [contexts...]) -> (y, jac)
value_and_jacobian(f!, y, [prep,] backend, x, [contexts...]) -> (y, jac)

Compute the value and the Jacobian matrix of the function f at point x.

To improve performance via operator preparation, refer to prepare_jacobian.

source
DifferentiationInterface.value_and_jacobian!Function
value_and_jacobian!(f,     jac, [prep,] backend, x, [contexts...]) -> (y, jac)
value_and_jacobian!(f!, y, jac, [prep,] backend, x, [contexts...]) -> (y, jac)

Compute the value and the Jacobian matrix of the function f at point x, overwriting jac.

To improve performance via operator preparation, refer to prepare_jacobian.

source

Second order

DifferentiationInterface.SecondOrderType
SecondOrder

Combination of two backends for second-order differentiation.

Danger

SecondOrder backends do not support first-order operators.

Constructor

SecondOrder(outer_backend, inner_backend)

Fields

  • outer::AbstractADType: backend for the outer differentiation
  • inner::AbstractADType: backend for the inner differentiation
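
For illustration, a forward-over-reverse Hessian might be computed as follows (sketch assuming both ForwardDiff and Zygote are installed):

julia> using DifferentiationInterface

julia> import ForwardDiff, Zygote

julia> f(x) = sum(abs2, x);

julia> hessian(f, SecondOrder(AutoForwardDiff(), AutoZygote()), [1.0, 2.0])
2×2 Matrix{Float64}:
 2.0  0.0
 0.0  2.0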
source

Second derivative

DifferentiationInterface.prepare_second_derivativeFunction
prepare_second_derivative(f, backend, x, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to second_derivative and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

Warning

The preparation result prep is only reusable as long as the arguments to second_derivative do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).
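
A minimal sketch of the corresponding operator, second_derivative, with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = x^3;

julia> second_derivative(f, AutoForwardDiff(), 2.0)
12.0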

source

Hessian-vector product

DifferentiationInterface.prepare_hvpFunction
prepare_hvp(f, backend, x, tx, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to hvp and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

Warning

The preparation result prep is only reusable as long as the arguments to hvp do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).
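
A minimal sketch of the corresponding operator, hvp, with ForwardDiff (the vector is passed as a tuple of tangents):

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = sum(abs2, x);

julia> hvp(f, AutoForwardDiff(), [1.0, 2.0], ([1.0, 0.0],))
([2.0, 0.0],)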

source
DifferentiationInterface.prepare_hvp_same_pointFunction
prepare_hvp_same_point(f, backend, x, tx, [contexts...]; strict=Val(false)) -> prep_same

Create a prep object that can be given to hvp and its variants to speed them up, if they are applied at the same point x and with the same contexts.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

Warning

The preparation result prep is only reusable as long as the arguments to hvp do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).

source

Hessian

DifferentiationInterface.prepare_hessianFunction
prepare_hessian(f, backend, x, [contexts...]; strict=Val(false)) -> prep

Create a prep object that can be given to hessian and its variants to speed them up.

Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.

Warning

The preparation result prep is only reusable as long as the arguments to hessian do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.

When strict=Val(true), type checking is enforced between preparation and execution (but size checking is left to the user).
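
A minimal sketch of the corresponding operator, hessian, with ForwardDiff:

julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x) = sum(abs2, x);

julia> hessian(f, AutoForwardDiff(), [1.0, 2.0])
2×2 Matrix{Float64}:
 2.0  0.0
 0.0  2.0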

source

Utilities

Backend queries

DifferentiationInterface.outerFunction
outer(backend::SecondOrder)
outer(backend::AbstractADType)

Return the outer backend of a SecondOrder object, tasked with differentiation at the second order.

For any other backend type, this function acts like the identity.

source
DifferentiationInterface.innerFunction
inner(backend::SecondOrder)
inner(backend::AbstractADType)

Return the inner backend of a SecondOrder object, tasked with differentiation at the first order.

For any other backend type, this function acts like the identity.
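
An illustrative sketch (the queries are qualified with the package name here, since they may not be exported):

julia> using DifferentiationInterface

julia> import FiniteDiff, ForwardDiff

julia> backend = SecondOrder(AutoForwardDiff(), AutoFiniteDiff());

julia> DifferentiationInterface.outer(backend) isa AutoForwardDiff
true

julia> DifferentiationInterface.inner(backend) isa AutoFiniteDiff
true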

source

Backend switch

DifferentiationInterface.DifferentiateWithType
DifferentiateWith

Function wrapper that enforces differentiation with a "substitute" AD backend, possibly different from the "true" AD backend that is called.

For instance, suppose a function f is not differentiable with Zygote because it involves mutation, but you know that it is differentiable with Enzyme. Then f2 = DifferentiateWith(f, AutoEnzyme()) is a new function that behaves like f, except that f2 is differentiable with Zygote (thanks to a chain rule which calls Enzyme under the hood). Moreover, any larger algorithm alg that calls f2 instead of f will also be differentiable with Zygote (as long as f was the only Zygote blocker).

Tip

This is mainly relevant for package developers who want to produce differentiable code at low cost, without writing the differentiation rules themselves. If you sprinkle a few DifferentiateWith in places where some AD backends may struggle, end users can pick from a wider variety of packages to differentiate your algorithms.

Warning

DifferentiateWith only supports out-of-place functions y = f(x) without additional context arguments. It only makes these functions differentiable if the true backend is either ForwardDiff or compatible with ChainRules. For any other true backend, the differentiation behavior is not altered by DifferentiateWith (it becomes a transparent wrapper).

Fields

  • f: the function in question, with signature f(x)
  • backend::AbstractADType: the substitute backend to use for differentiation
Note

For the substitute AD backend to be called under the hood, its package needs to be loaded in addition to the package of the true AD backend.

Constructor

DifferentiateWith(f, backend)

Example

julia> using DifferentiationInterface

julia> import FiniteDiff, ForwardDiff, Zygote

julia> function f(x::Vector{Float64})
           a = Vector{Float64}(undef, 1)  # type constraint breaks ForwardDiff
           a[1] = sum(abs2, x)  # mutation breaks Zygote
           return a[1]
       end;

julia> f2 = DifferentiateWith(f, AutoFiniteDiff());

julia> f([3.0, 5.0]) == f2([3.0, 5.0])
true

julia> alg(x) = 7 * f2(x);

julia> ForwardDiff.gradient(alg, [3.0, 5.0])
2-element Vector{Float64}:
 42.0
 70.0

julia> Zygote.gradient(alg, [3.0, 5.0])[1]
2-element Vector{Float64}:
 42.0
 70.0
source

Sparsity tools

DifferentiationInterface.MixedModeType
MixedMode

Combination of a forward and a reverse mode backend for mixed-mode sparse Jacobian computation.

Danger

MixedMode backends only support jacobian and its variants, and they should be used inside an AutoSparse wrapper.

Constructor

MixedMode(forward_backend, reverse_backend)
source
DifferentiationInterface.DenseSparsityDetectorType
DenseSparsityDetector

Sparsity pattern detector satisfying the detection API of ADTypes.jl.

The nonzeros in a Jacobian or Hessian are detected by computing the relevant matrix with dense AD, and thresholding the entries with a given tolerance (which can be numerically inaccurate). This process can be very slow, and should only be used if its output can be exploited multiple times to compute many sparse matrices.

Danger

In general, the sparsity pattern you obtain can depend on the provided input x. If you want to reuse the pattern, make sure that it is input-agnostic.

Warning

DenseSparsityDetector functionality is now located in a package extension; please load the SparseArrays.jl standard library before you use it.

Fields

  • backend::AbstractADType is the dense AD backend used under the hood
  • atol::Float64 is the minimum magnitude of a matrix entry to be considered nonzero

Constructor

DenseSparsityDetector(backend; atol, method=:iterative)

The keyword argument method::Symbol can be either:

  • :iterative: compute the matrix in a sequence of matrix-vector products (memory-efficient)
  • :direct: compute the matrix all at once (memory-hungry but sometimes faster).

Note that the constructor is type-unstable because method ends up being a type parameter of the DenseSparsityDetector object (this is not part of the API and might change).

Examples

using ADTypes, DifferentiationInterface, SparseArrays
import ForwardDiff

detector = DenseSparsityDetector(AutoForwardDiff(); atol=1e-5, method=:direct)

ADTypes.jacobian_sparsity(diff, rand(5), detector)

# output

4×5 SparseMatrixCSC{Bool, Int64} with 8 stored entries:
 1  1  ⋅  ⋅  ⋅
 ⋅  1  1  ⋅  ⋅
 ⋅  ⋅  1  1  ⋅
 ⋅  ⋅  ⋅  1  1

Sometimes the sparsity pattern is input-dependent:

ADTypes.jacobian_sparsity(x -> [prod(x)], rand(2), detector)

# output

1×2 SparseMatrixCSC{Bool, Int64} with 2 stored entries:
 1  1

ADTypes.jacobian_sparsity(x -> [prod(x)], [0, 1], detector)

# output

1×2 SparseMatrixCSC{Bool, Int64} with 1 stored entry:
 1  ⋅
source

Internals

The following is not part of the public API.

DifferentiationInterface.AutoSimpleFiniteDiffType
AutoSimpleFiniteDiff <: ADTypes.AbstractADType

Forward mode backend based on the finite difference (f(x + ε) - f(x)) / ε, with artificial chunk size to mimic ForwardDiff.

Constructor

AutoSimpleFiniteDiff(ε=1e-5; chunksize=nothing)
source
DifferentiationInterface.BatchSizeSettingsType
BatchSizeSettings{B,singlebatch,aligned}

Configuration for the batch size deduced from a backend and a sample array of length N.

Type parameters

  • B::Int: batch size
  • singlebatch::Bool: whether B == N (B > N is not allowed)
  • aligned::Bool: whether N % B == 0

Fields

  • N::Int: array length
  • A::Int: number of batches A = div(N, B, RoundUp)
  • B_last::Int: size of the last batch (if aligned is false)
source
DifferentiationInterface.RewrapType
Rewrap

Utility for recording context types of additional arguments (e.g. Constant or Cache) and re-wrapping them into their types after they have been unwrapped.

Useful for second-order differentiation.

source
ADTypes.modeMethod
mode(backend::SecondOrder)

Return the outer mode of the second-order backend.

source
DifferentiationInterface.overloaded_input_typeFunction
overloaded_input_type(prep)

If it exists, return the overloaded input type which will be passed to the differentiated function when preparation result prep is reused.

Danger

This function is experimental and not part of the public API.

source
DifferentiationInterface.pick_batchsizeMethod
pick_batchsize(backend, x_or_y::AbstractArray)

Return a BatchSizeSettings appropriate for arrays of the same length as x_or_y with a given backend.

Note that the array in question can be either the input or the output of the function, depending on whether the backend performs forward- or reverse-mode AD.

source
DifferentiationInterface.prepare!_derivativeFunction
prepare!_derivative(f,     prep, backend, x, [contexts...]) -> new_prep
prepare!_derivative(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_derivative but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_gradientFunction
prepare!_gradient(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_gradient but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_hessianFunction
prepare!_hessian(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_hessian but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_hvpFunction
prepare!_hvp(f, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_hvp but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_jacobianFunction
prepare!_jacobian(f,     prep, backend, x, [contexts...]) -> new_prep
prepare!_jacobian(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_jacobian but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_pullbackFunction
prepare!_pullback(f,     prep, backend, x, ty, [contexts...]) -> new_prep
prepare!_pullback(f!, y, prep, backend, x, ty, [contexts...]) -> new_prep

Same behavior as prepare_pullback but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_pushforwardFunction
prepare!_pushforward(f,     prep, backend, x, tx, [contexts...]) -> new_prep
prepare!_pushforward(f!, y, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_pushforward but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_second_derivativeFunction
prepare!_second_derivative(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_second_derivative but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.reasonable_batchsizeMethod
reasonable_batchsize(N::Integer, Bmax::Integer)

Reproduces the heuristic from ForwardDiff to minimize

  1. the number of batches necessary to cover an array of length N
  2. the number of leftover indices in the last partial batch

Source: https://github.com/JuliaDiff/ForwardDiff.jl/blob/ec74fbc32b10bbf60b3c527d8961666310733728/src/prelude.jl#L19-L29

source
DifferentiationInterface.threshold_batchsizeFunction
threshold_batchsize(backend::AbstractADType, B::Integer)

If the backend object has a fixed batch size B0, return a new backend where the fixed batch size is min(B0, B). Otherwise, act as the identity.

source