Internals

The following names are not part of the public API.

DifferentiationInterface.AutoSimpleFiniteDiffType
AutoSimpleFiniteDiff <: ADTypes.AbstractADType

Forward mode backend based on the finite difference (f(x + ε) - f(x)) / ε, with an artificial chunk size to mimic ForwardDiff.

Constructor

AutoSimpleFiniteDiff(ε=1e-5; chunksize=nothing)
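
As an illustration, the forward difference underlying this backend can be sketched in one line (a standalone sketch of the formula above, not the backend's actual implementation, which adds chunking on top):

```julia
# Minimal forward-difference sketch mirroring the formula above
# (illustrative only; AutoSimpleFiniteDiff handles chunking internally).
simple_fd(f, x; ε=1e-5) = (f(x + ε) - f(x)) / ε

simple_fd(sin, 0.0)  # close to cos(0.0) == 1.0, with O(ε) error
```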
source
DifferentiationInterface.BatchSizeSettingsType
BatchSizeSettings{B,singlebatch,aligned}

Configuration for the batch size deduced from a backend and a sample array of length N.

Type parameters

  • B::Int: batch size
  • singlebatch::Bool: whether B == N (B > N is only allowed when N == 0)
  • aligned::Bool: whether N % B == 0

Fields

  • N::Int: array length
  • A::Int: number of batches A = div(N, B, RoundUp)
  • B_last::Int: size of the last batch (if aligned is false)
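
The relationships between these quantities can be checked with plain arithmetic (a worked example of the definitions above, not the actual BatchSizeSettings constructor):

```julia
# Worked example of the batch-size arithmetic described above
# (plain arithmetic, not the actual BatchSizeSettings constructor).
N, B = 10, 4
A = div(N, B, RoundUp)                   # 3 batches to cover 10 indices
aligned = N % B == 0                     # false: 10 is not a multiple of 4
B_last = aligned ? B : N - (A - 1) * B   # last batch covers only 2 indices
singlebatch = B == N                     # false
```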
source
DifferentiationInterface.RewrapType
Rewrap

Utility for recording the context types of additional arguments (e.g. Constant or Cache) and re-wrapping values back into those types after they have been unwrapped.

Useful for second-order differentiation.

source
ADTypes.modeMethod
mode(backend::SecondOrder)

Return the outer mode of the second-order backend.

source
DifferentiationInterface.get_patternMethod
get_pattern(M::AbstractMatrix)

Return the Bool-valued sparsity pattern for a given matrix.

Only specialized on SparseMatrixCSC because it is used with symbolic backends, and at the moment their sparse Jacobian/Hessian utilities return a SparseMatrixCSC.

The trivial dense fallback is designed to protect against a change of format in these packages.
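
For intuition, a Bool-valued sparsity pattern can be obtained from a SparseMatrixCSC with an elementwise nonzero test (a hypothetical helper sketching the idea, not the internal get_pattern):

```julia
using SparseArrays

# Hypothetical sketch of a Bool-valued sparsity pattern
# (not the internal get_pattern implementation).
pattern_sketch(M::SparseMatrixCSC) = map(!iszero, M)

M = sparse([1, 2], [1, 3], [0.5, 2.0], 2, 3)
P = pattern_sketch(M)  # Bool-valued, true exactly at the stored nonzeros
```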

source
DifferentiationInterface.overloaded_input_typeFunction
overloaded_input_type(prep)

If it exists, return the overloaded input type which will be passed to the differentiated function when preparation result prep is reused.

Danger

This function is experimental and not part of the public API.

source
DifferentiationInterface.pick_batchsizeMethod
pick_batchsize(backend, x_or_y::AbstractArray)

Return a BatchSizeSettings appropriate for arrays of the same length as x_or_y with a given backend.

Note that the array in question can be either the input or the output of the function, depending on whether the backend performs forward- or reverse-mode AD.

source
DifferentiationInterface.prepare!_derivativeMethod
prepare!_derivative(f,     prep, backend, x, [contexts...]) -> new_prep
prepare!_derivative(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_derivative but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
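
The resize-and-reuse idea can be illustrated with a self-contained toy (hypothetical ToyPrep type; not DifferentiationInterface's actual preparation machinery):

```julia
# Toy illustration of the resize-and-reuse pattern described above
# (hypothetical type, not DifferentiationInterface's prep machinery).
struct ToyPrep
    buffer::Vector{Float64}
end

prepare_toy(x::AbstractVector) = ToyPrep(zeros(length(x)))

function prepare!_toy(prep::ToyPrep, x::AbstractVector)
    resize!(prep.buffer, length(x))  # reuse the existing buffer allocation
    return prep
end

p = prepare_toy(rand(3))
p = prepare!_toy(p, rand(5))  # same object, buffer resized to length 5
```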

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_gradientMethod
prepare!_gradient(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_gradient but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_hessianMethod
prepare!_hessian(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_hessian but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_hvpMethod
prepare!_hvp(f, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_hvp but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_jacobianMethod
prepare!_jacobian(f,     prep, backend, x, [contexts...]) -> new_prep
prepare!_jacobian(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_jacobian but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_pullbackMethod
prepare!_pullback(f,     prep, backend, x, ty, [contexts...]) -> new_prep
prepare!_pullback(f!, y, prep, backend, x, ty, [contexts...]) -> new_prep

Same behavior as prepare_pullback but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_pushforwardMethod
prepare!_pushforward(f,     prep, backend, x, tx, [contexts...]) -> new_prep
prepare!_pushforward(f!, y, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_pushforward but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.prepare!_second_derivativeMethod
prepare!_second_derivative(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_second_derivative but can resize the contents of an existing prep object to avoid some allocations.

There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.

Danger

Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.

Danger

For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.

source
DifferentiationInterface.reasonable_batchsizeMethod
reasonable_batchsize(N::Integer, Bmax::Integer)

Reproduces the heuristic from ForwardDiff to minimize

  1. the number of batches necessary to cover an array of length N
  2. the number of leftover indices in the last partial batch

Source: https://github.com/JuliaDiff/ForwardDiff.jl/blob/ec74fbc32b10bbf60b3c527d8961666310733728/src/prelude.jl#L19-L29
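
Under those two goals, the heuristic can be sketched as follows (a hedged reimplementation for illustration, not the internal function):

```julia
# Hedged sketch of the ForwardDiff-style heuristic described above
# (illustrative reimplementation, not the internal reasonable_batchsize).
function reasonable_batchsize_sketch(N::Integer, Bmax::Integer)
    N <= Bmax && return N
    A = cld(N, Bmax)  # minimal number of batches needed at the maximum size
    return cld(N, A)  # even out batch sizes to shrink the leftover batch
end

reasonable_batchsize_sketch(12, 5)  # → 4: three batches of 4, not 5 + 5 + 2
```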

source
DifferentiationInterface.threshold_batchsizeFunction
threshold_batchsize(backend::AbstractADType, B::Integer)

If the backend object has a fixed batch size B0, return a new backend where the fixed batch size is min(B0, B). Otherwise, act as the identity.

source