Internals
The following names are not part of the public API.
DifferentiationInterface.AutoSimpleFiniteDiff — Type
AutoSimpleFiniteDiff <: ADTypes.AbstractADType

Forward mode backend based on the finite difference (f(x + ε) - f(x)) / ε, with artificial chunk size to mimic ForwardDiff.
Constructor
AutoSimpleFiniteDiff(ε=1e-5; chunksize=nothing)

DifferentiationInterface.AutoZeroForward — Type
AutoZeroForward <: ADTypes.AbstractADType

Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.AutoZeroReverse — Type
AutoZeroReverse <: ADTypes.AbstractADType

Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.BatchSizeSettings — Type
BatchSizeSettings{B,singlebatch,aligned}

Configuration for the batch size deduced from a backend and a sample array of length N.
Type parameters
- B::Int: batch size
- singlebatch::Bool: whether B == N (B > N is only allowed when N == 0)
- aligned::Bool: whether N % B == 0
Fields
- N::Int: array length
- A::Int: number of batches, A = div(N, B, RoundUp)
- B_last::Int: size of the last batch (if aligned is false)
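The relations between these quantities can be checked with plain arithmetic. The following is only an illustration with hypothetical values (the B_last formula is inferred from the field descriptions, not taken from the package source):

```julia
# Hypothetical values illustrating the documented relations between the
# BatchSizeSettings quantities; this is not DifferentiationInterface code.
N, B = 10, 4                   # array length and batch size
A = div(N, B, RoundUp)         # number of batches: 3
aligned = N % B == 0           # false, since 10 % 4 == 2
B_last = aligned ? B : N % B   # size of the last partial batch: 2
```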
DifferentiationInterface.DerivativePrep — Type
DerivativePrep

Abstract type for additional information needed by derivative and its variants.
DifferentiationInterface.DontPrepareInner — Type
DontPrepareInner

Trait identifying outer backends for which the inner backend in second-order autodiff should not be prepared at all.
DifferentiationInterface.FixTail — Type
FixTail

Closure around a function f and a set of tail arguments tail_args such that

(ft::FixTail)(args...) = ft.f(args..., ft.tail_args...)

DifferentiationInterface.ForwardAndReverseMode — Type
ForwardAndReverseMode <: ADTypes.AbstractMode

Appropriate mode type for MixedMode backends.
DifferentiationInterface.ForwardOverForward — Type
ForwardOverForward

Trait identifying second-order backends that compute HVPs in forward over forward mode (inefficient).
DifferentiationInterface.ForwardOverReverse — Type
ForwardOverReverse

Trait identifying second-order backends that compute HVPs in forward over reverse mode.
DifferentiationInterface.FunctionContext — Type
FunctionContext

Private type of Context argument used for passing functions inside second-order differentiation.
Behaves differently for Enzyme only, where the function can be annotated.
DifferentiationInterface.GradientPrep — Type
GradientPrep

Abstract type for additional information needed by gradient and its variants.
DifferentiationInterface.HVPPrep — Type
HVPPrep

Abstract type for additional information needed by hvp and its variants.
DifferentiationInterface.HessianPrep — Type
HessianPrep

Abstract type for additional information needed by hessian and its variants.
DifferentiationInterface.InPlaceNotSupported — Type
InPlaceNotSupported

Trait identifying backends that do not support in-place functions f!(y, x).
DifferentiationInterface.InPlaceSupported — Type
InPlaceSupported

Trait identifying backends that support in-place functions f!(y, x).
DifferentiationInterface.JacobianPrep — Type
JacobianPrep

Abstract type for additional information needed by jacobian and its variants.
DifferentiationInterface.PrepareInnerOverload — Type
PrepareInnerOverload

Trait identifying outer backends for which the inner backend in second-order autodiff should be prepared with an overloaded input type.
DifferentiationInterface.PrepareInnerSimple — Type
PrepareInnerSimple

Trait identifying outer backends for which the inner backend in second-order autodiff should be prepared with the same input type.
DifferentiationInterface.PullbackFast — Type
PullbackFast

Trait identifying backends that support efficient pullbacks.
DifferentiationInterface.PullbackPrep — Type
PullbackPrep

Abstract type for additional information needed by pullback and its variants.
DifferentiationInterface.PullbackSlow — Type
PullbackSlow

Trait identifying backends that do not support efficient pullbacks.
DifferentiationInterface.PushforwardFast — Type
PushforwardFast

Trait identifying backends that support efficient pushforwards.
DifferentiationInterface.PushforwardPrep — Type
PushforwardPrep

Abstract type for additional information needed by pushforward and its variants.
DifferentiationInterface.PushforwardSlow — Type
PushforwardSlow

Trait identifying backends that do not support efficient pushforwards.
DifferentiationInterface.ReverseOverForward — Type
ReverseOverForward

Trait identifying second-order backends that compute HVPs in reverse over forward mode.
DifferentiationInterface.ReverseOverReverse — Type
ReverseOverReverse

Trait identifying second-order backends that compute HVPs in reverse over reverse mode.
DifferentiationInterface.Rewrap — Type
Rewrap

Utility for recording context types of additional arguments (e.g. Constant or Cache) and re-wrapping them into their types after they have been unwrapped.
Useful for second-order differentiation.
DifferentiationInterface.SecondDerivativePrep — Type
SecondDerivativePrep

Abstract type for additional information needed by second_derivative and its variants.
ADTypes.mode — Method
mode(backend::SecondOrder)

Return the outer mode of the second-order backend.
DifferentiationInterface.basis — Method
basis(a::AbstractArray, i)

Construct the i-th standard basis array in the vector space of a.
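A minimal sketch of such a basis construction, under the assumption of a plain mutable array (hypothetical helper, not the internal implementation, which also has to handle structured and static array types):

```julia
# Hypothetical sketch: build the i-th standard basis array in the vector
# space of a (zero everywhere, except a one at index i).
function my_basis(a::AbstractArray, i)
    b = zero(a)           # array of zeros with the same shape and eltype
    b[i] = one(eltype(a)) # unit entry at index i
    return b
end

my_basis([0.0, 0.0, 0.0], 2)  # [0.0, 1.0, 0.0]
```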
DifferentiationInterface.fix_tail — Method
fix_tail(f, tail_args...)

Convenience for constructing a FixTail, with a shortcut when there are no tail arguments.
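The mechanism can be sketched in a few lines using the defining equation from the FixTail docstring above (a standalone sketch with hypothetical names, not the internal code):

```julia
# Standalone sketch of the FixTail mechanism: a callable struct that appends
# fixed tail arguments to every call of the wrapped function.
struct MyFixTail{F,A<:Tuple}
    f::F
    tail_args::A
end
(ft::MyFixTail)(args...) = ft.f(args..., ft.tail_args...)

# Shortcut when there are no tail arguments: return f unchanged.
my_fix_tail(f) = f
my_fix_tail(f, tail_args...) = MyFixTail(f, tail_args)

g = my_fix_tail(+, 10, 100)
g(1, 2)  # +(1, 2, 10, 100) == 113
```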
DifferentiationInterface.forward_backend — Method
forward_backend(m::MixedMode)

Return the forward-mode part of a MixedMode backend.
DifferentiationInterface.get_pattern — Method
get_pattern(M::AbstractMatrix)

Return the Bool-valued sparsity pattern for a given matrix.
Only specialized on SparseMatrixCSC because it is used with symbolic backends, and at the moment their sparse Jacobian/Hessian utilities return a SparseMatrixCSC.
The trivial dense fallback is designed to protect against a change of format in these packages.
DifferentiationInterface.hessian_sparsity_with_contexts — Method
hessian_sparsity_with_contexts(f, detector, x, contexts...)

Wrapper around ADTypes.hessian_sparsity enabling the allocation of caches with proper element types.
DifferentiationInterface.hvp_mode — Method
hvp_mode(backend)

Return the best combination of modes for hvp and its variants, among the following options: ForwardOverForward, ForwardOverReverse, ReverseOverForward and ReverseOverReverse.
DifferentiationInterface.inner_preparation_behavior — Method
inner_preparation_behavior(backend::AbstractADType)

Return PrepareInnerSimple, PrepareInnerOverload or DontPrepareInner in a statically predictable way.
DifferentiationInterface.inplace_support — Method
inplace_support(backend)

Return InPlaceSupported or InPlaceNotSupported in a statically predictable way.
DifferentiationInterface.ismutable_array — Method
ismutable_array(x)

Check whether x is a mutable array and return a Bool.
At the moment, this only returns false for StaticArrays.SArray.
DifferentiationInterface.jacobian_sparsity_with_contexts — Method
jacobian_sparsity_with_contexts(f, detector, x, contexts...)
jacobian_sparsity_with_contexts(f!, y, detector, x, contexts...)

Wrapper around ADTypes.jacobian_sparsity enabling the allocation of caches with proper element types.
DifferentiationInterface.multibasis — Method
multibasis(a::AbstractArray, inds)

Construct the sum of the i-th standard basis arrays in the vector space of a for all i ∈ inds.
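A hedged sketch consistent with the description, again assuming a plain mutable array (hypothetical helper name):

```julia
# Hypothetical sketch: sum of the standard basis arrays for all indices in
# inds, i.e. an array with ones exactly at those indices.
function my_multibasis(a::AbstractArray, inds)
    b = zero(a)
    for i in inds
        b[i] = one(eltype(a))
    end
    return b
end

my_multibasis(zeros(4), (1, 3))  # [1.0, 0.0, 1.0, 0.0]
```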
DifferentiationInterface.overloaded_input_type — Function
overloaded_input_type(prep)

If it exists, return the overloaded input type which will be passed to the differentiated function when preparation result prep is reused.
DifferentiationInterface.pick_batchsize — Method
pick_batchsize(backend, x_or_y::AbstractArray)

Return a BatchSizeSettings appropriate for arrays of the same length as x_or_y with a given backend.
Note that the array in question can be either the input or the output of the function, depending on whether the backend performs forward- or reverse-mode AD.
DifferentiationInterface.pick_batchsize — Method
pick_batchsize(backend, N::Integer)

Return a BatchSizeSettings appropriate for arrays of length N with a given backend.
DifferentiationInterface.prepare!_derivative — Method
prepare!_derivative(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_derivative(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_derivative but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_gradient — Method
prepare!_gradient(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_gradient but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_hessian — Method
prepare!_hessian(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_hessian but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_hvp — Method
prepare!_hvp(f, backend, x, tx, [contexts...]) -> new_prep

Create a prep object that can be given to hvp and its variants to speed them up.
Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.
The preparation result prep is only reusable as long as the arguments to hvp do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.
The preparation result prep is not thread-safe. Sharing it between threads may lead to unexpected behavior. If you need to run differentiation concurrently, prepare separate prep objects for each thread.
When strict=Val(true) (the default), type checking is enforced between preparation and execution (but size checking is left to the user). While your code may work for different types by setting strict=Val(false), this is not guaranteed by the API and can break without warning.
DifferentiationInterface.prepare!_jacobian — Method
prepare!_jacobian(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_jacobian(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_jacobian but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_pullback — Method
prepare!_pullback(f, prep, backend, x, ty, [contexts...]) -> new_prep
prepare!_pullback(f!, y, prep, backend, x, ty, [contexts...]) -> new_prep

Same behavior as prepare_pullback but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_pushforward — Method
prepare!_pushforward(f, prep, backend, x, tx, [contexts...]) -> new_prep
prepare!_pushforward(f!, y, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_pushforward but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.prepare!_second_derivative — Method
prepare!_second_derivative(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_second_derivative but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
DifferentiationInterface.pullback_performance — Method
pullback_performance(backend)

Return PullbackFast or PullbackSlow in a statically predictable way.
DifferentiationInterface.pushforward_performance — Method
pushforward_performance(backend)

Return PushforwardFast or PushforwardSlow in a statically predictable way.
DifferentiationInterface.reasonable_batchsize — Method
reasonable_batchsize(N::Integer, Bmax::Integer)

Reproduces the heuristic from ForwardDiff to minimize
- the number of batches necessary to cover an array of length N
- the number of leftover indices in the last partial batch
Source: https://github.com/JuliaDiff/ForwardDiff.jl/blob/ec74fbc32b10bbf60b3c527d8961666310733728/src/prelude.jl#L19-L29
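Based on the linked ForwardDiff source, the heuristic looks roughly like the following (a paraphrased sketch with a hypothetical name, not the exact internal code):

```julia
# Paraphrased sketch of the ForwardDiff chunk-size heuristic: if N fits in a
# single batch use it whole, otherwise spread N evenly over the minimal
# number of batches of size at most Bmax.
function my_reasonable_batchsize(N::Integer, Bmax::Integer)
    if N <= Bmax
        return N
    else
        A = div(N, Bmax, RoundUp)  # minimal number of batches
        return div(N, A, RoundUp)  # even batch size covering N in A batches
    end
end

my_reasonable_batchsize(5, 8)   # 5: fits in a single batch
my_reasonable_batchsize(12, 8)  # 6: two batches of 6 instead of 8 + 4
```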
DifferentiationInterface.recursive_similar — Method
recursive_similar(x, T)

Apply similar(_, T) recursively to x or its components.
Works if x is an AbstractArray or a (nested) NTuple / NamedTuple of AbstractArrays.
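Under those assumptions, a sketch of the recursion could look like this (hypothetical helper, not the internal implementation):

```julia
# Hypothetical sketch of a recursive similar-with-eltype: arrays get
# similar(_, T), tuples and named tuples recurse elementwise.
my_recursive_similar(x::AbstractArray, ::Type{T}) where {T} = similar(x, T)
my_recursive_similar(x::Union{Tuple,NamedTuple}, ::Type{T}) where {T} =
    map(xi -> my_recursive_similar(xi, T), x)

y = my_recursive_similar((a=zeros(Int, 2), b=(zeros(3),)), Float32)
eltype(y.a)     # Float32
eltype(y.b[1])  # Float32
```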
DifferentiationInterface.reverse_backend — Method
reverse_backend(m::MixedMode)

Return the reverse-mode part of a MixedMode backend.
DifferentiationInterface.threshold_batchsize — Function
threshold_batchsize(backend::AbstractADType, B::Integer)

If the backend object has a fixed batch size B0, return a new backend where the fixed batch size is min(B0, B). Otherwise, act as the identity.
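The dispatch pattern can be sketched with a toy backend type (entirely hypothetical; real backends and the fields holding their batch sizes are not shown here):

```julia
# Toy backend carrying a fixed batch size B0 as a type parameter, to
# illustrate the thresholding behavior described above.
struct ToyBackend{B0} end

# Cap the fixed batch size at B.
my_threshold_batchsize(::ToyBackend{B0}, B::Integer) where {B0} =
    ToyBackend{min(B0, B)}()

# Identity fallback for backends without a fixed batch size.
my_threshold_batchsize(backend, B::Integer) = backend

my_threshold_batchsize(ToyBackend{8}(), 4)  # ToyBackend{4}()
```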