Internals
The following names are not part of the public API.
DifferentiationInterface.AutoSimpleFiniteDiff — Type
AutoSimpleFiniteDiff <: ADTypes.AbstractADType

Forward mode backend based on the finite difference (f(x + ε) - f(x)) / ε, with an artificial chunk size to mimic ForwardDiff.
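As a hedged illustration (not the package's code), the forward difference rule above can be applied directly; the helper name and test function are arbitrary choices here:

```julia
# Forward finite difference sketch; ε = 1e-5 matches the constructor's default.
forward_diff(f, x, ε=1e-5) = (f(x + ε) - f(x)) / ε

forward_diff(sin, 0.0)  # ≈ 1.0, the exact derivative of sin at 0
```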
Constructor
AutoSimpleFiniteDiff(ε=1e-5; chunksize=nothing)

DifferentiationInterface.AutoZeroForward — Type
AutoZeroForward <: ADTypes.AbstractADType

Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.AutoZeroReverse — Type
AutoZeroReverse <: ADTypes.AbstractADType

Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.BatchSizeSettings — Type
BatchSizeSettings{B,singlebatch,aligned}

Configuration for the batch size deduced from a backend and a sample array of length N.
Type parameters
- B::Int: batch size
- singlebatch::Bool: whether B == N (B > N is only allowed when N == 0)
- aligned::Bool: whether N % B == 0
Fields
- N::Int: array length
- A::Int: number of batches A = div(N, B, RoundUp)
- B_last::Int: size of the last batch (if aligned is false)
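To make the field relations concrete, here is a hypothetical computation for N = 10 and B = 4 (illustrative values, not taken from the package):

```julia
N, B = 10, 4
A = div(N, B, RoundUp)                  # number of batches: 3
aligned = (N % B == 0)                  # false: 10 is not a multiple of 4
B_last = aligned ? B : N - (A - 1) * B  # size of the last partial batch: 2
```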
DifferentiationInterface.DontPrepareInner — Type
DontPrepareInner

Trait identifying outer backends for which the inner backend in second-order autodiff should not be prepared at all.
DifferentiationInterface.FixTail — Type
FixTail

Closure around a function f and a set of tail arguments tail_args such that
(ft::FixTail)(args...) = ft.f(args..., ft.tail_args...)

DifferentiationInterface.ForwardOverForward — Type
ForwardOverForward

Traits identifying second-order backends that compute HVPs in forward over forward mode (inefficient).
DifferentiationInterface.ForwardOverReverse — Type
ForwardOverReverse

Traits identifying second-order backends that compute HVPs in forward over reverse mode.
DifferentiationInterface.InPlaceNotSupported — Type
InPlaceNotSupported

Trait identifying backends that do not support in-place functions f!(y, x).
DifferentiationInterface.PrepareInnerOverload — Type
PrepareInnerOverload

Trait identifying outer backends for which the inner backend in second-order autodiff should be prepared with an overloaded input type.
DifferentiationInterface.PrepareInnerSimple — Type
PrepareInnerSimple

Trait identifying outer backends for which the inner backend in second-order autodiff should be prepared with the same input type.
DifferentiationInterface.PushforwardPrep — Type
PushforwardPrep

Abstract type for additional information needed by pushforward and its variants.
DifferentiationInterface.ReverseOverForward — Type
ReverseOverForward

Traits identifying second-order backends that compute HVPs in reverse over forward mode.
DifferentiationInterface.ReverseOverReverse — Type
ReverseOverReverse

Traits identifying second-order backends that compute HVPs in reverse over reverse mode.
DifferentiationInterface.Rewrap — Type
Rewrap

Utility for recording context types of additional arguments (e.g. Constant or Cache) and re-wrapping them into their types after they have been unwrapped.
Useful for second-order differentiation.
DifferentiationInterface.SecondDerivativePrep — Type
SecondDerivativePrep

Abstract type for additional information needed by second_derivative and its variants.
ADTypes.mode — Method
DifferentiationInterface.basis — Method
DifferentiationInterface.fix_tail — Method
DifferentiationInterface.get_pattern — Method
get_pattern(M::AbstractMatrix)

Return the Bool-valued sparsity pattern for a given matrix.
Only specialized on SparseMatrixCSC because it is used with symbolic backends, and at the moment their sparse Jacobian/Hessian utilities return a SparseMatrixCSC.
The trivial dense fallback is designed to protect against a change of format in these packages.
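A minimal sketch of the described behavior (function name is my own; the package's implementation may differ):

```julia
using SparseArrays

# Bool-valued pattern for sparse inputs; trivial all-true fallback for dense ones.
pattern_sketch(M::SparseMatrixCSC) = map(!iszero, M)
pattern_sketch(M::AbstractMatrix) = trues(size(M))

pattern_sketch(sparse([1, 2], [1, 2], [3.0, 4.0]))  # 2×2 diagonal Bool pattern
```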
DifferentiationInterface.hessian_sparsity_with_contexts — Method
hessian_sparsity_with_contexts(f, detector, x, contexts...)

Wrapper around ADTypes.hessian_sparsity enabling the allocation of caches with proper element types.
DifferentiationInterface.hvp_mode — Method
DifferentiationInterface.inner_preparation_behavior — Method
inner_preparation_behavior(backend::AbstractADType)

Return PrepareInnerSimple, PrepareInnerOverload or DontPrepareInner in a statically predictable way.
DifferentiationInterface.inplace_support — Method
inplace_support(backend)

Return InPlaceSupported or InPlaceNotSupported in a statically predictable way.
DifferentiationInterface.ismutable_array — Method
ismutable_array(x)

Check whether x is a mutable array and return a Bool.
At the moment, this only returns false for StaticArrays.SArray.
DifferentiationInterface.jacobian_sparsity_with_contexts — Method
jacobian_sparsity_with_contexts(f, detector, x, contexts...)
jacobian_sparsity_with_contexts(f!, y, detector, x, contexts...)

Wrapper around ADTypes.jacobian_sparsity enabling the allocation of caches with proper element types.
DifferentiationInterface.multibasis — Method
multibasis(a::AbstractArray, inds)

Construct the sum of the i-th standard basis arrays in the vector space of a for all i ∈ inds.
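A hedged sketch of the described construction (the helper name is mine, not the package's): the result is the sum of standard basis arrays e_i for i in inds.

```julia
# Sum of standard basis arrays of `a` at the indices in `inds`.
function multibasis_sketch(a::AbstractArray{T}, inds) where {T}
    b = zero(a)
    for i in inds
        b[i] = one(T)
    end
    return b
end

multibasis_sketch(zeros(4), (2, 4))  # [0.0, 1.0, 0.0, 1.0]
```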
DifferentiationInterface.overloaded_input_type — Function
overloaded_input_type(prep)

If it exists, return the overloaded input type which will be passed to the differentiated function when the preparation result prep is reused.
DifferentiationInterface.pick_batchsize — Method
pick_batchsize(backend, x_or_y::AbstractArray)

Return a BatchSizeSettings appropriate for arrays of the same length as x_or_y with a given backend.
Note that the array in question can be either the input or the output of the function, depending on whether the backend performs forward- or reverse-mode AD.
DifferentiationInterface.pick_batchsize — Method
pick_batchsize(backend, N::Integer)

Return a BatchSizeSettings appropriate for arrays of length N with a given backend.
DifferentiationInterface.prepare!_derivative — Method
prepare!_derivative(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_derivative(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_derivative but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_gradient — Method
prepare!_gradient(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_gradient but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_hessian — Method
prepare!_hessian(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_hessian but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_hvp — Method
prepare!_hvp(f, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_hvp but can resize the contents of an existing prep object to avoid some allocations.
Depending on the backend, this can have several effects (preallocating memory, recording an execution trace) which are transparent to the user.
The preparation result prep is only reusable as long as the arguments to hvp do not change type or size, and the function and backend themselves are not modified. Otherwise, preparation becomes invalid and you need to run it again. In some settings, invalid preparations may still give correct results (e.g. for backends that require no preparation), but this is not a semantic guarantee and should not be relied upon.
The preparation result prep is not thread-safe. Sharing it between threads may lead to unexpected behavior. If you need to run differentiation concurrently, prepare separate prep objects for each thread.
When strict=Val(true) (the default), type checking is enforced between preparation and execution (but size checking is left to the user). While your code may work for different types by setting strict=Val(false), this is not guaranteed by the API and can break without warning.
DifferentiationInterface.prepare!_jacobian — Method
prepare!_jacobian(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_jacobian(f!, y, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_jacobian but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_pullback — Method
prepare!_pullback(f, prep, backend, x, ty, [contexts...]) -> new_prep
prepare!_pullback(f!, y, prep, backend, x, ty, [contexts...]) -> new_prep

Same behavior as prepare_pullback but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_pushforward — Method
prepare!_pushforward(f, prep, backend, x, tx, [contexts...]) -> new_prep
prepare!_pushforward(f!, y, prep, backend, x, tx, [contexts...]) -> new_prep

Same behavior as prepare_pushforward but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_second_derivative — Method
prepare!_second_derivative(f, prep, backend, x, [contexts...]) -> new_prep

Same behavior as prepare_second_derivative but can resize the contents of an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
Compared to when prep was first created, the only authorized modification is a size change for input x or output y. Any other modification (like a change of type for the input) is not supported and will give erroneous results.
For efficiency, this function needs to rely on backend package internals; therefore it is not protected by semantic versioning.
DifferentiationInterface.pullback_performance — Method
pullback_performance(backend)

Return PullbackFast or PullbackSlow in a statically predictable way.
DifferentiationInterface.pushforward_performance — Method
pushforward_performance(backend)

Return PushforwardFast or PushforwardSlow in a statically predictable way.
DifferentiationInterface.reasonable_batchsize — Method
reasonable_batchsize(N::Integer, Bmax::Integer)

Reproduces the heuristic from ForwardDiff to minimize

- the number of batches necessary to cover an array of length N
- the number of leftover indices in the last partial batch
Source: https://github.com/JuliaDiff/ForwardDiff.jl/blob/ec74fbc32b10bbf60b3c527d8961666310733728/src/prelude.jl#L19-L29
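Based on the linked ForwardDiff heuristic, the computation might be sketched as follows (my own reconstruction, not the package's code):

```julia
function reasonable_batchsize_sketch(N::Integer, Bmax::Integer)
    N <= Bmax && return N
    A = cld(N, Bmax)  # minimal number of batches with batch size at most Bmax
    return cld(N, A)  # smallest batch size that still covers N in A batches
end

reasonable_batchsize_sketch(12, 5)  # 4: three aligned batches instead of 5 + 5 + 2
```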
DifferentiationInterface.recursive_similar — Method
recursive_similar(x, T)

Apply similar(_, T) recursively to x or its components.
Works if x is an AbstractArray or a (nested) NTuple / NamedTuple of AbstractArrays.
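A hedged sketch of the recursion (names are mine): arrays get similar, while (named) tuples recurse over their components.

```julia
# Arrays: allocate a similar container with element type T.
recsim(x::AbstractArray, ::Type{T}) where {T} = similar(x, T)
# Tuples and NamedTuples: recurse componentwise, preserving the structure.
recsim(x::Union{Tuple,NamedTuple}, ::Type{T}) where {T} = map(xi -> recsim(xi, T), x)

recsim((a=zeros(Int, 2), b=(zeros(3),)), Float32)  # NamedTuple of Float32 arrays
```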
DifferentiationInterface.threshold_batchsize — Function
threshold_batchsize(backend::AbstractADType, B::Integer)

If the backend object has a fixed batch size B0, return a new backend where the fixed batch size is min(B0, B). Otherwise, act as the identity.