API reference

Entry points

DifferentiationInterfaceTest.Scenario (Type)
Scenario{op,pl_op,pl_fun}

Store a testing scenario composed of a function and its input + output for a given operator.

This generic type should never be used directly: use the specific constructor corresponding to the operator you want to test, or a predefined list of scenarios.

Type parameters

  • op: one of :pushforward, :pullback, :derivative, :gradient, :jacobian, :second_derivative, :hvp, :hessian
  • pl_op: either :in (for op!(f, result, backend, x)) or :out (for result = op(f, backend, x))
  • pl_fun: either :in (for f!(y, x)) or :out (for y = f(x))

Constructors

Scenario{op,pl_op}(f, x; tang, contexts, res1, res2)
Scenario{op,pl_op}(f!, y, x; tang, contexts, res1, res2)

Fields

  • f::Any: function f (if pl_fun==:out) or f! (if pl_fun==:in) to apply

  • x::Any: primal input

  • y::Any: primal output

  • tang::Union{Nothing, NTuple{N, T} where {N, T}}: tangents for pushforward, pullback or HVP

  • contexts::Tuple: contexts (if applicable)

  • res1::Any: first-order result of the operator (if applicable)

  • res2::Any: second-order result of the operator (if applicable)

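For concreteness, here is a minimal sketch of both constructors; the function, input and reference results are illustrative:

    using DifferentiationInterfaceTest

    # Out-of-place function y = f(x), out-of-place operator (pl_op = :out)
    f(x) = sum(abs2, x)
    x = [1.0, 2.0, 3.0]
    scen = Scenario{:gradient,:out}(f, x; res1=2 .* x)  # res1: analytical gradient

    # In-place function f!(y, x), in-place operator (pl_op = :in)
    f!(y, x) = (y .= 2 .* x)
    y = zeros(3)
    scen! = Scenario{:jacobian,:in}(f!, y, x; res1=[2.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 2.0])
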
DifferentiationInterfaceTest.test_differentiation (Function)
test_differentiation(
    backends::Vector{<:ADTypes.AbstractADType};
    ...
) -> Union{Nothing, DataFrames.DataFrame}
test_differentiation(
    backends::Vector{<:ADTypes.AbstractADType},
    scenarios::Vector{<:Scenario};
    correctness,
    type_stability,
    allocations,
    benchmark,
    excluded,
    detailed,
    logging,
    isapprox,
    atol,
    rtol,
    scenario_intact,
    sparsity,
    ignored_modules,
    function_filter,
    skip_allocations,
    count_calls,
    benchmark_test
) -> Union{Nothing, DataFrames.DataFrame}

Apply a list of backends on a list of scenarios, running a variety of different tests and/or benchmarks.

Return

This function always creates and runs a @testset, though its contents may vary.

  • if benchmark == :none, it returns nothing.
  • if benchmark != :none, it returns a DataFrame of benchmark results, whose columns correspond to the fields of DifferentiationBenchmarkDataRow.

Positional arguments

  • backends::Vector{<:AbstractADType}: the backends to test
  • scenarios::Vector{<:Scenario}: the scenarios on which to test them (defaults to the output of default_scenarios())

Keyword arguments

Test categories:

  • correctness=true: whether to compare the differentiation results with the theoretical values specified in each scenario
  • type_stability=:none: whether (and how) to check type stability of operators with JET.jl
  • allocations=:none: whether (and how) to check allocations inside operators with AllocCheck.jl
  • benchmark=:none: whether (and how) to benchmark operators with Chairmarks.jl

For type_stability, allocations and benchmark, the possible values are :none, :prepared or :full. Each setting tests/benchmarks a different subset of calls:

  kwarg       prepared operator   unprepared operator   preparation
  :none       no                  no                    no
  :prepared   yes                 no                    no
  :full       yes                 yes                   yes

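To make these settings concrete, here is a hedged sketch in terms of DifferentiationInterface.jl's gradient operator (signatures follow recent versions of that package and may differ in older ones):

    using DifferentiationInterface
    using ADTypes: AutoForwardDiff
    import ForwardDiff

    f(x) = sum(abs2, x)
    x, backend = [1.0, 2.0, 3.0], AutoForwardDiff()

    prep = prepare_gradient(f, backend, x)  # preparation: covered by :full only
    gradient(f, prep, backend, x)           # prepared operator: covered by :prepared and :full
    gradient(f, backend, x)                 # unprepared operator: covered by :full only
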
Misc options:

  • excluded::Vector{Symbol}: list of operators to exclude, such as FIRST_ORDER or SECOND_ORDER
  • detailed=false: whether to create a detailed or condensed testset
  • logging=false: whether to log progress

Correctness options:

  • isapprox=isapprox: function used to compare objects approximately, with the standard signature isapprox(x, y; atol, rtol)
  • atol=0: absolute precision for correctness testing (when comparing to the reference outputs)
  • rtol=1e-3: relative precision for correctness testing (when comparing to the reference outputs)
  • scenario_intact=true: whether to check that the scenario remains unchanged after the operators are applied
  • sparsity=false: whether to check sparsity patterns for Jacobians / Hessians

Type stability options:

  • ignored_modules=nothing: list of modules that JET.jl should ignore
  • function_filter: filter for functions that JET.jl should ignore (with a reasonable default)

Benchmark options:

  • count_calls=true: whether to also count function calls during benchmarking
  • benchmark_test=true: whether to include tests which succeed if and only if the benchmark runs without error
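As a hedged sketch (the backend list is illustrative and assumes the corresponding packages are installed), a combined correctness and benchmark run looks like:

    using DifferentiationInterfaceTest
    using ADTypes: AutoForwardDiff, AutoZygote
    import ForwardDiff, Zygote

    # Correctness checks plus benchmarks of prepared operators;
    # a DataFrame is returned because benchmark != :none
    df = test_differentiation(
        [AutoForwardDiff(), AutoZygote()],
        default_scenarios();
        correctness=true,
        benchmark=:prepared,
        logging=true,
    )
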
test_differentiation(
    backend::ADTypes.AbstractADType,
    args...;
    kwargs...
) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for a single backend.

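For instance, with a single backend (illustrative) and the default scenarios:

    test_differentiation(AutoForwardDiff(); correctness=true, logging=true)
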
DifferentiationInterfaceTest.benchmark_differentiation (Function)
benchmark_differentiation(
    backends,
    scenarios::Vector{<:Scenario};
    benchmark,
    excluded,
    logging,
    count_calls,
    benchmark_test
) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for test_differentiation with only benchmarks and no correctness or type stability checks.

Specifying the set of scenarios is mandatory for this function.

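A minimal sketch, assuming ForwardDiff.jl is installed:

    using DifferentiationInterfaceTest
    using ADTypes: AutoForwardDiff
    import ForwardDiff

    # Benchmarks only; the scenario list must be passed explicitly
    df = benchmark_differentiation([AutoForwardDiff()], default_scenarios(); logging=true)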

Utilities

DifferentiationInterfaceTest.DifferentiationBenchmarkDataRow (Type)
DifferentiationBenchmarkDataRow

Ad-hoc storage type for differentiation benchmarking results.

Fields

  • backend::ADTypes.AbstractADType: backend used for benchmarking

  • scenario::Scenario: scenario used for benchmarking

  • operator::Symbol: differentiation operator used for benchmarking, e.g. :gradient or :hessian

  • prepared::Union{Nothing, Bool}: whether the operator had been prepared

  • calls::Int64: number of calls to the differentiated function for one call to the operator

  • samples::Int64: number of benchmarking samples taken

  • evals::Int64: number of evaluations used for averaging in each sample

  • time::Float64: minimum runtime over all samples, in seconds

  • allocs::Float64: minimum number of allocations over all samples

  • bytes::Float64: minimum memory allocated over all samples, in bytes

  • gc_fraction::Float64: minimum fraction of time spent in garbage collection over all samples, between 0.0 and 1.0

  • compile_fraction::Float64: minimum fraction of time spent compiling over all samples, between 0.0 and 1.0

See the documentation of Chairmarks.jl for more details on the measurement fields.

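Since the DataFrame returned by test_differentiation or benchmark_differentiation has one column per field above, standard DataFrames.jl manipulations apply; a hedged sketch:

    using DataFrames

    # df: as returned by test_differentiation or benchmark_differentiation
    grad_df = df[df.operator .== :gradient, :]            # keep only gradient rows
    sort!(grad_df, :time)                                 # fastest first
    select(grad_df, :backend, :prepared, :time, :allocs)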

Pre-made scenario lists

Only the existence of these scenario lists is part of the API, not their precise contents.

Internals

This is not part of the public API.

Base.zero (Method)
zero(scen::Scenario)

Return a new Scenario identical to scen except for the first- and second-order results which are set to zero.

DifferentiationInterfaceTest.batchify (Method)
batchify(scen::Scenario)

Return a new Scenario identical to scen except for the tangents tang and associated results res1 / res2, which are duplicated (batch mode).

Only works if scen is a pushforward, pullback or hvp scenario.

DifferentiationInterfaceTest.cachify (Method)
cachify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional cache argument a to store the result before it is returned.

DifferentiationInterfaceTest.constantify (Method)
constantify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional constant argument a by which the output is multiplied. The output and result fields are updated accordingly.

DifferentiationInterfaceTest.flux_scenarios (Function)
flux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Flux.jl.

Warning

This function requires FiniteDifferences.jl and Flux.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API. Their ground truth values are computed with finite differences, and thus subject to imprecision.

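A sketch of how these scenarios might be consumed; the backend and tolerance are illustrative choices, not prescribed by the package:

    using DifferentiationInterfaceTest
    import FiniteDifferences, Flux   # required for the package extension
    import Zygote
    using ADTypes: AutoZygote

    scens = flux_scenarios()
    # Loose tolerance, since the reference values come from finite differences
    test_differentiation(AutoZygote(), scens; correctness=true, rtol=1e-2, logging=true)
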
DifferentiationInterfaceTest.lux_scenarios (Function)
lux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Lux.jl.

Warning

This function requires ComponentArrays.jl, ForwardDiff.jl, Lux.jl and LuxTestUtils.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API.
