ADgradient should always cache/prepare #51

@tpapp

Description

Given that the API already provides the vector length, "preparation" (allocation of buffers and caches, precomputations, everything that the DifferentiationInterface.prepare_... methods do) just needs an element type.

I am proposing that ADgradient always perform this preparation step, with a default element type of T = Float64. The user may still call the resulting gradient object with other element types, in which case the layer should just silently convert as needed, without a warning.

Cf. JuliaDiff/DifferentiationInterface.jl#859; if that is not supported there, we can do this in this package.

The x = ... argument should then be retired, since preparation would no longer need a concrete input vector.
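
A minimal sketch of the proposed behavior, assuming a DifferentiationInterface-style preparation API. The names `PreparedGradient` and `prepared_ADgradient` are purely illustrative, not part of this package, and the argument order of the DI calls follows recent DifferentiationInterface versions and may differ:

```julia
# Illustrative sketch only: `PreparedGradient` and `prepared_ADgradient` are
# hypothetical names, and the DifferentiationInterface calling conventions
# below follow recent DI versions (prep object passed explicitly).
import DifferentiationInterface as DI
import LogDensityProblems
using LogDensityProblems: dimension, logdensity

struct PreparedGradient{T<:Real,L,B,P}
    ℓ::L        # the underlying log density problem
    backend::B  # AD backend, e.g. ADTypes.AutoForwardDiff()
    prep::P     # preparation object, built once for element type T
end

function prepared_ADgradient(backend, ℓ; T::Type{<:Real} = Float64)
    # preparation needs only the dimension and an element type, so a zero
    # vector of the right length stands in for the retired `x = ...` argument
    x0 = zeros(T, dimension(ℓ))
    prep = DI.prepare_gradient(Base.Fix1(logdensity, ℓ), backend, x0)
    PreparedGradient{T,typeof(ℓ),typeof(backend),typeof(prep)}(ℓ, backend, prep)
end

function LogDensityProblems.logdensity_and_gradient(g::PreparedGradient{T}, x) where {T}
    # silently convert inputs with other element types, as proposed above
    xT = convert(AbstractVector{T}, x)
    DI.value_and_gradient(Base.Fix1(logdensity, g.ℓ), g.prep, g.backend, xT)
end
```

With something like this, calling the gradient with, say, a Vector{Float32} or Vector{Int} would simply go through the convert line instead of triggering re-preparation or a warning.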
