thunder.jit

thunder.jit(fn, /, *, langctx=None, executors=None, sharp_edges=None, cache=None, disable_torch_autograd=False, transforms=None, debug_options=None, **compile_options)[source]

Just-in-time compile a callable (function or model).

Note

Thunder’s support for PyTorch in-place operations is experimental. In-place support can be turned off with the skip_inplace_alias_updates keyword argument.
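
A minimal usage sketch (assuming Thunder and PyTorch are installed; the function and tensor shapes are illustrative):

    import torch
    import thunder

    def foo(a, b):
        return a + b

    # Compile with the default executors and the default cache mode.
    jfoo = thunder.jit(foo)

    a = torch.randn(2, 2)
    b = torch.randn(2, 2)
    # The first call traces and compiles; later calls with compatible inputs
    # can reuse the cached compilation under the default "constant values" mode.
    out = jfoo(a, b)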

Parameters:
  • fn – the callable (function or model) to compile.

Keyword Arguments:
  • langctx – the language context, i.e. which language / library to emulate. Default: “torch” for PyTorch compatibility.

  • executors – list of executors to use. Defaults to the executors returned by thunder.extend.get_default_executors() and always amended by thunder.extend.get_always_executors(). You can get a list of all available executors with thunder.get_all_executors(). You can also pass the name of an executor that’s been registered, and it will be resolved with thunder.extend.get_executor(). See the sketch after this list for an example.

  • sharp_edges – sharp edge detection action. What to do when thunder detects a construct that is likely to lead to errors. Can be "allow", "warn", "error". Defaults to "allow".

  • cache

    caching mode. Default: "constant values"

    • "no caching" - disable caching and always recompute,

    • "constant values" - require Tensors to be of the same shape, device, dtype etc., and integers and strings to match exactly,

    • "same input" - don’t check, but just assume that a cached function works if it exists.

  • transforms – optional list of transforms to be applied. It should be a list of instances of thunder.core.transforms.Transform. Default: None

  • debug_options – optional thunder.DebugOptions instance. See the docstring of DebugOptions for supported debug options. Default: None
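
The sketch below shows how several of these keyword arguments can be combined. The executor name "torch" and the chosen settings are illustrative; which executors are available depends on the installation:

    import torch
    import thunder

    def foo(x):
        return torch.nn.functional.relu(x) * 2

    # Inspect the executors registered in this environment.
    print(thunder.get_all_executors())

    # Executors may be given by name; "torch" is assumed to be registered here.
    jfoo = thunder.jit(
        foo,
        executors=["torch"],
        sharp_edges="warn",
        cache="constant values",
    )

    out = jfoo(torch.randn(4, 4))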

Return type:

Callable