XLAPrecision

class lightning.pytorch.plugins.precision.XLAPrecision(precision='32-true')[source]

Bases: Precision

Plugin for training with XLA.

Parameters:

precision (Literal['32-true', '16-true', 'bf16-true']) – Full precision (32-true) or half precision (16-true, bf16-true).

Raises:

ValueError – If unsupported precision is provided.
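
A minimal usage sketch (assuming a TPU/XLA environment; the Trainer arguments shown are illustrative and not part of this class):

    from lightning.pytorch import Trainer
    from lightning.pytorch.plugins.precision import XLAPrecision

    # Request true bf16 precision on XLA devices; an unsupported value
    # would raise ValueError as documented above.
    plugin = XLAPrecision(precision="bf16-true")
    trainer = Trainer(accelerator="tpu", devices=8, plugins=[plugin])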

optimizer_step(optimizer, model, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type:

Any
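
A hypothetical subclass sketch showing where this hook fits (LoggingXLAPrecision is illustrative, not part of the library); the signature mirrors the one documented above:

    from lightning.pytorch.plugins.precision import XLAPrecision

    class LoggingXLAPrecision(XLAPrecision):
        def optimizer_step(self, optimizer, model, closure, **kwargs):
            # Defer to the XLA-aware step, then report that it ran.
            output = super().optimizer_step(optimizer, model, closure, **kwargs)
            print(f"optimizer_step completed for {type(optimizer).__name__}")
            return output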

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type:

None
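
A hypothetical override sketch (the cached attribute is illustrative): release any subclass state, then call super() so the plugin still performs its own cleanup:

    from lightning.pytorch.plugins.precision import XLAPrecision

    class CleanupXLAPrecision(XLAPrecision):
        def teardown(self):
            # Drop state this subclass cached (hypothetical attribute),
            # then let the base plugin free its own resources.
            self._cached_buffers = None
            super().teardown()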

property precision: Literal['transformer-engine', 'transformer-engine-float16', '16-true', '16-mixed', 'bf16-true', 'bf16-mixed', '32-true', '64-true']

The precision setting currently used by this plugin, as one of the string literals listed above.
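
A small sketch of reading the property after construction (the values shown follow the Parameters section above):

    from lightning.pytorch.plugins.precision import XLAPrecision

    plugin = XLAPrecision(precision="16-true")
    assert plugin.precision == "16-true"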