DeepSpeedPrecision¶
- class lightning.pytorch.plugins.precision.DeepSpeedPrecision(precision)[source]¶
Bases: Precision
Precision plugin for DeepSpeed integration.
Warning
This is an experimental feature.
- Parameters:
precision¶ (Literal['32-true', '16-true', 'bf16-true', '16-mixed', 'bf16-mixed']) – Full precision (32-true), half precision (16-true, bf16-true), or mixed precision (16-mixed, bf16-mixed).
- Raises:
ValueError – If an unsupported precision is provided.
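The accepted precision strings and the ValueError behavior can be sketched in plain Python. This is an illustrative stand-in for the constructor's validation, not the actual Lightning internals:

```python
# Precision strings accepted by DeepSpeedPrecision (from the signature above).
SUPPORTED_PRECISION = ("32-true", "16-true", "bf16-true", "16-mixed", "bf16-mixed")

def validate_precision(precision: str) -> str:
    """Sketch of the constructor's check: reject unsupported precision strings."""
    if precision not in SUPPORTED_PRECISION:
        raise ValueError(
            f"`precision={precision!r}` is not supported with DeepSpeed; "
            f"it must be one of {SUPPORTED_PRECISION}."
        )
    return precision

print(validate_precision("16-mixed"))  # a valid setting passes through
```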
- backward(tensor, model, optimizer, *args, **kwargs)[source]¶
Performs back-propagation.
- clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]¶
Clips the gradients.
- convert_input(data)[source]¶
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
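Conceptually, `convert_input()` moves floating-point inputs to the compute dtype implied by the precision string: the "-true" and "-mixed" variants of a setting share the same half-precision compute dtype. A torch-free sketch of that mapping (the dictionary and helper are illustrative, not Lightning's implementation):

```python
# Hypothetical mapping from precision string to the dtype convert_input targets.
PRECISION_TO_DTYPE = {
    "32-true": "float32",
    "16-true": "float16",
    "bf16-true": "bfloat16",
    "16-mixed": "float16",
    "bf16-mixed": "bfloat16",
}

def target_dtype(precision: str) -> str:
    """Return the compute dtype name for a given precision setting."""
    # "16-*" settings compute in float16, "bf16-*" in bfloat16.
    return PRECISION_TO_DTYPE[precision]
```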
- convert_module(module)[source]¶
Convert the module parameters to the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
- module_init_context()[source]¶
Instantiate module parameters or tensors in the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
- optimizer_step(optimizer, model, closure, **kwargs)[source]¶
Hook to run the optimizer step.
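The hook receives the training-step `closure` so the plugin can run forward/backward inside the step and return the loss. A minimal sketch of that contract with a dummy optimizer (names are hypothetical; with DeepSpeed the engine additionally handles loss scaling inside the step):

```python
def run_optimizer_step(optimizer, closure):
    """Run the forward/backward closure, then let the optimizer apply the update."""
    loss = closure()   # computes the loss (and, in real training, gradients)
    optimizer.step()   # applies the parameter update
    return loss

class DummyOptimizer:
    """Trivial optimizer that only counts how often step() is called."""
    def __init__(self):
        self.steps = 0
    def step(self):
        self.steps += 1

opt = DummyOptimizer()
loss = run_optimizer_step(opt, closure=lambda: 0.5)
```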
- property precision: Literal['transformer-engine', 'transformer-engine-float16', '16-true', '16-mixed', 'bf16-true', 'bf16-mixed', '32-true', '64-true']¶
The precision string this plugin was configured with, as one of the literal values above.