Fabric¶
- class lightning_fabric.fabric.Fabric(accelerator=None, strategy=None, devices=None, num_nodes=1, precision=32, plugins=None, callbacks=None, loggers=None)[source]¶
Bases: object
Fabric accelerates your PyTorch training or inference code with minimal changes required.
- Automatic placement of models and data onto the device.
- Automatic support for mixed and double precision (smaller memory footprint).
- Seamless switching between hardware (CPU, GPU, TPU) and distributed training strategies (data-parallel training, sharded training, etc.).
- Automated spawning of processes, no launch utilities required.
- Multi-node support.
- Parameters
  - accelerator¶ (Union[str, Accelerator, None]) – The hardware to run on. Possible choices are: "cpu", "cuda", "mps", "gpu", "tpu", "auto".
  - strategy¶ (Union[str, Strategy, None]) – Strategy for how to run across multiple devices. Possible choices are: "dp", "ddp", "ddp_spawn", "deepspeed", "fsdp".
  - devices¶ (Union[int, str, List[int], None]) – Number of devices to train on (int), which GPUs to train on (list or str), or "auto". The value applies per node.
  - num_nodes¶ (int) – Number of GPU nodes for distributed training.
  - precision¶ (Union[Literal[64, 32, 16], Literal["64", "32", "16", "bf16"]]) – Double precision (64), full precision (32), half precision (16), or bfloat16 precision ("bf16").
  - plugins¶ (Union[Precision, ClusterEnvironment, CheckpointIO, str, List[Union[Precision, ClusterEnvironment, CheckpointIO, str]], None]) – One or several custom plugins.
  - callbacks¶ (Union[List[Any], Any, None]) – A single callback or a list of callbacks. A callback can contain any arbitrary methods that can be invoked through call() by the user.
  - loggers¶ (Union[Logger, List[Logger], None]) – A single logger or a list of loggers. See log() for more information.
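Example (a minimal sketch of the typical workflow; the tiny model, the random batch, and the top-level Fabric import are illustrative placeholders):

import torch
from lightning_fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=1)

model = torch.nn.Linear(32, 2)                       # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)    # wrap for the strategy

batch = fabric.to_device(torch.randn(4, 32))         # placeholder batch
loss = model(batch).sum()
fabric.backward(loss)                                # instead of loss.backward()
optimizer.step()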
- all_gather(data, group=None, sync_grads=False)[source]¶
Gather tensors or collections of tensors from multiple processes.
- Parameters
  - data¶ (Union[Tensor, Dict, List, Tuple]) – int, float, tensor of shape (batch, …), or a (possibly nested) collection thereof.
  - group¶ (Optional[Any]) – the process group to gather results from. Defaults to all processes (world).
  - sync_grads¶ (bool) – flag that allows users to synchronize gradients for the all_gather operation.
- Return type
Union[Tensor, Dict, List, Tuple]
- Returns
A tensor of shape (world_size, batch, …), or if the input was a collection the output will also be a collection with tensors of this shape.
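Example (a sketch; assumes the imports and fabric from the example above, with multiple distributed processes running):

# each process contributes one value; every process receives all of them
local_value = torch.tensor([float(fabric.global_rank)])
gathered = fabric.all_gather(local_value)   # shape: (world_size, 1)
fabric.print("mean across processes:", gathered.mean().item())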
- autocast()[source]¶
A context manager to automatically convert operations for the chosen precision.
Use this only if the forward method of your model does not cover all operations you wish to run with the chosen precision setting.
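Example (a sketch; assumes fabric exists and q, k are tensors created outside the model's forward):

# run extra out-of-model operations under the chosen precision setting
with fabric.autocast():
    scores = torch.matmul(q, k.transpose(-2, -1)) / 8.0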
- backward(tensor, *args, model=None, **kwargs)[source]¶
Replaces loss.backward() in your training loop. Handles precision automatically for you.
- Parameters
  - tensor¶ (Tensor) – The tensor (loss) to back-propagate gradients from.
  - *args¶ – Optional positional arguments passed to the underlying backward function.
  - model¶ (Optional[_FabricModule]) – Optional model instance for plugins that require the model for backward().
  - **kwargs¶ – Optional named keyword arguments passed to the underlying backward function.
Note
When using strategy="deepspeed" and multiple models were set up, it is required to pass in the model as an argument here.
- Return type
None
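Example (a sketch of the DeepSpeed case from the note above; all names are illustrative):

# with multiple models set up, tell Fabric which one owns this loss
gen_loss = ...                              # your generator loss
fabric.backward(gen_loss, model=generator)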
- barrier(name=None)[source]¶
Wait for all processes to enter this call. Use this to synchronize all parallel processes, but only if necessary, otherwise the overhead of synchronization will cause your program to slow down.
Example:
if self.global_rank == 0:
    # let process 0 download the dataset
    dataset.download_files()

# let all processes wait before reading the dataset
self.barrier()

# now all processes can read the files and start training
- Return type
None
- call(hook_name, *args, **kwargs)[source]¶
Trigger the callback methods with the given name and arguments.
Not all objects registered via Fabric(callbacks=...) must implement a method with the given name. The ones that have a matching method name will get called.
- Parameters
  - hook_name¶ (str) – The name of the callback method.
  - *args¶ – Optional positional arguments that get passed down to the callback method.
  - **kwargs¶ – Optional keyword arguments that get passed down to the callback method.
Example:
class MyCallback:
    def on_train_epoch_end(self, results):
        ...

fabric = Fabric(callbacks=[MyCallback()])
fabric.call("on_train_epoch_end", results={...})
- Return type
None
- load(filepath)[source]¶
Load a checkpoint from a file.
How and which processes load gets determined by the strategy.
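Example (a sketch; the path and checkpoint keys are illustrative and assume the file was written with save() below, which returns the checkpoint contents as a dictionary):

checkpoint = fabric.load("path/to/checkpoint.ckpt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])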
- log(name, value, step=None)[source]¶
Log a scalar to all loggers that were added to Fabric.
- Parameters
  - name¶ (str) – The name of the metric to log.
  - value¶ (Any) – The metric value to collect. If the value is a torch.Tensor, it gets detached from the graph automatically.
  - step¶ (Optional[int]) – Optional step number. Most Logger implementations auto-increment this value by one with every log call. You can specify your own value here.
- Return type
None
- log_dict(metrics, step=None)[source]¶
Log multiple scalars at once to all loggers that were added to Fabric.
- Parameters
  - metrics¶ (Mapping[str, Any]) – A dictionary where the key is the name of the metric and the value the scalar to be logged. Any torch.Tensor in the dictionary gets detached from the graph automatically.
  - step¶ (Optional[int]) – Optional step number. Most Logger implementations auto-increment this value by one with every log call. You can specify your own value here.
- Return type
None
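Example (a sketch; the metric names, loss, and global_step are illustrative):

fabric.log("train/loss", loss)
fabric.log_dict({"train/acc": 0.91, "val/acc": 0.88}, step=global_step)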
- no_backward_sync(module, enabled=True)[source]¶
Skip gradient synchronization during backward to avoid redundant communication overhead.
Use this context manager when performing gradient accumulation to speed up training with multiple devices.
Example:
# Accumulate gradient 8 batches at a time
with self.no_backward_sync(model, enabled=(batch_idx % 8 != 0)):
    output = model(input)
    loss = ...
    self.backward(loss)
    ...
For strategies that don't support it, a warning is emitted. For single-device strategies, it is a no-op. Both the model's .forward() and the self.backward() call need to run under this context.
- print(*args, **kwargs)[source]¶
Print something only on the first process.
Arguments passed to this method are forwarded to the Python built-in print() function.
- Return type
None
- run(*args, **kwargs)[source]¶
All the code inside this run method gets accelerated by Fabric.
You can pass arbitrary arguments to this function when overriding it.
- Return type
Any
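Example (a sketch; MyModel and the loop body are illustrative placeholders):

class MyFabric(Fabric):
    def run(self, num_epochs):
        model = MyModel()                    # hypothetical model class
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        model, optimizer = self.setup(model, optimizer)
        for epoch in range(num_epochs):
            ...                              # your training loop

MyFabric(accelerator="cpu").run(num_epochs=10)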
- save(content, filepath)[source]¶
Save checkpoint contents to a file.
How and which processes save gets determined by the strategy. For example, the ddp strategy saves checkpoints only on process 0.
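Example (a sketch; the state keys and path are illustrative):

# collect what you want to checkpoint into a dictionary
state = {"model": model.state_dict(), "optimizer": optimizer.state_dict()}
fabric.save(state, "path/to/checkpoint.ckpt")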
- static seed_everything(seed=None, workers=None)[source]¶
Helper function to seed everything without explicitly importing Lightning.
See pytorch_lightning.seed_everything() for more details.
- Return type
int
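Example (a sketch; seed_everything is a static method, so no Fabric instance is needed):

# seed Python, NumPy, and PyTorch in one call; workers=True also derives
# seeds for dataloader worker processes
Fabric.seed_everything(42, workers=True)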
- setup(module, *optimizers, move_to_device=True)[source]¶
Set up a model and its optimizers for accelerated training.
- Parameters
  - module¶ (Module) – A torch.nn.Module to set up
  - *optimizers¶ – The optimizer(s) to set up (no optimizers is also possible)
  - move_to_device¶ (bool) – If set True (default), moves the model to the correct device. Set this to False and alternatively use to_device() manually.
- Return type
Any
- Returns
The tuple containing wrapped module and the optimizers, in the same order they were passed in.
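Example (a sketch showing move_to_device=False; assumes model and optimizer exist):

# defer device placement and move the model yourself later
model, optimizer = fabric.setup(model, optimizer, move_to_device=False)
model = fabric.to_device(model)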
- setup_dataloaders(*dataloaders, replace_sampler=True, move_to_device=True)[source]¶
Set up one or multiple dataloaders for accelerated training. If you need different settings for each dataloader, call this method individually for each one.
- Parameters
  - *dataloaders¶ – A single dataloader or a sequence of dataloaders.
  - replace_sampler¶ (bool) – If set True (default), automatically wraps or replaces the sampler on the dataloader(s) for distributed training. If you have a custom sampler defined, set this argument to False.
  - move_to_device¶ (bool) – If set True (default), moves the data returned by the dataloader(s) automatically to the correct device. Set this to False and alternatively use to_device() manually on the returned data.
- Return type
Union[DataLoader, List[DataLoader]]
- Returns
The wrapped dataloaders, in the same order they were passed in.
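Example (a sketch; the random dataset is an illustrative placeholder):

from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 32))
dataloader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=8))
for (batch,) in dataloader:
    ...   # batches arrive already on the correct device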
- setup_module(module, move_to_device=True)[source]¶
Set up a model for accelerated training or inference.
This is the same as calling .setup(model) with no optimizers. It is useful for inference or for certain strategies like FSDP that require setting up the module before the optimizer can be created and set up. See also setup_optimizers().
- Parameters
  - module¶ (Module) – A torch.nn.Module to set up
  - move_to_device¶ (bool) – If set True (default), moves the model to the correct device. Set this to False and alternatively use to_device() manually.
- Return type
_FabricModule
- Returns
The wrapped model.
- setup_optimizers(*optimizers)[source]¶
Set up one or more optimizers for accelerated training.
Some strategies do not allow setting up model and optimizer independently. For them, you should call .setup(model, optimizer, ...) instead to jointly set them up.
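Example (a sketch of the split setup that strategies like FSDP need; MyModel is an illustrative placeholder):

# wrap the module first, then build the optimizer from the wrapped parameters
model = fabric.setup_module(MyModel())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
optimizer = fabric.setup_optimizers(optimizer)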
- sharded_model()[source]¶
Shard the parameters of the model instantly when instantiating the layers.
Use this context manager with strategies that support sharding the model parameters to save peak memory usage.
Example:
with self.sharded_model():
    model = MyModel()
The context manager is strategy-agnostic; for strategies that don't do sharding, it is a no-op.
- Return type
Generator
- to_device(obj: torch.nn.modules.module.Module) torch.nn.modules.module.Module [source]¶
- to_device(obj: torch.Tensor) torch.Tensor
- to_device(obj: Any) Any
Move a torch.nn.Module or a collection of tensors to the current device, if it is not already on that device.
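Example (a sketch; works on modules, tensors, and nested collections of tensors alike):

batch = {"inputs": torch.randn(4, 32), "labels": torch.zeros(4)}
batch = fabric.to_device(batch)   # every tensor now lives on fabric.device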
- property device: torch.device¶
The current device this process runs on.
Use this to create tensors directly on the device if needed.
- Return type
torch.device
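Example (a sketch):

# allocate directly on the right device instead of moving afterwards
weights = torch.zeros(10, 10, device=fabric.device)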
- property global_rank: int¶
The global index of the current process across all devices and nodes.
- Return type
int
- property local_rank: int¶
The index of the current process among the processes running on the local node.
- Return type
int
- property logger: lightning_fabric.loggers.logger.Logger¶
Returns the first logger in the list passed to Fabric, which is considered the main logger.
- Return type
Logger
- property loggers: List[lightning_fabric.loggers.logger.Logger]¶
Returns all loggers passed to Fabric.