Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
[2.0.0] - 2023-03-15
[2.0.0] - Added
- Added `Fabric.all_reduce` (#16459); see the sketch after this list
- Added support for saving and loading DeepSpeed checkpoints through `Fabric.save/load()` (#16452)
- Added support for automatically calling `set_epoch` on the `dataloader.batch_sampler.sampler` (#16841)
- Added support for writing logs to remote file systems with the `CSVLogger` (#16880)
- Added support for frozen dataclasses in the optimizer state (#16656)
- Added `lightning.fabric.is_wrapped` to check whether a module, optimizer, or dataloader was already wrapped by Fabric (#16953)
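A minimal sketch of the two utilities above, assuming a two-process CPU run; the module and tensor values are illustrative only:

```python
import torch
from lightning.fabric import Fabric, is_wrapped

fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
fabric.launch()  # relaunches the script across two processes

model = fabric.setup_module(torch.nn.Linear(4, 4))
print(is_wrapped(model))  # True: setup_module wrapped the module

# all_reduce aggregates a tensor across all processes (the default
# reduce_op is assumed to be "mean" here).
rank = torch.tensor(float(fabric.global_rank))
mean_rank = fabric.all_reduce(rank)
fabric.print(mean_rank)  # tensor(0.5) with two processes
```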
[2.0.0] - Changed
- Fabric now chooses `accelerator="auto", strategy="auto", devices="auto"` as defaults (#16842)
- Checkpoint saving and loading redesign (#16434), sketched after this list:
  - Changed the method signature of `Fabric.save` and `Fabric.load`
  - Changed the method signature of `Strategy.save_checkpoint` and `Strategy.load_checkpoint`
  - `Fabric.save` accepts a state that can contain model and optimizer references
  - `Fabric.load` can now load state in-place onto models and optimizers
  - `Fabric.load` returns a dictionary of objects that weren't loaded into the state
  - `Strategy.save_checkpoint` and `Strategy.load_checkpoint` are now responsible for accessing the state of the model and optimizers

- `DataParallelStrategy.get_module_state_dict()` and `DDPStrategy.get_module_state_dict()` now correctly extract the state dict without keys prefixed with 'module' (#16487)
- "Native" suffix removal (#16490):
  - `strategy="fsdp_full_shard_offload"` is now `strategy="fsdp_cpu_offload"`
  - `lightning.fabric.plugins.precision.native_amp` is now `lightning.fabric.plugins.precision.amp`

- Enabled all shorthand strategy names that can be supported in the CLI (#16485)
- Renamed `strategy='tpu_spawn'` to `strategy='xla'` and `strategy='tpu_spawn_debug'` to `strategy='xla_debug'` (#16781)
- Changed arguments for precision settings (from [64|32|16|bf16] to ["64-true"|"32-true"|"16-mixed"|"bf16-mixed"]) (#16767)
- The selection `Fabric(strategy="ddp_spawn", ...)` no longer falls back to "ddp" when a cluster environment gets detected (#16780)
- Renamed `setup_dataloaders(replace_sampler=...)` to `setup_dataloaders(use_distributed_sampler=...)` (#16829)
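A hedged sketch tying several of the entries above together: the new "auto" defaults, the new precision strings (assuming hardware with bf16 support), the `use_distributed_sampler` rename, and the redesigned `Fabric.save`/`Fabric.load`. The file name and tensor shapes are illustrative only:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.fabric import Fabric

# accelerator, strategy, and devices now default to "auto"; precision
# takes the new string form ("bf16-mixed" rather than "bf16").
fabric = Fabric(precision="bf16-mixed")
fabric.launch()

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)

# replace_sampler= was renamed to use_distributed_sampler=.
loader = fabric.setup_dataloaders(
    DataLoader(TensorDataset(torch.randn(8, 4))), use_distributed_sampler=True
)

# Redesigned checkpointing: save a state holding live model/optimizer
# references, then restore it in-place; entries not consumed by the given
# state (here "epoch") come back in the returned remainder dictionary.
state = {"model": model, "optimizer": optimizer, "epoch": 7}
fabric.save("checkpoint.ckpt", state)
remainder = fabric.load("checkpoint.ckpt", {"model": model, "optimizer": optimizer})
print(remainder["epoch"])  # 7
```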
[2.0.0] - Removed
[1.9.4] - 2023-03-01
[1.9.3] - 2023-02-21
[1.9.2] - 2023-02-15
[1.9.1] - 2023-02-10
[1.9.1] - Fixed
- Fixed error handling for `accelerator="mps"` and `ddp` strategy pairing (#16455)
- Fixed strict availability check for `torch_xla` requirement (#16476)
- Fixed an issue where PL would wrap DataLoaders with XLA's MpDeviceLoader more than once (#16571)
- Fixed the batch_sampler reference for DataLoaders wrapped with XLA's MpDeviceLoader (#16571)
- Fixed an import error when `torch.distributed` is not available (#16658)
[1.9.0] - 2023-01-17
[1.9.0] - Added
- Added `Fabric.launch()` to programmatically launch processes (e.g. in a Jupyter notebook) (#14992); see the sketch after this list
- Added the option to launch Fabric scripts from the CLI, without the need to wrap the code into the `run` method (#14992)
- Added `Fabric.setup_module()` and `Fabric.setup_optimizers()` to support strategies that need to set up the model before an optimizer can be created (#15185)
- Added support for Fully Sharded Data Parallel (FSDP) training in Lightning Lite (#14967)
- Added `lightning.fabric.accelerators.find_usable_cuda_devices` utility function (#16147)
- Added basic support for LightningModules (#16048)
- Added support for managing callbacks via `Fabric(callbacks=...)` and emitting events through `Fabric.call()` (#16074)
- Added Logger support (#16121):
  - Added `Fabric(loggers=...)` to support different Logger frameworks in Fabric
  - Added `Fabric.log` for logging scalars using multiple loggers
  - Added `Fabric.log_dict` for logging a dictionary of multiple metrics at once
  - Added `Fabric.loggers` and `Fabric.logger` attributes to access the individual logger instances
  - Added support for calling `self.log` and `self.log_dict` in a LightningModule when using Fabric
  - Added access to `self.logger` and `self.loggers` in a LightningModule when using Fabric

- Added `lightning.fabric.loggers.TensorBoardLogger` (#16121)
- Added `lightning.fabric.loggers.CSVLogger` (#16346)
- Added support for a consistent `.zero_grad(set_to_none=...)` on the wrapped optimizer regardless of which strategy is used (#16275)
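A sketch combining several of these additions: programmatic launch, the split module/optimizer setup, callbacks, and loggers. The `PrintOnStart` class and the `on_train_start` event name are illustrative, not part of the library:

```python
import torch
from lightning.fabric import Fabric
from lightning.fabric.loggers import CSVLogger, TensorBoardLogger


class PrintOnStart:
    """Hypothetical callback: any object whose method names match the
    events emitted via Fabric.call()."""

    def on_train_start(self, fabric):
        fabric.print("training starts")


fabric = Fabric(
    callbacks=[PrintOnStart()],
    loggers=[TensorBoardLogger("logs"), CSVLogger("logs")],
)
fabric.launch()  # programmatic launch, e.g. from a notebook

model = fabric.setup_module(torch.nn.Linear(4, 4))  # model first ...
optimizer = fabric.setup_optimizers(torch.optim.SGD(model.parameters(), lr=0.1))

fabric.call("on_train_start", fabric=fabric)  # dispatched to the callback
fabric.log("loss", 0.25)                      # scalar goes to every logger
fabric.log_dict({"acc": 0.9, "lr": 0.1})      # several metrics at once
```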
[1.9.0] - Changed
- The `Fabric.run()` method is no longer abstract (#14992)
- The `XLAStrategy` now inherits from `ParallelStrategy` instead of `DDPSpawnStrategy` (#15838)
- Merged the implementation of `DDPSpawnStrategy` into `DDPStrategy` and removed `DDPSpawnStrategy` (#14952)
- The dataloader wrapper returned from `.setup_dataloaders()` now calls `.set_epoch()` on the distributed sampler if one is used (#16101); see the sketch after this list
- Renamed `Strategy.reduce` to `Strategy.all_reduce` in all strategies (#16370)
- When using multiple devices, the strategy now defaults to "ddp" instead of "ddp_spawn" when none is set (#16388)
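A short sketch of the `set_epoch` change, assuming a two-process CPU run; the dataset is a placeholder:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
fabric.launch()

# A DistributedSampler is injected automatically; the wrapper returned
# here calls set_epoch() on it whenever a new epoch of iteration begins.
loader = fabric.setup_dataloaders(DataLoader(TensorDataset(torch.arange(8.0))))

for epoch in range(3):
    for (batch,) in loader:  # no manual sampler.set_epoch(epoch) needed
        pass
```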
[1.8.6] - 2022-12-21
- minor cleaning
[1.8.5] - 2022-12-15
- minor cleaning