ColossalAIStrategy¶
- class pytorch_lightning.strategies.ColossalAIStrategy(use_chunk=True, chunk_size=None, enable_distributed_storage=True, placement_policy='auto', force_outputs_fp32=False, gpu_margin_mem_ratio=0.0, chunk_search_range=67108864, chunk_search_n_grids=4096, min_chunk_size=33554432, initial_scale=65536, min_scale=1, growth_factor=2, backoff_factor=0.5, growth_interval=1000, hysteresis=2, max_scale=4294967296, accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]¶
Bases:
pytorch_lightning.strategies.ddp.DDPStrategy
ColossalAI strategy. It only supports a single optimizer, which currently must be colossalai.nn.optimizer.CPUAdam or colossalai.nn.optimizer.HybridAdam.
Your model must be created in LightningModule.configure_sharded_model(), so you should override this function. More details can be found in the example below.
It configures the accelerator and precision, and you should not configure them when initializing Trainer. CUDA is essential for this strategy; please make sure CUDA is available.
Example:
    class GLUETransformer(LightningModule):
        ...

        def configure_sharded_model(self) -> None:
            self.model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

    trainer = Trainer(..., accelerator="gpu", precision=16, strategy="colossalai")
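Because only a single CPUAdam or HybridAdam optimizer is supported, configure_optimizers() should return one of them. A minimal sketch, assuming the same model as above; the learning rate is only illustrative:

    from colossalai.nn.optimizer import HybridAdam

    class GLUETransformer(LightningModule):
        def configure_sharded_model(self) -> None:
            self.model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

        def configure_optimizers(self):
            # This strategy only supports CPUAdam or HybridAdam as the optimizer.
            return HybridAdam(self.parameters(), lr=2e-5)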
- Parameters:
  - use_chunk¶ (bool) – Whether to use chunk-based memory management. It can speed up training, but slightly more memory will be used.
  - chunk_size¶ (Optional[int]) – The size of a chunk. It will be ignored when use_chunk=False. If it's None, the best chunk size will be searched based on chunk_search_range, chunk_search_n_grids and min_chunk_size.
  - enable_distributed_storage¶ (bool) – Whether to store the model in a distributed manner. It reduces memory from 1 to 1/N, but it may slow down training.
  - placement_policy¶ (str) – It can be "cpu", "cuda" or "auto".
    - If it's "cpu", parameters, gradients and optimizer states will be offloaded to CPU, which means minimal CUDA memory will be used.
    - If it's "cuda", they won't be offloaded, which means maximal CUDA memory will be used. It's the fastest.
    - If it's "auto", they are moved dynamically based on CPU and CUDA memory usage. It will utilize heterogeneous memory space evenly. Note that the "auto" policy can only work well when no other processes use CUDA during your training.
  - force_outputs_fp32¶ (bool) – Whether to cast outputs to fp32.
  - gpu_margin_mem_ratio¶ (float) – The ratio of GPU memory remaining (after the first forward-backward pass) that will be used by the optimizer. This argument is ignored when placement_policy is not "auto".
  - chunk_search_range¶ (int) – The range of chunk sizes to search. The actual search range will be from max(min_chunk_size, max_param_size) to max(min_chunk_size, max_param_size) + chunk_search_range.
  - chunk_search_n_grids¶ (int) – The number of intervals in the search range.
  - min_chunk_size¶ (int) – The minimum size for a chunk in bytes.
  - initial_scale¶ (float) – The initial dynamic loss scale value.
  - min_scale¶ (float) – The minimum dynamic loss scaling value.
  - growth_factor¶ (float) – The multiplication factor for increasing the loss scale.
  - backoff_factor¶ (float) – The multiplication factor for decreasing the loss scale.
  - growth_interval¶ (int) – The number of steps to increase the loss scale when no overflow occurs.
  - hysteresis¶ (int) – The number of overflows before decreasing the loss scale.
  - max_scale¶ (float) – The maximum dynamic loss scaling value.
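For finer control, the strategy can also be instantiated directly and passed to the Trainer. A sketch using the parameters above; the values shown are illustrative, not recommendations:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import ColossalAIStrategy

    strategy = ColossalAIStrategy(
        placement_policy="auto",      # move tensors between CPU and CUDA dynamically
        gpu_margin_mem_ratio=0.0,     # only used when placement_policy="auto"
        initial_scale=2 ** 16,        # initial dynamic loss scale
    )
    trainer = Trainer(accelerator="gpu", precision=16, strategy=strategy)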
- all_gather(tensor, group=None, sync_grads=False)[source]¶
Performs an all_gather on all processes.
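Inside a LightningModule this is usually reached through self.all_gather(), which delegates to the active strategy. A brief sketch; the loss computation is hypothetical:

    def validation_step(self, batch, batch_idx):
        loss = self.model(**batch).loss        # hypothetical per-process scalar
        gathered = self.all_gather(loss)       # tensor of shape (world_size,)
        return gathered.mean()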
- lightning_module_state_dict(rank_zero_only=False)[source]¶
Returns a dictionary containing the whole state of the module. All tensors in the dictionary are detached from their parameters and located in CPU memory.
- model_sharded_context()[source]¶
Provides a hook to create modules in a distributed-aware context. This is useful when we'd like to shard the model instantly, e.g. for extremely large models, to save memory and initialization time.
Returns: Model parallel context.
- optimizer_step(optimizer, opt_idx, closure, model=None, **kwargs)[source]¶
Performs the actual optimizer step.
- reduce(tensor, group=None, reduce_op='sum')[source]¶
Reduces a tensor from several distributed processes to one aggregated tensor.
- Returns:
The reduced value. If the input was not a tensor, the output remains unchanged.
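A brief sketch of reducing a local value across processes through the active strategy; the value itself is hypothetical:

    import torch

    # e.g. inside on_validation_epoch_end of a LightningModule
    local_count = torch.tensor(123.0, device=self.device)   # hypothetical per-process value
    total_count = self.trainer.strategy.reduce(local_count, reduce_op="sum")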
- teardown()[source]¶
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- validation_step(*args, **kwargs)[source]¶
The actual validation step.
See validation_step() for more details.
- property handles_gradient_accumulation: bool¶
Whether the plugin handles gradient accumulation internally.
- property restore_checkpoint_after_setup: bool¶
Override to delay restoring from checkpoint till after pre-dispatch.
- property root_device: torch.device¶
Return the root device.