
Mixed Precision Training

Mixed precision combines FP32 with lower-precision floating-point formats (such as FP16) to reduce the memory footprint during model training, resulting in improved performance.

Lightning offers mixed precision training for GPUs and CPUs, as well as bfloat16 mixed precision training for TPUs.


In some cases it is important to remain in FP32 for numerical stability, so keep this in mind when using mixed precision.

For example, when running scatter operations during the forward pass (as in torch-points3d), computation must remain in FP32.
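A minimal sketch of how an individual operation can be kept in FP32 inside an autocast region, assuming PyTorch 1.10+ (CPU autocast with bfloat16 is used here so the example runs without a GPU; on GPU you would use torch.autocast("cuda")):

```python
import torch

x = torch.randn(4, 4)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = torch.mm(x, x)  # runs in reduced precision under autocast
    # Temporarily disable autocast for a numerically sensitive op:
    with torch.autocast(device_type="cpu", enabled=False):
        z = torch.mm(x, x)  # stays in full FP32
print(y.dtype, z.dtype)  # torch.bfloat16 torch.float32
```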

FP16 Mixed Precision

In most cases, mixed precision uses FP16. Supported torch operations are automatically run in FP16, saving memory and improving throughput on GPU and TPU accelerators.

Since computation happens in FP16, there is a chance of numerical instability. This is handled internally by a dynamic grad scaler which skips invalid steps and adjusts the scale factor to ensure subsequent steps fall within a finite range. For more information see the autocast docs.
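What the dynamic grad scaler does can be sketched with the raw PyTorch API (the model, optimizer, and data here are illustrative placeholders; the scaler is disabled automatically when CUDA is unavailable so the sketch also runs on CPU):

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

data, target = torch.randn(16, 8), torch.randn(16, 1)
optimizer.zero_grad()
with torch.autocast(device_type="cuda" if use_cuda else "cpu",
                    dtype=torch.float16 if use_cuda else torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(data), target)
scaler.scale(loss).backward()  # scale the loss to avoid FP16 gradient underflow
scaler.step(optimizer)         # unscales grads; skips the step if inf/nan found
scaler.update()                # adjusts the scale factor for the next step
```

Lightning runs this loop for you when precision=16 is set.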


When using TPUs, setting precision=16 will enable bfloat16, which is the only supported precision type on TPUs.

Trainer(gpus=1, precision=16)

BFloat16 Mixed Precision


BFloat16 requires PyTorch 1.10 or later. Currently this requires installing PyTorch Nightly.

BFloat16 is also experimental and may not provide large speedups or memory improvements, but it offers better numerical stability.

Note that on GPUs, the largest benefits require Ampere-based hardware, such as A100s or 3090s.

BFloat16 mixed precision is similar to FP16 mixed precision; however, it maintains more of the dynamic range that FP32 offers. This improves numerical stability compared to FP16 mixed precision. For more information see this TPU performance blog post.
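The dynamic-range difference can be inspected directly with torch.finfo: BFloat16 keeps FP32's 8 exponent bits, so its maximum representable value matches FP32's order of magnitude, while FP16 tops out at 65504 (at the cost of coarser precision, visible in the larger eps):

```python
import torch

# Print max representable value and machine epsilon for each dtype.
for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    print(dtype, info.max, info.eps)
```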

Since BFloat16 is more stable than FP16 during training, we do not need to worry about the gradient scaling or NaN gradient values that come with using FP16 mixed precision.

Trainer(gpus=1, precision="bf16")

It is also possible to use BFloat16 mixed precision on the CPU, relying on MKLDNN under the hood.
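A small sketch of what happens on the CPU, assuming PyTorch 1.10+: under a CPU autocast region with bfloat16, supported layers such as Linear dispatch to reduced-precision (oneDNN/MKLDNN-backed) kernels:

```python
import torch

layer = torch.nn.Linear(32, 32)
x = torch.randn(8, 32)
# CPU autocast region: supported ops run in bfloat16.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = layer(x)
print(out.dtype)  # torch.bfloat16
```

In Lightning, Trainer(precision="bf16") enables this for you on CPU.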


NVIDIA APEX Mixed Precision


We strongly recommend using the native mixed precision above rather than NVIDIA APEX unless you require finer control.

NVIDIA APEX offers additional flexibility in setting mixed precision. This can be useful when you want to try out different precision configurations, such as keeping most of your weights in FP16 as well as running computation in FP16.

Trainer(gpus=1, amp_backend="apex")

Set the NVIDIA optimization level via the trainer. APEX supports levels "O0" through "O3"; "O2", for example, casts the model weights to FP16 while the optimizer keeps FP32 master weights.

Trainer(gpus=1, amp_backend="apex", amp_level="O2")