# Upgrade from 1.8 to 2.0

## Regular User
If | Then | Ref
---|---|---
used `seed_everything_default=None` in `LightningCLI` | set `seed_everything_default=False` |
used `Trainer.reset_train_val_dataloaders()` | call `Trainer.reset_train_dataloader()` and `Trainer.reset_val_dataloader()` separately |
imported `pl.core.lightning` | import `pl.core.module` |
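For import renames like the last row, the simplest fix is usually to import from the package root, which is stable across 1.8 and 2.0. A minimal sketch of the `pl.core.lightning` → `pl.core.module` move (assuming that is the rename your code hits):

```python
# Lightning 1.8 (deprecated module path):
# from pytorch_lightning.core.lightning import LightningModule

# Lightning 2.0 - the module moved to `core.module`; importing from the
# package root avoids depending on the internal layout altogether:
from pytorch_lightning import LightningModule
```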
If | Then | Ref
---|---|---
used Python 3.7 | upgrade to Python 3.8 or higher |
used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher |
used Trainer's flag `gpus` | use `devices` with the same number |
used Trainer's flag `tpu_cores` | use `devices` with the same number |
used Trainer's flag `ipus` | use `devices` with the same number |
used Trainer's flag `num_processes` | use `devices` with the same number |
used Trainer's flag `resume_from_checkpoint` | pass the path to the `Trainer.fit(ckpt_path="...")` method |
used Trainer's flag `auto_select_gpus` | use `devices="auto"` |
called the `pl.tuner.auto_gpu_select.pick_single_gpu` function | use Trainer's flag `devices="auto"` |
called the `pl.tuner.auto_gpu_select.pick_multiple_gpus` function | use Trainer's flag `devices="auto"` |
used Trainer's flag `accumulate_grad_batches` with a scheduling dictionary value | use the `GradientAccumulationScheduler` callback and configure it |
imported profilers from `pl.profiler` | import them from `pl.profilers` |
used `Tuner` as part of `Trainer` in any form | move to a standalone `Tuner` object or use the `BatchSizeFinder` and `LearningRateFinder` callbacks |
used Trainer's flag `auto_scale_batch_size` | use the `BatchSizeFinder` callback |
used Trainer's flag `auto_lr_find` | use the `LearningRateFinder` callback |
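Most of the Trainer-flag rows above collapse into the `accelerator`/`devices` pair, checkpoint resumption moves to `Trainer.fit`, and tuning moves into a standalone `Tuner`. A before/after sketch (`model` is a placeholder `LightningModule`):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.tuner import Tuner

# Lightning 1.x (removed flags):
# trainer = Trainer(gpus=2, resume_from_checkpoint="last.ckpt", auto_lr_find=True)

# Lightning 2.0:
trainer = Trainer(accelerator="gpu", devices=2)
Tuner(trainer).lr_find(model)              # replaces auto_lr_find=True
trainer.fit(model, ckpt_path="last.ckpt")  # replaces resume_from_checkpoint
```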
## Advanced User
If | Then | Ref
---|---|---
imported `pl.callbacks.base` | import `pl.callbacks.callback` |
imported `pl.loops.base` | import `pl.loops.loop` |
imported `pl.utilities.cli` | import `pl.cli` |
imported profiler classes from `pl.profiler.*` | import them from `pl.profilers` |
used `pl.accelerators.GPUAccelerator` | use `pl.accelerators.CUDAAccelerator` |
used `LightningDeepSpeedModule` | use `strategies.DeepSpeedStrategy` |
used the `init_meta_context` context manager from `pl.utilities.meta` | switch to DeepSpeed ZeRO Stage 3 |
used the Lightning Hydra multi-run integration | removed support for it, as it caused issues with processes hanging |
used `pl.utilities.memory.get_gpu_memory_map` | use `pl.accelerators.cuda.get_nvidia_gpu_stats` |
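The import moves above are mechanical; for example, with two names that exist under both the old and the new paths:

```python
# Lightning 1.x (deprecated paths):
# from pytorch_lightning.utilities.cli import LightningCLI
# from pytorch_lightning.profiler import SimpleProfiler

# Lightning 2.0:
from pytorch_lightning.cli import LightningCLI
from pytorch_lightning.profilers import SimpleProfiler
```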
If | Then | Ref
---|---|---
used the `pl.lite` module | switch to `lightning_fabric` |
used Trainer's flag `strategy='dp'` | use DDP with `strategy='ddp'` or DeepSpeed instead |
implemented the `LightningModule.training_epoch_end` hook | port your logic to the `LightningModule.on_train_epoch_end` hook |
implemented the `LightningModule.validation_epoch_end` hook | port your logic to the `LightningModule.on_validation_epoch_end` hook |
implemented the `LightningModule.test_epoch_end` hook | port your logic to the `LightningModule.on_test_epoch_end` hook |
used Trainer's flag `multiple_trainloader_mode` | switch to `CombinedLoader(..., mode=...)` and set the mode directly |
used Trainer's flag `move_metrics_to_cpu` | implement particular offload logic in your custom metric or turn it on in `torchmetrics` |
used Trainer's flag `track_grad_norm` | overwrite the `on_before_optimizer_step` hook and log the gradient norms there yourself |
used Trainer's flag `replace_sampler_ddp` | use `use_distributed_sampler`; the sampler gets created not only for the DDP strategies |
relied on the `on_tpu` argument in `LightningModule.optimizer_step` | switch to manual optimization |
relied on the `using_lbfgs` argument in `LightningModule.optimizer_step` | switch to manual optimization |
were using `nvidia/apex` in any form | switch to PyTorch native mixed precision (`torch.amp`) |
used Trainer's flag `amp_backend` | use PyTorch native mixed precision |
used Trainer's flag `amp_level` | use PyTorch native mixed precision |
used Trainer's flag … | use PyTorch native mixed precision |
used Trainer's attribute `amp_backend` | use PyTorch native mixed precision |
used Trainer's attribute `amp_level` | use PyTorch native mixed precision |
used Trainer's attribute … | use PyTorch native mixed precision |
used the `FairScale` integration | consider using PyTorch's native FSDP implementation, or maintain the FairScale integration in your own project |
used `pl.overrides.fairscale.LightningShardedDataParallel` | use native FSDP instead |
used `pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` | use native FSDP instead |
used `pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` | use native FSDP instead |
used `pl.strategies.fully_sharded.DDPFullyShardedStrategy` | use native FSDP instead |
used `pl.strategies.sharded.DDPShardedStrategy` | use native FSDP instead |
used `pl.strategies.sharded_spawn.DDPSpawnShardedStrategy` | use native FSDP instead |
used the `save_config_overwrite` parameter in `LightningCLI` | pass this option via the `save_config_kwargs` dictionary parameter |
used the `save_config_multifile` parameter in `LightningCLI` | pass this option via the `save_config_kwargs` dictionary parameter |
have customized loops via `Loop.replace()` | implement your training loop with Fabric |
have customized loops via `Loop.run()` | implement your training loop with Fabric |
have customized loops via `Loop.connect()` | implement your training loop with Fabric |
used the Trainer's `fit_loop` property | implement your training loop with Fabric |
used the Trainer's `validate_loop` property | implement your training loop with Fabric |
used the Trainer's `test_loop` property | implement your training loop with Fabric |
used the Trainer's `predict_loop` property | implement your training loop with Fabric |
used the Trainer's loop and batch-fetching classes | they are now marked as protected |
used the `opt_idx` argument in `BaseFinetuning.finetune_function` | use manual optimization |
used the `opt_idx` argument in `Callback.on_before_optimizer_step` | use manual optimization |
used `optimizer_idx` as an optional argument in `LightningModule.training_step` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.on_before_optimizer_step` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.configure_gradient_clipping` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.optimizer_step` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.optimizer_zero_grad` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.lr_scheduler_step` | use manual optimization |
used declaring optimizer frequencies in the dictionary returned from `configure_optimizers` | use manual optimization |
used the `optimizer` argument in `LightningModule.backward` | use manual optimization |
used the `optimizer_idx` argument in `LightningModule.backward` | use manual optimization |
used the `optimizer_idx` argument in `PrecisionPlugin.backward` | use manual optimization |
used the `optimizer_idx` argument in `PrecisionPlugin.optimizer_step` | use manual optimization |
used the `optimizer_idx` argument in `Strategy.backward` | use manual optimization |
used the `optimizer_idx` argument in `Strategy.optimizer_step` | use manual optimization |
used … | use manual optimization |
used Trainer's `optimizer_frequencies` attribute | use manual optimization |
used the `PL_INTER_BATCH_PARALLELISM` environment variable | it was removed |
used the training integration with Horovod | install the standalone package/project |
used the training integration with ColossalAI | install the standalone package/project |
used the `QuantizationAwareTraining` callback | use PyTorch's quantization directly |
had any logic except reducing the DP outputs in the `LightningModule.training_step_end` hook | port it to the `LightningModule.on_train_batch_end` hook |
had any logic except reducing the DP outputs in the `LightningModule.validation_step_end` hook | port it to the `LightningModule.on_validation_batch_end` hook |
had any logic except reducing the DP outputs in the `LightningModule.test_step_end` hook | port it to the `LightningModule.on_test_batch_end` hook |
used `pl.strategies.DDPSpawnStrategy` | switch to the general `DDPStrategy(start_method='spawn')` with the proper start method |
used the automatic addition of a moving average of the `training_step` loss in the progress bar | use `self.log("loss", ..., prog_bar=True)` instead |
relied on the `outputs` argument of the `on_predict_epoch_end` hook | access them via `trainer.predict_loop.predictions` |
need to pass a dictionary to `self.log()` | pass them independently |
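Two of the heaviest migrations in this table are the removal of the `*_epoch_end` hooks and of `optimizer_idx`-based multi-optimizer training. A minimal sketch of both under 2.0, with placeholder networks and losses (not a real training recipe):

```python
import torch
from pytorch_lightning import LightningModule


class TwoOptimizerModel(LightningModule):
    def __init__(self):
        super().__init__()
        # multiple optimizers now require manual optimization
        self.automatic_optimization = False
        self.net_a = torch.nn.Linear(32, 32)
        self.net_b = torch.nn.Linear(32, 32)
        self._losses = []  # collect what `training_epoch_end(outputs)` used to receive

    def training_step(self, batch, batch_idx):
        # replaces `training_step(..., optimizer_idx)`: fetch and step both optimizers
        opt_a, opt_b = self.optimizers()

        loss_a = self.net_a(batch).pow(2).mean()  # placeholder loss
        opt_a.zero_grad()
        self.manual_backward(loss_a)
        opt_a.step()

        loss_b = self.net_b(batch).pow(2).mean()  # placeholder loss
        opt_b.zero_grad()
        self.manual_backward(loss_b)
        opt_b.step()

        self._losses.append(loss_a.detach())

    def on_train_epoch_end(self):
        # replaces `training_epoch_end(outputs)`: there is no `outputs` argument,
        # so the module accumulates what it needs during the epoch
        self.log("epoch_loss", torch.stack(self._losses).mean())
        self._losses.clear()

    def configure_optimizers(self):
        # no `optimizer_idx`, no frequency dictionaries: just return the optimizers
        return (
            torch.optim.Adam(self.net_a.parameters(), lr=1e-3),
            torch.optim.Adam(self.net_b.parameters(), lr=1e-3),
        )
```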
## Developer
If | Then | Ref
---|---|---
derived from `pl.loops.base.Loop` | derive from `pl.loops.loop.Loop` |
derived from `pl.callbacks.base.Callback` | derive from `pl.callbacks.callback.Callback` |
derived from `pl.profiler.base.BaseProfiler` | derive from `pl.profilers.profiler.Profiler` |
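Deriving from the re-exported public base classes keeps custom components working on 2.0; a small sketch for a callback:

```python
# Lightning 1.x (deprecated base module):
# from pytorch_lightning.callbacks.base import Callback

# Lightning 2.0:
from pytorch_lightning.callbacks import Callback


class PrintingCallback(Callback):
    def on_train_start(self, trainer, pl_module):
        print("Training is starting")
```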
If | Then | Ref
---|---|---
passed the `pl_module` argument to distributed module wrappers | pass the (required) `forward_module` argument |
used `pl.overrides.base.unwrap_lightning_module` | use DDP or DeepSpeed instead |
used `pl.overrides.fairscale.unwrap_lightning_module_sharded` | use DDP or DeepSpeed instead |
called … | use DDP or DeepSpeed instead |
used or derived from `pl.overrides.distributed.LightningDistributedModule` | use DDP instead |
used the `pl.plugins.ApexMixedPrecisionPlugin` plugin | use PyTorch native mixed precision |
used the `pl.plugins.NativeMixedPrecisionPlugin` plugin | switch to the `pl.plugins.MixedPrecisionPlugin` plugin |
used the `FitLoop.min_steps` setter | implement your training loop with Fabric |
used the `FitLoop.max_steps` setter | implement your training loop with Fabric |
used the `Trainer.data_parallel` attribute | check the same using `isinstance(trainer.strategy, ParallelStrategy)` |
used any function from `pl.utilities.xla_device` | switch to `pl.accelerators.TPUAccelerator.is_available()` |
imported functions from `pl.utilities.device_parser.*` | import them from `lightning_fabric.utilities.device_parser.*` |
imported functions from `pl.utilities.cloud_io.*` | import them from `lightning_fabric.utilities.cloud_io.*` |
imported functions from `pl.utilities.apply_func.*` | import them from `lightning_utilities.core.apply_func.*` |
used any code from `pl.core.mixins` | use the base classes |
used any code from `pl.utilities.distributed` | rely on PyTorch's native functions |
used any code from `pl.utilities.data` | it was removed |
used any code from `pl.utilities.optimizer` | it was removed |
used any code from `pl.utilities.seed` | it was removed |
were using truncated backpropagation through time (TBPTT) with `LightningModule.truncated_bptt_steps` | use manual optimization |
were using truncated backpropagation through time (TBPTT) with `LightningModule.tbptt_split_batch` | use manual optimization |
were using truncated backpropagation through time (TBPTT) and passing `hiddens` to `training_step` | use manual optimization |
used `pl.utilities.finite_checks.print_nan_gradients` | it was removed |
used `pl.utilities.finite_checks.detect_nan_parameters` | it was removed |
used `pl.utilities.parsing.flatten_dict` | it was removed |
used `pl.utilities.metrics.metrics_to_scalars` | it was removed |
used `pl.utilities.memory.get_model_size_mb` | it was removed |
used … | it was removed |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
used … | switch to using … |
derived from `pl.utilities.distributed.AllGatherGrad` | switch to PyTorch's native equivalent |
used the `agg_key_funcs` or `agg_default_func` parameters of `Logger` | customize your logger |
derived from a mixin's method … | rely on … |
used … | switch to … |
used … | implement your own logic with Fabric |
used or derived from the public … | it is now protected |
used the … | use manual optimization |
used the … | use manual optimization |
used the … | use manual optimization |
used … | use … |
used … | rely on the Trainer's `precision` attribute |
used … | pass the … instead |
relied on … | pass dataloaders directly |
relied on … | pass dataloaders directly |
accessed … | rely on the Trainer's internal loop properties |
accessed … | rely on the Trainer's internal loop properties |
accessed … | rely on the Trainer's internal loop properties |
accessed … | rely on the Trainer's internal loop properties |
used … | rely on the precision plugin |
used … | it was removed |
used … | it was removed |
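Since built-in TBPTT support is gone, the sequence splitting and per-chunk stepping that `truncated_bptt_steps` used to do for you move into `training_step` under manual optimization. A rough sketch, assuming `(batch, time, features)` inputs and an LSTM (sizes and chunk length are placeholders):

```python
import torch
from pytorch_lightning import LightningModule


class TBPTTModel(LightningModule):
    def __init__(self, chunk_len: int = 16):
        super().__init__()
        self.automatic_optimization = False  # replaces truncated_bptt_steps
        self.chunk_len = chunk_len
        self.rnn = torch.nn.LSTM(input_size=8, hidden_size=8, batch_first=True)
        self.loss_fn = torch.nn.MSELoss()

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch  # (B, T, 8) tensors; shapes assumed for this sketch
        hidden = None
        for t in range(0, x.size(1), self.chunk_len):
            x_t = x[:, t : t + self.chunk_len]
            y_t = y[:, t : t + self.chunk_len]
            out, hidden = self.rnn(x_t, hidden)
            loss = self.loss_fn(out, y_t)

            opt.zero_grad()
            self.manual_backward(loss)
            opt.step()

            # detach so the next chunk does not backpropagate through this one,
            # which is what the old `hiddens` mechanism did for you
            hidden = tuple(h.detach() for h in hidden)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```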