Upgrade from 1.4 to 2.0

Regular User

| If | Then | Ref |
|---|---|---|
| relied on the | rely on either | |
| accessed | switch to manual optimization (see the sketch after this table) | |
| called | rely on | |
| passed the | pass the | |
| passed the | now pass | |
| passed the | now pass the | |
| relied on the | use | |
| implemented | now update the signature to include | |
| relied on | now import the separate package | |
| accessed | now access | |

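Several rows in these tables point to manual optimization as the migration target. The exact attributes and hooks they refer to are not recoverable from the table, but the general pattern is the same: disable automatic optimization and drive the optimizer yourself inside ``training_step``. A minimal sketch, with an illustrative model, loss, and optimizer:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class ManualOptimModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)
        # turn off Lightning's automatic optimization
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()              # the optimizer configured below
        opt.zero_grad()
        x, y = batch
        loss = nn.functional.cross_entropy(self.layer(x), y)
        self.manual_backward(loss)           # use this instead of loss.backward()
        opt.step()
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```
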
| If | Then | Ref |
|---|---|---|
| used | use | |
| used | use | |
| passed | remove them since these parameters are now passed from the | |
| passed | remove them since these parameters are now passed from the | |
| didn’t provide a | pass | |
| used | change the argument to | |
| used Trainer’s flag | pass | |
| used Trainer’s flag | use | |

| If | Then | Ref |
|---|---|---|
| used Trainer’s flag | set | |
| used Trainer’s flag | pass a | |
| used Trainer’s flag | set | |
| used Trainer’s flag | add the | |
| used Trainer’s flag | pass it to the logger init if it is supported by the particular logger | |
| used Trainer’s flag | turn off the limit by passing | |
| used Trainer’s flag | pass the same path to the fit function instead (see the example after this table) | |
| used Trainer’s flag | use the | |
| used Trainer’s flag | set the | |
| called | use the utility function | |
| used the | use the utility function | |
| relied on the | use | |
| relied on the | use | |
| relied on the | use | |
| relied on the | use | |
| implemented the | implement the | |
| relied on the | use another logger like | |
| used the basic progress bar | use the | |
| were using | use | |
| were using | use | |

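One of the flag migrations above concerns resuming training from a checkpoint. Assuming the flag in question is the removed ``resume_from_checkpoint`` Trainer argument (an assumption, since the flag name is not preserved in the table), the path now goes to ``fit()`` instead. ``MyLightningModule`` below is a hypothetical module name:

```python
import pytorch_lightning as pl

model = MyLightningModule()          # hypothetical LightningModule
trainer = pl.Trainer(max_epochs=10)

# 1.x (removed): pl.Trainer(resume_from_checkpoint="some/path.ckpt")
# 2.0: pass the checkpoint path to fit() instead
trainer.fit(model, ckpt_path="some/path.ckpt")
```
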
| If | Then | Ref |
|---|---|---|
| have wrapped your loggers with | directly pass a list of loggers to the Trainer and access the list via the (see the example after this table) | |
| used | access | |
| used | upgrade to the latest API | |
| used | use | |
| used | use | |
| used | switch to the general-purpose hook | |
| used | switch to the general-purpose hook | |
| used Trainer’s flag | use directly | |
| used Trainer’s property | | |

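For the logger-wrapper row above, the removed wrapper class name is not preserved in the table, but the 2.0 pattern is to hand the Trainer a plain list of loggers and read it back from the ``trainer.loggers`` property. A minimal sketch; the specific logger choices are illustrative:

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

# 2.0: pass a plain list of loggers instead of wrapping them
trainer = pl.Trainer(logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")])

# read the list back from the Trainer property
for logger in trainer.loggers:
    print(type(logger).__name__)
```
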
| If | Then | Ref |
|---|---|---|
| used | set | |
| used | call | |
| imported | import | |

| If | Then | Ref |
|---|---|---|
| used Python 3.7 | upgrade to Python 3.8 or higher | |
| used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher | |
| used Trainer’s flag | use | |
| used Trainer’s flag | use | |
| used Trainer’s flag | use | |
| used Trainer’s flag | use | |
| used Trainer’s flag | pass the path to the | |
| used Trainer’s flag | use | |
| called the | use Trainer’s flag `devices="auto"` (see the example after this table) | |
| called the | use Trainer’s flag `devices="auto"` | |
| used Trainer’s flag | use the | |
| imported profilers from | import from | |
| used | move to a standalone | |
| used Trainer’s flag | use | |
| used Trainer’s flag | use callbacks | |

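Two rows above point at the ``devices="auto"`` Trainer flag. The removed helpers they refer to are not preserved in the table, but the replacement is simply to let the Trainer detect hardware itself:

```python
import pytorch_lightning as pl

# 2.0: let the Trainer pick the accelerator and device count
trainer = pl.Trainer(accelerator="auto", devices="auto")
```
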
Advanced User

| If | Then | Ref |
|---|---|---|
| called | now call | |
| accessed the | now access the | |
| used | now use the | |
| used | now use | |
| used | now use | |
| relied on | now use | |
| selected the i-th GPU with | now this will set the number of GPUs, just like passing (see the example after this table) | |

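The last row describes a semantic change in how a device index is interpreted. The exact 1.x flag is not preserved here (presumably the old GPU-selection string behaviour), so take the sketch below only as an illustration of the 2.0 semantics: an integer means a device count, and a specific GPU is selected with a list of indices.

```python
import pytorch_lightning as pl

# 2.0: an integer (or numeric string) means "use this many devices"
trainer = pl.Trainer(accelerator="gpu", devices=3)      # three GPUs

# to pick a specific GPU (e.g. the card with index 3), pass a list of indices
trainer = pl.Trainer(accelerator="gpu", devices=[3])
```
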
| If | Then | Ref |
|---|---|---|
| used | use | |
| used the argument | use | |
| returned values from | call | |
| imported | import | |
| relied on | manage data lifecycle in custom methods (see the sketch after this table) | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| relied on | manage data lifecycle in custom methods | |
| used | use | |
| used | use the condition | |

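Many rows above fold removed data-related hooks into "manage data lifecycle in custom methods". Which hooks each row refers to is not recoverable from the table, so the sketch below only shows where such logic conventionally lives in 2.0: a ``LightningDataModule`` with ``prepare_data``, ``setup``, and the dataloader methods. The random dataset is purely illustrative.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class RandomDataModule(pl.LightningDataModule):
    def prepare_data(self):
        # download or write data to disk here; runs once per node
        pass

    def setup(self, stage: str):
        # build/split datasets here; runs on every process
        self.train_set = TensorDataset(
            torch.randn(256, 32), torch.randint(0, 2, (256,))
        )

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)
```
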
| If | Then | Ref |
|---|---|---|
| passed | set it as a property of | |
| used | specify your | |
| used distributed training attributes | use the same methods in | |
| called | use the utility function | |
| used | remove it, as parameter tying now happens automatically without the need to implement your own logic | |
| relied on | use | |
| used | rely on the logic in | |
| used the Accelerator collective API | call | |
| used | rely on automatic parameter tying with | |
| used | access them using | |
| implemented | switch to | |

| If | Then | Ref |
|---|---|---|
| used | switch to | |
| used | now use | |
| used any | rename them to | |
| used | rely on protected | |
| used | rely on protected | |
| used | switch to the built-in https://github.com/pytorch/torchdistx support | |
| have implemented | move your implementation to | |
| have implemented the | move your implementation to | |
| have implemented the | move your implementation to | |
| have implemented the | move your implementation to | |
| have implemented the | move your implementation to | |
| have implemented the | move your implementation to | |
| used | use | |
| used | use | |
| used Trainer’s attribute | it was replaced by | |
| used Trainer’s attribute | it was replaced by | |
| used Trainer’s attribute | use | |
| used Trainer’s attribute | use | |
| used Trainer’s attribute | use | |
| used | switch to using | |
| used | it was removed | |
| logged with | switch to | |
| used | log metrics explicitly (see the sketch after this table) | |
| used | log metrics explicitly | |
| used | rely on generic read-only property | |
| used | rely on generic read-only property | |
| used | rely on generic read-only property | |
| relied on the returned dictionary from | call directly | |

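Two rows above replace implicitly collected metrics with explicit logging. The removed mechanisms are not preserved in the table, but explicit logging in a ``LightningModule`` is done with ``self.log``; the model below is illustrative:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.layer(x), y)
        # log explicitly instead of relying on implicitly returned/collected values
        self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```
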
| If | Then | Ref |
|---|---|---|
| imported | import | |
| imported | import | |
| imported | import | |
| imported profiler classes from | import (see the example after this table) | |
| used | use | |
| used | use | |
| used the | switch to | |
| used the Lightning Hydra multi-run integration | support for it was removed, as it caused issues with processes hanging | |
| used | use | |

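The profiler-import row above does not preserve the old and new module paths. Assuming it refers to the rename of the ``pytorch_lightning.profiler`` module to the plural ``pytorch_lightning.profilers`` (an assumption based on recent Lightning releases), the change looks like this:

```python
# 1.x (assumed old location):
# from pytorch_lightning.profiler import SimpleProfiler

# 2.0: note the plural module name
from pytorch_lightning.profilers import SimpleProfiler

profiler = SimpleProfiler(dirpath=".", filename="perf_logs")
```
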
| If | Then | Ref |
|---|---|---|
| used the | switch to | |
| used Trainer’s flag | use DDP with | |
| implemented | port your logic to | |
| implemented | port your logic to | |
| implemented | port your logic to | |
| used Trainer’s flag | switch to | |
| used Trainer’s flag | implement particular offload logic in your custom metric or turn it on in | |
| used Trainer’s flag | overwrite | |
| used Trainer’s flag | use | |
| relied on the | switch to manual optimization | |
| relied on the | switch to manual optimization | |
| were using | switch to PyTorch native mixed precision | |
| used Trainer’s flag | use PyTorch native mixed precision | |
| used Trainer’s flag | use PyTorch native mixed precision | |
| used Trainer’s flag | use PyTorch native mixed precision | |
| used Trainer’s attribute | use PyTorch native mixed precision | |
| used Trainer’s attribute | use PyTorch native mixed precision | |
| used Trainer’s attribute | use PyTorch native mixed precision | |
| used the | consider using PyTorch’s native FSDP implementation or outsource the implementation into your own project | |
| used | use native FSDP instead | |
| used | use native FSDP instead | |
| used | use native FSDP instead | |
| used | use native FSDP instead | |
| used | use native FSDP instead | |
| used | use native FSDP instead | |
| used | pass this option via the dictionary of | |
| used | pass this option via the dictionary of | |
| have customized loops | implement your training loop with Fabric (see the sketch after this table) | |
| have customized loops | implement your training loop with Fabric | |
| have customized loops | implement your training loop with Fabric | |
| used the Trainer’s | implement your training loop with Fabric | |
| used the Trainer’s | implement your training loop with Fabric | |
| used the Trainer’s | implement your training loop with Fabric | |
| used the Trainer’s | implement your training loop with Fabric | |
| used the | it is now marked as protected | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| declared optimizer frequencies in the dictionary returned from | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used | use manual optimization | |
| used Trainer’s | use manual optimization | |
| used | | |
| used training integration with Horovod | install the standalone package/project | |
| used training integration with ColossalAI | install the standalone package/project | |
| used | use Torch’s Quantization directly | |
| had any logic except reducing the DP outputs in | port it to | |
| had any logic except reducing the DP outputs in | port it to | |
| had any logic except reducing the DP outputs in | port it to | |
| used | switch to general | |
| used the automatic addition of a moving average of the | use | |
| relied on the | access them via | |
| needed to pass a dictionary to | pass them independently. | |

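Several rows above retire custom loops and Trainer internals in favour of writing the training loop yourself with Fabric. A minimal sketch of such a loop, assuming the unified ``lightning`` package is installed; the model, data, and hyperparameters are illustrative:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from lightning.fabric import Fabric

fabric = Fabric(accelerator="auto", devices=1)
fabric.launch()

model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Fabric moves the model/optimizer to the right device and wraps them
model, optimizer = fabric.setup(model, optimizer)

dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
dataloader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=32))

model.train()
for x, y in dataloader:
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    fabric.backward(loss)   # replaces loss.backward()
    optimizer.step()
```
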
Developer

| If | Then | Ref |
|---|---|---|
| called | just call | |
| used | now rely on the corresponding utility functions in | |
| assigned the | now assign the equivalent | |
| accessed | the property has been removed | |

| If | Then | Ref |
|---|---|---|
| called | switch to | |
| called | switch to | |
| used | it is now set as protected and discouraged from direct use | |
| used | it is now set as protected and discouraged from direct use | |
| used | change it to | |
| called | update it | |

| If | Then | Ref |
|---|---|---|
| Removed the legacy | | |
| used the generic method | switch to a specific one depending on your purpose | |
| used | import it from | |
| used | import it from | |
| used | import it from | |
| used | import it from | |
| used | import it from | |
| used | import it from | |
| used | import it from | |
| used | switch it to | |
| derived it from | use the Trainer base class | |
| used base class | switch to use | |
| set the distributed backend via the environment variable | use | |
| used | switch to | |
| used | switch to | |
| used | use | |
| used | rely on Torch native AMP (see the sketch after this table) | |
| used | rely on Torch native AMP | |
| used Trainer’s attribute | rely on loop constructor | |
| used Trainer’s attribute | it was removed | |
| derived from | rely on | |
| derived from | rely on methods from | |
| used Trainer’s attribute | switch to the | |
| used | it was set as a protected method | |
| used Profiler’s attribute | it was removed | |
| used Profiler’s attribute | it was removed | |
| used the | | |
| used | change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: | |
| used | change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: | |

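Two rows above drop Lightning-internal AMP plumbing in favour of Torch native AMP. A minimal sketch of the native pattern with ``torch.autocast`` and ``GradScaler``; the model and data are illustrative, and a CUDA device is assumed:

```python
import torch
from torch import nn

model = nn.Linear(32, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

# illustrative random batches placed on the GPU
batches = [
    (torch.randn(64, 32, device="cuda"), torch.randint(0, 2, (64,), device="cuda"))
    for _ in range(10)
]

for x, y in batches:
    optimizer.zero_grad()
    # run the forward pass and loss in mixed precision
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()   # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```
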
| If | Then | Ref |
|---|---|---|
| derived from | derive from | |
| derived from | derive from | |
| derived from | derive from | |

| If | Then | Ref |
|---|---|---|
| passed the | pass the (required) | |
| used | use DDP or DeepSpeed instead | |
| used | use DDP or DeepSpeed instead | |
| called | use DDP or DeepSpeed instead | |
| used or derived from | use DDP instead | |
| used the ``pl.plugins.ApexMixedPrecisionPlugin`` plugin | use PyTorch native mixed precision | |
| used the | switch to the | |
| used the | implement your training loop with Fabric | |
| used the | implement your training loop with Fabric | |
| used the | check the same using | |
| used any function from | switch to | |
| imported functions from | import them from | |
| imported functions from | import them from | |
| imported functions from | import them from | |
| used any code from | use the base classes | |
| used any code from | rely on PyTorch’s native functions | |
| used any code from | it was removed | |
| used any code from | it was removed | |
| used any code from | it was removed | |
| were using truncated backpropagation through time (TBPTT) with | use manual optimization (see the sketch after this table) | |
| were using truncated backpropagation through time (TBPTT) with | use manual optimization | |
| were using truncated backpropagation through time (TBPTT) and passing | use manual optimization | |
| used | it was removed | |
| used | it was removed | |
| used | it was removed | |
| used | it was removed | |
| used | it was removed | |
| used | it was removed | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| used | switch to using | |
| derived from | switch to the PyTorch native equivalent | |
| used | customize your logger | |
| derived from mixin’s method | rely on | |
| used | switch to | |
| used | implement your own logic with Fabric | |
| used or derived from public | it is now set as protected | |
| used the | use manual optimization | |
| used the | use manual optimization | |
| used the | use manual optimization | |
| used | use | |
| used | rely on the Trainer precision attribute | |
| used | pass the | |
| relied on | pass dataloaders directly | |
| relied on | pass dataloaders directly | |
| accessed | rely on Trainer internal loops’ properties | |
| accessed | rely on Trainer internal loops’ properties | |
| accessed | rely on Trainer internal loops’ properties | |
| accessed | rely on Trainer internal loops’ properties | |
| used | rely on the precision plugin | |
| used | it was removed | |
| used | it was removed | |

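The TBPTT rows above fold the removed built-in support into manual optimization. The exact removed hooks and flags are not preserved here, so the sketch below only illustrates the general pattern: chunk the sequence, step the optimizer per chunk, and detach the hidden state between chunks. The RNN, loss, and tensor shapes are illustrative.

```python
import torch
from torch import nn
import pytorch_lightning as pl


class TBPTTModel(pl.LightningModule):
    def __init__(self, chunk_len=16):
        super().__init__()
        self.automatic_optimization = False   # TBPTT is now driven manually
        self.chunk_len = chunk_len
        self.rnn = nn.GRU(input_size=8, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch                          # x: (B, T, 8), y: (B, T, 1)
        opt = self.optimizers()
        hidden = None
        for start in range(0, x.size(1), self.chunk_len):
            x_chunk = x[:, start:start + self.chunk_len]
            y_chunk = y[:, start:start + self.chunk_len]
            out, hidden = self.rnn(x_chunk, hidden)
            loss = nn.functional.mse_loss(self.head(out), y_chunk)
            opt.zero_grad()
            self.manual_backward(loss)
            opt.step()
            hidden = hidden.detach()          # truncate the graph between chunks
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```
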