Lit newbie with first post…
Working on a method to assess the quality of self-supervised learning using a synthetic dataset generator. Since I'm playing God, I know the target labels, so I want to supplement reconstruction-loss metrics by measuring label accuracy with a classifier built on the pretrained encoder.
So, my initial plan is: train the AE encoder/decoder as normal, then train a classifier on top of the trained encoder.
- Does this make sense? Is there prior work that does the same?
- What's the best way to structure this in Lit? Are there examples of multi-phase training in Lit?
- Anyone interested in collaborating?
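For concreteness, here's roughly what I have in mind, sketched in plain PyTorch (in Lit I'd expect each phase to map to its own LightningModule and `Trainer.fit` call, but I'm not sure that's the idiomatic way, hence the question). All shapes, layer sizes, and the choice to freeze the encoder in phase 2 are just illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Phase 1: train the autoencoder on reconstruction loss ---
encoder = nn.Sequential(nn.Linear(16, 4), nn.ReLU())  # toy encoder
decoder = nn.Linear(4, 16)                            # toy decoder
x = torch.randn(64, 16)                               # stand-in for synthetic data
opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters()], lr=1e-2
)
for _ in range(50):
    opt.zero_grad()
    recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    recon_loss.backward()
    opt.step()

# --- Phase 2: freeze the encoder, train a classifier on known labels ---
for p in encoder.parameters():
    p.requires_grad_(False)
y = torch.randint(0, 3, (64,))   # the generator's ground-truth labels
clf = nn.Linear(4, 3)
opt2 = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(50):
    opt2.zero_grad()
    clf_loss = nn.functional.cross_entropy(clf(encoder(x)), y)
    clf_loss.backward()
    opt2.step()

# Label accuracy of the probe is the quality metric I'm after
acc = (clf(encoder(x)).argmax(dim=1) == y).float().mean()
```

The idea is that phase-2 accuracy tells me how linearly separable the labels are in the learned latent space, independent of reconstruction loss.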