I'm a DL super newbie trying to use a model stored in a GitHub repository. Trying to reproduce its tutorial ended with the following error:
File "/home/user1/Project_Uncertainty/BayesianUNet/utils/metrics.py", line 44, in accuracy_coeff
    return torchmetrics.functional.accuracy(preds = preds,
TypeError: accuracy() got an unexpected keyword argument 'mdmc_average'
Here is the relevant part of metrics.py:
import torchmetrics

def accuracy_coeff(preds, target, num_classes):
    return torchmetrics.functional.accuracy(preds=preds,
                                            target=target,
                                            average='macro',
                                            mdmc_average='global',
                                            threshold=0.5,
                                            top_k=None,
                                            subset_accuracy=False,
                                            num_classes=num_classes,
                                            multiclass=None,
                                            ignore_index=None)
I discovered that the mdmc_average option no longer exists; the current signature is:
torchmetrics.functional.classification.accuracy(preds, target, task, threshold=0.5, num_classes=None, num_labels=None, average='micro', multidim_average='global', top_k=1, ignore_index=None, validate_args=True)
Are mdmc_average and multidim_average equivalent?
Is there anything else I need to do to make sure that the new accuracy function gives me the same result as the old call defined in metrics.py? The subset_accuracy option has disappeared in the new API, while validate_args has appeared.
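For context, this is a sketch of how I think the call might be migrated (assuming task='multiclass' is the right choice for my segmentation labels and that multidim_average='global' replaces mdmc_average='global'; I have not verified that it gives identical numbers):

import torch
import torchmetrics

def accuracy_coeff_new(preds, target, num_classes):
    # Tentative migration to the torchmetrics >= 0.11 API: task must now be
    # given explicitly, and multidim_average is my guess at the replacement
    # for the old mdmc_average argument.
    return torchmetrics.functional.classification.accuracy(
        preds,
        target,
        task="multiclass",
        num_classes=num_classes,
        average="macro",            # same as the old call
        multidim_average="global",  # assumed counterpart of mdmc_average="global"
        top_k=1,                    # old call used top_k=None; 1 is the new default
        ignore_index=None,
    )

# Toy check: (batch, classes, H, W) logits against (batch, H, W) integer labels.
preds = torch.randn(2, 3, 4, 4)
target = torch.randint(0, 3, (2, 4, 4))
print(accuracy_coeff_new(preds, target, num_classes=3))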