Welcome to TorchMetrics
TorchMetrics is a collection of 100+ PyTorch metric implementations and an easy-to-use API for creating custom metrics. It offers:
- A standardized interface to increase reproducibility
- Reduced boilerplate
- Compatibility with distributed training
- Rigorous testing
- Automatic accumulation over batches
- Automatic synchronization between multiple devices
You can use TorchMetrics in any PyTorch model, or within PyTorch Lightning to enjoy the following additional benefits:
- Your data will always be placed on the same device as your metrics
- You can log Metric objects directly in Lightning to reduce even more boilerplate
Install TorchMetrics
For pip users:
pip install torchmetrics
Or directly from conda:
conda install -c conda-forge torchmetrics
- Quick Start
- All TorchMetrics
- TorchMetrics’ gallery
- Audio domain
- Image domain
- Text domain
- Structure Overview
- Metrics and devices
- Metrics and memory management
- Saving and loading metrics
- Metrics in Distributed Data Parallel (DDP) mode
- Metrics and 16-bit precision
- Metric Arithmetic
- MetricCollection
- Metric wrappers
- Module vs Functional Metrics
- Metrics and differentiability
- Metrics and hyperparameter optimization
- Advanced metric settings
- Plotting
- Implementing a Metric
- TorchMetrics in PyTorch Lightning
Audio
- Complex Scale-Invariant Signal-to-Noise Ratio (C-SI-SNR)
- Deep Noise Suppression Mean Opinion Score (DNSMOS)
- Perceptual Evaluation of Speech Quality (PESQ)
- Permutation Invariant Training (PIT)
- Scale-Invariant Signal-to-Distortion Ratio (SI-SDR)
- Scale-Invariant Signal-to-Noise Ratio (SI-SNR)
- Short-Time Objective Intelligibility (STOI)
- Signal to Distortion Ratio (SDR)
- Signal-to-Noise Ratio (SNR)
- Source Aggregated Signal-to-Distortion Ratio (SA-SDR)
- Speech-to-Reverberation Modulation Energy Ratio (SRMR)
Classification
- Accuracy
- AUROC
- Average Precision
- Calibration Error
- Cohen Kappa
- Confusion Matrix
- Coverage Error
- Dice
- Exact Match
- F-1 Score
- F-Beta Score
- Group Fairness
- Hamming Distance
- Hinge Loss
- Jaccard Index
- Label Ranking Average Precision
- Label Ranking Loss
- Matthews Correlation Coefficient
- Precision
- Precision At Fixed Recall
- Precision Recall Curve
- Recall
- Recall At Fixed Precision
- ROC
- Sensitivity At Specificity
- Specificity
- Specificity At Sensitivity
- Stat Scores
Image
- Error Relative Global Dim. Synthesis (ERGAS)
- Frechet Inception Distance (FID)
- Image Gradients
- Inception Score
- Kernel Inception Distance
- Learned Perceptual Image Patch Similarity (LPIPS)
- Memorization-Informed Frechet Inception Distance (MiFID)
- Multi-Scale SSIM
- Peak Signal-to-Noise Ratio (PSNR)
- Peak Signal To Noise Ratio With Blocked Effect
- Perceptual Path Length (PPL)
- Quality with No Reference
- Relative Average Spectral Error (RASE)
- Root Mean Squared Error Using Sliding Window
- Spatial Correlation Coefficient (SCC)
- Spatial Distortion Index
- Spectral Angle Mapper
- Spectral Distortion Index
- Structural Similarity Index Measure (SSIM)
- Total Variation (TV)
- Universal Image Quality Index
- Visual Information Fidelity (VIF)
Regression
- Concordance Corr. Coef.
- Cosine Similarity
- Critical Success Index (CSI)
- Explained Variance
- Kendall Rank Corr. Coef.
- KL Divergence
- Log Cosh Error
- Mean Absolute Error (MAE)
- Mean Absolute Percentage Error (MAPE)
- Mean Squared Error (MSE)
- Mean Squared Log Error (MSLE)
- Minkowski Distance
- Pearson Corr. Coef.
- R2 Score
- Relative Squared Error (RSE)
- Spearman Corr. Coef.
- Symmetric Mean Absolute Percentage Error (SMAPE)
- Tweedie Deviance Score
- Weighted MAPE
API reference
- torchmetrics.Metric
  - Metric
  - Metric.add_state()
  - Metric.clone()
  - Metric.compute()
  - Metric.double()
  - Metric.float()
  - Metric.forward()
  - Metric.half()
  - Metric.persistent()
  - Metric.plot()
  - Metric.reset()
  - Metric.set_dtype()
  - Metric.state_dict()
  - Metric.sync()
  - Metric.sync_context()
  - Metric.type()
  - Metric.unsync()
  - Metric.update()
  - Metric.device
  - Metric.dtype
  - Metric.metric_state
  - Metric.update_called
  - Metric.update_count
- torchmetrics.utilities