The Frechet Inception Distance score (FID) is a metric that computes the distance between feature vectors of real and generated images.
FID measures how similar two sets of images are in terms of statistics over computer-vision features of the raw images, where the features are computed with the Inception v3 image-classification model. A lower score means the two sets of images are more similar, i.e. their statistics are more alike; in the best case the FID is 0.0, indicating the two sets are identical.
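Concretely, FID is the Frechet distance between two Gaussians fitted to the two sets of feature vectors: FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)). The following is only a minimal numpy illustration of that formula, assuming the means and covariances have already been computed; it is not pytorch-fid's actual implementation (which uses scipy's matrix square root):

```python
import numpy as np

def _sqrtm_psd(mat):
    # Matrix square root of a symmetric positive semi-definite matrix,
    # via eigendecomposition (eigenvalues clipped at 0 for stability).
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
    # Tr((S1 S2)^{1/2}) is computed through the symmetric equivalent
    # (S1^{1/2} S2 S1^{1/2})^{1/2}, so only PSD square roots are needed.
    s1_half = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))

# Identical statistics give a score of (numerically) zero
print(frechet_distance(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))
```

In pytorch-fid, mu and sigma are the mean and covariance of Inception v3 activations over each image folder; the sketch above only shows the final distance computation.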
FID scores are used to evaluate the quality of images produced by generative adversarial networks, and lower scores correlate strongly with higher-quality images.
In this tutorial, you will learn how to evaluate generated images with FID.
https://github.com/mseitzer/pytorch-fid
First, install it: pip install pytorch-fid
Usage
To compute the FID score between two datasets, where images of each dataset are contained in an individual folder:
python -m pytorch_fid path/to/dataset1 path/to/dataset2
To run the evaluation on GPU, use the flag --device cuda:N, where N is the index of the GPU to use. (In the version used here, the flag is --gpu 0 instead.)
For example: python -m pytorch_fid groundtruth input --gpu 0
Using different layers for feature maps
In contrast to the official implementation, you can choose to use a different feature layer of the Inception network instead of the default pool3 layer. As the lower-layer features still have spatial extent, the features are first global-average-pooled to a vector before estimating the mean and covariance.
This might be useful if the datasets you want to compare have fewer than the 2048 images otherwise required. Note that this changes the magnitude of the FID score, so scores cannot be compared against scores calculated at another dimensionality. The resulting scores might also no longer correlate with visual quality.
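The global average pooling step described above can be illustrated with a small numpy sketch (the shapes here are hypothetical, chosen only to match a lower Inception layer with 192 channels; this is not pytorch-fid's actual code):

```python
import numpy as np

# Hypothetical batch of lower-layer feature maps:
# 4 images, 192 channels, 8x8 spatial extent
feature_maps = np.random.rand(4, 192, 8, 8)

# Global average pooling: average out the two spatial dimensions,
# leaving one 192-dimensional feature vector per image
vectors = feature_maps.mean(axis=(2, 3))
print(vectors.shape)  # (4, 192)
```

The mean and covariance for the FID computation are then estimated from these pooled vectors, exactly as with the default 2048-dimensional pool3 features.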
You can select the dimensionality of features to use with the flag --dims N, where N is the dimensionality of features. The choices are:
- 64: first max pooling features
- 192: second max pooling features
- 768: pre-aux classifier features
- 2048: final average pooling features (this is the default)
For example, in this project, run: python -m pytorch_fid datasets/colorization/sidd/val/groundtruth datasets/colorization/sidd/val/input --gpu 0
Finally, you get the score: