Published: 2023-07-09 11:30
Contents
17-ICCV-Sampling Matters in Deep Embedding Learning
Preliminaries
contrastive loss
triplet loss
hard negative mining
semi-hard negative mining
Distance weighted sampling
Margin based loss
Relationship to isotonic regression
I suspect the main contribution of this paper is actually the loss function proposed later (which merges contrastive and triplet ideas), rather than the sampling strategy presented first; this "uniform" sampling strategy almost contradicts the hard (non-trivial) example mining used by nearly every other paper.
Positive pairs are pulled as close together as possible; negative pairs are kept apart by a fixed margin α.
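Written out (using the paper's notation as I recall it: D_ij is the pairwise distance, y_ij = 1 for a positive pair and 0 otherwise, α the fixed margin):

$$ \ell^{\text{contrast}}(i, j) := y_{ij}\, D_{ij}^{2} + (1 - y_{ij})\, \big[\alpha - D_{ij}\big]_{+}^{2} $$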
Drawback: visually diverse classes are embedded into the same small space as visually similar ones; the embedding space does not allow for distortions.
loss + (hard negative) sampling strategy → model collapse
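This refers to the triplet loss combined with hard negative mining; written with plain (non-squared) distances, as I recall the paper's formulation:

$$ \ell^{\text{triplet}}(a, p, n) := \big[D_{ap} - D_{an} + \alpha\big]_{+} $$

Roughly, the gradient with respect to the negative points along the unit vector (f(a) − f(n)) / D_an; for hard negatives D_an is tiny, so this direction is dominated by noise and training degenerates, which is the collapse referred to above.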
online selection: one triplet is sampled for every (a, p) pair
offline selection: a batch has 1/3 of its images as anchors, positives, and negatives respectively
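For concreteness, a minimal numpy sketch of the FaceNet-style semi-hard rule behind online selection (the function, the default margin value, and the fallback to the hardest negative are my own choices, not from this paper):

import numpy as np

def semi_hard_negative(dists_to_anchor, d_ap, labels, anchor_label, margin=0.2):
    # Semi-hard rule: the negative is farther from the anchor than the positive,
    # but still violates the margin, i.e. d_ap < d_an < d_ap + margin.
    neg_idx = np.where(labels != anchor_label)[0]
    d_an = dists_to_anchor[neg_idx]
    semi = (d_an > d_ap) & (d_an < d_ap + margin)
    if semi.any():
        return int(np.random.choice(neg_idx[semi]))  # random semi-hard negative
    return int(neg_idx[np.argmin(d_an)])             # fallback: hardest (closest) negative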
With the right sampling strategy, even a simple pairwise loss works well.
Pairwise distance distribution on the n-dimensional unit sphere: fix a point a on the sphere, draw another point uniformly at random, and ask for the density of the distance d between it and a; d here is the plain Euclidean distance between the two unit vectors (ranging from 0 to 2), not a cosine or geodesic distance.
Distance-weighted sampling covers a wide range of distances and, while keeping the variance under control, steadily produces informative examples.
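For reference, the density that the code below inverts (pairwise Euclidean distances of points drawn uniformly on the unit sphere in n dimensions) and the paper's sampling rule are:

$$ q(d) \propto d^{\,n-2}\Big[1 - \tfrac{1}{4}d^{2}\Big]^{\frac{n-3}{2}}, \qquad \Pr(n^{*} = n \mid a) \propto \min\big(\lambda,\ q^{-1}(D_{an})\big) $$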
import numpy as np
import torch

def inverse_sphere_distances(self, batch, anchor_to_all_dists, labels, anchor_label):
    # returns, for one anchor, sampling probabilities over the batch, proportional to q(d)^-1
    dists = anchor_to_all_dists
    bs, dim = len(dists), batch.shape[-1]

    # negated log-density of pairwise distances on the unit sphere in this dimension
    log_q_d_inv = ((2.0 - float(dim)) * torch.log(dists)
                   - (float(dim - 3) / 2) * torch.log(1.0 - 0.25 * (dists.pow(2))))
    log_q_d_inv[np.where(labels == anchor_label)[0]] = 0

    q_d_inv = torch.exp(log_q_d_inv - torch.max(log_q_d_inv))  # - max(log) for stability
    q_d_inv[np.where(labels == anchor_label)[0]] = 0           # never sample same-class points

    ### NOTE: Cutting off values with high distances made the results slightly worse. It can also lead to
    # errors where there are no available negatives (for high samples_per_class cases).
    # q_d_inv[np.where(dists.detach().cpu().numpy() > self.upper_cutoff)[0]] = 0

    q_d_inv = q_d_inv / q_d_inv.sum()
    return q_d_inv.detach().cpu().numpy()
[Actual implementation: the weights are computed as the log of the inverse distribution, and omitting the λ cutoff may actually work slightly better.]
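A hypothetical usage sketch for the routine above (the batch size, embedding dimension, clamp bounds, and passing None for the unused self are my own choices):

import numpy as np
import torch

# toy batch of L2-normalized embeddings and integer class labels
emb = torch.nn.functional.normalize(torch.randn(32, 128), dim=1)
labels = np.random.randint(0, 8, size=32)

anchor = 0
# Euclidean distances from the anchor to every element of the batch;
# clamp away 0 and 2 so the logarithms inside the sampler stay finite
dists = torch.cdist(emb[anchor:anchor + 1], emb).squeeze(0).clamp(min=1e-4, max=1.99)

# self is never used on the active code path, so None is fine here
p = inverse_sphere_distances(None, emb, dists, labels, labels[anchor])

p = p.astype(np.float64)
p /= p.sum()                                # renormalize so np.random.choice accepts it
neg_idx = np.random.choice(len(p), p=p)     # index of the sampled negative for this anchor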
basic idea: ordinal regression cares about relative order, not absolute distances. Isotonic regression estimates a threshold separately and penalizes scores relative to that threshold. Here the same idea is applied to pairwise distances rather than to a score function:
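Written out (to the best of my recollection of the paper; y_ij ∈ {1, −1} marks positive/negative pairs, α is the margin, and β is the learned boundary between positive and negative distances, which can be split into global, per-class, and per-example parts):

$$ \ell^{\text{margin}}(i, j) := \big(\alpha + y_{ij}\,(D_{ij} - \beta)\big)_{+}, \qquad \beta = \beta^{(0)} + \beta^{(\text{class})}_{c(i)} + \beta^{(\text{img})}_{i} $$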
A larger β makes better use of the embedding space, so β needs to be regularized.
Optimizing the margin based loss = solving a ranking problem over distances.
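A minimal PyTorch sketch of this margin based loss with a learnable per-class β and a ν-regularizer on β (the class name, default hyperparameters, and normalizing by the number of active pairs are my own choices, not taken from the paper):

import torch
import torch.nn as nn

class MarginLoss(nn.Module):
    # (alpha + y_ij * (D_ij - beta))_+ with a learnable per-class beta
    def __init__(self, n_classes, alpha=0.2, beta_init=1.2, nu=0.0):
        super().__init__()
        self.alpha = alpha
        self.nu = nu    # weight of the regularizer that discourages large beta
        self.beta = nn.Parameter(torch.full((n_classes,), beta_init))

    def forward(self, d_ap, d_an, anchor_classes):
        # d_ap: anchor-positive distances, d_an: anchor-negative distances,
        # anchor_classes: class index of each anchor (all three have equal length)
        beta = self.beta[anchor_classes]
        pos_loss = torch.relu(d_ap - beta + self.alpha)   # y_ij = +1
        neg_loss = torch.relu(beta - d_an + self.alpha)   # y_ij = -1
        # normalize by the number of pairs that actually contribute a non-zero loss
        active = ((pos_loss > 0).sum() + (neg_loss > 0).sum()).clamp(min=1)
        return (pos_loss.sum() + neg_loss.sum() + self.nu * beta.sum()) / active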