# DeepLearning-MachineLearning-Note
**Repository Path**: edata-code/DeepLearning-MachineLearning-Note
## Basic Information
- **Project Name**: DeepLearning-MachineLearning-Note
- **Description**: Deep Learning and machine learning note and code
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: https://www.cnblogs.com/endlesscoding/
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 1
- **Created**: 2020-01-06
- **Last Updated**: 2021-09-30
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
## Deep Learning / Machine Learning Notes
Some notes on deep learning and machine learning.
### Focal Loss
- [ ] Add a more detailed implementation write-up

Focal Loss works well for class-imbalance problems. Here it is explained in detail and implemented in PyTorch; for the full explanation, see my [blog post](https://www.cnblogs.com/endlesscoding/p/12155588.html).
- [x] PyTorch Focal Loss
- [x] Light Focal Loss
- [ ] CatBoost Focal Loss
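A minimal PyTorch sketch of binary focal loss, assuming the usual formulation $FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$ with the paper's default $\alpha=0.25$, $\gamma=2$ (the function name and signature here are illustrative; the implementation in this repo may differ):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    `targets` holds 0/1 labels as floats; `logits` are raw scores.
    """
    p = torch.sigmoid(logits)
    # Per-sample cross entropy (no reduction, so we can reweight it).
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t: probability assigned to the true class of each sample.
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma down-weights easy, well-classified samples.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```

With `gamma=0` and `alpha=0.5` this reduces to half the plain binary cross-entropy, which is a quick sanity check on the implementation.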
### GHM Loss
GHM Loss comes from [Gradient Harmonized Single-Stage Detector](https://www.aaai.org/ojs/index.php/AAAI/article/view/4877/4750) and can be seen as an improved version of Focal Loss, with better reported results.
The paper defines both a classification loss and a regression loss.
#### Classification loss: GHM-C
$$
\begin{aligned}
L_{G H M-C} &=\frac{1}{N} \sum_{i=1}^{N}{\beta_{i} L_{C E}\left(p_{i}, p_{i}^{*}\right)} \\
&=\sum_{i=1}^{N}{\frac{L_{C E}\left(p_{i}, p_{i}^{*}\right)}{G D\left(g_{i}\right)}}
\end{aligned}
$$
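Here $\beta_i = N / GD(g_i)$, where $GD(g)$ is the gradient density of the gradient norm $g_i = |p_i - p_i^*|$. In practice $GD(g)$ is approximated by histogram binning over unit-width regions of $[0, 1]$. A minimal PyTorch sketch for the binary case, following that binning scheme (the bin count and the normalization by the number of non-empty bins are assumptions modeled on common implementations):

```python
import torch
import torch.nn.functional as F

def ghm_c_loss(logits, targets, bins=10):
    """GHM-C sketch for binary classification.

    For BCE-with-logits, the gradient norm is g = |sigmoid(x) - p*|;
    GD(g) is approximated by a histogram over `bins` regions of [0, 1].
    """
    # Gradient norms must not backpropagate; detach them.
    g = (torch.sigmoid(logits).detach() - targets).abs()
    edges = torch.linspace(0, 1, bins + 1)
    edges[-1] += 1e-6  # include g == 1 in the last bin
    tot = logits.numel()
    weights = torch.zeros_like(logits)
    nonempty = 0
    for i in range(bins):
        in_bin = (g >= edges[i]) & (g < edges[i + 1])
        num = int(in_bin.sum())
        if num > 0:
            weights[in_bin] = tot / num  # beta_i = N / GD(g_i)
            nonempty += 1
    if nonempty > 0:
        weights = weights / nonempty
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (weights * ce).sum() / tot
```

When all samples fall into a single bin, every weight becomes 1 and the loss collapses to plain mean binary cross-entropy, so samples with common gradient norms are down-weighted only relative to rarer ones.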
#### Regression loss: GHM-R
The paper proposes a replacement for `Smooth L1 Loss`, the Authentic Smooth L1 ($ASL_1$):
$$
A S L_{1}(d)=\sqrt{d^{2}+\mu^{2}}-\mu
$$
$$
\begin{aligned}
L_{G H M-R}
&=\frac{1}{N} \sum_{i=1}^{N} \beta_{i} A S L_{1}\left(d_{i}\right) \\
&=\sum_{i=1}^{N} \frac{A S L_{1}\left(d_{i}\right)}{G D\left(g r_{i}\right)}
\end{aligned}
$$
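Since $\frac{\partial ASL_1}{\partial d} = d / \sqrt{d^2 + \mu^2}$, the gradient norm is smooth and bounded in $(-1, 1)$, which is what makes the binning in $GD(gr_i)$ well behaved. A one-function PyTorch sketch ($\mu = 0.02$ is the paper's suggested value):

```python
import torch

def asl1(d, mu=0.02):
    """Authentic Smooth L1: ASL1(d) = sqrt(d^2 + mu^2) - mu.

    Behaves like 0.5 * d^2 / mu near zero and like |d| - mu for large |d|,
    but with a gradient d / sqrt(d^2 + mu^2) whose norm stays below 1.
    """
    return torch.sqrt(d * d + mu * mu) - mu
```

The bounded gradient is easy to confirm with autograd: even for a large residual the gradient magnitude approaches but never reaches 1.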
> See the paper for a more detailed explanation.