Commit ace49da
Internal change
PiperOrigin-RevId: 410043098
MarkDaoust authored and ramakumar1729 committed Nov 16, 2021
1 parent d3d5394 commit ace49da
Showing 2 changed files with 42 additions and 42 deletions.
10 changes: 5 additions & 5 deletions tensorflow_ranking/python/keras/losses.py
@@ -668,9 +668,9 @@ class ListMLELoss(_ListwiseLoss):
\mathcal{L}(\{y\}, \{s\}) = - \log(P(\pi_y | s))
$$
-where $$P(\pi_y | s)$$ is the Plackett-Luce probability of a permutation
-$$\pi_y$$ conditioned on scores $$s$$. Here $$\pi_y$$ represents a permutation
-of items ordered by the relevance labels $$y$$ where ties are broken randomly.
+where $P(\pi_y | s)$ is the Plackett-Luce probability of a permutation
+$\pi_y$ conditioned on scores $s$. Here $\pi_y$ represents a permutation
+of items ordered by the relevance labels $y$ where ties are broken randomly.
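Annotation (not from the commit): a minimal standalone-usage sketch of this loss against the public `tfr.keras.losses` API, with made-up labels and scores.

```python
import tensorflow_ranking as tfr

y_true = [[1., 0.]]    # relevance labels y
y_pred = [[0.6, 0.8]]  # scores s
# ListMLE is the negative log Plackett-Luce likelihood of the
# label-induced permutation under the predicted scores.
loss = tfr.keras.losses.ListMLELoss()
print(loss(y_true, y_pred).numpy())
```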
References:
- [Listwise approach to learning to rank: theory and algorithm, Xia et al,
@@ -963,7 +963,7 @@ class ClickEMLoss(_RankingLoss):
Implementation of click EM loss ([Wang et al, 2018][wang2018]). This loss
assumes that a click is generated by a factorized model
-$$P(\text{examination}) \cdot P(\text{relevance})$$, which are latent
+$P(\text{examination}) \cdot P(\text{relevance})$, which are latent
variables determined by `exam_logits` and `rel_logits` respectively.
NOTE: This loss should be called with a `logits` tensor of shape
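Annotation (not from the commit): a sketch of feeding the factorized click model to this loss. The `[batch_size, list_size, 2]` packing with examination logits first is an assumption for illustration.

```python
import tensorflow as tf
import tensorflow_ranking as tfr

clicks = [[1., 0.]]          # observed clicks, not ground-truth relevance
exam_logits = [[2.0, -1.0]]  # logits for latent P(examination)
rel_logits = [[1.0, 0.5]]    # logits for latent P(relevance)
# Pack both latent-variable logits into one tensor of shape [1, 2, 2].
logits = tf.stack([exam_logits, rel_logits], axis=2)
loss = tfr.keras.losses.ClickEMLoss()
print(loss(clicks, logits).numpy())
```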
@@ -1043,7 +1043,7 @@ class SigmoidCrossEntropyLoss(_RankingLoss):
```
NOTE: This loss does not support graded relevance labels and should only be
-used with binary relevance labels ($$y \in [0, 1]$$).
+used with binary relevance labels ($y \in [0, 1]$).
Standalone usage:
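The docstring's own example is cut off by the diff view; as an illustrative sketch of such usage (inputs made up, output not asserted):

```python
import tensorflow_ranking as tfr

y_true = [[1., 0.]]    # binary relevance labels
y_pred = [[0.6, 0.8]]  # per-item logits
loss = tfr.keras.losses.SigmoidCrossEntropyLoss()
print(loss(y_true, y_pred).numpy())
```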
74 changes: 37 additions & 37 deletions tensorflow_ranking/python/keras/metrics.py
@@ -237,8 +237,8 @@ class MRRMetric(_RankingMetric):
\text{MRR}(\{y\}, \{s\}) = \max_i \frac{\bar{y}_i}{\text{rank}(s_i)}
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly and $$\bar{y_i}$$ are truncated labels:
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly and $\bar{y_i}$ are truncated labels:
$$
\bar{y}_i = \begin{cases}
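Annotation (not from the commit): a worked sketch of the MRR definition above, using made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]  # the best-ranked relevant item sits at rank 2
mrr = tfr.keras.metrics.MRRMetric()
print(mrr(y_true, y_pred).numpy())  # max(0/1, 1/3, 1/2) = 0.5
```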
@@ -300,8 +300,8 @@ class ARPMetric(_RankingMetric):
\frac{1}{\sum_i y_i} \sum_i y_i \cdot \text{rank}(s_i)
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly.
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly.
"""

def __init__(self, name=None, dtype=None, ragged=False, **kwargs):
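Annotation (not from the commit): a worked sketch of average relevance position, with made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]  # the two relevant items land at ranks 2 and 3
arp = tfr.keras.metrics.ARPMetric()
print(arp(y_true, y_pred).numpy())  # (2 + 3) / 2 = 2.5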
@@ -353,21 +353,21 @@ class PrecisionMetric(_RankingMetric):
where:
-* $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores $$s$$
+* $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores $s$
with ties broken randomly
-* $$I[]$$ is the indicator function:\
-$$I[\text{cond}] = \begin{cases}
+* $I[]$ is the indicator function:\
+$I[\text{cond}] = \begin{cases}
1 & \text{if cond is true}\\
0 & \text{else}\end{cases}
-$$
-* $$\bar{y}_i$$ are the truncated labels:\
-$$
+$
+* $\bar{y}_i$ are the truncated labels:\
+$
\bar{y}_i = \begin{cases}
1 & \text{if }y_i \geq 1 \\
0 & \text{else}
\end{cases}
-$$
-* $$k = |y|$$ if $$k$$ is not provided
+$
+* $k = |y|$ if $k$ is not provided
"""

def __init__(self, name=None, topn=None, dtype=None, ragged=False, **kwargs):
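Annotation (not from the commit): a worked sketch of precision at `topn`, with made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]  # top 2 by score: the 1st and 3rd items; one relevant
precision_at_2 = tfr.keras.metrics.PrecisionMetric(topn=2)
print(precision_at_2(y_true, y_pred).numpy())  # 1/2 = 0.5
```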
@@ -429,21 +429,21 @@ class RecallMetric(_RankingMetric):
where:
-* $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores $$s$$
+* $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores $s$
with ties broken randomly
-* $$I[]$$ is the indicator function:\
-$$I[\text{cond}] = \begin{cases}
+* $I[]$ is the indicator function:\
+$I[\text{cond}] = \begin{cases}
1 & \text{if cond is true}\\
0 & \text{else}\end{cases}
-$$
-* $$\bar{y}_i$$ are the truncated labels:\
-$$
+$
+* $\bar{y}_i$ are the truncated labels:\
+$
\bar{y}_i = \begin{cases}
1 & \text{if }y_i \geq 1 \\
0 & \text{else}
\end{cases}
-$$
-* $$k = |y|$$ if $$k$$ is not provided
+$
+* $k = |y|$ if $k$ is not provided
"""

def __init__(self, name=None, topn=None, dtype=None, ragged=False, **kwargs):
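Annotation (not from the commit): a worked sketch of recall at `topn`, with made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]  # two relevant items in total
y_pred = [[3., 1., 2.]]  # only one of them makes the top 2
recall_at_2 = tfr.keras.metrics.RecallMetric(topn=2)
print(recall_at_2(y_true, y_pred).numpy())  # 1/2 = 0.5
```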
@@ -512,8 +512,8 @@ class PrecisionIAMetric(_RankingMetric):
\sum_t \sum_i I[\text{rank}(s_i) \leq k] y_{i,t}
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly.
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly.
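Annotation (not from the commit): a usage sketch for this intent-aware metric; the per-subtopic label layout `[batch_size, list_size, subtopic_size]` is an assumption for illustration.

```python
import tensorflow_ranking as tfr

# One relevance column per subtopic t (assumed layout).
y_true = [[[1., 0.], [0., 1.], [0., 0.]]]
y_pred = [[3., 2., 1.]]
precision_ia = tfr.keras.metrics.PrecisionIAMetric()
print(precision_ia(y_true, y_pred).numpy())
```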
References:
@@ -603,22 +603,22 @@ class MeanAveragePrecisionMetric(_RankingMetric):
where:
-* $$P@k(y, s)$$ is the Precision at rank $$k$$. See
+* $P@k(y, s)$ is the Precision at rank $k$. See
`tfr.keras.metrics.PrecisionMetric`.
-* $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores $$s$$
+* $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores $s$
with ties broken randomly
-* $$I[]$$ is the indicator function:\
-$$I[\text{cond}] = \begin{cases}
+* $I[]$ is the indicator function:\
+$I[\text{cond}] = \begin{cases}
1 & \text{if cond is true}\\
0 & \text{else}\end{cases}
-$$
-* $$\bar{y}_i$$ are the truncated labels:\
-$$
+$
+* $\bar{y}_i$ are the truncated labels:\
+$
\bar{y}_i = \begin{cases}
1 & \text{if }y_i \geq 1 \\
0 & \text{else}
\end{cases}
-$$
+$
"""

def __init__(self, name=None, topn=None, dtype=None, ragged=False, **kwargs):
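Annotation (not from the commit): a worked sketch of mean average precision, with made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]  # relevant items at ranks 2 and 3
map_metric = tfr.keras.metrics.MeanAveragePrecisionMetric()
print(map_metric(y_true, y_pred).numpy())  # (1/2 + 2/3) / 2 ≈ 0.58
```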
@@ -685,8 +685,8 @@ class NDCGMetric(_RankingMetric):
\sum_i \text{gain}(y_i) \cdot \text{rank_discount}(\text{rank}(s_i))
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly.
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly.
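Annotation (not from the commit): a usage sketch with made-up inputs; the gain and discount named in the comment are the library defaults as I understand them.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]
# Defaults use gain 2^y - 1 and rank discount 1 / log2(rank + 1).
ndcg = tfr.keras.metrics.NDCGMetric()
print(ndcg(y_true, y_pred).numpy())  # ≈ 0.69 for these inputs
```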
References:
@@ -771,8 +771,8 @@ class DCGMetric(_RankingMetric):
\sum_i \text{gain}(y_i) \cdot \text{rank_discount}(\text{rank}(s_i))
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly.
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly.
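Annotation (not from the commit): the unnormalized counterpart of NDCG; a sketch with the same made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 1.]]
y_pred = [[3., 1., 2.]]
dcg = tfr.keras.metrics.DCGMetric()
# With default gain and discount: 1/log2(3) + 1/log2(4) ≈ 1.13 here.
print(dcg(y_true, y_pred).numpy())
```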
References:
@@ -868,8 +868,8 @@ class AlphaDCGMetric(_RankingMetric):
y_{i, t} (1 - \alpha)^{\sum_j I[\text{rank}(s_j) < \text{rank}(s_i)] y_{j, t}}
$$
-where $$\text{rank}(s_i)$$ is the rank of item $$i$$ after sorting by scores
-$$s$$ with ties broken randomly and $$I[]$$ is the indicator function:
+where $\text{rank}(s_i)$ is the rank of item $i$ after sorting by scores
+$s$ with ties broken randomly and $I[]$ is the indicator function:
$$
I[\text{cond}] = \begin{cases}
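Annotation (not from the commit): a usage sketch; the subtopic label layout and the default `alpha` of 0.5 are assumptions for illustration.

```python
import tensorflow_ranking as tfr

# Subtopic labels, assumed shape [batch_size, list_size, subtopic_size];
# alpha discounts repeated coverage of an already-covered subtopic.
y_true = [[[1., 0.], [0., 1.], [0., 0.]]]
y_pred = [[3., 2., 1.]]
alpha_dcg = tfr.keras.metrics.AlphaDCGMetric()
print(alpha_dcg(y_true, y_pred).numpy())
```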
@@ -981,7 +981,7 @@ class OPAMetric(_RankingMetric):
\frac{\sum_i \sum_j I[s_i > s_j] I[y_i > y_j]}{\sum_i \sum_j I[y_i > y_j]}
$$
-where $$I[]$$ is the indicator function:
+where $I[]$ is the indicator function:
$$
I[\text{cond}] = \begin{cases}
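Annotation (not from the commit): a worked sketch of ordered pair accuracy, with made-up inputs.

```python
import tensorflow_ranking as tfr

y_true = [[0., 1., 2.]]
y_pred = [[3., 1., 2.]]  # of 3 pairs with y_i > y_j, 1 is ordered correctly
opa = tfr.keras.metrics.OPAMetric()
print(opa(y_true, y_pred).numpy())  # 1/3 ≈ 0.33
```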
