Improve CRPS theory slide
@@ -26298,12 +26298,12 @@ w_{t,k}^{\text{Naive}} = \frac{1}{K}\label{eq:naive_combination}
 <p>EWA satisfies optimal selection convergence <span class="math inline">\(\eqref{eq_optp_select}\)</span> in a deterministic setting if:</p>
 <p><i class="fa fa-fw fa-triangle-exclamation" style="color:var(--col_amber_9);"></i> Loss <span class="math inline">\(\ell\)</span> is exp-concave</p>
 <p><i class="fa fa-fw fa-triangle-exclamation" style="color:var(--col_amber_9);"></i> Learning-rate <span class="math inline">\(\eta\)</span> is chosen correctly</p>
-<p>Those results can be converted to <em>any</em> stochastic setting <span class="citation" data-cites="wintenberger2017optimal">Wintenberger (<a href="#/references" role="doc-biblioref" onclick>2017</a>)</span>.</p>
 <p>Optimal convex aggregation convergence <span class="math inline">\(\eqref{eq_optp_conv}\)</span> can be satisfied by applying the kernel-trick:</p>
 <p><span class="math display">\[\begin{align}
 \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x
 \end{align}\]</span></p>
-<p><span class="math inline">\(\ell'\)</span> is the subgradient of <span class="math inline">\(\ell\)</span> at forecast combination <span class="math inline">\(\widetilde{X}\)</span>.</p>
+<p><span class="math inline">\(\ell'\)</span> is the subgradient of <span class="math inline">\(\ell\)</span> at forecast combination <span class="math inline">\(\widetilde{X}\)</span></p>
+<p>Those results can be converted to <em>any</em> stochastic setting <span class="citation" data-cites="wintenberger2017optimal">Wintenberger (<a href="#/references" role="doc-biblioref" onclick>2017</a>)</span></p>
 </div><div class="column" style="width:4%;">
 
 </div><div class="column" style="width:48%;">
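A sketch of why the kernel-trick line upgrades the selection guarantee to convex aggregation, assuming $\ell(\cdot,y)$ is convex with subgradient $\ell'$ and $\widetilde{X}=\sum_k w_k x_k$:

\begin{align}
\ell(\widetilde{X},y) - \ell(x_k,y)
\;\le\; \ell'(\widetilde{X},y)\,(\widetilde{X}-x_k)
\;=\; \ell^{\nabla}(\widetilde{X},y) - \ell^{\nabla}(x_k,y).
\end{align}

Since $\ell^{\nabla}(\cdot,y)$ is linear in its first argument, $\ell^{\nabla}(\widetilde{X},y)=\sum_k w_k\,\ell^{\nabla}(x_k,y)$, so a selection-type regret bound proved for the pseudo-losses $\ell^{\nabla}$ carries over to $\ell$ against the best fixed convex combination of the experts.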
@@ -1215,15 +1215,15 @@ EWA satisfies optimal selection convergence \eqref{eq_optp_select} in a determin
 
 <i class="fa fa-fw fa-triangle-exclamation" style="color:var(--col_amber_9);"></i> Learning-rate $\eta$ is chosen correctly
 
-Those results can be converted to *any* stochastic setting @wintenberger2017optimal.
-
 Optimal convex aggregation convergence \eqref{eq_optp_conv} can be satisfied by applying the kernel-trick:
 
 \begin{align}
 \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x
 \end{align}
 
-$\ell'$ is the subgradient of $\ell$ at forecast combination $\widetilde{X}$.
+$\ell'$ is the subgradient of $\ell$ at forecast combination $\widetilde{X}$
+
+Those results can be converted to *any* stochastic setting @wintenberger2017optimal
 
 :::
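For a concrete view of the scheme described in this slide, here is a minimal sketch of EWA combined with the gradient trick, assuming numpy, a squared loss standing in for the generic $\ell$, and a hand-picked learning rate $\eta$; the function name and the choice of loss are illustrative, not taken from the slides:

```python
import numpy as np

def ewa_gradient_trick(X, y, eta=0.1):
    """EWA forecast combination with the gradient (kernel) trick.

    X : (T, K) array of K expert forecasts over T rounds.
    y : (T,) array of observations.
    Squared loss ell(x, y) = (x - y)^2 stands in for the generic loss;
    eta is the learning rate the slide says must be chosen correctly.
    """
    T, K = X.shape
    w = np.full(K, 1.0 / K)          # start from the naive weights w_{1,k} = 1/K
    combined = np.empty(T)
    for t in range(T):
        combined[t] = w @ X[t]                # combined forecast X~_t
        grad = 2.0 * (combined[t] - y[t])     # subgradient ell'(X~_t, y_t) of the squared loss
        pseudo = grad * X[t]                  # ell^nabla(x_{t,k}, y_t) = ell'(X~_t, y_t) x_{t,k}
        w = w * np.exp(-eta * pseudo)         # exponential weight update on the pseudo-losses
        w /= w.sum()                          # keep the weights on the simplex
    return combined, w
```

Plain EWA on the true losses, with an exp-concave $\ell$ and a well-chosen $\eta$, gives the deterministic selection guarantee the slide cites; swapping in the pseudo-losses $\ell^{\nabla}$ as above is the kernel-trick step that targets the best convex combination instead.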