From 13c238779e449728d176b3a7ab4912f997ff1639 Mon Sep 17 00:00:00 2001
From: Jonathan Berrisch
Date: Fri, 27 Jun 2025 09:44:14 +0200
Subject: [PATCH] Improve crps theory slide

---
 index.html | 4 ++--
 index.qmd  | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/index.html b/index.html
index 10d884c..156e7b2 100644
--- a/index.html
+++ b/index.html
@@ -26298,12 +26298,12 @@ w_{t,k}^{\text{Naive}} = \frac{1}{K}\label{eq:naive_combination}
 EWA satisfies optimal selection convergence \(\eqref{eq_optp_select}\) in a deterministic setting if:
 Loss \(\ell\) is exp-concave
 Learning-rate \(\eta\) is chosen correctly
-Those results can be converted to any stochastic setting Wintenberger (2017).
 Optimal convex aggregation convergence \(\eqref{eq_optp_conv}\) can be satisfied by applying the gradient-trick:
 \[\begin{align} \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x \end{align}\]
-\(\ell'\) is the subgradient of \(\ell\) at forecast combination \(\widetilde{X}\).
+\(\ell'\) is the subgradient of \(\ell\) at forecast combination \(\widetilde{X}\)
+Those results can be converted to any stochastic setting (Wintenberger 2017)
diff --git a/index.qmd b/index.qmd
index 382890a..068588c 100644
--- a/index.qmd
+++ b/index.qmd
@@ -1215,15 +1215,15 @@ EWA satisfies optimal selection convergence \eqref{eq_optp_select} in a determin
 Learning-rate $\eta$ is chosen correctly
 
-Those results can be converted to *any* stochastic setting @wintenberger2017optimal.
-
 Optimal convex aggregation convergence \eqref{eq_optp_conv} can be satisfied by applying the gradient-trick:
 
 \begin{align}
 \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x
 \end{align}
 
-$\ell'$ is the subgradient of $\ell$ at forecast combination $\widetilde{X}$.
+$\ell'$ is the subgradient of $\ell$ at forecast combination $\widetilde{X}$
+
+Those results can be converted to *any* stochastic setting [@wintenberger2017optimal]
 
 :::
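To make the linearization on the slide concrete, here is a minimal numerical sketch of EWA run on gradient-trick losses (Python/NumPy; the function name, the squared-loss choice, and the fixed learning rate are illustrative assumptions, not taken from the slides or the patch):

```python
import numpy as np

def ewa_gradient_trick(experts, y, eta=0.1):
    """EWA on linearized ('gradient trick') losses.

    experts: (T, K) array of expert forecasts; y: (T,) observations.
    Uses the squared loss, so the subgradient at the combination is
    l'(X~, y) = 2 * (X~ - y), and expert k's linearized loss is
    l^grad(x_k, y) = l'(X~, y) * x_k.
    """
    T, K = experts.shape
    w = np.full(K, 1.0 / K)              # start from the naive uniform combination
    cum = np.zeros(K)                    # cumulative linearized losses per expert
    history = np.empty((T, K))
    for t in range(T):
        history[t] = w
        x_tilde = w @ experts[t]         # forecast combination X~
        grad = 2.0 * (x_tilde - y[t])    # l'(X~, y) for the squared loss
        cum += grad * experts[t]         # gradient-trick loss l'(X~, y) * x_k
        w = np.exp(-eta * (cum - cum.min()))  # exponential weighting, shifted for stability
        w /= w.sum()
    return history
```

With one nearly exact expert and one biased expert, the weights drift from the uniform start toward the better expert, which is the behaviour the selection-convergence statement describes; because the update uses the linearized losses, the same weights also target the best convex aggregation.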