diff --git a/index.html b/index.html
index 10d884c..156e7b2 100644
--- a/index.html
+++ b/index.html
@@ -26298,12 +26298,12 @@ w_{t,k}^{\text{Naive}} = \frac{1}{K}\label{eq:naive_combination}
EWA satisfies optimal selection convergence \(\eqref{eq_optp_select}\) in a deterministic setting if:
The loss \(\ell\) is exp-concave
The learning rate \(\eta\) is chosen appropriately for the exp-concavity constant of \(\ell\) (a sketch of the resulting update follows this list)
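As an illustration of that statement, here is a minimal sketch of the EWA weight update; it is not taken from the original text, and the expert forecasts `X`, outcomes `y`, the loss callable, and the learning rate `eta` are hypothetical placeholders.

```python
import numpy as np

def ewa_weights(X, y, loss, eta):
    """Exponentially Weighted Average (EWA) over K experts.

    X    : (T, K) array of expert forecasts
    y    : (T,) array of realized outcomes
    loss : callable loss(forecast, outcome), e.g. squared error
    eta  : learning rate (> 0)
    Returns the (T, K) sequence of combination weights.
    """
    T, K = X.shape
    w = np.full(K, 1.0 / K)           # start from the naive 1/K combination
    weights = np.zeros((T, K))
    cum_loss = np.zeros(K)
    for t in range(T):
        weights[t] = w                # weights used for the forecast at time t
        cum_loss += loss(X[t], y[t])  # update each expert's cumulative loss
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for stability
        w /= w.sum()
    return weights

# Example: squared-error loss (exp-concave on a bounded domain)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))         # 3 expert forecasts over 100 periods
y = X[:, 0] + 0.1 * rng.normal(size=100)
W = ewa_weights(X, y, loss=lambda x, yt: (x - yt) ** 2, eta=0.5)
```

With an \(\eta\)-exp-concave loss such as squared error on a bounded domain, this update attains the \(\mathcal{O}(\log K)\) selection regret underlying \(\eqref{eq_optp_select}\).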
-Those results can be converted to any stochastic setting Wintenberger (2017).
Optimal convex aggregation convergence \(\eqref{eq_optp_conv}\) can be achieved by applying the gradient trick, i.e., linearizing the loss:
\[\begin{align} \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x \end{align}\]
-\(\ell'\) is the subgradient of \(\ell\) at forecast combination \(\widetilde{X}\).
+where \(\ell'\) is the subgradient of \(\ell\) evaluated at the forecast combination \(\widetilde{X}\).
+Those results can be carried over to the stochastic setting (Wintenberger 2017).
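To illustrate the gradient trick above, here is a minimal sketch under the same hypothetical placeholders (`X`, `y`, `eta` as before; `dloss` is the subgradient of the squared error, introduced here for illustration): EWA is run on the linearized loss \(\ell^{\nabla}\), evaluated at the current combination \(\widetilde{X}_t\), so the resulting weights compete with the best convex combination rather than the best single expert.

```python
import numpy as np

def ewa_gradient_trick(X, y, dloss, eta):
    """EWA on the linearized loss l_grad(x, y) = l'(X_tilde, y) * x.

    X     : (T, K) array of expert forecasts
    y     : (T,) array of realized outcomes
    dloss : callable returning the (sub)gradient l'(forecast, outcome)
    eta   : learning rate
    Returns the (T, K) weights tracking the best convex combination.
    """
    T, K = X.shape
    w = np.full(K, 1.0 / K)
    weights = np.zeros((T, K))
    cum_loss = np.zeros(K)
    for t in range(T):
        weights[t] = w
        x_tilde = w @ X[t]            # current forecast combination X_tilde
        grad = dloss(x_tilde, y[t])   # l'(X_tilde, y): a scalar here
        cum_loss += grad * X[t]       # linearized loss of each expert
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        w /= w.sum()
    return weights

# Example with squared error: l(x, y) = (x - y)^2, so l'(x, y) = 2 * (x - y)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1]     # target is itself a convex mix of experts
W = ewa_gradient_trick(X, y, dloss=lambda x, yt: 2.0 * (x - yt), eta=0.1)
```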