Fix typos throughout the document

2025-06-22 09:52:45 +02:00
parent a8f46db8a2
commit 9f8cd986f1


@@ -400,7 +400,7 @@ col_yellow <- "#FCE135"
 Reduces estimation time by 2-3 orders of magnitude
-Maintainins competitive forecasting accuracy
+Maintains competitive forecasting accuracy
 Real-World Validation in Energy Markets
@@ -1059,7 +1059,7 @@ chart = {
 Each day, $t = 1, 2, ... T$
 - The **forecaster** receives predictions $\widehat{X}_{t,k}$ from $K$ **experts**
-- The **forecaster** assings weights $w_{t,k}$ to each **expert**
+- The **forecaster** assigns weights $w_{t,k}$ to each **expert**
 - The **forecaster** calculates her prediction:
 \begin{equation}
 \widetilde{X}_{t} = \sum_{k=1}^K w_{t,k} \widehat{X}_{t,k}.
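The combination step in this hunk is a convex-weighted average of the expert forecasts. A minimal R sketch, assuming a vector of expert predictions for one day and a matching weight vector (the names `experts`, `weights`, and `combine_experts` are illustrative, not taken from the slides):

```r
# Minimal sketch: combine K expert forecasts into one prediction
# experts: numeric vector of length K with the predictions X_hat[t, k]
# weights: numeric vector of length K, non-negative and summing to 1
combine_experts <- function(experts, weights) {
  stopifnot(length(experts) == length(weights))
  sum(weights * experts)  # X_tilde_t = sum_k w_{t,k} * X_hat_{t,k}
}

# Example: three experts with equal initial weights
combine_experts(experts = c(101.2, 99.8, 100.5), weights = rep(1 / 3, 3))
```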
@@ -1230,7 +1230,7 @@ Optimal rates with respect to selection \eqref{eq_opt_select} and convex aggrega
 \label{eq_optp_conv}
 \end{align}
-Algorithms can statisfy both \eqref{eq_optp_select} and \eqref{eq_optp_conv} depending on:
+Algorithms can satisfy both \eqref{eq_optp_select} and \eqref{eq_optp_conv} depending on:
 - The loss function
 - Regularity conditions on $Y_t$ and $\widehat{X}_{t,k}$
@@ -1312,7 +1312,7 @@ Using the CRPS, we can calculate time-adaptive weights $w_{t,k}$. However, what
 <i class="fa fa-fw fa-triangle-exclamation" style="color:var(--col_amber_9);"></i> QL is convex, but not exp-concave
-<i class="fa fa-fw fa-arrow-right" style="color:var(--col_grey_10);"></i> Bernstein Online Aggregation (BOA) lets us weaken the exp-concavity condition. It satisfies that there exist a $C>0$ such that for $x>0$ it holds that
+<i class="fa fa-fw fa-arrow-right" style="color:var(--col_grey_10);"></i> Bernstein Online Aggregation (BOA) lets us weaken the exp-concavity condition. It satisfies that there exists a $C>0$ such that for $x>0$ it holds that
 \begin{equation}
 P\left( \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\pi} \right) \leq C \log(\log(t)) \left(\sqrt{\frac{\log(K)}{t}} + \frac{\log(K)+x}{t}\right) \right) \geq
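Assuming QL here denotes the standard quantile (pinball) loss of quantile forecasting, a minimal R sketch of it may be useful alongside this hunk; the function name `quantile_loss` is illustrative:

```r
# Minimal sketch of the quantile (pinball) loss:
# QL_p(q, y) = (1{y <= q} - p) * (q - y)
# It is convex in the forecast q, but not exp-concave, which is why the
# weaker BOA condition above is needed.
quantile_loss <- function(q, y, p) {
  (as.numeric(y <= q) - p) * (q - y)
}

# Example: loss of the 90% quantile forecast q = 50 when y = 60 realizes
quantile_loss(q = 50, y = 60, p = 0.9)
```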
@@ -1324,7 +1324,7 @@ if the loss function is convex.
 <i class="fa fa-fw fa-arrow-right" style="color:var(--col_grey_10);"></i> Almost optimal w.r.t. *convex aggregation* \eqref{eq_optp_conv} @wintenberger2017optimal.
-The same algorithm satisfies that there exist a $C>0$ such that for $x>0$ it holds that
+The same algorithm satisfies that there exists a $C>0$ such that for $x>0$ it holds that
 \begin{equation}
 P\left( \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\min} \right) \leq
 C\left(\frac{\log(K)+\log(\log(Gt))+ x}{\alpha t}\right)^{\frac{1}{2-\beta}} \right) \geq
@@ -1362,7 +1362,7 @@ Pointwise can outperform constant procedures
 <i class="fa fa-fw fa-arrow-right" style="color:var(--col_grey_10);"></i> $\text{QL}$ is convex: almost optimal convergence w.r.t. *convex aggregation* \eqref{eq_boa_opt_conv} <i class="fa fa-fw fa-check" style="color:var(--col_green_9);"></i> </br>
-For almost optimal congerence w.r.t. *selection* \eqref{eq_boa_opt_select} we need:
+For almost optimal convergence w.r.t. *selection* \eqref{eq_boa_opt_select} we need:
 **A1: Lipschitz Continuity**
@@ -1443,7 +1443,7 @@ The gradient based fully adaptive Bernstein online aggregation (BOAG) applied po
 $$\widehat{\mathcal{R}}_{t,\pi} = 2\overline{\widehat{\mathcal{R}}}^{\text{QL}}_{t,\pi}.$$
 If $Y_t|\mathcal{F}_{t-1}$ is bounded
-and has a pdf $f_t$ satifying $f_t>\gamma >0$ on its
+and has a pdf $f_t$ satisfying $f_t>\gamma >0$ on its
 support $\text{spt}(f_t)$ then \eqref{eq_boa_opt_select} holds with $\beta=1$ and
 $$\widehat{\mathcal{R}}_{t,\min} = 2\overline{\widehat{\mathcal{R}}}^{\text{QL}}_{t,\min}$$
@@ -1684,7 +1684,7 @@ Weights converge to the constant solution if $L\rightarrow 1$
 ### Initialization:
-Array of expert predicitons: $\widehat{X}_{t,p,k}$
+Array of expert predictions: $\widehat{X}_{t,p,k}$
 Vector of Prediction targets: $Y_t$
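A minimal sketch of how these two inputs could be laid out in R, assuming $T$ days, $P$ probability levels, and $K$ experts (all dimensions and object names are illustrative):

```r
# Illustrative dimensions: T days, P probability levels, K experts
T_days <- 365; P_probs <- 99; K_experts <- 4

# Array of expert predictions X_hat[t, p, k]
experts_array <- array(
  rnorm(T_days * P_probs * K_experts),
  dim = c(T_days, P_probs, K_experts),
  dimnames = list(NULL, paste0("p", 1:P_probs), paste0("expert", 1:K_experts))
)

# Vector of prediction targets Y_t
targets <- rnorm(T_days)
```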
@@ -1908,7 +1908,7 @@ Combination methods:
 ::: {.column width="69%"}
-Tuning paramter grids:
+Tuning parameter grids:
 - Smoothing Penalty: $\Lambda= \{0\}\cup \{2^x|x\in \{-4,-3.5,\ldots,12\}\}$
 - Learning Rates: $\mathcal{E}= \{2^x|x\in \{-1,-0.5,\ldots,9\}\}$
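Both grids are simple dyadic sequences; a short R sketch of how they could be constructed (variable names are illustrative):

```r
# Smoothing penalty grid: {0} union {2^x : x = -4, -3.5, ..., 12}
lambda_grid <- c(0, 2^seq(-4, 12, by = 0.5))

# Learning rate grid: {2^x : x = -1, -0.5, ..., 9}
learning_rates <- 2^seq(-1, 9, by = 0.5)

length(lambda_grid)     # 34 candidate penalties
length(learning_rates)  # 21 candidate learning rates
```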
@@ -2301,7 +2301,7 @@ Let $\boldsymbol{\psi}^{\text{mv}}=(\psi_1,\ldots, \psi_{D})$ and $\boldsymbol{\
 \boldsymbol w_{t,k} = \boldsymbol{\psi}^{\text{mv}} \boldsymbol{b}_{t,k} {\boldsymbol{\psi}^{pr}}'
 \end{equation*}
-with parameter matix $\boldsymbol b_{t,k}$. The latter is estimated to penalize $L_2$-smoothing which minimizes
+with parameter matrix $\boldsymbol b_{t,k}$. The latter is estimated to penalize $L_2$-smoothing which minimizes
 \begin{align}
 & \| \boldsymbol{\beta}_{t,d, k}' \boldsymbol{\varphi}^{\text{pr}} - \boldsymbol b_{t, d, k}' \boldsymbol{\psi}^{\text{pr}} \|^2_2 + \lambda^{\text{pr}} \| \mathcal{D}_{q} (\boldsymbol b_{t, d, k}' \boldsymbol{\psi}^{\text{pr}}) \|^2_2 + \nonumber \\
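The criterion in this hunk is a penalized least-squares (P-spline-type) smoothing problem over the probability grid. A minimal R sketch under the assumption of a B-spline basis and a second-order difference penalty (the basis choice, penalty order, and all object names are illustrative, not taken from the slides):

```r
library(splines)

# Illustrative setup: smooth noisy raw weights over the probability grid
p_grid <- seq(0.01, 0.99, by = 0.01)                      # probability levels
raw_w  <- pmin(pmax(p_grid + rnorm(99, 0, 0.05), 0), 1)   # noisy raw weights

Psi <- bs(p_grid, df = 12, intercept = TRUE)   # B-spline basis psi^pr
D   <- diff(diag(ncol(Psi)), differences = 2)  # 2nd-order difference penalty
lambda <- 4                                    # smoothing penalty lambda^pr

# Penalized least squares: (Psi'Psi + lambda D'D) b = Psi' w
b_hat <- solve(crossprod(Psi) + lambda * crossprod(D), crossprod(Psi, raw_w))
smoothed_w <- as.vector(Psi %*% b_hat)
```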
@@ -3168,7 +3168,7 @@ It holds that:
 with: $\boldsymbol{u}_t =(u_{1,t},\ldots, u_{K,t})^\intercal$, $u_{k,t} = F_{X_{k,t}|\mathcal{F}_{t-1}}(x_{k,t})$
-For brewity we drop the conditioning on $\mathcal{F}_{t-1}$.
+For brevity we drop the conditioning on $\mathcal{F}_{t-1}$.
 The model can be specified as follows
@@ -3267,7 +3267,7 @@ $\Lambda(\cdot)$ is a link function:
 ### Estimation
-Joint maximum lieklihood estimation:
+Joint maximum likelihood estimation:
 \begin{align*}
 f_{\mathbf{X}_t}(\mathbf{x}_t | \mathcal{F}_{t-1}) = c\left[\mathbf{F}(\mathbf{x}_t;\boldsymbol{\mu}_t, \boldsymbol{\sigma}_{t}^2, \boldsymbol{\nu},
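A minimal sketch of the copula-times-margins factorization behind this likelihood, illustrated in R for a bivariate Gaussian copula with normal margins via the `copula` package (this simplified setup and all object names are assumptions for illustration, not the model specified in the slides):

```r
library(copula)

# Joint log-likelihood = log copula density at the PITs
#                        + sum of marginal log densities
copula_loglik <- function(rho, mu, sigma, x) {
  u <- cbind(pnorm(x[, 1], mu[1], sigma[1]),   # PITs u_{k,t} = F_k(x_{k,t})
             pnorm(x[, 2], mu[2], sigma[2]))
  sum(dCopula(u, normalCopula(rho, dim = 2), log = TRUE)) +
    sum(dnorm(x[, 1], mu[1], sigma[1], log = TRUE)) +
    sum(dnorm(x[, 2], mu[2], sigma[2], log = TRUE))
}

# Example: evaluate at illustrative parameter values on simulated data
set.seed(1)
x_sim <- matrix(rnorm(200), ncol = 2)
copula_loglik(rho = 0.3, mu = c(0, 0), sigma = c(1, 1), x = x_sim)
```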
@@ -3417,7 +3417,7 @@ table_energy %>%
 - VES models deliver poor performance in short horizons
-- For Oil prices the RW Benchmark can't be oupterformed 30 steps ahead
+- For Oil prices the RW Benchmark can't be outperformed 30 steps ahead
 - Both VECM models generally deliver good performance
 :::
@@ -3742,7 +3742,7 @@ plot_quant_data %>% ggplot(aes(x = date, y = value)) +
 Accounting for heteroscedasticity or stabilizing the variance via log transformation is crucial for good performance in terms of ES
-- Price dynamics emerged way before the russian invaion into ukraine
+- Price dynamics emerged way before the Russian invasion into Ukraine
 - Linear dependence between the series reacted only right after the invasion
 - Improvements in forecasting performance is mainly attributed to:
 - the tails