From 5fc0d76f4b77ec2bf9e77895d5b6b647f13a6cb4 Mon Sep 17 00:00:00 2001
From: Riccardo Finotello
Date: Thu, 3 Dec 2020 22:25:34 +0100
Subject: [PATCH] Typo

Signed-off-by: Riccardo Finotello
---
 sec/app/ml.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sec/app/ml.tex b/sec/app/ml.tex
index 98a0547..7bec2be 100644
--- a/sec/app/ml.tex
+++ b/sec/app/ml.tex
@@ -66,7 +66,7 @@ For larger values of the hyperparameter $\alpha$, $w$ (and $b$) assume smaller v
 \subsection{Support Vector Machines for Regression}
 \label{sec:app:svr}
 
-This family of supervised \ml algorithms were created with classification tasks in mind~\cite{Cortes:1995:SupportvectorNetworks} but have proven to be effective also for regression problems~\cite{Drucker:1997:SupportVectorRegression}.
+This family of supervised \ml algorithms was created with classification tasks in mind~\cite{Cortes:1995:SupportvectorNetworks} but has proven to be effective also for regression problems~\cite{Drucker:1997:SupportVectorRegression}.
 Differently from the linear regression, instead of minimising the squared distance of each sample, the algorithm assigns a penalty to predictions of samples $x^{(i)} \in \R^F$ (for $i = 1, 2, \dots, N$) which are further away than a certain hyperparameter $\varepsilon$ from their true value $y$, allowing however a \textit{soft margin} of tolerance represented by the penalties $\zeta$ above and $\xi$ below.
 This is achieved by minimising $w,\, b,\, \zeta$ and $\xi$ in the function:\footnotemark{}
 \footnotetext{%
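The patched paragraph describes the ε-insensitive penalty used by SVR: predictions within a distance $\varepsilon$ of the true value cost nothing, while the slacks $\zeta$ and $\xi$ penalise the excess distance above or below the tube. A minimal NumPy sketch of that per-sample penalty (the function name and the sample values are illustrative, not from the source):

```python
import numpy as np

def epsilon_insensitive_penalty(y_true, y_pred, epsilon=0.1):
    """Penalty per sample: zero inside the epsilon tube around the
    target, linear in the excess distance outside it (the slack)."""
    residual = np.abs(y_true - y_pred)
    return np.maximum(0.0, residual - epsilon)

# Predictions within epsilon of the target incur no penalty;
# those further away are penalised only by the excess distance.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 2.0])
print(epsilon_insensitive_penalty(y_true, y_pred, epsilon=0.1))
# → [0.  0.4 0.9]
```

This is only the data-fit term; the full SVR objective mentioned in the text also minimises over $w$ and $b$ with a regularisation term on $w$, subject to the tube constraints.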