Add new figures in Tikz
Signed-off-by: Riccardo Finotello <riccardo.finotello@gmail.com>
@@ -59,7 +59,7 @@ Thus getting also \hodge{2}{1} from \ml techniques is an important first step to
Finally, regression is also more useful for extrapolating results: a classification approach assumes that we already know all the possible values of the Hodge numbers and has difficulty predicting labels which do not appear in the training set.
This is necessary when we move to a dataset for which not all topological quantities have been computed, for instance CYs constructed from the Kreuzer--Skarke list of polytopes~\cite{Kreuzer:2000:CompleteClassificationReflexive}.
-The data analysis and \ml are programmed in Python using open-source packages: \texttt{pandas}~\cite{WesMcKinney:2010:DataStructuresStatistical}, \texttt{matplotlib}~\cite{Hunter:2007:Matplotlib2DGraphics}, \texttt{seaborn}~\cite{Waskom:2020:MwaskomSeabornV0}, \texttt{scikit-learn}~\cite{Pedregosa:2011:ScikitlearnMachineLearning}, \texttt{scikit-optimize}~\cite{Head:2020:ScikitoptimizeScikitoptimize}, \texttt{tensorflow}~\cite{Abadi:2015:TensorFlowLargescaleMachine} (and its high level API \emph{Keras}).
+The data analysis and \ml are programmed in Python using well-known open-source packages such as \texttt{pandas}~\cite{WesMcKinney:2010:DataStructuresStatistical}, \texttt{matplotlib}~\cite{Hunter:2007:Matplotlib2DGraphics}, \texttt{seaborn}~\cite{Waskom:2020:MwaskomSeabornV0}, \texttt{scikit-learn}~\cite{Pedregosa:2011:ScikitlearnMachineLearning}, \texttt{scikit-optimize}~\cite{Head:2020:ScikitoptimizeScikitoptimize}, \texttt{tensorflow}~\cite{Abadi:2015:TensorFlowLargescaleMachine} (and its high-level API \emph{Keras}).
Code is available on \href{https://thesfinox.github.io/ml-cicy/}{Github}.
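As a rough, hedged illustration of how the stack listed above fits together, a minimal pipeline might look as follows; the file name, the column names and the choice of regressor are invented for the example and are not taken from the paper or its repository.

# Hedged sketch: the packages listed above wired into a minimal pipeline.
# "cicy3.csv", "h11", "h21" and the regressor choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cicy3.csv")                  # flattened configuration matrices + labels
X = df.drop(columns=["h11", "h21"]).values
y = df["h11"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
# Hodge numbers are integers, so regression outputs are rounded before scoring.
pred = reg.predict(X_test).round().astype(int)
print("accuracy:", accuracy_score(y_test, pred))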
@@ -192,14 +192,14 @@ Below we show a list of the \cicy properties and of their configuration matrices
\begin{figure}[tbp]
\centering
-\begin{subfigure}[c]{.45\linewidth}
+\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width=\linewidth, trim={0 0.45in 6in 0}, clip]{img/label-distribution_orig}
\caption{\hodge{1}{1}}
\label{fig:data:hist-h11}
\end{subfigure}
\hfill
-\begin{subfigure}[c]{.45\linewidth}
+\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width=\linewidth, trim={6in 0.45in 0 0}, clip]{img/label-distribution_orig}
\caption{\hodge{2}{1}}
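For context, a two-panel histogram of the label distributions such as the one referenced above could be produced along the following lines with the plotting packages mentioned earlier; the data file and column names are assumptions made only for the sketch.

# Hedged sketch of a two-panel histogram of the Hodge numbers, in the
# spirit of img/label-distribution_orig; "cicy3.csv", "h11" and "h21"
# are assumed names, not the repository's actual files or columns.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("cicy3.csv")

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
sns.countplot(x=df["h11"], ax=axes[0])   # distribution of h^{1,1}
axes[0].set_title(r"$h^{1,1}$")
sns.countplot(x=df["h21"], ax=axes[1])   # distribution of h^{2,1}
axes[1].set_title(r"$h^{2,1}$")
fig.savefig("label-distribution_orig.pdf", bbox_inches="tight")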
@@ -1020,7 +1020,7 @@ Using the same network we also achieve \SI{97}{\percent} of accuracy in the favo
\centering
\begin{subfigure}[c]{0.475\linewidth}
\centering
-\includegraphics[width=\linewidth]{img/fc}
+\import{tikz}{fc.pgf}
\caption{Architecture of the network.}
\label{fig:nn:dense}
\end{subfigure}
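A minimal Keras sketch of a fully connected network of this kind is given below; the input shape, layer widths, dropout rate and optimiser are placeholders chosen for the example, not the configuration reported in the paper.

# Hedged sketch of a fully connected (dense) regression network in the
# spirit of fig:nn:dense.  All hyperparameters here are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_dense(input_shape=(12, 15, 1), units=(512, 256, 128), rate=0.2):
    model = keras.Sequential([keras.Input(shape=input_shape), layers.Flatten()])
    for n in units:
        model.add(layers.Dense(n, activation="relu"))
        model.add(layers.Dropout(rate))
    # A single ReLU output keeps the predicted Hodge number non-negative.
    model.add(layers.Dense(1, activation="relu"))
    model.compile(optimizer="adam", loss="mse")
    return model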
@@ -1099,7 +1099,7 @@ The convolution layers have $180$, $100$, $40$ and $20$ units each.
\begin{figure}[tbp]
\centering
-\includegraphics[width=0.75\linewidth]{img/ccnn}
+\import{tikz}{ccnn.pgf}
\caption{%
Pure convolutional neural network for predicting \hodge{1}{1}.
It is made of $4$ modules, each composed of a convolutional layer, ReLU activation and batch normalisation (in this order), followed by a dropout layer, a flatten layer and the output layer (in this order).
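Read together with the filter counts quoted above ($180$, $100$, $40$ and $20$), the architecture could be sketched in Keras roughly as follows; the kernel size, padding, dropout rate and output activation are assumptions, not the paper's exact choices.

# Hedged sketch of the pure convolutional network described in the caption:
# four (convolution -> ReLU -> batch normalisation) modules with 180, 100,
# 40 and 20 filters, then dropout, flatten and the output layer.
from tensorflow import keras
from tensorflow.keras import layers

def build_ccnn(input_shape=(12, 15, 1), filters=(180, 100, 40, 20), rate=0.2):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for f in filters:
        model.add(layers.Conv2D(f, kernel_size=(3, 3), padding="same"))  # kernel size assumed
        model.add(layers.Activation("relu"))
        model.add(layers.BatchNormalization())
    model.add(layers.Dropout(rate))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))          # predicted h^{1,1}
    model.compile(optimizer="adam", loss="mse")
    return model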
@@ -1204,7 +1204,7 @@ The callbacks helped to contain the training time (without optimisation) under 5
\begin{figure}[tbp]
\centering
-\includegraphics[width=0.9\linewidth]{img/icnn}
+\resizebox{\linewidth}{!}{\import{tikz}{icnn.pgf}}
\caption{%
In each concatenation module (here shown for the ``old'' dataset) we apply separate convolution operations over rows and columns, then concatenate the results.
The overall architecture is composed of 3 ``inception'' modules made of two separate convolutions, a concatenation layer and a batch normalisation layer (strictly in this order), followed by a dropout layer, a flatten layer and the output layer with ReLU activation (in this order).
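A rough functional-API sketch of such a module and of the three-module network is given below; the filter counts, the exact row/column kernel shapes, the branch activations, the dropout rate and the optimiser are assumptions made for illustration.

# Hedged sketch of the "inception" architecture described in the caption:
# each module runs two parallel convolutions, one along rows and one along
# columns, concatenates them and applies batch normalisation; three such
# modules are followed by dropout, flatten and a ReLU output.
from tensorflow import keras
from tensorflow.keras import layers

def inception_module(x, filters, rows, cols):
    row_conv = layers.Conv2D(filters, kernel_size=(1, cols), padding="same", activation="relu")(x)
    col_conv = layers.Conv2D(filters, kernel_size=(rows, 1), padding="same", activation="relu")(x)
    x = layers.Concatenate()([row_conv, col_conv])
    return layers.BatchNormalization()(x)

def build_icnn(input_shape=(12, 15, 1), filters=(32, 64, 32), rate=0.2):
    rows, cols = input_shape[:2]
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for f in filters:
        x = inception_module(x, f, rows, cols)
    x = layers.Dropout(rate)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="relu")(x)   # output layer with ReLU, as in the caption
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model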
@@ -1374,7 +1374,7 @@ Another reason is that the different algorithms may perform similarly well in th
\begin{figure}[tbp]
\centering
-\includegraphics[width=0.65\linewidth]{img/stacking}
+\resizebox{0.65\linewidth}{!}{\import{tikz}{stacking.pgf}}
\caption{Stacking ensemble learning with two levels of learning.}
\label{fig:stack:def}
\end{figure}
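As a point of reference, a two-level stacking ensemble of the kind pictured here can be assembled with scikit-learn as below; the particular first-level learners and the meta-learner are illustrative choices, not the ones benchmarked in the paper.

# Hedged sketch of two-level stacking: first-level learners produce
# out-of-fold predictions, and a second-level meta-learner combines them.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

stack = StackingRegressor(
    estimators=[
        ("svr", SVR()),
        ("forest", RandomForestRegressor(random_state=0)),
        ("boosting", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),   # second-level (meta) learner
    cv=5,                      # out-of-fold predictions feed the meta-learner
)
# stack.fit(X_train, y_train); stack.predict(X_test)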