From b724c983723dc7b17b3aecd9aa93c1ec831a81d7 Mon Sep 17 00:00:00 2001
From: Riccardo Finotello
Date: Wed, 16 Dec 2020 14:17:52 +0100
Subject: [PATCH] Some modifications and presentation outline

Signed-off-by: Riccardo Finotello
---
 presentation.txt | 379 ++++++++++++++++++++---------------------------
 thesis.tex       |  18 +--
 2 files changed, 173 insertions(+), 224 deletions(-)

diff --git a/presentation.txt b/presentation.txt
index 00f7b8f..aef348c 100644
--- a/presentation.txt
+++ b/presentation.txt
@@ -175,386 +175,345 @@

However the choice is not unique and is labelled by the periodicity of the rotations.
The superposition of the solutions is however still not the final result.

-------------- TODO
-
-- page 36/102
+- page 29/102

The reason is a huge redundancy in the description: using the free parameters of the rotations we should in fact fix all degrees of freedom in the solution, which at the moment is an infinite sum involving an infinite number of free parameters.

- For the moment we only showed that the rotation matrix is equivalent to a monodromy matrix from which can build an initial solution.
-
-- page 37/102
+ For the moment we only showed that the rotation matrix is equivalent to a monodromy matrix from which we can build an overparametrised solution.

Using contiguity relations we can then restrict the sum to independent functions (that is functions which cannot be written as rational functions of contiguous hypergeometrics).

-- page 38/102
-
- Finally requiring the Euclidean action to be finite restricts the sum to only two terms.
-
-- page 39/102
+ Finally requiring the Euclidean action to be finite restricts the sum to only two terms (the particular terms surviving in the sum depend on the rotation vectors but they are never more than two).

Imposing the boundary conditions (that is fixing the intersection points) fixes the free constants in the solution.

-- page 40/102
+- page 30/102

- The physical interpretation of the solution is finally straightforward in the Abelian case, where the action can be reduced to a sum of the area of the triangles.
+ The physical interpretation of the solution is finally straightforward in the Abelian case, where the action can be reduced to the sum of the areas of the internal triangles (this is a general result even for a generic number of D-branes).

-- page 41/102
+- page 31/102

In the non Abelian case we considered there is no simple way to write the action using global data.
However the contribution to the Euclidean action is larger than in the Abelian case: the strings are in fact no longer constrained on a plane and, in order to stretch across the boundaries, they have to form a small bump while detaching from the D-brane.
The Yukawa coupling in this case is therefore suppressed with respect to the Abelian case.
Phenomenologically speaking, since the couplings are proportional to the mass of the scalar involved, the non Abelian case describes the coupling of lighter states.

-- page 42/102
+- page 32/102

We then turn our attention to fermions and the computation of correlators involving spin fields.

- Though ideally extending the framework introduced before, we abandon the intersecting D-brane scenario, and we introduce point-like defects on one boundary of the superstring worldsheeet in its time direction in such a way that the superstring undergoes a change of its boundary conditions when meeting a defect.
+ Though ideally extending some of the previous ideas, we abandon the intersecting D-brane scenario, and we introduce point-like defects on one boundary of the superstring worldsheet in its time direction in such a way that the superstring undergoes a change of its boundary conditions when meeting a defect.

-- page 43/102
+- page 33/102

- It is possible to show that in this case the Hamiltonian of the theory develops a time dependence: it is in fact conserved only between consecutive defects.
+ It is possible to show that in this case the Hamiltonian of the theory develops a time dependence since it is in fact conserved only between consecutive defects.

-- page 44/102
+- page 34/102

- Suppose now that we could expand the field in a basis of solutions to the boundary conditions and work, as before, on the entire complex plane.
+ Suppose now that we could expand the field on a basis of solutions to the boundary conditions and work, as before, on the entire complex plane.

-- page 45/102
+- page 35/102

Ideally we would be interested in extracting the modes in order to perform any computation of amplitudes.
The definition of the operation is connected to a dual basis whose form is completely fixed by the original field (which we know) and the requirement of time independence.

-- page 46/102
+- page 36/102

- The resulting algebra of the operators is in fact defined through such operation and is therefore time independent (as it should be for consistency).
+ The resulting algebra of the operators is in fact defined through such an operation and it is therefore time independent.

-- page 47/102
+- page 37/102

Differently from what was done in the bosonic case, we focus on U(1) boundary change operators.
The resulting monodromy on the complex plane is therefore a phase factor.

-- page 48/102
+- page 38/102

As in the previous case we can write a basis of solutions which incorporates the behaviour when looping around the point-like defects.
Consequently we can also define a dual basis.

- Notice that both fields are defined up to integer factors, since we are dealing with rotations (not unlike the previous bosonic case).
+ Notice that both fields are defined up to integer factors, since we are still dealing with rotations.

-- page 49/102
+- page 39/102

In order to compute amplitudes we then need to define the space on which the representation of the algebra acts.
We define an excited vacuum, annihilated by positive frequency modes, and the lowest energy vacuum (from the strip definition).

- None of these vacua is actually the usual invariant vacuum, as the short distance behaviour with the stress energy tensor shows the presence of operators responsible of the change in the boundary conditions.

-- page 50/102
+- page 40/102

- The vacua need to be consistent, leading to conditions labelled by an integer factor L relating the basis of solutions with its dual (and ultimately the algebra of operators).
+ Vacua need to be consistent, leading to conditions labelled by an integer factor L relating the basis of solutions with its dual (and ultimately the algebra of operators).

In fact the vacuum should always be correctly normalised and the description of physics using any two of the vacuum definitions should be consistently equivalent.

-- page 51/102
+- page 41/102

To avoid having overlapping in- and out-annihilators, the label L must vanish.

-- page 52/102
+- page 42/102

- With this frameword, the stress energy tensor displays as expected a time dependence due to the presence of the point-like defects.
- Specifically it shows that in each defect we have a primary boundary changing operators (whose weight depends on the monodromy epsilon) and which creates the excited vacuum from the invariant vacuum.
+ In this framework, the stress energy tensor displays, as expected, a time dependence due to the presence of the point-like defects.
+ Specifically it shows that in each defect we have a primary boundary changing operator (whose weight depends on the monodromy) which creates the excited vacuum from the invariant vacuum.

This is by all means an excited spin field.
Moreover the first order singularities display the interaction between pairs of excited spin fields.
Finally, and this is definitely fascinating, the stress energy tensor obeys the canonical OPE, that is the theory is still conformal (even though there is a time dependence).

-- page 53/102
+- page 43/102

- In formulae, the excited vacuum used in computations is created by a radially ordered product of excited spin fields.
+ In formulae, the excited vacuum used in computations is thus created by a radially ordered product of excited spin fields hidden in the defects.

-- page 54/102
+- page 44/102

We are therefore in a position to compute the correlators involving such spin fields (however, since we cannot compute the normalisation, we can only compute quantities not involving it).
For instance we reproduce the known result of bosonization where the boundary changing operator is realised through the exponential of a different operator.
+ Moreover, since we have complete control over the algebra of the fermionic fields, we can also compute any correlator involving both spin and matter fields.

-- page 55/102
+- page 45/102

We therefore showed that semi-phenomenological models need the ability to compute correlators involving twist and spin fields.

-- page 56/102
-
- We then introduced a framework to compute the instanton contribution to the correlators using intersecting D-branes.
-
-- page 57/102
-
- We then showed how to compute correlators in the fermionic case involving spin fields as point-like defects on the string worldsheet.
-
-- page 58/102
+ We then introduced a framework to compute the instanton contribution to the correlators using intersecting D-branes and we showed how to compute correlators in the fermionic case involving spin fields as point-like defects on the string worldsheet.

The question would now be how to extend this to non Abelian spin fields and, most importantly, to twist fields, where there is no framework such as bosonization.

-- page 59/102
+- page 46/102

- After considering defects and singular points in particle physics, we analyse time dependent singularities.
- These are defining properties of many cosmological models (such as the Big Bang singularity), and can be studied in different ways.
+ After considering defects and singular points in particle physics, we analyse time dependent singularities in cosmology.

-- page 60/102
+- page 47/102

As string theory is considered a theory of everything, its phenomenological description should in fact include both strong and electroweak forces as well as gravity.

-- page 61/102
+- page 48/102

In particular from the gravity and cosmology side, we would like to have a better view of the cosmological implications of string theory.

-- page 62/102
+- page 49/102

- For instance we could try to study Big Bang models to gain an improved insight with respect to field theory.
+ For instance we could try to study Big Bang models to gain better insight with respect to field theory.

-- page 63/102
+- page 50/102

For this, one way would be to build toy models of singularities in time, in which the singular point exists at one specific moment, rather than at one specific place.

-- page 64/102
+- page 51/102

- A simple way to make it so is to build toy models from time-dependent orbifolds.
+ A simple way to do so is to build toy models from time-dependent orbifolds, which can model singularities as their fixed points.

-- page 65/102
+- page 52/102

- Orbifolds were introduced in physics before their formalisation as rigorous mathematical entities.
- The mathematical idea of orbifold is similar in construction to the construction of differential manifolds, where the topological space inherits the topology of the left coset of M and a Lie Group G and where the charts encode the orbital partitions through a projection map.
- All in all, orbifolds are manifolds locally isomorphic to a quotient.
+ In the literature we can already find studies of the computation of amplitudes (mainly closed strings, since we are dealing with gravitational interactions).
+ The presence of divergences in N-point correlators is however usually associated with a gravitational backreaction due to the exchange of gravitons.

- From the physical point of view, the requirements are relaxed.
- In fact we usually focus on global quotients where the orbifold group is the isometry group.
- This leads to the presence of fixed points on the orbifold where additional states are located, the so called twisted sectors of the theory.
- They usually appear as singular limits of Calabi-Yau manifolds.
+- page 53/102

-- page 66/102
+ However the 4-tachyon amplitude in string theory is divergent already in the open string sector at tree level (thus we are sure no gravitational interaction is present).

- We shall use these to introduce singularities in time.
-
-- page 67/102
-
- In the literature we can already find efforts in the computation of amplitudes (mainly closed strings, since we are dealing with gravitational interactions).
- The presence of divergences in N-point correlators is however usually associated to a gravitational backreaction due to the exchange of gravitons.
-
-- page 68/102
-
- However the 4-tachyon amplitude in string theory is divergent already in the open string sector (at tree level, thus we are sure no gravitational interaction is present).
The effective field theory interpretation would be a 4-point interaction of scalar fields (higher spins would only spoil the behaviour).

-- page 69/102
+- page 54/102

- To investigate further, consider the null boost orbifold from D-dimensional Minkowski space through a change of coordinates (as such it shares most of the properties with the usual spacetime).
+ To investigate further, we consider the so called Null Boost Orbifold.
+ The construction starts from D-dimensional Minkowski spacetime through a change of coordinates.

-- page 70/102
+- page 55/102

- The orbifold is built through the periodic identification of one coordinate along the direction of its Killing vector.
+ The orbifold is then built through the periodic identification of one coordinate along the direction of its Killing vector.
+ Notice that the momentum in this direction will then have to be quantized to be consistent.

-- page 71/102
+- page 56/102

From these identifications, we can build the usual scalar wave function obeying the standard equations of motion.
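For concreteness, a minimal sketch of the construction in the conventions most commonly found in the literature (coordinate names, signs and normalisations here are assumptions for the illustration, not necessarily the ones used in the slides):

    % Flat metric in adapted coordinates; the orbifold identifies z ~ z + 2 pi Delta,
    % so the momentum l along z is quantized.
    \begin{align*}
      ds^2 &= -2 \dd{u} \dd{v} + u^2 \dd{z}^2 + \dd{\vec{x}}^2 \,,
      \qquad
      z \sim z + 2 \pi \Delta
      \;\Rightarrow\;
      l \in \tfrac{1}{\Delta} \mathds{Z} \,,
      \\
      % scalar wave function solving the Klein-Gordon equation in these coordinates
      \phi_{p_+, l, \vec{p}}
      &\propto
      \frac{1}{\sqrt{\abs{u}}}
      \exp\left[
        i \left(
          p_+ v + l z + \vec{p} \cdot \vec{x}
          - \frac{l^2}{2 p_+ u}
          + \frac{\vec{p}^{\,2} + m^2}{2 p_+} u
        \right)
      \right] \,.
    \end{align*}

The 1/sqrt(|u|) prefactor and the l^2/(2 p_+ u) phase are the "peculiar form" and the "strategically placed" quantized momentum referred to in the following page notes.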
+ Notice the behaviour in the time direction u, which already takes a peculiar form, and the presence of the quantized momentum in a strategic place.

-- page 72/102
+- page 57/102

- In order to introduce the problem we first consider a theory of scalar QED.
+ In order to introduce the divergence problem we first consider a theory of scalar QED.

-- page 73/102
+- page 58/102

When computing the interactions between the fields, the terms involved are entirely defined by two main integrals.
+ It might not be immediately visible, but given the behaviour of the scalar functions, any vertex interaction with more than 3 fields diverges.

-- page 74/102
+- page 59/102

The reason for the divergence is connected to the "strategically placed" quantized momentum.

- When when all quantized momenta vanish, in the limit of small u (that is near the singularity) the integrands develop isolated zeros which prevent the convergence.
+ When all quantized momenta vanish, in the limit of small u (that is near the singularity) the integrands develop isolated zeros preventing the convergence.

In fact, in this case, even a distributional interpretation (not unlike the derivative of a delta function) fails.

-- page 75/102
+- page 60/102

So far the situation is therefore somewhat troublesome.
In fact even the simplest theory presents divergences.

-- page 76/102
-
Moreover, obvious ways to regularise the theory do not work: for instance adding a Wilson line does not cure the problem as divergences also involve neutral strings which would not feel the regularisation.

-- page 77/102
-
The nature of the divergence is therefore not just gravitational, but there must be something hidden.

-- page 78/102
-
In fact the problems seem to arise from the vanishing volume in phase space along the compact direction: the issue looks geometrical, rather than strictly gravitational.

-- page 79/102
+- page 61/102

Since the field theory fails to give a reasonable value for amplitudes involving time-dependent singularities, we could therefore ask whether string theory can shed some light.

-- page 80/102
+- page 62/102

The relevant divergent integrals are in fact present also in string theory.
They arise from interactions of massive vertices (the first massive vertex is shown here).

- These vertices are usually overlooked as they do not play in general a relevant role.
- However it is possible that near the singularity they might actually come into game and give a contribution.
+
+ These vertices are usually overlooked as they do not in general play a relevant role at low energy.
+ However it is possible that near the singularity they might actually give a contribution.

These vertices are involved at low energy in the definition of contact terms (that is terms which do not involve exchange of vector bosons) in the effective field theory, which therefore lacks their definition.

-- page 81/102
+- page 63/102

In this sense even string theory cannot give a solution to the problem.
In other words, since the effective theory does not even exist, its high energy completion cannot provide a better description.

-- page 82/102
+- page 64/102

There is however one geometric way to escape this.

- Since the issues are related to a vanishing phase space volume, it is sufficient to add a non compact direction to the orbifold in which the particle is "free to escape".
+ Since the issues are related to a vanishing phase space volume, analytically speaking it is sufficient to add a non compact direction to the orbifold in which the particle is "free to escape".

-- page 83/102
+- page 65/102

- While the generalised null boost orbifold has basically the same definition through its Killing vector, the presence of the additional direction acts in a different way on the definition of the scalar functions.
+ While the Generalised Null Boost Orbifold has basically the same definition through one of its Killing vectors, the presence of the additional direction acts in a different way on the definition of the scalar functions.

As you can see the new time behaviour ensures better convergence properties, and the presence of the continuous momentum ensures that no isolated zeros are present at any time.
In fact even in the worst case scenario, the resulting amplitudes would still have a distributional interpretation.

-- page 84/102
+- page 66/102

- We therefore showed that divergences in the simplest theories are present both in field theory and string theory.
+ We therefore showed that divergences in the simplest theories are present both in field theory and in string theory, and that in the presence of singularities the string massive states start to play a role.

-- page 85/102
-
- And that in the presence of singularities, the string massive states start to play a role.
-
-- page 86/102
-
- The nature of the divergences is entirely due to vanishing volumes in phase space.
-
-- page 87/102
-
- But the introduction of "escape routes" for fields establishes a distributional interpretation of the amplitudes.
-
-- page 88/102
+ The nature of the divergences is however due to vanishing volumes in phase space and cannot be classified as simply a gravitational backreaction.
+ In fact the introduction of "escape routes" for fields grants a distributional interpretation of the amplitudes.

It is also possible to show that this is not restricted to "null boost" types of orbifolds, but even other kinds of orbifolds present the same issues.

-- page 89/102
+- page 67/102

- In summary we showed that the divergences cannot be regarded as simply gravitational, but even gauge theories present issues.
- Their nature is however subtle: string massive states are not usually taken into account when computing amplitudes.
+ In summary we showed that the divergences cannot be regarded as simply gravitational, but even gauge theories (that is the open sector of string theory) present issues.

-- page 90/102
+ Their nature is however subtle and connected to the interaction of string massive modes (or contact terms in the low energy formulation) which are not usually studied in detail.

- We finally move to the last chapter involving methods for phenomenology in string theory.
- After the analysis of semi-phenomenological analytical models, we now consider a computational task related to compactifications of extra-dimensions.
+- page 68/102

-- page 91/102
+ We finally move to the last part involving tools for phenomenology in string theory.
+
+ After the analysis of semi-phenomenological analytical models, we now consider a computational task related to compactifications of extra dimensions using machine learning.
+
+- page 69/102

We focus on Calabi-Yau manifolds in three complex dimensions.
+ Due to their properties and their symmetries, the relevant topological invariants are two Hodge numbers: they are integers and in general can be difficult to compute.
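For reference, the two Hodge numbers also fix the Euler characteristic of the manifold (a standard relation for Calabi-Yau 3-folds, quoted here only as background and referred to again near the end of the presentation):

    \begin{equation*}
      \chi = 2 \left( h^{1,1} - h^{2,1} \right)
    \end{equation*}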
+ As the number of possible Calabi-Yau 3-folds is astonishingly huge, we focus on a subset.

-- page 92/102
+- page 70/102
+
+ Specifically we focus on manifolds built as intersections of hypersurfaces in projective spaces, that is intersections of several homogeneous equations in the complex coordinates of the manifold.

- Specifically we focus on manifolds built as intersections of projective spaces, that is intersections of several homogeneous equations in the complex coordinates of the manifold.
As we are interested in studying these manifolds as topological spaces we do not care about the coefficients, but only the exponents.
The intersection is complete in the sense that it is non degenerate.

-- page 93/102
+- page 71/102
+
+ The intersections can be generalised to multiple projective spaces and equations and the manifold can be characterised by a matrix containing the powers of the coordinates in each equation.

- The intersections can be generalised to multiple projective spaces and equations.
The problem we are interested in is therefore to be able to take the so called "configuration matrix" of the manifolds and predict the value of the Hodge numbers.
Formally this is a map from a matrix to a natural number.

-- page 94/102
+- page 72/102

The real issue is now how to treat the configuration matrix and how to build such a map.

-- page 95/102
+- page 73/102

We use a machine learning approach.
+ In very simple words it means that we want to find a new representation of the input (possibly parametrised by some weights which we can tune and control) such that the predicted Hodge numbers are as close as possible to the correct result.
+ In this sense the machine has to learn some way to transform the input to get a result close to what in the computer science literature is called the "ground truth".

- The measure of proximity is called "loss function" or "Lagrangian function" (with a slight abuse of naming conventions).
+
+ The measure of proximity or distance is called "loss function" or "Lagrangian function" (with a slight abuse of naming conventions).
The machine then learns some way to minimise this function (for instance using gradient descent methods and updating the previously mentioned weights).

-- page 96/102
+- page 74/102

We thus exchange the difficult problem of finding an analytical solution for an optimisation problem (it does not imply "easy", but it is at least doable).

-- page 97/102
+- page 75/102

- In order to learn the best way to change the input representation, we can rely on a vast computer science literature and use large physics datasets containing lots of samples from which to infer a structure.
+ In order to learn the best way of doing this, we can rely on a vast computer science literature and use large physics datasets containing lots of samples from which to infer a structure.

-- page 98/102
+- page 76/102

In this sense the approach can merge techniques from physics, mathematics and computer science benefiting from advancements in all fields.

-- page 99/102
+- page 77/102

The approach can furthermore provide a good way to analyse data, infer structure and advance hypotheses which could otherwise end up overlooked using traditional brute force algorithms.
In this case we focus on the prediction of two Hodge numbers with very different distributions and ranges.

- The data we consider were computed using top of the class computing power at CERN in the 80s, with a huge effort of the string theory community.
- In this sense Complete Intersection Calabi-Yau manifolds are a good starting point to investigate the application of machine learning techniques: they are well studied and completely characterised.
+ The data we consider were computed using top of the class computing power at CERN in the 80s, with a huge effort by the string theory community.
+ In this sense Complete Intersection Calabi-Yau manifolds are a good starting point to investigate the application of machine learning techniques because they are well studied and characterised.

-- page 100/102
+- page 78/102

- The dataset we use contains less than 10000 manifolds.
+ The dataset we use contains fewer than 10000 manifolds (in machine learning terms it is still small).

-- page 101/102
-
- From these we remove product spaces (recognisable by their block diagonal form of the configuration matrix) and we remove very high values of the Hodge numbers from training.
-
- Mind that in this sense we are simply not feeding the machine "extremal" configurations in an attempt to push as far as possible the application: should the machine learn a good representation, it should be automatically capable of learning also those configurations without a human manually feeding them.
-
-- page 102/102
-
- With respect to previous distributions, we therefore consider a smaller subset of matrices for training.
-
-- page 103/102
+ From these we remove product spaces (recognisable by the block diagonal form of their configuration matrix) and we remove very high values of the Hodge numbers to avoid learning "extremal configurations".
+ In this sense we are simply not feeding the machine "extremal" configurations in an attempt to push the application as far as possible: should the machine learn a good representation, it will automatically be capable of learning also those configurations without a human manually feeding them.
+
We then define three separate folds: the largest contains training data used by the machine to adjust the parametrisation, 10% of the data is then used for intermediate evaluation of the process (for instance to prevent the machine from overfitting the data in the training set), while the last subset is used to give the final predictions.
Differently from the validation set, the test set has not been seen by the machine and therefore can reliably test the generalisation ability of the algorithm.

+ Differently from previous approaches we consider this as a regression task in the attempt to let the machine learn a true map between the configuration matrix and the Hodge numbers (if needed we can also discuss the classification approach as it has some interesting applications itself).

-- page 104/102

- Differently from previous approaches we consider this as a regression task in the attempt to let the machine learn a true map between the configuration matrix and the Hodge numbers, rather than a classification algorithm (in case we can also discuss the classification approach as it has some interesting applications itself).

-- page 105/102
+- page 79/102

The distributions of the Hodge numbers therefore present fewer outliers than the initial dataset, but as you can see we expect the result to be similar even without the procedure, since the number of outliers removed is small.

+ In fact we also proved it and, if anyone is interested, we can also discuss a different, more "machine learning accurate" approach to the task that we adopted.
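As an illustration of the three folds described above, a minimal sketch (the 80/10/10 proportions, the use of scikit-learn and all variable names are assumptions made for the example, not the exact setup of the analysis):

    # Minimal sketch of the training/validation/test folds described above
    # (assumed 80/10/10 split on placeholder data).
    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.integers(0, 6, size=(1000, 180))   # stand-in for flattened configuration matrices
    y = rng.integers(1, 20, size=1000)         # stand-in for h^{1,1}

    # 10% kept aside as the test set: never seen by the machine during training.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
    # A further 10% of the total used as the validation set for intermediate
    # evaluation (e.g. to monitor overfitting on the training set).
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=1 / 9, random_state=0)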
+

-- page 106/102
+- page 80/102

The pipeline we adopt is the same one used at industrial level by companies and data scientists.
We in fact heavily rely on data analysis to improve the output as much as possible.

-- page 107/102
+- page 81/102

- This for instance can be done by including redundant information, that is by feeding the machine variables which can be manually derived: by definition they are redundant but can be used to easily learn a pattern.
+ This for instance can be done by including additional information with respect to the configuration matrix, that is by feeding the machine variables which can be manually derived: by definition they are redundant but can be used to easily learn a pattern.

In fact as we can see most of the features such as the number of projective spaces or the number of equations in the matrix are heavily correlated with the Hodge numbers.

- Moreover even using algorithms to produce a ranking of the variables show that such "engineered features" are much more important than the configuration matrix itself.
+ Moreover even algorithms used to produce a ranking of the variables, such as decision trees, show that such "engineered features" are much more important than the configuration matrix itself.

Here we can see some of the scalar variables ranked against each other.

-- page 108/102
+- page 82/102

Using the "engineered data", we now get to the choice of the algorithm.
There is no general rule in this, even though there might be good guidelines to follow.

-- page 109/102
+- page 83/102

Though the approach is clearly "supervised" in the sense that the machine learns by approximating a known result, we also tried other approaches in an attempt to generate additional information which the machine could use.

- The first approach is a clustering algorithm, intuitively used to look for a notion of "proximity" between the matrices.
+ The first approach is a clustering algorithm, intuitively used to look for a notion of "proximity" between the configuration matrices.

This however did not play a role in the analysis.

The other is definitely more interesting and it consists in finding a better representation of the configuration matrix using fewer components.

- The idea is therefore to "squeeze" or "concentrate" the information in a lower dimension (matrices in our case have 180 components, so we are trying to aim for something less than that).
+ The idea is therefore to "squeeze" or "concentrate" the information in a lower dimensional space (matrices in our case have 180 components, so we aim for something smaller than that).

-- page 110/102
+- page 84/102

- In general however we first relied on traditional regression algorithms, such as linear models, support vector machines and boosted decision trees.
+ For the predictions we first relied on traditional regression algorithms, such as linear models, support vector machines and boosted decision trees.

I will not enter into the details and differences between the algorithms, but we can indeed discuss them.

-- page 111/102
+- page 85/102

- Let me however say a few words a dimensionality reduction procedure known as "principal components analysis" (or PCA for short).
+ Let me however say a few words about a dimensionality reduction procedure known as "principal components analysis" (or PCA for short), since this is going to be part of my future.

Suppose that we have a rectangular matrix (which could be the number of samples in the dataset times the number of components of the matrix once it has been flattened).
@@ -563,37 +522,39 @@

This is usually used to isolate a signal from a noisy background.
Thus by isolating only the meaningful components of the matrix we can hope to help the algorithm.

-- page 112/102
+- page 86/102

Visually PCA is used to isolate the eigenvalues of the covariance matrix (or the singular values of the matrix) which do not belong to the background.
From random matrix theory we know that the eigenvalues of an independently and identically distributed matrix (a Wishart matrix) follow a Marchenko-Pastur distribution.
+ Such a matrix containing a signal would therefore be recognised by the presence of eigenvalues outside this probability distribution.

We could therefore simply keep the corresponding eigenvectors.
In our case this resulted in an improvement of the accuracy, obtained by retaining less than half of the components of the matrix (corresponding to 99% of the variance of the initial set).

-- page 113/102
+- page 87/102

As we can see we used several algorithms to evaluate the procedure.
Previous approaches in the literature mainly relied on the direct application of algorithms to the configuration matrix.

- It seems that the family of support vector algorithms work best with a large number of data used for training.
+ We extended this beyond the previously considered algorithms (mainly support vector machines) to decision trees and linear models.

-- page 114/102
+- page 88/102

Techniques such as feature engineering and PCA provide a huge improvement (even with less training data).

- Let me for instance point out the fact the even a simple linear regression represents a large improvement over previous attempts.
+ Let me for instance point out the fact that even a simple linear regression reaches the same level of accuracy previously obtained by more complex algorithms, even with much less training data.

-- page 115/102
+- page 89/102

However this does not conclude the landscape of algorithms used in machine learning.
In fact we also used neural network architectures.

- They are an entire class of function approximators which use (some variants of) gradient descent to optimise the weights.
+ They are a class of function approximators which use (some variants of) gradient descent to optimise the weights.
Their layered structure is key to learning highly non linear and complicated functions.

We focused on two distinct architectures.

- The older fully connected network were mostly employed in previous attempts at predicting the Hodge numbers.
+
+ The older fully connected networks were employed in previous attempts at predicting the Hodge numbers.
They rely on a series of matrix operations to create new outputs from previous layers.
In this sense the matrix W and the bias term b are the weights which need to be updated.
Each node is connected to all the outputs, hence the name fully connected or densely connected.

The second architecture is called convolutional from the iterated application of "sliding window functions" (that is convolutions) on the layers.

-- page 116/102
+- page 90/102

Convolutional networks have several advantages over a fully connected approach.
Since the input in this case does not need to be flattened, convolutions retain the notion of vicinity between cells in a grid (here we have an example of a configuration matrix as seen by a convolutional neural network).
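Going back for a moment to the PCA step described a few pages above, a minimal sketch of the variance-based truncation (the 99% threshold is the one quoted in the notes; the use of scikit-learn and the placeholder data are assumptions for the example):

    # Keep only the principal components explaining 99% of the variance of the
    # flattened configuration matrices before running the regression algorithms.
    # (The Marchenko-Pastur criterion mentioned above would instead keep the
    # eigenvalues lying outside the random-matrix bulk.)
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_train = rng.integers(0, 6, size=(800, 180)).astype(float)  # placeholder data

    pca = PCA(n_components=0.99)          # a float in (0, 1) selects by explained variance
    X_train_reduced = pca.fit_transform(X_train)
    print(f"{pca.n_components_} components retained out of {X_train.shape[1]}")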
-- page 117/102

+ Since they do not have one weight for each connection, they have a smaller number of parameters (proportional to the size of the window) to be updated (in our specific case we cut the number of parameters used by more than one order of magnitude).

- Since they do not have one weight for each connection, they have a smaller number of parameters to be updated (in our specific case we cut more more than one order of magnitude the number of parameters used).

+ Moreover weights are shared by adjacent cells, meaning that if there is a structure to be inferred, this is the way to go to exploit the "artificial intelligence" underlying the operations involved.

-- page 118/102
-
- Moreover weights are shared by adjacent cells, meaning that if there is a structure to be inferred, this is the way to go.
-
-- page 119/102
+- page 91/102

In this sense a convolutional architecture can isolate defining features of the output and pass them to the following layer as in the animation.
+ Using a computer science analogy, this is used to classify objects given a picture: a convolutional neural network is literally capable of isolating what makes a dog a dog and what distinguishes it from a cat (even more specifically it can separate a Labrador from a Golden Retriever).

-- page 120/102
+- page 92/102

- This has in fact been used in computer vision tasks in recent year for pattern recognitions, object detections and even spatial awareness tasks (for instance isolate the foreground from the background).
+ This has in fact been used in computer vision tasks in recent years for pattern recognition, object detection and spatial awareness tasks (for instance to isolate the foreground from the background).

In this sense this is the closest approximation of artificial intelligence in supervised tasks.

-- page 121/102
+- page 93/102

My contribution in this sense is inspired by deep learning research at Google.
In recent years they were able to devise new architectures using so called "inception modules" in which different convolution operations are used concurrently.

- The architecture holds better generalisation properties since more features can be detected and processed at the same time.
+ The architecture has better generalisation properties since more features can be detected and processed at the same time.

-- page 122/102
+- page 94/102

- In our case we decided to go for two concurrent convolutions one scanning each equation (the vertical kernel) of the configuration matrix, while a second convolutions scans each projective space (in horizontal).
+ In our case we decided to go for two concurrent convolutions: one scanning each equation of the configuration matrix (the vertical kernel), while a second convolution scans the projective spaces (the horizontal one).

The layer structure is then concatenated until a single output is produced (that is, the Hodge number).

- The idea is to be able to learn a relation between between projective spaces and equations and recombine them to find a new representation.
+ The idea is that this way the network can learn a relation between projective spaces and equations and recombine them to find a new representation.

-- page 123/102
+- page 95/102

As we can see even the simple introduction of a traditional convolutional kernel (it was a 5x5 kernel in this case) is sufficient to boost the accuracy of the predictions (results by Bull et al. in 2018 reached only 77% of accuracy on h^{1,1}).
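A minimal sketch of the two concurrent convolutions just described (the 12x15 input shape, the layer sizes and the use of Keras are assumptions for the illustration, not the exact architecture used in the thesis):

    # Two concurrent convolutions on the configuration matrix: a vertical kernel
    # spanning whole columns (the equations) and a horizontal kernel spanning
    # whole rows (the projective spaces), concatenated into a single regression output.
    import tensorflow as tf
    from tensorflow.keras import layers

    rows, cols = 12, 15                                   # assumed padded matrix size
    inputs = tf.keras.Input(shape=(rows, cols, 1))

    vertical = layers.Conv2D(32, kernel_size=(rows, 1), activation="relu")(inputs)
    horizontal = layers.Conv2D(32, kernel_size=(1, cols), activation="relu")(inputs)

    merged = layers.Concatenate()([layers.Flatten()(vertical), layers.Flatten()(horizontal)])
    hidden = layers.Dense(64, activation="relu")(merged)
    output = layers.Dense(1)(hidden)                      # the predicted Hodge number

    model = tf.keras.Model(inputs, output)
    model.compile(optimizer="adam", loss="mse")

In the full analysis several such blocks would be stacked and concatenated until the single output is produced, as described in the notes above.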
-- page 124/102
+- page 96/102

- The introduction of the Inception architecture has however has major advantages: it uses even less parameters than "traditional" convolutional networks, it boosts the performance reaching near perfect accuracy, it needs a lot less data (even with just 30% of the data for training, the accuracy is already near perfect).
+ The introduction of the Inception architecture has major advantages: it uses even fewer parameters than "traditional" convolutional networks, it boosts the performance reaching near perfect accuracy, and it needs a lot less data (even with just 30% of the data for training, the accuracy is already near perfect).

- Moreover with this architecture we were able to predict also h^{2,1} with 50% accuracy: even if does not look a reliable method to predict it (I agree, for now), mind that previous attempts have usually avoided computing it, or they reached accuracies as high as 8-9%.
+ Moreover with this architecture we were able to predict also h^{2,1} with 50% accuracy: even if it does not look like a reliable method to predict it (I agree, for now), mind that previous attempts have usually avoided computing it, or they reached accuracies as high as 8-9% (even feature engineering could boost it only to around 35%).

- The network is also solid enough to predict both Hodge numbers at the same time: trading a bit of the accuracy for better generalisation, it is in fact possible to let the machine learn the existing relation between the Hodge numbers without specifically inputing anything (for instance by inserting the fact that the difference of the Hodge numbers is the Euler characteristic).
+ The network is also solid enough to predict both Hodge numbers at the same time: trading a bit of the accuracy for a simpler model, it is in fact possible to let the machine learn the existing relation between the Hodge numbers without specifically inputting anything (for instance by inserting the fact that the difference of the Hodge numbers is proportional to the Euler characteristic).

-- page 125/102
+- page 97/102

- Deep learning can therefore be used conscientiously (and I cannot stress this enough) as a predictive method.
+ Deep learning can therefore be used conscientiously (and I cannot stress this enough) as a predictive method, provided that one is able to analyse the data (no black boxes should ever be admitted).

-- page 126/102
-
- Provided that one is able to analyse the data (no black boxes should ever be admitted).
-
-- page 127/102
-
- It can also be used a source of inspiration for inquiries and investigations.
-
-- page 128/102
-
- Always provided a good analysis is done beforehand (deep learning is a black box in the sense that once it starts it is difficult to keep track of what is happening under the bonnet, but not because we supposedly do not know what is going on in general).
-
-- page 129/102
+- page 98/102

+ It can also be used as a source of inspiration for inquiries and investigations, always provided a good analysis is done beforehand (deep learning is a black box in the sense that once it starts it is difficult to keep track of what is happening under the bonnet, but not because we supposedly do not know what is going on in general).
+
+- page 99/102

Deep learning can also be used for generalisation of patterns and relations.
-
-- page 130/102
-
As always, only after careful consideration.

-- page 131/102
+- page 100/102

Moreover convolutional networks look promising, with a lot of unexplored potential.
This is in fact the first time in which they have been successfully used in theoretical physics. -- page 132/102 - Finally, this is an interdisciplinary approach in which a lot is yet to be learned from different perspective. Just think of the entire domain of geometric deep learning in computer science where the underlying structures of the process are investigated: surely mathematics and theoretical physics could provide a framework for it. -- page 133/102 +- page 101/102 More directions to investigate now remain. - In fact one could in principle exploit freedom in representing the configuration matrices to learn the best possible representation, and use all symmetries to try new strategies to improve the results. + + In fact one could in principle exploit freedom in representing the configuration matrices to learn the best possible representation. + Otherwise one could start to think about this in a mathematical embedding and study what happens in higher dimensions (where the number of manifolds is larger: almost one million complete intersections). + Moreover, as I was saying, this could be used as an attempt to study formal aspects of deep learning, or even more to directly dive into the "real artificial intelligence" and start to study the problem in a reinforcement learning environment where the machine automatically learns a task without knowing the final result. - page 102/102 I will therefore leave the open question as to whether this is actually going to be the end or just the start of something else. + In the meantime I thank you for your attention. diff --git a/thesis.tex b/thesis.tex index cba4794..40ec158 100644 --- a/thesis.tex +++ b/thesis.tex @@ -447,7 +447,7 @@ \quad \Rightarrow \quad - \highlight{\mathrm{U}(N)} + \highlight{$\mathrm{U}(N)$} \end{equation*} \end{column} \hfill @@ -1164,7 +1164,7 @@ \pause \begin{block}{Divergences} - Even in simple models (e.g.\ NBO, more on this later) the $4$ tachyons amplitude is divergent \textbf{at tree level}: + Even in simple models (e.g.\ NBO, more on this later) the $4$ tachyons amplitude is divergent \textbf{in the open sector at tree level}: \begin{equation*} A_4 \sim \int\limits_{q \sim \infty} \frac{\dd{q}}{\abs{q}} \mathscr{A}( q ) \end{equation*} @@ -1358,7 +1358,7 @@ \item \textbf{non compact} orbifold directions $\Rightarrow$ interpretation of \textbf{amplitudes as distributions} - \item issue not restricted to NBO/GNBO but also BO, null brane, etc. (it is a \textbf{general issues} connected to the geometry of the underlying space) + \item issue not restricted to NBO/GNBO but also BO, null brane, etc. (it is a \textbf{general issue} connected to the geometry of the underlying space) \end{itemize} \vfill @@ -1428,7 +1428,7 @@ \begin{tabular}{@{}lccc@{}} $\mathscr{R}\colon$ & - $\mathds{Z}^{m \times k}$ + $\mathds{N}^{m \times k}$ & $\longrightarrow$ & @@ -1711,13 +1711,13 @@ \only<2->{\includegraphics[width=\columnwidth]{img/cicy_best_plots.pdf}} \end{column} \hfill - \visible<2->{ - \begin{column}{0.5\linewidth} - \centering + \begin{column}{0.5\linewidth} + \centering + \only<2->{ \includegraphics[width=0.75\columnwidth]{img/inc_nn_learning_curve_h11.pdf} \includegraphics[width=0.75\columnwidth]{img/inc_nn_learning_curve.pdf} - \end{column} - } + } + \end{column} \end{columns} \end{frame}