Study of the effects of the galactic magnetic field on UHECR propagation with the Pierre Auger Observatory.
Authorship
C.C.G.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Cosmic rays are charged particles, accelerated in space, that reach the Earth at a rate of 1000 particles per square kilometer per second. Among these, those of the highest energy, called Ultra High Energy Cosmic Rays (UHECR), stand out, with energies above 1 EeV and an arrival rate of less than 1 per square kilometer per year. Although it is known that, due to their high energies, they must come from sources powerful enough to accelerate them to these magnitudes (e.g., active galactic nuclei), these sources are not known exactly. One way to identify them is to determine the arrival direction of the particles, and here the galactic magnetic field plays a crucial role, since the Lorentz force deflects the trajectories of these moving charges. The aim of this work is to assess how influential this field is and what implications it has, by performing different simulations that provide distributions of initial and final cosmic-ray directions, and by analyzing the resulting deflections. These simulations are then compared with some of the data collected by the Pierre Auger Observatory, located in Argentina, which hosts the largest UHECR detector currently in existence.
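As an editorial illustration of the deflection mechanism described above (not the method of the thesis, which relies on full simulations and Auger data), the sketch below propagates an ultra-relativistic proton through an assumed uniform 3 microgauss field with a simple RK4 step; realistic studies use structured Galactic-field models and dedicated propagation codes.

    import numpy as np

    # Illustrative sketch only: deflection of an ultra-relativistic proton in an
    # assumed *uniform* magnetic field. None of the numbers below come from the thesis.
    KPC = 3.086e19                      # metres per kiloparsec
    E_J = 1e19 * 1.602e-19              # 10 EeV proton energy, in joules
    Q, C = 1.602e-19, 2.998e8           # proton charge (C), speed of light (m/s)
    B = np.array([0.0, 0.0, 3e-10])     # hypothetical 3 microgauss field, in tesla

    def dndt(n):
        # For E >> m c^2 the momentum magnitude is constant (B does no work),
        # so only the direction n changes: dn/dt = (q c^2 / E) n x B
        return (Q * C**2 / E_J) * np.cross(n, B)

    n = np.array([1.0, 0.0, 0.0])       # initial direction of propagation
    dt = 0.01 * KPC / C                 # time step equivalent to 0.01 kpc of path
    for _ in range(200):                # RK4 over roughly 2 kpc of travel
        k1 = dndt(n)
        k2 = dndt(n + 0.5 * dt * k1)
        k3 = dndt(n + 0.5 * dt * k2)
        k4 = dndt(n + dt * k3)
        n = n + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        n /= np.linalg.norm(n)          # keep it a unit vector

    deflection = np.degrees(np.arccos(np.clip(n[0], -1.0, 1.0)))
    print(f"deflection after ~2 kpc: {deflection:.1f} degrees")

The corresponding Larmor radius E/(qcB) is only a few kiloparsecs for EeV protons in microgauss fields, which is why the galactic deflections studied in the thesis are far from negligible.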
Direction
CAZON BOADO, LORENZO (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN, SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Ionic liquids in energy applications: environmental effects
Authorship
A.D.C.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
Ionic liquids are formed solely by ions and have a very low melting point compared to other salts. These compounds have received great interest in recent years due to properties that make them especially suitable for use as solvents, catalysts, lubricants or electrolytes. One of the main qualities that makes them attractive for different applications is their low volatility, which allows them to be considered green liquids because they do not pollute the atmosphere. However, the study of their toxicity is still necessary, since a spill of these compounds affecting the terrestrial and/or aquatic environment cannot be ruled out. To carry out this study, a series of ionic liquids from different cationic families (imidazolium, pyridinium, pyrrolidinium and ammonium) were chosen and their toxicity was analyzed using the Microtox toxicity test. This widely used methodology is based on the reduction of the bioluminescence of the marine bacterium Aliivibrio fischeri as a consequence of the addition, in different doses, of these ionic liquids. The parameter used is the EC50, which corresponds to the dose that causes a 50% reduction in the emitted light intensity. The results obtained indicate that most of them have low toxicity and are therefore suitable for industry. Furthermore, it has been shown that increasing the length of the alkyl chain of the cation gives rise to more toxic compounds.
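For readers unfamiliar with the EC50 parameter mentioned above, the snippet below shows one common way to extract it, fitting a log-logistic dose-response curve to luminescence-inhibition data; the concentrations and inhibition values are hypothetical and the actual Microtox protocol and data of the thesis are not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    def dose_response(c, ec50, hill):
        # fraction of bioluminescence lost at concentration c (log-logistic model)
        return 1.0 / (1.0 + (ec50 / c) ** hill)

    conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])       # mg/L, hypothetical
    inhibition = np.array([0.08, 0.18, 0.35, 0.55, 0.74, 0.90])  # hypothetical data

    (ec50, hill), _ = curve_fit(dose_response, conc, inhibition, p0=[50.0, 1.0])
    print(f"EC50 = {ec50:.1f} mg/L (dose giving 50% light reduction), Hill slope = {hill:.2f}")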
Direction
PARAJO VIEITO, JUAN JOSE (Tutorships)
SALGADO CARBALLO, JOSEFA (Co-tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Study of thermophysical and electrical properties of a commercial lubricant in the presence of nanoadditives
Authorship
L.L.P.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
This Final Degree Project is part of a research project on the tribological characterization of nanolubricants as possible automatic transmission fluids for electric vehicles. The thermophysical and electrical properties of a formulated oil (Matic DCT) doped with silicon oxide (SiO2) nanoparticles of different sizes and concentrations were studied. Its chemical composition was characterized through FTIR spectrometry. A decrease in density and viscosity was observed with increasing temperature, as well as a relative increase with increasing concentration in samples doped with nanoparticles of the same size. The variation with respect to the formulated oil grows as the temperature and concentration increase, but in no case exceeds 1.2%. Similar behavior was observed in viscosity, where the variation with respect to the pure formulated oil also increases as the temperature and the concentration of nanoadditives increase; in this case the variation reaches up to 3%. The viscosity index also increases with the introduction of nanoparticles. All cases show an anomalous behavior in which smaller nanoparticles produce a greater variation than larger ones, probably due to the presence of moisture in the samples, so the preparation of the nanolubricants had to be repeated with a drying phase before measuring the remaining properties. As for the electrical conductivity, no clear pattern is observed with the introduction of larger or smaller SiO2 nanoparticles, and the maximum variation is less than 4%, so it falls within the experimental measurement uncertainty. An increase in the contact angle was also observed upon introduction of the nanoadditives, proportional to the increase in concentration and consistent with the increase in surface tension, in this case with larger nanoparticles producing the greater variation.
Direction
GARCIA GUIMAREY, MARIA JESUS (Tutorships)
GINER RAJALA, OSCAR VICENT (Co-tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO, RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Magnetic (nano)superballs: competition between shape, size, and anisotropy
Authorship
I.L.V.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
This work presents a detailed investigation of the properties of magnetic nanoparticles (MNPs) as a function of their shape, size, and anisotropy. Using micromagnetic simulations with the OOMMF software, different configurations of MNPs with superball geometry, i.e., ranging from the sphere to the cube through intermediate shapes, are analyzed. The main objective is to observe how these variables influence the spontaneous magnetization and the stability of the magnetic moments. The work presents a theoretical introduction to the fundamental concepts, followed by a detailed description of the procedure used, which includes the definition of the physical and computational model employed in the simulations. The analysis focuses on the energetic contributions that affect the magnetic behavior of MNPs, highlighting the importance of geometry and anisotropy in determining their properties.
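For reference, the superball geometry mentioned in the summary is usually written with a single shape parameter; a possible form (the exact convention used in the thesis is not stated here) is

    \left|\frac{x}{a}\right|^{2p} + \left|\frac{y}{a}\right|^{2p} + \left|\frac{z}{a}\right|^{2p} \le 1,

where a is the semi-axis, p = 1 recovers the sphere, p -> infinity the cube, and intermediate values of p interpolate between the two.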
Direction
SERANTES ABALO, DAVID (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Hyperspectral images of polarized light microscopy
Authorship
R.S.B.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
Polarized light microscopy combines the use of a conventional optical microscope with various optical devices. This combination allows for a detailed study of the optical properties of materials, especially anisotropic ones. This work covers the main operating modes of a polarized light microscope and the fundamental characteristics that can be obtained through this type of analysis. Some of these techniques are applied to the study of various birefringent samples, including some ionic liquid crystals. Additionally, a hyperspectral camera is available, which allows the image at each point of the sample to be resolved into its spectrum, furthering the understanding of the optical properties of the materials.
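As background to the contrast mechanism exploited in this kind of microscopy (standard optics, not a result of the thesis), a birefringent sample of thickness d and birefringence Delta n placed between crossed polarizers transmits

    I = I_0 \sin^2(2\theta)\,\sin^2\!\left(\frac{\delta}{2}\right), \qquad \delta = \frac{2\pi}{\lambda}\,\Delta n\, d,

where theta is the angle between the sample's optic axis and the polarizer; the wavelength dependence of delta is what a hyperspectral camera can resolve point by point.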
Direction
DE LA FUENTE CARBALLO, RAUL (Tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM, CHRISTOPH (Member)
Comparative study of the electronic structure of superconducting nickel oxides and copper oxides.
Authorship
G.R.R.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
The main objective of this final degree project is to study and compare the high-temperature superconducting compounds CaCuO2 and LaNiO2. The copper oxides have Cu2+ cations and the nickel oxides have Ni+ cations, and even though they have the same number of electrons in the d band, they show different electronic properties. We start by studying the crystal field structure of these compounds, considering the octahedral configuration, the Jahn-Teller effect and the square-planar environment. Afterwards, we explain Density Functional Theory (DFT) as a way of solving the multi-electron problem. We introduce the WIEN2k software and show the steps to follow to obtain the simulations needed for this comparison. Finally, we compare and explain the electronic results obtained for both compounds in the spinless and spin-polarized configurations based on Crystal Field Theory. It is worth mentioning that for the spin-polarized simulation we consider the antiferromagnetic and ferromagnetic configurations, describing electronically only the first one (it has the lowest energy).
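A note on the electron count mentioned above: both cations are formally d9, which is what makes the cuprate-nickelate comparison meaningful,

    \mathrm{Cu^{2+}}:\ [\mathrm{Ar}]\,3d^{9}\ \text{(in CaCuO}_2\text{)}, \qquad \mathrm{Ni^{+}}:\ [\mathrm{Ar}]\,3d^{9}\ \text{(in LaNiO}_2\text{)}.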
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Carbon quantum dots for the detection of pollutants in water.
Authorship
R.S.G.
Double bachelor degree in Physics and Chemistry
Defense date
09.17.2024 10:30
Summary
Carbon dots (CDs) are a class of carbon-based nanoparticles widely used in various sectors due to their optical and electronic properties. Their application has extended to different fields such as biomedicine and environmental science, thanks to their outstanding biocompatibility and chemical stability. The degree of graphitization and the spatial arrangement of surface functional groups determine the functionality of these particles. In this work, carbon quantum dots (CQDs), a specific type of CDs characterized by intense fluorescence due to quantum confinement, will be synthesized. Generally, the presence of an analyte causes variations in the fluorescence of the CDs due to changes in the chemical environment. In this context, the performance of these particles as detectors of various pollutants will be analyzed by monitoring the changes in their fluorescence emission. Specifically, a detailed study of 4-nitrophenol, a highly toxic and harmful pollutant commonly found in aquatic environments, will be carried out. In summary, this work aims to establish a simple and eco-friendly technique capable of accurately determining the presence and concentration of contaminants in water. Furthermore, it will be demonstrated that the results obtained can be corroborated using complementary techniques.
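The fluorescence changes described above are commonly quantified with the Stern-Volmer relation (whether the thesis uses this exact analysis is not stated in the summary):

    \frac{F_0}{F} = 1 + K_{\mathrm{SV}}\,[Q],

where F0 and F are the emission intensities without and with the analyte (quencher) at concentration [Q], and the slope K_SV calibrates the sensor response.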
Direction
TABOADA ANTELO, PABLO (Tutorships)
CAMBON FREIRE, ADRIANA (Co-tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO, RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Unraveling the Mystery of Atoms in the Molecule
Authorship
R.S.G.
Double bachelor degree in Physics and Chemistry
Defense date
09.11.2024 09:00
Summary
The quantum theory of atoms in molecules (QTAIM), developed by Richard F. W. Bader, constitutes a powerful tool in the field of theoretical chemistry. Intended for topological analysis, this quantum theory introduces the innovation of using the electron density, which allows the molecular space to be partitioned into regions called atomic basins. The evaluation of the electron density at critical points (CPs) and the calculation of integrated properties within these basins provide answers to various questions, such as the reason for the transferability of the electronic properties associated with functional groups between different compounds. In this work we will present the theoretical principles of QTAIM and analyze different chemical systems to relate QTAIM properties to chemical quantities and concepts. The results, obtained computationally, depend on the level of calculation used, characterized by the level of theory and the basis set. In addition to isolated molecules, the course of a sigmatropic reaction will also be analyzed, detailing the associated transition state (TS) and minimum energy path (MEP).
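For context on the critical points and basins mentioned above (standard QTAIM definitions, summarized here by the editor): critical points satisfy

    \nabla\rho(\mathbf{r}_c) = 0,

and are classified by the signature of the Hessian of rho as (3,-3) nuclear maxima, (3,-1) bond CPs, (3,+1) ring CPs and (3,+3) cage CPs; atomic basins are bounded by zero-flux surfaces, i.e. surfaces on which \nabla\rho(\mathbf{r})\cdot\mathbf{n}(\mathbf{r}) = 0 at every point.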
Direction
FERNANDEZ RAMOS, ANTONIO (Tutorships)
FERRO COSTAS, DAVID (Co-tutorships)
Court
Rodríguez Prieto, María de la Flor (Chairman)
CASTRO VARELA, GABRIELA (Secretary)
ABOAL SOMOZA, MANUEL (Member)
Using nanomaterials to improve the properties of electric vehicle lubricants
Authorship
I.C.A.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
Due to the need for change towards a more sustainable future, the aim of this work is the thermophysical and tribological study of nanolubricants as transmission fluids for electric vehicles, seeking to improve their efficiency. To this end, we selected a polyalphaolefin (PAO6) as base oil and functionalised graphene oxide nanoparticles (GO-F) as additives. To check the correct functionalisation of GO, both GO and GO-F nanoparticles were characterised by different techniques: Raman and infrared spectroscopy, powder X-ray diffraction, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The GO-F nanolubricants were formulated at concentrations of 0.05%, 0.10%, 0.15% and 0.20% in order to determine the optimal nanolubricant concentration. For this purpose, their stability was analysed, as well as the friction, wear and roughness of the surfaces in contact lubricated with these fluids, showing a significant improvement with respect to the base oil without additives. Finally, the variation of the density and viscosity of the nanodispersions with temperature was evaluated, showing small variations in these two properties with respect to the base oil.
Direction
FERNANDEZ PEREZ, JOSEFA (Tutorships)
Liñeira del Río, José Manuel (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM, CHRISTOPH (Member)
Laser-generated X-ray source
Authorship
J.R.M.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
Conventional active radiation detectors based on electronic systems present limitations when used to characterize ultrafast sources, such as those generated by lasers in the Laser Laboratory for Acceleration and Applications (L2A2). For this reason, it is common to use passive detectors, such as Imaging Plates (IPs), due to their high dynamic range, sensitivity, and reusability. This work presents a study of the spatial resolution and absolute calibration of six types of IP (4IQ1, 4IQ2, 2IQ1, 2IQ2, 2IQ3 and 2IQ4), conducted using X-rays that provide a spectrum without pile-up. All IP sensitivity measurements are performed with an X-ray generator tube. Finally, two functions for the absolute calibrated response of the IPs in the energy range of 0 to 50 keV are proposed.
Direction
ALEJO ALONSO, AARON JOSE (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Quantum Field Theory in curved spacetime
Authorship
A.N.H.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this Bachelor Thesis I present two results of Quantum Field Theory in curved spacetime: the Unruh effect and Hawking radiation. These physical phenomena appear as a consequence of switching from the Minkowski metric to a curved spacetime, more specifically to the Rindler and Schwarzschild spacetimes. The main result of both effects is that, after choosing a vacuum state, one detects a thermal spectrum when computing the mean number of particles in that quantum state using Bogolyubov coefficients. As a consequence of the above, black hole thermodynamics and the loss of information due to the horizons are discussed.
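For reference, the thermal spectra mentioned above correspond to the standard Unruh and Hawking temperatures, with the mean particle number obtained from the Bogolyubov beta coefficients:

    T_{\mathrm{Unruh}} = \frac{\hbar a}{2\pi c k_B}, \qquad T_{\mathrm{Hawking}} = \frac{\hbar c^3}{8\pi G M k_B}, \qquad \langle N_i \rangle = \sum_j |\beta_{ij}|^2 .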
Direction
BORSATO, RICCARDO (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN, SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Beyond lithium batteries
Authorship
J.P.M.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
In this work, a preliminary analysis of lithium battery technology will be presented, highlighting its current advantages and drawbacks. Subsequently, an analysis of the potential technological evolution of this type of energy storage system will be discussed, emphasizing how new materials for anodes, cathodes, and electrolytes, along with their integration into devices, can enhance energy storage capacity and battery durability. Finally, a critical reflection will be conducted on the future viability of these potential new technologies.
Direction
TABOADA ANTELO, PABLO (Tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Angular dependence of the critical magnetic field of the NbSe2 superconductor
Authorship
E.A.S.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
NbSe2 is a uniaxially anisotropic superconductor with Tc=7.2 K that exhibits multiple superconducting electronic bands. Electrical resistance measurements versus temperature were performed on a single crystal of this material in the presence of magnetic fields with different orientations relative to the crystallographic ab planes. These measurements allowed the determination of the critical magnetic field Hc2 (beyond which the material ceases to be in a superconducting state). An anomalous behavior was observed which could be interpreted based on the multiband nature of this material.
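The anomaly mentioned above is usually judged against the single-band anisotropic Ginzburg-Landau (effective-mass) angular dependence, a standard reference expression rather than a result of the thesis; for a field at angle theta from the c axis it reads

    \left(\frac{H_{c2}(\theta)\cos\theta}{H_{c2}^{\parallel c}}\right)^{2} + \left(\frac{H_{c2}(\theta)\sin\theta}{H_{c2}^{\parallel ab}}\right)^{2} = 1 .

Deviations of the measured Hc2(theta) from this curve are the kind of behaviour that multiband superconductivity can produce.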
Direction
MOSQUEIRA REY, JESUS MANUEL (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Neural Networks: Fundamentals and Application to Image Recognition
Authorship
C.F.P.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2024 11:30
Summary
Neural networks constitute the topic of this project. As such, they are studied starting from their fundamentals, covering some of their main models, and ending with the exploration of a specific model for dealing with images. Firstly, a general definition of neural networks is provided based upon their elements, and they are also considered from the point of view of graph theory. Afterwards, the original model for neural networks, the simple perceptron, is explored, as well as its natural extension, the multilayer perceptron. In both cases, their definition and theoretical results about their capabilities and performance are provided. Finally, convolutional neural networks are studied, and a practical example is presented alongside code for an image classification problem, for which the variation of different parameters is studied according to their effect on the performance of the network.
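As a minimal illustration of the perceptron model discussed in the project (a toy example by the editor, not the project's code), the classic Rosenblatt learning rule separates the AND problem in a few epochs:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])            # logical AND, which is linearly separable

    w, b, lr = np.zeros(2), 0.0, 0.1
    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Rosenblatt update: move the hyperplane only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)

    print("weights:", w, "bias:", b)
    print("predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])

The convolutional networks treated at the end of the project extend this idea with layers of learned filters, but the update-on-error principle illustrated here is the historical starting point.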
Direction
CRUJEIRAS CASAIS, ROSA MARÍA (Tutorships)
Court
GONZALEZ MANTEIGA, WENCESLAO (Chairman)
PAEZ GUILLAN, MARIA PILAR (Secretary)
ALVAREZ DIOS, JOSE ANTONIO (Member)
Artificial Intelligence in a Meteorological Environment
Authorship
C.F.P.
Double bachelor degree in Mathematics and Physics
Defense date
07.18.2024 09:00
Summary
In this work, real meteorological data were obtained from Meteogalicia stations, and an Echo State Network model was programmed to try to fit them. The objectives were the modeling and comprehension of the data, the detection and reproduction of extreme events and, ultimately, the study of the capacity of the model to predict future data. To keep the analyses manageable and in accordance with the scope of this work, they were performed only for a single station located in Ferrol, for data corresponding to the summers between the years 2001 and 2023, both included. It was found that the model fitted the data satisfactorily and that it was capable of finding extreme events reliably. Training the model on this station, acceptable fits were obtained for stations corresponding to locations with a similar climate, but they were notably worse for places with different characteristics. Regarding the prediction of future data, it was established that an acceptable prediction was obtained for a time equivalent to 12.5% of the network training time, after which the model stopped providing useful data. No evidence of a significant evolution towards greater climatic instability was found. Future paths of study include the analysis of the remaining seasons, as well as of different climates (it would be sensible to take a training station for each one of them). For longer predictions, the values predicted within the acceptable interval could be taken, the model trained again including them, and so on.
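A minimal sketch of an echo state network of the kind described above, written by the editor on a synthetic series; the reservoir size, spectral radius, regularization and, of course, the Meteogalicia data used in the thesis are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(3000)
    u = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)   # toy signal

    N = 200                                    # reservoir size (assumed)
    Win = rng.uniform(-0.5, 0.5, (N, 1))       # input weights
    W = rng.uniform(-0.5, 0.5, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo-state property)

    # collect reservoir states while feeding the input
    X = np.zeros((N, t.size - 1))
    x = np.zeros(N)
    for k in range(t.size - 1):
        x = np.tanh(Win[:, 0] * u[k] + W @ x)
        X[:, k] = x
    Y = u[1:]                                  # one-step-ahead target

    # ridge-regression readout
    reg = 1e-6
    Wout = Y @ X.T @ np.linalg.inv(X @ X.T + reg * np.eye(N))
    pred = Wout @ X
    print("one-step training RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))

Only the linear readout is trained, which is what makes this architecture cheap enough to retrain repeatedly, as suggested in the closing remark of the summary.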
Direction
Pérez Muñuzuri, Alberto (Tutorships)
García Selfa, David (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Classification of J/psi Produced from charm-anticharm Pairs in Color Singlet and Octet Using Machine Learning Methods
Authorship
C.A.S.
Bachelor of Physics
Defense date
06.19.2024 10:30
Summary
The production of charm-anticharm pairs in both color octet and singlet configurations in proton-proton collisions at sqrt(s) = 13 TeV is studied using Monte Carlo simulations, emulating the experimental conditions of the LHCb detector. NRQCD allows the production of pairs via color octet with non-zero color charge (COM), in addition to those with neutral color charge via singlet (CSM). The use of machine learning techniques, specifically an artificial neural network, is proposed for classifying the production mechanism of the observed J/psi mesons based on the kinematic variables of the collision along with multiplicity variables. Although the classification results are not entirely satisfactory, they suggest the potential application of similar tools to the quarkonium production problem, aiming for greater complexity in the network architecture and an improvement in the variables involved in the classification process.
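To make the classification setup concrete, the sketch below trains a small feed-forward classifier on synthetic "kinematic-like" features standing in for the singlet/octet separation; the feature names and distributions are invented for illustration and bear no relation to the simulated LHCb samples used in the thesis.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    # hypothetical features: transverse momentum, rapidity, charged multiplicity
    X0 = np.column_stack([rng.gamma(2.0, 2.0, n), rng.normal(3.0, 0.8, n), rng.poisson(30, n)])
    X1 = np.column_stack([rng.gamma(2.5, 2.0, n), rng.normal(3.2, 0.8, n), rng.poisson(38, n)])
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = "singlet-like", 1 = "octet-like"

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(Xtr, ytr)
    print("test accuracy:", clf.score(Xte, yte))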
Direction
SANTAMARINA RIOS, CIBRAN (Tutorships)
Court
VAZQUEZ RAMALLO, MANUEL (Chairman)
PARDO CASTRO, VICTOR (Secretary)
FERNANDEZ DOMINGUEZ, BEATRIZ (Member)
Physical principles of 3D printing and applications in medicine
Authorship
M.R.R.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
This project explores recent developments and emerging trends in 3D printing applied to medicine, with special emphasis, although not exclusively, on tissue regeneration. It also provides a view of the various 3D printing technologies used for the manufacture of the structures, the materials used, and the advantages and disadvantages of each of the printing techniques.
Direction
TABOADA ANTELO, PABLO (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Viscosity instabilities in fluids: Study of physical parameters.
Authorship
A.B.B.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
This study focuses on the instabilities that occur when two fluids with different viscosities interact, known as viscous fingering instabilities. These instabilities have great relevance both in scientific research and in industrial applications, since they reveal intrinsic properties of the interacting liquids. This work focuses on an experimental description of this phenomenon using a Hele-Shaw cell and varying the physical parameters that can influence its behavior. The main parameters modified are the fluid injection rate and the cell gap thickness. In all cases examined, we evaluate the indicators that describe the instability and analyze how the physical parameters of the system affect these indicators. To carry out this research, image acquisition techniques, image processing software and other tools to analyze the data obtained are employed. This experimental approach allows a better understanding of viscous fingering instabilities and their relationship with the physical parameters of the system.
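For orientation on why the injection rate and the gap thickness are natural control parameters, the classical linear analysis of the immiscible Saffman-Taylor problem (an inviscid fluid pushing a fluid of viscosity mu at speed U in a cell of gap b, with interfacial tension sigma) gives a most-unstable wavelength

    \lambda_{\mathrm{max}} \simeq \pi b \sqrt{\frac{\sigma}{\mu U}} = \frac{\pi b}{\sqrt{\mathrm{Ca}}}, \qquad \mathrm{Ca} = \frac{\mu U}{\sigma},

so faster injection or a narrower gap changes the finger scale; whether the experiments of the thesis are in this immiscible regime is not specified in the summary.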
Direction
Pérez Muñuzuri, Alberto (Tutorships)
CARBALLIDO LANDEIRA, JORGE (Co-tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Omega-baryon production in pPb collisions at sqrt(sNN) = 5.02 TeV at the LHCb experiment.
Authorship
M.I.M.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
This study investigates high-energy nuclear collisions, with a particular focus on the enhanced production of strange quarks (s) as a potential signature of the Quark Gluon Plasma (QGP). The analysis employs data from the LHCb experiment at the Large Hadron Collider (LHC) to examine the production rates of the multi-strange Omega-baryon in proton-nucleus collisions, specifically proton-lead (pPb) collisions at a center-of-mass energy of sqrt(sNN) = 5.02 TeV. The primary objective of this research is to study the multi-strange Omega-baryon, which is composed of three valence s quarks. This particle serves as an ideal probe to characterise the properties of the QGP. Beyond investigating the increased production of strangeness in high-multiplicity collisions, the production rates of the Omega-baryon are explored in various decay regions through the invariant mass spectrum.
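For reference, the invariant-mass spectrum mentioned above is built from the energies and momenta of the candidate decay products (in natural units, c = 1):

    M_{\mathrm{inv}} = \sqrt{\Big(\sum_i E_i\Big)^{2} - \Big|\sum_i \vec{p}_i\Big|^{2}},

with the Omega-baryon typically reconstructed through a cascade such as Omega- -> Lambda K-, Lambda -> p pi- (the specific selection used in the analysis is not detailed in the summary).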
Direction
SANTAMARINA RIOS, CIBRAN (Tutorships)
LANDESA GOMEZ, CLARA (Co-tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Atmospheric Optical Phenomena: analysis and recreation in laboratory
Authorship
P.P.P.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
This work is a study of three of the best-known optical phenomena that occur naturally in the atmosphere of our planet. First, the main agents responsible for such events are briefly outlined: light (defining what it is, how it interacts with matter and what its sources may be) and the atmosphere (giving a description of its layers and composition). This information is then used to analyse the origin and physical mechanisms behind three phenomena: rainbows, which arise from the interaction of sunlight with water droplets suspended in the air; mirages, which occur as a result of anomalous vertical temperature gradients in the atmosphere; and six types of halo, which result from the interaction of sunlight with ice crystals originating in a specific type of cloud. After each theoretical analysis, an attempt is made to recreate these phenomena in the laboratory completely or, failing that, some of their characteristics.
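As an example of the kind of analysis involved, the primary rainbow follows from one internal reflection in a spherical droplet: a ray entering at incidence i and refracting to r (Snell's law, sin i = n sin r) is deviated by

    D(i) = \pi + 2i - 4r, \qquad \text{with a minimum at } \cos^{2} i = \frac{n^{2}-1}{3},

which for water (n of about 1.33) gives a deviation near 138 degrees, i.e. the familiar rainbow angle of about 42 degrees.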
Direction
BAO VARELA, Mª CARMEN (Tutorships)
Gómez Varela, Ana Isabel (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM, CHRISTOPH (Member)
Multi-Chance Fission in High Energy Fusion Reactions
Authorship
A.B.B.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Nuclear fission is an extremely complex process, and the available knowledge about it has done nothing but increase ever since its discovery less than a century ago. Nowadays there are multiple theoretical models, such as the Liquid Drop Model or the Strutinsky model, as well as phenomenological models such as GEF that, in conjunction, can accurately predict the behaviour of most fissioning systems. However, there are still some discrepancies between these models and empirical data for certain systems. Determining which systems these are, and studying them, allows improvements on the models to be made and also expands the current understanding of the fission process and its characteristics. In this work one of these cases will be studied: the fission of berkelium (249Bk) at high excitation energy (47.7 MeV). Experimental data on this system are available thanks to research done in the VAMOS experiment at the GANIL laboratory. Using the GEF model and the VAMOS experimental data, the behaviour of this fissioning system will be explained, as well as its main fission modes and its tendency to evaporate neutrons before splitting, uncovering possible shortcomings of the GEF model in this kind of extreme case.
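To make the term "multi-chance fission" concrete: at high excitation energy the compound nucleus can evaporate one or more neutrons before splitting, so the system that actually fissions is a mixture of isotopes, schematically

    ^{249}\mathrm{Bk}^{*} \to \text{fission (1st chance)}, \qquad ^{249}\mathrm{Bk}^{*} \to\, ^{248}\mathrm{Bk}^{*} + n \to \text{fission (2nd chance)}, \ \ldots

Each chance fissions at a lower excitation energy than the previous one, which is what shapes the observables compared against GEF in this work.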
Direction
CAAMAÑO FRESCO, MANUEL (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Differential geometry of ruled surfaces and their application in architecture
Authorship
B.M.P.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2024 12:15
Summary
A surface is called a ruled surface if through every point there is at least one straight line that lies on the surface. The aim of this project is to study this kind of surface and its properties in the context of differential geometry. In addition, we will consider a specific type of ruled surface with zero Gaussian curvature, known as developable surfaces. Finally, we will examine their use in architecture, focusing on the analysis of the work of the architects Antoni Gaudí, Félix Candela, Santiago Calatrava and Frank Gehry.
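For reference, the objects discussed above are usually handled through the standard parametrization of a ruled surface, with developability characterized by the vanishing of the Gaussian curvature:

    \sigma(u,v) = \alpha(u) + v\,\beta(u), \qquad \text{developable} \iff K \equiv 0 \iff \det\big(\alpha'(u),\, \beta(u),\, \beta'(u)\big) = 0 .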
Direction
Vázquez Abal, María Elena (Tutorships)
Court
TORRES LOPERA, JUAN FRANCISCO (Chairman)
CONDE AMBOAGE, MERCEDES (Secretary)
López Pouso, Óscar (Member)
Magnetic nanoparticle design for MPI
Authorship
B.M.P.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.19.2024 09:30
Summary
The aim of this work is to analyze the properties of magnetite nanoparticles and try to optimize their behavior in magnetic particle imaging (MPI), a novel medical imaging technique that relies on the non-linearity of the magnetization curves of these nanoparticles and the existence of a saturation field in magnetic materials. We will examine how the particles' magnetic anisotropy affects their magnetization and compare the results with the predictions of the Langevin theory of paramagnetism, which is often used in the study of MPI. To do this, we will simulate the evolution of the magnetization of magnetite particles in different scenarios using CESGA's computational resources. Firstly, we will assume that the particles have only one easy magnetization direction, in other words, purely uniaxial anisotropy. Subsequently, we will consider particles closer to real ones, which present an intrinsic anisotropy due to their crystalline structure (cubic anisotropy in the case of magnetite) and have an asymmetric shape, which introduces a contribution of uniaxial anisotropy. In both cases we will initially assume that the easy axes corresponding to the uniaxial anisotropy are randomly distributed. However, since particles in MPI are in a viscous medium in which they can physically rotate with the applied magnetic field, we will also study how a specific orientation of these easy axes would affect the MPI signal.
The aim of this work is to analyze the properties of magnetite nanoparticles and try to optimize their behavior in \magnetic particle imaging (MPI), a novel medical imaging technique that relies on the non-linearity of the magnetization curves of said nanoparticles and the existence of a saturation field in magnetic materials. We will examine how the particles magnetic anisotropy affects their magnetization and compare the results with the predictions of the Langevin theory of paramagnetism, which is often used in the study of MPI. To do this, we will simulate the evolution of magnetite particles magnetization in different scenarios using CESGA's computational resources. Firstly, we will assume that the particles only have one easy magnetization direction, in other words, they only have uniaxial anisotropy. Subsequently, we will consider particles closer to those existing in reality, which will present an intrinsic anisotropy due to their crystalline structure, cubic anisotropy in the case of magnetite, and will have an asymmetric shape, which will introduce a contribution of uniaxial anisotropy. In both cases we will initially assume that the easy axes corresponding to the uniaxial anisotropy are randomly distributed. However, since particles in MPI are in a viscous medium in which they can physically rotate with the applied magnetic field, we will also study how a specific orientation of these easy axes would affect the MPI signal.
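As a point of reference for the comparison described above, the Langevin theory treats an ideal ensemble of non-interacting, anisotropy-free superparamagnetic moments, for which the equilibrium magnetization is

\[ M(H,T) = M_s\,L(\xi), \qquad L(\xi) = \coth\xi - \frac{1}{\xi}, \qquad \xi = \frac{\mu_0\, m H}{k_B T}, \]

with m the particle magnetic moment and M_s the saturation magnetization; the anisotropy contributions studied here are what make the simulated curves deviate from this prediction.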
Direction
SERANTES ABALO, DAVID (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Measurement of the M2 factor of a laser beam
Authorship
X.P.S.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The M2 factor is a common measure used to estimate the beam quality of a laser beam. It is defined as the ratio between the product of the divergence and the minimum width of the beam and that of a Gaussian beam. In this paper we will experimentally determine the value of the quality factor of a He-Ne laser for different optical cavity lengths, paying attention to the evolution of the beam radius along the propagation direction through images taken at different distances.
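For reference, the beam quality factor compares the beam parameter product (waist radius times far-field divergence) with that of an ideal Gaussian beam:

\[ M^2 = \frac{\pi\, w_0\, \theta}{\lambda}, \]

where w_0 is the beam waist radius, \theta the far-field half-angle divergence and \lambda the wavelength, so that an ideal TEM00 Gaussian beam has M^2 = 1 and any real beam satisfies M^2 \geq 1.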
Direction
DE LA FUENTE CARBALLO, RAUL (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Magnetic nanoparticles and anisotropy: beyond the Langevin function.
Authorship
D.B.G.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The objective of the work is to study a system of magnetic nanoparticles based on their response to an applied field and temperature. Starting from the Langevin function, which assumes zero anisotropy, it will be studied how taking anisotropy into account changes the properties of the system. It will begin with the simple case of uniaxial anisotropy collinear with the applied field, then move on to the disordered case. Finally, the intrinsic nature of the particles (cubic and negative magnetocrystalline anisotropy) will be taken into account to provide a general overview of how the system's properties are determined by the different anisotropy terms. The work will include an analytical basis for simple cases and will be complemented by computational results for more complex cases. The computational work will be carried out at the Supercomputing Center of Galicia, using the software OOMMF (Object Oriented MicroMagnetic Framework).
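As a reminder of the simplest scenario treated first (the textbook Stoner-Wohlfarth form, not a result of this work), the energy density of a single-domain particle with uniaxial anisotropy in an applied field can be written as

\[ \frac{E}{V} = K_u \sin^2\theta - \mu_0 M_s H \cos\psi, \]

where K_u is the uniaxial anisotropy constant, \theta the angle between the magnetization and the easy axis and \psi the angle between the magnetization and the field; in the limit K_u \to 0 the equilibrium response reduces to the Langevin function.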
Direction
SERANTES ABALO, DAVID (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Development of a dermatology image classifier using Deep-Learning techniques
Authorship
A.A.A.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
Machine Learning techniques allow for making predictions and classifications based on variables extracted from a particular process. These techniques are widely used to solve complex problems in different scientific fields. In recent years, neural networks have been the main driver of progress in this type of technology. This project's objective is to learn the basics of how these networks work and to use them to classify dermatology images.
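The abstract does not specify the architecture or the framework used; purely as an illustrative sketch of the kind of image classifier involved (the choice of PyTorch, the layer sizes and all names are assumptions), a minimal convolutional network could look like this:

```python
# Minimal illustrative CNN for small RGB images (e.g. 3x64x64).
# Not the author's model: framework, architecture and sizes are assumptions.
import torch
import torch.nn as nn

class TinyDermNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)                 # (N, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.flatten(1))

model = TinyDermNet()
scores = model(torch.randn(4, 3, 64, 64))    # fake batch of 4 RGB images
print(scores.shape)                           # torch.Size([4, 2]) -> class scores
```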
Direction
GARCIA TAHOCES, PABLO (Tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Obtaining non-minimal couplings for Kalb-Ramond fields via dualization.
Authorship
P.S.F.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
The objective will be to obtain theories for 2-forms (both massive and massless) with non-minimal couplings through the dualisation of theories with Horndeski-type interactions for scalar and vector fields. In the case of duals of Proca fields with non-minimal couplings, theories where the field derivatives enter only through the field strength tensor will be studied first, so that they are gauge-invariant (even if the complete theory is not). Dualisation in this case proceeds as in the standard case, and the procedure is well known.
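For orientation, the textbook statement behind this dualisation is that in four dimensions a massless 2-form (Kalb-Ramond field) B_{\mu\nu}, entering only through its field strength, carries a single degree of freedom and is dual to a scalar, while a massive 2-form is dual to a massive (Proca) vector:

\[ H_{\mu\nu\rho} = \partial_\mu B_{\nu\rho} + \partial_\nu B_{\rho\mu} + \partial_\rho B_{\mu\nu}, \qquad H_{\mu\nu\rho} \propto \epsilon_{\mu\nu\rho\sigma}\,\partial^\sigma \phi \quad \text{(massless case)}. \]

The non-minimal couplings arise when this dualisation is carried out in the presence of the Horndeski-type interactions mentioned above.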
Direction
EDELSTEIN GLAUBACH, JOSE DANIEL (Tutorships)
Beltrán Jiménez, José (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Introduction to stochastic optimization
Authorship
A.P.P.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2024 11:30
Summary
In this work, we introduce stochastic optimization, which studies mathematical programming problems with uncertain data. In the first chapter, fundamental concepts of statistics, probability, and mathematical programming are presented, necessary for understanding and explaining the foundations of this topic. Next, we address two-stage stochastic problems, analyzing their main properties. This study is divided into two parts, considering the stochastic components of the problem, which can be either discrete or continuous. Finally, a solution method for these problems known as the L-Shaped Method is presented. Its algorithm is analyzed in detail, focusing on two of its fundamental components, optimality cuts and feasibility cuts. Additionally, practical examples using the statistical software R are included to illustrate its resolution and application.
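For reference, the two-stage problems discussed above have the generic form

\[ \min_{x}\; c^{\top}x + \mathbb{E}_{\xi}\big[Q(x,\xi)\big] \quad \text{s.t. } Ax = b,\ x \ge 0, \qquad Q(x,\xi) = \min_{y \ge 0}\big\{ q(\xi)^{\top}y \;:\; Wy = h(\xi) - T(\xi)\,x \big\}, \]

where x is the first-stage decision, \xi the random data and Q the recourse function; the L-Shaped Method builds an outer approximation of \mathbb{E}[Q(x,\xi)] by iteratively adding the optimality and feasibility cuts mentioned above.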
Direction
CASARES DE CAL, MARIA ANGELES (Tutorships)
Court
GONZALEZ MANTEIGA, WENCESLAO (Chairman)
PAEZ GUILLAN, MARIA PILAR (Secretary)
ALVAREZ DIOS, JOSE ANTONIO (Member)
Mitigation Potential of Environmental Impact through the implementation of various production technologies in the European Union
Authorship
A.P.P.
Double bachelor degree in Mathematics and Physics
Defense date
07.19.2024 09:30
Summary
This study examines the robustness of the static methodology for assessing the Mitigation Potential of Environmental Impact (IMPcc), a key environmental sustainability indicator. It analyzes the need to consider this indicator as dynamic when designing medium and long-term sustainable energy transition agendas, especially as the participation of renewable technologies increases. Using the OSeMOSYS simulation tool, which optimizes the overall costs of the energy system, a simplified reference system for Galicia from 2022 to 2050 is modeled. The methodology includes defining various transition scenarios, increasing in four steps (0.5, 1, 2 and 3 GW) the implemented technological capacity of three key technologies: solar photovoltaic, wind, and combined cycle. Annual emissions are analyzed, and the IMPcc is calculated to assess whether this indicator can remain constant and independent of the technology's share in the energy mix. The results indicate divergent behaviors among the technologies. Wind energy shows robustness, allowing the IMPcc to be considered constant. In contrast, solar photovoltaic presents a linearly increasing IMPcc, while the combined cycle does not show significant variations. This is due to the seasonal production of the technologies: wind and hydro generate more in winter, and solar photovoltaic in summer, with the combined cycle being more needed in summer to cover energy demand. Additionally, implementation and operating costs were analyzed, as well as the evolution of new implementation capacities and their contribution to production within the energy mix.
Direction
LOPEZ AGUERA, Ma ANGELES (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Analysis of endolymphatic flow under different rotational stimuli using CFD techniques
Authorship
S.T.R.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this dissertation, the dynamics of endolymphatic flow under rotational stimuli have been investigated, focusing on the response of the utricle, which was modeled as a rigid ellipsoid to simplify its study. Using Simcenter STAR-CCM+ software, known for its ability to deliver robust solutions in Computational Fluid Dynamics (CFD), an analysis was conducted that complements recent observations on the presence and critical role of vortices in balance perception. The main goal is to simulate the behavior of the endolymph in response to various rotational stimuli, similar to those used in diagnostic tests of vestibular function. This approach allows for the identification and characterization of vortices that occur during these stimuli, providing information that cannot be obtained clinically. To achieve these objectives, two key parameters were analyzed: velocities and vorticities within the utricle. Additionally, the correlation of these parameters with the angular velocity of the head during rotation was investigated.
Direction
Pérez Muñuzuri, Alberto (Tutorships)
ARAN TAPIA, ISMAEL (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
The quantum Rabi model and its generalizations.
Authorship
P.V.C.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
The quantum Rabi model describes the interaction between a two-level atom and a single photonic mode of the quantized electromagnetic field. This model is an extension of the Jaynes-Cummings model which includes the so-called counter-rotating terms. Throughout this document, we will discuss solutions for this model, first in terms of Bogoliubov operators and then in the Bargmann-Fock space, focusing on obtaining the energy spectrum. Finally, we will study some generalizations of the model, such as the asymmetric quantum Rabi model or the anisotropic quantum Rabi model.
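For reference, the quantum Rabi Hamiltonian (with \hbar = 1) reads

\[ H = \omega\, a^{\dagger}a + \frac{\Omega}{2}\,\sigma_z + g\,\sigma_x\,(a + a^{\dagger}), \]

where \omega is the frequency of the photonic mode, \Omega the level splitting of the two-level atom and g the coupling strength; dropping the counter-rotating terms \sigma_+ a^{\dagger} and \sigma_- a recovers the Jaynes-Cummings model.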
Direction
Vázquez Ramallo, Alfonso (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Spin coherent states
Authorship
E.B.S.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In quantum mechanics, the coherent states of the harmonic oscillator are those whose dynamics most closely resemble those of a classical oscillator. These states saturate Heisenberg's inequality for the position and momentum operators and can be defined as the result of displacing the ground state via the exponential of the ladder operators. In this document these states will be studied alongside the coherent states generated with the ladder operators of the angular momentum algebra. The latter are known as spin coherent states (or atomic coherent states) and have notable mathematical properties of great utility in a variety of physical problems, especially those involving the interactions of a large number of spins. These properties and applications will be reviewed below.
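For reference (in one common convention), a spin coherent state is obtained by "displacing" the lowest-weight state of a spin-j representation with the angular momentum ladder operators,

\[ |\theta,\varphi\rangle = \exp\!\big(\zeta J_{+} - \zeta^{*} J_{-}\big)\,|j,-j\rangle, \qquad \zeta = \tfrac{\theta}{2}\,e^{-i\varphi}, \]

in direct analogy with the displacement-operator definition of the harmonic-oscillator coherent states.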
Direction
Vázquez Ramallo, Alfonso (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
A state of the art review of gravity batteries
Authorship
L.P.N.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
The energy transition towards renewable sources poses a challenge for the stability of the electricity grid. Energy storage systems (ESS) enable the management of the variable nature of solar or wind energy, decoupling generation from consumption. Gravity Energy Storage (GES) is a mechanical ESS that transforms electrical energy into gravitational potential energy by lifting solid masses. In this work, a review of the state of the art of the different types of gravity batteries present in the literature has been carried out. First, the components common to all variants are studied, emphasising the possibility of introducing circular economy concepts by using recycled materials in the masses. A unified nomenclature for the different technologies is proposed, as well as a double classification according to the location of the system and the type of weight used. There is a trend towards options using multiple smaller weights (modular) rather than a single large mass (monolithic). Finally, the technology readiness level (TRL) of the different systems is assessed and their main techno-economic parameters are analysed. Gravity batteries offer wide capacity and power ranges, efficiencies of around 80 percent, response times on the order of seconds, good geographical adaptability and low cost, as represented by the LCOS (Levelized Cost of Storage). GES is a promising technology, but its development is still in progress.
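To fix orders of magnitude (an illustrative calculation, not a figure taken from the reviewed literature): the stored energy is simply E = mgh, so lifting a 1000-tonne mass through 100 m stores

\[ E = m g h \approx 10^{6}\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2} \times 100\,\mathrm{m} \approx 9.8\times 10^{8}\,\mathrm{J} \approx 0.27\,\mathrm{MWh}, \]

which is why GES concepts rely on very large masses, large height differences (e.g. disused mine shafts) or modular arrays of weights.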
Direction
BROCOS FERNANDEZ, MARIA DEL PILAR (Tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO , RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Ab initio study of magnetocrystalline anisotropy in transition metals
Authorship
X.A.C.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Magnetocrystalline anisotropy is the energy difference that exists due to the orientation of the magnetization of a magnetic material along different crystal axes. Its origin lies in spin-orbit interaction and it is a very small effect for which there are no simple models with predictive capability. This bachelor's thesis will investigate the computational techniques for calculating magnetocrystalline anisotropy in various transition metals. To achieve this, it is necessary to perform sufficiently accurate calculations, which will be carried out using ab initio calculations based on density functional theory (DFT) implemented in the commercial package WIEN2k. Since magnetocrystalline anisotropy is very small, on the order of micro-eV per atom, supercells will be needed to compare the energy difference when the magnetization is oriented along different crystal axes. It will be analyzed whether standard exchange-correlation functionals have predictive capability, both quantitatively (energy magnitude) and qualitatively (correct prediction of easy magnetization axes), to describe magnetocrystalline anisotropy.
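In practice, the quantity computed in this type of study is the magnetocrystalline anisotropy energy (MAE), obtained as a total-energy difference between two magnetization directions,

\[ \mathrm{MAE} = E_{\text{hard axis}} - E_{\text{easy axis}}, \]

which for elemental 3d transition metals is of the order of a few \mu eV per atom; this is why spin-orbit coupling must be included and why very tight convergence criteria are required in the DFT calculations.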
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Study of the Light Curves of Type Ia Supernovae and Their Applications in Cosmology
Authorship
A.F.P.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Since the mid-20th century, Type Ia supernovae (SN Ia) have held a privileged position in Cosmology due to their tremendous potential as standard candles in measuring distances on a large scale, stemming from their unique observational characteristics. The present work aims, on one hand, to conduct a study of the light curves of SN Ia and the properties that make them reliable distance indicators. Following this, we will address the progenitor systems capable of producing SN Ia and the physical processes that lead to such explosions. On the other hand, a literature review will be conducted on the contribution of these events to Cosmology since their inception. Particularly, the discussion will focus on their role in determining the Hubble constant, H0, characterizing the expansion of the Universe, and discovering its acceleration. Advances in the detection and measurement of SN Ia over the past decades will be analysed, emphasizing their importance in reducing the uncertainty of H0 and constructing more accurate Hubble diagrams. Finally, the challenges faced by modern Cosmology in constructing a cosmological model consistent with the contradictory measurements of cosmic expansion between the early Universe and the late Universe will be explored.
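For reference, the link between standardized SN Ia peak magnitudes and cosmological distances goes through the distance modulus,

\[ \mu = m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right), \]

so that each supernova yields a luminosity distance d_L; at low redshift these obey the Hubble law c z \approx H_0 d_L, while at higher redshift the Hubble diagram constrains the acceleration of the expansion.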
Direction
ALVAREZ POL, HECTOR (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Computational and theoretical study of hydrogen production by electrolysis
Authorship
S.C.S.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
In this degree thesis, a review of the state of the art in computer simulation of electrolytic processes, in particular for hydrogen production, is carried out. In addition, molecular dynamics simulations are performed to analyse the structural properties of mixtures of water with NaCl and KOH. From these calculations, the particle distribution functions, the diffusion of the different species in the mixture and other relevant quantities will be analysed. An introduction to density functional theory will also be given.
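For reference, two of the quantities mentioned above are extracted from the molecular dynamics trajectories as

\[ g_{ab}(r) = \frac{\langle n_{ab}(r)\rangle}{4\pi r^{2}\,\Delta r\,\rho_{b}}, \qquad D_{a} = \lim_{t\to\infty}\frac{1}{6t}\,\big\langle \lvert \mathbf{r}_{a}(t) - \mathbf{r}_{a}(0)\rvert^{2}\big\rangle, \]

i.e. the radial distribution function between species a and b (with \rho_b the number density of b) and the self-diffusion coefficient obtained from the mean squared displacement via the Einstein relation.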
Direction
Montes Campos, Hadrián (Tutorships)
MENDEZ MORALES, TRINIDAD (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Persistent homology in 3-manifolds
Authorship
P.T.M.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2024 12:15
Summary
We begin with an introduction to topological data analysis, presenting the basic tools of this discipline: persistent homology, persistence diagrams and landscapes, as well as the stability theorem, among others. Specifically, we are interested in the application of persistent homology to finite samples of points on a manifold, using the construction of simplicial complexes on these samples, due to their potential to generate metric invariants. Next, we cover the fundamentals of hyperbolic geometry, studying some of the classical models of hyperbolic space, which is the universal cover of any hyperbolic manifold, focusing on the 3-dimensional case. The main objective of this section is to demonstrate Mostow's Rigidity Theorem in the compact case. To this end, we present some concepts and results used in the proof. As a consequence of this theorem, in the case of 3-dimensional hyperbolic manifolds, the metric is a topological invariant. Therefore, non-homeomorphic hyperbolic manifolds can be distinguished using metric invariants, such as those provided by persistent homology. Finally, we will put the previous techniques into practice by developing a program that samples random points on compact orientable 3-dimensional hyperbolic manifolds, calculates the corresponding persistence diagrams and landscapes, and compares the results obtained for any pair of given hyperbolic manifolds using hypothesis testing, with the aim of topologically distinguishing them with a certain degree of confidence.
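As a minimal illustration of the computational pipeline described above (the sampling of hyperbolic manifolds and the hypothesis tests are not reproduced; the use of the ripser and persim packages, and the toy circle data, are assumptions):

```python
# Illustrative sketch only: persistence diagrams of two point clouds and a
# distance between them. Assumes the 'ripser' and 'persim' packages.
import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(0)

def sample_circle(n, noise=0.05):
    """Toy stand-in for sampling points on a manifold: a noisy circle."""
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    pts = np.c_[np.cos(t), np.sin(t)]
    return pts + noise * rng.standard_normal(pts.shape)

X, Y = sample_circle(200), sample_circle(200)
dgm_X = ripser(X, maxdim=1)["dgms"]   # persistence diagrams in dimensions 0 and 1
dgm_Y = ripser(Y, maxdim=1)["dgms"]

# Bottleneck distance between the dimension-1 diagrams (both detect one loop).
print(bottleneck(dgm_X[1], dgm_Y[1]))
```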
Direction
Álvarez López, Jesús Antonio (Tutorships)
Meniño Cotón, Carlos (Co-tutorships)
Court
TORRES LOPERA, JUAN FRANCISCO (Chairman)
CONDE AMBOAGE, MERCEDES (Secretary)
López Pouso, Óscar (Member)
Theoretical-computational study of ternary mixtures of ionic liquids with molecular solvents for electrochemical storage
Authorship
P.T.M.
Double bachelor degree in Mathematics and Physics
Defense date
07.19.2024 09:30
Summary
In the present undergraduate thesis, molecular dynamics simulations will be performed on electrolytes based on ternary mixtures of ionic liquids (EAN), lithium salts (LiNO3), and molecular cosolvents (acetonitrile and water) of interest in electrochemical devices. After a review of the theoretical foundations of this discipline and familiarization with the software used in the simulations, the structural and dynamic properties of the aforementioned system will be analyzed for different solvent concentrations. These results will be used to compare the behavior of acetonitrile versus water, as well as to contrast them with the previously proposed theoretical hypotheses based on the structure of the molecules in the mixture.
Direction
Montes Campos, Hadrián (Tutorships)
MENDEZ MORALES, TRINIDAD (Co-tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
The Kalton-Peck space
Authorship
C.F.L.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2024 11:00
Summary
This document will present the Kalton-Peck space, a solution to the Palais problem. This problem asks whether there exists a Banach space X, not isomorphic to a Hilbert space, that contains a subspace H isomorphic to a Hilbert space and such that the quotient X/H is also isomorphic to a Hilbert space. For this purpose, we will define twisted sums, which are the ideal setting for solving this problem. In addition, various concepts related to functional analysis will be introduced, such as quasi-normed spaces, B-convexity and uniform convexity. These will be necessary to prove that the Kalton-Peck space really is a solution to this problem. Finally, some of the properties of this space will also be studied. In particular, it will be shown that it admits a Schauder basis, and the form of its dual space will be described.
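For reference, a twisted sum of Banach spaces Y and Z is a (quasi-)Banach space X that fits into a short exact sequence

\[ 0 \longrightarrow Y \longrightarrow X \longrightarrow Z \longrightarrow 0 \]

with bounded operators; Palais's question is then whether Y and X/Y both being isomorphic to Hilbert spaces forces X itself to be isomorphic to a Hilbert space, and the Kalton-Peck space Z_2 shows that the answer is negative.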
Direction
LOSADA RODRIGUEZ, JORGE (Tutorships)
Court
FEBRERO BANDE, MANUEL (Chairman)
BUEDO FERNANDEZ, SEBASTIAN (Secretary)
RODRIGUEZ GARCIA, JERONIMO (Member)
Programming quantum computers through pulses
Authorship
C.F.L.
Double bachelor degree in Mathematics and Physics
Defense date
07.18.2024 09:30
Summary
Quantum computing makes use of laws of quantum physics to solve numerical problems. Its minimum unit of information is the qubit, on which gates are applied to achieve the desired state. These gates are abstractions of underlying pulses that provoke the time evolution of the physical system that represents the qubits. In this paper we will present different methods to find the pulses corresponding to a given gate. On the one hand, we will use optimisation methods, which seek to minimise a cost function by modifying certain parameters related to the Hamiltonian of the system. On the other hand, we will use methods that rely on performing algebra on the Hamiltonians to obtain their results. We will also run a QAOA, a variational algorithm widely used in quantum computing, and compare the results of pulse computing versus gate-based quantum computing.
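As a minimal numerical sketch of the idea that a logic gate is the integrated effect of a pulse (a toy single-qubit example with an idealized constant resonant drive; not the optimisation or algebraic methods developed in the thesis, and the Rabi frequency value is an arbitrary assumption):

```python
# Toy example: a constant resonant drive H = (Omega/2) * sigma_x applied for
# t = pi/Omega produces, up to a global phase, the X gate. Illustration only.
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
Omega = 2 * np.pi * 10e6          # assumed Rabi frequency (10 MHz)
t_gate = np.pi / Omega            # duration of a pi pulse

U = expm(-1j * (Omega / 2) * sigma_x * t_gate)   # evolution operator of the pulse
print(np.round(U, 3))             # equals -i * X, i.e. X up to a global phase

ket0 = np.array([1.0, 0.0], dtype=complex)
print(np.abs(U @ ket0) ** 2)      # population fully transferred to |1>
```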
Direction
SANCHEZ DE SANTOS, JOSE MANUEL (Tutorships)
Mussa Juane, Mariamo (Co-tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Waves and structuring in nature.
Authorship
T.L.I.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Phenomena such as the pumping and circulation of blood, the propagation of nerve impulses, population dynamics, weather forecasting, financial market analysis or oscillating chemical reactions such as the Belousov-Zhabotinsky reaction, among many others, are very complex and very different systems with multiple variables acting at the same time, but they have something in common: all of them can be analysed using non-linear physics. This discipline seeks to find the equations that rule the behaviour of these systems, which coincide in many cases, and to establish specific relations between the parameters that govern them. In this sense, the present work approaches the first of these systems from this branch of physics: the behavior of the heart cells responsible for pumping blood. The diffusive equations that govern it are solved by numerical methods, establishing the basis of its behaviour and showing how it translates into the solutions of the system.
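As a minimal sketch of the kind of numerical integration involved (the specific excitable-medium model and all parameter values below are assumptions chosen for illustration, not those used in the thesis):

```python
# 1D reaction-diffusion integration of a FitzHugh-Nagumo-type excitable model
# with explicit finite differences. Model and parameters are illustrative only.
import numpy as np

L, N, T, dt = 100.0, 400, 200.0, 0.01
dx = L / N
D, a, b, eps = 1.0, 0.1, 0.5, 0.01

u = np.zeros(N)                   # fast (excitation) variable
v = np.zeros(N)                   # slow (recovery) variable
u[:20] = 1.0                      # initial stimulus at one end

for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    du = D * lap + u * (1.0 - u) * (u - a) - v               # excitable kinetics
    dv = eps * (b * u - v)                                   # slow recovery
    u, v = u + dt * du, v + dt * dv

print(u.max(), int(u.argmax()))   # crude check that a pulse has propagated
```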
Direction
Pérez Muñuzuri, Alberto (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Gödel's constructible universe
Authorship
P.S.F.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2024 18:30
Summary
Most mathematical theories can be formalized within the ZFC system, which is a first-order logic theory. Gödel's Second Incompleteness Theorem prevents us from proving its consistency within ZFC itself, but it does not impose restrictions on relative consistency proofs. This means that, assuming a formal theory is consistent, we can prove the consistency of another. In our case, we will assume the consistency of a subset of ZFC axioms and proceed to prove the relative consistency of this theory with the remaining axioms. In fact, we will also demonstrate the relative consistency of ZFC with the Generalized Continuum Hypothesis. The fundamental tool in obtaining these results is model theory, which formalizes the intuitive concept of interpretation of a language. In this context, Gödel's constructible universe is a possible interpretation of ZFC set theory.
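For reference, the constructible universe is built by transfinite recursion, replacing the full power-set operation of the von Neumann hierarchy by definable subsets:

\[ L_0 = \varnothing, \qquad L_{\alpha+1} = \operatorname{Def}(L_\alpha), \qquad L_\lambda = \bigcup_{\alpha<\lambda} L_\alpha \ (\lambda \text{ limit}), \qquad L = \bigcup_{\alpha\in\mathrm{Ord}} L_\alpha, \]

where \operatorname{Def}(X) denotes the collection of subsets of X definable by first-order formulas with parameters in X; the relative consistency results mentioned above are obtained by showing that L is a model of ZFC together with the Generalized Continuum Hypothesis.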
Direction
FERNANDEZ TOJO, FERNANDO ADRIAN (Tutorships)
Court
FEBRERO BANDE, MANUEL (Chairman)
BUEDO FERNANDEZ, SEBASTIAN (Secretary)
RODRIGUEZ GARCIA, JERONIMO (Member)
Electronic properties of graphene pentalayers
Authorship
P.S.F.
Double bachelor degree in Mathematics and Physics
Defense date
07.18.2024 09:30
Summary
Electronic correlations are a type of interaction necessary to explain some exotic properties and states of matter, but they are not taken into account in band theory. Graphene multilayers have been a good material to study these states, and specifically, the pentalayer proves to be very interesting for this purpose. In this work, we will present the necessary tools for studying the bands of these materials, as well as the information derived from them, such as the density of states or the Fermi surfaces. We will focus on creating models for pentalayer graphene and attempt to associate the correlated states experimentally found in it with regions of high density of states in its bands. Models have been constructed with parameters for the Hamiltonian extracted from the literature, others by modifying these parameters, and even an attempt has been made to obtain an effective model for finding an optimal set of parameters. Although we ultimately did not find signals clearly related to the correlated states, a rather exhaustive study has been conducted that, if complemented, could yield more conclusive results.
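For reference, the quantity used above to flag candidate regions for correlated states is the density of states obtained from the band energies E_n(\mathbf{k}) of the model Hamiltonians,

\[ g(E) = \frac{1}{N_k}\sum_{n,\mathbf{k}} \delta\big(E - E_n(\mathbf{k})\big), \]

so that flat portions of the bands appear as sharp peaks in g(E), which is why regions of high density of states are searched for when trying to associate them with the experimentally observed correlated states.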
Direction
PARDO CASTRO, VICTOR (Tutorships)
Bascones Fernández de Velasco, Elena (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Use of neural networks in black hole physics
Authorship
J.A.R.
Bachelor of Physics
Defense date
07.01.2024 10:30
Summary
In this work we will first introduce gravity as a geometrical theory, starting with Einstein's general relativity and its fundamental properties. We will then proceed to develop higher-order theories, in particular Lovelock gravity. A brief section on numerical relativity will be included, together with an introduction to the framework based on neural networks. We will then explain how these networks work and how they can be implemented as ODE solvers, going through the most important aspects based on experience. Some results obtained in different modified theories of gravity will be shown, starting with general relativity as a first model, then Gauss-Bonnet gravity as a particular case of Lovelock gravity, and finally Einsteinian Cubic Gravity, a cubic model proposed in 2016 due to its good Einstein-like properties. Finally we will propose some new applications to explore in which neural networks could be beneficial.
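As a minimal sketch of how a neural network can act as an ODE solver (a toy problem, f'(x) = -f(x) with f(0) = 1, in PyTorch; the equations, architecture and training details of the thesis are of course different and not reproduced here):

```python
# Toy physics-informed network solving f'(x) = -f(x), f(0) = 1 on [0, 2].
# Purely illustrative; not the gravitational equations treated in the thesis.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0.0, 2.0, 100).unsqueeze(1)

for step in range(3000):
    opt.zero_grad()
    xr = x.clone().requires_grad_(True)
    f = net(xr)
    # derivative of the network output with respect to its input
    dfdx = torch.autograd.grad(f, xr, torch.ones_like(f), create_graph=True)[0]
    residual = dfdx + f                        # enforce the ODE f' + f = 0
    bc = net(torch.zeros(1, 1)) - 1.0          # enforce the condition f(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())       # should approach exp(-1) ~ 0.368
```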
Direction
EDELSTEIN GLAUBACH, JOSE DANIEL (Tutorships)
Garraffo , Cecilia (Co-tutorships)
Court
LOPEZ LAGO, MARIA ELENA (Chairman)
PARDO CASTRO, VICTOR (Secretary)
CASTRO PAREDES, FRANCISCO JAVIER (Member)
New porous materials for energetic and catalytic applications
Authorship
P.D.C.
Double bachelor degree in Physics and Chemistry
Defense date
07.18.2024 09:00
Summary
In this work, nanoparticles of a metal-organic framework (MOF) containing Cu in its lattice, CuMI, have been obtained through synthesis processes in aqueous medium at room temperature. The MOF has been structurally and functionally characterized by Scanning Electron Microscopy, X-ray Diffraction, Thermogravimetric Analysis and N2 Adsorption Analysis. After the characterization of the standard product, we studied the influence of the synthesis parameters on the MOF, namely the concentration of the stabilizer CTAB and the reaction time, and their effect on the structure and size of the MOF. Finally, we analyzed the colloidal stability of the MOF dispersed in acetone.
Direction
TABOADA ANTELO, PABLO (Tutorships)
VILA FUNGUEIRIÑO, JOSE MANUEL (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Porous materials based on metal organic frameworks (MOFs) for energy and catalytic applications
Authorship
P.D.C.
Double bachelor degree in Physics and Chemistry
Defense date
07.15.2024 09:00
Summary
In this work, nanoparticles of two types of metal-organic frameworks (MOFs) containing Cu in their lattice, Cu3(HITP)2 and CuMI, have been obtained through synthesis processes in aqueous medium at room temperature. Both types of MOFs have been structurally and functionally characterized by Scanning Electron Microscopy, X-ray Diffraction, UV-Vis Spectroscopy, Thermogravimetric Analysis and Brunauer-Emmett-Teller (BET) N2 Adsorption Analysis. Both MOFs have been compared to determine their potential in catalytic applications. In the case of Cu3(HITP)2, nanoparticles smaller than 100 nm have been obtained, while for CuMI the size range is greater than 130 nm. In addition, the percentage of mass corresponding to the organic ligand is higher (45%) in Cu3(HITP)2, with a more gradual degradation process. In particular, for the MOF Cu3(HITP)2 it has been possible to determine an orthorhombic structure with a cell volume of 4.83 nm3. Although both MOFs have very similar pore sizes, close to 2 nm, the surface area available in Cu3(HITP)2 is double, which suggests that this compound would have greater potential in catalysis applications.
Direction
VILA FUNGUEIRIÑO, JOSE MANUEL (Tutorships)
TABOADA ANTELO, PABLO (Co-tutorships)
Court
LORES AGUIN, MARTA (Chairman)
RIOS RODRIGUEZ, MARIA DEL CARMEN (Secretary)
Carro Díaz, Antonia María (Member)
First experimental measurements with a scanning tunneling microscope
Authorship
L.B.P.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
We performed the initial setup of a compact, newly acquired scanning tunneling microscope (STM) in the Faculty of Physics of the University of Santiago de Compostela, Spain (a version of the NaioSTM model built by Nanosurf AG) and implemented various experiments appropriate for use in demonstration and teaching contexts. We identified some of the main experimental difficulties and weaknesses of the device in actual use, issues for which we propose some operational tips. The completed measurements are: i) surface imaging of various flat materials (graphite, YBa2Cu3O7-x, gold and NbSe films), achieving unit-cell and even atomic resolution in some of them; and ii) voltage-current (VI) tunnel spectroscopy on the same materials, discriminating, e.g., between metallic and semiconducting local behaviour. We also provide a brief teaching-oriented summary of the theory behind STM operation.
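As context for the teaching-oriented theory summary mentioned above, the key textbook relation (not a result of this work) is the exponential dependence of the tunneling current on the tip-sample distance d at low bias V:
\[
I \propto V\,e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2 m_e \phi}}{\hbar} \approx 0.51\,\sqrt{\phi\,[\mathrm{eV}]}\ \mathrm{\AA}^{-1},
\]
so for a typical metal work function phi of 4-5 eV the current changes by roughly an order of magnitude per ångström of separation, which is what makes unit-cell and atomic resolution feasible and also what makes the instrument so sensitive to vibrations.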
Direction
VAZQUEZ RAMALLO, MANUEL (Tutorships)
MARTINEZ BOTANA, JOSE MARTIN (Co-tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Quantum information processing with high-dimensional single-photon states
Authorship
C.A.G.
Bachelor of Physics
Defense date
09.16.2024 17:00
Summary
The main purpose of this work is to provide an introduction to quantum information processing using high-dimensional single-photon states, that is, 1-qudits of dimension d. This high dimensionality is achieved by exciting photons in different optical modes, such as polarization or spin-momentum modes, vortex or angular momentum modes, and path or linear momentum modes, among others. The coupling between these modes allows opto-quantum transformations on single-photon states, which is applicable to quantum information processing with a reduced number of 1-qudits. The generation, transformation and detection of these states will be carried out by integrated photonic devices, i.e. optical chips that confine and guide optical modes. These devices, highly compatible with optical fibers and key elements in modern photonic quantum technology, will allow the implementation of universal sets of quantum logic gates, essential to execute simple quantum algorithms such as the Deutsch algorithm. Furthermore, these devices will facilitate the creation of opto-quantum measurement projectors, which are essential for applications in quantum cryptography. In this work, states with dimensions d=4 (1-ququart) and d=8 (1-quoctat) will be used, which are sufficient to illustrate what has been said above.
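As a purely illustrative sketch (not part of the thesis, with all names and numbers chosen here), the following Python snippet builds the discrete-Fourier-transform gate, a standard qudit generalization of the Hadamard gate, acting on a d = 4 single-photon state encoded in four optical modes:

import numpy as np

d = 4  # ququart dimension; the work also considers d = 8
omega = np.exp(2j * np.pi / d)

# DFT matrix: a common qudit generalization of the Hadamard gate
F = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)

# Single photon initially in mode |0>
psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0

psi = F @ psi0                                   # balanced superposition over the d modes
print(np.round(np.abs(psi) ** 2, 3))             # each mode carries probability 1/d
print(np.allclose(F.conj().T @ F, np.eye(d)))    # unitarity, as a lossless optical chip requires

On an integrated photonic chip such a unitary would be realized by a mesh of couplers and phase shifters acting on the guided modes.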
Direction
LIÑARES BEIRAS, JESUS (Tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
Parameterisation of a two-photon polymerization system
Authorship
D.L.F.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
In this work, we will investigate the feasibility of the two-photon polymerization system developed in the Photonics4life laboratory. This system has been assembled in the laboratory and is still in the testing phase. Two-photon polymerization is a technique that enables the fabrication of 3D structures at the micro- or even nanometre scale using a non-linear absorption process, namely two-photon absorption. In addition, we will characterise the system using 2D microstructures of interest, such as tori, diffraction gratings, and pillars. These structures will be designed using CAD software and characterised using optical and electron microscopes, as well as a confocal microscope. The characterisation process will provide valuable information on the morphology of the structures and their relationship to the available experimental parameters.
Direction
Gómez Varela, Ana Isabel (Tutorships)
BAO VARELA, Mª CARMEN (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Multiplicity estimation in proton-nucleus collisions in the LHCb experiment using machine learning algorithms
Authorship
U.S.C.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
The aim of the present work is to estimate the multiplicity in high-energy collisions of protons with lead nuclei (pPb) using data from the LHCb detector. For this purpose, two machine learning algorithms, linear regression and K-Nearest Neighbors (KNN), are used to study the relationship between subdetector occupancy and the number of generated particles.
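A minimal sketch of the kind of regression described above, with synthetic data standing in for the real subdetector occupancies (all variable names and numbers here are hypothetical, not taken from LHCb):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_events = 5000
multiplicity = rng.integers(10, 400, size=n_events).astype(float)
# Three toy "subdetector occupancies" that scale with multiplicity plus noise
occupancy = np.column_stack(
    [multiplicity * f + rng.normal(0.0, 10.0, n_events) for f in (0.8, 0.5, 0.3)]
)
X_tr, X_te, y_tr, y_te = train_test_split(occupancy, multiplicity, random_state=0)

for model in (LinearRegression(), KNeighborsRegressor(n_neighbors=20)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "R^2 =", round(model.score(X_te, y_te), 3))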
Direction
BELIN , SAMUEL JULES (Tutorships)
Sellam , Sara (Co-tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Application of Speckle Interferometry techniques for the study of femoral biomechanics
Authorship
J.M.M.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
Nowadays, the advanced prosthetics industry is undergoing significant technological development, driving research into the distribution of stresses in fundamental anatomical structures such as bones. This project explores the powerful metrological qualities of speckle, a common optical phenomenon when using a coherent light source and widely applicable in interferometric techniques. A theoretical and experimental development of three speckle correlation interferometry techniques is presented: out-of-plane, in-plane, and shearing. The latter will be used to begin a study of femoral biomechanics, since it is the most suitable for measuring deformations in materials.
Direction
MORENO DE LAS CUEVAS, VICENTE (Tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Dipolar braking radiation fundamentals in gases
Authorship
A.N.R.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
The recent measurement of dipolar (or neutral) braking radiation in electrified gases has received great attention, particularly in the context of operating time projection chambers dedicated to direct dark matter detection and neutrinoless double-beta decay (https://journals.aps.org/prx/abstract/10.1103/PhysRevX.12.021005). In this work, we provide a compilation of the existing theoretical framework in noble gases, along with recent experimental examples; and compare them with the most significant existing mechanisms of electromagnetic radiation.
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Fundamental aspects of Quantum Chromodynamics and the transition to Quark-Gluon Plasma
Authorship
V.D.G.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
This article presents the features of confinement and asymptotic freedom, which play an essential role in the understanding of Quantum Chromodynamics and help us to understand the existence of a new phase of matter, the quark-gluon plasma (QGP), which appears when the conditions are right: either a critical temperature is reached or the baryon density is very high. To obtain estimates of the thermodynamic coordinates at the phase transition, we will study the MIT bag model, which gives us, by means of simple statistical physics and hydrodynamic arguments, the values of the temperature at zero chemical potential and of the baryon density at zero temperature required for deconfinement to occur. We will construct the corresponding phase diagram.
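The kind of estimate the bag model provides at zero baryon chemical potential (standard textbook numbers, quoted here only to make the procedure concrete) comes from balancing the pressure of an ideal quark-gluon gas against the bag constant B:
\[
p_{\mathrm{QGP}} = g_{\mathrm{QGP}}\,\frac{\pi^2}{90}\,T^4 - B = 0
\;\;\Rightarrow\;\;
T_c = \left(\frac{90\,B}{g_{\mathrm{QGP}}\,\pi^2}\right)^{1/4},
\]
with g_QGP ≈ 37 for gluons plus two massless quark flavours; taking B^{1/4} ≈ 200 MeV gives T_c ≈ 140 MeV, the order of magnitude expected for deconfinement.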
Direction
GONZALEZ FERREIRO, ELENA (Tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Fuel Cells as Energy Vector Transformation Systems
Authorship
J.R.O.R.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
This undergraduate thesis will focus on fuel cells as a key technology for efficient and clean energy generation. A detailed study will be conducted on their operation, largely centered on the physical aspects thereof. Additionally, fuel storage will be explored, including physical methods, material-based methods, and adsorption and absorption technologies. The main types of fuel cells will be analyzed, evaluating their advantages and disadvantages in terms of efficiency, operating temperature, and specific applications, among other factors. This process will cover aspects such as the structure of fuel cells, their components, and the materials composing them. The current status and future prospects of fuel cells in automotive and power generation applications will be reviewed, highlighting current strengths of the technology and major outstanding challenges. Finally, a critical analysis will be conducted on the viability of this technology, considering its potential to contribute to the transition towards more sustainable energy systems and its role in the future global energy landscape.
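Two standard thermodynamic figures (general textbook relations, not specific to this thesis) frame the physical operation discussed above: the reversible cell voltage and the maximum thermodynamic efficiency,
\[
E_{\mathrm{rev}} = -\frac{\Delta G}{nF} \approx 1.23\ \mathrm{V},
\qquad
\eta_{\max} = \frac{\Delta G}{\Delta H} \approx \frac{237}{286} \approx 0.83,
\]
for the hydrogen-oxygen reaction at 25 °C, with n = 2 electrons per H2 molecule and F the Faraday constant; real cells operate below these limits because of activation, ohmic and mass-transport losses.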
Direction
TABOADA ANTELO, PABLO (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Characterization of a lubricant-surfactant system through experimental determinations of surface tension
Authorship
M.B.G.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
Lubricants are gaining importance in the global industry as they are an alternative for improving energy efficiency, sustainability and cost reduction. Currently, they are used in electric vehicles, which includes the application of low-viscosity oils to counteract the high speeds of the electric motor. Thus, the development of new lubricant formulations is essential. In this work, the behavior of various physical properties is studied: surface tension, density, speed of sound, adiabatic compressibility, and surface pressure, in a system composed of a lubricant and a surfactant. The lubricant is a polyalphaolefin, specifically PAO4, and the surfactant is oleic acid (OA). The aim of this study is to see how these physical properties vary for different proportions of PAO4 and OA. For this purpose, several samples with different concentrations of each substance will be prepared, and the surface tension, density, and speed of sound will be experimentally measured. Then, from these values, the adiabatic compressibility and the surface pressure will be obtained.
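The derived quantities mentioned above follow from standard relations: the Newton-Laplace equation links the adiabatic compressibility to the measured density and speed of sound, and the surface pressure is defined relative to the pure lubricant,
\[
\kappa_S = \frac{1}{\rho\,u^2},
\qquad
\pi = \gamma_{\mathrm{PAO4}} - \gamma_{\mathrm{mixture}},
\]
so each prepared concentration yields one value of kappa_S and one point on the pi versus OA-concentration curve.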
Direction
AMIGO POMBO, ALFREDO JOSE (Tutorships)
GINER RAJALA, OSCAR VICENT (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Fluidic lens molding
Authorship
S.S.F.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Fluidic molding is an innovative technique for lens manufacturing that utilizes fluid dynamics to achieve more precise results. It provides optical finishes without the need for post-treatment, in addition to a shorter and more economical procedure compared to the current lens manufacturing method. In this work, experimental conditions for the fabrication of spherical lenses using circular molds submerged in immiscible liquids were investigated. The goal was to achieve neutral buoyancy, which is crucial for obtaining spherical lenses without size restrictions. Through the injection and subsequent curing of base materials in the submerged molds, the production of lenses was achieved, which were then characterized. Three different base materials were studied, and various curing protocols and immiscible liquids were evaluated based on their specific properties. This work contributes to advancements in lens manufacturing technology, providing a solid foundation for future research in process optimization and the continuous improvement of the optical quality of produced lenses.
Direction
BAO VARELA, Mª CARMEN (Tutorships)
Gómez Varela, Ana Isabel (Co-tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Theoretical-computational study of transport in mixtures of ionic liquids with molecular solvents.
Authorship
M.A.B.F.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In the present undergraduate thesis (TFG), a bibliographic review of the current state of the art of anomalous transport in mixtures of ionic liquids with molecular solvents will be conducted. Additionally, molecular dynamics simulations of mixtures of ionic liquids and alcohols will be carried out, in which structural and dynamic (single-particle) properties will be analyzed to predict macroscopic equilibrium properties and transport in these systems. In particular, properties such as radial distribution functions and self-diffusion coefficients of the different species in the mixture will be calculated.
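The two properties named above are obtained from the simulated trajectories through standard definitions: the radial distribution function between species alpha and beta, and the Einstein relation for the self-diffusion coefficient,
\[
g_{\alpha\beta}(r) = \frac{\langle n_{\beta}(r, r+\Delta r)\rangle}{4\pi r^2\,\Delta r\,\rho_{\beta}},
\qquad
D_{\alpha} = \lim_{t\to\infty}\frac{1}{6t}\,\big\langle |\mathbf{r}_i(t)-\mathbf{r}_i(0)|^2 \big\rangle_{i\in\alpha},
\]
where n_beta counts beta-species neighbours of an alpha particle in a shell of width Delta r, and the average in D runs over particles of species alpha and over time origins.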
Direction
Montes Campos, Hadrián (Tutorships)
MENDEZ MORALES, TRINIDAD (Co-tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Broadband spectral interferometry
Authorship
J.O.I.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
Spectral interferometry is an experimental technique in which the light transmitted by an interferometer is not detected directly; instead, the interference pattern is resolved as a function of wavelength with the aid of a spectrometer. In this type of technique, either an ultra-short pulse laser or a broadband or white light source is used as an illumination source. In this Bachelor's Thesis, the latter case is considered and a complete analysis of this technique is carried out. The basics of spectral interferometry and the mathematical apparatus used to describe and extract information from it are introduced, and spectral images are obtained showing simple examples of the type of experiment that can be carried out in this context. Results obtained from different scientific and technological applications of this technique are also presented and discussed. This work provides a comprehensive view of the technique, from its theoretical foundations to its applications, demonstrating its relevance in the advancement of precision in optical measurements through an eminently experimental approach.
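The basic relation behind the technique (standard, and consistent with the description above) is that two fields with spectral densities S1 and S2 and relative delay tau produce a spectrum modulated by fringes,
\[
S(\omega) = S_1(\omega) + S_2(\omega) + 2\sqrt{S_1(\omega)\,S_2(\omega)}\,
\cos\!\big[\phi_2(\omega)-\phi_1(\omega)+\omega\tau\big],
\]
so the fringe period in angular frequency is 2π/τ and the spectral phase difference is recovered from the fringe positions, typically by Fourier filtering of the interferogram.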
Direction
DE LA FUENTE CARBALLO, RAUL (Tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Optimization of the Ds+ gamma spectrum using machine learning techniques in the LHCb experiment
Authorship
J.D.M.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this work, the spectrum resulting from the combination of Ds+ particles with photons will be studied. This combination opens the door to the study of various resonances, especially the Ds1(2460)+, which is a candidate tetraquark hadron (a hadron formed by 4 quarks) due to a series of qualities such as its small width and its proximity to the D(excited)K+ threshold. The LHCb experiment is in a privileged position to study this hadron due to its high production in proton-proton collisions at the LHC collider, but the large number of photons produced causes the Ds1(2460)+ to Ds+ gamma signal events to be masked by the background. In this work, machine learning techniques will be used to create a selection that reduces background events without significant signal loss. Thus, the work will consist of developing a classification algorithm that, after being trained with real and simulated data, can be applied to the data for selection. Finally, through a fit, the performance of this classifier will be verified.
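A minimal sketch of the classification step described above, using a gradient-boosted decision tree from scikit-learn on synthetic data (all variables and numbers are stand-ins chosen here; the actual analysis trains on LHCb real and simulated samples):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
# Toy discriminating variables: signal-like events (e.g. from simulation)
# versus background-like events (e.g. from data sidebands)
signal = rng.normal(loc=[2.0, 0.8, 1.5], scale=1.0, size=(n, 3))
background = rng.normal(loc=[1.0, 0.2, 0.5], scale=1.0, size=(n, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr, y_tr)

# Keep only events above a score threshold chosen to balance signal efficiency and background rejection
scores = clf.predict_proba(X_te)[:, 1]
print("signal efficiency   :", round((scores[y_te == 1] > 0.5).mean(), 3))
print("background rejection:", round((scores[y_te == 0] < 0.5).mean(), 3))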
Direction
SABORIDO SILVA, JUAN JOSE (Tutorships)
Cambón Bouzas, José Iván (Co-tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Characterization, simulation and optimization of a photovoltaic device.
Authorship
J.C.R.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Photovoltaic energy represents one of the main renewable energy resources and also one of the most established in the market. This article explores the fundamentals that govern the operation of a photovoltaic device, as well as the parameters that influence its performance. It also includes a brief review of the state of the art of this technology. The second part of the article consists of an experimental study of a photovoltaic device designed to harness monochromatic light, known as a power converter. The study includes the characterisation of the device and the simulation of its behaviour when illuminated with 808 nm monochromatic light. The experimental results are compared with reference studies and, finally, the device is optimized to achieve maximum efficiency.
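The performance figures extracted in such a characterisation are tied together by the standard conversion-efficiency relation,
\[
\eta = \frac{P_{\max}}{P_{\mathrm{in}}} = \frac{FF\,V_{oc}\,I_{sc}}{P_{\mathrm{in}}},
\qquad
FF = \frac{V_{mp}\,I_{mp}}{V_{oc}\,I_{sc}},
\]
where V_oc and I_sc are the open-circuit voltage and short-circuit current taken from the measured I-V curve, (V_mp, I_mp) is the maximum-power point, and P_in is the incident 808 nm optical power; optimizing the device amounts to maximizing this ratio.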
Direction
GARCIA LOUREIRO, ANTONIO JESUS (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Tribological properties of nanoparticle-modified transmission fluids
Authorship
F.G.B.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
In this Bachelor's Final Project, the tribological performance of a series of nanolubricants has been studied; they were prepared from a commercial formulated oil to which two different types of SiO2 nanoparticles were added: commercial nanoparticles and nanoparticles surface-functionalized with oleic acid (SiO2-OA). These nanoparticles have been characterized by Thermogravimetric Analysis (TGA) and Fourier-transform Infrared Spectroscopy (FTIR). To study the anti-friction and anti-wear capabilities of the nanolubricants, friction tests have been carried out in a tribometer, in ball-on-plate configuration and under pure sliding conditions. Despite not achieving improvements in the anti-friction capacity of the nanolubricants with respect to the formulated oil, an increase in the anti-wear capacity has been achieved. The best result was obtained with a concentration of 0.5 wt% of functionalized nanoparticles (SiO2-OA), which reduced the area of the wear track by 35%. The anti-wear mechanism provided by the nanoparticles has been analyzed using Raman spectroscopy of the worn surfaces.
Direction
AMIGO POMBO, ALFREDO JOSE (Tutorships)
GARCIA GUIMAREY, MARIA JESUS (Co-tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO , RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Analysis of statistical effects in the 'muon puzzle' in ultra-high-energy cosmic rays.
Authorship
S.B.L.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Cosmic rays can reach energies completely inaccessible to current particle accelerators, up to the order of 10^21 eV. The collision in the atmosphere of an ultra-high-energy cosmic ray with the nuclei present in the air produces a cascade of secondary particles, including muons, which are detected by cosmic ray observatories. Numerous experiments have pointed out a deficit in the muon number simulated using hadronic interaction models developed with data from the Large Hadron Collider (LHC), compared to the muon number observed experimentally. This is known as the 'muon puzzle' and constitutes one of the current enigmas in ultra-high-energy particle physics. On the other hand, other experiments did not detect this deficit, which increases the complexity of the matter. This study aims to analyze the impact of statistical effects on the prediction of the muon number in the cascades, in order to clarify the possible reasons for the discrepancies between different cosmic ray experiments.
Direction
CAZON BOADO, LORENZO (Tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Protein-stabilized metal clusters as contrast agents: synthesis and characterization
Authorship
G.F.S.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
In this work, the synthesis of iron nanoclusters embedded in bovine serum albumin, which is biocompatible and very similar to human serum albumin, was proposed with the aim of designing a viable nanosystem as a T1 contrast agent in nuclear magnetic resonance. A green synthesis process similar to that used for gold nanoclusters reduced with the same protein was chosen. To obtain the nanoclusters, iron chloride salts in a 2:1 ratio were used, and their formation was evaluated at four different iron concentrations. Three of these samples proved to be stable in solution, and a clear magnetic behaviour was observed in the two higher concentration samples. After purifying the samples, the characterization of the obtained systems was carried out, evaluating the size, shape, and molecular weight. Additionally, we verified the formation of iron oxides covalently bound to the protein and estimated the number of magnetite and serum albumin molecules in each system. We also studied the modification of the protein structure due to this binding and obtained the absorbance and fluorescence properties. Finally, their behaviour in the presence of a magnetic field was thoroughly analyzed. For all the samples, we confirmed the formation of nanosystems composed of iron oxides and several protein molecules, observing that the morphology of the systems varies according to the iron concentration. In the lowest concentration sample, we obtained spherical nanoparticles, while in the more concentrated samples, nanoclusters embedded in fibers were formed, which exhibited fluorescence and superparamagnetic properties. Therefore, their use as a contrast agent would be feasible.
Direction
PARDO MONTERO, ALBERTO (Tutorships)
Ogando Cortés, Alejandro (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Computational and Experimental Strategies for the Integration of Biomaterials into Microfluidic Devices
Authorship
J.M.R.V.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
The integration of biomaterials in microfluidic devices has shown significant potential for advancing tissue engineering applications. The advancement of hydrogel microfibers with cells in this field depends on biotechnological and computational innovations, as well as a deep understanding of physical laws. This project focuses on computational and experimental strategies to optimize the incorporation of microfibers formed by U-87 cells and astrocytes suspended in alginate within these devices. The research is divided into three main sections. The first section involves the computational modeling of the experimental setup, incorporating geometries, physical conditions, and boundary conditions to closely replicate the real-world environment. The second section focuses on the discretization and assembly of the finite element method (FEM) to solve the mathematical model. Finally, the results will be interpreted to evaluate the effectiveness of both the model and the computational methods. The study examines various flow rates (5, 10, 25, and 50 microL/min) and their effects on the formation and stability of the fibers. The results highlight significant differences between the simulated and real-world data, particularly in the core/shell ratios. The research demonstrates the scalability and uniformity of fiber production using microfluidic devices. The findings support the viability of these microfibers for clinical and research applications, opening new possibilities for continuous improvement and customization of fiber designs.
Direction
Rial Silva, Ramón (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Eliminating intrinsic bias in the measurement of the CP-violating phase alpha in B0 (rho pi)0 decays
Authorship
P.E.M.P.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
Given that the amount of matter-antimatter asymmetry currently measured in the Standard Model is too small, by at least 9 orders of magnitude, to produce the matter-dominated Universe we live in, it is imperative to search for new sources of CP violation at the microscopic level. The phase alpha (also known as phi2) of the best-studied unitarity triangle (with phases alpha, beta, gamma) is currently the least known input to constrain possible contributions beyond the Kobayashi-Maskawa theory of the Standard Model, which is the main source of matter-antimatter asymmetry in that theory. This ultimately limits sensitivity to new higher-order processes in B0-Bbar0 mixing of the B meson, which induce CP violation through interference with the phase of b to u decays from which alpha derives. The decay B0 to (rho pi)0 is one of the most promising channels in which to study alpha, as the level of its theory uncertainty is beneath the experimental reach of any current or future planned experiment. However, the experimental parameterisation by which alpha is currently extracted in such decays is known to be biased, considering the many compromises upon which it is based. This project aims to quantify exactly the size of the bias in current measurements relative to the theoretical uncertainty and to propose an alternative unbiased estimator for alpha in B0 to (rho pi)0 decays going forward.
Direction
ADEVA ANDANY, BERNARDO (Tutorships)
DALSENO , JEREMY PETER (Co-tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Implementation of Machine Learning methods for the prediction of daily maximum and minimum temperatures in the Galician territory
Authorship
B.D.V.F.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
In this Bachelor's thesis, Machine Learning techniques will be applied to the field of meteorology, specifically focusing on post-processing the data obtained from numerical models before they are presented to the public. This task, generally performed by trained meteorologists, involves refining the information for better accuracy. The primary tool for this work will be Keras, a library specifically designed for creating deep learning networks, which will be executed using TensorFlow. Utilizing this library, various neural networks will be built with the aim of achieving the mentioned post-processing. The different models will have similar inputs (a selection of various variables provided by the numerical models) and the same output (daily maximum and minimum temperatures). The ultimate goal of this project is to transform this tool into a functional add-on that enhances the necessary corrections meteorologists must make on a daily basis regarding temperature predictions.
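A minimal sketch of the kind of Keras network described above, trained on synthetic predictors standing in for the numerical-model variables (all shapes, names and numbers here are hypothetical):

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(2)
n_days, n_features = 2000, 8
X = rng.normal(size=(n_days, n_features)).astype("float32")
# Two targets per day: maximum and minimum temperature (synthetic here)
y = np.column_stack([
    X @ rng.normal(size=n_features) + 20.0,
    X @ rng.normal(size=n_features) + 10.0,
]).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(2),  # outputs: [T_max, T_min]
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, mean absolute error]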
Direction
Martinez Hernandez, Diego (Tutorships)
VILLARROYA FERNANDEZ, SEBASTIAN (Co-tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Biological modeling of toxicity in radiotherapy
Authorship
S.R.G.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
Radiotherapy is a type of cancer treatment whose objective is to administer a high dose of radiation to the tumor while minimizing the dose received by surrounding organs, thereby reducing toxicity. Toxicity refers to the adverse side effects that radiation causes in the tissues and organs near the tumor. Understanding the dependence of toxicity on the received dose and the parameters that characterize the response of organs is important for designing treatments that minimize toxicity. In this study, a mechanistic radiobiological model of toxicity was developed, characterizing the probability of toxicity in an organ based on the dose it receives and the parameters that determine its response, with special attention to the so-called volume effect of organs.
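As a point of reference for the volume effect mentioned above, a widely used phenomenological description (the Lyman-Kutcher-Burman model, cited here only for context; the thesis develops its own mechanistic model) reduces the dose distribution in the organ to a generalized equivalent uniform dose and maps it to a toxicity probability:
\[
\mathrm{gEUD} = \Big(\sum_i v_i\,D_i^{1/n}\Big)^{n},
\qquad
\mathrm{NTCP} = \Phi\!\left(\frac{\mathrm{gEUD}-TD_{50}}{m\,TD_{50}}\right),
\]
where v_i is the fractional organ volume receiving dose D_i, TD50 and m set the position and slope of the dose-response curve, Phi is the standard normal cumulative distribution, and n quantifies the volume effect: n close to 1 describes parallel-like organs and n close to 0 serial-like organs.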
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Pardo Montero, Juan (Co-tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Notions of quantum cryptography
Authorship
R.A.R.
Double bachelor degree in Mathematics and Physics
Defense date
09.12.2024 16:00
Summary
Quantum communication emerged contemporaneously with classical Information Theory, with a greater potential for computation but a handicap when it comes to physical transmission. The present manuscript seeks to compare these two ways of sending information and to present the mathematical formalism behind quantum mechanics, as well as the most relevant protocols of this newly adapted cryptography and the first quantum error-correcting codes.
Direction
GAGO COUSO, FELIPE (Tutorships)
Court
OTERO ESPINAR, MARIA VICTORIA (Chairman)
GONZALEZ DIAZ, JULIO (Secretary)
Jeremías López, Ana (Member)
Quantum machine learning for variational problems
Authorship
R.A.R.
Double bachelor degree in Mathematics and Physics
Defense date
09.16.2024 17:00
Summary
The Heisenberg-Ising, or XXZ, Hamiltonian aims to model magnetism in materials where the predominant effect is the interaction between spins. The most interesting components of this Hamiltonian are its anisotropy parameter Delta and its external magnetic field lambda, which modify the ground-state energy profile and its quantum phase. We will present two variational algorithms to approximate the ground state for Delta in the range between -1 and 1: the first of them is a type of VQE (Variational Quantum Eigensolver) with a novel approach in which the anisotropy parameter is included among the variables to optimize; the second is an HVA (Hamiltonian Variational Ansatz), a method based on adiabatic quantum computing that takes into account the physics of the XXZ Hamiltonian to reach its ground state.
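For concreteness, the Hamiltonian referred to above can be written for a chain of N spins as (up to sign and normalization conventions)
\[
H_{\mathrm{XXZ}} = \sum_{i=1}^{N-1}\big(\sigma^x_i\sigma^x_{i+1}+\sigma^y_i\sigma^y_{i+1}
+\Delta\,\sigma^z_i\sigma^z_{i+1}\big)+\lambda\sum_{i=1}^{N}\sigma^z_i,
\]
where Delta is the anisotropy parameter and lambda the external field; both variational circuits are optimized so that the expectation value of H_XXZ in the prepared state approaches the ground-state energy.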
Direction
MAS SOLE, JAVIER (Tutorships)
Gómez Tato, Andrés (Co-tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
Search for potential sexaquarks through the LHCb experiment
Authorship
B.F.R.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Sexaquarks (tightly bound uuddss states) could be promising dark matter (DM) candidates, and their existence could potentially resolve the muon g-2 anomaly. In the present project I explore through machine learning a possible decay (Xi b baryon decaying to a sexaquark and a Lambda c baryon) that could lead to the synthesis of sexaquarks, explaining DM partially or totally. The objective is to determine whether it is feasible to search for sexaquarks (S) as missing energy in the decays produced in the LHCb experiment.
Direction
CID VIDAL, XABIER (Tutorships)
VIEITES DIAZ, MARIA (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Design proposals and study of the corresponding thermal dynamics of non-suspended superconducting sensors with sensitivity to single and multiple photons.
Authorship
C.L.A.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
Transition-edge sensors (TES) are capable of detecting very low radiation intensities, down to a single photon, but face challenges related to rapid heat dissipation when fabricated on conventional substrates. This work therefore explores layered materials able to channel heat in the planar direction, slowing thermal dissipation. Three configurations are proposed and analyzed: A, NbSe2 on a 2M-WS2 substrate; B, YBCO on 2M-WS2; and C, a sensor made entirely of 2M-WS2. We determine the optimal operating temperatures and apply a theoretical thermal-dynamics model to each case. Case A shows promising results with single-photon detection, while case B achieves sensitivity down to 15 photons, demonstrating the potential of TES sensors based on these materials.
Direction
VAZQUEZ RAMALLO, MANUEL (Tutorships)
MARTINEZ BOTANA, JOSE MARTIN (Co-tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO , RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Study and modeling of tumor growth through mathematical and computational models
Authorship
L.C.L.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
This work studies how tumors develop, analyzing how they can evolve over time and under different treatments. For this purpose, several mathematical models are examined and complemented with computational techniques to assess the viability of those models.
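A minimal example of the kind of model referred to above, assuming a Gompertz growth law with a simple constant-rate treatment term (the equation choice and all parameter values are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gompertz growth with an optional constant-rate kill term modelling treatment
r, K, c = 0.3, 1e3, 0.15   # growth rate, carrying capacity, kill rate under treatment

def gompertz(t, v, treated):
    growth = r * v[0] * np.log(K / v[0])
    kill = c * v[0] if treated else 0.0
    return [growth - kill]

t_eval = np.linspace(0, 60, 200)
untreated = solve_ivp(gompertz, (0, 60), [10.0], args=(False,), t_eval=t_eval)
treated = solve_ivp(gompertz, (0, 60), [10.0], args=(True,), t_eval=t_eval)
print("final volume without treatment:", untreated.y[0, -1])
print("final volume with treatment:", treated.y[0, -1])
```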
Direction
RUSO VEIRAS, JUAN MANUEL (Tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Fundamental physics and cosmic particles
Authorship
T.T.R.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
Although the primary goal of the Pierre Auger Observatory is to determine the origin and nature of ultra-high-energy cosmic rays, there is also the opportunity to test aspects of fundamental physics at energies unattainable by accelerators such as the LHC. This work will begin with a review of what cosmic rays are, how the cascades they generate propagate through the atmosphere, and how they are detected, with a particular focus on the operation of the Observatory. Following this, we will focus on explaining how data from this Observatory can be used to extract information about one of the fundamental symmetries of nature: Lorentz invariance, along with its possible violation. We will also present the information that can be obtained about the dark matter of the Universe in its super-heavy version.
Direction
PARENTE BERMUDEZ, GONZALO (Tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO , RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Development and optimization of tools of interest in the interaction between amphiphilic molecules. Experimental validation in biophysical systems
Authorship
A.V.B.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
In this project, we developed a neural network to predict the affinity between amphiphilic substances. We studied the interaction of different molecular compounds with albumin, a protein that, among many biological roles, is responsible for distributing some drugs through the bloodstream. Using values from the BindingDB database, we trained a model with good predictive ability on results not included in the training set. Moreover, the interaction between albumin and two other substances was measured by isothermal titration calorimetry (ITC) to validate the neural network results. The affinity of resveratrol and melatonin for bovine albumin was measured for the first time using ITC, which represents an important novelty. Additionally, the use of neural networks with many descriptors was optimized, allowing the identification of the descriptors that are crucial in biochemical interactions. This advance is noteworthy because, although one might initially think that neural networks only indicate how strongly two species interact, it has been shown that they can also provide valuable information about how these interactions occur, opening new possibilities in the study of complex biochemical processes.
Direction
Martinez Hernandez, Diego (Tutorships)
DOMINGUEZ ARCA, VICENTE (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Mathematical methods of Artificial Intelligence
Authorship
V.O.Z.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2024 12:30
Summary
Artificial intelligence (AI) has been one of the major technological advances of recent years. In this work, we show how mathematics plays a crucial role in its development. We begin by formally introducing artificial neurons and neural networks, starting from their analogy with biological neurons. The Universal Approximation Theorem and its subsequent generalization are proved, showing how a neural network can approximate any continuous function under minimally restrictive conditions on its architecture. The backpropagation algorithm is also introduced. Finally, we present a few examples of AI applications in different fields.
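As a compact illustration of the two ideas mentioned above (approximating a continuous function and training by backpropagation), the following NumPy sketch fits a one-hidden-layer network to sin(x) by gradient descent; the architecture and learning rate are arbitrary choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh activation: y_hat = tanh(x W1 + b1) W2 + b2
n_hidden, lr = 20, 0.05
W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)

for step in range(5000):
    h = np.tanh(x @ W1 + b1)              # forward pass
    y_hat = h @ W2 + b2
    err = y_hat - y                       # proportional to dL/dy_hat for the MSE loss
    # Backpropagation: chain rule applied layer by layer
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)        # derivative of tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

final_mse = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print("final MSE:", float(final_mse))
```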
Direction
Nieto Roig, Juan José (Tutorships)
Court
FEBRERO BANDE, MANUEL (Chairman)
BUEDO FERNANDEZ, SEBASTIAN (Secretary)
RODRIGUEZ GARCIA, JERONIMO (Member)
Numerical simulation code for the correction of saturation factors of ionization chambers for dosimetry in radiotherapy.
Authorship
N.R.G.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Ultra-high dose rate (UHDR) therapy has garnered significant interest in recent years due to the identification of advantages in the radiobiological effect of this type of irradiation (FLASH effect). In this work, the effect of charge multiplication in ionization chambers was experimentally measured. Additionally, it was investigated whether these results coincided with theoretical predictions using a numerical simulation code to evaluate the multiplication effect in saturation curves. The dependence of saturation factors in ionization chambers on absolute pressure in UHDR was also studied, an aspect not covered in any of the current international protocols, such as TRS398 [1] or TG51 [2].
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Paz Martín, José (Co-tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Synthesis and characterization of core-shell structured magnetic nanoparticles with potential theranostic applications
Authorship
I.P.R.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
In this Final Degree Project, the properties of core-shell structured magnetic nanoparticles (MNPs) are analyzed in view of their potential application as theranostic agents. For this purpose, a theoretical framework is first constructed to present the current scientific knowledge in nanomagnetism and the special characteristics of core-shell nanostructures. Subsequently, the experimental procedures are described. The MNPs are synthesized through two-step processes, then washed, and finally functionalized to render them water-soluble. Their physicochemical properties are studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), and dynamic light scattering (DLS), obtaining basic parameters of the MNPs such as size, morphology, crystalline structure, and colloidal stability. The characteristics of the different types of synthesized MNPs are compared with one another and with previously reported works in the field.
Direction
PARDO MONTERO, ALBERTO (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Astrophysics with the Cosmic Ray Pierre Auger Observatory
Authorship
B.G.H.
Bachelor of Physics
Defense date
07.01.2024 10:30
Summary
Taking Pierre Auger Collaboration (2022) as a reference, we reproduce the results of that article and analyze the anisotropy in the distribution of cosmic rays with energies greater than 32 EeV (1 EeV = 10^18 eV), thereby minimizing deviations due to magnetic fields and providing more precise information on their arrival directions. A more detailed study of the event and galaxy catalogs used is conducted, focusing on starburst galaxies. Additionally, the temporal exposure map of the Pierre Auger Observatory is calculated, and an algorithm is developed to compare the observed distribution with isotropically simulated skies using Monte Carlo methods. Two regions are studied: one near Centaurus A and another centered on the starburst galaxy NGC 253. The results reveal a significant overdensity near Centaurus A, whereas NGC 253 exhibits a smaller excess, consistent with the findings of the Pierre Auger Collaboration. Finally, large-scale anisotropy is investigated, suggesting an extragalactic origin of these particles that becomes more pronounced at higher energies.
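A toy version of the Monte Carlo comparison described above might look as follows; the exposure function, sample size and search radius are placeholders for illustration, not the values or exposure used by the Pierre Auger Collaboration:

```python
import numpy as np

rng = np.random.default_rng(1)

def isotropic_directions(n):
    """Uniform points on the sphere (right ascension and declination in radians)."""
    ra = rng.uniform(0, 2 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1, 1, n))
    return ra, dec

def toy_exposure(dec):
    """Placeholder declination-dependent exposure (NOT the real Auger exposure)."""
    return np.clip(np.cos(dec - np.radians(-35)), 0, None)

def angular_sep(ra1, dec1, ra2, dec2):
    return np.arccos(np.clip(np.sin(dec1) * np.sin(dec2)
                             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1, 1))

target_ra, target_dec = np.radians(201.4), np.radians(-43.0)   # roughly Centaurus A
n_events, radius, n_sims = 2600, np.radians(27), 2000          # illustrative numbers

counts = np.empty(n_sims)
for i in range(n_sims):
    ra, dec = isotropic_directions(4 * n_events)
    keep = rng.uniform(0, 1, ra.size) < toy_exposure(dec) / toy_exposure(dec).max()
    ra, dec = ra[keep][:n_events], dec[keep][:n_events]
    counts[i] = np.sum(angular_sep(ra, dec, target_ra, target_dec) < radius)

print(f"isotropic expectation in the window: mean = {counts.mean():.1f}, "
      f"99th percentile = {np.percentile(counts, 99):.0f}")
# An observed count above the quoted percentile would indicate an overdensity at that level.
```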
Direction
PARENTE BERMUDEZ, GONZALO (Tutorships)
Court
LOPEZ LAGO, MARIA ELENA (Chairman)
PARDO CASTRO, VICTOR (Secretary)
CASTRO PAREDES, FRANCISCO JAVIER (Member)
Monte Carlo simulation of a hybrid PET/MR scanner
Authorship
E.L.M.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Hybrid PET/MR technology plays a fundamental role in preclinical biomedical research: the fusion of distribution maps of compounds produced with positron emission tomography (PET) and anatomical images created by magnetic resonance imaging (MR) provides valuable information about the pharmacokinetics and biodistribution of new drugs. However, the process of creating these images is affected by multiple sources of uncertainty. To study the influence of the variables and physical phenomena involved in the operation of a PET scanner, Monte Carlo simulations are employed. In this study, we validated a Monte Carlo simulation model of the Bruker PET/MR 3T hybrid scanner. We used SimPET as the simulation tool, and during the validation we verified that the images obtained with this platform reproduce the real ones. To do this, we compared some of the magnitudes that characterize the performance of the simulated scanner (resolution and sensitivity) with those obtained from experimental measurements.
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Aguiar Fernández, Pablo (Co-tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Implementation of an algorithm for muon identification in the LHCb experiment.
Authorship
C.C.B.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The LHCb experiment is one of the largest experiments at the LHC (Large Hadron Collider), and its principal objective is to study particles containing bottom quarks. The aim of this study is to find a method to distinguish between muons and protons produced in collisions in the collider. To this end, an algorithm is implemented using data from the muon signal and from the background, which is mostly composed of protons. The algorithm computes a chi-squared, for both signal and background, between tracks reconstructed by the LHCb SciFi tracker for the muon trajectories and tracks built from the hit coordinates recorded in the muon chambers. By histogramming the chi-squared values of each track, for muons and for protons, the separation power of this quantity between the two particle types can be observed; this separation serves as a method for muon identification. The algorithm also accounts for the multiple scattering that charged particles undergo as they traverse the detector, so a second algorithm, identical except for the multiple-scattering term, is implemented in order to compare the results of both. To quantify this comparison, ROC curves are produced for each algorithm. Finally, the identification efficiency of the two algorithms is studied in different intervals of momentum and transverse momentum, showing in which intervals the algorithm performs best.
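A toy sketch of how a chi-squared discriminant can be turned into a ROC curve; the chi-squared distributions below are invented stand-ins, not LHCb data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy chi2 distributions: muons (signal) peak at low chi2, protons (background) higher
chi2_signal = rng.chisquare(df=4, size=20000)
chi2_background = rng.chisquare(df=4, size=20000) * 4.0

thresholds = np.linspace(0, 60, 300)
eff_signal = [(chi2_signal < t).mean() for t in thresholds]        # true-positive rate
eff_background = [(chi2_background < t).mean() for t in thresholds]  # false-positive rate

# Area under the ROC curve as a single separation figure of merit
auc = np.trapz(eff_signal, eff_background)
print("toy ROC AUC:", auc)
```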
Direction
CID VIDAL, XABIER (Tutorships)
Casais Vidal, Adrián (Co-tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
4D bioprinting and nanostructuring of scaffolds for tissue regeneration
Authorship
A.C.Y.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The development of 3D printing technology has made it possible to reproduce complex and detailed structures, which leads to numerous applications, including regenerative medicine through the design of customised scaffolds. The combination of different printing techniques with the use of bioinks allows the manufacture of scaffolds with high potential in tissue regeneration, offering properties such as greater biological compatibility and ease of cell incorporation. One step further is the incorporation of the temporal dimension in 4D bioprinting, which adds a layer of functionality by allowing scaffolds to evolve in response to stimuli. Controlling the nanoscale properties of the scaffolds and the correct sensitisation of the structures is fundamental for obtaining functional and efficient implants. This work reviews the different bioprinting techniques, the composition of the most commonly used bioinks and the stimulation of scaffolds, and finally condenses this information by presenting some specific applications in bone, cardiovascular and nervous tissue regeneration.
Direction
TABOADA ANTELO, PABLO (Tutorships)
TOPETE CAMACHO, ANTONIO (Co-tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Estimation of the number of partonic interactions in proton-proton collisions at √s = 5 TeV with machine learning methods
Authorship
R.M.G.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
Protons are composite particles. Their partonic content (quarks and gluons) depends on their energy. In a collision between two protons, a varying number of interactions can occur between the partons that constitute them. This work aims to infer the number of partonic interactions in proton-proton collisions at the CERN LHC, at an energy of 5 TeV in the center of mass, from experimental observables obtained with the spectrometer of the LHCb collaboration. To do this, different machine learning algorithms will be used and their hyperparameters will be optimized using simulated data.
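A hedged sketch of the hyperparameter-optimization step, using scikit-learn on invented stand-in observables (the real analysis uses LHCb simulation, its own observables and its own choice of algorithms):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(3)
# Toy stand-in for simulated events: observables vs the true number of
# partonic interactions they were generated with
n_events = 5000
n_interactions = rng.integers(1, 8, n_events)
observables = np.column_stack([
    n_interactions * 10 + rng.normal(0, 5, n_events),    # e.g. charged multiplicity
    n_interactions * 2.5 + rng.normal(0, 2, n_events),   # e.g. summed transverse energy
])

X_train, X_test, y_train, y_test = train_test_split(observables, n_interactions, test_size=0.3)

search = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    cv=3,
)
search.fit(X_train, y_train)
print("best hyperparameters:", search.best_params_)
print("test R^2:", search.best_estimator_.score(X_test, y_test))
```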
Direction
GALLAS TORREIRA, ABRAHAM ANTONIO (Tutorships)
CORREDOIRA FERNANDEZ, IMANOL (Co-tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Obtaining reformatted images from data volumes.
Authorship
N.A.L.P.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The objective of this work is to develop a method for extracting 2D images from data volumes, minimizing resolution loss and artifacts. This need arises from the fact that it is easier for the human eye to work in 2D. Data from computed tomography (CT) will be used, stored in NPY files containing three-dimensional matrices of attenuation coefficient intensities. Initially, 3D virtual images (Phantoms) created for this purpose will be used for intuitive testing, aimed at developing the method. To model the procedure, a 3D plane is defined using a normal vector and a point on the plane. A condition is sought to implement this plane in the data volume, identifying points that satisfy the plane equation. Additionally, PSNR (Peak Signal-to-Noise Ratio) will be used to evaluate the quality of the obtained images. PSNR is calculated from the mean squared error (MSE) between the obtained image and a reference image. The development of the proposed method consists of three main parts: i) obtaining the data volume, ii) defining the target plane, and iii) generating the 2D image representing the slice using the cutting function based on the obtained points once the target plane is defined. The images will be compared both qualitatively (visually) and quantitatively using the 3D Slicer software. Preliminary results suggest that the proposed method is valid. However, it will be necessary to implement some improvements related to the inclusion of smoothing/interpolation techniques and optimization of execution time to make the proposed method applicable in daily clinical practice.
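A minimal NumPy sketch of the two ingredients described above, slicing a volume with a plane defined by a point and a normal vector and scoring the result with PSNR; the nearest-neighbour sampling and the spherical phantom are simplifications for illustration, not the method developed in the thesis:

```python
import numpy as np

def extract_slice(volume, point, normal, size=64, spacing=1.0):
    """Sample a 2D slice of `volume` on the plane through `point` with normal `normal`
    (nearest-neighbour interpolation; samples outside the volume are set to 0)."""
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    helper = np.array([1.0, 0, 0]) if abs(normal[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(normal, helper); u /= np.linalg.norm(u)     # in-plane axis 1
    v = np.cross(normal, u)                                  # in-plane axis 2
    idx = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(idx, idx)
    coords = point + uu[..., None] * u + vv[..., None] * v
    ijk = np.rint(coords).astype(int)
    inside = np.all((ijk >= 0) & (ijk < np.array(volume.shape)), axis=-1)
    out = np.zeros((size, size), dtype=volume.dtype)
    out[inside] = volume[ijk[inside, 0], ijk[inside, 1], ijk[inside, 2]]
    return out

def psnr(image, reference):
    """Peak signal-to-noise ratio computed from the mean squared error."""
    mse = np.mean((image.astype(float) - reference.astype(float)) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse) if mse > 0 else np.inf

# Toy phantom: a bright sphere inside a 3D volume, as if loaded from an NPY file
z, y, x = np.mgrid[0:64, 0:64, 0:64]
phantom = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)
axial = extract_slice(phantom, np.array([32, 32, 32]), [0, 0, 1])
oblique = extract_slice(phantom, np.array([32, 32, 32]), [0, 0.2, 1])
print("PSNR of the axial cut:", psnr(axial, phantom[:, :, 32]))
print("PSNR of a tilted cut vs. the axial reference:", psnr(oblique, phantom[:, :, 32]))
```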
Direction
GARCIA TAHOCES, PABLO (Tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Study of decays B0 to D*-pi+pi-pi+ in the LHCb experiment at CERN
Authorship
C.L.A.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The study of semileptonic decays of B hadrons is currently of great interest because it could reveal contributions from new physics. There is a persistent tension between experimental measurements and Standard Model predictions for the ratios of decay rates R(D) and R(D*). This discrepancy could either confirm or reject lepton flavor universality, potentially opening the door to new physics beyond the current model. In this study, we investigate the decay B0 to D*-pi+pi-pi+ using LHCb data from 2018 and Monte Carlo data from the period 2016-2018. The objective is to select events while maintaining a high reconstruction efficiency and rejecting background events, thereby determining the number of reconstructed events. As a secondary aim, we compare the obtained value of the B0 meson mass with existing experimental measurements. The results of this analysis will be relevant for future studies within the framework of lepton flavor universality with tau leptons at LHCb. After introducing the Standard Model and the LHCb experiment, we study two multivariate analysis algorithms to optimize the discrimination between signal and background. The best-performing model, combined with the optimal cut estimation, yields a signal-to-background ratio of S/B = 905 +/- 16, representing a relative improvement of 75 percent. The selection efficiency of this technique is estimated at (98.54 +/- 0.49) percent. The measured B0 meson mass, m(B0) = (5279.760 +/- 0.052) MeV/c^2, is consistent with existing experimental values.
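As an illustration of the optimal-cut step mentioned above, a toy scan of a classifier-score threshold that maximizes S/sqrt(S+B); the score distributions and yields are invented, not the thesis selection:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy discriminant distributions (e.g. a multivariate classifier output):
# signal peaks near 1, background near 0
score_sig = rng.beta(5, 2, 20000)
score_bkg = rng.beta(2, 5, 200000)
n_sig_expected, n_bkg_expected = 1000, 20000   # assumed yields before the cut

cuts = np.linspace(0, 1, 200)
fom = []
for c in cuts:
    s = n_sig_expected * (score_sig > c).mean()
    b = n_bkg_expected * (score_bkg > c).mean()
    fom.append(s / np.sqrt(s + b) if s + b > 0 else 0.0)

best = cuts[int(np.argmax(fom))]
print("optimal cut:", best, " significance at the optimum:", max(fom))
```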
Direction
ROMERO VIDAL, ANTONIO (Tutorships)
NÓVOA FERNÁNDEZ, JULIO (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Optimization of Magnetic Nanoparticles for Cancer Treatment through Hyperthermia
Authorship
V.O.Z.
Double bachelor degree in Mathematics and Physics
Defense date
07.18.2024 09:30
Summary
The treatment of cancer through magnetic hyperthermia has generated great expectations in recent years. In this work, we review the theoretical framework and approaches that explain how applying an alternating magnetic field to a system of magnetic nanoparticles leads to heat dissipation, which provokes cellular apoptosis. Taking Brezovich's criterion (which sets safe field conditions for in vivo applications) as a reference, a computational study is conducted of the shapes and sizes that optimize the dissipated heat, first assuming a system of non-interacting nanoparticles and then a system with interactions. In both cases, a clear dependence on the particle ratio is found, as well as higher performance for cubic particles with a side close to 20 nm. Furthermore, the inability of the uniaxial-anisotropy approximation to correctly model this system is shown, in favor of a balance between uniaxial and cubic magnetocrystalline anisotropy. The simulations are based on solving the Landau-Lifshitz-Gilbert equation with the OOMMF software developed by NIST, using the computational resources provided by CESGA.
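A single-macrospin toy of the Landau-Lifshitz-Gilbert dynamics underlying such simulations (a crude explicit-Euler stand-in for OOMMF, with one uniaxial particle only; field amplitude, frequency, damping and anisotropy values are illustrative, not those of the thesis):

```python
import numpy as np

gamma, alpha = 1.76e11, 0.1                   # gyromagnetic ratio (rad/s/T), Gilbert damping
H0, freq, psi = 0.02, 500e3, np.radians(5)    # AC field amplitude (T), frequency, tilt angle
HK = 0.01                                     # anisotropy field (T), easy axis along z
dt = 2e-11
n_cycle = int(1 / freq / dt)

def llg_rhs(m, h):
    """dm/dt for the unit magnetisation m in the effective field h (Tesla)."""
    return -gamma / (1 + alpha**2) * (np.cross(m, h) + alpha * np.cross(m, np.cross(m, h)))

m = np.array([0.0, 0.0, 1.0])
mz, hz = [], []
for i in range(2 * n_cycle):                  # two cycles: transient + steady loop
    ac = H0 * np.cos(2 * np.pi * freq * i * dt)
    h = np.array([ac * np.sin(psi), 0.0, ac * np.cos(psi) + HK * m[2]])
    m = m + dt * llg_rhs(m, h)                # explicit Euler step (toy integrator)
    m /= np.linalg.norm(m)                    # keep |m| = 1
    mz.append(m[2]); hz.append(ac * np.cos(psi))

# Area of the m_z vs H_z hysteresis loop over the last cycle ~ heat released per cycle
area = -np.trapz(mz[n_cycle:], hz[n_cycle:])
print("hysteresis loop area (arb. units):", area)
```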
Direction
SERANTES ABALO, DAVID (Tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Review of Models for Plastic Degradation in Marine Environments
Authorship
A.R.D.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
The presence of plastics in our daily lives is increasing, leading to growing problems related to their waste management. This phenomenon results in large quantities of plastics ending up in the marine environment each year; however, their ultimate fate remains uncertain. To address this issue, we will study how different factors cause chemical and physical changes in plastics. These factors include photodegradation, thermal oxidation, mechanical degradation, biodegradation and biofouling. Each of these processes impacts the lifespan and final destiny of plastics in the marine environment. To better understand these processes, we will rely on various mathematical models that explain the interactions and mechanisms behind each type of degradation. These models allow us to predict the behavior of plastics under different environmental conditions, providing us with insights into the life of plastics once they enter the marine environment.
Direction
Pérez Muñuzuri, Vicente (Tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Computational study of hybrid water-in-salt electrolytes
Authorship
D.A.F.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this undergraduate thesis, a bibliographic review of the state of the art of molecular dynamics simulation and its application to modeling the physical properties of systems such as water-in-salt (WiS) electrolytes is presented. We study different hybrid electrolytes obtained by adding an organic solvent (acetonitrile) to several WiS electrolytes (LiTFSI + H2O, 21m; NaTFSI + H2O, 8m; and NaOTF + H2O, 9m), analyzing the effect of the solvent concentration and the influence of the salt's anion and cation. In particular, we examine the structure and single-particle dynamics of these mixtures and evaluate and compare their performance for use in energy storage applications. The study includes radial distribution functions of the different species, the free water fraction, coordination numbers, and other main single-particle dynamics parameters.
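A minimal NumPy sketch of one of the analysis quantities listed above, the radial distribution function with the minimum-image convention; the random configuration stands in for a real MD trajectory frame and the box size is arbitrary:

```python
import numpy as np

def radial_distribution(positions, box_length, n_bins=100, r_max=None):
    """g(r) for one species in a cubic box with periodic boundary conditions."""
    n = len(positions)
    r_max = r_max or box_length / 2
    edges = np.linspace(0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)        # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    # Normalise by the ideal-gas expectation in each spherical shell
    shell_vol = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box_length ** 3
    g = 2 * hist / (n * density * shell_vol)              # factor 2: each pair counted once
    return 0.5 * (edges[1:] + edges[:-1]), g

rng = np.random.default_rng(5)
box = 30.0                                                # box side (arbitrary units)
pos = rng.uniform(0, box, (500, 3))
r, g = radial_distribution(pos, box)
print("g(r) at large r (should be ~1 for random positions):", g[-5:].round(2))
```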
Direction
Montes Campos, Hadrián (Tutorships)
MENDEZ MORALES, TRINIDAD (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Toxicity analysis of nanoparticles with high interest in the field of lubrication and biomedicine
Authorship
A.A.P.
Bachelor of Physics
Defense date
09.16.2024 17:00
Summary
Nanoparticles are materials with all their external dimensions on the nanoscale. Mainly due to their larger surface area and quantum effects, they have physical and chemical properties different from those of their macroscopic counterparts. This makes them useful in numerous applications and in very diverse areas such as medicine, lubrication and the food industry, although in many cases the mechanisms by which toxicity arises and the effects of long-term exposure are not fully understood. Techniques such as the MTT and MTS assays, the neutral red uptake assay and the lactate dehydrogenase test can be used to evaluate how nanoparticles generate cytotoxicity, but the interaction with the biological system is complex and dynamic. Another method that allows their environmental and health effects to be analyzed is the Microtox test, used in this work. With it, the toxicity of nanoparticles can be related to the inhibition of the bioluminescence of the marine bacterium Aliivibrio fischeri.
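As an illustration of how an EC50 can be extracted from Microtox-style dose-inhibition data, a sketch using a Hill-type fit; the concentrations and inhibition values below are invented, not measurements from this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(c, ec50, hill):
    """Fraction of bioluminescence inhibited as a function of concentration."""
    return c**hill / (ec50**hill + c**hill)

# Toy dose-response data: concentrations (mg/L) and measured light inhibition
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
inhibition = np.array([0.03, 0.09, 0.22, 0.45, 0.70, 0.88, 0.96])

popt, pcov = curve_fit(dose_response, conc, inhibition, p0=[50, 1])
ec50, hill = popt
print(f"EC50 = {ec50:.1f} mg/L (dose giving a 50% light reduction), Hill slope = {hill:.2f}")
```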
Direction
GARCIA GUIMAREY, MARIA JESUS (Tutorships)
PARAJO VIEITO, JUAN JOSE (Co-tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
Particle Identification (PID) in the pressurized TPC of the ND-GAr detector at DUNE
Authorship
D.R.R.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this work we study the particle identification (PID) capabilities of a TPC with optical readout proposed as the ND-GAr detector for the DUNE experiment. A stand-alone simulation framework is developed that simulates the primary ionization in the gaseous medium in a realistic way and images particle tracks along the detector. This allows us to characterize the momentum range p (GeV/c) in which we can distinguish between the charged particles of interest for the next generation of neutrino physics experiments.
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Multiclass classification with Machine Learning of the origin of the final states K-pi+K+pi- in the selection of B0 to D0(to K-pi+)K+pi- decays in the LHCb experiment at CERN.
Authorship
V.A.G.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
This work focuses on the analysis of the B0 to D0(to K-pi+)K+pi- decay of the B0 meson using data from the LHCb experiment at the Large Hadron Collider (LHC). The study involves data selection methods, starting with rectangular cuts and advancing to binary and multiclass classification techniques in the context of machine learning. These techniques are used to classify K-pi+K+pi- final-state events and events with a similar final state, in order to improve the accuracy in identifying CP violation.
Direction
SANTAMARINA RIOS, CIBRAN (Tutorships)
BROSSA GONZALO, ARNAU (Co-tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Gravitational Waves from Eccentric Black Hole Binaries
Authorship
R.R.D.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
Binaries of compact objects, either neutron stars or black holes, are the only sources of gravitational waves detected so far by the LIGO-Virgo-KAGRA network. Until now, it has been assumed that the binary orbits are circular or have negligible eccentricity, as expected for formation at large separations. However, the detection of eccentric binaries would provide significant new information on the channels of binary formation, favouring extremely dense astrophysical environments such as galactic nuclei. In this project, the effect of eccentricity on the gravitational waveforms emitted by merging binaries is studied using the PyCBC and LALSimulation software packages. Current search techniques correlate detector data with model waveforms that neglect eccentricity, so the detectability of possible eccentric binary signals with these techniques is investigated using the waveform calculations mentioned above. The results indicate that signals with low eccentricity (e < 0.1) are well fitted by non-eccentric templates; indeed, a fitting factor F > 0.95 is obtained for a population of eccentric signals in this range. However, for eccentricities e > 0.1 the fitting factor decreases rapidly, so templates that account for eccentricity are needed to optimize search results. Furthermore, some interesting trends have been found relating the mass parameters of the best-fitting template to the signal parameters. The findings underscore the importance of developing more sophisticated models to enhance the detection of eccentric gravitational-wave sources, thus advancing our understanding of their formation and dynamics.
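A toy illustration of the fitting-factor idea: a time-shift-maximized overlap between a "circular" template and signals carrying a hand-made eccentricity-like modulation. This is a white-noise NumPy overlap, not the noise-weighted PyCBC match over a template bank used in the thesis, and the waveforms are invented chirps:

```python
import numpy as np

def normalised(h):
    return h / np.sqrt(np.sum(h**2))

def match(h1, h2):
    """Overlap maximised over a relative time shift (flat-noise toy)."""
    corr = np.correlate(normalised(h1), normalised(h2), mode="full")
    return np.max(np.abs(corr))

# Toy chirp-like signals: an "eccentric" signal carries a small periodic modulation
t = np.linspace(0, 1, 4096)
phase = 2 * np.pi * (30 * t + 100 * t**2)          # frequency sweeping upwards
template = np.sin(phase)                            # circular (non-eccentric) template
for e in (0.0, 0.05, 0.1, 0.2, 0.3):
    signal = (1 + e * np.sin(12 * np.pi * t)) * np.sin(phase + e * np.sin(8 * np.pi * t))
    print(f"toy eccentricity {e:.2f}: best match with circular template = "
          f"{match(signal, template):.3f}")
```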
Direction
DENT , THOMAS (Tutorships)
EDELSTEIN GLAUBACH, JOSE DANIEL (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Search for long-lived particles using the muon system of the LHCb experiment
Authorship
A.M.A.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
This study introduces a new method for detecting long-lived particles (LLPs) which decay near or within the Muon System (MS) of the LHCb experiment. We developed an original algorithm to reconstruct the displaced decay vertices (DVs) of LLPs, using as a model a hypothetical neutral Beyond-Standard-Model (BSM) particle, referred to as S. This particle, originating from the decay of a B meson, is studied with various lifetimes (1, 5, 10, and 50 ns). It decays into two tau leptons, which subsequently decay into multiple pions. The algorithm begins by selecting events with hits in at least three MS stations from one pion and hits in at least two stations from the other pion. The hits from the pion that crosses three stations are clustered (using a machine learning clustering algorithm), and these clusters are combined (selecting one cluster per station) and fitted to a 3D line. Using a constraint on the chi2, the best 3D fit is found, yielding the pion's track. For the pion with hits in two stations, the strategy is similar, but we apply a Distance of Closest Approach (DOCA) constraint relative to the other reconstructed track. Once both tracks are established, the DV can be calculated. Our results show detection efficiencies ranging from 30% to 50%, depending on the lifetime of S. The spatial resolution achieved is approximately 10% in the transverse axes and 3% in the longitudinal axis.
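A sketch of two geometric ingredients of the algorithm described above: a least-squares 3D line fit through station clusters and the DOCA/vertex between two such lines. The hit positions, resolutions and units below are invented for illustration:

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line through `points` via SVD: returns (centroid, unit direction)."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    return centroid, vh[0]

def doca(p1, d1, p2, d2):
    """Distance of closest approach between two 3D lines given as (point, direction)."""
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:                       # parallel lines
        return np.linalg.norm(np.cross(p2 - p1, d1))
    return abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)

def vertex(p1, d1, p2, d2):
    """Midpoint of the segment of closest approach, used as the displaced vertex."""
    a = np.array([[d1 @ d1, -d1 @ d2], [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Toy "hits": clusters from three stations for one pion and two for the other,
# both tracks emanating from a common decay vertex (arbitrary units)
rng = np.random.default_rng(7)
true_vertex = np.array([0.0, 0.0, 500.0])
hits_a = true_vertex + np.outer([150, 300, 450], [0.02, 0.01, 1.0]) + rng.normal(0, 1, (3, 3))
hits_b = true_vertex + np.outer([200, 400], [-0.03, 0.015, 1.0]) + rng.normal(0, 1, (2, 3))

pa, da = fit_line_3d(hits_a)
pb, db = fit_line_3d(hits_b)
print("DOCA between the two tracks:", doca(pa, da, pb, db))
print("reconstructed vertex:", vertex(pa, da, pb, db), " true:", true_vertex)
```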
Direction
CID VIDAL, XABIER (Tutorships)
VAZQUEZ SIERRA, CARLOS (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
The description of phenomenological QCD models of pA collisions at LHC applied to the production of UHE particles in cosmic rays
Authorship
A.N.C.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
The cosmic-ray energy spectrum can be understood through the analysis of ultra-energetic proton collisions. These hadronic interactions can be described by phenomenological models based on QCD, among which the Quark-Gluon String Model (QGSM), derived from Regge Theory, stands out. This model characterizes hadronic collisions through the exchange of resonances with vacuum quantum numbers, called Pomerons, which control the amplitude of the highest-energy processes. By analysing the Pomeron's dynamics, the QGSM can predict the secondary particle production spectrum in proton-proton and proton-antiproton collisions. The detected cosmic-ray flux, in turn, exhibits an energy spectrum with changes of trend in different energy regions, for which no generally accepted explanation currently exists. In this work, the QGSM will be presented in detail, as well as the cosmic-ray energy spectrum, with the objective of using the QGSM predictions to propose an explanation for the cosmic-ray spectrum and its characteristic regions.
Direction
MERINO GAYOSO, CARLOS MIGUEL (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Implementation of a Lagrangian simulator for the study of plastic waste transport in the Ría de Arousa.
Authorship
M.R.O.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
In this Thesis, the implementation process of the Lagrangian ocean circulation model OceanParcels is developed to conduct a study on the transport of plastic waste emitted by the Ulla and Umia rivers into the Ría de Arousa over a specific period of time. Analogous studies have already been conducted at the Nonlinear Physics Group (GFNL) of the USC, using the same hydrodynamic data for the water velocity field in the estuary and the discharge data of the Ulla and Umia rivers. However, the simulations carried out by the GFNL were performed with the MOHID-Lagrangian simulator, based on Fortran. One of the most interesting aspects of this work is observing how the Lagrangian OceanParcels simulator, based on Python, can be adapted to perform the same simulations as those done in a simulator based on a compiled language. The efficiency of Parcels is exemplified in conducting simulations over long periods and with a large number of particles. A dilemma arises between a simulator that is easier to use, albeit with longer execution times, and more complex simulators with shorter execution times. Throughout this work, the adaptation process will be discussed: from the data reading, to the algorithms on which the main code is based, to the visualization of results. It also covers the variation in particle trajectories depending on the time and place of emission, the time particles take to leave the study region, and the dependence of simulation execution times on different factors. Possible future implementations will also be discussed in order to complete the study and obtain a wider range of results.
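A minimal OceanParcels advection run of the kind discussed above looks roughly as follows; the file name, variable names, release points and run length are hypothetical placeholders, not the actual hydrodynamic data or configuration of the thesis.

# Sketch of a Parcels run: load velocity fields, release particles, advect with RK4.
from datetime import timedelta
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

# Hydrodynamic velocity fields from a NetCDF file (names depend on the model output)
fieldset = FieldSet.from_netcdf(
    "arousa_currents.nc",
    variables={"U": "u", "V": "v"},
    dimensions={"lon": "longitude", "lat": "latitude", "time": "time"},
)

# Release particles near (hypothetical) positions of the Ulla and Umia river mouths
pset = ParticleSet.from_list(fieldset=fieldset, pclass=JITParticle,
                             lon=[-8.72, -8.78], lat=[42.66, 42.50])

# Advect with a 4th-order Runge-Kutta kernel and store hourly output
output = pset.ParticleFile(name="arousa_particles.zarr", outputdt=timedelta(hours=1))
pset.execute(AdvectionRK4, runtime=timedelta(days=10),
             dt=timedelta(minutes=5), output_file=output)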
Direction
Pérez Muñuzuri, Vicente (Tutorships)
BUÑUEL I MURISCOT, ARNAU (Co-tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Frugal innovation applied to the production of shea butter in Burkina Faso
Authorship
R.S.M.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
The project presents the design and sizing of a transformation centre based on frugal innovation techniques for an association of women shea butter producers in Bobo-Dioulasso, Burkina Faso. Two distinct designs are proposed, both of them leveraging solar technology: one focuses on socially acceptable technologies, while the other emphasizes intermediate technologies. To achieve this, an analysis of energy consumption and simultaneous processes was conducted to properly size a photovoltaic solar system tailored to the specific needs of the centre. Additionally, the design details of a solar dehydrator, a solar oven, and a solar stove are provided, all developed under the principles of frugal innovation.
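The kind of photovoltaic sizing mentioned above can be illustrated with a back-of-the-envelope calculation; every number below (daily load, peak sun hours, losses, module rating) is an assumed placeholder, not project data.

# Illustrative PV array sizing: required peak power from daily load and local irradiation.
daily_load_kwh = 12.0        # assumed daily consumption of the centre
peak_sun_hours = 5.5         # assumed equivalent full-sun hours for the site
performance_ratio = 0.75     # assumed wiring, inverter, temperature and soiling losses

required_peak_kw = daily_load_kwh / (peak_sun_hours * performance_ratio)
panel_peak_w = 400           # assumed rating of one module
n_panels = -(-required_peak_kw * 1000 // panel_peak_w)   # ceiling division

print(f"Required array size: {required_peak_kw:.2f} kWp -> {int(n_panels)} x {panel_peak_w} W panels")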
Direction
LOPEZ AGUERA, Ma ANGELES (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Minimizing Hubbard's model with Variational Quantum Eigensolver
Authorship
M.L.E.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
The main goal of this project is to search for the ground state of the Hubbard model using the Variational Quantum Eigensolver, one of the most relevant quantum algorithms promising quantum advantage on current quantum computers. After a first step of mapping the electronic interactions to Pauli strings, as quantum computing requires, we attempt to implement a variational algorithm based on adiabatic evolution: the Hamiltonian Variational Ansatz. This scheme starts by building an initial state, the solution of the non-interacting part of the system, followed by a parameterized circuit that uses the full Hubbard Hamiltonian to construct a solution that minimizes the energy. Within the design of the ansatz, we study the circuit depth required to efficiently approach the desired solution without compromising the optimization process. For the minimization, our choice is Differential Evolution, taking care of the initialization and convergence criteria suitable for our problem. After an exact simulation in which the expectation value of the Hamiltonian is measured on the trial wave function, we evaluate our results by analyzing the relative error in energy and the fidelity of the state with respect to the solution obtained by diagonalization of the Hamiltonian. Finally, we attempt to prepare the code for a shot-based quantum simulation, in order to run it on a real quantum device.
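The optimisation step can be illustrated with a toy calculation: SciPy's Differential Evolution minimising the expectation value of a small stand-in Hamiltonian over a freely parameterised state (a Rayleigh-quotient minimisation), rather than the Hamiltonian Variational Ansatz circuit actually used in the thesis.

# Toy sketch: minimise <psi(theta)|H|psi(theta)> with Differential Evolution.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # random 2-qubit Hermitian stand-in "Hamiltonian"

def energy(params):
    # Parameterise an arbitrary normalised 2-qubit state by 8 real numbers
    v = params[:4] + 1j * params[4:]
    v = v / (np.linalg.norm(v) + 1e-12)
    return np.real(v.conj() @ H @ v)

res = differential_evolution(energy, bounds=[(-1.0, 1.0)] * 8, tol=1e-10, seed=0)
exact = np.linalg.eigvalsh(H)[0]
print(f"DE minimum: {res.fun:.6f}   exact ground energy: {exact:.6f}")
# The relative error |res.fun - exact| / |exact| is the figure of merit used above.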
Direction
PARDO CASTRO, VICTOR (Tutorships)
FAILDE BALEA, DANIEL (Co-tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Synthesis and characterization of nanoparticles for their application in photothermal therapy.
Authorship
M.A.F.B.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
This project is focused on the search for a nanometric system capable of combining fluorescence and photothermal properties to be potentially used in combined therapy. For this purpose, the nanoparticles (NPs) chosen for the combination were quantum dots (CDs), due to their luminescent capacity, and polydopamine (PDA), for its heat-generation capacity when irradiated in the infrared. This research is divided into three sections, the first based on the synthesis and characterization of various CDs, as well as the examination of their behavior in other solvents and their response to carbonization processes, under the premise of selecting those that exhibit the highest fluorescence. In the second section, we explore the polymerization process of dopamine (DA) and choose the one that provides the most material in a faster synthesis. For both nanoparticles, their absorbance, fluorescence, zeta potential, and molecular weight were characterized. The final objective, corresponding to the third section of this project, is to create a hybrid system using these nanoparticles (PDA and CDs) and to explore their intrinsic properties for potential applications in theranostics (the same platform is responsible for both diagnosis and treatment). To this end, we added different amounts of CDs onto the PDA (the NPs of higher molecular weight) and analyzed the absorbance and fluorescence spectra of the hybrid system, as well as its heating capacity. The results confirm what was expected, the obtaining of a photothermal nanoplatform with fluorescent properties, so we hope that this work can lead to innovative approaches in cancer therapy.
Direction
TOPETE CAMACHO, ANTONIO (Tutorships)
CAMBON FREIRE, ADRIANA (Co-tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Numerical study of the von Kármán vortices generated by a moving cylinder inside a fluid.
Authorship
D.G.C.
Bachelor of Physics
Defense date
09.16.2024 17:00
Summary
The von Kármán vortices generated by a fluid flowing around a cylinder oscillating perpendicularly to the initial direction of the fluid motion have been studied. The Lattice Boltzmann method was employed, which allows a faster solution of the Navier-Stokes equations for the fluid. Various forcing frequencies and amplitudes were simulated, revealing quasi-periodic and chaotic behaviors in vortex generation. The temporal variation of the velocity at a fixed spatial point was obtained. A power-spectrum analysis of the vortex generation frequencies was conducted for each forcing frequency of the cylinder, along with the construction of bifurcation diagrams based on the local maxima of the velocity as a function of forcing frequency for the three amplitudes used. For forcing frequencies close to the natural vortex shedding frequency of the unforced case, a frequency-locking phenomenon was observed, consistent with what is reported in the literature for periodically forced nonlinear oscillators. The results were compared with other cases in the scientific literature, where the cylinder rotates periodically about its principal axis.
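The post-processing described above (power spectrum of the velocity at a probe point and local maxima for the bifurcation diagrams) can be sketched as follows; the two-frequency synthetic signal stands in for the simulated velocity series.

# Sketch: power spectrum and local maxima of a velocity time series.
import numpy as np

dt = 0.01                                     # sampling step (arbitrary units)
t = np.arange(0, 200, dt)
f_shed, f_force = 0.20, 0.23                  # hypothetical shedding and forcing frequencies
u = np.sin(2 * np.pi * f_shed * t) + 0.4 * np.sin(2 * np.pi * f_force * t)

# Power spectrum via FFT of the mean-subtracted signal
spec = np.abs(np.fft.rfft(u - u.mean()))**2
freqs = np.fft.rfftfreq(len(u), d=dt)
print("dominant frequency:", freqs[np.argmax(spec)])

# Local maxima of u(t), the quantity used to build the bifurcation diagrams
is_max = (u[1:-1] > u[:-2]) & (u[1:-1] > u[2:])
local_maxima = u[1:-1][is_max]
print("number of distinct maxima levels:", len(np.unique(np.round(local_maxima, 3))))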
Direction
Pérez Muñuzuri, Vicente (Tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
Modeling and simulation of Electromagnetically Induced Transparency (EIT) in Rydberg atoms for the detection of RF electromagnetic fields
Authorship
J.A.A.
Bachelor of Physics
Defense date
09.16.2024 17:00
Summary
This work explores Electromagnetically Induced Transparency (EIT) in multi-level atomic systems, a quantum interference effect that makes an opaque medium become transparent to specific frequencies of light under certain conditions. It begins with the interaction between light and atoms in two-level systems, expanding the analysis to more complex systems and using the density operator formalism to describe quantum states and their evolution. The application of EIT in detecting radiofrequency (RF) signals using Rydberg atoms is highlighted, and three-level systems, where Autler-Townes splitting occurs, are studied. Finally, a four-level system is analyzed in which RF fields induce constructive interference. Through simulations and theory, the potential of these quantum systems for precise RF electric field measurements and their use in new sensing technologies is demonstrated.
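For the three-level case, the standard weak-probe susceptibility of a ladder system shows the transparency window directly; the decay rates, Rabi frequency and detunings below are generic illustrative numbers, not parameters of a specific Rydberg transition.

# Sketch: probe absorption (Im chi, up to a constant) of a ladder EIT system,
#   chi(dp) ∝ i / [gamma21 - i*dp + (Omega_c/2)**2 / (gamma31 - i*(dp + dc))]
import numpy as np

gamma21, gamma31 = 3.0, 0.01          # coherence decay rates (illustrative, MHz)
omega_c, dc = 5.0, 0.0                # coupling Rabi frequency and detuning
dp = np.linspace(-20, 20, 2001)       # probe detuning scan

chi = 1j / (gamma21 - 1j * dp + (omega_c / 2)**2 / (gamma31 - 1j * (dp + dc)))
absorption = chi.imag                 # dips towards zero at two-photon resonance (EIT window)

print("absorption at line centre (dp = 0):", absorption[np.argmin(np.abs(dp))])
print("maximum absorption (Autler-Townes peaks):", absorption.max())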
Direction
AYYAD LIMONGE, FRANCESC YASSID (Tutorships)
Ferreira Cao, Miguel (Co-tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
The role of hydrogen in the energy transition
Authorship
P.R.S.C.
Double bachelor degree in Physics and Chemistry
Defense date
07.18.2024 09:30
Summary
This Degree Thesis presents a detailed analysis of the role of hydrogen in the energy transition. Hydrogen is a key energy vector within the framework of the transition towards sustainable systems. This work studies the role of hydrogen as an alternative for the decarbonization of various sectors, including transport and industry. Current hydrogen production and sustainable alternatives, as well as different options for its transportation and storage, are analyzed. Promoting these aspects is crucial for its industrial use, especially in the transport sector, with the aim of reducing dependence on fossil fuels. This study is framed within the Spanish government’s hydrogen roadmap, which outlines a series of actions to promote the use of hydrogen as part of its energy transition plan.
Direction
MENDEZ MORALES, TRINIDAD (Tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Reduction and immobilization of nitrate and phosphate in aquatic systems
Authorship
P.R.S.C.
Double bachelor degree in Physics and Chemistry
Defense date
07.15.2024 09:00
Summary
Eutrophication is a serious global environmental problem resulting from increased concentrations of nitrate and phosphate in aquatic systems, leading to the undesired proliferation of plants and microorganisms. This phenomenon causes a decrease in oxygen levels in the water, severely affecting aquatic fauna and water quality. To design a methodology that can mitigate the problems caused by eutrophication, this study investigates the capacity of iron oxyhydroxides, such as ferrihydrite, to remove phosphate from aquatic environments and the effects that organic matter (concentration and nature), pH, and salinity may have. Additionally, the Iron Wheel hypothesis was evaluated to propose a methodology for removing nitrate from aquatic environments by reducing it to nitrite or ammonium in the presence of organic matter and ferrihydrite. The results obtained are promising and indicate that optimizing the process regarding the concentrations and precursors of iron and organic matter to be used will allow the design of a material suitable for the simultaneous removal of both nutrients.
Direction
FIOL LOPEZ, SARAH (Tutorships)
ANTELO MARTINEZ, JUAN (Co-tutorships)
Court
LORES AGUIN, MARTA (Chairman)
RIOS RODRIGUEZ, MARIA DEL CARMEN (Secretary)
Carro Díaz, Antonia María (Member)
Anomalies in Quantum Field Theory
Authorship
A.A.O.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
Anomalies are essential for a deep understanding of quantum field theory. This study examines in detail their crucial role in modern theories of fundamental interactions. It begins with a meticulous review of symmetries in classical and quantum field theories, establishing the theoretical framework to understand these phenomena. Subsequently, it focuses on the study of global anomalies, placing the chiral anomaly at the center of the discussion. Both the experimental and theoretical implications of these anomalies are explored, emphasizing their relevance in particle physics. Additionally, this work addresses the issue of gauge anomalies, which pose challenges to the consistency of theories. Potentially anomalous theories are examined, essential conditions for their cancellation are discussed, and this process is illustrated through the paradigmatic case of the Standard Model of particle physics. Finally, the impact of anomalies in the context of gravitation is studied, a crucial component for any theory aspiring to consistently couple to gravity.
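For reference, the chiral (Adler-Bell-Jackiw) anomaly placed at the centre of that discussion takes the following form for a single Dirac fermion of charge e coupled to an Abelian gauge field (signs and normalisations depend on conventions):

\partial_\mu j_5^{\mu} \;=\; \frac{e^2}{16\pi^2}\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}
\;=\; \frac{e^2}{8\pi^2}\,F_{\mu\nu}\tilde{F}^{\mu\nu},
\qquad \tilde{F}^{\mu\nu}\equiv\tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma},

so the axial current, conserved classically, is not conserved at the quantum level.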
Direction
BORSATO , RICCARDO (Tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Application of neural networks for track identification purposes in the NEXT-100 detector.
Authorship
X.I.M.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
NEXT is an international collaboration searching for the neutrinoless double beta decay (0νββ) in the 136Xe isotope. In the NEXT-100 detector, signal events have a characteristic shape of two electrons leaving a track in opposite directions, while background events only have a single electron. This work demonstrates the topological discrimination capability between signal and background events at different pressures in the detector, with good results. To achieve this, two classification graph neural networks (GNNs) were developed and trained with Monte Carlo simulated events. The next stages of this study will progressively introduce detector effects into the data and observe how the discrimination capability varies.
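A graph-classification network of the kind mentioned above can be sketched in PyTorch Geometric; the architecture, feature dimension and toy event below are illustrative stand-ins, not the networks or data used in the study.

# Sketch: two-class (signal vs background) graph classifier for hit/voxel graphs.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class TrackClassifier(torch.nn.Module):
    def __init__(self, num_features=3, hidden=32, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)          # one embedding per event graph
        return self.head(x)

# A toy "event": 4 voxels with (x, y, energy) features, connected in a chain
x = torch.rand(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)        # all nodes belong to graph 0

model = TrackClassifier()
logits = model(x, edge_index, batch)
print("class scores (signal vs background):", logits.softmax(dim=-1))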
Direction
HERNANDO MORATA, JOSE ANGEL (Tutorships)
PEREZ MANEIRO, MARTIN (Co-tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Colloidal and thermal stability studies of superparamagnetic nanoparticle dispersions in an ionic liquid
Authorship
J.V.P.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
Dispersions of magnetic nanoparticles (MNPs) in ionic liquids (ILs) are being studied with a view to industrial applications, making it important to analyze the colloidal stability and thermal stability of these dispersions. In this work, from an initial group of 10, the 3 MNPs that could present the highest colloidal stability were selected and dispersions were prepared in a specific IL, ethylammonium nitrate (EAN). Subsequently, experimental techniques such as dynamic light scattering (DLS) and thermogravimetric analysis (TGA) were used to analyze the colloidal stability and thermal stability, respectively, as a function of the concentration of MNPs in the dispersions.
Direction
VILLANUEVA LOPEZ, MARIA (Tutorships)
González Gómez, Manuel Antonio (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Cubic Variations of the Starobinsky Inflationary Model
Authorship
A.R.T.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
We review the steps that lead from general relativity to Alexei Starobinsky's model of inflation, which leaves out the inflaton field, replacing it with a term quadratic in the curvature in the Einstein-Hilbert action, carrying a scalar degree of freedom, from which the field equations can be derived. The next step is the recent finding of a family of higher-order curvature corrections that result in Friedmann-Lemaître equations, and the study and inclusion in the inflationary model of the simplest one, the cubic term, established precisely as Einsteinian Cubic Gravity (ECG). We finish with a comparison between the theoretical predictions of the higher-order models and the experimental values, mostly from the Planck mission, of the tensor-to-scalar ratio and the spectral index.
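For reference, in a common convention the quadratic (Starobinsky) action and its standard leading-order slow-roll predictions in terms of the number of e-folds N are (textbook values, which the cubic corrections would modify):

S = \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\left(R + \frac{R^2}{6M^2}\right),
\qquad n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2},

which for N of order 55-60 gives n_s close to 0.965 and r of order 0.003-0.004, the kind of values confronted with Planck constraints.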
Direction
EDELSTEIN GLAUBACH, JOSE DANIEL (Tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Modeling of gas detectors with resistive protection
Authorship
L.A.B.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
The development of the DUNE experiment is driving a substantial improvement in the technology of gaseous detectors, with the aim of satisfying the technical requirements of the experiment. Detectors with resistive protection, such as Resistive Plate Chambers (RPCs), play an important role, as the resistive elements prevent the multiplication process from evolving towards the dielectric breakdown of the detector. In this work, gaseous detectors, the evolution of the avalanche, and the advantages and disadvantages of resistive protection will be described, together with some of the geometries used. Finally, an equivalent circuit will be used to analyse some of the described behaviours and to study the detector recovery time.
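One common ingredient of such equivalent-circuit descriptions is the charge relaxation time of the resistive electrode, tau = rho * eps0 * eps_r, which sets the scale of the local recovery after an avalanche; the material values below are order-of-magnitude illustrations, not the parameters studied in the thesis.

# Order-of-magnitude estimate of the electrode relaxation (recovery) time.
eps0 = 8.854e-12                                   # vacuum permittivity, F/m
materials = {"float glass": (1e12 * 1e-2, 7.5),    # (resistivity in Ohm*m, relative permittivity), assumed
             "bakelite":    (1e10 * 1e-2, 5.0)}

for name, (rho, eps_r) in materials.items():
    tau = rho * eps0 * eps_r
    print(f"{name:12s}: tau ~ {tau:.3g} s")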
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Quantum Scrambling and Quantum Chaos
Authorship
M.D.U.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In the context of quantum computing, chaos can be a very useful tool for understanding certain phenomena. In this work, a many-body quantum chaotic system subjected to a teleportation protocol inspired by the AdS/CFT correspondence is studied by implementing it in a quantum circuit. For this purpose, observables such as OTOCs (out-of-time-order correlators) or size distributions will be introduced, as the tools for the analysis of classical chaos are not sufficient due to the quantum nature of the system. Those observables will show how information spreads when introduced into a quantum chaotic system. Two clearly differentiated temporal regimes are studied in the system's behavior, each with its own type of teleportation. Finally, the protocol is explored when the model is taken out of its ideal parameters in order to experimentally justify the choice of variables.
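A small numerical illustration of an OTOC, C(t) = <|[W(t), V]|^2>, on a 4-qubit chain with a generic Heisenberg-type Hamiltonian is given below; it is a toy calculation, not the teleportation-protocol circuit of the thesis.

# Sketch: growth of an out-of-time-order correlator under chaotic-like dynamics.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on_site(op, site, n=4):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    mats = [op if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
H = sum(op_on_site(P, i) @ op_on_site(P, i + 1)
        for i in range(n - 1) for P in (X, Y, Z))       # Heisenberg chain (illustrative)

W0, V = op_on_site(Z, 0), op_on_site(Z, n - 1)          # operators on opposite ends
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)                              # random state ~ infinite temperature

for t in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U
    comm = Wt @ V - V @ Wt
    otoc = np.real(psi.conj() @ (comm.conj().T @ comm) @ psi)
    print(f"t = {t:.1f}   C(t) = {otoc:.4f}")           # grows as information scrambles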
Direction
MAS SOLE, JAVIER (Tutorships)
SANTOS SUAREZ, JUAN (Co-tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Mathematical modeling of infectious diseases: application to Ebola Virus Disease
Authorship
F.L.F.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
This work focuses on the study and application of mathematical models to understand the dynamics of infectious diseases, specifically the Ebola Virus Disease (EVD). Throughout the study, we review and apply compartmental epidemiological models (deterministic and stochastic models) featuring different characteristics. As the basis for our study, we use data recorded during the EVD epidemic in West Africa in 2014. By applying various models that describe the spread of this disease in different ways, we hope to understand the mechanisms and advantages of each model, as well as assess the quality of the results obtained with them and the parties involved in the modeling process.
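A minimal deterministic compartmental model of the kind reviewed above is the SEIR system; the sketch below integrates it with SciPy using purely illustrative parameters, not the values fitted to the 2014 EVD data.

# Sketch: deterministic SEIR model integrated with solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.3, 1 / 10.0, 1 / 7.0    # transmission rate, 1/incubation, 1/infectious period (per day, assumed)
N = 1e6                                         # population size (assumed)

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

sol = solve_ivp(seir, (0, 365), [N - 10, 0, 10, 0], dense_output=True)
t = np.linspace(0, 365, 366)
S, E, I, R = sol.sol(t)
print("peak infectious fraction:", I.max() / N, "on day", int(t[np.argmax(I)]))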
Direction
RUSO VEIRAS, JUAN MANUEL (Tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Development of Cutting-edge Magnetic Nanostructures for Biomedical Applications
Authorship
P.D.L.
Bachelor of Physics
Defense date
07.18.2024 10:00
Summary
Magnetic nanoparticles are novel materials of considerable interest in the biomedical field due to their ability to generate heat through magnetic hyperthermia and their usefulness as contrast agents in magnetic resonance imaging. In the present study, the synthesis and physicochemical and magnetic characterization of a series of advanced magnetic nanostructures with various biopolymer coatings have been carried out. It was observed that these nanostructures exhibit autofluorescence, and their response to the application of hyperthermia was also evaluated to explore their potential in this treatment. The results demonstrated superparamagnetic behavior and different thermal efficiencies in hyperthermia applications, with nanoparticles coated with lignin standing out due to their high thermal response.
Direction
MIRA PEREZ, JORGE (Tutorships)
González Gómez, Manuel Antonio (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)
Boson Stars
Authorship
M.G.F.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
This document explains, in general terms, what boson stars are. The Einstein-Klein-Gordon theory is introduced, specifically for a static and spherical model, which represents astrophysically viable solutions describing compact systems in the regime of general relativity. The equilibrium equations for the presented model will be obtained. Finally, a shooting method will be used to perform the numerical calculations of such objects with the aim of studying their main characteristics: mass, radius and compactness.
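The logic of a shooting method (tune a parameter by bisection until the outward integration satisfies the boundary condition at large radius) can be shown on a much simpler test equation than the Einstein-Klein-Gordon system; the sketch below uses u'' = (x^2 - E) u, whose first odd eigenvalue is known exactly.

# Generic shooting-method illustration (not the boson-star equations themselves).
from scipy.integrate import solve_ivp

def endpoint(E, x_max=8.0):
    """Integrate outwards from u(0)=0, u'(0)=1 and return u(x_max); its sign flips across an eigenvalue."""
    rhs = lambda x, y: [y[1], (x**2 - E) * y[0]]
    sol = solve_ivp(rhs, (0.0, x_max), [0.0, 1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Bisect on E inside a bracket where u(x_max) changes sign
lo, hi = 2.0, 4.0          # brackets the first odd eigenvalue (exact value: E = 3)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if endpoint(lo) * endpoint(mid) <= 0:
        hi = mid
    else:
        lo = mid
print("shooting eigenvalue:", 0.5 * (lo + hi), " (exact value for this test problem: 3)")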
Direction
ADAM , CHRISTOPH (Tutorships)
CASTELO MOURELLE, JORGE (Co-tutorships)
Court
REY LOSADA, CARLOS (Chairman)
ROMERO VIDAL, ANTONIO (Secretary)
DE LA FUENTE CARBALLO, RAUL (Member)
Experimental study of electrical properties of ionic systems with energy applications.
Authorship
M.J.O.C.
Bachelor of Physics
Defense date
09.17.2024 10:30
Summary
Currently, the energy transition is one of the primary global objectives. This is inconceivable without a comprehensive understanding of energy storage systems. Batteries, the most widely used systems, present several drawbacks, especially at high temperatures. This is due to the flammability and volatility of the electrolytes they are composed of. Thus, ionic liquids emerge as a promising alternative, owing to their versatility and characteristic properties. This work investigates the ionic conductivity as a function of temperature and concentration for mixtures of BMIM-TFSI with DEC, BMPyrr-TFSI with DEC and BMPyrr-TFSI with ethanol. To this end, we employ electrochemical impedance spectroscopy (EIS), from which we obtain measurements of the conductance (G) and susceptance (B) of the sample. From these, we can calculate the dielectric constant and, subsequently, the conductivity. The temperature dependence of the conductivity is analyzed using the Vogel-Fulcher-Tammann (VFT) model, while the concentration dependence of the ionic liquid is examined using the Bahe-Varela theory. The significance and motivation of this work lie in comparing our systems, understanding the underlying causes of their differences, and acquiring information for potential applications.
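A VFT analysis of the type mentioned above amounts to fitting sigma(T) = sigma0 * exp(-B / (T - T0)) to conductivity-temperature data; the sketch below does this with SciPy on synthetic data, since the measured values are not reproduced here.

# Sketch: Vogel-Fulcher-Tammann fit of conductivity vs temperature.
import numpy as np
from scipy.optimize import curve_fit

def vft(T, sigma0, B, T0):
    return sigma0 * np.exp(-B / (T - T0))

# Synthetic "measurements" generated from assumed parameters plus 2% noise
T = np.linspace(283.0, 343.0, 7)                       # K
rng = np.random.default_rng(0)
sigma = vft(T, 8.0, 650.0, 165.0) * (1 + 0.02 * rng.normal(size=T.size))

popt, _ = curve_fit(vft, T, sigma, p0=(5.0, 500.0, 150.0), maxfev=20000)
sigma0, B, T0 = popt
print(f"sigma0 = {sigma0:.3g} S/m, B = {B:.3g} K, T0 = {T0:.3g} K")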
Direction
PARAJO VIEITO, JUAN JOSE (Tutorships)
Santiago Alonso, Antía (Co-tutorships)
Court
Pérez Muñuzuri, Alberto (Chairman)
BORSATO , RICCARDO (Secretary)
BARBOSA FERNANDEZ, SILVIA (Member)
Gravity waves in the atmosphere
Authorship
P.G.P.
Bachelor of Physics
Defense date
02.19.2024 15:00
Summary
A study of the linear theory of gravity waves in the atmosphere. The mathematical development that allows a linear equation for gravity waves to be obtained from the fundamental equations of atmospheric dynamics will be performed through certain approximations and a method that separates the variables into a perturbed part and a stationary part. The wave-type solution of this equation provides relevant information about the general properties of the propagation of these waves, and by applying boundary conditions the specific case of mountain waves can be analyzed in greater detail.
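One standard result of that linear theory, stated here for reference under the usual simplifications (Boussinesq approximation, uniform background wind U, Brunt-Väisälä frequency N, curvature of the wind profile neglected), is the vertical wavenumber of stationary mountain waves forced by terrain of horizontal wavenumber k:

m^2 \;=\; \frac{N^2}{U^2} - k^2 ,

so waves propagate vertically (m^2 > 0) for k < N/U and decay evanescently with height otherwise.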
Direction
MIGUEZ MACHO, GONZALO (Tutorships)
Court
MAS SOLE, JAVIER (Chairman)
PARDO MONTERO, ALBERTO (Secretary)
CALVO IGLESIAS, MARIA ENCINA (Member)
Ultrasensitive Raman probes for the detection of biomarkers
Authorship
D.P.A.
Bachelor of Physics
Defense date
09.16.2024 17:00
Summary
In recent years, significant scientific efforts have been made in the search for new methods of (bio)detection and bioimaging, both for food and medical applications. Of particular relevance are the detection and quantification of pesticides, bacteria, or drugs present in consumer foods, as well as the improvement of the early diagnosis of diseases such as cancer or cardiovascular diseases through the detection of proteins, nucleic acids, enzymes, and other biomolecules by non-invasive methods that also allow high precision, sensitivity, and reproducibility in the diagnosis. Among the various optical methods, those based on Surface-Enhanced Raman Spectroscopy (SERS) allow quantitative and qualitative analysis of molecules at very low concentrations, down to the nanomolar (nM) range. The aim of this bibliographic compilation work is to present the basic principles and fundamentals of Surface-Enhanced Raman Spectroscopy, describing the different elements that make up the Raman probes and their function, as well as the most novel applications published and the future prospects of this detection technique.
Direction
BARBOSA FERNANDEZ, SILVIA (Tutorships)
Court
MIGUEZ MACHO, GONZALO (Chairman)
González Fernández, Rosa María (Secretary)
BROCOS FERNANDEZ, MARIA DEL PILAR (Member)
Study of quantum error correction systems and assessment of cosmic ray effects
Authorship
J.B.B.
Bachelor of Physics
Defense date
07.18.2024 09:00
Summary
Quantum computers promise a computational revolution with unprecedented capabilities for solving problems that cannot be handled by conventional supercomputers. To do so, they need to scale up the number of qubits and maintain coherence times orders of magnitude longer than those they are currently capable of maintaining, on the order of microseconds. Interactions with the environment (electromagnetic fields, phonons from the substrate, ...) can cause coherence breakdown and induce errors in the qubit states. Suitable quantum error correction procedures exist to recover the information of qubits that have undergone variations due to their interaction with the environment, such as the creation of logical qubits from several physical qubits. These procedures work for cases of small numbers of uncorrelated errors. Recently, Google's quantum computing team (Google Quantum AI), working on the Sycamore processor, was able to demonstrate that these quantum correction processes scale appropriately to reduce error rates to tiny values (one error in 10^6 or 10^9 operations) over large numbers of qubits, so they would work for future quantum processors. However, highly correlated errors, such as those produced by cosmic rays passing through the processor substrate, cannot be corrected using these procedures. These errors are induced by the charge created in the ionisation caused by incident muons, the movement of these charges within the material and the generation of phonons that break the superconducting Cooper pairs, inducing massive correlated transitions of the quantum processor's qubits. In this work we will study the reported procedures for quantum error correction and the processes leading to these errors in quantum processors, with special attention to the interaction of cosmic rays.
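The basic idea, and its failure mode under correlated errors, can be illustrated with the simplest (classical) stand-in for a logical qubit: a 3-bit repetition code corrected by majority vote handles a single flip but not a correlated burst of the kind a cosmic-ray hit can induce.

# Toy illustration: repetition code vs uncorrelated and correlated errors.
import random

def encode(bit):
    return [bit] * 3

def majority_decode(bits):
    return int(sum(bits) >= 2)

random.seed(0)
logical = 1
codeword = encode(logical)

# Uncorrelated single flip: corrected
single = codeword.copy(); single[random.randrange(3)] ^= 1
print("single flip decoded to:", majority_decode(single))       # -> 1 (recovered)

# Correlated burst flipping two bits at once: decoding fails
burst = codeword.copy(); burst[0] ^= 1; burst[1] ^= 1
print("correlated burst decoded to:", majority_decode(burst))   # -> 0 (logical error)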
Direction
ALVAREZ POL, HECTOR (Tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Adaptive optics in astronomy
Authorship
P.P.L.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
This work focuses on adaptive optics in astronomy, a technology that improves the quality of astronomical images by real-time correction of atmospheric distortions using wavefront sensors and deformable mirrors. The principles of adaptive optics are discussed in detail, including various techniques for measuring aberrations. The implementation of laser guide stars is explored to extend the observable sky range. Additionally, the work highlights other applications of adaptive optics.
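The real-time correction loop described above can be illustrated with a toy model: a wavefront sensor measures the residual aberration and an integrator control law drives the deformable-mirror command. This is a conceptual sketch for a single aberration mode, with assumed gains and noise levels, not a description of any particular AO system.

```python
import numpy as np

# Toy closed-loop adaptive-optics correction for a single aberration mode.
# The atmosphere injects a slowly drifting phase error; the wavefront sensor
# measures the residual (with noise) and a leaky integrator drives the
# deformable-mirror command. All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
n_steps, gain, leak = 2000, 0.4, 0.99
atmosphere = np.cumsum(rng.normal(0.0, 0.02, n_steps))  # random-walk turbulence
mirror = 0.0
residuals = []
for phase in atmosphere:
    residual = phase - mirror                        # what the science camera sees
    measurement = residual + rng.normal(0.0, 0.01)   # wavefront-sensor noise
    mirror = leak * mirror + gain * measurement      # integrator control law
    residuals.append(residual)

print("open-loop rms :", np.std(atmosphere))
print("closed-loop rms:", np.std(residuals))
```

The closed-loop rms comes out much smaller than the open-loop one, which is the basic mechanism by which adaptive optics sharpens astronomical images.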
Direction
DE LA FUENTE CARBALLO, RAUL (Tutorships)
Court
VAZQUEZ REGUEIRO, PABLO (Chairman)
ALEJO ALONSO, AARON JOSE (Secretary)
DEL PINO GONZALEZ DE LA HIGUERA, PABLO ALFONSO (Member)
Sum of squares and modular forms
Authorship
J.C.R.G.
Double bachelor degree in Mathematics and Physics
Defense date
09.11.2024 16:45
Summary
The aim of this Final Degree Project is to answer some classic questions in Number Theory, such as which positive integers can be written as a sum of two squares, or whether every positive integer is a sum of four squares. These questions are addressed from the point of view of Complex Analysis by means of the theory of modular forms, functions defined on the upper half-plane that admit certain symmetries. Specifically, the Jacobi theta function is used. First, definitions needed to follow the subsequent arguments are introduced, together with a series of results that are used when answering these questions. Next, the Jacobi theta function is defined, together with the properties that are used later. Then, the proofs of the two-square theorem and of Jacobi's four-square and eight-square theorems are presented, using the theory of modular forms and the Jacobi theta function to prove the equivalence of the structural properties of the latter with other functions defined specifically to prove each theorem. Finally, a sketch of the proof of the three-square theorem, whose treatment is more involved, is also given.
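For orientation, the central objects mentioned above can be written down explicitly. The normalisations below are the standard ones and are quoted here only as a reference, not necessarily the exact conventions of the thesis.

```latex
% Jacobi theta function on the upper half-plane:
\theta(\tau) \;=\; \sum_{n\in\mathbb{Z}} e^{\pi i n^{2}\tau},
\qquad \operatorname{Im}\tau > 0 .

% Its k-th power is the generating function of the representation numbers
% r_k(N) = #\{(n_1,\dots,n_k)\in\mathbb{Z}^k : n_1^2+\cdots+n_k^2 = N\}:
\theta(\tau)^{k} \;=\; \sum_{N\ge 0} r_k(N)\, e^{\pi i N \tau} .

% Jacobi's four-square theorem, proved via modular forms:
r_4(N) \;=\; 8 \sum_{\substack{d \mid N \\ 4\,\nmid\, d}} d ,
% which is positive for every N \ge 1, so every positive integer
% is a sum of four squares.
```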
Direction
Cao Labora, Daniel (Tutorships)
RIVERO SALGADO, OSCAR (Co-tutorships)
Court
VIAÑO REY, JUAN MANUEL (Chairman)
Rodríguez López, Jorge (Secretary)
CARBALLES VAZQUEZ, JOSE MANUEL (Member)
Relationship between structure and properties of nanomaterials for biomedical applications: Biosensing
Authorship
J.C.R.G.
Double bachelor degree in Mathematics and Physics
Defense date
07.18.2024 09:00
Summary
The aim of this Final Degree Project is to review the state of the art of metal nanoparticles as biosensing systems. These can be synthesized by breaking down larger-scale structures (top-down approach) or by assembling individual atoms and molecules (bottom-up). Some of the possible ways of creating nanoparticles are outlined in general terms. The optical properties of metallic nanoparticles that make them attractive for biosensing are then presented, highlighting the surface plasmon resonance, which generates an electromagnetic field around the particle when it is irradiated by light. Surface-Enhanced Raman Scattering (SERS) is a spectroscopy technique that exploits the field generated by the plasmon to allow the detection of low concentrations of analytes: the plasmonics of the gold nanoparticles amplify the Raman signal of the analyte adsorbed on them and, since this signal is characteristic of each molecule, it allows its identification. Another biosensing technique is colorimetry, which allows molecules to be identified by the change in the color of the solution in which they are found when they bind to gold nanoparticles and make them aggregate. Finally, COMSOL Multiphysics is used to simulate the behavior of the electric field around gold nanoparticles of different morphologies, as well as the amplification of the electric field in the gaps between several nanoparticles of the same geometry for different gap sizes.
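As a numerical illustration of the surface plasmon resonance mentioned above, the quasi-static (dipole) polarizability of a small metallic sphere peaks where Re eps(omega) is close to -2 eps_m (the Fröhlich condition). The sketch below uses a simple Drude model with assumed, roughly gold-like parameters rather than tabulated optical data, and is unrelated to the COMSOL simulations of the thesis.

```python
import numpy as np

# Quasi-static dipole polarizability of a small metal sphere of radius a
# embedded in a medium of permittivity eps_m:
#   alpha(w) = 4*pi*a^3 * (eps(w) - eps_m) / (eps(w) + 2*eps_m)
# The resonance (Frohlich condition) occurs where Re(eps) = -2*eps_m.
# Drude parameters below are assumed, order-of-magnitude values.
eps_inf, omega_p, gamma = 9.5, 9.0, 0.07      # photon-energy units (eV)
eps_m, a = 1.77, 20e-9                        # water-like host, 20 nm radius

omega = np.linspace(1.0, 4.0, 400)            # photon energy grid (eV)
eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)
alpha = 4 * np.pi * a**3 * (eps - eps_m) / (eps + 2 * eps_m)

i_res = np.argmax(np.abs(alpha))
print(f"resonance near {omega[i_res]:.2f} eV "
      f"(Re(eps) there = {eps[i_res].real:.2f}, -2*eps_m = {-2 * eps_m:.2f})")
```

With these assumed parameters the resonance falls in the visible range; gaps between adjacent particles further enhance the local field, which is the effect exploited in SERS.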
Direction
TABOADA ANTELO, PABLO (Tutorships)
Court
Varela Cabo, Luis Miguel (Chairman)
PARAJO VIEITO, JUAN JOSE (Secretary)
ARMESTO PEREZ, NESTOR (Member)
Optimization Based on Natural Evolution: Exploring Bio-Inspired Algorithms
Authorship
S.F.N.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
This Final Degree Project deals with the application of bio-inspired algorithms, based on biological evolution or on the behaviour of certain species, as a tool for solving optimization problems. The goal is to optimize biomass and lipid production in a photobioreactor of Chlorella vulgaris. The main objective of the research is to compare and evaluate the efficiency of different types of bio-inspired algorithms within this particular context. This evaluation allows the identification of the most suitable and efficient algorithm to optimize the performance of the photobioreactor. The work reflects the advance of biotechnology by demonstrating how computational methods inspired by nature can be successfully applied to improve the efficiency of industrial processes.
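As an illustration of the class of bio-inspired optimizers compared in the work, the sketch below implements a minimal genetic algorithm on a stand-in objective function; the actual photobioreactor model and the specific algorithms used in the thesis are not reproduced here, and every parameter value is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    """Stand-in objective (would be replaced by the photobioreactor model):
    a smooth function with a known maximum at x = (1, 2)."""
    return -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)

def genetic_algorithm(pop_size=40, n_gen=100, bounds=(-5.0, 5.0),
                      mut_sigma=0.2, elite=4):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]             # best individuals first
        parents = pop[order[:pop_size // 2]]         # truncation selection
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.random()
            child = cut * a + (1 - cut) * b           # blend crossover
            child += rng.normal(0.0, mut_sigma, 2)    # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([pop[order[:elite]], children])  # elitism
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = genetic_algorithm()
print("best candidate:", best, "fitness:", score)
```

Other bio-inspired families (particle swarm, ant colony, differential evolution) follow the same pattern of iteratively improving a population of candidate solutions, which is what makes a fair efficiency comparison possible.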
Direction
RUSO VEIRAS, JUAN MANUEL (Tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Radiobiological effect of dose rate uncertainties in modelling tumor response in metabolic radiotherapy
Authorship
A.B.R.
Bachelor of Physics
Defense date
02.20.2024 17:00
Summary
Metabolic radiotherapy (MRT) is a type of treatment used in nuclear medicine that consists of administering radioactive medicines (radiopharmaceuticals) into the patient's body, which are preferentially deposited in the tumor and irradiate it when they decay. MRT has great clinical potential, but also important limitations for its full development: in particular, the difficulty of accurately determining the dose and dose rate received by the tumor (a result of the biokinetics of the medicines in the body) prevents individualized planning based on dose-effect studies. In the present work, we study how the uncertainty in the dose rate influences the tumor control probability (TCP). The dependence of the TCP uncertainty on the mean repair time of sublethal damage, the alpha/beta ratio and the physical half-life of the isotope used is analyzed.
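One common way to formalize these quantities (quoted here for orientation; the thesis may use a different parameterization) combines the linear-quadratic model with a protraction factor and a Poisson TCP model. For a mono-exponentially decaying dose rate delivered to completion, with effective decay constant lambda and sublethal-damage repair rate mu = ln 2 / T_rep:

```latex
% Linear-quadratic surviving fraction with protraction factor G:
S(D) \;=\; \exp\!\left(-\alpha D \;-\; \beta\, G\, D^{2}\right),
\qquad
G \;=\; \frac{\lambda}{\lambda + \mu},
\qquad
\mu \;=\; \frac{\ln 2}{T_{\mathrm{rep}}} .

% Poisson model for the tumor control probability, with N_0 clonogenic cells:
\mathrm{TCP} \;=\; \exp\!\left(-N_{0}\, S(D)\right).
```

An uncertainty in the dose rate therefore propagates through D and G into S(D) and the TCP, which is why the dependence on the repair time, the alpha/beta ratio and the physical half-life is the object of the analysis.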
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Pardo Montero, Juan (Co-tutorships)
Court
PARENTE BERMUDEZ, GONZALO (Chairman)
VAZQUEZ SIERRA, CARLOS (Secretary)
MONTERO ORILLE, CARLOS (Member)
Early diagnosis of Alzheimer's disease with PET quantification
Authorship
L.M.R.
Bachelor of Physics
Defense date
07.19.2024 09:30
Summary
With the advent of novel disease-modifying therapies for Alzheimer's disease (AD), there is an increasing need for accurate and early diagnosis of this neurodegenerative disorder. In this context, Positron Emission Tomography (PET) plays a central role as it allows for the in vivo detection of the neuropathologic hallmark of the disease: the presence of beta-amyloid plaques. However, the analysis of these images is not trivial, and requires advanced statistical models to understand the long-term implications of amyloid accumulation. In this study, we focus on investigating the time course of amyloid protein accumulation in AD, from asymptomatic to the dementia stages of the disease. For this purpose, we use PET image data from volunteers at different stages of AD and apply various mathematical tools that will enable us to reconstruct the temporal evolution of the abnormal protein aggregates over time.
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Aguiar Fernández, Pablo (Co-tutorships)
Court
SALGADO CARBALLO, JOSEFA (Chairman)
Montes Campos, Hadrián (Secretary)
SANCHEZ DE SANTOS, JOSE MANUEL (Member)
Particle identification with 'machine learning' in the WCTE experiment.
Authorship
M.F.B.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
WatChMaL (Water Cherenkov Machine Learning) is a platform designed to facilitate the training and evaluation of machine learning models for particle detection in experiments with water Cherenkov detectors. Its structure includes specific directories such as 'analysis' for analysis tools, 'watchmal' for the source code, and 'config' for configuration files. The model is based on a deep neural network with convolutional layers for feature extraction and densely connected layers for particle classification, which vary depending on the ResNet architecture used (ResNet-18 or ResNet-34). Training datasets of 100,000 and 1 million events generated with a Monte Carlo simulation were processed. An analysis of the spatial positions and directions of the particles was carried out to understand their distribution in the detector. The results obtained with 100,000 events improved when going from ResNet-18 to ResNet-34 but remained insufficient, indicating unsatisfactory performance for the classification between electrons and gamma rays. Increasing the number of training epochs led to overfitting of the training data, while increasing the number of events to 1 million yielded acceptable results, indicating that the model performs well enough to make predictions.
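For orientation only, the snippet below shows one way a ResNet-18 backbone can be adapted to the binary electron/gamma classification described above. It uses the generic torchvision model rather than the actual WatChMaL configuration; the two-channel "event image" input, image size and class labels are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Generic ResNet-18 classifier for two classes (e.g. electron vs gamma).
# Input is assumed to be a 2-channel event image (for instance charge and
# time per PMT mapped onto a grid); this is an illustration, not the
# WatChMaL data format.
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch.
events = torch.randn(16, 2, 64, 64)       # batch of fake event images
labels = torch.randint(0, 2, (16,))       # 0 = electron, 1 = gamma (assumed)
optimizer.zero_grad()
loss = criterion(model(events), labels)
loss.backward()
optimizer.step()
print("loss on the toy batch:", float(loss))
```

Overfitting of the kind described above would show up as the training loss continuing to drop while a held-out validation loss stops improving, which is why enlarging the training set from 100,000 to 1 million events helped.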
Direction
HERNANDO MORATA, JOSE ANGEL (Tutorships)
RENNER , JOSHUA EDWARD (Co-tutorships)
Court
ALVAREZ MUÑIZ, JAIME (Chairman)
BELIN , SAMUEL JULES (Secretary)
CARBALLEIRA ROMERO, CARLOS (Member)
Computational characterization of pseudo-ionic liquids for proton transport.
Authorship
G.D.S.
Bachelor of Physics
Defense date
07.18.2024 09:30
Summary
In this Bachelor Thesis, a computational characterization of a mixture of methyl-imidazolium (HMIM) and acetate (ACET) is performed. When both species react, the HMIM proton can be transferred to the ACET molecule, resulting in methyl-imidazole (MIM). In practice we therefore have a mixture of MIM and HMIM acetate, which is known as a pseudo-ionic liquid. Recently, Watanabe et al. have reported proton conduction mechanisms in these compounds. We first introduce the notion of ionic liquid (IL), its importance and its subclasses, as well as the concept of pseudo-protic ionic liquid (pPIL). We then describe the methodology employed, followed by a discussion of the simulations carried out and the results obtained, and close with a brief conclusions section.
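As an example of the kind of transport analysis such simulations enable (the actual methodology of the thesis is not reproduced here), the sketch below estimates a diffusion coefficient from the mean-squared displacement of an assumed, synthetic proton trajectory.

```python
import numpy as np

def mean_squared_displacement(positions):
    """positions: array of shape (n_frames, n_particles, 3), unwrapped
    coordinates in nm. Returns MSD(t) averaged over particles, using the
    single time origin t = 0 (a simple, not statistically optimal, estimator)."""
    disp = positions - positions[0]
    return (disp ** 2).sum(axis=-1).mean(axis=-1)

# Synthetic stand-in trajectory: 3D random walk for 50 "protons".
rng = np.random.default_rng(2)
dt_ps, n_frames = 1.0, 5000
steps = rng.normal(0.0, 0.01, size=(n_frames, 50, 3))
traj = np.cumsum(steps, axis=0)

msd = mean_squared_displacement(traj)
t = np.arange(n_frames) * dt_ps
# Einstein relation: MSD(t) ~ 6 D t at long times.
D = np.polyfit(t[n_frames // 2:], msd[n_frames // 2:], 1)[0] / 6.0
print(f"estimated diffusion coefficient: {D:.3e} nm^2/ps")
```

In a real analysis the MSD would be averaged over multiple time origins and fitted only in the diffusive regime; the resulting diffusivities can then be compared with the conductivity to distinguish vehicular transport from proton hopping.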
Direction
Montes Campos, Hadrián (Tutorships)
Varela Cabo, Luis Miguel (Co-tutorships)
Court
MORENO DE LAS CUEVAS, VICENTE (Chairman)
Liñeira del Río, José Manuel (Secretary)
TORRON CASAL, CAROLINA (Member)
Tumor growth models under radiation therapy fractionation
Authorship
R.R.L.
Bachelor of Physics
Defense date
07.19.2024 10:00
Summary
In this work, tumor growth is studied from the perspective of the nonlinear dynamics of spatiotemporal structures. For this purpose, mathematical models have been used, in particular a slightly improved exponential model, which describes tumor growth, and the LQ (linear-quadratic) model, which describes cell death due to the application of radiotherapy. In the temporal study, a population dynamics approach is implemented, establishing a competition between two species, healthy cells and cancerous cells. Through the development of custom Python codes, we obtain clear graphical representations that simulate different treatment schemes and verify the validity of the models used. On the spatial side, nonlinear dynamics methods are used to simulate cell growth in space, for both a longitudinal tumor and a superficial tumor.
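A minimal version of the temporal part of such a simulation is sketched below: exponential regrowth between fractions and LQ cell kill at each fraction. All parameter values are placeholders chosen for the example, not those used in the thesis.

```python
import numpy as np

# Tumor cell number under a fractionated schedule: exponential growth with
# doubling time T_d between fractions, and LQ survival exp(-alpha*d - beta*d^2)
# applied at each fraction. All parameter values are illustrative assumptions.
alpha, beta = 0.3, 0.03           # Gy^-1, Gy^-2
T_d = 10.0                        # doubling time in days
d, n_fractions = 2.0, 30          # 2 Gy per fraction, 30 fractions
schedule_gap = 1.0                # one fraction per day

growth_rate = np.log(2) / T_d
survival_per_fraction = np.exp(-alpha * d - beta * d ** 2)

N = 1e9                           # initial number of clonogenic cells
history = [N]
for _ in range(n_fractions):
    N *= survival_per_fraction                 # instantaneous cell kill
    N *= np.exp(growth_rate * schedule_gap)    # regrowth until next fraction
    history.append(N)

print(f"cells after treatment: {history[-1]:.3e} "
      f"(surviving fraction per fraction: {survival_per_fraction:.3f})")
```

The spatial part of the work goes beyond this by letting the cell populations also spread and compete in space, which is where the nonlinear dynamics methods come in.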
Direction
Pérez Muñuzuri, Alberto (Tutorships)
Guiu Souto, Jacobo (Co-tutorships)
Court
ADEVA ANDANY, BERNARDO (Chairman)
IGLESIAS REY, RAMON (Secretary)
ADAM , CHRISTOPH (Member)