Tesis Doctorales



This collection contains theses defended at Universidad Carlos III whose authors have authorized their deposit in e-Archivo. Since 2012, all theses defended at Universidad Carlos III de Madrid have been deposited, in accordance with Royal Decree 99/2011, of 28 January, which regulates official doctoral studies (art. 14.5), and with the Regulations of the Doctoral School of Universidad Carlos III de Madrid, of 7 February 2013 (art. 26.5, art. 31.1 and art. 32.2).


Recent Submissions

Now showing 1 - 20 of 2431
  • Item
    FLIPEC: A free-boundary equilibrium solver in the context of Ideal MHD for toroidally axisymmetric plasmas in the presence of flows
    (2024-02) Fernández-Torija Daza, Gonzalo; Sánchez Fernández, Luis Raul; Reynolds Barredo, José Miguel; UC3M. Departamento de Física; Sánchez Fernández, Luis Raul
Since the 1950s, science has been striving to extract energy in a controlled and usable manner from the same source that powers the Sun: the nuclear fusion of light nuclei. The immense gravity produced by the Sun’s mass creates pressure in its outer layers, heating and confining its predominantly hydrogen core for an extended period. Subjected to these high temperatures, hydrogen atoms, each composed of a single proton and a single electron, dissociate into the state of matter known as plasma. In this state, and under certain conditions, positively charged nuclei can collide, forming a different nucleus. Despite the relatively simple premise, the idea promised quasi-unlimited energy at a low cost for humanity. Unfortunately, the conditions within the Sun are impossible to replicate on Earth, as we lack the Sun’s gravitational energy to confine and heat the plasma; such a mechanism can only be achieved within stars. On our planet, we need to resort to smaller-scale methods to achieve nuclear fusion. One proposed method, known as "magnetic confinement fusion," involves using powerful magnetic fields to enclose plasma, which is a fluid composed of charged particles that react to these fields. This strategy faces several challenges that hinder the development of a commercial reactor capable of producing energy consistently and efficiently for the grid. The need to confine, for an extended period, a plasma around ten times hotter than the solar interior is one of the most challenging. Maintaining such a plasma at these temperatures for long enough to build a fusion plant is still a distant goal. Currently, the leading experimental reactors in fusion research are the stellarator and the tokamak. Both designs are based on toroidal geometries to confine plasma, but their magnetic field configurations impart different properties to their operation. While the stellarator possesses essentially three-dimensional configurations, tokamaks’ magnetic fields exhibit toroidal symmetry, simplifying some aspects while introducing other disadvantages. Notably, tokamaks require induced current in the plasma to maintain equilibrium. This work is specifically focused on tokamak equilibrium within the context of toroidally axisymmetric geometry. Balancing forces within the plasma is crucial for confining particles within the reactors. Due to the complexity of the equations describing plasma behavior in detail, finding equilibrium solutions for a reactor can become an overwhelming challenge, both analytically and computationally. One solution involves simplifications leading to a field known as magnetohydrodynamics (MHD), specifically Ideal MHD in this study, which assumes zero resistivity in the plasma. Despite achieving a considerable simplification of the starting equations, finding equilibrium solutions in Ideal MHD remains a formidable challenge, especially for certain magnetic field configurations. Numerous 3D equilibrium codes, such as VMEC or SIESTA, are widely used by the scientific community to find solutions not only for tokamaks but also for the intricate geometries of stellarator magnetic fields. Fortunately, this work deals with a toroidal symmetry context, allowing for further simplification of the mathematics and computation times. This reduction leads to an elegant second-order differential equation: the Grad-Shafranov equation.
Its solution depends on a single variable, typically represented by the symbol ψ, providing a measure of the poloidal magnetic flux required to maintain a plasma in equilibrium with a certain pressure distribution p(ψ) under a toroidal field indirectly given by a profile F(ψ). This relative simplicity opens the door to new possibilities, including obtaining analytical solutions for certain cases and significantly reducing computational costs for numerical solutions. Both three-dimensional codes and those solving the Grad-Shafranov equation share a common characteristic: they deliberately disregard the presence of macroscopic flows. That is, the term describing velocity in the MHD equations is considered negligible. However, toroidal and poloidal flows are present in most operating tokamaks. Addressing the Grad-Shafranov equation by including the velocity term is not particularly challenging; however, its final expression does entail a significant increase in the complexity of the equation. The number of free profiles increases from two to five, and they become less intuitive, while the differential operator risks changing its nature if poloidal flows are significant. Moreover, there are no longer analytical solutions except for simple toroidal flow cases. Consequently, only a few codes can compute equilibria in the presence of general flows, including FLOW, CLIO, or FINESSE. Thus, this work undertakes the task of developing a code from scratch that offers different characteristics than existing ones, contributing a unique tool for the fusion community to analyze flow effects differently than previously done. The effort was carried out in two phases. Firstly, an iterative code capable of running under free boundary conditions was built and validated. Subsequently, the focus shifted to providing the code with the necessary tools and capabilities to make it a useful resource for the community. Both phases are covered in two articles, one already published, and the other in the process of publication as of the writing of this document. Among FLIPEC’s main properties are:
• An Eulerian mesh implemented initially in toroidal coordinates, where the computational domain conformed to a circle. The minor-radius direction was treated numerically with second-order finite differences, while a pseudo-spectral method was used for the poloidal dependencies.
• A computational boundary update scheme was adapted for the code to enable the calculation of free-boundary equilibria. This strategy had already been successfully implemented in SIESTA, a 3D static equilibrium code [1, 2].
• To enable the code to adapt to any toroidally symmetric fusion device, generalized coordinates were implemented. This is essential because point currents running through coils create a singularity in the generated magnetic field values, and a circle only suited a very limited number of tokamaks.
• Calculations under free boundaries trigger a vertical instability causing the plasma to move vertically. An effective scheme was introduced to correct this displacement in each iteration.
• In tests, it was observed that, in cases with strong toroidal rotation near the magnetic axis, the total plasma current tended to decrease over iterations. Since the current remains fixed during reactor operation, an algorithm was added to obtain equilibria while fixing the plasma current. This strategy may perhaps be replicated in the future for other parameters.
All these features were tested with two reference configurations associated with two experimental reactors with notably different characteristics: ITER, an ambitious large tokamak under construction, and NSTX, a smaller spherical tokamak with a low aspect ratio.
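For reference only, the static (flow-free) Grad-Shafranov equation that FLIPEC generalizes takes the standard textbook form below; the exact normalization and the flow-extended version used by the code are not reproduced here and may differ.

```latex
\Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
  + \frac{\partial^{2}\psi}{\partial Z^{2}}
  \;=\; -\,\mu_{0}\,R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi},
\qquad F(\psi) = R\,B_{\phi}.
```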
  • Item
    Transición de la educación superior a formación híbrida: retos y enfoques metodológicos
    (2024-02) Reina Sánchez, Karen; Durán Heras, Alfonso; UC3M. Departamento de Ingeniería Mecánica; Durán Heras, Alfonso
Higher Education Institutions (HEIs) currently face a profound reflection on their role in the society of the future. The pandemic exposed the shortcomings of the traditional teaching model and highlighted the potential of more flexible models that combine virtual and face-to-face teaching. Likewise, the needs and expectations of today's students point towards greater flexibility and digitalization of higher education. The most recent technological advances, especially in generative Artificial Intelligence, warn that it will not be possible to postpone the digital transformation of education for much longer. In the short term, HEIs must undertake a digital metamorphosis to preserve their place in society. Progress in that direction includes investments in technologies and infrastructure, but shows less significant advances in the transformation of methodologies and pedagogical practices away from traditional ones. In the last decade, HEIs with face-to-face education models have made some forays into online learning formats, such as the implementation of discussion forums or massive open (or closed) online courses, to complement face-to-face teaching. However, it was not until the outbreak of the pandemic, with the mandatory shift to virtual teaching, that HEIs faced, on a massive scale, a radical transformation of their teaching methods. The scientific literature published after 2020 reflects the disruption caused by the pandemic in the educational model of HEIs, evidenced by the growth in publications on experiences in synchronous online and hybrid teaching. This surge in scientific production and the relative novelty of this research field highlight the need for an exhaustive literature review that can shed light on fundamental questions, such as identifying the principles that govern effective teaching in synchronous virtual and hybrid environments, understanding the challenges inherent to these delivery modes, and assessing the current state of the experiences published on the subject. Answering these questions is crucial to guide the transition towards more digital and flexible training models in Higher Education (HE). Furthermore, the influence of those exceptional circumstances on the synchronous online and hybrid teaching experiences underlines the need to examine the approaches applied during that period and to evaluate their applicability under normal circumstances, as well as to find new approaches that help guarantee high-quality digital teaching in the future. Given the data generated in digital learning environments and the importance of integrating them into the decision-making of HEI teachers and managers, the study of alternatives for processing and exploiting those data is also required as a fundamental part of the transition. Taking all of the above into account, the general objective of this thesis is to investigate methodological approaches and approaches for the exploitation of educational data, in order to contribute to the transition of higher education towards effective hybrid training.
To achieve this objective, the work comprises a review of the most recent literature on synchronous online and hybrid teaching in HE, the systematic evaluation of methodological approaches in real learning environments, and the characterization, exploration and analysis of the data generated in those environments. The research is divided into three phases: a first theoretical study phase, which covers the first two objectives of this thesis; an empirical study phase, which encompasses both the exploration of methodological approaches and the data analysis; and a results and conclusions phase. For its presentation, this document is structured in 10 chapters, including an introduction chapter and a conclusions chapter. Among the most significant results of this thesis is the conceptualization and characterization of effective teaching, delving into the principles that should guide formal instruction in order to achieve effectiveness in the new learning environments. In addition, a set of methodological approaches is provided on the organization and management of teaching, the promotion of participation, and the delivery of content in synchronous online environments, together with a description of the results obtained during the systematic evaluation of these approaches in two courses of the Industrial Management Engineering discipline. Details are also provided on the evaluation of approaches that combine face-to-face and online activities with ERP systems, the reuse of educational resources, and modifications to the assessment system, in order to improve students' learning experiences in a technology-enhanced face-to-face environment. As part of the empirical study, a series of variables and indicators associated with learning analytics in synchronous online and hybrid environments is defined, which makes it possible to obtain information about the approaches applied. Finally, a set of analyses is proposed whose aim is to transform the data generated on learning platforms into value for HEIs. To facilitate the instrumentation of both the methodological approaches and the data analyses, the thesis details the entire process followed, iteration after iteration of Action Research; in addition, a repository has been created with the Python procedures generated within this research so that they can be used by other researchers, in line with current open science demands.
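The Python procedures themselves live in the thesis's own repository; purely as a stand-in, the snippet below sketches one kind of learning-analytics indicator (per-student attendance rate at synchronous sessions, computed from a platform event log). The column names, event types and the indicator are illustrative assumptions, not the thesis's actual variables.

```python
import pandas as pd

# Hypothetical event log exported from a learning platform.
log = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3"],
    "event":      ["join_session", "forum_post", "join_session", "join_session", "forum_post"],
    "session_id": ["w1", None, "w1", "w2", None],
})
n_sessions = 2  # synchronous sessions held during the period (assumed)

# Number of distinct synchronous sessions each student joined.
attendance = (log[log["event"] == "join_session"]
              .groupby("student_id")["session_id"].nunique()
              .rename("sessions_attended"))

indicators = attendance.to_frame()
indicators["attendance_rate"] = indicators["sessions_attended"] / n_sessions
print(indicators)
# s1 attended 1/2 sessions, s2 attended 2/2; s3 never joined and is absent from the table.
```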
  • Item
    Self-adaptive hp finite element method with iterative mesh truncation technique accelerated with ACA
    (2023-09) Barrio Garrido, Rosa María; García Castillo, Luis Emilio; Salazar Palma, Magdalena; UC3M. Departamento de Teoría de la Señal y Comunicaciones; García Castillo, Luis Emilio
Among the most prominent computer-aided techniques employed in Computational Electromagnetics (CEM) for the numerical resolution of Maxwell’s equations is the powerful and flexible Finite Element Method, or FEM. This methodology stands out among other general-purpose numerical techniques due to its versatile nature, which opens up the possibility of generating “adapted” meshes. The most powerful type of adaptivity is the so-called hp-adaptivity, in which h-refinements (modification of the element size) and p-refinements (variation of the polynomial order p) are performed simultaneously, providing exponential rates of convergence, even in the presence of singularities. Thus, very accurate solutions are obtained even in the presence of singularities or, equivalently, approximate solutions within engineering accuracy can be obtained using a minimum number of unknowns. When this type of combined, optimal h and p refinement that adapts the mesh to the solution is performed automatically, it is referred to as automatic hp-adaptivity or self-adaptive hp finite elements. This makes hp self-adaptivity the paradigm of the future in the field of electromagnetic simulators and solvers. However, the mathematical and numerical complexity behind such techniques has until very recently relegated them to a purely academic role, and they are still in a development phase. Most commercial simulation tools, though, began as in-house numerical software developed by academic research groups. In this regard, the research group headed by Professor Demkowicz (The University of Texas at Austin) has developed a novel, fully self-adaptive hp-FEM electromagnetic solver for two- and three-dimensional complex geometries. In fact, this simulator is unique in its field, as it is the only tool developed to date with fully automatic adaptivity in h and p simultaneously, applied specifically to electromagnetic problem solving. However, the hp code was not originally adapted to the solution of open-region problems, as FEM’s original formulation allows the analysis of finite geometries only. One of the main contributions of the work group to which the author of the present Ph.D. dissertation belongs is an optimal truncation technique for the use and extension of the original FEM formulation in open-region problems. This technique is referred to herein as FE-IIEE (Finite Element - Iterative Integral Equation Evaluation). Thus, working together with Prof. Demkowicz, the advisor of the present doctoral thesis and his group successfully integrated this iterative FE-IIEE methodology into the self-adaptive hp-FEM software, obtaining a competitive hp tool for the analysis of two-dimensional electromagnetic open problems at the time this doctoral thesis was developed. Nonetheless, FE-IIEE imposes an extra computational effort to evaluate the convolution integral appearing on the fictitious boundary where the FE-IIEE algorithm is applied to enclose the open, infinite domain. This is the framework for the research developed in this doctoral thesis. Taking advantage of the knowledge of these collaborative research groups, and taking as a starting point this powerful self-adaptive 2D hp-FE-IIEE solver, the main goal of this Ph.D. dissertation is the implementation of an acceleration technique to remove the aforementioned computational bottleneck of the self-adaptive hp-FE-IIEE software. The first objective of this research is the appropriate choice of the fast integration technique to be implemented in the hp code.
In order to make this choice, two known methodologies of different nature, with proven a priori advantages, were pre-selected: the Fast Multipole Method (FMM) and the Adaptive Cross Approximation (ACA). Both acceleration methods have been implemented herein in a 2D FEM version truncated by means of FE-IIEE, and their performance has been studied with respect to several aspects. Due to the high accuracy imposed by hp and the double error control carried out by two different error control parameters (one for the self-adaptivity and one for the FE-IIEE iterations, placed in a doubly nested loop), the response of the fast integration technique to the required accuracy (set by its own error control parameter) is crucial. In addition, its robustness, as well as its behavior in meshes with different h sizes and different polynomial orders, including higher p-order meshes, must be taken into account. Finally, the ease of implementation and the generality offered by each of the methods tested have also been addressed. In light of the results, the kernel-independent and robust ACA proved to be the preferred algorithm for implementation in the hp-FE-IIEE software. The second and main objective is the implementation of the method of choice in the hp-FE-IIEE code for its acceleration, studying its viability and the results obtained in terms of potential loss of accuracy and computational complexity gains. A novel formulation for ACA has been implemented to adapt it to the boundary condition used in the Integral Equation evaluation. The implementation has been validated and its performance in terms of computational savings (CPU time consumption and memory savings) has been studied. It will be shown in this doctoral dissertation that, in this context, ACA exhibits robust behavior, yields good accuracy with compression levels of up to 90%, and provides fine control of the approximation, which is a crucial advantage for hp-adaptivity. Theoretical and empirical performance results (computational complexity or CPU time) comparing the accelerated and non-accelerated versions of the method are presented. Several canonical scenarios are addressed to characterize the behavior of ACA with h-, p- and hp-adaptive strategies, and with higher-order methods in general. The main conclusions yielded by this research are easily extrapolated to geometries in three dimensions, as well as to higher-order contexts.
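For orientation only, the sketch below shows the core of a partially pivoted Adaptive Cross Approximation building a low-rank factorization A ≈ UV from rows and columns generated on demand; the tolerance handling, stopping rule and interfaces are simplified assumptions and do not reproduce the formulation integrated in the hp-FE-IIEE code.

```python
import numpy as np

def aca_partial_pivoting(get_row, get_col, m, n, tol=1e-6, max_rank=None):
    """Build a low-rank approximation A ~ U @ V of an m x n matrix block.

    get_row(i) / get_col(j) return row i / column j of A on demand, so the
    full block is never assembled (the point of ACA for integral-equation kernels).
    """
    max_rank = min(m, n) if max_rank is None else max_rank
    U, V = [], []
    norm2 = 0.0                 # running estimate of ||U V||_F^2 (cross terms neglected)
    i, used_rows = 0, set()
    for _ in range(max_rank):
        # Residual of the pivot row: R[i, :] = A[i, :] - sum_k U[k][i] * V[k]
        row = np.array(get_row(i), dtype=float)
        for u, v in zip(U, V):
            row -= u[i] * v
        j = int(np.argmax(np.abs(row)))          # pivot column
        if abs(row[j]) < 1e-14:                  # row already well represented
            break
        v = row / row[j]
        col = np.array(get_col(j), dtype=float)
        for u, vv in zip(U, V):
            col -= vv[j] * u
        u = col
        U.append(u); V.append(v); used_rows.add(i)
        norm2 += float(u @ u) * float(v @ v)
        if np.linalg.norm(u) * np.linalg.norm(v) <= tol * np.sqrt(norm2):
            break                                # relative stopping criterion met
        # Next pivot row: largest entry of the new column outside already used rows.
        candidates = np.abs(u)
        candidates[list(used_rows)] = -1.0
        i = int(np.argmax(candidates))
    return np.array(U).T, np.array(V)            # shapes (m, k) and (k, n)

# Quick check on a smooth (hence numerically low-rank) interaction kernel
# between two well-separated point sets.
x, y = np.linspace(0, 1, 200), np.linspace(3, 4, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = aca_partial_pivoting(lambda i: A[i, :], lambda j: A[:, j], *A.shape)
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```

The kernel independence referred to in the abstract is visible here: only matrix entries are requested, never an analytical expansion of the kernel.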
  • Item
    Analysis of the wave-plasma interaction in electrodeless plasma thrusters
    (2024-03) Jiménez Jiménez, Pedro José; Merino Martínez, Mario; Ahedo Galilea, Eduardo Antonio; UC3M. Departamento de Ingeniería Aeroespacial; European Commission; Comunidad de Madrid; Ministerio de Ciencia, Innovación y Universidades (España); Merino Martínez, Mario
This thesis contributes to the understanding and numerical modeling of electrodeless plasma thrusters (EPTs). With a dual approach that combines practical design tools and fundamental research models, this work offers a versatile and complete toolset to advance the state of the art in low-temperature plasma physics applied to electric propulsion. The core of the research consists of advances in the study of EPTs and of technologies for their modeling and simulation, focused mainly on the interaction of electromagnetic waves and its relation to transport phenomena in the plasma. The study begins with a detailed analysis of the cold-plasma model applied to wave propagation problems in Helicon plasma thrusters (HPTs). A highlight is the introduction of PWHISTLER, a wave simulation tool based on the finite element (FE) method. This model stands out for its greater speed, accuracy and ability to handle complex geometries, significantly improving the study of electromagnetic phenomena in magnetized plasmas. A series of analyses using both a finite-difference (FD) model and PWHISTLER demonstrates their effectiveness in characterizing wave propagation and absorption in HPTs, a key observation being the concentration of power absorption at the electron cyclotron resonance (ECR) surface. The integration of PWHISTLER with the plasma transport simulation code HYPHEN enables an exhaustive study of a new cusped magnetic field topology in an HPT. The simulations, verified against experimental data, provide conclusions on performance losses and thrust efficiency, highlighting the role of plasma currents to the walls, the electron temperature, and the influence of the magnetic topology. Finally, a new formulation of an implicit particle-in-cell (PIC) algorithm, designed specifically for magnetic nozzles, is presented. The implicit PIC method improves computational efficiency with respect to well-established methods and constitutes a substantial advance in the simulation and optimization of magnetic nozzles for EPTs.
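To give a flavor of the finite element wave solvers discussed above, the toy sketch below assembles and solves a 1-D scalar Helmholtz problem with a simple radiation condition; it is a generic illustration of the FE approach and bears no relation to PWHISTLER's actual cold-plasma, axisymmetric formulation.

```python
import numpy as np

def helmholtz_1d_fem(n_elem, L, k):
    """Solve u'' + k^2 u = 0 on [0, L] with u(0) = 1 and an outgoing-wave
    condition u' = i k u at x = L, using linear finite elements."""
    n_nodes = n_elem + 1
    h = L / n_elem
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h        # element stiffness
    Me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0    # element mass
    A = np.zeros((n_nodes, n_nodes), dtype=complex)
    for e in range(n_elem):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += Ke - k**2 * Me
    A[-1, -1] -= 1j * k            # first-order absorbing boundary term at x = L
    b = np.zeros(n_nodes, dtype=complex)
    b -= A[:, 0] * 1.0             # move the known value u(0) = 1 to the right-hand side
    A[0, :] = 0.0; A[:, 0] = 0.0; A[0, 0] = 1.0
    b[0] = 1.0
    return np.linalg.solve(A, b)   # should approximate the outgoing wave exp(i k x)

u = helmholtz_1d_fem(n_elem=200, L=1.0, k=6 * np.pi)
print(abs(u[-1]))                  # close to 1 for a purely outgoing wave
```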
  • Publication
    La protección de datos personales en el Sistema Internacional, Europeo e Interamericano. Con especial análisis de su recepción en México
    (2023-11) Tovar Partida, Gilberto; Fernández Liesa, Carlos Ramón; Universidad Carlos III de Madrid.
  • Publication
    Hybrid and Bayesian modelling of passenger occupancy at Beijing metro
    (2024-01) He, Sun; Cabras, Stefano; UC3M. Departamento de Estadística; Cabras, Stefano
The thesis defended here is that modeling passenger flows with acceptable properties can be done with the models presented in its chapters. This thesis explored statistical methods for modeling passenger flows in Beijing’s Metro. The focus was on Bayesian methods and their application to dynamic systems, particularly urban metros. The Bayesian paradigm, including prior probability, likelihood, and posterior probability, was emphasized. Computational challenges were addressed using Integrated Nested Laplace Approximations (INLA), suitable for large-dimensional parameters in complex models. The work started from the socio-economic and engineering contexts of Beijing’s rapid urbanization. The Beijing Metro, serving millions daily, faces challenges due to unpredictable ridership patterns influenced by various factors. Predictive analytics are therefore crucial for operational efficiency, expansion planning, and passenger experience enhancement. Bayesian analysis was used for its adaptability and its capability to learn from new data. INLA was employed for efficient Bayesian inference, particularly in the complex spatial and spatio-temporal models relevant to the study. The framework proved effective for regression models, dynamic linear models, and spatial applications. Data from ticketing systems, turnstiles, smart card check-ins, and mobile apps provided essential input for the analysis. These data were crucial for our research, as they are for managing peak traffic, scheduling trains, ensuring passenger safety, and supporting strategic decision-making. The thesis demonstrated the effectiveness of Bayesian models in predicting passenger flow in urban metro systems. Future work could focus on enhancing the computational efficiency of these models and exploring their application in other dynamic urban systems. Further research could also delve into the integration of additional data sources and the development of more advanced predictive models. Although the primary focus is on the Beijing Metro, this research draws on data from 1 September to 31 October 2020 to ensure a comprehensive understanding. It is worth noting that, while the Bayesian model developed might offer theoretical applications for other metro systems, its design, calibration, and validation remain rooted in Beijing’s context. Aspects like intermodal transportation or predictions for bus networks fall outside of this study’s purview. To our knowledge, the daily passenger model has, at the time of writing, also been fitted to data from metro networks in other cities, with performance similar to that shown here.
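As a toy illustration of the prior/likelihood/posterior cycle described above, the snippet below performs a conjugate Gamma-Poisson update for the daily entry count at a single hypothetical station; the actual thesis models are far richer spatio-temporal models fitted with INLA, and all numbers here are assumptions.

```python
import numpy as np

# Hypothetical prior belief about the mean daily entries at one station:
# lambda ~ Gamma(shape=a0, rate=b0), with prior mean a0/b0 = 20,000 passengers/day.
a0, b0 = 20.0, 0.001

# Hypothetical observed daily entry counts (Poisson likelihood).
y = np.array([18750, 21200, 19890, 22410, 20050, 19630, 23120])

# Conjugate update: posterior is Gamma(a0 + sum(y), b0 + n).
a_post = a0 + y.sum()
b_post = b0 + len(y)
post_mean = a_post / b_post                 # posterior mean of lambda
post_sd = np.sqrt(a_post) / b_post          # posterior standard deviation
print(f"posterior mean ≈ {post_mean:,.0f} passengers/day, sd ≈ {post_sd:,.0f}")

# Posterior predictive for tomorrow's count, approximated by Monte Carlo.
rng = np.random.default_rng(0)
lam_draws = rng.gamma(shape=a_post, scale=1.0 / b_post, size=10_000)
y_new = rng.poisson(lam_draws)
print("95% predictive interval:", np.percentile(y_new, [2.5, 97.5]))
```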
  • Item
Design, development and characterization of a microwave electrodeless plasma thruster
    (2024) Inchingolo, Marco Riccardo; Navarro Cavallé, Jaume; Merino Martínez, Mario; UC3M. Departamento de Ingeniería Aeroespacial; Navarro Cavallé, Jaume
This thesis delves into the design of a waveguide Electron Cyclotron Resonance Thruster prototype, the experimental characterization of its plasma discharge and plume, and the evaluation of its performance characteristics. The research aims to attain a purely electrodeless plasma thruster with performance comparable to conventional electric thrusters employing electrodes, addressing their inherent lifetime limitations. Electric propulsion (EP) is an in-space propulsion technology that uses electric power to accelerate a propellant. This propellant is typically in the form of a plasma and is accelerated using electric and magnetic fields, generating thrust. Typical established thruster technologies employ electrodes for plasma acceleration. However, these are subject to erosion or contamination, potentially leading to failure, and thus limiting the lifetime of a space mission. Electrodeless thrusters, on the other hand, use electromagnetic power to generate the plasma and, typically, a magnetic nozzle to accelerate it; therefore, they lack these lifetime-limiting components. Additionally, they can operate on virtually any propellant gas (xenon, argon, krypton, water, etc.). However, these technologies show limited performance and are still the subject of extensive research efforts. Electron Cyclotron Resonance (ECR) occurs when electromagnetic power (typically microwaves) is efficiently absorbed by a plasma immersed in a magnetic field. This phenomenon is extensively used in industrial plasma sources and, in the last decades, its applications in the electric propulsion field have become of interest to the EP community, thanks to the work of several research groups on coaxial ECR thrusters. However, this technology is not purely electrodeless, since it requires a central conductor exposed to the plasma and therefore subject to erosion. An alternative way of coupling power to the plasma using the ECR is to employ a waveguide geometry, which does not need a central conductor. On the other hand, this technology has shown limited performance in the past when compared to the coaxial one. This thesis contributes to a better understanding of this thruster with the development and experimental study of a waveguide Electron Cyclotron Resonance thruster (ECRT) prototype, in order to identify the performance limitations and assess the viability of this technology. This objective is pursued, firstly, by designing an ECRT prototype; secondly, the discharge physics is explored via simulation tools and the plume metrics are characterized via electrostatic probe and LIF measurements. Finally, the thruster performance is directly assessed via thrust balance measurements. The design of the waveguide thruster presented in this thesis is described in Chapter 3. The thruster is designed to work at a higher microwave frequency, 5.8 GHz, than the typically employed 2.45 GHz, allowing a reduction of the thruster dimensions and the down-scaling of the power requirement to below 400 W. A permanent magnet is used to generate the magnetic field necessary for the ECR and for the creation of the magnetic nozzle that accelerates the plasma. An additional electromagnet is also included in the assembly to alter the resonance position and vary the shape of the magnetic nozzle. The design of the transmission line employed in the experiments is also presented.
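For context, the resonant magnetic field associated with a given microwave frequency follows from the electron cyclotron condition ω = eB/mₑ; the short calculation below (illustrative only, not taken from the thesis) gives roughly 0.207 T at 5.8 GHz versus the familiar 0.0875 T at 2.45 GHz.

```python
import math

def ecr_field(f_hz):
    """Magnetic field [T] at which the electron cyclotron frequency equals f_hz."""
    e = 1.602176634e-19      # elementary charge [C]
    m_e = 9.1093837015e-31   # electron mass [kg]
    return 2 * math.pi * f_hz * m_e / e

for f in (2.45e9, 5.8e9):
    print(f"{f/1e9:.2f} GHz -> B_res = {ecr_field(f)*1e3:.1f} mT")
# 2.45 GHz -> ~87.5 mT ; 5.80 GHz -> ~207 mT
```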
To assess the thruster design, simulations of the discharge chamber and the near magnetic nozzle segment are performed using the HYPHEN code developed by the EP2 team. HYPHEN is a hybrid PIC-fluid code that can be used to simulate, and has been validated on, different types of electric thrusters. The geometry of the designed waveguide ECRT has been used as a baseline for defining the simulation domain. To simulate the electromagnetic power absorption at the ECR region, user-defined power deposition maps are provided for the electron fluid. Depending on this map, different pressure, temperature, and current profiles are found, strongly affecting the discharge. Regions of magnetic drag may exist in the plume. A parametric analysis has been performed showing good scaling with the energy per particle. Performance metrics have been obtained, lying in the 10%–20% thrust efficiency range. In Chapter 5 the first characterization of the plasma plume produced by the waveguide ECRT prototype is presented. The plume is probed with electrostatic probes, a Faraday cup, and a Langmuir probe, across a wide range of working points. Further information is gathered by analyzing the change in the microwave reflection coefficient and the thruster floating potential. The plume current analysis showed large divergence angles and poor plume utilization efficiency, below 70%. Relatively low electron temperatures have also been found, and electron cooling has been observed along the plume expansion. The effect of the magnetic field topology is analyzed by varying the electromagnet current. Using these data, a preliminary estimation of the thrust efficiency was performed, obtaining values below 2%. Following the characterization of the plume, improvements in the thruster assembly are made by introducing a diffused radial propellant injector and a stub tuner in the transmission line to reduce the level of power reflections. In Chapter 6, the performance of the thruster is further analyzed with an amplified-displacement hanging pendulum thrust balance; additionally, the plume characterization is completed by measurements of the ion energy using an RPA. The thrust measurements led to an estimation of the thrust efficiency of up to 3.5%. The probe measurements allowed us to compute the partial efficiencies, showing that the energy efficiency, which stays below 7%, is the factor that most limits the performance of the device; the utilization efficiency was shown to be consistent with the previous measurements. For increased mass flow rates the plume has been found to become hollow, leading to a high divergence angle. Preliminary results regarding the existence of a population of energetic electrons have also been obtained. Finally, in collaboration with the CNRS-ICARE laboratory, the LIF technique has been used to carry out additional measurements on the plume of this prototype. The plume has been analyzed under various conditions, and the related findings are discussed in Chapter 7. The magnetic nozzle length was not found to alter the terminal speed of the ions in the observed spatial range; however, it affects ion acceleration in the near-plume region. Large ion axial kinetic temperatures were found, in the range of several thousand kelvin, a result that may hint at an extended ionization region. Further measurements were performed in the middle of the plasma discharge chamber, a region notoriously difficult to probe with intrusive probes.
Findings show that the ions have a negative mean velocity (towards the backplate); therefore, a large fraction of the ion production is thought to be lost at the thruster walls. Two appendices at the end of this work summarize parallel activities performed in the context of this thesis. Appendix A presents the work performed by the author at the Japan Aerospace Exploration Agency (JAXA), where the LIF diagnostic was used on the engineering model of the ECR gridded ion thruster used in the Hayabusa II mission; the technique was used to assess the back-sputtering phenomenon observed during in-flight operations. In Appendix B, Faraday cups are tested on the plume of a Helicon thruster to evaluate the impact of design alternatives on the measured ion current density.
  • Item
    Maneuvering Target Tracking Methods for Space Surveillance
    (2024-03) Escribano Blázquez, Guillermo; Universidad Carlos III de Madrid.
Earth orbits are a valuable natural resource that must be preserved. A myriad of services, most of which support modern human society, currently relies on orbital assets. To name a few, global navigation satellite services and weather forecasting, crucial for the vast majority of the population, require sensors and stations in Earth orbit. Recently, the space industry has seen a considerable decrease in launch and development costs, so access to space is more affordable than ever before. This has opened the door to new actors in the space domain, such as startups and universities, which now operate small-sized spacecraft (up to 500 kg) at low orbital altitudes. This breed of new satellites adds to an already numerous background population, composed not only of operative satellites but also of derelict rockets and spacecraft, as well as fragments originating from break-up events, explosions and in-orbit collisions. It is this last subset of the Earth orbital population that alarms the space community: for a sufficiently high congestion level, a single collision can trigger a cascade of additional events, to the point that every object placed in orbit is guaranteed to collide with a neighboring fragment in a relatively short period of time. To prevent this catastrophic situation, the Earth orbital space is continuously monitored by surveillance networks composed of ground- and space-based sensors of different types. Data acquired by these sensors are processed and fused with information coming from satellite operators to elaborate and maintain a comprehensive list, or catalog, of objects in Earth orbit. Generating a labelled map of objects can help to prevent future in-orbit collisions: by running one-to-one conjunction analyses over the entire population, it is possible to identify potentially hazardous conjunctions and issue warnings to spacecraft operators so they can take remedial action. Accurately tracking space objects with surveillance sensors is a complex task, especially if it is to be done automatically. In particular, the integrity of space object catalogs is compromised when spacecraft perform maneuvers and their operators do not report them in a timely manner, mainly because automated space surveillance and tracking systems rely on natural satellite motion or, at best, very simple maneuver models. This thesis is aimed at advancing the tools and techniques required for automated space surveillance and tracking in the presence of unknown maneuvers, with emphasis on the definition of suitable maneuver models for space objects. One of the main problems associated with orbital maneuvers is data association, that is, being able to assess whether a certain observation corresponds to a given target following a maneuver. In general, it is not easy to provide evidence supporting measurement-to-object correlation after or during a maneuvering period, especially if no knowledge regarding the maneuvering characteristics of the target can be assumed a priori. In the context of this thesis, orbital maneuvers are classified according to the different types of space propulsion technologies, namely chemical and electrical. The former present high thrust magnitudes and poor propellant utilization, whereas the latter are significantly more fuel-efficient at the cost of lower thrust forces. Nonetheless, one can derive educated guesses for the expected control magnitudes of orbital maneuvers by analyzing missions equipped with either propulsive type.
These control magnitudes can be used to bound the space that is accessible (or reachable) for a target given an initial state and some temporal bounds, for instance representing the last known state and the time elapsed since then. Still, application of these concepts requires the development of computationally efficient methods to compute orbital distances in terms of control, also known as control distance metrics, based either on chemical or electrical propulsion. In other words, a key enabler of the methodology proposed in this thesis is the development of inexpensive methods to evaluate control distance metrics, which shall capture the dynamical features of Earth’s orbital environment and the different space propulsive technologies. To this end, two different surrogate models for orbital maneuvers have been developed, under low and high-thrust assumptions, and exploited to compute not only the control distance between two orbital states but also to approximate reachability bounds conditioned on an initial state and a time of flight. Statistically consistent maneuver models are then constructed from the developed surrogates, which can provide reasonable maneuver evidence for data association in space surveillance applications. These models are embedded in an advanced multiple maneuvering target tracking filter, capable of managing ambiguity arising from target maneuvers, death, birth, missed sensor detections and false observations, thus being suitable for robust automated operations. Results obtained from synthetic datasets indicate the developed methods help the filter to maintain custody of maneuvering targets and resolve ambiguity in a wide range of scenarios. However, the strong dynamical impact of maneuvers leads to significant uncertainties in the state of space objects, to the point that the quality of conjunction analyses is severely degraded. Therefore, even if the developed methods are capable of maintaining custody of maneuverable space objects, it is advisable to further characterize maneuvers according to, for instance, historical data. Including heuristics in the definition of maneuver modes can help to reduce the effective uncertainty at early post-maneuver epochs, thereby increasing the overall quality of the space object catalog and allowing for accurate conjunction analyses in the presence of maneuvering targets.
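As a minimal illustration of the data association problem mentioned above, the sketch below applies a standard chi-square gate on the Mahalanobis distance between a predicted measurement and an incoming observation; this is a textbook ingredient of tracking filters, not the maneuver-aware association machinery developed in the thesis, and the numerical values are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def gate_measurement(z, z_pred, S, prob=0.997):
    """Return (accept, d2): accept measurement z for a track whose predicted
    measurement is z_pred with innovation covariance S, using a chi-square gate."""
    nu = z - z_pred                               # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))       # squared Mahalanobis distance
    threshold = chi2.ppf(prob, df=len(z))         # gate size for the chosen probability
    return d2 <= threshold, d2

# Hypothetical 2-D angular residuals (e.g. right ascension / declination, in radians).
z      = np.array([0.0120, -0.0030])
z_pred = np.array([0.0100, -0.0010])
S      = np.diag([1e-5, 1e-5])                    # assumed innovation covariance
accept, d2 = gate_measurement(z, z_pred, S)
print(accept, d2)
```

After a maneuver, the true innovation no longer matches the natural-motion prediction, which is why the thesis replaces this simple gate with maneuver models based on control distance metrics.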
  • Item
    Métodos para la Gestión del Conocimiento de Modelos Físicos
    (2024-01) Cibrián Sánchez, Eduardo; Álvarez Rodríguez, José María; UC3M. Departamento de Informática; Álvarez Rodríguez, José María
The development of systems involves the application of diverse techniques and tools throughout the entire development life cycle, giving rise to multiple system artifacts, such as physical models, which are encoded in diverse formats and need to be accessed through non-standard protocols and formats. The main challenge, however, lies in the efficient reuse of these models, since the diversity of formats and encodings demands a more sophisticated approach in order to perform effective searches. This doctoral thesis addresses the need to represent and reuse physical models in Systems Engineering through natural language processing and machine learning techniques, specifically vector representations of words (word embeddings). To that end, a set of methods is defined as a strategy for the knowledge management of physical models, integrating them into the Systems Engineering context in order to improve the development process of complex systems. Five methods are proposed for the semantic reuse of physical models, using techniques based on word embeddings that enable content-based search by generating textual descriptions of the models, instead of relying solely on keywords or metadata. These methods are validated through experiments that demonstrate their effectiveness in retrieving physical models. An implementation of the methods is carried out that involves pre-processing a text corpus of MATLAB Simulink models and training on it to generate vector representations of words. Validation is performed on a set of MATLAB Simulink physical models, transforming descriptions into vectors and applying the retrieval algorithm proposed in the methods. In addition, the results obtained with this methodology are compared with other search approaches based on ontologies and on metadata search. The research in this thesis reveals that the similarity algorithm employed for the reuse of physical models is effective even for query terms that are not explicitly present in the models, supporting the viability of this approach. It is concluded that the use of word embeddings can solve semantic search problems in Systems Engineering.
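Purely as an illustration of the retrieval idea described above (not the thesis's five methods), the sketch below embeds model descriptions by averaging word vectors and ranks them against a query by cosine similarity; the toy vectors, file names and descriptions are assumptions.

```python
import numpy as np

# Hypothetical pre-trained word vectors (in practice trained on a corpus of
# Simulink model descriptions).
embeddings = {
    "motor":   np.array([0.9, 0.1, 0.0]),
    "dc":      np.array([0.8, 0.2, 0.1]),
    "voltage": np.array([0.7, 0.0, 0.3]),
    "spring":  np.array([0.0, 0.9, 0.1]),
    "damper":  np.array([0.1, 0.8, 0.2]),
}

def embed(text):
    """Average the vectors of known words; zero vector if none are known."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Hypothetical model descriptions to index.
models = {
    "dc_motor.slx": "dc motor driven by a voltage source",
    "mass_spring.slx": "mass spring damper mechanical system",
}
index = {name: embed(desc) for name, desc in models.items()}

query = "electric motor model"
ranked = sorted(index, key=lambda m: cosine(embed(query), index[m]), reverse=True)
print(ranked)   # the DC motor model should rank first, even without exact keyword matches
```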
  • Item
    Production and characterization of advanced nuclear materials
    (2024-02) Oñoro Salaices, Moisés; Auger, María A.; UC3M. Departamento de Física; Ministerio de Economía y Competitividad (España); Comunidad de Madrid; Agencia Estatal de Investigación (España); Auger, María A.
Materials science research has fostered great developments in areas such as aeronautics, aerospace, informatics and digitalization, new energy sources, the construction of buildings and infrastructures, etc. Among them, energy production is a common necessity for the progress of all these activities. The discovery and implementation of new energy sources, or the efficiency improvement of the already existing ones, has always been key to the development of society. Nowadays, one of the most important efforts at the international level is focused on energy production in nuclear fusion reactors. The European Union, Japan, the United States and China share the objective of building a nuclear fusion power plant. The successful implementation of this objective will demonstrate the viability of this energy source. The efforts of the scientific community during the last decades are being materialized in the construction of the first experimental fusion reactor, ITER, in Cadarache (south of France). The main objective of this thesis is to extend the knowledge in the materials science area with specific application to the construction of this type of infrastructure. From the point of view of nuclear fusion energy, the development of new materials is completely necessary. We face the construction of a facility that will have to withstand very high operation temperatures, together with highly energetic radiation due to the presence of 14 MeV neutrons. This will directly impact the endurance of the facility components. In particular, this thesis presents the production and characterization of advanced nuclear materials, e.g. reduced-activation ferritic steels strengthened with an oxide dispersion, or ODS RAF steels. ODS RAF steels are leading structural material candidates for future fusion energy reactors. A thermal aging treatment is presented to study the impact of the expected operational temperatures in the fusion reactor (873 K) on an ODS RAF steel. The material under research is characterized by a nominal chemical composition of Fe-14Cr-2W-0.4Ti-0.3Y2O3 (wt. %). The ODS steel microstructure and secondary phases have been characterized by scanning electron microscopy with energy dispersive X-ray spectrometry and electron backscatter diffraction detectors (SEM + XEDS & EBSD), transmission electron microscopy (TEM), X-ray diffraction analysis (XRD), atom probe tomography (APT), small angle neutron scattering (SANS) and X-ray absorption spectroscopy (XAS). The mechanical characterization has been obtained by means of Vickers microhardness measurements and tensile and Charpy impact tests. This research has proven the high thermal stability of the ODS steel under study. The tensile behavior is stable after thermal aging, with a loss of ductility that has slightly impacted the ductile-to-brittle transition temperature (DBTT). This behavior has been associated with the redistribution of W-Cr-rich secondary phases along grain boundaries observed after the thermal aging treatment. M23C6 phases were identified prior to and after thermal aging, while Laves phase (Fe,Cr)2W formation has additionally been reported afterwards. The homogeneous dispersion of Y-rich oxide nanoparticles has been characterized and proved to be stable after thermal aging. A new ODS RAF steel was produced based on the knowledge acquired in researching the previous material. The chemical composition was slightly modified and different processing routes were designed to explore the potential benefits of new production techniques.
The new nominal chemical composition was Fe-14Cr-2W-0.3Zr-0.24Y (wt. %). The design of the new material and the production methods are presented, together with the most significant results obtained during its production and initial characterization. This study opens new lines of research to be completed with the microstructural and mechanical characterization of this new ODS steel. The main conclusions of these two research lines are addressed and summarized, and future work is proposed. Additional tests, simulating different expected operational environments, were performed to further explore the behavior of the ODS steel under conditions relevant to fusion reactors. Self-ion irradiation campaigns were designed and carried out to simulate the effects of 14 MeV neutron irradiation on the reference ODS steel without thermal aging. Furthermore, to assess the chemical compatibility of ODS steels with functional materials in fusion reactors, corrosion tests were performed in a stagnant PbLi bath at 873 K for periods of up to 8 weeks. The ODS steel exhibited a stable microstructure under self-ion Fe+ implantation; further studies are currently ongoing to obtain a global characterization of the PbLi corrosion tests.
  • Publication
    La codificación penal militar en España. Principio de especialidad y principio de complementariedad en la ley penal militar española
    (2023-09) Fernández Ferrer, Alejandro Javier; Álvarez García, Francisco Javier; Universidad Carlos III de Madrid.; Álvarez García, Francisco Javier
  • Item
    Innovative 3D-Printed Designs for Millimeter-Wave Lenses and Meta-Lens Antennas
    (2024-02) Poyanco Acevedo, Jose Manuel; Rajo Iglesias, Eva; Pizarro Torres, Francisco Guillermo; UC3M. Departamento de Teoría de la Señal y Comunicaciones; Rajo Iglesias, Eva
    Over time, wireless communication networks have gained increasing importance, both for personal use and industrial and organizational applications. In fact, internet users are no longer just individuals, as the number of objects connected to the internet, known as the "Internet of Things" (IoT), has grown over time. The continuous increase in the number of users and the volume of data transmitted and received has made the data transmission provided by wireless communication systems insufficient. Consequently, these systems have been updated over time, progressing from one generation to the next. Currently, the fifth generation of wireless communication is in the process of being implemented for its initial applications. The fact that the amount of data transferred for each user is increasing has led communication systems to move to higher frequencies, as these frequencies allow for a greater bandwidth to be transferred. However, this increase in frequency is accompanied by an inevitable rise in free-space propagation losses, which are countered by increasing the gain of the antennas used. This has resulted in a significant increase in the research on high-gain antennas. As time has passed, not only have communication systems evolved, but other disciplines have evolved in tandem. An example of this is 3D-printing, which allows for the creation of complex and customized objects at a low cost, democratizing manufacturing for both individuals and small companies. Meanwhile, materials engineering has enabled the creation of various materials for use in 3D-printing, including different types of polymers, ceramics, and even metals. These materials are characterized not only by their elasticity, mechanical strength, or thermal capacity but also by their electromagnetic properties. Initially, antennas were designed using 3D-printing as a manufacturing method, using materials that, although not designed for electromagnetic applications, could still be used for this purpose. However, they presented a fundamental problem: high losses. This triggered various companies to design materials specifically for electromagnetic applications, with a specific permittivity value and, most importantly, low losses, even at millimeter-wave frequencies. This allowed researchers to consider this manufacturing method as part of the possibilities. All of this together has led to a large number of antennas being proposed for 5G applications, with prototypes manufactured using 3D-printing. Among the diverse range of proposed antennas that require only plastic materials, we can find dielectric resonators, leaky-wave antennas, dielectric polarizers for antennas, and lens antennas. The latter have gained popularity as communication systems have shifted to higher frequencies because, at these frequencies, the overall size of the lens decreases, making them suitable candidates. One advantage they offer is the simplicity in the design of their feeding network, which becomes increasingly valuable as the frequency increases and the size of the components decreases. For this reason, the motivation for this thesis lies in the investigation of lens antenna proposals designed for high-frequency applications, with the aim of serving as candidates for new wireless technologies, including the current 5G, the upcoming 6G, or any other future technology that may use millimeter-wave frequencies. 
3D-printing will be used as the main manufacturing method throughout this thesis, allowing for the production of antennas with complex shapes at a low cost and with low losses. The thesis can be divided into two parts: lens antennas and meta-lenses. In the first category, four antenna topologies have been investigated, all of them dielectric and designed to be manufactured using 3D-printing. Among them, a new lens antenna stands out, with the ability to convert linear polarization to circular without adding any extra devices. The novelty of the other antennas shown here lies in their specific design for 3D-printing and high-frequency applications, pushing the limits of 3D-printing for successful prototyping. The second part of this thesis focuses on the study of metasurfaces for lens antenna applications, where two periodic structures have been investigated, one dielectric and one metallic, both embedded into a parallel-plate structure. With the first analyzed unit cell, two meta-lens antennas have been designed: a Luneburg lens and a Mikaelian lens. The second analyzed periodic structure consists of a new metallic unit cell that stands out for its high level of isotropy, making it an excellent candidate for lens antenna design. This has motivated the design of a Mikaelian lens antenna to demonstrate how to take advantage of the properties of this unit cell.
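As background for the lens designs mentioned above, the classical Luneburg profile is n(r) = sqrt(2 − (r/R)²); the short sketch below evaluates the required relative permittivity across the lens radius and a crude infill estimate for printing it as a variable-density dielectric. The simple volume-average mixing rule and the material permittivity are assumptions made purely for illustration, not the design procedure of the thesis.

```python
import numpy as np

def luneburg_permittivity(r, R):
    """Relative permittivity eps_r = n^2 of an ideal Luneburg lens at radius r."""
    return 2.0 - (r / R) ** 2

def filling_factor(eps_target, eps_bulk):
    """Crude volume-average estimate of the infill fraction needed to emulate
    eps_target with a bulk filament of permittivity eps_bulk and air:
    eps_eff ~ f * eps_bulk + (1 - f) * 1."""
    return (eps_target - 1.0) / (eps_bulk - 1.0)

R = 30.0          # lens radius in mm (arbitrary example)
eps_bulk = 2.8    # assumed permittivity of the printing material
for r in np.linspace(0.0, R, 5):
    eps = luneburg_permittivity(r, R)
    f = filling_factor(eps, eps_bulk)
    print(f"r = {r:5.1f} mm  eps_r = {eps:4.2f}  infill ≈ {100*f:4.0f}%")
# eps_r runs from 2.0 at the centre down to 1.0 (0% infill) at the rim.
```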
  • Publication
    Superimposed Training in Realistic Systems and Multi-Antenna Transmitters
    (2023-11) Piqué Muntané, Ignasi; Fernández-Getino García, María Julia; UC3M. Departamento de Teoría de la Señal y Comunicaciones; Fernández-Getino García, María Julia
During the last decades, the way people exchange information has drastically evolved. The fifth generation (5G) of wireless communications, which is designed under the New Radio (NR) standard, aims to improve the capabilities of previous systems by several orders of magnitude. These enhancements are necessary to achieve some challenging goals, such as the provision of connectivity to a massive number of users, proper communication with users who travel in ultra-fast vehicles, or the implementation of cutting-edge technologies that require extremely low-latency and very reliable transmission. For this reason, to differentiate each of these applications, the goals of 5G-NR have been categorized in three main usage scenarios: Enhanced Mobile BroadBand (eMBB), Ultra Reliable Low-Latency Communications (URLLC) and Massive Machine Type Communications (mMTC). To overcome all these challenges, some techniques, which largely increase the throughput of the system, need to be considered. Most practical proposals have been designed to exploit the resources of wireless channels, where special attention has been drawn to the frequency and spatial domains. This is why some of the proposed techniques are focused on: a better utilization of the spectrum, e.g. with the Orthogonal Frequency Division Multiplexing (OFDM) waveform; a higher aggregation of resources in the frequency domain, e.g. by using larger bandwidths as in Millimeter Wave (mmWave) regimes; and a larger multiplexing of signals in the spatial domain, e.g. by employing many antennas in Multiple Input Multiple Output (MIMO) systems. Since the number of links, both in the frequency and spatial domains, will be largely increased, the 5G-NR standard has adopted the Time Division Duplex (TDD) transmission protocol. In any case, for proper operation, an estimation of the channel is mandatory and represents an important challenge. At the moment, a commonly employed technique for this purpose is Pilot Symbol Assisted Modulation (PSAM), where some symbols, referred to as pilots, are known by both the transmitter and the receiver. Even though the estimates of PSAM can be computed with high accuracy, the throughput of this technique can be severely worsened, since the resources used as pilot symbols impede any type of data transmission. As an alternative, the Superimposed Training (ST) scheme transmits the pilot and data symbols jointly. This way, despite the fact that the superimposed data will degrade the estimation of the channel, some data symbols will still be transmitted. This Thesis studies ST from two points of view. First, it analyzes the performance of the Mean Squared Error (MSE) of channel estimation when the channel coefficients evolve realistically. This analysis is carried out for an OFDM scheme from the 5G-NR standard that uses the Least Squares (LS) and Minimum MSE (MMSE) estimator techniques. After solving this problem, ST is studied in terms of achievable rate expressions, which are computed for a system model where several users employ multiple correlated antennas to transmit. In the first part of this dissertation, it is shown that, to properly design a feasible ST scheme, it is necessary to employ models whose channel coefficients evolve realistically. This way, an optimization of the MSE can be derived in terms of the optimum pilot length, both in the time and frequency domains, and results show that the estimation can be significantly improved. Later on, these improvements are corroborated with other metrics such as the Symbol Error Rate (SER).
Considering that a transcendental equation must be solved to obtain these optimized values, a further analysis is performed where the optimum pilot length is computed following a multiple linear regression model using the scenario parameters as input values. In addition, a classification method to select between the LS and MMSE estimators is proposed, taking into account several assumptions on performance. Finally, in the second part of the Thesis, the ST scheme is studied when several users employ many antennas to transmit. In this part, it is shown that in highly loaded systems, where many streams need to be served, the performance of PSAM degrades significantly. On the other hand, the ST scheme achieves a very promising throughput, especially if the contribution from the superimposed pilots is retrieved when decoding the data symbols. All these analytical results are derived in terms of closed-form ergodic achievable rate expressions. Finally, numerical simulations validate these results and provide deep insight into the feasibility of ST in current and future deployments.
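As a minimal numerical illustration of the superimposed-training idea described above (and nothing more than that: the thesis works with 5G-NR OFDM grids, realistic channel evolution and MMSE estimation), the toy Monte Carlo below estimates a flat-fading channel with an LS correlator while data symbols are superimposed on the pilot; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512                    # symbols over which the channel is assumed constant
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)      # flat-fading channel tap
snr_db, rho = 15.0, 0.2    # rho: fraction of the transmit power spent on the pilot

p = np.sqrt(rho)                                          # known superimposed pilot
d = np.sqrt(1 - rho) * (2 * rng.integers(0, 2, N) - 1)    # unit-power BPSK data
noise_std = 10 ** (-snr_db / 20)
w = noise_std * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

y = h * (d + p) + w        # every resource carries data with the pilot on top

# LS estimate: correlate with the known pilot; the zero-mean data acts as extra
# noise that averages out over the N symbols.
h_hat = np.sum(y * np.conj(p)) / (N * abs(p) ** 2)
print("true h =", np.round(h, 3), " ST-LS estimate =", np.round(h_hat, 3))
```

The trade-off visible here is the one the thesis optimizes: a larger pilot power fraction or averaging window improves the estimate but reduces the power and resources left for data.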
  • Item
    Subtitulado simplificado para personas con discapacidad
    (2024-03) Masiello Ruiz, José Manuel; Ruiz Mezcua, María Belén; Martínez Fernández, Paloma; Universidad Carlos III de Madrid.; Ruiz Mezcua, María Belén
    Chapter 1, Introduction, explains the need for and motivation behind this research and details the objectives and initial hypotheses. Chapter 2, State of the Art, reviews the scientific literature in the fields of interest of this work. Subtitling techniques and their impact on the quality of the result are presented. Subtitle synchronization technologies (automatic, manual or mixed) are analysed, taking into account the type of audiovisual production. Likewise, the main neural network architectures proposed to address the problem of automatic text punctuation are analysed, namely the Recurrent Neural Network (RNN) architecture and the Bidirectional Encoder Representations from Transformers (BERT) architecture. Chapter 3, Proposed Solution, presents the general scheme of the solution proposed in this work, its components in detail and the relationships between them. Chapter 4, Synchronization System, describes the solution adopted to address the synchronization problem, the details of the algorithms used and the development environment for the implementation of the synchronization component of the solution. The experimental scenarios and their results are defined, followed by a discussion and a set of conclusions. Chapter 5, Punctuation System, describes the solution adopted to address the problem of automatic punctuation, the details of the neural network architecture adopted and the data sets used for its training and validation. The experimental scenarios and their results are defined, followed by a discussion and a set of conclusions. Chapter 6, Conclusions and Future Work, presents a series of conclusions drawn from the established hypotheses and objectives, as well as from the experimental results obtained, and outlines future work aimed at automating the improvement of subtitle quality. Finally, the Bibliography chapter lists the references cited in this document.
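    A minimal sketch (not the system developed in this thesis) of how automatic punctuation can be framed as token classification on top of a BERT-style encoder. The model name, label set and example sentence are illustrative assumptions, and the classification head is randomly initialized here, so the printed labels only demonstrate the data flow, not trained behaviour.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Punctuation restoration as token classification: predict, for each word of an
# unpunctuated transcript, which mark (if any) should follow it.
labels = ["NONE", "COMMA", "PERIOD", "QUESTION"]   # assumed label set
model_name = "bert-base-multilingual-cased"        # assumed encoder choice

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))
model.eval()

words = "hola qué tal estás hoy".split()           # ASR-style unpunctuated text
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                # (1, n_subword_tokens, n_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# Map subword positions back to words and keep the first subword's prediction.
word_ids = inputs.word_ids()
seen = set()
for pos, wid in enumerate(word_ids):
    if wid is not None and wid not in seen:
        seen.add(wid)
        print(f"{words[wid]:>8s} -> {labels[pred_ids[pos]]}")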
  • Item
    Data Analytics and Applied Machine Learning: Tool Health Monitoring in Automatic Drilling Operations in Aeronautical Structural Components
    (2024-02) Domínguez Monferrer, Carlos; Cantero Guisández, José Luis; Universidad Carlos III de Madrid.; Cantero Guisández, José Luis
    In aircraft manufacturing, the assembly process depends on creating multiple holes to accommodate the bolts and rivets that securely interlock structural components within the aircraft fuselage. The increasing integration of sensor systems in this domain has significantly enhanced data generation during hole-making. This development presents an opportunity to refine machining operations through real-time Tool Health Monitoring Systems. The focus of this doctoral thesis is on the utilization of data generated during automatic drilling operations in an aeronautical production system to reduce non-productive times and consumable costs, thereby fostering a highly efficient and adaptable machining ecosystem. The research centers on developing a Tool Failure Detection System and a Tool Wear Monitoring System. Both systems are based on a systematic methodology for collecting, processing and analyzing spindle power consumption data and other machining-related information. This development process utilizes cutting-edge Data Analytics and Machine Learning techniques to enhance the precision and effectiveness of the systems. The Tool Failure Detection System, based on Multiresolution Analysis, has demonstrated exceptional adaptability and high accuracy rates in diverse breakage scenarios involving various cutting tools. On the other hand, the exploration of the Tool Wear Monitoring System, which employs a range of Machine Learning algorithms from linear models to k-Nearest Neighbors, Decision Trees, and Ensemble models, has highlighted the challenge of surpassing human-level performance. This challenge is attributed to data quality and quantity limitations. Future research will focus on overcoming current limitations and expanding the capabilities of these systems, further enhancing their practical application in other machining environments.
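    As a purely hypothetical illustration of the kind of model comparison described above (this is not the thesis pipeline or its data), the following snippet benchmarks a linear model, k-Nearest Neighbors, a Decision Tree and an ensemble regressor on a synthetic tool-wear task; the feature names and the synthetic data generator are assumptions made for the example.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_holes = 2000

# Synthetic per-hole features: mean spindle power, its peak, and holes drilled so far.
mean_power = rng.normal(1.0, 0.1, n_holes)
peak_power = mean_power + rng.gamma(2.0, 0.05, n_holes)
holes_drilled = rng.integers(1, 300, n_holes)

# Assumed ground truth: flank wear grows with usage and with power demand, plus noise.
wear = 0.002 * holes_drilled + 0.1 * (mean_power - 1.0) + rng.normal(0, 0.02, n_holes)

X = np.column_stack([mean_power, peak_power, holes_drilled])
X_tr, X_te, y_tr, y_te = train_test_split(X, wear, test_size=0.25, random_state=0)

models = {
    "Linear": LinearRegression(),
    "kNN": KNeighborsRegressor(n_neighbors=7),
    "Decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:>13s}: MAE = {mae:.4f} mm")

    In practice, as the abstract notes, the quality and quantity of real production data, rather than the choice of algorithm family, tends to limit how close such models can get to human-level performance.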
  • Item
    Metodología de análisis experimental y numérico de protecciones de cabeza para uso deportivo
    (2024-03) Mantecón Padín, Ramiro; Miguélez Garrido, María Henar; Díaz Álvarez, José; Universidad Carlos III de Madrid.; Díaz Álvarez, José
    Biomechanics is the science dedicated to analysing the mechanical behaviour of biological systems, encompassing both motion (kinematics) and the forces and stresses endured (dynamics), two interdependent branches of biomechanical study. Analysis of the human body is hindered by the impossibility of characterising this mechanical response through in vivo testing, along with the limited representativity of ex vivo biological tissues. Impacts sustained by the head are of particular interest due to the consequences arising from mechanisms of brain injury, which span a wide range of potential effects from mild to severe. Analysing the biomechanical damage caused by impulsive loads on the head requires elements that simulate the mechanical behaviour of the biological component under study. This thesis aims to develop a methodology for the analysis of head protective equipment based on the use of head surrogates. Manufacturing surrogates through additive technologies requires studying and analysing the fabrication process, given the peculiarities and variability of results associated with these technologies. Thus, the thermomechanical effect of the 3D printing process with the polymers selected for obtaining the surrogates was explored. The results indicate a need to control the method and the selected parameters. This variability stems from the thermal evolution and from the interactions of the manufactured component with the elements of the printer, which makes it particularly relevant for the design of large components. This methodology was employed to design and manufacture skull surrogates with two alternative geometries, validating the material's small-scale mechanical behaviour and studying the effects of the full-size manufacturing strategy. The designed surrogates underwent tests representative of the evaluation methodology for sports head protection in cycling, conducting impact tests with a helmet demonstrator and comparing the results against a commercial surrogate. These surrogates were complemented with finite element models, demonstrating the great potential of these surrogates in the biomechanical study of head impacts and enabling the assessment of damage mechanisms not measured in experimental tests. The surrogates also reduce the cost of testing campaigns and expand the available head protection evaluation methods, paving the way for future work exploring more complex and representative load scenarios observed in the sports domain.
  • Item
    Medical Grade Bioabsorbable Composites for the 3D Printing of Multi-Material Orthopaedic Devices
    (2023-12) Thompson, Cillian; Llorca Martínez, Javier; González Martínez, Carlos Daniel; Universidad Carlos III de Madrid. Departamento de Ciencia e Ingeniería de Materiales e Ingeniería Química; Agencia Estatal de Investigación (España); Campos Gómez, Mónica
    Biodegradable polymer composites fabricated by 3D printing may overcome many of the challenges associated with permanent non-degradable metals (Ti, stainless steel, Co-Cr) or with biodegradable polymers (PLA, PLGA, and PCL) and biodegradable metals (Mg, Zn, Fe) by achieving a final material with synergistic properties for orthopaedic implant applications. This thesis deals with the development of a filament production process for combining biodegradable polymers and metals through a two-step extrusion process. PLA was combined with Mg or Zn particles and extruded into filaments with constant diameter and a homogeneous dispersion of particles, suitable for 3D printing. This strategy was then used to 3D print biocompatible composites comprised of medical grade PLDL with a 4% volume fraction of Mg or Zn metallic particles. The addition of Mg and Zn to the PLDL material was able to increase the degradation rate of PLDL and reduce the acidity of the PBS environment at 37 °C over a 1-year period. The addition of the particles slightly increased the stiffness, but reduced the strength of the PLDL due to the stress concentrations around the metallic particles. The composite materials exhibited excellent biocompatibility properties in terms of the material-cell interactions and cell proliferation. This study demonstrates that the addition of metallic particles to the polymer matrix can tailor the degradation properties but does not improve the mechanical properties. To overcome this latter limitation, a customised FFF 3D printer was developed to incorporate continuous metallic wires into a polymer matrix. The 3D printer comprised 4 individual print heads capable of printing with 4 different materials, one of which was a continuous metallic wire, while the others could print thermoplastic polymers/composites. Unidirectionally reinforced PLA/Al wire composites were manufactured with 15% and 25% volume fractions of Al wire. The composite reinforced with a 25% volume fraction of Al wires showed a six-fold increase in elastic modulus, while the strength improved by 63% with respect to the polymeric matrix. Furthermore, medical grade PLDL composite coupons unidirectionally reinforced with a 15% volume fraction of Mg wires were manufactured by 3D printing. Mg wires with and without a surface treatment by plasma electrolytic oxidation were used. The mechanical properties showed a three-fold increase in the elastic modulus and up to an 80% increase in tensile strength compared to the matrix in air and at ambient temperature. Excellent interface strength was observed in the composites containing the Mg wire modified by plasma electrolytic oxidation. The oxide layer on the Mg wires reduced the degradation rate of the Mg wires and suppressed pitting corrosion. The mechanical properties of the PLDL matrix in the composite decreased dramatically when tested in water at 37 °C, very likely because of the increased chain mobility induced by the disruption of the intermolecular interactions due to the synergistic contribution of water and temperature. Finally, a multi-material composite was fabricated using three print heads on the customised printer. The coupon consisted of various layers of PLA, PLA reinforced with Mg particles, and PLA reinforced with Mg wires.
This proof of concept demonstrates the possibility of creating 3D-printed multilayer scaffolds in which the properties of each layer can be tailored to meet specific requirements in terms of mechanical properties, degradation rate, and cytocompatibility, opening the path to manufacturing 3D-printed multi-material biomedical devices.
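    As a rough sanity check on the stiffness gains reported above (not part of the thesis abstract), the classical rule of mixtures for the longitudinal modulus of a unidirectional composite, with assumed nominal moduli of about 69 GPa for Al wire and 3.5 GPa for the PLA matrix, gives:

\[
  E_c = V_f E_f + (1 - V_f) E_m \approx 0.25 \times 69 + 0.75 \times 3.5 \approx 19.9\ \mathrm{GPa},
\]

    which is roughly 5.7 times the assumed matrix modulus, of the same order as the reported six-fold increase for the composite with a 25% volume fraction of Al wires.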
  • Publication
    Design and processing of particle strengthened High Entropy Alloys by Powder Metallurgy
    (2023-09) Reverte Palomino, Eduardo; Cornide Arce, Juan; Universidad Carlos III de Madrid.; Comunidad de Madrid; Agencia Estatal de Investigación (España); Ministerio de Ciencia e Innovación (España); Gordo Odériz, Elena
    High entropy alloys (HEAs) are a family of materials that have recently attracted attention in Materials Science. These alloys depart from traditional metallurgy, in which alloys have one element as the main component. They are also known as multicomponent alloys, in which up to 6 elements act as the main constituents of the material. Contrary to common alloys, the high entropy of mixing favours the formation of simple crystalline structures, such as BCC, FCC or, in fewer cases, HCP. In terms of performance, they have shown good mechanical properties depending on the designed composition. Other properties include wear resistance, hardness and good oxidation resistance at high temperatures. The novelty and largely unexplored properties of these alloys make them the subject of study for many industrial applications, all the more so if we consider the increasingly strict requirements pushing the development and optimisation of alloys for new advanced applications. One example is future technologies involving nuclear fission or fusion research. In this work, the production of a novel HEA with a body-centred cubic (BCC) structure was studied, together with particle reinforcements intended to strengthen the material and potentially improve its high-temperature properties. The processing of this alloy is based on powder metallurgy (PM) techniques instead of standard casting techniques. The thesis follows all the stages of alloy production, from the design of the alloy, evaluating several thermodynamic parameters that favour obtaining a BCC structure, to the production of the powders and their consolidation. The design of the alloy relied on current know-how on prediction rules for HEAs, which mainly involve thermodynamic parameters such as the entropy or enthalpy of mixing. Among them, the electron concentration factor of the compositions was crucial to the design of the alloy. Finally, the composition was set to Al1.8CoCrCu0.5FeNi. Three different routes were used for the production of powder: (1) elemental powder blends, (2) gas atomisation and (3) high-energy milling. The powder blend route was discarded in favour of the atomised powder for later hardening by oxides. The crystalline structure of the atomised powder was a main BCC phase, with minor segregation of copper. The quality of the powders was assessed with various characterisation techniques such as density measurements, particle size analysis, differential thermal analysis (DTA) and X-ray diffraction (XRD). In this context, the predicted structure of the alloy proved accurate with respect to the initial design. The next challenge of the PhD was the sintering stage of the powders, studying techniques such as Spark Plasma Sintering (SPS), conventional Press and Sinter (P&S) and Electrical Resistance Sintering (ERS). The differences in heating rates and sintering times resulted in multiple phase morphologies and arrangements. These samples were studied via scanning electron microscopy (SEM), X-ray diffraction (XRD), differential thermal analysis (DTA) and transmission electron microscopy (TEM) to observe the influence of the processing route. For the introduction of the oxides, the milling process was optimised, evaluating the morphology, microstrain and crystallite size of the lattice structure during the process. Afterwards, the powder was consolidated following the results of the standard HEA samples and analysed using similar techniques. The strengthened material showed an increase in hardness values compared to the base HEA.
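    For context (these expressions are standard in the HEA literature and are not results of the thesis), the entropy of mixing and the valence electron concentration used in such prediction rules for an n-component alloy can be written as:

\[
  \Delta S_{\mathrm{mix}} = -R \sum_{i=1}^{n} c_i \ln c_i, \qquad
  \mathrm{VEC} = \sum_{i=1}^{n} c_i \, (\mathrm{VEC})_i ,
\]

    where c_i is the atomic fraction of element i and (VEC)_i its valence electron concentration; a widely cited empirical criterion associates VEC below about 6.87 with BCC solid solutions and VEC of about 8 or above with FCC.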
  • Publication
    Multiple localization and fracture in metallic rings and plates subjected to dynamic expansion
    (2023-12) Murlidhar, Anil Kumar; Rodríguez Martínez, José Antonio; UC3M. Departamento de Mecánica de Medios Continuos y Teoría de Estructuras; European Commission; Rodríguez Martínez, José Antonio
    Ductile materials are commonly used in high-strain-rate applications involving impact or blast loads due to their notable capacity to absorb energy and undergo plastic deformation before fracture. Over the last two decades, studies on dynamic strain rates have evolved dramatically, leading to a better knowledge of material behavior under high-speed loading situations and generating advances in a variety of industries. Imperfections in ductile metals, such as cracks, inclusions, and voids, can significantly impact the material's mechanical properties and structural integrity, thereby affecting its suitability for various industrial applications. Hence, further research is necessary to understand the mechanical behavior of ductile metals with imperfections. This doctoral thesis investigates the effect of porosity, anisotropy, and tension-compression asymmetry on the mechanical response of metallic materials under dynamic loading conditions. In the first part of the thesis, we used linear stability analysis and unit-cell finite element calculations to investigate the onset of necking instabilities in porous ductile plates under biaxial loading. In the next part of the work, we used three techniques (linear stability analysis, a nonlinear two-zone model, and unit-cell finite element calculations) to examine the necking formability of anisotropic and tension-compression asymmetric metallic sheets subjected to in-plane loading paths spanning plane strain tension to near equal-biaxial tension. The last part of the study focused on examining the fragmentation process of 3D-printed AlSi10Mg porous ring specimens. This was achieved by implementing two experimental ring expansion test setups and subjecting the aluminum alloy to electromagnetic and mechanical loadings. The objective was to gain insights into the behavior of these alloys when exposed to high strain rates.
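    As background for the necking analyses mentioned above (a classical quasi-static result, not one of the dynamic analyses of the thesis), the Considère criterion places the onset of diffuse necking in uniaxial tension at the point where strain hardening can no longer compensate for the loss of cross-section:

\[
  \frac{d\sigma}{d\varepsilon} = \sigma
  \quad\Rightarrow\quad
  \varepsilon_{\mathrm{neck}} = n \quad \text{for a power-law hardening material } \sigma = K\varepsilon^{n}.
\]

    The linear stability and two-zone analyses used in the thesis extend this kind of reasoning to dynamic, multiaxial loading, where inertia and the loading path also influence when and where localization develops.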
  • Publication
    Non-myocytes as electrophysiological contributors to cardiac conduction and atrial fibrillation maintenance
    (2024-01-15) Simón Chica, Ana; Kohl, Peter; Figueiras Rama, David; UC3M. Departamento de Bioingeniería; Vaquero López, Juan José
    Cardiac research has traditionally focused on the electrical and mechanical activity of cardiomyocytes (CM). However, the heterocellular nature of the heart has received more attention in recent years. In addition to CM as the key cell type responsible for the electromechanical activity of the heart, non-myocytes (NM) such as endothelial cells, fibroblasts, or immune cells are emerging as key players, influencing and modulating CM activity. This thesis aims at providing new insights into direct and indirect electrophysiological interactions between NM and CM, and their implications in cardiac conduction and arrhythmias. The research approach followed, ranging from basic electrophysiological characterization to computational modeling and translational animal models, provides a multifaceted perspective on the relevance of NM in cardiac electrophysiology under both homeostatic and patho-physiological conditions. In the first part of this thesis, we characterize passive and active electrophysiological properties of murine cardiac resident macrophages, and model their potential electrophysiological relevance for CM. We combined classic electrophysiological approaches with 3D fluorescence imaging, RNA sequencing, pharmacological interventions, and computer simulations. Our results provide a novel electrophysiological characterization of cardiac resident macrophages, and a computational model to quantitatively explore their relevance in the heterocellular heart. In the second part of the thesis, we focus on distinguishing electrophysiological effects of NM in patho-physiological remodelling, when NM change their phenotype, proliferate, and/or invade from external sources. More specifically, we study the role of NM in atrial fibrillation (AF) maintenance. The clinical relevance is highlighted by the fact that AF is the most frequent sustained cardiac arrhythmia, and nowadays there is no definitive treatment. Part of the long-term inefficacy of pharmacological and ablation therapies can be explained by atrial functional and structural changes associated with AF remodelling. In fact, the role of NM in modulating atrial remodelling and its implications for long-term AF maintenance are poorly understood. Here, we generated a translational porcine model of long-term self-sustained persistent AF to identify the mechanisms underlying AF progression and maintenance. This pig model enabled us to use clinically applicable tools for investigation tasks, therefore enhancing the clinical translation of the results. We analysed electrical, structural and inflammatory changes during AF progression from early stages of atrial remodelling to long-term persistent stages of advanced atrial cardiomyopathy. More specifically, we studied two clinically relevant porcine models of AF: (i) a model resembling lone AF progression without underlying structural heart disease (PsAF), and (ii) a second model with infarct-related atrial myopathy (MI-PsAF) to increase the translational impact in clinical scenarios with other common comorbidities like ischaemic cardiomyopathy. The study mainly focused on two NM populations, fibroblasts and immune cells, and how they contribute to AF maintenance. We specifically performed single-cell RNA sequencing (scRNA-seq) to identify these two cell phenotypes and their changes in animals with AF compared to sham-operated controls.
Animals underwent advanced electroanatomical mapping in vivo to identify specific atrial regions associated with AF maintenance (i.e., driver regions). Further ex vivo studies after electroanatomical mapping showed differential molecular and histopathological properties between driver and non-driver regions during AF. Finally, we focused the analyses on regional differences in fibroblast and immune cell populations that may explain the hierarchical organization of atrial regions, which further supports the notion that AF maintenance relies on a few atrial regions capable of sustaining activation rates higher than those of the surrounding tissue.