Rights:
Atribución-NoComercial-SinDerivadas 3.0 España
Abstract:
Cloud service provision systems are under heavy demand today, so in order to optimize resources and meet current user demand, lightweight virtualization with containers has become available: on one physical machine, containers virtualize another entire machine with only what it needs, so that multiple servers can be created. In addition, users expect services to be available at all times and to keep improving, which entails periodic updates and requires services to remain available regardless of how many people try to access them at once; that is, the request load must be managed by distributing it among several servers of the same service, in decentralized systems.
The project therefore focuses on the study of Kubernetes as a container orchestration technology, which offers: load balancing to manage requests, application scaling to handle greater demand, application updates with version control and without interrupting the service, and self-repair of system elements so that the system always keeps working. To analyze the Kubernetes features that cover the needs motivating the project, a Kubernetes scenario was created in a public cloud, building a cluster on the Google Cloud Platform (GCP), and two servers, a web server and a time server, were deployed and configured to test Kubernetes.
The conclusion is satisfactory: Kubernetes covers the service provision needs that motivate this project, with one small exception, since a service interruption was detected when updating applications, because a short delay is configured by default for this action.
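As a rough illustration of how the features described above can be expressed, the following is a minimal sketch using the official Kubernetes Python client; the deployment name web-server, the nginx image, the probe path and the rolling-update parameters are illustrative assumptions, not the exact configuration used in the project.

from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (for example, one created for a GCP cluster).
config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Replicated web Deployment with a rolling-update strategy and a readiness probe,
# so that old replicas are only removed once new ones are already able to serve traffic.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-server"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web-server"}),
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(max_surge=1, max_unavailable=0),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-server"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # illustrative image and version
                    ports=[client.V1ContainerPort(container_port=80)],
                    readiness_probe=client.V1Probe(
                        http_get=client.V1HTTPGetAction(path="/", port=80),
                        period_seconds=5,
                    ),
                ),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Service that load-balances incoming requests across the web replicas.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-server"),
    spec=client.V1ServiceSpec(
        selector={"app": "web-server"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",
    ),
)
core.create_namespaced_service(namespace="default", body=service)

A later update would then be triggered by patching the Deployment with a new image version; the readiness probe is what keeps traffic away from replicas that are not yet ready to respond.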
The evolution of needs in the provision of cloud services has triggered the search for new technologies that can adapt to these demands.
In the past, providing several services of the same kind required programs in charge of managing the requests to the service providers so that they worked properly, which can lead to complicated configurations. Furthermore, services of different types can conflict over the resources of the physical machine on which they are hosted, which forces the use of a different machine for each service provider. This is clearly inefficient, since a machine that does not use all of its available resources is dedicated to a single service, which wastes resources.
Virtual machines arose to address this: they allow a physical machine to virtualize complete environments and create servers inside it. However, this requires an entire environment, that is, an operating system for each virtual machine and the corresponding reservation of resources, which alleviates the problem but does not solve it. That is why lightweight virtualization mechanisms emerged, which create optimized environments by providing each environment only with what it needs.
All this need to optimize cloud service provision comes from the increasing demand for services by users. For this reason, servers that can increase their resources at times of peak demand and remain available at all times are needed; an inaccessible service cannot be permitted, even while service providers are being updated. This motivates the study of decentralized technologies, so that the failure of one machine does not stop the system from working, which, together with self-repair technologies for the services, makes the server system reliable. In addition, the high demand for services may mean that a single server is not enough, giving rise to the need for several servers providing the same service, so load balancing of these requests is an important tool and, therefore, one to cover in this project.
Currently, there are lightweight virtualization technologies based on containers, together with methods to manage them. A container can be defined as a program packaged with its minimal environment and optimized to be as light as possible. Given the need to run many containers and to take advantage of networked resources, the aim is to create decentralized environments that run, in a simple way, a group of containers belonging to the same service, distributed across several machines that work together: an environment of orchestrated containers. This solves the single-point-of-failure problem of centralized systems and makes available the resources of several machines connected in a network.
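As a small sketch of what this orchestration looks like in practice, again using the Kubernetes Python client and the same hypothetical web-server name, one can scale a service up and observe its containers spread over the machines of the cluster:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Ask the orchestrator for more replicas of the (hypothetical) web-server service.
apps.patch_namespaced_deployment_scale(
    name="web-server",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Print which cluster node each replica ended up on: one service, several machines.
pods = core.list_namespaced_pod(namespace="default", label_selector="app=web-server")
for pod in pods.items:
    print(pod.metadata.name, "->", pod.spec.node_name)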
For the study of orchestration technology, Kubernetes has been chosen as the orchestrator. Kubernetes was created by Google and has been open source since 2015, allowing developers to access the code to improve it, or simply to understand it in order to create plugins that fit Kubernetes well. The spread of Kubernetes, due to its ability to orchestrate containers, together with the release of its code, has given rise to communities of application developers who collaborate to improve this technology.
This circumstance has generated the need to certify these people and ensure that they are able to create efficient applications; in this way, companies or anyone who needs applications can be sure to hire the right person. This is not limited to people: there has also been a need to certify applications so that customers are confident they will run on any platform that offers Kubernetes as a service. It is such an advantageous technology for providing services that several companies have focused on offering Kubernetes itself as a service (KaaS), which has also led to certifications for service providers that guarantee customers that the Kubernetes services they contract will work correctly.