Rights:
Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract:
It is now commonly agreed that future 5G Networks will build upon the network
slicing concept. Network slicing is an emerging paradigm in mobile networks that
leverages Network Function Virtualization (NFV) to enable the instantiation of multiple
logically independent copies, named slices, of the same physical network infrastructure.
The operator can allocate to each slice dedicated resources and customized functions that
make it possible to meet the highly heterogeneous and stringent requirements of modern mobile
services. Managing functions and resources under network slicing is a challenging task
that requires making efficient decisions at all network levels and in real time, which can
be achieved by integrating Artificial Intelligence (AI) into the network.
This thesis investigates the potential of AI for sliced mobile networks. In particular,
it focuses on resource allocation and orchestration for network slices. This involves two
steps: (i) admission control, which decides which slices can be admitted to the network,
and (ii) network resource orchestration, which dynamically allots to the admitted slices
the resources necessary for their operation.
Network slicing will have an impact on the models that sustain the business ecosystem,
opening the door to new players: the Infrastructure Provider (InP), which owns the
infrastructure, and the tenants, which may acquire a network slice from the InP
to deliver specific services to their customers. In this new context, how to correctly
handle resource allocation among tenants and how to maximize the monetization of
the infrastructure become fundamental problems that need to be solved. In this thesis
we address this issue by designing a network slice admission control algorithm that (i)
autonomously learns the best acceptance policy while (ii) ensuring that the service
guarantees provided to tenants are always satisfied. This includes (i) an analytical model
for the admissibility region of a network slicing-capable 5G network, (ii) the analysis of
the system (modeled as a Semi-Markov Decision Process) and the optimization of the
infrastructure provider's revenue, and (iii) the design of a machine learning algorithm
that can be deployed in practical settings and achieves close-to-optimal performance.
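To give a flavor of such a learning-based admission controller, the minimal sketch below uses tabular Q-learning over a toy accept/reject decision; the state encoding, reward values, and slice class names are illustrative assumptions and do not reproduce the algorithm developed in the thesis.

```python
# Minimal sketch of a learning-based slice admission controller.
# All class names, reward values, and the state encoding are illustrative
# assumptions; they do not reproduce the thesis's actual algorithm.
import random
from collections import defaultdict

ACTIONS = (0, 1)  # 0 = reject the incoming slice request, 1 = accept it


class SliceAdmissionAgent:
    """Tabular Q-learning over (occupancy, request-class) states."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> value estimate
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration over the two admission actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


def admission_reward(accepted, slice_class, sla_violated):
    # Hypothetical revenue model: accepting earns class-dependent revenue,
    # but an SLA violation caused by over-admission is heavily penalized.
    if not accepted:
        return 0.0
    revenue = {"urllc": 10.0, "embb": 5.0}[slice_class]
    return revenue - (50.0 if sla_violated else 0.0)
```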
Dynamically orchestrating network resources is both a critical and challenging task in
upcoming multi-tenant mobile networks, which requires allocating capacity to individual
network slices so as to accommodate future time-varying service demands. Such an anticipatory resource configuration process must be driven by suitable predictors that take
into account all the sources of monetary cost associated with network capacity orchestration.
Legacy models that aim at forecasting traffic demands fail to capture these key economic
aspects of network operation. To close this gap, in the second part of this thesis we
first present DeepCog, a first-generation deep neural network architecture inspired by
advances in image processing and trained via a dedicated loss function that accounts
for the monetary costs due to overprovisioning or underprovisioning of network
capacity. Unlike traditional traffic volume predictors, DeepCog returns a cost-aware
capacity forecast, which can be directly used by operators to make short- and long-term
reallocation decisions that maximize their revenues. Extensive performance evaluations
with real-world measurement data collected in a metropolitan-scale operational mobile
network demonstrate the effectiveness of our proposed solution, which can reduce
resource management costs by over 50% in practical case studies.
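As an illustration of what such a cost-aware objective can look like, the following sketch defines an asymmetric loss that charges underprovisioning more heavily than overprovisioning; the piecewise-linear shape and the constants are assumptions made only for this example and are not DeepCog's published loss function.

```python
import numpy as np


def cost_aware_loss(predicted_capacity, actual_demand,
                    underprovision_penalty=5.0, overprovision_cost=1.0):
    """Illustrative asymmetric loss for capacity forecasting.

    The constants and the piecewise-linear shape are assumptions made for
    this example only. Underprovisioning (forecast below demand) leads to
    denied traffic and is charged more heavily than the linear cost of
    idle, overprovisioned capacity.
    """
    gap = predicted_capacity - actual_demand
    under = np.where(gap < 0, -gap * underprovision_penalty, 0.0)
    over = np.where(gap >= 0, gap * overprovision_cost, 0.0)
    return np.mean(under + over)


# Example: undershooting demand by 2 units costs 5x more than
# overshooting it by the same amount.
print(cost_aware_loss(np.array([8.0, 12.0]), np.array([10.0, 10.0])))
```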
We then introduce AZTEC, a second-generation data-driven framework that effectively allocates capacity to
individual slices by adopting an original multi-timescale forecasting model. Hinging on
a combination of deep learning architectures and a traditional optimization algorithm,
AZTEC anticipates resource assignments that minimize the comprehensive management
costs induced by resource overprovisioning, instantiation and reconfiguration, as well as
by denied traffic demands. Experiments with real-world mobile data traffic show that
AZTEC dynamically adapts to traffic fluctuations and largely outperforms state-of-the-art
solutions for network resource orchestration.
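The multi-timescale idea can be pictured with the toy allocation routine below, which sizes a long-timescale baseline block once per planning window and adds short-timescale top-ups slot by slot; the quantile, the cost weights, and the decomposition itself are assumptions used only for illustration and do not correspond to AZTEC's actual optimization.

```python
import numpy as np


def allocate_capacity(long_term_forecast, short_term_forecast,
                      reconfig_cost=2.0, overprov_cost=1.0):
    """Toy two-timescale capacity allocation (illustrative assumptions only).

    A long-timescale baseline block is sized once per planning window from
    an upper quantile of the forecast, and a short-timescale correction
    tops it up slot by slot, each change paying a reconfiguration cost.
    """
    # Long timescale: one baseline block kept for every slot in the window.
    baseline = np.quantile(long_term_forecast, 0.8)

    # Short timescale: per-slot top-up only when the short-term forecast
    # exceeds the baseline.
    topup = np.maximum(short_term_forecast - baseline, 0.0)
    allocation = baseline + topup

    n_reconfigs = np.count_nonzero(np.diff(topup))
    total_cost = (overprov_cost * np.sum(allocation - short_term_forecast)
                  + reconfig_cost * n_reconfigs)
    return allocation, total_cost
```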
At the time of writing, DeepCog and AZTEC are, to the best of our knowledge, the
only works where a deep learning architecture is explicitly tailored to the problem of
anticipatory resource orchestration in mobile networks.