Rights:
Atribución-NoComercial-SinDerivadas 3.0 España
Abstract:
The desire for privacy is often associated with the intention to hide certain
aspects of our thoughts or actions because of some illicit activity. This is a
narrow understanding of privacy, and a marginal fragment of the motivations
for undertaking an action with a desired level of privacy. The right not to
be subjected to arbitrary interference with our privacy is part of the Universal
Declaration of Human Rights (Article 12) and, beyond that, a prerequisite for
our freedom. Developing freely as a person, which in turn develops society,
requires being able to act without a watchful eye. While awareness of privacy
in the context of modern technologies is not widespread, it is clearly
understood, as can be seen in the context of elections, that making a free
choice requires maintaining one's privacy. So why demand privacy when electing
our government, but not when selecting our daily interests, the books we read,
the sites we browse, or the people we meet?
It is a popular belief that the data we expose about ourselves will not be
exploited as long as we are law-abiding citizens. Nothing could be further from
the truth: this data is used daily for commercial purposes, because users' data
has value. To make matters worse, data has also been used for political purposes
without the user's consent or knowledge. Yet the benefits that data can bring to
individuals seem endless, and simply not using this data at all seems extreme.
In recent years, legislative efforts have tried to provide mechanisms for users
to decide what is done with their data and to define a framework in which
companies may use it, but always with the user's consent. However, such attempts
take time to gain traction and have unfortunately not been very successful since
their introduction.
In this thesis we explore the possibility of constructing cryptographic protocols
that provide a technical, rather than legislative, solution to the privacy
problem. In particular, we focus on two aspects of society: browsing and
internet voting. Both shape our lives in one way or another, and both require
high levels of privacy to provide a safe environment in which humans can act
freely. However, the two problems stand in opposite situations.
On the one hand, elections are a well-established event in society that has
existed for millennia, and privacy and accountability are deeply rooted
requirements for them. This may be why their digitalisation
is falling behind other activities of our society
(banking, shopping, reading, etc.). On the other hand, browsing is a recently
introduced activity, but one that has quickly gained traction given the range
of possibilities it opens up with such ease. We now have access to whatever we
can imagine (except for voting) at the distance of a click. However, the data
we generate while browsing is extremely sensitive, and most of it is disclosed
to third parties under the claim of improving the user experience
(targeted recommendations, ads, or bot detection).
Chapter 1 motivates why resolving this problem is necessary for the
progress of digital society. It then introduces the problem that this thesis
aims to solve, together with the methodology. In Chapter 2 we introduce
the technical concepts used throughout the thesis, and we also present the
state of the art and its limitations.
In Chapter 3 we focus on a mechanism to provide private browsing. In
particular, we focus on how to provide a safer and more private way to perform
human attestation. Determining whether a user is a human or a bot is important
for the survival of the online world, yet the existing mechanisms
are either invasive or place a burden on the user. We present a solution
based on a machine learning model that distinguishes between humans and
bots using natural events of normal browsing (such as touching the screen
of a phone) to make its prediction. To ensure that no private data leaves
the user's device, we evaluate the model on the device rather than sending
the data over the wire. To provide assurance that the expected model has
been evaluated, the user's device generates a cryptographic proof. However,
this opens an important question: can we achieve a high level of accuracy
without incurring prohibitive battery consumption? We provide a positive
answer to this question in this work, and show that a privacy-preserving
solution can be achieved while keeping accuracy high and the performance
overhead on the user low.
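
As an illustration only (this sketch is not taken from the thesis), the
client-side flow could look roughly as follows in Python. Here
prove_evaluation is a hypothetical stand-in for a real zero-knowledge proof
system, which would attest to the evaluation without revealing the features:

    import hashlib
    import json

    # Hypothetical linear model for distinguishing humans from bots, trained
    # on natural browsing events (e.g. touch pressure, inter-event timing).
    MODEL_WEIGHTS = [0.8, -1.2, 0.5]
    MODEL_BIAS = -0.1

    def evaluate_model(features):
        """Evaluate the model locally: raw events never leave the device."""
        score = MODEL_BIAS + sum(w * x for w, x in zip(MODEL_WEIGHTS, features))
        return score > 0  # True means "human"

    def prove_evaluation(features, prediction):
        """Stand-in for a zero-knowledge proof that the expected model was run.

        A real proof would convince the server of correct evaluation without
        revealing the features; here a hash merely binds model, input, output.
        """
        payload = json.dumps({"weights": MODEL_WEIGHTS, "bias": MODEL_BIAS,
                              "features": features, "prediction": prediction})
        return hashlib.sha256(payload.encode()).hexdigest()

    # On-device attestation: only the prediction and proof go over the wire.
    features = [0.9, 0.3, 0.7]  # extracted from touch/scroll events
    prediction = evaluate_model(features)
    proof = prove_evaluation(features, prediction)
    print(prediction, proof)
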
In Chapter 4 we focus on the problem of internet voting. Internet voting
means voting remotely, and therefore in an uncontrolled environment.
This means that anyone may be voting under the supervision of a coercer,
which makes coercion-resistance the main goal of the protocols presented:
we need to build a protocol that allows a voter to escape the
act of coercion. We present two proposals whose main goal is to provide
a usable and scalable coercion-resistant protocol, each with different
trade-offs. The first is a coercion-resistance mechanism with linear-time
filtering, but it provides a slightly weaker notion of coercion-resistance.
The second has slightly higher (poly-logarithmic) complexity, but in
exchange provides a stronger notion of coercion-resistance. Both solutions
are based on the same idea: allowing the voter to cast several votes
(such that only the last one is counted) in a way that a coercer cannot detect.
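
Purely as an illustrative sketch (not the thesis's actual protocol): in the
clear, last-vote-counts filtering is a single linear pass, as below. The
cryptographic difficulty the chapter addresses is doing this over encrypted
ballots, so that a coercer cannot tell whether a later vote replaced the
coerced one.

    def filter_last_votes(ballots):
        """Keep only each voter's last ballot, in one linear pass.

        `ballots` is a list of (credential, ballot) pairs in casting order.
        In the real protocol both fields would be encrypted and the filtering
        done obliviously, so that re-voting stays hidden from a coercer.
        """
        last = {}
        for credential, ballot in ballots:
            last[credential] = ballot  # a later vote overrides earlier ones
        return list(last.values())

    # A coerced voter casts "A" under coercion, then later re-votes "B".
    print(filter_last_votes([("alice", "A"), ("bob", "C"), ("alice", "B")]))
    # ['B', 'C'] -- only Alice's final ballot reaches the tally
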
Finally, in Chapter 5, we conclude the thesis and show how our results
push the state of the art one step further. We concisely summarise our
contributions and clearly describe the next steps to follow. The results
presented in this work argue against the two main claims made against
privacy-preserving solutions: that privacy is not practical, and that higher
levels of privacy result in lower levels of security.