Generative models for image segmentation and representation

e-Archivo Repository

Show simple item record

dc.contributor.advisor Díaz de María, Fernando
dc.contributor.author González Díaz, Iván
dc.date.accessioned 2011-11-09T13:09:45Z
dc.date.available 2011-11-09T13:09:45Z
dc.date.issued 2011
dc.date.submitted 2011-07-21
dc.description.abstract This Ph.D. thesis consists of two well-differentiated parts, each focusing on one particular field of Computer Vision. The first part of the document considers the problem of automatically generating image segmentations in video sequences in the absence of any kind of semantic knowledge or labeled data. To that end, a blind spatio-temporal segmentation algorithm is proposed that fuses motion, color, and spatial information to produce robust segmentations. The approach follows an iterative splitting process in which well-known probabilistic techniques such as Gaussian Mixture Models are used as a core technique. At each iteration of the segmentation process, some regions are split into new ones, so that the number of mixture components is automatically set depending on the image content. Furthermore, in order to retain valuable information from previous iterations, prior distributions are applied to the mixture components so that areas of the image that remain unchanged are fixed during the learning process. Additionally, in order to decide whether or not to split regions at the end of an iteration, we propose the use of novel spatio-temporal mid-level features. These features model properties that are usually found in real-world objects, so that the resulting segmentations are closer to human perception. Examples of spatial mid-level features are regularity and adjacency, whereas the temporal ones relate to well-known motion patterns such as translation or rotation. The proposed algorithm has been assessed in comparison with several state-of-the-art spatio-temporal segmentation algorithms, taking special care to show the influence of each of the original contributions. The second part of the thesis studies the application of generative probabilistic models to the image representation problem.
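The iterative splitting idea described above — regions whose statistics are too heterogeneous get split, so the number of components adapts to the content — can be illustrated with a toy sketch. This is a minimal illustrative assumption, not the thesis algorithm: real segmentation would use GMMs over motion, color, and spatial features plus the mid-level split criteria; here the split test is just total variance and the split is a median cut.

```python
import numpy as np

def split_region(pixels):
    """Split a set of feature vectors in two along the axis of largest variance
    (a crude stand-in for refitting a two-component mixture)."""
    axis = np.argmax(pixels.var(axis=0))
    thr = np.median(pixels[:, axis])
    return pixels[pixels[:, axis] <= thr], pixels[pixels[:, axis] > thr]

def iterative_segmentation(pixels, var_threshold=0.05, max_iters=10):
    """Iteratively split regions whose total variance exceeds a threshold,
    so the final number of regions depends on the data, not on a preset K."""
    regions = [pixels]
    for _ in range(max_iters):
        new_regions, changed = [], False
        for r in regions:
            if len(r) > 1 and r.var(axis=0).sum() > var_threshold:
                a, b = split_region(r)
                if len(a) and len(b):
                    new_regions += [a, b]
                    changed = True
                    continue
            new_regions.append(r)
        regions = new_regions
        if not changed:          # convergence: no region was split this pass
            break
    return regions

# Two well-separated "colour" clusters: the loop should stop at two regions.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.01, (50, 3)),
                  rng.normal(0.8, 0.01, (50, 3))])
regions = iterative_segmentation(data)
```

The key design point mirrored here is that the stopping rule is data-driven: homogeneous regions are left untouched, echoing how the thesis fixes unchanged areas via priors across iterations.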
We consider “image representation” as a concurrent process that helps to understand the contents of an image and covers several particular tasks in computer vision, such as image recognition, object detection, and image segmentation. Starting from the well-known bag-of-words paradigm, we study the application of Latent Topic Models. These models were initially proposed in the text retrieval field and consider a document as generated by a mixture of latent topics that are ideally associated with semantic concepts. Each topic in turn generates visual local descriptors following a specific distribution. Due to the bag-of-words representation, Latent Topic Models exhibit an important limitation when applied to vision problems: they do not model the spatial distribution of topics across images. The benefits of such spatial modeling are twofold: first, improved performance of these models in tasks such as image classification or topic discovery; and second, an enrichment of the models with the capability of generating robust image segmentations. However, modeling the spatial location of visual words under this framework is no longer straightforward, since one must ensure that both appearance and spatial models are jointly trained using the same learning algorithm that infers the latent topics. We have proposed two Latent Topic Models, the Region-Based Latent Topic Model and Region-Based Latent Dirichlet Allocation, that extend basic approaches to model the spatial distribution of topics across images. To that end, previous blind segmentations provide a geometric layout of an image and are included in the model through cooperative distributions that allow regions to influence each other. In addition, our proposals tackle several other aspects of topic models that enhance the image representation. It is worth mentioning one contribution that explores the use of advanced appearance models, since it has been shown to notably improve performance in several tasks.
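The generative picture above — latent topics produce visual words, and a region's topic mixture is influenced by its neighbours through cooperative distributions — can be sketched with a toy forward sampler. Everything here (the 2×2 region layout, the coupling strength, the smoothing passes) is an illustrative assumption, not the inference procedure of the Region-Based models in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
K, V = 3, 20                                        # topics, visual-word vocabulary
topics = rng.dirichlet(np.ones(V) * 0.5, size=K)    # per-topic word distributions

# Four regions from a (hypothetical) blind segmentation, on a 2x2 layout;
# adjacency encodes which regions may influence each other.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

def region_topic_distributions(alpha=0.5, coupling=2.0, passes=5):
    """Draw a topic mixture per region, biasing each Dirichlet draw towards
    the neighbours' mixtures -- a crude stand-in for cooperative distributions."""
    theta = rng.dirichlet(np.ones(K) * alpha, size=4)
    for _ in range(passes):
        for r, nbrs in adjacency.items():
            prior = np.ones(K) * alpha + coupling * sum(theta[n] for n in nbrs)
            theta[r] = rng.dirichlet(prior)
    return theta

theta = region_topic_distributions()

# Generate visual words for region 0: pick a topic, then a word from that topic.
words = [rng.choice(V, p=topics[rng.choice(K, p=theta[0])]) for _ in range(30)]
```

The point of the coupling step is exactly the limitation the abstract identifies in plain bag-of-words topic models: without it, every region (indeed every word) would draw its topic from a single image-wide mixture, losing spatial structure.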
In particular, a distribution based on Kernel Logistic Regression has been proposed that takes into account the nonlinear relations among visual descriptors lying in the same image region. Our proposals have been evaluated in three important tasks towards total scene understanding: image classification, category-based image segmentation, and unsupervised topic discovery. The obtained results support our developments and compare well with several state-of-the-art algorithms, and even with more complex submissions to international challenges in the vision field.
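Kernel Logistic Regression, mentioned above as the basis of the advanced appearance model, can be sketched in a few lines: an RBF kernel lifts descriptors into a space where nonlinear relations become linearly separable, and dual weights are fit by gradient descent on the regularized log-loss. This is a generic KLR sketch on toy 2-D data, not the appearance distribution from the thesis; the kernel width, learning rate, and ring-shaped toy problem are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, gamma=1.0, lam=1e-3, lr=0.3, iters=1500):
    """Fit kernel logistic regression by gradient descent on the dual weights."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(X))
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))       # predicted probabilities
        grad = K @ (p - y) / len(X) + lam * K @ alpha
        alpha -= lr * grad
    return alpha

def predict_klr(alpha, Xtrain, Xnew, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-rbf_kernel(Xnew, Xtrain, gamma) @ alpha))

# Toy nonlinear problem (XOR-like quadrants): linear logistic regression
# fails here, while the kernelized model separates the classes.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (80, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float)
alpha = fit_klr(X, y, gamma=5.0)
train_acc = ((predict_klr(alpha, X, X, gamma=5.0) > 0.5) == (y > 0.5)).mean()
```

The XOR-style labels are chosen deliberately: they are the classic case where only a nonlinear decision function works, which is the property the abstract credits to the KLR-based appearance model.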
dc.format.mimetype application/pdf
dc.language.iso eng
dc.rights Atribución-NoComercial-SinDerivadas 3.0 España
dc.subject.other Computer vision
dc.subject.other Image segmentation
dc.subject.other Segmentation algorithms
dc.title Generative models for image segmentation and representation
dc.type doctoralThesis
dc.type PeerReviewed
dc.subject.eciencia Telecomunicaciones
dc.rights.accessRights openAccess
dc.contributor.departamento Universidad Carlos III de Madrid. Departamento de Teoría de la Señal y Comunicaciones
