This page gives a quick overview of my work in the following areas.

Query Optimization

[Figure: Dissertation on query optimization (overview)]

Declarative query languages such as SQL allow users to simply describe the data they need instead of specifying how to generate it. Such languages rely on query optimizer components that translate declarative queries into executable query plans. The goal of query optimization is generally to find an optimal query plan for a given query (e.g., a plan with minimal execution time). The query optimization problem was introduced in the 1970s and is perhaps the most intensively studied optimization problem in the database community. However, the context of query optimization keeps changing, which leads to novel challenges. For instance, modern execution engines have many more tuning parameters than before and motivate us to compare alternative query plans according to multiple cost metrics instead of only one. Both factors make optimization more challenging. On the other hand, novel optimization platforms have become available that can be exploited for query optimization.
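To make the multi-metric setting concrete, here is a small, hypothetical Python sketch that compares candidate plans by Pareto dominance, one natural way to handle multiple cost metrics. The metric names and the plan representation are purely illustrative and not taken from any particular system:

```python
from dataclasses import dataclass

@dataclass
class PlanCost:
    """Cost of one query plan under several metrics (illustrative names)."""
    time: float      # estimated execution time
    memory: float    # estimated peak memory footprint
    fees: float      # estimated monetary cost, e.g., in a cloud setting

    def as_tuple(self):
        return (self.time, self.memory, self.fees)

def dominates(a: PlanCost, b: PlanCost) -> bool:
    """True if a Pareto-dominates b: no worse in every metric, strictly better in one."""
    no_worse = all(x <= y for x, y in zip(a.as_tuple(), b.as_tuple()))
    strictly_better = any(x < y for x, y in zip(a.as_tuple(), b.as_tuple()))
    return no_worse and strictly_better

def prune_to_pareto_frontier(costs: list[PlanCost]) -> list[PlanCost]:
    """Keep only plans that are not dominated by any other plan."""
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other is not c)]

# With a single metric there is one optimal plan; with several metrics the
# optimizer must keep a whole frontier of mutually incomparable plans.
candidates = [
    PlanCost(time=10.0, memory=2.0, fees=0.5),
    PlanCost(time=12.0, memory=1.0, fees=0.4),
    PlanCost(time=15.0, memory=3.0, fees=0.6),  # dominated by the first plan
]
print(prune_to_pareto_frontier(candidates))
```

The third plan is dominated and can be discarded, while the first two plans are incomparable; this is why multi-metric optimization has to track sets of plans rather than a single best plan.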

In my dissertation, I address several extended variants of query optimization for which no practical algorithms were previously available. I also show how to optimize queries in the traditional query optimization setting that are far too large for prior methods. I propose a variety of approaches for making query optimization more efficient; they can be grouped into three high-level ideas, as illustrated in the overview figure above. First, if queries correspond to query templates that are known in advance, we can make optimization a pre-processing step. Even if optimization takes a long time, it then adds no run-time overhead. Second, we can speed up optimization by relaxing optimality guarantees. We have published approximation schemes, incremental algorithms, and randomized algorithms for novel query optimization variants. Finally, we can decrease optimization time by leveraging novel software and hardware platforms for optimization. We have recently shown how to leverage integer programming solvers, massively parallel computation clusters, and quantum annealers for query optimization variants. In many scenarios, our approaches reduce optimization times from hours or even years to minutes or even seconds.
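As a rough illustration of the second idea, trading optimality guarantees for speed, the following hypothetical Python sketch applies plain randomized hill climbing to join ordering with a toy cost model. The published algorithms are considerably more sophisticated; this only conveys the flavor of a randomized approach:

```python
import random

def plan_cost(order, cardinality, selectivity):
    """Toy cost model: sum of intermediate result sizes for a left-deep join order."""
    size = cardinality[order[0]]
    total = 0.0
    for table in order[1:]:
        size = size * cardinality[table] * selectivity
        total += size
    return total

def random_swap(order):
    """Swap two random positions in the join order."""
    i, j = random.sample(range(len(order)), 2)
    new_order = list(order)
    new_order[i], new_order[j] = new_order[j], new_order[i]
    return new_order

def hill_climb(tables, cardinality, selectivity=0.1, steps=1000):
    """Randomized iterative improvement: accept a swap only if it lowers the cost."""
    order = list(tables)
    random.shuffle(order)
    best_cost = plan_cost(order, cardinality, selectivity)
    for _ in range(steps):
        candidate = random_swap(order)
        cost = plan_cost(candidate, cardinality, selectivity)
        if cost < best_cost:
            order, best_cost = candidate, cost
    return order, best_cost

cardinality = {"R": 1000, "S": 50, "T": 200000, "U": 10}
print(hill_climb(["R", "S", "T", "U"], cardinality))
```

Such a search gives no optimality guarantee, but it scales to query sizes for which exhaustive plan enumeration is hopeless.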

Quantum Computing

Quantum computing is currently reaching a tipping point: a purely theoretical research area is turning into one that allows empirical evaluation. Several companies have recently presented the first hardware devices that are claimed to realize at least a limited form of quantum computing. We currently have access to a D-Wave 2X adiabatic quantum annealer with over 1000 qubits. Such machines solve NP-hard optimization problems. Using them in practice is, however, often challenging and requires problem-specific transformation and mapping methods. Our goal is to find out if and how such machines can be used to solve optimization problems that arise in the context of data analysis and data management.

I recently presented the first paper on quantum computing at the VLDB conference (see the talk video below). In this paper, we focus on a classical optimization problem from the database domain: multiple query optimization. We show how to map instances of that problem to the restrictive input format supported by the quantum annealer. Doing so is challenging, mainly due to the limited number of qubits and the sparse connections between qubits. We describe problem representations with low asymptotic qubit growth that map well to the sparse connection structure between qubits. We also introduce an algorithm that calculates such mappings efficiently at run time. Finally, we present experimental results obtained on the quantum annealer, thereby evaluating the practicality and performance of the proposed method.
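To give an intuition for this kind of mapping, here is a simplified, hypothetical Python sketch that assembles a QUBO (quadratic unconstrained binary optimization) problem for a toy multiple query optimization instance. Each binary variable stands for selecting one plan for one query; penalty terms enforce that exactly one plan per query is chosen, and negative couplings reward plan pairs that share intermediate results. All numbers and the penalty weight are made up, and the sketch ignores the sparse qubit connectivity that the actual embedding in the paper has to respect:

```python
from collections import defaultdict

plan_costs = {            # plan_costs[query][plan] = execution cost (toy values)
    0: [6.0, 4.0],
    1: [5.0, 7.0],
}
shared_savings = {        # ((query, plan), (query, plan)) -> cost saved by sharing work
    ((0, 0), (1, 0)): 3.0,
}

PENALTY = 100.0           # must outweigh any cost difference so that
                          # exactly one plan per query is selected

def variable(query, plan):
    """Map a (query, plan) pair to a single binary variable index."""
    return sum(len(plan_costs[q]) for q in plan_costs if q < query) + plan

def build_qubo():
    """Assemble QUBO coefficients Q[i, j]; energy = sum_ij Q[i, j] * x_i * x_j."""
    Q = defaultdict(float)
    for q, costs in plan_costs.items():
        for p, cost in enumerate(costs):
            # linear terms: plan cost plus the linear part of (1 - sum_p x_qp)^2
            Q[(variable(q, p), variable(q, p))] += cost - PENALTY
        # quadratic penalty for selecting more than one plan of the same query
        for p1 in range(len(costs)):
            for p2 in range(p1 + 1, len(costs)):
                Q[(variable(q, p1), variable(q, p2))] += 2 * PENALTY
    # negative couplings reward selecting pairs of plans that share intermediate results
    for ((q1, p1), (q2, p2)), saving in shared_savings.items():
        Q[(variable(q1, p1), variable(q2, p2))] -= saving
    return dict(Q)

print(build_qubo())
```

In this toy instance, the sharing reward makes it cheaper to pick the nominally more expensive plan of the first query, because it reuses work of the second query's plan; the annealer's job is to find the variable assignment with minimal energy.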


Text Mining and Machine Learning

Search engine providers such as Google regularly receive queries that contain subjective predicates. Users query, for instance, for "big cities", "cute animals", or "safe cities". In order to answer those queries from structured data, search engine providers need to understand which subjective properties the average user associates with which entities. This was the motivation for a project that I recently concluded at Google Mountain View. During this project, we developed the Surveyor system, which mines the entire Web to find billions of subjective associations.

We use natural language analysis to identify statements in Web text that express an opinion about whether or not a specific property applies to a specific entity. Since we consider subjective properties, user opinions diverge and we need to resolve conflicting opinions in order to correctly infer the majority opinion. This is surprisingly difficult: simple resolution strategies (e.g., taking the majority vote among conflicting opinions) lead to poor results, i.e., a poor match with opinions collected from test users. The reason is various types of skew that influence the probability that users express a certain opinion on the Web. For instance, users who think that a certain city is big are more likely to write about it on the Web; hence, they are overrepresented in a sample of opinions collected from the Web. We overcome those challenges by learning property- and entity-type-specific user behavior models via unsupervised machine learning, using an expectation-maximization approach. A large user study shows that those models let us infer the opinions of the average user quite reliably.
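To convey the flavor of this approach, the following heavily simplified, hypothetical Python sketch runs an expectation-maximization loop over made-up opinion counts. Unlike Surveyor, it uses a single global pair of reporting parameters rather than property- and entity-type-specific behavior models; it jointly estimates, for each entity, the probability that the property truly applies and how likely users are to post positive statements under either hypothesis:

```python
import math

def em_infer_majority(votes, iterations=50):
    """votes: dict mapping entity -> (num_yes, num_no) opinion statements mined from the Web.
    Returns the posterior probability that the property truly applies to each entity."""
    pi = 0.5   # prior probability that the property applies
    a = 0.7    # P(a mined statement says "yes" | property applies)
    b = 0.3    # P(a mined statement says "yes" | property does not apply)
    posterior = {e: 0.5 for e in votes}
    for _ in range(iterations):
        # E-step: posterior over the latent truth for every entity
        for e, (yes, no) in votes.items():
            log_p1 = math.log(pi) + yes * math.log(a) + no * math.log(1 - a)
            log_p0 = math.log(1 - pi) + yes * math.log(b) + no * math.log(1 - b)
            diff = min(log_p0 - log_p1, 700.0)   # guard against overflow
            posterior[e] = 1.0 / (1.0 + math.exp(diff))
        # M-step: re-estimate the prior and the reporting parameters
        pi = min(max(sum(posterior.values()) / len(posterior), 1e-6), 1 - 1e-6)
        yes_true = sum(p * y for (y, n), p in zip(votes.values(), posterior.values()))
        all_true = sum(p * (y + n) for (y, n), p in zip(votes.values(), posterior.values()))
        yes_false = sum((1 - p) * y for (y, n), p in zip(votes.values(), posterior.values()))
        all_false = sum((1 - p) * (y + n) for (y, n), p in zip(votes.values(), posterior.values()))
        a = (yes_true + 1) / (all_true + 2)    # Laplace smoothing keeps parameters in (0, 1)
        b = (yes_false + 1) / (all_false + 2)
    return posterior

# Made-up counts: positive statements tend to be overrepresented on the Web.
votes = {"tokyo": (95, 5), "smalltown": (60, 40)}
print(em_infer_majority(votes))
```

The system is described in more detail in the corresponding SIGMOD 2015 talk: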