
LIPIcs, Volume 47

33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)




Event

STACS 2016, February 17-20, 2016, Orléans, France

Editors

Nicolas Ollinger
Heribert Vollmer

Publication Details

  • Published: 2016-02-16
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-001-9
  • DBLP: db/conf/stacs/stacs2016

Documents

Document
Complete Volume
LIPIcs, Volume 47, STACS'16, Complete Volume

Authors: Nicolas Ollinger and Heribert Vollmer


Abstract
LIPIcs, Volume 47, STACS'16, Complete Volume

Cite as

33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{ollinger_et_al:LIPIcs.STACS.2016,
  title =	{{LIPIcs, Volume 47, STACS'16, Complete Volume}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016},
  URN =		{urn:nbn:de:0030-drops-57682},
  doi =		{10.4230/LIPIcs.STACS.2016},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs, Mathematical Logic, Formal Languages}
}
Document
Front Matter
Front Matter, Foreword, Conference Organization, External Reviewers, Table of Contents

Authors: Nicolas Ollinger and Heribert Vollmer


Abstract
Front Matter, Foreword, Conference Organization, External Reviewers, Table of Contents

Cite as

33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{ollinger_et_al:LIPIcs.STACS.2016.0,
  author =	{Ollinger, Nicolas and Vollmer, Heribert},
  title =	{{Front Matter, Foreword, Conference Organization, External Reviewers, Table of Contents}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.0},
  URN =		{urn:nbn:de:0030-drops-57015},
  doi =		{10.4230/LIPIcs.STACS.2016.0},
  annote =	{Keywords: Front Matter, Foreword, Conference Organization, External Reviewers, Table of Contents}
}
Document
Invited Talk
Ideal Decompositions for Vector Addition Systems (Invited Talk)

Authors: Jérôme Leroux and Sylvain Schmitz


Abstract
Vector addition systems, or equivalently Petri nets, are one of the most popular formal models for the representation and the analysis of parallel processes. Many problems for vector addition systems are known to be decidable thanks to the theory of well-structured transition systems. Indeed, vector addition systems with configurations equipped with the classical point-wise ordering are well-structured transition systems. Based on this observation, problems like coverability or termination can be proven decidable. However, the theory of well-structured transition systems does not explain the decidability of the reachability problem. In this presentation, we show that runs of vector addition systems can also be equipped with a well quasi-order. This observation provides a unified understanding of the data structures involved in solving many problems for vector addition systems, including the central reachability problem.

Cite as

Jérôme Leroux and Sylvain Schmitz. Ideal Decompositions for Vector Addition Systems (Invited Talk). In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{leroux_et_al:LIPIcs.STACS.2016.1,
  author =	{Leroux, J\'{e}r\^{o}me and Schmitz, Sylvain},
  title =	{{Ideal Decompositions for Vector Addition Systems}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.1},
  URN =		{urn:nbn:de:0030-drops-57024},
  doi =		{10.4230/LIPIcs.STACS.2016.1},
  annote =	{Keywords: Petri net, ideal, well-quasi-order, reachability, verification}
}
Document
Invited Talk
Complexity and Expressive Power of Ontology-Mediated Queries (Invited Talk)

Authors: Carsten Lutz


Abstract
Data sets that have been collected from multiple sources or extracted from the web are often highly incomplete and heterogeneous, which makes them hard to process and query. One way to address this challenge is to use ontologies, which provide a way to assign a semantics to the data, to enrich it with domain knowledge, and to provide an enriched and uniform vocabulary for querying. The combination of a traditional database query with an ontology is called an ontology-mediated query (OMQ). The aim of this talk is to survey fundamental properties of OMQs such as their complexity, expressive power, descriptive strength, and rewritability into traditional query languages such as SQL and Datalog. A central observation is that there is a close and fruitful connection between OMQs and constraint satisfaction problems (CSPs) as well as related fragments of monadic NP, which puts OMQs into a more general perspective and gives rise to a number of interesting results.

Cite as

Carsten Lutz. Complexity and Expressive Power of Ontology-Mediated Queries (Invited Talk). In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{lutz:LIPIcs.STACS.2016.2,
  author =	{Lutz, Carsten},
  title =	{{Complexity and Expressive Power of Ontology-Mediated Queries}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{2:1--2:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.2},
  URN =		{urn:nbn:de:0030-drops-57034},
  doi =		{10.4230/LIPIcs.STACS.2016.2},
  annote =	{Keywords: Ontology-Mediated Queries, Description Logic, Constraint Satisfaction}
}
Document
Invited Talk
Fine-Grained Algorithms and Complexity (Invited Talk)

Authors: Virginia Vassilevska Williams


Abstract
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in t(n)^{1-epsilon} time for epsilon>0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on P != NP. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem can already be solved in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. The goal of these reductions is as follows. Suppose problem A is solvable in a(n) time and problem B in b(n) time, and no a(n)^{1-epsilon} and b(n)^{1-epsilon} algorithms are known for A and B respectively, for any epsilon>0. If A is fine-grained reducible to problem B (for a(n) and b(n)), then a b(n)^{1-epsilon} time algorithm for B (for any epsilon>0) implies an a(n)^{1-epsilon'} algorithm for A (for some epsilon'>0). Now, mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require t(n)^{1-o(1)} time for some t(n), and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. In this talk I will give an overview of the current progress in this area of study, and will highlight some new exciting developments.
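
Stated compactly, the reduction guarantee described in the abstract, using only the quantities a(n), b(n), epsilon, and epsilon' introduced above, reads:

\text{If } A \text{ is fine-grained reducible to } B \text{ w.r.t. } (a(n), b(n)) \text{, then}
\quad \forall \varepsilon > 0:\; B \in \mathrm{TIME}\big(b(n)^{1-\varepsilon}\big) \;\Longrightarrow\; \exists \varepsilon' > 0:\; A \in \mathrm{TIME}\big(a(n)^{1-\varepsilon'}\big).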

Cite as

Virginia Vassilevska Williams. Fine-Grained Algorithms and Complexity (Invited Talk). In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{vassilevskawilliams:LIPIcs.STACS.2016.3,
  author =	{Vassilevska Williams, Virginia},
  title =	{{Fine-Grained Algorithms and Complexity}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.3},
  URN =		{urn:nbn:de:0030-drops-57044},
  doi =		{10.4230/LIPIcs.STACS.2016.3},
  annote =	{Keywords: algorithms, complexity, polynomial time problems}
}
Document
Tutorial
Tutorial on Cellular Automata and Tilings (Tutorial)

Authors: Jarkko Kari


Abstract
Cellular automata (CA) are massively parallel systems where a regular grid of finite symbols is updated according to a synchronous application of the same local update rule everywhere. A closely related concept is that of Wang tiles where a local relation between neighboring symbols determines allowed combinations of symbols in the grid. In this tutorial we start with classical results on cellular automata, such as the Garden-of-Eden theorems, the Curtis-Hedlund-Lyndon theorem and the balance property of surjective cellular automata. We then discuss Wang tiles and, in particular, the concept of aperiodicity and the undecidability of the domino problem. The domino problem is the decision problem to determine if a given Wang tile set admits any valid tilings of the grid. We relate Wang tiles to cellular automata, and establish a number of undecidability results for cellular automata.
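
As a concrete illustration of the model described above (our sketch, not code from the tutorial), the following minimal Python snippet updates every cell of a cyclic row synchronously with the same local rule; the choice of elementary rule 110 is an arbitrary illustrative assumption.

def step(row, rule=110):
    # synchronous update: every cell reads its (left, center, right)
    # neighborhood and looks its new state up in the 8-bit rule table
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15   # a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)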

Cite as

Jarkko Kari. Tutorial on Cellular Automata and Tilings (Tutorial). In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kari:LIPIcs.STACS.2016.4,
  author =	{Kari, Jarkko},
  title =	{{Tutorial on Cellular Automata and Tilings}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.4},
  URN =		{urn:nbn:de:0030-drops-57054},
  doi =		{10.4230/LIPIcs.STACS.2016.4},
  annote =	{Keywords: cellular automata, Wang tiles, decision problems, dynamics}
}
Document
Graph Reconstruction with a Betweenness Oracle

Authors: Mikkel Abrahamsen, Greg Bodwin, Eva Rotenberg, and Morten Stöckel


Abstract
Graph reconstruction algorithms seek to learn a hidden graph by repeatedly querying a black-box oracle for information about the graph structure. Perhaps the most well studied and applied version of the problem uses a distance oracle, which can report the shortest path distance between any pair of nodes. We introduce and study the betweenness oracle, where bet(a, m, z) is true iff m lies on a shortest path between a and z. This oracle is strictly weaker than a distance oracle, in the sense that a betweenness query can be simulated by a constant number of distance queries, but not vice versa. Despite this, we are able to develop betweenness reconstruction algorithms that match the current state of the art for distance reconstruction, and even improve it for certain types of graphs. We obtain the following algorithms: (1) Reconstruction of general graphs in O(n^2) queries, (2) Reconstruction of degree-bounded graphs in ~O(n^{3/2}) queries, (3) Reconstruction of geodetic degree-bounded graphs in ~O(n) queries. In addition to being a fundamental graph theoretic problem with some natural applications, our new results shed light on some avenues for progress in the distance reconstruction problem.
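
The simulation of a betweenness query by a constant number of distance queries, mentioned above, is the standard additivity test along shortest paths. A minimal sketch, where dist is a hypothetical distance oracle (our name, not the paper's):

def bet(dist, a, m, z):
    # m lies on some shortest a-z path iff the two legs add up exactly;
    # this answers one betweenness query with three distance queries
    return dist(a, m) + dist(m, z) == dist(a, z)

# Example on a path graph 0-1-2-3, where dist(u, v) = |u - v|:
print(bet(lambda u, v: abs(u - v), 0, 1, 3))  # True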

Cite as

Mikkel Abrahamsen, Greg Bodwin, Eva Rotenberg, and Morten Stöckel. Graph Reconstruction with a Betweenness Oracle. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{abrahamsen_et_al:LIPIcs.STACS.2016.5,
  author =	{Abrahamsen, Mikkel and Bodwin, Greg and Rotenberg, Eva and St\"{o}ckel, Morten},
  title =	{{Graph Reconstruction with a Betweenness Oracle}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.5},
  URN =		{urn:nbn:de:0030-drops-57068},
  doi =		{10.4230/LIPIcs.STACS.2016.5},
  annote =	{Keywords: graph reconstruction, bounded degree graphs, query complexity}
}
Document
Airports and Railways: Facility Location Meets Network Design

Authors: Anna Adamaszek, Antonios Antoniadis, and Tobias Mömke


Abstract
We introduce a new framework of Airport and Railway Problems, which combines capacitated facility location with network design. In this framework we are given a graph with weights on the vertices and on the edges, together with a parameter k. The vertices of the graph represent cities, and weights denote respectively the costs of opening airports in the cities and building railways that connect pairs of cities. The parameter k can be thought of as the capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting the cities, where each connected component in the network spans at most k vertices, contains an open airport, and the network satisfies some additional requirements specific to the problem in the framework. We consider two problems in this framework. In the AR_F problem there are no additional requirements for the network. This problem is related to capacitated facility location. In the AR_P problem, we require each component to be a path with airports at both endpoints. AR_P is a relaxation of the capacitated vehicle routing problem (CVRP). We consider the problems in the two-dimensional Euclidean setting. We show that both AR_F and AR_P are NP-hard, even for uniform vertex weights (i.e., when the cost of building an airport is the same for all cities). On the positive side, we provide polynomial time approximation schemes for AR_F and AR_P when vertex weights are uniform. We also investigate AR_F and AR_P for k = infinity. In this setting we present an exact polynomial time algorithm for AR_F with general vertex costs, which also works for general edge costs. In contrast to AR_F, AR_P remains NP-hard when k = infinity, and we present a polynomial time approximation scheme for general vertex weights. We believe that our PTAS for AR_P with uniform vertex weights and arbitrary k brings us closer to a PTAS for Euclidean CVRP, for which the main difficulty is to deal with paths of length at most k.
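
To make the AR_F constraints concrete, here is a small feasibility-and-cost checker that follows the definition above: every connected component of the chosen railway network must span at most k vertices and contain an open airport. All names are our illustrative assumptions, not code from the paper.

def ar_cost(vertex_cost, edge_cost, airports, edges, k):
    # union-find over the chosen railway edges
    parent = {v: v for v in vertex_cost}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in vertex_cost:
        comps.setdefault(find(v), []).append(v)
    for comp in comps.values():
        if len(comp) > k or not any(v in airports for v in comp):
            return None  # component too large, or without an open airport
    return (sum(vertex_cost[a] for a in airports)
            + sum(edge_cost[e] for e in edges))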

Cite as

Anna Adamaszek, Antonios Antoniadis, and Tobias Mömke. Airports and Railways: Facility Location Meets Network Design. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{adamaszek_et_al:LIPIcs.STACS.2016.6,
  author =	{Adamaszek, Anna and Antoniadis, Antonios and M\"{o}mke, Tobias},
  title =	{{Airports and Railways: Facility Location Meets Network Design}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.6},
  URN =		{urn:nbn:de:0030-drops-57074},
  doi =		{10.4230/LIPIcs.STACS.2016.6},
  annote =	{Keywords: approximation algorithms, geometric approximation, facility location, network design, PTAS}
}
Document
Simultaneous Feedback Vertex Set: A Parameterized Perspective

Authors: Akanksha Agrawal, Daniel Lokshtanov, Amer E. Mouawad, and Saket Saurabh


Abstract
For a family of graphs F, a graph G, and a positive integer k, the F-DELETION problem asks whether we can delete at most k vertices from G to obtain a graph in F. F-DELETION generalizes many classical graph problems such as Vertex Cover, Feedback Vertex Set, and Odd Cycle Transversal. A graph G = (V, cup_{i=1}^{alpha} E_{i}), where the edge set of G is partitioned into alpha color classes, is called an alpha-edge-colored graph. A natural extension of the F-DELETION problem to edge-colored graphs is the alpha-SIMULTANEOUS F-DELETION problem. In the latter problem, we are given an alpha-edge-colored graph G and the goal is to find a set S of at most k vertices such that each graph G_i \ S, where G_i = (V, E_i) and 1 <= i <= alpha, is in F. In this work, we study alpha-SIMULTANEOUS F-DELETION for F being the family of forests. In other words, we focus on the alpha-SIMULTANEOUS FEEDBACK VERTEX SET (alpha-SIMFVS) problem. Algorithmically, we show that, like its classical counterpart, alpha-SIMFVS parameterized by k is fixed-parameter tractable (FPT) and admits a polynomial kernel, for any fixed constant alpha. In particular, we give an algorithm running in 2^{O(alpha * k)} * n^{O(1)} time and a kernel with O(alpha * k^{3(alpha + 1)}) vertices. The running time of our algorithm implies that alpha-SIMFVS is FPT even when alpha in o(log(n)). We complement this positive result by showing that for alpha in O(log(n)), where n is the number of vertices in the input graph, alpha-SIMFVS becomes W[1]-hard. Our positive results answer one of the open problems posed by Cai and Ye (MFCS 2014).
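
A direct checker for the object defined above may help: S is a simultaneous feedback vertex set exactly when deleting S leaves every color class acyclic. A minimal sketch (all names are ours) using a union-find forest test per color class:

def is_sim_fvs(vertices, colour_classes, S):
    keep = set(vertices) - set(S)
    for edges in colour_classes:              # one edge set E_i per color
        parent = {v: v for v in keep}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            if u in keep and v in keep:
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False              # edge closes a cycle in G_i \ S
                parent[ru] = rv
    return True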

Cite as

Akanksha Agrawal, Daniel Lokshtanov, Amer E. Mouawad, and Saket Saurabh. Simultaneous Feedback Vertex Set: A Parameterized Perspective. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.STACS.2016.7,
  author =	{Agrawal, Akanksha and Lokshtanov, Daniel and Mouawad, Amer E. and Saurabh, Saket},
  title =	{{Simultaneous Feedback Vertex Set: A Parameterized Perspective}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{7:1--7:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.7},
  URN =		{urn:nbn:de:0030-drops-57084},
  doi =		{10.4230/LIPIcs.STACS.2016.7},
  annote =	{Keywords: parameterized complexity, feedback vertex set, kernel, edge-colored graphs}
}
Document
On Regularity of Unary Probabilistic Automata

Authors: S. Akshay, Blaise Genest, Bruno Karelovic, and Nikhil Vyas


Abstract
The quantitative verification of Probabilistic Automata (PA) is undecidable in general. Unary PAs are a simpler model where the choice of action is fixed. Still, the quantitative verification problem is open and known to be as hard as Skolem's problem, a problem on linear recurrence sequences, whose decidability has been open for at least 40 years. In this paper, we approach this problem by studying the languages generated by unary PAs (as defined below), whose regularity would entail the decidability of quantitative verification. Given an initial distribution, we represent the trajectory of a unary PA over time as an infinite word over a finite alphabet, where the n-th letter represents a probability range after n steps. We extend this to a language of trajectories (a set of words), one trajectory for each initial distribution from a (possibly infinite) set. We show that if the eigenvalues of the transition matrix associated with the unary PA are all distinct positive real numbers, then the language is effectively regular. Further, we show that this result is at the boundary of regularity, as non-regular languages can be generated when the restrictions are even slightly relaxed. The regular representation of the language allows us to reason about more general properties, e.g., robustness of a regular property in a neighbourhood around a given distribution.
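
A small numeric sketch of the trajectory word described above: repeatedly apply the transition matrix and record, for each step, which range the probability of a tracked state falls into. The tracked state and the cut points delimiting the ranges are our illustrative assumptions, not taken from the paper.

import numpy as np

def trajectory(M, pi, state, cuts, steps):
    p = np.array(pi, dtype=float)
    word = []
    for _ in range(steps):
        p = p @ M                                     # one step of the unary PA
        word.append(sum(p[state] > c for c in cuts))  # index of the range hit
    return word

M = np.array([[0.5, 0.5],
              [0.2, 0.8]])                            # row-stochastic matrix
print(trajectory(M, [1.0, 0.0], state=0, cuts=[0.3, 0.6], steps=6))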

Cite as

S. Akshay, Blaise Genest, Bruno Karelovic, and Nikhil Vyas. On Regularity of Unary Probabilistic Automata. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{akshay_et_al:LIPIcs.STACS.2016.8,
  author =	{Akshay, S. and Genest, Blaise and Karelovic, Bruno and Vyas, Nikhil},
  title =	{{On Regularity of Unary Probabilistic Automata}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.8},
  URN =		{urn:nbn:de:0030-drops-57093},
  doi =		{10.4230/LIPIcs.STACS.2016.8},
  annote =	{Keywords: Probabilistic automata, Symbolic dynamics, Markov chains, Skolem problem, Regularity}
}
Document
The Expanding Search Ratio of a Graph

Authors: Spyros Angelopoulos, Christoph Dürr, and Thomas Lidbetter


Abstract
We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. Most of the previous work on this problem considers the pathwise cost formulation, in which the cost incurred by the searcher is the overall time to locate the target, assuming that the searcher moves at unit speed. More recent work introduced the setting of expanding search in which the searcher incurs cost only upon visiting previously unexplored areas of the graph. Such a paradigm is useful in modeling problems in which the cost of re-exploration is negligible (such as coal mining). In our work we study algorithmic and computational issues of expanding search, for a variety of search environments including general graphs, trees and star-like graphs. In particular, we rely on the deterministic and randomized search ratio as the performance measures of search strategies, which were originally introduced by Koutsoupias and Papadimitriou [ICALP 1996] in the context of pathwise search. The search ratio is essentially the best competitive ratio among all possible strategies. Our main objective is to explore how the transition from pathwise to expanding search affects the competitive analysis, which has applications to optimization problems beyond the strict boundaries of search problems.

Cite as

Spyros Angelopoulos, Christoph Dürr, and Thomas Lidbetter. The Expanding Search Ratio of a Graph. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{angelopoulos_et_al:LIPIcs.STACS.2016.9,
  author =	{Angelopoulos, Spyros and D\"{u}rr, Christoph and Lidbetter, Thomas},
  title =	{{The Expanding Search Ratio of a Graph}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.9},
  URN =		{urn:nbn:de:0030-drops-57109},
  doi =		{10.4230/LIPIcs.STACS.2016.9},
  annote =	{Keywords: Search games, randomized algorithms, competitive analysis, game theory}
}
Document
Derandomizing Isolation Lemma for K3,3-free and K5-free Bipartite Graphs

Authors: Rahul Arora, Ashu Gupta, Rohit Gurjar, and Raghunath Tewari


Abstract
The perfect matching problem has a randomized NC algorithm, using the celebrated Isolation Lemma of Mulmuley, Vazirani and Vazirani. The Isolation Lemma states that assigning random weights to the edges of a graph ensures, with good probability, that the graph has a unique minimum-weight perfect matching. We derandomize this lemma for K3,3-free and K5-free bipartite graphs. That is, we give a deterministic log-space construction of such a weight assignment for these graphs. Such a construction was known previously for planar bipartite graphs. Our result implies that the perfect matching problem for K3,3-free and K5-free bipartite graphs is in SPL. It also gives an alternate proof for an already known result: reachability for K3,3-free and K5-free graphs is in UL.
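
A toy experiment illustrating the Isolation Lemma as stated above (brute force over a tiny bipartite graph; our sketch, not the paper's deterministic construction): random integer edge weights make the minimum-weight perfect matching unique with good probability.

import itertools, random

def min_matchings(edges, n, weight):
    # count the minimum-weight perfect matchings by brute force
    best, count = None, 0
    for perm in itertools.permutations(range(n)):
        if all((i, perm[i]) in edges for i in range(n)):
            w = sum(weight[(i, perm[i])] for i in range(n))
            if best is None or w < best:
                best, count = w, 1
            elif w == best:
                count += 1
    return best, count

edges = {(0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)}
weight = {e: random.randint(1, 2 * len(edges)) for e in edges}
print(min_matchings(edges, 3, weight))  # count 1 means the minimum is isolated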

Cite as

Rahul Arora, Ashu Gupta, Rohit Gurjar, and Raghunath Tewari. Derandomizing Isolation Lemma for K3,3-free and K5-free Bipartite Graphs. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{arora_et_al:LIPIcs.STACS.2016.10,
  author =	{Arora, Rahul and Gupta, Ashu and Gurjar, Rohit and Tewari, Raghunath},
  title =	{{Derandomizing Isolation Lemma for K3,3-free and K5-free Bipartite Graphs}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.10},
  URN =		{urn:nbn:de:0030-drops-57116},
  doi =		{10.4230/LIPIcs.STACS.2016.10},
  annote =	{Keywords: bipartite matching, derandomization, isolation lemma, SPL, minor-free graph}
}
Document
Entropy Games and Matrix Multiplication Games

Authors: Eugene Asarin, Julien Cervelle, Aldric Degorre, Catalin Dima, Florian Horn, and Victor Kozyakin


Abstract
Two intimately related new classes of games are introduced and studied: entropy games (EGs) and matrix multiplication games (MMGs). An EG is played on a finite arena by two-and-a-half players: Despot, Tribune and the non-deterministic People. Despot wants to make the set of possible People's behaviors as small as possible, while Tribune wants to make it as large as possible. An MMG is played by two players that alternately write matrices from some predefined finite sets. One wants to maximize the growth rate of the product, and the other to minimize it. We show that in general MMGs are undecidable in quite a strong sense. On the positive side, EGs correspond to a subclass of MMGs, and we prove that such MMGs and EGs are determined, and that the optimal strategies are simple. The complexity of solving such games is in NP cap coNP.
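
The quantity the two MMG players fight over is the growth rate of a matrix product. A small numeric sketch with fixed, illustrative choice sequences standing in for actual strategies (the matrices are arbitrary examples, not from the paper):

import numpy as np

def growth_rate(seq):
    # estimate (1/t) * log ||M_1 ... M_t|| for a concrete matrix sequence
    P = np.eye(seq[0].shape[0])
    for M in seq:
        P = P @ M
    return np.log(np.linalg.norm(P)) / len(seq)

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # maximizer's set, one matrix
B = np.array([[1.0, 0.0], [1.0, 1.0]])   # minimizer's set, one matrix
print(growth_rate([A, B] * 10))          # alternating play of length 20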

Cite as

Eugene Asarin, Julien Cervelle, Aldric Degorre, Catalin Dima, Florian Horn, and Victor Kozyakin. Entropy Games and Matrix Multiplication Games. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{asarin_et_al:LIPIcs.STACS.2016.11,
  author =	{Asarin, Eugene and Cervelle, Julien and Degorre, Aldric and Dima, Catalin and Horn, Florian and Kozyakin, Victor},
  title =	{{Entropy Games and Matrix Multiplication Games}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.11},
  URN =		{urn:nbn:de:0030-drops-57129},
  doi =		{10.4230/LIPIcs.STACS.2016.11},
  annote =	{Keywords: game theory, entropy, joint spectral radius}
}
Document
Good Predictions Are Worth a Few Comparisons

Authors: Nicolas Auger, Cyril Nicaud, and Carine Pivoteau


Abstract
Most modern processors are heavily parallelized and use predictors to guess the outcome of conditional branches, in order to avoid costly stalls in their pipelines. We propose predictor-friendly versions of two classical algorithms: exponentiation by squaring and binary search in a sorted array. These variants result in fewer mispredictions on average, at the cost of an increased number of operations. These theoretical results are supported by experiments that show that our algorithms perform significantly better than the standard ones, for primitive data types.
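
For reference, the classical exponentiation-by-squaring loop that the paper takes as its starting point (the predictor-friendly variant itself is not reproduced here): the branch on each exponent bit is data-dependent and therefore hard for a branch predictor to guess.

def power(x, n):
    result = 1
    while n > 0:
        if n & 1:          # data-dependent branch on the current exponent bit
            result *= x
        x *= x
        n >>= 1
    return result

print(power(3, 13))  # 1594323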

Cite as

Nicolas Auger, Cyril Nicaud, and Carine Pivoteau. Good Predictions Are Worth a Few Comparisons. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{auger_et_al:LIPIcs.STACS.2016.12,
  author =	{Auger, Nicolas and Nicaud, Cyril and Pivoteau, Carine},
  title =	{{Good Predictions Are Worth a Few Comparisons}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.12},
  URN =		{urn:nbn:de:0030-drops-57135},
  doi =		{10.4230/LIPIcs.STACS.2016.12},
  annote =	{Keywords: branch misses, binary search, exponentiation by squaring, Markov chains}
}
Document
Dense Subset Sum May Be the Hardest

Authors: Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof


Abstract
The SUBSET SUM problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for SUBSET SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-delta)*n})-time algorithm, with some constant delta > 0. Continuing an earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every epsilon > 0 we give a truly faster algorithm for instances with beta <= 2^{(0.5-epsilon)*n}, as well as instances with beta >= 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for SUBSET SUM and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
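
The bin-size parameter beta defined above can be computed by brute force on tiny instances, which may make the definition concrete; a minimal sketch:

from collections import Counter
from itertools import combinations

def max_bin_size(nums):
    # beta: the largest number of subsets sharing the same sum
    sums = Counter()
    for r in range(len(nums) + 1):
        for c in combinations(nums, r):
            sums[sum(c)] += 1
    return max(sums.values())

print(max_bin_size([1, 2, 3, 4]))  # 2, e.g. {1, 4} and {2, 3} both sum to 5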

Cite as

Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Dense Subset Sum May Be the Hardest. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{austrin_et_al:LIPIcs.STACS.2016.13,
  author =	{Austrin, Per and Kaski, Petteri and Koivisto, Mikko and Nederlof, Jesper},
  title =	{{Dense Subset Sum May Be the Hardest}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.13},
  URN =		{urn:nbn:de:0030-drops-57143},
  doi =		{10.4230/LIPIcs.STACS.2016.13},
  annote =	{Keywords: subset sum, additive combinatorics, exponential-time algorithm, homomorphic hashing, Littlewood-Offord problem}
}
Document
Computing the L1 Geodesic Diameter and Center of a Polygonal Domain

Authors: Sang Won Bae, Matias Korman, Joseph S. B. Mitchell, Yoshio Okamoto, Valentin Polishchuk, and Haitao Wang


Abstract
For a polygonal domain with h holes and a total of n vertices, we present algorithms that compute the L_1 geodesic diameter in O(n^2+h^4) time and the L_1 geodesic center in O((n^4+n^2 h^4)*alpha(n)) time, where alpha(.) denotes the inverse Ackermann function. No algorithms were known for these problems before. For the Euclidean counterpart, the best algorithms compute the geodesic diameter in O(n^{7.73}) or O(n^7(h+log(n))) time, and compute the geodesic center in O(n^{12+epsilon}) time. Therefore, our algorithms are much faster than the algorithms for the Euclidean problems. Our algorithms are based on several interesting observations on L_1 shortest paths in polygonal domains.

Cite as

Sang Won Bae, Matias Korman, Joseph S. B. Mitchell, Yoshio Okamoto, Valentin Polishchuk, and Haitao Wang. Computing the L1 Geodesic Diameter and Center of a Polygonal Domain. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{wonbae_et_al:LIPIcs.STACS.2016.14,
  author =	{Won Bae, Sang and Korman, Matias and Mitchell, Joseph S. B. and Okamoto, Yoshio and Polishchuk, Valentin and Wang, Haitao},
  title =	{{Computing the L1 Geodesic Diameter and Center of a Polygonal Domain}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.14},
  URN =		{urn:nbn:de:0030-drops-57151},
  doi =		{10.4230/LIPIcs.STACS.2016.14},
  annote =	{Keywords: geodesic diameter, geodesic center, shortest paths, polygonal domains, L1 metric}
}
Document
Are Short Proofs Narrow? QBF Resolution is not Simple

Authors: Olaf Beyersdorff, Leroy Chew, Meena Mahajan, and Anil Shukla


Abstract
The groundbreaking paper 'Short proofs are narrow - resolution made simple' by Ben-Sasson and Wigderson (J. ACM 2001) introduces what is today arguably the main technique to obtain resolution lower bounds: to show a lower bound for the width of proofs. Another important measure for resolution is space, and in their fundamental work, Atserias and Dalmau (J. Comput. Syst. Sci. 2008) show that space lower bounds again can be obtained via width lower bounds. Here we assess whether similar techniques are effective for resolution calculi for quantified Boolean formulas (QBF). A mixed picture emerges. Our main results show that both the relations between size and width as well as between space and width drastically fail in Q-resolution, even in its weaker tree-like version. On the other hand, we obtain positive results for the expansion-based resolution systems Forall-Exp+Res and IR-calc, however only in the weak tree-like models. Technically, our negative results rely on showing width lower bounds together with simultaneous upper bounds for size and space. For our positive results we exhibit space and width-preserving simulations between QBF resolution calculi.

Cite as

Olaf Beyersdorff, Leroy Chew, Meena Mahajan, and Anil Shukla. Are Short Proofs Narrow? QBF Resolution is not Simple. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{beyersdorff_et_al:LIPIcs.STACS.2016.15,
  author =	{Beyersdorff, Olaf and Chew, Leroy and Mahajan, Meena and Shukla, Anil},
  title =	{{Are Short Proofs Narrow? QBF Resolution is not Simple}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.15},
  URN =		{urn:nbn:de:0030-drops-57164},
  doi =		{10.4230/LIPIcs.STACS.2016.15},
  annote =	{Keywords: proof complexity, QBF, resolution, lower bound techniques, simulations}
}
Document
Faster Algorithms for the Constrained k-Means Problem

Authors: Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar


Abstract
The classical center based clustering problems such as k-means/median/center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. For instance, consider the r-gather clustering problem where there is an additional constraint that each of the clusters should have at least r points or the capacitated clustering problem where there is an upper bound on the cluster sizes. Consider a variant of the k-means problem that may be regarded as a general version of such problems. Here, the optimal clusters O_1, ..., O_k are an arbitrary partition of the dataset and the goal is to output k centers c_1, ..., c_k such that the objective function sum_{i=1}^{k} sum_{x in O_{i}} ||x - c_{i}||^2 is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of k centers will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such k centers such that at least one of these k centers behaves well. Given an error parameter epsilon > 0, let l denote the size of the smallest list of k centers such that at least one of the k centers gives a (1+epsilon) approximation w.r.t. the objective function above. In this paper, we show an upper bound on l by giving a randomized algorithm that outputs a list of 2^{~O(k/epsilon)} k centers. We also give a closely matching lower bound of 2^{~Omega(k/sqrt{epsilon})}. Moreover, our algorithm runs in time O(n * d * 2^{~O(k/epsilon)}). This is a significant improvement over the previous result of Ding and Xu, who gave an algorithm with running time O(n * d * (log{n})^{k} * 2^{poly(k/epsilon)}) and output a list of size O((log{n})^k * 2^{poly(k/epsilon)}). Our techniques generalize to the k-median problem and to many other settings where non-Euclidean distance measures are involved.
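
The objective function above is easy to state in code, which may make the setting concrete; a minimal sketch (names are ours) evaluating a given partition against given centers:

import numpy as np

def constrained_kmeans_cost(clusters, centers):
    # sum over clusters O_i of the squared distances ||x - c_i||^2
    return sum(np.sum((np.asarray(O) - np.asarray(c)) ** 2)
               for O, c in zip(clusters, centers))

clusters = [[(0, 0), (0, 2)], [(5, 5)]]
centers = [(0, 1), (5, 5)]
print(constrained_kmeans_cost(clusters, centers))  # 1 + 1 + 0 = 2.0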

Cite as

Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar. Faster Algorithms for the Constrained k-Means Problem. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.STACS.2016.16,
  author =	{Bhattacharya, Anup and Jaiswal, Ragesh and Kumar, Amit},
  title =	{{Faster Algorithms for the Constrained k-Means Problem}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.16},
  URN =		{urn:nbn:de:0030-drops-57179},
  doi =		{10.4230/LIPIcs.STACS.2016.16},
  annote =	{Keywords: k-means, k-median, approximation algorithm, sampling}
}
Document
A Catalog of EXISTS-R-Complete Decision Problems About Nash Equilibria in Multi-Player Games

Authors: Vittorio Bilò and Marios Mavronicolas


Abstract
[Schaefer and Stefankovic, Theory of Computing Systems, 2015] provided an explicit formulation of EXISTS-R as the class capturing the complexity of deciding the Existential Theory of the Reals, and established that deciding, given a 3-player game, whether or not it has a Nash equilibrium with no probability exceeding a given rational is EXISTS-R-complete. Four more decision problems about Nash equilibria for 3-player games were very recently shown EXISTS-R-complete via a chain of individual, problem-specific reductions in [Garg et al., Proceedings of ICALP 2015]; determining more such EXISTS-R-complete problems was posed there as an open problem. In this work, we deliver an extensive catalog of EXISTS-R-complete decision problems about Nash equilibria in 3-player games, thus resolving completely the open problem from [Garg et al., Proceedings of ICALP 2015]. Towards this end, we present a single and very simple, unifying reduction from the EXISTS-R-complete decision problem from [Schaefer and Stefankovic, Theory of Computing Systems, 2015] to (almost) all the decision problems about Nash equilibria that were before shown NP-complete for 2-player games in [Bilo and Mavronicolas, Proceedings of SAGT 2012; Conitzer and Sandholm, Games and Economic Behavior, 2008; Gilboa and Zemel, Games and Economic Behavior, 1989]. Encompassed in the catalog are the four decision problems shown EXISTS-R-complete in [Garg et al., Proceedings of ICALP 2015].

Cite as

Vittorio Bilò and Marios Mavronicolas. A Catalog of EXISTS-R-Complete Decision Problems About Nash Equilibria in Multi-Player Games. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.STACS.2016.17,
  author =	{Bil\`{o}, Vittorio and Mavronicolas, Marios},
  title =	{{A Catalog of EXISTS-R-Complete Decision Problems About Nash Equilibria in Multi-Player Games}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.17},
  URN =		{urn:nbn:de:0030-drops-57189},
  doi =		{10.4230/LIPIcs.STACS.2016.17},
  annote =	{Keywords: Nash equilibrium, complexity of equilibria, EXISTS-R-completeness}
}
Document
Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti


Abstract
Let G be an n-node and m-edge positively real-weighted undirected graph. For any given integer f >= 1, we study the problem of designing a sparse f-edge-fault-tolerant (f-EFT) sigma-approximate single-source shortest-path tree (sigma-ASPT), namely a subgraph of G having as few edges as possible and which, following the failure of a set F of at most f edges in G, contains paths from a fixed source that are stretched at most by a factor of sigma. In this respect, we provide an algorithm that efficiently computes an f-EFT (2|F|+1)-ASPT of size O(f n). Our structure improves on a previous related construction designed for unweighted graphs, having the same size but guaranteeing a larger stretch factor of 3(f+1), plus an additive term of (f+1)*log(n). Then, we show how to convert our structure into an efficient f-EFT single-source distance oracle (SSDO), that can be built in ~{O}(f m) time, has size O(f n * log^2(n)), and is able to report, after the failure of the edge set F, in O(|F|^2 * log^2(n)) time a (2|F|+1)-approximate distance from the source to any node, and a corresponding approximate path in the same amount of time plus the path's size. Such an oracle is obtained by handling another fundamental problem, namely that of updating a minimum spanning forest (MSF) of G after a batch of k simultaneous edge modifications (i.e., edge insertions, deletions and weight changes) is performed. For this problem, we build in O(m * log^3(n)) time a sensitivity oracle of size O(m * log^2(n)), that reports in O(k^2 * log^2(n)) time the (at most 2k) edges either exiting from or entering into the MSF. As a result of independent interest, it is worth noticing that our MSF oracle can be employed to handle arbitrary sequences of o(sqrt[4]{n}/log(n)) (non-simultaneous) updates with a worst-case time per update of o(sqrt{n}). Thus, for relatively short sequences of updates, our oracle should be preferred over the best-known (in a worst-case sense) MSF fully-dynamic algorithm, requiring O(sqrt{n}) time per update.

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.STACS.2016.18,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido},
  title =	{{Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.18},
  URN =		{urn:nbn:de:0030-drops-57196},
  doi =		{10.4230/LIPIcs.STACS.2016.18},
  annote =	{Keywords: fault-tolerant shortest-path tree, distance oracle, minimum spanning tree}
}
Document
On a Fragment of AMSO and Tiling Systems

Authors: Achim Blumensath, Thomas Colcombet, and Pawel Parys


Abstract
We prove that satisfiability over infinite words is decidable for a fragment of asymptotic monadic second-order logic. In this fragment we only allow formulae of the form "exists t forall s exists r: phi(r,s,t)", where phi does not use quantifiers over number variables, and variables r and s can only be used simultaneously, in subformulae of the form s < f(x) <= r.
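
In LaTeX notation, the fragment consists of sentences of the shape

\exists t \, \forall s \, \exists r \;\; \varphi(r, s, t), \qquad \text{with } r, s \text{ occurring only together, in subformulae } s < f(x) \le r,

where phi contains no quantifiers over number variables.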

Cite as

Achim Blumensath, Thomas Colcombet, and Pawel Parys. On a Fragment of AMSO and Tiling Systems. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{blumensath_et_al:LIPIcs.STACS.2016.19,
  author =	{Blumensath, Achim and Colcombet, Thomas and Parys, Pawel},
  title =	{{On a Fragment of AMSO and Tiling Systems}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.19},
  URN =		{urn:nbn:de:0030-drops-57202},
  doi =		{10.4230/LIPIcs.STACS.2016.19},
  annote =	{Keywords: monadic second-order logic, boundedness, tiling problems}
}
Document
The Complexity of Phylogeny Constraint Satisfaction

Authors: Manuel Bodirsky, Peter Jonsson, and Trung Van Pham


Abstract
We systematically study the computational complexity of a broad class of computational problems in phylogenetic reconstruction. The class contains for example the rooted triple consistency problem, forbidden subtree problems, the quartet consistency problem, and many other problems studied in the bioinformatics literature. The studied problems can be described as constraint satisfaction problems where the constraints have a first-order definition over the rooted triple relation. We show that every such phylogeny problem can be solved in polynomial time or is NP-complete. On the algorithmic side, we generalize a well-known polynomial-time algorithm of Aho, Sagiv, Szymanski, and Ullman for the rooted triple consistency problem. Our algorithm repeatedly solves linear equation systems to construct a solution in polynomial time. We then show that every phylogeny problem that cannot be solved by our algorithm is NP-complete. Our classification establishes a dichotomy for a large class of infinite structures that we believe is of independent interest in universal algebra, model theory, and topology. The proof of our main result combines results and techniques from various research areas: a recent classification of the model-complete cores of the reducts of the homogeneous binary branching C-relation, Leeb’s Ramsey theorem for rooted trees, and universal algebra.
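
For orientation, here is a compact sketch of the classical algorithm of Aho, Sagiv, Szymanski, and Ullman for rooted triple consistency, which the paper generalizes. A triple (a, b, c) encodes the rooted triple ab|c, i.e. a and b are closer to each other than either is to c. This is our own illustrative rendition, not the paper's generalized algorithm.

def build(leaves, triples):
    if len(leaves) == 1:
        return leaves[0]
    # link a and b for every triple ab|c over the current leaves
    parent = {v: v for v in leaves}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b, c in triples:
        parent[find(a)] = find(b)
    comps = {}
    for v in leaves:
        comps.setdefault(find(v), []).append(v)
    if len(comps) == 1:
        return None                      # the triples are inconsistent
    children = []
    for comp in comps.values():
        sub = [t for t in triples if all(x in comp for x in t)]
        child = build(comp, sub)
        if child is None:
            return None
        children.append(child)
    return tuple(children)

print(build(['a', 'b', 'c', 'd'], [('a', 'b', 'c'), ('c', 'd', 'a')]))
# (('a', 'b'), ('c', 'd'))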

Cite as

Manuel Bodirsky, Peter Jonsson, and Trung Van Pham. The Complexity of Phylogeny Constraint Satisfaction. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bodirsky_et_al:LIPIcs.STACS.2016.20,
  author =	{Bodirsky, Manuel and Jonsson, Peter and Van Pham, Trung},
  title =	{{The Complexity of Phylogeny Constraint Satisfaction}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.20},
  URN =		{urn:nbn:de:0030-drops-57218},
  doi =		{10.4230/LIPIcs.STACS.2016.20},
  annote =	{Keywords: constraint satisfaction problems, computational complexity, phylogenetic reconstruction, ramsey theory, model theory}
}
Document
The MSO+U Theory of (N,<) Is Undecidable

Authors: Mikolaj Bojanczyk, Pawel Parys, and Szymon Torunczyk


Abstract
We consider the logic MSO+U, which is monadic second-order logic extended with the unbounding quantifier. The unbounding quantifier is used to say that a property of finite sets holds for sets of arbitrarily large size. We prove that the logic is undecidable on infinite words, i.e. the MSO+U theory of (N,<) is undecidable. This settles an open problem about the logic, and improves a previous undecidability result, which used infinite trees and additional axioms from set theory.
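
The semantics of the unbounding quantifier, as described informally above, can be written as

\mathsf{U} X.\, \varphi(X) \quad \Longleftrightarrow \quad \forall n \in \mathbb{N} \;\; \exists X \text{ finite}: \; |X| > n \;\wedge\; \varphi(X).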

Cite as

Mikolaj Bojanczyk, Pawel Parys, and Szymon Torunczyk. The MSO+U Theory of (N,<) Is Undecidable. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 21:1-21:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{bojanczyk_et_al:LIPIcs.STACS.2016.21,
  author =	{Bojanczyk, Mikolaj and Parys, Pawel and Torunczyk, Szymon},
  title =	{{The MSO+U Theory of (N,<) Is Undecidable}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{21:1--21:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.21},
  URN =		{urn:nbn:de:0030-drops-57223},
  doi =		{10.4230/LIPIcs.STACS.2016.21},
  annote =	{Keywords: automata, logic, unbounding quantifier, bounds, undecidability}
}
Document
Time-Approximation Trade-offs for Inapproximable Problems

Authors: Édouard Bonnet, Michael Lampis, and Vangelis Th. Paschos


Abstract
In this paper we focus on problems which do not admit a constant-factor approximation in polynomial time and explore how quickly their approximability improves as the allowed running time is gradually increased from polynomial to (sub-)exponential. We tackle a number of problems: For MIN INDEPENDENT DOMINATING SET, MAX INDUCED PATH, FOREST and TREE, for any r(n), a simple, known scheme gives an approximation ratio of r in time roughly r^{n/r}. We show that, for most values of r, if this running time could be significantly improved, the ETH would fail. For MAX MINIMAL VERTEX COVER we give a non-trivial sqrt{r}-approximation in time 2^{n/r}. We match this with a similarly tight result. We also give a log(r)-approximation for MIN ATSP in time 2^{n/r} and an r-approximation for MAX GRUNDY COLORING in time r^{n/r}. Furthermore, we show that MIN SET COVER exhibits a curious behavior in this super-polynomial setting: for any delta>0 it admits an m^delta-approximation, where m is the number of sets, in just quasi-polynomial time. We observe that if such ratios could be achieved in polynomial time, the ETH or the Projection Games Conjecture would fail.
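
To see the shape of this trade-off concretely, here is the elementary arithmetic behind the generic scheme's running time (our instantiation, not a claim from the paper):

    r^{n/r} = 2^{(n/r) * log(r)}, so constant r >= 2 gives time 2^{Theta(n)},
    while r = log(n) gives 2^{(n * loglog(n))/log(n)} = 2^{o(n)};

even a slowly growing approximation ratio already pushes the running time below exponential.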

Cite as

Édouard Bonnet, Michael Lampis, and Vangelis Th. Paschos. Time-Approximation Trade-offs for Inapproximable Problems. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{bonnet_et_al:LIPIcs.STACS.2016.22,
  author =	{Bonnet, \'{E}douard and Lampis, Michael and Paschos, Vangelis Th.},
  title =	{{Time-Approximation Trade-offs for Inapproximable Problems}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.22},
  URN =		{urn:nbn:de:0030-drops-57236},
  doi =		{10.4230/LIPIcs.STACS.2016.22},
  annote =	{Keywords: Algorithm, Complexity, Polynomial and Subexponential Approximation, Reduction, Inapproximability}
}
Document
External Memory Three-Sided Range Reporting and Top-k Queries with Sublogarithmic Updates

Authors: Gerth Stølting Brodal


Abstract
An external memory data structure is presented for maintaining a dynamic set of N two-dimensional points under the insertion and deletion of points, and supporting unsorted 3-sided range reporting queries and top-k queries, where top-k queries report the k points with highest y-value within a given x-range. For any constant 0 < epsilon <= 1/2, a data structure is constructed that supports updates in amortized O(1/(epsilon * B^{1-epsilon}) * log_B(N)) IOs and queries in amortized O(1/epsilon * (log_B(N) + K/B)) IOs, where B is the external memory block size, and K is the size of the output to the query (for top-k queries K is the minimum of k and the number of points in the query interval). The data structure uses linear space. The update bound is a significant factor B^{1-epsilon} improvement over the previous best update bounds for these two query problems, while staying within the same query and space bounds.
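
As a quick sanity check on these bounds (our instantiation of the stated formulas, nothing more), take the extreme value epsilon = 1/2:

    updates: O(1/((1/2) * B^{1/2}) * log_B(N)) = O(log_B(N) / sqrt(B)) IOs,
    queries: O(2 * (log_B(N) + K/B)) = O(log_B(N) + K/B) IOs,

so updates beat the classical O(log_B(N)) update cost by exactly the advertised factor B^{1-epsilon} = sqrt(B).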

Cite as

Gerth Stølting Brodal. External Memory Three-Sided Range Reporting and Top-k Queries with Sublogarithmic Updates. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{brodal:LIPIcs.STACS.2016.23,
  author =	{Brodal, Gerth St{\o}lting},
  title =	{{External Memory Three-Sided Range Reporting and Top-k Queries with Sublogarithmic Updates}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.23},
  URN =		{urn:nbn:de:0030-drops-57241},
  doi =		{10.4230/LIPIcs.STACS.2016.23},
  annote =	{Keywords: External memory, priority search tree, 3-sided range reporting, top-k queries}
}
Document
Catalytic Space: Non-determinism and Hierarchy

Authors: Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman


Abstract
Catalytic computation, defined by Buhrman, Cleve, Koucký, Loff and Speelman (STOC 2014), is a space-bounded computation where in addition to our working memory we have an exponentially larger auxiliary memory which is full; the auxiliary memory may be used throughout the computation, but it must be restored to its initial content by the end of the computation. Motivated by the surprising power of this model, we set out to study the non-deterministic version of catalytic computation. We establish that non-deterministic catalytic log-space is contained in ZPP, which is the same bound known for its deterministic counterpart, and we prove that non-deterministic catalytic space is closed under complement (under a standard derandomization assumption). Furthermore, we establish hierarchy theorems for non-deterministic and deterministic catalytic computation.

Cite as

Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman. Catalytic Space: Non-determinism and Hierarchy. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{buhrman_et_al:LIPIcs.STACS.2016.24,
  author =	{Buhrman, Harry and Kouck\'{y}, Michal and Loff, Bruno and Speelman, Florian},
  title =	{{Catalytic Space: Non-determinism and Hierarchy}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.24},
  URN =		{urn:nbn:de:0030-drops-57258},
  doi =		{10.4230/LIPIcs.STACS.2016.24},
  annote =	{Keywords: catalytic computation, Immerman–Szelepcs\'{e}nyi theorem, space hierarchy}
}
Document
Testing Shape Restrictions of Discrete Distributions

Authors: Clément L. Canonne, Ilias Diakonikolas, Themis Gouleakis, and Ronitt Rubinfeld


Abstract
We study the question of testing structured properties (classes) of discrete distributions. Specifically, given sample access to an arbitrary distribution D over [n] and a property P, the goal is to distinguish between the case that D is in P and the case that the l_1 distance from D to P is greater than epsilon. We develop a general algorithm for this question, which applies to a large range of "shape-constrained" properties, including monotone, log-concave, t-modal, piecewise-polynomial, and Poisson Binomial distributions. Moreover, for all cases considered, our algorithm has near-optimal sample complexity with regard to the domain size and is computationally efficient. For most of these classes, we provide the first non-trivial tester in the literature. In addition, we also describe a generic method to prove lower bounds for this problem, and use it to show that our upper bounds are nearly tight. Finally, we extend some of our techniques to tolerant testing, deriving nearly-tight upper and lower bounds for the corresponding questions.

Cite as

Clément L. Canonne, Ilias Diakonikolas, Themis Gouleakis, and Ronitt Rubinfeld. Testing Shape Restrictions of Discrete Distributions. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{canonne_et_al:LIPIcs.STACS.2016.25,
  author =	{Canonne, Cl\'{e}ment L. and Diakonikolas, Ilias and Gouleakis, Themis and Rubinfeld, Ronitt},
  title =	{{Testing Shape Restrictions of Discrete Distributions}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.25},
  URN =		{urn:nbn:de:0030-drops-57260},
  doi =		{10.4230/LIPIcs.STACS.2016.25},
  annote =	{Keywords: property testing, probability distributions, statistics, lower bounds}
}
Document
Deciding Circular-Arc Graph Isomorphism in Parameterized Logspace

Authors: Maurice Chandoo


Abstract
We compute a canonical circular-arc representation for a given circular-arc (CA) graph which implies solving the isomorphism and recognition problem for this class. To accomplish this we split the class of CA graphs into uniform and non-uniform ones and employ a generalized version of the argument given by Köbler et al. (2013) that has been used to show that the subclass of Helly CA graphs can be canonized in logspace. For uniform CA graphs our approach works in logspace; moreover, since Helly CA graphs are a strict subset of uniform CA graphs, our result generalizes the canonization result for Helly CA graphs. In the non-uniform case a specific set Omega of ambiguous vertices arises. By choosing the parameter k to be the cardinality of Omega, this obstacle can be overcome by brute force. This leads to an O(k + log(n)) space algorithm that computes a canonical representation for non-uniform, and therefore all, CA graphs.

Cite as

Maurice Chandoo. Deciding Circular-Arc Graph Isomorphism in Parameterized Logspace. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{chandoo:LIPIcs.STACS.2016.26,
  author =	{Chandoo, Maurice},
  title =	{{Deciding Circular-Arc Graph Isomorphism in Parameterized Logspace}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{26:1--26:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.26},
  URN =		{urn:nbn:de:0030-drops-57275},
  doi =		{10.4230/LIPIcs.STACS.2016.26},
  annote =	{Keywords: graph isomorphism, canonical representation, parameterized algorithm}
}
Document
Bottleneck Paths and Trees and Deterministic Graphical Games

Authors: Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick


Abstract
Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck Spanning Tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in O(m * log^*(n)) time, where m is the number of edges and n is the number of vertices in the graph. We present a slightly improved randomized algorithm for these problems with an expected running time of O(m * beta(m,n)), where beta(m,n) = min{k >= 1 | log^{(k)}n <= m/n } <= log^*(n) - log^*(m/n)+1. This is the first improvement for these problems in over 25 years. In particular, if m >= n * log^{(k)} n, for some constant k, the expected running time of the new algorithm is O(m). Our algorithm, like that of Gabow and Tarjan, works in the comparison model. We also observe that in the word-RAM model, both problems can be solved deterministically in O(m) time. Finally, we solve an open problem of Andersson et al., giving a deterministic O(m)-time comparison-based algorithm for solving deterministic 2-player turn-based zero-sum terminal payoff games, also known as Deterministic Graphical Games (DGG).
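
The problem statement itself is easy to pin down in code. The sketch below is a textbook O(m log m) baseline that binary-searches the answer over the sorted distinct edge weights with a BFS reachability test; it is emphatically not the O(m * beta(m,n)) algorithm of the paper, and all naming is ours.

    from collections import deque

    def bottleneck_path_value(n, edges, s, t):
        # Minimum over all s-t paths of the maximum edge weight on the path,
        # in a directed graph on vertices 0..n-1 with edges (u, v, w).
        # Returns None if t is unreachable from s.
        if s == t:
            return 0  # empty path, by convention
        weights = sorted({w for _, _, w in edges})

        def reachable(limit):
            adj = [[] for _ in range(n)]
            for u, v, w in edges:
                if w <= limit:
                    adj[u].append(v)
            seen, queue = {s}, deque([s])
            while queue:
                u = queue.popleft()
                if u == t:
                    return True
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            return False

        if not weights or not reachable(weights[-1]):
            return None
        lo, hi = 0, len(weights) - 1
        while lo < hi:  # smallest weight whose threshold graph reaches t
            mid = (lo + hi) // 2
            if reachable(weights[mid]):
                hi = mid
            else:
                lo = mid + 1
        return weights[lo]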

Cite as

Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick. Bottleneck Paths and Trees and Deterministic Graphical Games. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 27:1-27:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{chechik_et_al:LIPIcs.STACS.2016.27,
  author =	{Chechik, Shiri and Kaplan, Haim and Thorup, Mikkel and Zamir, Or and Zwick, Uri},
  title =	{{Bottleneck Paths and Trees and Deterministic Graphical Games}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{27:1--27:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.27},
  URN =		{urn:nbn:de:0030-drops-57283},
  doi =		{10.4230/LIPIcs.STACS.2016.27},
  annote =	{Keywords: bottleneck paths, comparison model, deterministic graphical games}
}
Document
Packing Groups of Items into Multiple Knapsacks

Authors: Lin Chen and Guochuan Zhang


Abstract
We consider a natural generalization of the classical multiple knapsack problem in which instead of packing single items we are packing groups of items. In this problem, we have multiple knapsacks and a set of items which are partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group could be. Deciding if a group of items of total weight 2 could be packed into two knapsacks of unit capacity is already NP-hard, which rules out a constant-factor approximation algorithm for the problem in general. We then focus on the parameterized version where the total weight of items in each group is bounded by a factor delta of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to delta are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of delta, changes substantially, in clear contrast to the classical multiple knapsack problem.
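
To make the model concrete, here is a toy brute-force evaluator of the objective (exponential in the number of items, so strictly for intuition; the instance encoding is ours). The example in the comments illustrates the weight-2/two-unit-knapsacks boundary that the hardness claim refers to.

    from itertools import product

    def max_group_profit(weights, groups, capacities):
        # weights[i]: weight of item i; groups: list of (profit, item indices);
        # capacities: one entry per knapsack.  A group pays off only if all
        # of its items are packed.  Tries every assignment of each item to a
        # knapsack or to None (left out) -- toy sizes only.
        best = 0
        for assign in product([None] + list(range(len(capacities))),
                              repeat=len(weights)):
            load = [0.0] * len(capacities)
            for i, k in enumerate(assign):
                if k is not None:
                    load[k] += weights[i]
            if any(l > c for l, c in zip(load, capacities)):
                continue
            best = max(best, sum(p for p, members in groups
                                 if all(assign[i] is not None
                                        for i in members)))
        return best

    # max_group_profit([0.5, 0.5, 1.0], [(1, [0, 1, 2])], [1.0, 1.0]) -> 1
    # (split 1.0 | 0.5 + 0.5); with weights [0.8, 0.7, 0.5] -- same total
    # weight 2 -- no split into two unit knapsacks exists, so the result is 0.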

Cite as

Lin Chen and Guochuan Zhang. Packing Groups of Items into Multiple Knapsacks. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{chen_et_al:LIPIcs.STACS.2016.28,
  author =	{Chen, Lin and Zhang, Guochuan},
  title =	{{Packing Groups of Items into Multiple Knapsacks}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.28},
  URN =		{urn:nbn:de:0030-drops-57299},
  doi =		{10.4230/LIPIcs.STACS.2016.28},
  annote =	{Keywords: approximation algorithms, lower bound, multiple knapsack, bin packing}
}
Document
Cost Functions Definable by Min/Max Automata

Authors: Thomas Colcombet, Denis Kuperberg, Amaldev Manuel, and Szymon Torunczyk


Abstract
Regular cost functions form a quantitative extension of regular languages that share the array of characterisations the latter possess. In this theory, functions are treated only up to preservation of boundedness on all subsets of the domain. In this work, we subject the well known distance automata (also called min-automata), and their dual max-automata to this framework, and obtain a number of effective characterisations in terms of logic, expressions and algebra.

Cite as

Thomas Colcombet, Denis Kuperberg, Amaldev Manuel, and Szymon Torunczyk. Cost Functions Definable by Min/Max Automata. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 29:1-29:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{colcombet_et_al:LIPIcs.STACS.2016.29,
  author =	{Colcombet, Thomas and Kuperberg, Denis and Manuel, Amaldev and Torunczyk, Szymon},
  title =	{{Cost Functions Definable by Min/Max Automata}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{29:1--29:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.29},
  URN =		{urn:nbn:de:0030-drops-57305},
  doi =		{10.4230/LIPIcs.STACS.2016.29},
  annote =	{Keywords: distance automata, B-automata, regular cost functions, stabilisation monoids, decidability, min-automata, max-automata}
}
Document
Varieties of Cost Functions

Authors: Laure Daviaud, Denis Kuperberg, and Jean-Éric Pin


Abstract
Regular cost functions were introduced as a quantitative generalisation of regular languages, retaining many of their equivalent characterisations and decidability properties. For instance, stabilisation monoids play the same role for cost functions as monoids do for regular languages. The purpose of this article is to further extend this algebraic approach by generalising two results on regular languages to cost functions: Eilenberg's varieties theorem and profinite equational characterisations of lattices of regular languages. This opens interesting new perspectives, but the specificities of cost functions introduce difficulties that prevent these generalisations from being straightforward. In contrast, although syntactic algebras can be defined for formal power series over a commutative ring, no such notion is known for series over semirings and in particular over the tropical semiring.

Cite as

Laure Daviaud, Denis Kuperberg, and Jean-Éric Pin. Varieties of Cost Functions. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{daviaud_et_al:LIPIcs.STACS.2016.30,
  author =	{Daviaud, Laure and Kuperberg, Denis and Pin, Jean-\'{E}ric},
  title =	{{Varieties of Cost Functions}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.30},
  URN =		{urn:nbn:de:0030-drops-57319},
  doi =		{10.4230/LIPIcs.STACS.2016.30},
  annote =	{Keywords: Cost functions, regular language, varieties, syntactic algebra}
}
Document
Kernelization and Sparseness: the Case of Dominating Set

Authors: Pål Grønås Drange, Markus Dregi, Fedor V. Fomin, Stephan Kreutzer, Daniel Lokshtanov, Marcin Pilipczuk, Michal Pilipczuk, Felix Reidl, Fernando Sánchez Villaamil, Saket Saurabh, Sebastian Siebertz, and Somnath Sikdar


Abstract
We prove that for every positive integer r and for every graph class G of bounded expansion, the r-DOMINATING SET problem admits a linear kernel on graphs from G. Moreover, in the more general case when G is only assumed to be nowhere dense, we give an almost linear kernel on G for the classic DOMINATING SET problem, i.e., for the case r=1. These results generalize a line of previous research on finding linear kernels for DOMINATING SET and r-DOMINATING SET (Alber et al., JACM 2004, Bodlaender et al., FOCS 2009, Fomin et al., SODA 2010, Fomin et al., SODA 2012, Fomin et al., STACS 2013). However, the approach taken in this work, which is based on the theory of sparse graphs, is radically different and conceptually much simpler than the previous approaches. We complement our findings by showing that for the closely related CONNECTED DOMINATING SET problem, the existence of such kernelization algorithms is unlikely, even though the problem is known to admit a linear kernel on H-topological-minor-free graphs (Fomin et al., STACS 2013). Also, we prove that for any somewhere dense class G, there is some r for which r-DOMINATING SET is W[2]-hard on G. Thus, our results fall short of proving a sharp dichotomy for the parameterized complexity of r-DOMINATING SET on subgraph-monotone graph classes: we conjecture that the border of tractability lies exactly between nowhere dense and somewhere dense graph classes.

Cite as

Pål Grønås Drange, Markus Dregi, Fedor V. Fomin, Stephan Kreutzer, Daniel Lokshtanov, Marcin Pilipczuk, Michal Pilipczuk, Felix Reidl, Fernando Sánchez Villaamil, Saket Saurabh, Sebastian Siebertz, and Somnath Sikdar. Kernelization and Sparseness: the Case of Dominating Set. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{drange_et_al:LIPIcs.STACS.2016.31,
  author =	{Drange, P\r{a}l Gr{\o}n\r{a}s and Dregi, Markus and Fomin, Fedor V. and Kreutzer, Stephan and Lokshtanov, Daniel and Pilipczuk, Marcin and Pilipczuk, Michal and Reidl, Felix and S\'{a}nchez Villaamil, Fernando and Saurabh, Saket and Siebertz, Sebastian and Sikdar, Somnath},
  title =	{{Kernelization and Sparseness: the Case of Dominating Set}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.31},
  URN =		{urn:nbn:de:0030-drops-57327},
  doi =		{10.4230/LIPIcs.STACS.2016.31},
  annote =	{Keywords: kernelization, dominating set, bounded expansion, nowhere dense}
}
Document
Canonizing Graphs of Bounded Tree Width in Logspace

Authors: Michael Elberfeld and Pascal Schweitzer


Abstract
Graph canonization is the problem of computing a unique representative, a canon, from the isomorphism class of a given graph. This implies that two graphs are isomorphic exactly if their canons are equal. We show that graphs of bounded tree width can be canonized in deterministic logarithmic space (logspace). This implies that the isomorphism problem for graphs of bounded tree width can be decided in logspace. In the light of isomorphism for trees being hard for the complexity class logspace, this makes the ubiquitous classes of graphs of bounded tree width one of the few classes of graphs for which the complexity of the isomorphism problem has been exactly determined.

Cite as

Michael Elberfeld and Pascal Schweitzer. Canonizing Graphs of Bounded Tree Width in Logspace. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{elberfeld_et_al:LIPIcs.STACS.2016.32,
  author =	{Elberfeld, Michael and Schweitzer, Pascal},
  title =	{{Canonizing Graphs of Bounded Tree Width in Logspace}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.32},
  URN =		{urn:nbn:de:0030-drops-57336},
  doi =		{10.4230/LIPIcs.STACS.2016.32},
  annote =	{Keywords: algorithmic graph theory, computational complexity, graph isomorphism, logspace, tree width}
}
Document
Preprocessing Under Uncertainty

Authors: Stefan Fafianie, Stefan Kratsch, and Vuong Anh Quyen


Abstract
In this work we study preprocessing for tractable problems when part of the input is unknown or uncertain. This comes up naturally if, e.g., the load of some machines or the congestion of some roads is not known far enough in advance, or if we have to regularly solve a problem over instances that are largely similar, e.g., daily airport scheduling with few charter flights. Unlike robust optimization, which also studies settings like this, our goal lies not in computing solutions that are (approximately) good for every instantiation. Rather, we seek to preprocess the known parts of the input, to speed up finding an optimal solution once the missing data is known. We present efficient algorithms that, given an instance with partially uncertain input, generate an instance of size polynomial in the amount of uncertain data that is equivalent for every instantiation of the unknown part. Concretely, we obtain such algorithms for minimum spanning tree, minimum weight matroid basis, and maximum cardinality bipartite matching, where respectively the weight of edges, the weight of elements, and the availability of vertices is unknown for part of the input. Furthermore, we show that there are tractable problems, such as small connected vertex cover, for which one cannot hope to obtain similar results.
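
One concrete instantiation-independent rule of this kind, for the spanning tree case: if a known edge is the unique heaviest edge on a cycle consisting entirely of known edges, it lies in no minimum spanning tree no matter how the unknown weights are later revealed, so it can be discarded up front. The sketch below implements just this one rule; it is our illustration of the flavor of such preprocessing, not the paper's algorithm, which guarantees an equivalent instance of size polynomial in the uncertain part.

    def prune_known_edges(n, known, unknown):
        # known: list of (u, v, w) with fixed weight; unknown: list of (u, v)
        # whose weights are not yet revealed.  Drops every known edge whose
        # endpoints are already connected by strictly lighter known edges:
        # such an edge closes a cycle on which it is the unique maximum,
        # hence is in no MST for any instantiation of the unknown weights.
        def lighter_connection(u, v, limit):
            adj = [[] for _ in range(n)]
            for a, b, w in known:
                if w < limit:
                    adj[a].append(b)
                    adj[b].append(a)
            stack, seen = [u], {u}
            while stack:
                x = stack.pop()
                if x == v:
                    return True
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return False

        kept = [(u, v, w) for u, v, w in known
                if not lighter_connection(u, v, w)]
        return kept, unknown  # unknown edges are never touched by this rule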

Cite as

Stefan Fafianie, Stefan Kratsch, and Vuong Anh Quyen. Preprocessing Under Uncertainty. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 33:1-33:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{fafianie_et_al:LIPIcs.STACS.2016.33,
  author =	{Fafianie, Stefan and Kratsch, Stefan and Anh Quyen, Vuong},
  title =	{{Preprocessing Under Uncertainty}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{33:1--33:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.33},
  URN =		{urn:nbn:de:0030-drops-57340},
  doi =		{10.4230/LIPIcs.STACS.2016.33},
  annote =	{Keywords: preprocessing, uncertainty, spanning trees, matroids, matchings}
}
Document
Characterisation of an Algebraic Algorithm for Probabilistic Automata

Authors: Nathanaël Fijalkow


Abstract
We consider the value 1 problem for probabilistic automata over finite words: it asks whether a given probabilistic automaton accepts words with probability arbitrarily close to 1. This problem is known to be undecidable. However, different algorithms have been proposed to partially solve it; it has been recently shown that the Markov Monoid algorithm, based on algebra, is the most correct algorithm so far. The first contribution of this paper is to give a characterisation of the Markov Monoid algorithm. The second contribution is to develop a profinite theory for probabilistic automata, called the prostochastic theory. This new framework gives a topological account of the value 1 problem, which in this context is cast as an emptiness problem. The above characterisation is reformulated using the prostochastic theory, allowing us to give a modular proof.

Cite as

Nathanaël Fijalkow. Characterisation of an Algebraic Algorithm for Probabilistic Automata. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 34:1-34:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{fijalkow:LIPIcs.STACS.2016.34,
  author =	{Fijalkow, Nathana\"{e}l},
  title =	{{Characterisation of an Algebraic Algorithm for Probabilistic Automata}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{34:1--34:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.34},
  URN =		{urn:nbn:de:0030-drops-57355},
  doi =		{10.4230/LIPIcs.STACS.2016.34},
  annote =	{Keywords: Probabilistic Automata, Value 1 Problem, Markov Monoid Algorithm, Algebraic Algorithm, Profinite Theory, Topology in Computer Science}
}
Document
Semantic Versus Syntactic Cutting Planes

Authors: Yuval Filmus, Pavel Hrubeš, and Massimo Lauria


Abstract
In this paper, we compare the strength of the semantic and syntactic version of the cutting planes proof system. First, we show that the lower bound technique of Pudlák applies also to semantic cutting planes: the proof system has feasible interpolation via monotone real circuits, which gives an exponential lower bound on lengths of semantic cutting planes refutations. Second, we show that semantic refutations are stronger than syntactic ones. In particular, we give a formula for which any refutation in syntactic cutting planes requires exponential length, while there is a polynomial length refutation in semantic cutting planes. In other words, syntactic cutting planes does not p-simulate semantic cutting planes. We also give two incompatible integer inequalities which require exponential length refutation in syntactic cutting planes. Finally, we pose the following problem, which arises in connection with semantic inference of arity larger than two: can every multivariate non-decreasing real function be expressed as a composition of non-decreasing real functions in two variables?
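
For orientation, the characteristic syntactic rule of cutting planes is division with rounding; a standard one-line instance (our example, not taken from the paper):

    from 2x + 2y >= 1, divide by 2 and round up: x + y >= ceil(1/2) = 1,

sound because x + y is an integer, yet the fractional point (x, y) = (1/4, 1/4) satisfies the premise and violates the conclusion. Semantic cutting planes, by contrast, admits any single inference that is sound over the integer points, whether or not it is syntactically derivable in one step.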

Cite as

Yuval Filmus, Pavel Hrubeš, and Massimo Lauria. Semantic Versus Syntactic Cutting Planes. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 35:1-35:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{filmus_et_al:LIPIcs.STACS.2016.35,
  author =	{Filmus, Yuval and Hrube\v{s}, Pavel and Lauria, Massimo},
  title =	{{Semantic Versus Syntactic Cutting Planes}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{35:1--35:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.35},
  URN =		{urn:nbn:de:0030-drops-57367},
  doi =		{10.4230/LIPIcs.STACS.2016.35},
  annote =	{Keywords: proof complexity, cutting planes, lower bounds}
}
Document
Editing to Connected f-Degree Graph

Authors: Fedor V. Fomin, Petr Golovach, Fahad Panolan, and Saket Saurabh


Abstract
In the EDGE EDITING TO CONNECTED f-DEGREE GRAPH problem we are given a graph G, an integer k and a function f assigning integers to vertices of G. The task is to decide whether there is a connected graph F on the same vertex set as G, such that for every vertex v, its degree in F is f(v) and the number of edges in the symmetric difference of E(G) and E(F) is at most k. We show that EDGE EDITING TO CONNECTED f-DEGREE GRAPH is fixed-parameter tractable (FPT) by providing an algorithm solving the problem on an n-vertex graph in time 2^{O(k)}n^{O(1)}. Our FPT algorithm is based on a non-trivial combination of color-coding and fast computations of representative families over the direct sum of the l-elongation of the co-graphic matroid associated with G and the uniform matroid over the set of non-edges of G. We believe that this combination could be useful in designing parameterized algorithms for other edge editing problems.

Cite as

Fedor V. Fomin, Petr Golovach, Fahad Panolan, and Saket Saurabh. Editing to Connected f-Degree Graph. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{fomin_et_al:LIPIcs.STACS.2016.36,
  author =	{Fomin, Fedor V. and Golovach, Petr and Panolan, Fahad and Saurabh, Saket},
  title =	{{Editing to Connected f-Degree Graph}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.36},
  URN =		{urn:nbn:de:0030-drops-57370},
  doi =		{10.4230/LIPIcs.STACS.2016.36},
  annote =	{Keywords: Connected f-factor, FPT, Representative Family, Color Coding}
}
Document
Sub-exponential Approximation Schemes for CSPs: From Dense to Almost Sparse

Authors: Dimitris Fotakis, Michael Lampis, and Vangelis Th. Paschos


Abstract
It has long been known, since the classical work of (Arora, Karger, Karpinski, JCSS'99), that MAX-CUT admits a PTAS on dense graphs, and more generally, MAX-k-CSP admits a PTAS on "dense" instances with Omega(n^k) constraints. In this paper we extend and generalize their exhaustive sampling approach, presenting a framework for (1-epsilon)-approximating any MAX-k-CSP problem in sub-exponential time while significantly relaxing the denseness requirement on the input instance. Specifically, we prove that for any constants delta in (0, 1] and epsilon > 0, we can approximate MAX-k-CSP problems with Omega(n^{k-1+delta}) constraints within a factor of (1-epsilon) in time 2^{O(n^{1-delta}*ln(n) / epsilon^3)}. The framework is quite general and includes classical optimization problems, such as MAX-CUT, MAX-DICUT, MAX-k-SAT, and (with a slight extension) k-DENSEST SUBGRAPH, as special cases. For MAX-CUT in particular (where k=2), it gives an approximation scheme that runs in time sub-exponential in n even for "almost-sparse" instances (graphs with n^{1+delta} edges). We prove that our results are essentially best possible, assuming the ETH. First, the density requirement cannot be relaxed further: there exists a constant r < 1 such that for all delta > 0, MAX-k-SAT instances with O(n^{k-1}) clauses cannot be approximated within a ratio better than r in time 2^{O(n^{1-delta})}. Second, the running time of our algorithm is almost tight for all densities. Even for MAX-CUT there exists r<1 such that for all delta' > delta >0, MAX-CUT instances with n^{1+delta} edges cannot be approximated within a ratio better than r in time 2^{n^{1-delta'}}.

Cite as

Dimitris Fotakis, Michael Lampis, and Vangelis Th. Paschos. Sub-exponential Approximation Schemes for CSPs: From Dense to Almost Sparse. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 37:1-37:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{fotakis_et_al:LIPIcs.STACS.2016.37,
  author =	{Fotakis, Dimitris and Lampis, Michael and Paschos, Vangelis Th.},
  title =	{{Sub-exponential Approximation Schemes for CSPs: From Dense to Almost Sparse}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{37:1--37:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.37},
  URN =		{urn:nbn:de:0030-drops-57388},
  doi =		{10.4230/LIPIcs.STACS.2016.37},
  annote =	{Keywords: polynomial and subexponential approximation, sampling, randomized rounding}
}
Document
The Complexity of the Hamilton Cycle Problem in Hypergraphs of High Minimum Codegree

Authors: Frederik Garbe and Richard Mycroft


Abstract
We consider the complexity of the Hamilton cycle decision problem when restricted to k-uniform hypergraphs H of high minimum codegree delta(H). We show that for tight Hamilton cycles this problem is NP-hard even when restricted to k-uniform hypergraphs H with delta(H) >= n/2 - C, where n is the order of H and C is a constant which depends only on k. This answers a question raised by Karpinski, Rucinski and Szymanska. Additionally we give a polynomial-time algorithm which, for a sufficiently small constant epsilon > 0, determines whether or not a 4-uniform hypergraph H on n vertices with delta(H) >= n/2 - epsilon * n contains a Hamilton 2-cycle. This demonstrates that some looser Hamilton cycles exhibit interestingly different behaviour compared to tight Hamilton cycles. A key part of the proof is a precise characterisation of all 4-uniform hypergraphs H on n vertices with delta(H) >= n/2 - epsilon * n which do not contain a Hamilton 2-cycle; this may be of independent interest. As an additional corollary of this characterisation, we obtain an exact Dirac-type bound for the existence of a Hamilton 2-cycle in a large 4-uniform hypergraph.

Cite as

Frederik Garbe and Richard Mycroft. The Complexity of the Hamilton Cycle Problem in Hypergraphs of High Minimum Codegree. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 38:1-38:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{garbe_et_al:LIPIcs.STACS.2016.38,
  author =	{Garbe, Frederik and Mycroft, Richard},
  title =	{{The Complexity of the Hamilton Cycle Problem in Hypergraphs of High Minimum Codegree}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{38:1--38:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.38},
  URN =		{urn:nbn:de:0030-drops-57392},
  doi =		{10.4230/LIPIcs.STACS.2016.38},
  annote =	{Keywords: Hamilton cycles, hypergraphs, graph algorithms}
}
Document
Efficiently Finding All Maximal alpha-gapped Repeats

Authors: Pawel Gawrychowski, Tomohiro I, Shunsuke Inenaga, Dominik Köppl, and Florin Manea


Abstract
For alpha >= 1, an alpha-gapped repeat in a word w is a factor uvu of w such that |uv| <= alpha * |u|; the two occurrences of a factor u in such a repeat are called arms. Such a repeat is called maximal if its arms cannot be extended simultaneously with the same symbol either to the left or to the right. We show that the number of all maximal alpha-gapped repeats occurring in words of length n is upper bounded by 18 * alpha * n, allowing us to construct an algorithm finding all maximal alpha-gapped repeats of a word on an integer alphabet of size n^{O(1)} in O(alpha * n) time. This result is optimal as there are words that have Theta(alpha * n) maximal alpha-gapped repeats. Our techniques can be extended to get comparable results in the case of alpha-gapped palindromes, i.e., factors uvu^{T} with |uv| <= alpha * |u|.
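
A brute-force reference implementation pins down the definitions (ours; cubic time, as opposed to the O(alpha * n) algorithm of the paper):

    def maximal_alpha_gapped_repeats(w, alpha):
        # Every maximal alpha-gapped repeat of w, reported as (i, j, p):
        # the arms are w[i:i+p] and w[j:j+p] (so u = w[i:i+p], v = w[i+p:j]),
        # and |uv| <= alpha*|u| reads j - i <= alpha * p.
        n, found = len(w), []
        for p in range(1, n // 2 + 1):                 # arm length |u|
            for i in range(n - 2 * p + 1):             # left arm start
                j_max = min(n - p, i + int(alpha * p))
                for j in range(i + p, j_max + 1):      # right arm start
                    if w[i:i+p] != w[j:j+p]:
                        continue
                    left = i > 0 and w[i-1] == w[j-1]
                    right = j + p < n and w[i+p] == w[j+p]
                    if not left and not right:         # arms not extendable
                        found.append((i, j, p))
        return found

    # maximal_alpha_gapped_repeats("abaab", 2) contains (0, 3, 2):
    # arms "ab" and "ab" with gap v = "a", and |uv| = 3 <= 2 * 2.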

Cite as

Pawel Gawrychowski, Tomohiro I, Shunsuke Inenaga, Dominik Köppl, and Florin Manea. Efficiently Finding All Maximal alpha-gapped Repeats. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{gawrychowski_et_al:LIPIcs.STACS.2016.39,
  author =	{Gawrychowski, Pawel and I, Tomohiro and Inenaga, Shunsuke and K\"{o}ppl, Dominik and Manea, Florin},
  title =	{{Efficiently Finding All Maximal alpha-gapped Repeats}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.39},
  URN =		{urn:nbn:de:0030-drops-57408},
  doi =		{10.4230/LIPIcs.STACS.2016.39},
  annote =	{Keywords: combinatorics on words, counting algorithms}
}
Document
On the Number of Lambda Terms With Prescribed Size of Their De Bruijn Representation

Authors: Bernhard Gittenberger and Zbigniew Golebiewski


Abstract
John Tromp introduced the so-called 'binary lambda calculus' as a way to encode lambda terms in terms of 0-1-strings. Later, Grygiel and Lescanne conjectured that the number of binary lambda terms with m free indices and of size n (encoded as binary words of length n) is o(n^{-3/2} tau^n) for tau ~ 1.963448... . We generalize the proposed notion of size and show that for several classes of lambda terms, including binary lambda terms with m free indices, the number of terms of size n is Theta(n^{-3/2} * rho^n) with some class dependent constant rho, which in particular disproves the above mentioned conjecture. A way to obtain lower and upper bounds for the constant near the leading term is presented and numerical results for a few previously introduced classes of lambda terms are given.
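
Tromp's size function makes the counting recurrence immediate: a de Bruijn variable i costs i+1 bits (unary 1^i 0), and abstraction (00) and application (01) each cost 2 bits. A memoized counter along these lines (our transcription; this covers only the plain binary size, none of the generalized size notions of the paper):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def terms(n, m):
        # Number of lambda terms of binary size n with at most m free
        # de Bruijn indices.
        count = 1 if 1 <= n - 1 <= m else 0       # variable: 1^i 0, size i+1
        if n >= 2:
            count += terms(n - 2, m + 1)          # abstraction: 00 M
            count += sum(terms(k, m) * terms(n - 2 - k, m)
                         for k in range(n - 1))   # application: 01 M N
        return count

    # terms(4, 0) == 1: the single closed term of size 4 is \x.x (bits 0010).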

Cite as

Bernhard Gittenberger and Zbigniew Golebiewski. On the Number of Lambda Terms With Prescribed Size of Their De Bruijn Representation. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 40:1-40:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{gittenberger_et_al:LIPIcs.STACS.2016.40,
  author =	{Gittenberger, Bernhard and Golebiewski, Zbigniew},
  title =	{{On the Number of Lambda Terms With Prescribed Size of Their De Bruijn Representation}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{40:1--40:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.40},
  URN =		{urn:nbn:de:0030-drops-57411},
  doi =		{10.4230/LIPIcs.STACS.2016.40},
  annote =	{Keywords: lambda calculus, terms enumeration, analytic combinatorics}
}
Document
Tightening the Complexity of Equivalence Problems for Commutative Grammars

Authors: Christoph Haase and Piotr Hofman


Abstract
Given two finite-state automata, are the Parikh images of the languages they generate equivalent? This problem was shown decidable in coNEXP by Huynh in 1985 within the more general setting of context-free commutative grammars. Huynh conjectured that a Pi_2^P upper bound might be possible, and Kopczynski and To established in 2010 such an upper bound when the size of the alphabet is fixed. The contribution of this paper is to show that the language equivalence problem for regular and context-free commutative grammars is actually coNEXP-complete. In addition, our lower bound immediately yields further coNEXP-completeness results for equivalence problems for regular commutative expressions, reversal-bounded counter automata and communication-free Petri nets. Finally, we improve both lower and upper bounds for language equivalence for exponent-sensitive commutative grammars.

Cite as

Christoph Haase and Piotr Hofman. Tightening the Complexity of Equivalence Problems for Commutative Grammars. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{haase_et_al:LIPIcs.STACS.2016.41,
  author =	{Haase, Christoph and Hofman, Piotr},
  title =	{{Tightening the Complexity of Equivalence Problems for Commutative Grammars}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.41},
  URN =		{urn:nbn:de:0030-drops-57422},
  doi =		{10.4230/LIPIcs.STACS.2016.41},
  annote =	{Keywords: language equivalence, commutative grammars, presburger arithmetic, semi-linear sets, petri nets}
}
Document
Autoreducibility of NP-Complete Sets

Authors: John M. Hitchcock and Hadi Shafei


Abstract
We study the polynomial-time autoreducibility of NP-complete sets and obtain separations under strong hypotheses for NP. Assuming there is a p-generic set in NP, we show the following:
- For every k >= 2, there is a k-T-complete set for NP that is k-T autoreducible, but is not k-tt autoreducible or (k-1)-T autoreducible.
- For every k >= 3, there is a k-tt-complete set for NP that is k-tt autoreducible, but is not (k-1)-tt autoreducible or (k-2)-T autoreducible.
- There is a tt-complete set for NP that is tt-autoreducible, but is not btt-autoreducible.
Under the stronger assumption that there is a p-generic set in NP cap coNP, we show:
- For every k >= 2, there is a k-tt-complete set for NP that is k-tt autoreducible, but is not (k-1)-T autoreducible.
Our proofs are based on constructions from separating NP-completeness notions. For example, the construction of a 2-T-complete set for NP that is not 2-tt-complete also separates 2-T-autoreducibility from 2-tt-autoreducibility.

Cite as

John M. Hitchcock and Hadi Shafei. Autoreducibility of NP-Complete Sets. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 42:1-42:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{hitchcock_et_al:LIPIcs.STACS.2016.42,
  author =	{Hitchcock, John M. and Shafei, Hadi},
  title =	{{Autoreducibility of NP-Complete Sets}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{42:1--42:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.42},
  URN =		{urn:nbn:de:0030-drops-57437},
  doi =		{10.4230/LIPIcs.STACS.2016.42},
  annote =	{Keywords: computational complexity, NP-completeness, autoreducibility, genericity}
}
Document
A Randomized Polynomial Kernel for Subset Feedback Vertex Set

Authors: Eva-Maria C. Hols and Stefan Kratsch


Abstract
The SUBSET FEEDBACK VERTEX SET problem generalizes the classical FEEDBACK VERTEX SET problem and asks, for a given undirected graph G=(V,E), a set S subseteq V, and an integer k, whether there exists a set X of at most k vertices such that no cycle in G-X contains a vertex of S. It was independently shown by Cygan et al. (ICALP'11, SIDMA'13) and Kawarabayashi and Kobayashi (JCTB'12) that SUBSET FEEDBACK VERTEX SET is fixed-parameter tractable for parameter k. Cygan et al. asked whether the problem also admits a polynomial kernelization. We answer the question of Cygan et al. positively by giving a randomized polynomial kernelization for the equivalent version where S is a set of edges. In a first step we show that EDGE SUBSET FEEDBACK VERTEX SET has a randomized polynomial kernel parameterized by |S|+k with O(|S|^2k) vertices. For this we use the matroid-based tools of Kratsch and Wahlström (FOCS'12). Next we present a preprocessing that reduces the given instance (G,S,k) to an equivalent instance (G',S',k') where the size of S' is bounded by O(k^4). These two results lead to a polynomial kernel for SUBSET FEEDBACK VERTEX SET with O(k^9) vertices.

Cite as

Eva-Maria C. Hols and Stefan Kratsch. A Randomized Polynomial Kernel for Subset Feedback Vertex Set. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{hols_et_al:LIPIcs.STACS.2016.43,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan},
  title =	{{A Randomized Polynomial Kernel for Subset Feedback Vertex Set}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.43},
  URN =		{urn:nbn:de:0030-drops-57448},
  doi =		{10.4230/LIPIcs.STACS.2016.43},
  annote =	{Keywords: parameterized complexity, kernelization, subset feedback vertex set}
}
Document
Periods and Borders of Random Words

Authors: Štepán Holub and Jeffrey Shallit


Abstract
We investigate the behavior of the periods and border lengths of random words over a fixed alphabet. We show that the asymptotic probability that a random word has a given maximal border length k is a constant, depending only on k and the alphabet size l. We give a recurrence that allows us to determine these constants with any required precision. This also allows us to evaluate the expected period of a random word. For the binary case, the expected period is asymptotically about n-1.641. We also give explicit formulas for the probability that a random word is unbordered or has maximum border length one.
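
The border-period duality that underlies these statements, period(w) = |w| - (length of the longest border of w), is computable in linear time with the classical KMP failure function; a compact sketch (ours):

    def max_border(w):
        # Length of the longest proper border of w: the longest word that
        # is simultaneously a proper prefix and a proper suffix of w.
        fail, k = [0] * len(w), 0
        for i in range(1, len(w)):
            while k > 0 and w[i] != w[k]:
                k = fail[k - 1]
            if w[i] == w[k]:
                k += 1
            fail[i] = k
        return fail[-1] if w else 0

    def period(w):
        # Smallest period of w, by the duality period = length - border.
        return len(w) - max_border(w)

    # period("abaab") == 3 (longest border "ab"); period("aaaa") == 1.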

Cite as

Štepán Holub and Jeffrey Shallit. Periods and Borders of Random Words. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 44:1-44:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{holub_et_al:LIPIcs.STACS.2016.44,
  author =	{Holub, \v{S}tep\'{a}n and Shallit, Jeffrey},
  title =	{{Periods and Borders of Random Words}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{44:1--44:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.44},
  URN =		{urn:nbn:de:0030-drops-57453},
  doi =		{10.4230/LIPIcs.STACS.2016.44},
  annote =	{Keywords: random word, period, word border}
}
Document
Constrained Bipartite Vertex Cover: The Easy Kernel is Essentially Tight

Authors: Bart M. P. Jansen


Abstract
The CONSTRAINED BIPARTITE VERTEX COVER problem asks, for a bipartite graph G with partite sets A and B, and integers k_A and k_B, whether there is a vertex cover for G containing at most k_A vertices from A and k_B vertices from B. The problem has an easy kernel with 2 * k_A * k_B edges and 4 * k_A * k_B vertices, based on the fact that every vertex in A of degree more than k_B has to be included in the solution, together with every vertex in B of degree more than k_A. We show that the number of vertices and edges in this kernel are asymptotically essentially optimal in terms of the product k_A * k_B. We prove that if there is a polynomial-time algorithm that reduces any instance (G,A,B,k_A,k_B) of CONSTRAINED BIPARTITE VERTEX COVER to an equivalent instance (G',A',B',k'_A,k'_B) such that k'_A in (k_A)^{O(1)}, k'_B in (k_B)^{O(1)}, and |V(G')| in O((k_A * k_B)^{1 - epsilon}), for some epsilon > 0, then NP subseteq coNP/poly and the polynomial-time hierarchy collapses. Using a different construction, we prove that if there is a polynomial-time algorithm that reduces any n-vertex instance into an equivalent instance (of a possibly different problem) that can be encoded in O(n^{2- epsilon}) bits, then NP subseteq coNP/poly.
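
The 'easy kernel' referred to above is just the exhaustive application of the two forced-vertex rules plus a size check; a direct sketch (our encoding: an edge list over disjoint vertex sets A and B):

    from collections import Counter

    def easy_kernel(edges, kA, kB):
        # If some a in A has degree > kB, every cover respecting the budgets
        # must contain a (otherwise all of its > kB neighbours in B would be
        # needed); symmetrically for b in B with degree > kA.  Apply both
        # rules exhaustively, then check the edge bound.
        edges = list(edges)
        changed = True
        while changed:
            changed = False
            degA = Counter(a for a, _ in edges)
            degB = Counter(b for _, b in edges)
            forcedA = {a for a, d in degA.items() if d > kB}
            forcedB = {b for b, d in degB.items() if d > kA}
            if forcedA or forcedB:
                kA -= len(forcedA)
                kB -= len(forcedB)
                if kA < 0 or kB < 0:
                    return None          # forced picks exceed the budgets
                edges = [(a, b) for a, b in edges
                         if a not in forcedA and b not in forcedB]
                changed = True
        if len(edges) > 2 * kA * kB:
            return None                  # more edges than any cover handles
        return edges, kA, kB

    # In a yes-instance, at most kA remaining A-vertices of degree <= kB and
    # kB remaining B-vertices of degree <= kA cover <= 2*kA*kB edges,
    # matching the kernel size quoted in the abstract.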

Cite as

Bart M. P. Jansen. Constrained Bipartite Vertex Cover: The Easy Kernel is Essentially Tight. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{jansen:LIPIcs.STACS.2016.45,
  author =	{Jansen, Bart M. P.},
  title =	{{Constrained Bipartite Vertex Cover: The Easy Kernel is Essentially Tight}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.45},
  URN =		{urn:nbn:de:0030-drops-57463},
  doi =		{10.4230/LIPIcs.STACS.2016.45},
  annote =	{Keywords: kernel lower bounds, constrained bipartite vertex cover}
}
Document
Separation Between Read-once Oblivious Algebraic Branching Programs (ROABPs) and Multilinear Depth Three Circuits

Authors: Neeraj Kayal, Vineet Nair, and Chandan Saha


Abstract
We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following: 1. There exists an explicit n-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires 2^{Omega(n)} size. 2. Any multilinear depth three circuit computing IMM_{n,d} (the iterated matrix multiplication polynomial formed by multiplying d, n * n symbolic matrices) has n^{Omega(d)} size. IMM_{n,d} can be easily computed by a poly(n,d) sized ROABP. 3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n,d) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size n^{Omega(d)}. This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models. The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field.
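
For readers unfamiliar with IMM_{n,d}: under one common convention it is the (1,1) entry of a product of d generic n x n matrices (some papers sum all entries instead; this sketch is ours). Multiplying left to right keeps only the n^2 entries of the partial product, which mirrors the poly(n,d)-sized ROABP mentioned in the abstract.

import sympy as sp

def imm(n, d):
    """IMM_{n,d} as the (1,1) entry of a product of d symbolic n x n matrices."""
    mats = [sp.Matrix(n, n, lambda i, j, t=t: sp.Symbol(f'x{t}_{i}{j}'))
            for t in range(d)]
    prod = mats[0]
    for M in mats[1:]:
        prod = prod * M   # left to right, in the order an ROABP reads the variables
    return sp.expand(prod[0, 0])

print(imm(2, 2))   # x0_00*x1_00 + x0_01*x1_10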

Cite as

Neeraj Kayal, Vineet Nair, and Chandan Saha. Separation Between Read-once Oblivious Algebraic Branching Programs (ROABPs) and Multilinear Depth Three Circuits. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kayal_et_al:LIPIcs.STACS.2016.46,
  author =	{Kayal, Neeraj and Nair, Vineet and Saha, Chandan},
  title =	{{Separation Between Read-once Oblivious Algebraic Branching Programs (ROABPs) and Multilinear Depth Three Circuits}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.46},
  URN =		{urn:nbn:de:0030-drops-57475},
  doi =		{10.4230/LIPIcs.STACS.2016.46},
  annote =	{Keywords: multilinear depth three circuits, read-once oblivious algebraic branching programs, evaluation dimension, skewed partial derivatives, expander graphs}
}
Document
Towards an Atlas of Computational Learning Theory

Authors: Timo Kötzing and Martin Schirneck


Abstract
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about which restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determining the relations between different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map: a diagram detailing these relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the Earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

Cite as

Timo Kötzing and Martin Schirneck. Towards an Atlas of Computational Learning Theory. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 47:1-47:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kotzing_et_al:LIPIcs.STACS.2016.47,
  author =	{K\"{o}tzing, Timo and Schirneck, Martin},
  title =	{{Towards an Atlas of Computational Learning Theory}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{47:1--47:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.47},
  URN =		{urn:nbn:de:0030-drops-57483},
  doi =		{10.4230/LIPIcs.STACS.2016.47},
  annote =	{Keywords: computational learning, language learning, partially set-driven learning, strongly monotone learning}
}
Document
Quantum Query Complexity of Subgraph Isomorphism and Homomorphism

Authors: Raghav Kulkarni and Supartha Podder


Abstract
Let H be a (non-empty) graph on n vertices, possibly containing isolated vertices. Let f_H(G) = 1 iff the input graph G on n vertices contains H as a (not necessarily induced) subgraph. Let alpha_H denote the cardinality of a maximum independent set of H. In this paper we show: Q(f_H) = Omega( sqrt{alpha_H * n}), where Q(f_H) denotes the quantum query complexity of f_H. As a consequence we obtain lower bounds for Q(f_H) in terms of several other parameters of H such as the average degree, minimum vertex cover, chromatic number, and the critical probability. We also use the above bound to show that Q(f_H) = Omega(n^{3/4}) for any H, improving on the previously best known bound of Omega(n^{2/3}) [M. Santha/A. Chi-Chih Yao, unpublished manuscript]. Until very recently, it was believed that the quantum query complexity is at least the square root of the randomized one. Our Omega(n^{3/4}) bound for Q(f_H) matches the square root of the current best known bound for the randomized query complexity of f_H, which is Omega(n^{3/2}) due to Gröger. Interestingly, the randomized bound of Omega(alpha_H * n) for f_H still remains open. We also study the Subgraph Homomorphism Problem, denoted by f_{[H]}, and show that Q(f_{[H]}) = Omega(n). Finally, we extend our results to 3-uniform hypergraphs. In particular, we show an Omega(n^{4/5}) bound for the quantum query complexity of Subgraph Isomorphism, improving on the previously known Omega(n^{3/4}) bound. For Subgraph Homomorphism, we obtain an Omega(n^{3/2}) bound.

Cite as

Raghav Kulkarni and Supartha Podder. Quantum Query Complexity of Subgraph Isomorphism and Homomorphism. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 48:1-48:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kulkarni_et_al:LIPIcs.STACS.2016.48,
  author =	{Kulkarni, Raghav and Podder, Supartha},
  title =	{{Quantum Query Complexity of Subgraph Isomorphism and Homomorphism}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{48:1--48:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.48},
  URN =		{urn:nbn:de:0030-drops-57495},
  doi =		{10.4230/LIPIcs.STACS.2016.48},
  annote =	{Keywords: quantum query complexity, subgraph isomorphism, monotone graph properties}
}
Document
Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments

Authors: Mithilesh Kumar and Daniel Lokshtanov


Abstract
A tournament is a directed graph T such that every pair of vertices is connected by an arc. A feedback vertex set is a set S of vertices in T such that T \ S is acyclic. In this article we consider the FEEDBACK VERTEX SET problem in tournaments. Here the input is a tournament T and an integer k, and the task is to determine whether T has a feedback vertex set of size at most k. We give a new algorithm for FEEDBACK VERTEX SET IN TOURNAMENTS. The running time of our algorithm is upper-bounded by O(1.6181^k + n^{O(1)}) and by O(1.466^n). Thus our algorithm simultaneously improves over the fastest known parameterized algorithm for the problem by Dom et al. running in time O(2^k k^{O(1)} + n^{O(1)}), and the fastest known exact exponential-time algorithm by Gaspers and Mnich with running time O(1.674^n). On the way to proving our main result we prove a strengthening of a special case of a graph partitioning theorem due to Bollobás and Scott. In particular we show that the vertices of any undirected m-edge graph of maximum degree d can be colored white or black in such a way that for each of the two colors, the number of edges with both endpoints of that color is between m/4 - d/2 and m/4 + d/2.
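
The partitioning statement can be sanity-checked exhaustively on small graphs. The sketch below (ours; exponential in the number of vertices, illustration only) searches for a 2-coloring meeting the stated bounds.

import random
from itertools import combinations, product

def check_partition_theorem(vertices, edges):
    """Find a 2-coloring with, for each color, between m/4 - d/2 and
    m/4 + d/2 edges having both endpoints of that color."""
    m = len(edges)
    d = max((sum(1 for e in edges if v in e) for v in vertices), default=0)
    lo, hi = m / 4 - d / 2, m / 4 + d / 2
    for coloring in product((0, 1), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        mono = [sum(1 for u, v in edges if color[u] == color[v] == c)
                for c in (0, 1)]
        if all(lo <= x <= hi for x in mono):
            return color
    return None

V = list(range(8))
E = [(u, v) for u, v in combinations(V, 2) if random.random() < 0.4]
print(check_partition_theorem(V, E) is not None)   # True, by the theorem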

Cite as

Mithilesh Kumar and Daniel Lokshtanov. Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.STACS.2016.49,
  author =	{Kumar, Mithilesh and Lokshtanov, Daniel},
  title =	{{Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.49},
  URN =		{urn:nbn:de:0030-drops-57501},
  doi =		{10.4230/LIPIcs.STACS.2016.49},
  annote =	{Keywords: Parameterized algorithms, Exact algorithms, Feedback vertex set, Tournaments, Graph partitions}
}
Document
Knapsack in Graph Groups, HNN-Extensions and Amalgamated Products

Authors: Markus Lohrey and Georg Zetzsche


Abstract
It is shown that the knapsack problem, which was introduced by Myasnikov et al. for arbitrary finitely generated groups, can be solved in NP for graph groups. This result even holds if the group elements are represented in a compressed form by SLPs, which generalizes the classical NP-completeness result of the integer knapsack problem. We also prove general transfer results: NP-membership of the knapsack problem is passed on to finite extensions, HNN-extensions over finite associated subgroups, and amalgamated products with finite identified subgroups.
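
In its usual formulation, the knapsack problem for a group asks whether g = g_1^{x_1} * ... * g_k^{x_k} for some non-negative integers x_1, ..., x_k. In the free abelian group Z^d (the graph group on a complete graph) this becomes a linear system over the naturals; the naive bounded search below (ours, with an arbitrary demo cut-off, not a real decision procedure) illustrates the instance format.

from itertools import product

def knapsack_Zd(g, gens, bound=20):
    """Search for non-negative exponents xs with sum_i xs[i] * gens[i] == g
    in Z^d.  'bound' is only a demo cut-off; real algorithms bound the
    exponents analytically."""
    for xs in product(range(bound + 1), repeat=len(gens)):
        if all(sum(x * gi[j] for x, gi in zip(xs, gens)) == g[j]
               for j in range(len(g))):
            return xs
    return None

print(knapsack_Zd(g=(5, 7), gens=[(1, 2), (1, 1)]))   # (2, 3)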

Cite as

Markus Lohrey and Georg Zetzsche. Knapsack in Graph Groups, HNN-Extensions and Amalgamated Products. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{lohrey_et_al:LIPIcs.STACS.2016.50,
  author =	{Lohrey, Markus and Zetzsche, Georg},
  title =	{{Knapsack in Graph Groups, HNN-Extensions and Amalgamated Products}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.50},
  URN =		{urn:nbn:de:0030-drops-57512},
  doi =		{10.4230/LIPIcs.STACS.2016.50},
  annote =	{Keywords: Graph groups, HNN-extensions, amalgamated products, knapsack}
}
Document
FPTAS for Hardcore and Ising Models on Hypergraphs

Authors: Pinyan Lu, Kuan Yang, and Chihao Zhang


Abstract
Hardcore and Ising models are two of the most important families of two-state spin systems in statistical physics. The partition function of a spin system is the central concept in statistical physics that connects microscopic particles and their interactions with macroscopic and statistical properties of materials such as energy, entropy, ferromagnetism, etc. If each local interaction of the system involves only two particles, the system can be described by a graph. In this case, fully polynomial-time approximation schemes (FPTAS) for computing the partition function of both the hardcore and the anti-ferromagnetic Ising model were designed up to the uniqueness condition of the system. These results are the best possible, since approximately computing the partition function beyond this threshold is NP-hard. In this paper, we generalize these results to general physical systems, where each local interaction may involve multiple particles. Such systems are described by hypergraphs. For the hardcore model, we again provide an FPTAS up to the uniqueness condition, and for the anti-ferromagnetic Ising model, we obtain an FPTAS under a slightly stronger condition.
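
For concreteness (this sketch is ours): under the usual convention that a vertex set of a hypergraph is independent if it contains no hyperedge entirely, the hardcore partition function with fugacity lambda is the sum of lambda^|S| over all independent sets S. It can be computed by brute force on tiny instances; the paper's FPTAS approximates the same quantity efficiently.

from itertools import combinations

def hardcore_Z(vertices, hyperedges, lam):
    """Brute-force hardcore partition function on a hypergraph."""
    Z = 0.0
    n = len(vertices)
    for r in range(n + 1):
        for S in combinations(vertices, r):
            S = set(S)
            if not any(set(e) <= S for e in hyperedges):
                Z += lam ** r
    return Z

# A 3-uniform example: each local interaction involves three particles.
print(hardcore_Z(range(4), [(0, 1, 2), (1, 2, 3)], lam=1.0))   # 13.0 independent sets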

Cite as

Pinyan Lu, Kuan Yang, and Chihao Zhang. FPTAS for Hardcore and Ising Models on Hypergraphs. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.STACS.2016.51,
  author =	{Lu, Pinyan and Yang, Kuan and Zhang, Chihao},
  title =	{{FPTAS for Hardcore and Ising Models on Hypergraphs}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.51},
  URN =		{urn:nbn:de:0030-drops-57526},
  doi =		{10.4230/LIPIcs.STACS.2016.51},
  annote =	{Keywords: hard-core model, ising model, hypergraph, spatial mixing, correlation decay}
}
Document
Efficient Enumeration of Solutions Produced by Closure Operations

Authors: Arnaud Mary and Yann Strozecki


Abstract
In this paper we address the problem of generating all elements obtained by the saturation of an initial set under some operations. More precisely, we prove that we can generate the closure by polymorphisms of a Boolean relation with polynomial delay. Therefore we can compute with polynomial delay the closure of a family of sets under any set of "set operations" (e.g. union, intersection, difference, symmetric difference, ...). To do so, we prove that for any set of operations F, one can decide in polynomial time whether an element belongs to the closure by F of a family of sets. When the relation is over a domain with more than two elements, we prove that our generic enumeration method fails, since the associated decision problem is NP-hard.
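
The closure operation itself is easy to state in code. The naive fixpoint saturation below (ours) shows what is being enumerated; unlike the paper's method it gives no polynomial-delay guarantee and may need memory proportional to the closure, which can be exponentially large.

def closure(family, operations):
    """Saturate a family of sets under binary set operations."""
    closed = {frozenset(s) for s in family}
    frontier = list(closed)
    while frontier:
        new = []
        for a in list(closed):
            for b in frontier:
                for op in operations:
                    for c in (frozenset(op(a, b)), frozenset(op(b, a))):
                        if c not in closed:
                            closed.add(c)
                            new.append(c)
        frontier = new
    return closed

ops = [frozenset.union, frozenset.intersection,
       frozenset.difference, frozenset.symmetric_difference]
print(sorted(map(sorted, closure([{1, 2}, {2, 3}], ops))))   # all 8 subsets of {1,2,3}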

Cite as

Arnaud Mary and Yann Strozecki. Efficient Enumeration of Solutions Produced by Closure Operations. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 52:1-52:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mary_et_al:LIPIcs.STACS.2016.52,
  author =	{Mary, Arnaud and Strozecki, Yann},
  title =	{{Efficient Enumeration of Solutions Produced by Closure Operations}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{52:1--52:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.52},
  URN =		{urn:nbn:de:0030-drops-57538},
  doi =		{10.4230/LIPIcs.STACS.2016.52},
  annote =	{Keywords: enumeration, set saturation, polynomial delay, Post’s lattice}
}
Document
Copyless Cost-Register Automata: Structure, Expressiveness, and Closure Properties

Authors: Filip Mazowiecki and Cristian Riveros


Abstract
Cost register automata (CRAs) and their subclass, copyless CRAs, were recently proposed by Alur et al. as a new model for computing functions over strings. We study structural properties, expressiveness, and closure properties of copyless CRAs. We show that copyless CRAs are strictly less expressive than weighted automata and are not closed under the reverse operation. In search of a better-behaved class, we impose restrictions on copyless CRAs, which leads to a new robust computational model that is closed under reversal and other extensions.
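
To give a feel for the model (a toy rendition of ours, not the paper's formalism): a CRA updates registers with semiring expressions on each input letter, and "copyless" means each register is read at most once per transition. The single-state simulator below, over (Z, +), counts the letters a occurring in completed #-delimited blocks.

def run_copyless_cra(word, init, updates, output):
    """updates[letter][r] = (srcs, c) encodes the simultaneous assignment
    r := srcs[0] + srcs[1] + ... + c.  The copyless condition is checked."""
    for letter, upd in updates.items():
        reads = [s for srcs, _ in upd.values() for s in srcs]
        assert len(reads) == len(set(reads)), f"{letter!r}: not copyless"
    regs = dict(init)
    for letter in word:
        old = regs
        regs = {r: sum(old[s] for s in srcs) + c
                for r, (srcs, c) in updates[letter].items()}
    srcs, c = output
    return sum(regs[s] for s in srcs) + c

updates = {
    'a': {'x': (['x'], 1), 'y': (['y'], 0)},       # x := x + 1 ; y := y
    '#': {'x': ([], 0),    'y': (['y', 'x'], 0)},  # x := 0 ; y := y + x
}
# x tracks the current block, y the a's of completed blocks.
print(run_copyless_cra("aa#aaa#a", {'x': 0, 'y': 0}, updates, (['y'], 0)))   # 5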

Cite as

Filip Mazowiecki and Cristian Riveros. Copyless Cost-Register Automata: Structure, Expressiveness, and Closure Properties. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 53:1-53:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mazowiecki_et_al:LIPIcs.STACS.2016.53,
  author =	{Mazowiecki, Filip and Riveros, Cristian},
  title =	{{Copyless Cost-Register Automata: Structure, Expressiveness, and Closure Properties}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{53:1--53:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.53},
  URN =		{urn:nbn:de:0030-drops-57547},
  doi =		{10.4230/LIPIcs.STACS.2016.53},
  annote =	{Keywords: Cost Register Automata, Weighted Automata, Semirings}
}
Document
Algorithmic Statistics, Prediction and Machine Learning

Authors: Alexey Milovanov


Abstract
Algorithmic statistics considers the following problem: given a binary string x (e.g., some experimental data), find a "good" explanation of this data. It uses algorithmic information theory to formally define what a good explanation is. In this paper we extend this framework in two directions. First, the explanations are not only interesting in themselves but are also used for prediction: we want to know what kind of data we may reasonably expect in similar situations (repeating the same experiment). We show that a certain hierarchy can be constructed both in terms of algorithmic statistics and using the notion of a priori probability, and these two approaches turn out to be equivalent (Theorem 5). Second, a more realistic approach, which goes back to machine learning theory, assumes that we are given not a single data string x but a set of "positive examples" x_1,...,x_l that all belong to some unknown set A, a property that we want to learn. We want this set A to contain all positive examples and to be as small and simple as possible. We show how algorithmic statistics can be extended to cover this situation (Theorem 11).

Cite as

Alexey Milovanov. Algorithmic Statistics, Prediction and Machine Learning. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 54:1-54:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{milovanov:LIPIcs.STACS.2016.54,
  author =	{Milovanov, Alexey},
  title =	{{Algorithmic Statistics, Prediction and Machine Learning}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{54:1--54:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.54},
  URN =		{urn:nbn:de:0030-drops-57550},
  doi =		{10.4230/LIPIcs.STACS.2016.54},
  annote =	{Keywords: algorithmic information theory, minimal description length, prediction, Kolmogorov complexity, learning}
}
Document
Polynomial Kernels for Deletion to Classes of Acyclic Digraphs

Authors: Matthias Mnich and Erik Jan van Leeuwen


Abstract
We consider the problem of finding a set X of vertices (or arcs) with |X| <= k in a given digraph G such that D = G - X is an acyclic digraph. In its generality, this is DIRECTED FEEDBACK VERTEX SET or DIRECTED FEEDBACK ARC SET, respectively. The existence of a polynomial kernel for these problems is a notorious open problem in the field of kernelization, and little progress has been made. In this paper, we consider both deletion problems with an additional restriction on D, namely that D must be an out-forest, an out-tree, or a (directed) pumpkin. Our main results show that for each of these three restrictions the vertex deletion problem remains NP-hard, but we can obtain a kernel with k^{O(1)} vertices on general digraphs G. We also show that, in contrast to the vertex deletion problem, the arc deletion problem with each of the above restrictions can be solved in polynomial time.
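
The target classes are easy to test for. The sketch below (ours; we take "out-forest" to mean in-degree at most one plus acyclicity, with an out-tree being a connected out-forest, and the brute force is for tiny digraphs only) checks the out-forest property and searches for a small vertex-deletion set X.

from itertools import combinations

def is_out_forest(vertices, arcs):
    """Every vertex has in-degree at most one and there is no directed cycle."""
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        indeg[v] += 1
    if any(dv > 1 for dv in indeg.values()):
        return False
    parent = {v: u for u, v in arcs}   # well-defined once in-degrees are <= 1
    for v in vertices:
        seen, cur = set(), v
        while cur in parent:           # walk backwards along unique in-arcs
            if cur in seen:
                return False           # found a directed cycle
            seen.add(cur)
            cur = parent[cur]
    return True

def deletion_to_out_forest(vertices, arcs, k):
    """Brute force: a set X with |X| <= k such that G - X is an out-forest."""
    for r in range(k + 1):
        for X in map(set, combinations(vertices, r)):
            rest = [v for v in vertices if v not in X]
            sub = [(u, v) for u, v in arcs if u not in X and v not in X]
            if is_out_forest(rest, sub):
                return X
    return None

print(deletion_to_out_forest([0, 1, 2], [(0, 1), (1, 2), (2, 0)], k=1))   # e.g. {0}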

Cite as

Matthias Mnich and Erik Jan van Leeuwen. Polynomial Kernels for Deletion to Classes of Acyclic Digraphs. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 55:1-55:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mnich_et_al:LIPIcs.STACS.2016.55,
  author =	{Mnich, Matthias and van Leeuwen, Erik Jan},
  title =	{{Polynomial Kernels for Deletion to Classes of Acyclic Digraphs}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.55},
  URN =		{urn:nbn:de:0030-drops-57569},
  doi =		{10.4230/LIPIcs.STACS.2016.55},
  annote =	{Keywords: directed feedback vertex/arc set, parameterized algorithms, kernels}
}
Document
Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function

Authors: Mateus de Oliveira Oliveira


Abstract
In this work we study the relationship between size and treewidth of circuits computing variants of the element distinctness function. First, we show that for each n, any circuit of treewidth t computing the element distinctness function delta_n:{0,1}^n -> {0,1} must have size at least Omega((n^2)/(2^{O(t)}*log(n))). This result provides a non-trivial generalization of a super-linear lower bound for the size of Boolean formulas (treewidth 1) due to Neciporuk. Subsequently, we turn our attention to read-once circuits, which are circuits where each variable labels at most one input vertex. For each n, we show that any read-once circuit of treewidth t and size s computing a variant tau_n:{0,1}^n -> {0,1} of the element distinctness function must satisfy the inequality t * log(s) >= Omega(n/log(n)). Using this inequality in conjunction with known results in structural graph theory, we show that for each fixed graph H, read-once circuits computing tau_n which exclude H as a minor must have size at least Omega(n^2/log^{4}(n)). For certain well studied functions, such as the triangle-freeness function, this last lower bound can be improved to Omega(n^2/log^2(n)).

Cite as

Mateus de Oliveira Oliveira. Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{deoliveiraoliveira:LIPIcs.STACS.2016.56,
  author =	{de Oliveira Oliveira, Mateus},
  title =	{{Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.56},
  URN =		{urn:nbn:de:0030-drops-57571},
  doi =		{10.4230/LIPIcs.STACS.2016.56},
  annote =	{Keywords: non-linear lower bounds, treewidth, element distinctness}
}
Document
On Space Efficiency of Algorithms Working on Structural Decompositions of Graphs

Authors: Michal Pilipczuk and Marcin Wrochna


Abstract
Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, 2014], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., 2003], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem parameterized by the number of input strings does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on the access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth.
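
The LCS problem mentioned above shows the time/space tension directly: the textbook dynamic program below (ours) has Theta(n^k) table entries, so both its running time and its working space are of XP type; the conjecture is that the space cannot be brought down to FPT while keeping XP running time.

from functools import lru_cache

def lcs(strings):
    """Longest common subsequence of k strings via the classic DP."""
    k = len(strings)

    @lru_cache(maxsize=None)
    def L(pos):
        if any(p == 0 for p in pos):
            return 0
        heads = [s[p - 1] for s, p in zip(strings, pos)]
        if all(h == heads[0] for h in heads):
            return 1 + L(tuple(p - 1 for p in pos))
        return max(L(pos[:i] + (pos[i] - 1,) + pos[i + 1:]) for i in range(k))

    return L(tuple(len(s) for s in strings))

print(lcs(["abcde", "ace", "bce"]))   # 2, realized by "ce"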

Cite as

Michal Pilipczuk and Marcin Wrochna. On Space Efficiency of Algorithms Working on Structural Decompositions of Graphs. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{pilipczuk_et_al:LIPIcs.STACS.2016.57,
  author =	{Pilipczuk, Michal and Wrochna, Marcin},
  title =	{{On Space Efficiency of Algorithms Working on Structural Decompositions of Graphs}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.57},
  URN =		{urn:nbn:de:0030-drops-57588},
  doi =		{10.4230/LIPIcs.STACS.2016.57},
  annote =	{Keywords: tree decomposition, LCS, tree-depth, NAuxSA, Savitch’s theorem}
}
Document
Improved Approximation Algorithms for Balanced Partitioning Problems

Authors: Harald Räcke and Richard Stotz


Abstract
We present approximation algorithms for balanced partitioning problems. These problems are notoriously hard, and we present new bicriteria approximation algorithms that approximate the optimal cost and relax the balance constraint. In the first scenario, we consider Min-Max k-Partitioning, the problem of dividing a graph into k equal-sized parts while minimizing the maximum cost of edges cut by a single part. Our approximation algorithm relaxes the size of the parts by a factor of (1+epsilon) and approximates the optimal cost by O(log^{1.5}(n) * log(log(n))), for every 0 < epsilon < 1. This is the first nontrivial algorithm for this problem that relaxes the balance constraint by less than 2. In the second scenario, we consider strategies to find a minimum-cost mapping of a graph of processes to a hierarchical network with identical processors at the leaves. This Hierarchical Graph Partitioning problem has been studied recently by Hajiaghayi et al., who presented an (O(log(n)),(1+epsilon)(h+1)) approximation algorithm for constant network heights h. We use spreading metrics to give an improved (O(log(n)),(1+epsilon)h) approximation algorithm that runs in polynomial time for arbitrary network heights.
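
The Min-Max k-Partitioning objective is easy to state operationally. The exhaustive sketch below (ours, tiny graphs only) computes the exact optimum that the algorithm above approximates while relaxing part sizes by (1+epsilon).

from itertools import product

def minmax_k_partition(vertices, edges, k):
    """Minimum over all partitions into k equal-sized parts of the maximum
    number of edges cut by (i.e. leaving) a single part."""
    n = len(vertices)
    assert n % k == 0
    best = None
    for assignment in product(range(k), repeat=n):
        if any(assignment.count(i) != n // k for i in range(k)):
            continue
        part = dict(zip(vertices, assignment))
        cost = max(sum(1 for u, v in edges if (part[u] == i) != (part[v] == i))
                   for i in range(k))
        best = cost if best is None else min(best, cost)
    return best

V = list(range(4))
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(minmax_k_partition(V, E, k=2))   # 3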

Cite as

Harald Räcke and Richard Stotz. Improved Approximation Algorithms for Balanced Partitioning Problems. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{racke_et_al:LIPIcs.STACS.2016.58,
  author =	{R\"{a}cke, Harald and Stotz, Richard},
  title =	{{Improved Approximation Algorithms for Balanced Partitioning Problems}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.58},
  URN =		{urn:nbn:de:0030-drops-57598},
  doi =		{10.4230/LIPIcs.STACS.2016.58},
  annote =	{Keywords: graph partitioning, dynamic programming, scheduling}
}
