
Quadratically Tight Relations for Randomized Query Complexity

Published in Theory of Computing Systems.

Abstract

In this work we investigate the problem of quadratically tightly approximating the randomized query complexity of Boolean functions R(f). The certificate complexity C(f) is such a complexity measure for the zero-error randomized query complexity R0(f): C(f) ≤ R0(f) ≤ C(f)^2. In the first part of the paper we introduce a new complexity measure, expectational certificate complexity EC(f), which is also a quadratically tight bound on R0(f): EC(f) ≤ R0(f) = O(EC(f)^2). For R(f), we prove that EC(f)^{2/3} ≤ R(f). We then prove that EC(f) ≤ C(f) ≤ EC(f)^2 and show that there is a quadratic separation between the two, thus EC(f) gives a tighter upper bound for R0(f). The measure is also related to the fractional certificate complexity FC(f) as follows: FC(f) ≤ EC(f) = O(FC(f)^{3/2}). This also connects to an open question by Aaronson whether FC(f) is a quadratically tight bound for R0(f), as EC(f) is in fact a relaxation of FC(f). In the second part of the work, we investigate whether the corruption bound corr_ϵ(f) quadratically approximates R(f). By Yao’s theorem, it is enough to prove that the square of the corruption bound upper bounds the distributional query complexity \(\mathsf {D}^{\mu }_{\epsilon }(f)\) for all input distributions μ. Here, we show that this statement holds for input distributions in which the various bits of the input are distributed independently. This is a natural and interesting subclass of distributions, and is also in the spirit of the input distributions studied in communication complexity, in which the inputs to the two communicating parties are statistically independent. Our result also improves upon a result of Harsha et al. (2016), who proved a similar but weaker statement. We also note that the analogous statement in communication complexity is open.


Notes

  1. Jain and Klauck in their paper defined prt_ϵ(f) to be the value of the linear program, instead of the logarithm of the value of the program.

References

  1. Aaronson, S.: Quantum certificate complexity. J. Comput. Syst. Sci. 74(3), 313–322 (2008)


  2. Aaronson, S., Ben-David, S., Kothari, R.: Separations in query complexity using cheat sheets. In: Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC’16, pp. 863–876. ACM, New York (2016)

  3. Ambainis, A., Kokainis, M., Kothari, R.: Nearly optimal separations between communication (or query) complexity and partitions. In: Proceedings of the 31St Conference on Computational Complexity, CCC’16, pp. 4:1–4:14. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Germany (2016)

  4. Beals, R., Buhrman, H., Cleve, R., Mosca, M., de Wolf, R.: Quantum lower bounds by polynomials. J. ACM 48(4), 778–797 (2001)


  5. Blum, M., Impagliazzo, R.: Generic oracles and oracle classes. In: Proceedings of the 28th annual symposium on foundations of computer science, SFCS’87, pp. 118–126. IEEE Computer Society, Washington (1987)

  6. Buhrman, H., de Wolf, R.: Complexity measures and decision tree complexity: a survey. Theor. Comput. Sci. 288(1), 21–43 (2002)


  7. Gilmer, J., Saks, M., Srinivasan, S.: Composition limits and separating examples for some Boolean function complexity measures. Combinatorica 36(3), 265–311 (2016)


  8. Harsha, P., Jain, R., Radhakrishnan, J.: Partition bound is quadratically tight for product distributions. In: 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), vol. 55 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 135:1–135:13, Dagstuhl, Germany. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2016)

  9. Hartmanis, J., Hemachandra, L.A.: One-way functions, robustness, and non-isomorphism of NP-complete sets. In: Proceedings of 2nd Structure in Complexity Theory, pp. 160–173 (1987)

  10. Jain, R., Klauck, H.: The partition bound for classical communication complexity and query complexity. In: Proceedings of the 2010 IEEE 25th Annual Conference on Computational Complexity, CCC’10, pp. 247–258. IEEE Computer Society, Washington (2010)

  11. Kulkarni, R., Tal, A.: On fractional block sensitivity. Chicago Journal of Theoretical Computer Science 8, 1–16 (2016)


  12. Nisan, N.: CREW PRAMs and decision trees. In: Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, STOC’89, pp. 327–335. ACM, New York (1989)

  13. Tal, A.: Properties and applications of Boolean function composition. In: Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, ITCS’13, pp. 441–454. ACM, New York (2013)

  14. Tardos, G.: Query complexity, or why is it difficult to separate NP^A ∩ coNP^A from P^A by a random oracle. Combinatorica 9, 385–392 (1990)


Download references

Acknowledgements

This work is supported in part by the Singapore National Research Foundation under NRF RF Award No. NRF-NRFF2013-13 and by the Ministry of Education, Singapore, under the Research Centres of Excellence programme, Tier-3 grant “Random numbers from quantum processes” No. MOE2012-T3-1-009.

M.S. is partially funded by the ANR Blanc program under contract ANR-12-BS02-005 (RDAM project).

J.V. is supported by the ERC Advanced Grant MQC. Part of this work was done while J.V. was an intern at the Centre for Quantum Technologies at the National University of Singapore.

We thank Anurag Anshu for helpful discussions.

Author information

Corresponding author

Correspondence to Jevgēnijs Vihrovs.


This article is part of the Topical Collection on Computer Science Symposium in Russia (2018)

Appendices

Appendix A: Analysis of Algorithm 1

For correctness, note that the algorithm outputs 1 on all 1-inputs. Thus assume x is a 0-input from here on in the analysis. Then we have to prove that \(\mathcal {A}\) outputs 0 with probability at least 1 − ϵ. This amounts to showing that, with probability at least 1 − ϵ, the queried values reduce the function to the constant 0 function and the algorithm terminates within \(\lceil \mathsf{EC}(f)^{2}/\epsilon \rceil\) iterations. (For notational convenience, in what follows we will drop the ceilings and assume \(\mathsf{EC}(f)^{2}/\epsilon\) is an integer.)

Define a random variable Tk as

$$ T_{k} = \left\{\begin{array}{ll} \frac{1}{\mathsf{EC}(f)}, & \text{if } \mathcal{A} \text{ has terminated before the } k\text{-th iteration}, \\ w_{x}(i), & \text{if at the } k\text{-th iteration } \mathcal{A} \text{ has queried } x_{i} \text{ for the first time}, \\ 0, & \text{if } x_{i} \text{ has been queried before the } k\text{-th iteration}. \end{array}\right. $$

Let \(T = {\sum }_{k=1}^{\mathsf {EC}(f)^{2}/\epsilon } T_{k}\). As \({\sum }_{i \in [n]} w_{x}(i) \leq \mathsf {EC}(f)\) by definition, T > EC(f) implies that \(\mathcal {A}\) has terminated before point 2. Then it has returned 0, and the answer is correct. Let p = Pr[T > EC(f)]. We will prove that p ≥ 1 − ϵ, in which case we would be done.

We continue by showing an upper and a lower bound on \(\mathbb {E}[T]\).

  • The value of T always satisfies

    $$ T \leq \sum\limits_{i \in [n]} w_{x}(i) + \frac{\mathsf{EC}(f)^{2}}{\epsilon} \cdot \frac{1}{\mathsf{EC}(f)} \leq \left( 1 + \frac{1}{\epsilon}\right) \mathsf{EC}(f). $$

    Therefore, \(\mathbb {E}[T] \leq p \cdot \left (1 + \frac {1}{\epsilon }\right ) \mathsf {EC}(f) + (1-p) \cdot \mathsf {EC}(f) = \left (1+\frac {p}{\epsilon }\right ) \mathsf {EC}(f)\).

  • Let \(\mathcal {E}_{k}\) be the event that \(\mathcal {A}\) has terminated before the k-th iteration. In case \(\mathcal {A}\) performs the k-th iteration, let y be the consistent 1-input chosen, and let the random variable \(i_{k}\) be the position that \(\mathcal {A}\) queries.

    $$ \begin{array}{@{}rcl@{}} \mathbb{E}[T_{k}] &=& \text{Pr}[\mathcal{E}_{k}] \cdot \frac{1}{\mathsf{EC}(f)} + \text{Pr}[\overline{\mathcal{E}_{k}}] \cdot \mathbb{E}[w_{x}(i_{k}) \mid \overline{\mathcal{E}_{k}}] \\ &\geq& \text{Pr}[\mathcal{E}_{k}] \cdot \frac{1}{\mathsf{EC}(f)} + \text{Pr}[\overline{\mathcal{E}_{k}}] \cdot \sum\limits_{i : x_{i} \neq y_{i}} w_{x}(i) \mu_{y}(i) \\ &=& \text{Pr}[\mathcal{E}_{k}] \cdot \frac{1}{\mathsf{EC}(f)} + \text{Pr}[\overline{\mathcal{E}_{k}}] \cdot \sum\limits_{i : x_{i} \neq y_{i}} w_{x}(i) w_{y}(i) / \sum\limits_{i \in [n]} w_{y}(i)\\ &\geq& \text{Pr}[\mathcal{E}_{k}] \cdot \frac{1}{\mathsf{EC}(f)} + \text{Pr}[\overline{\mathcal{E}_{k}}] \cdot \frac{1}{\mathsf{EC}(f)} \\ &=& \frac{1}{\mathsf{EC}(f)}, \end{array} $$

    The first inequality here follows from the fact that any \(i\) such that \(x_{i} \neq y_{i}\) has not been queried yet, because x and y are both consistent with the queries made so far. Thus, the inequality holds regardless of the randomness chosen by \(\mathcal {A}\). The second inequality follows from the expectational certificate properties \({\sum }_{i:x_{i}\neq y_{i}} w_{x}(i)w_{y}(i) \geq 1\) and \({\sum }_{i \in [n]} w_{y}(i) \leq \mathsf {EC}(f)\). By the linearity of expectation, we have that \( \mathbb {E}[T] = {\sum }_{k = 1}^{\mathsf {EC}(f)^{2}/\epsilon } \mathbb {E}[T_{k}] \geq \mathsf {EC}(f)/\epsilon . \)

Combining the two bounds together, we get \(\frac {\mathsf {EC}(f)}{\epsilon } \leq \left (1+\frac {p}{\epsilon }\right ) \mathsf {EC}(f)\), i.e., \(\frac{1}{\epsilon} \leq 1 + \frac{p}{\epsilon}\). Thus, p ≥ 1 − ϵ.

Appendix B: Analysis of Algorithm 2

Proof of Theorem 2

We analyze the correctness and performance of the algorithm for inputs sampled according to μ.

For each i = 2, …, 4bs(f), define \(T^{(i)}\) to be the event that \(\mathcal {B}\) completes at least i − 1 iterations, and define \(T^{(1)}\) to be the trivially true event. Let i be arbitrary, and assume that \(T^{(i)}\) occurs. Then \(A^{(i)}\) denotes the ϵ-error certificate (under \(\eta^{(i)}\)) picked in the i-th iteration in step 2a. Let \(b^{(i)} \in \{0, 1\}\) be the value approximately certified by \(A^{(i)}\) under \(\eta^{(i)}\). Let \(E^{(i)} \subseteq A^{(i)}\) denote the set of inputs \(y \in A^{(i)}\) such that \(f(y) \neq b^{(i)}\). Recall from Section 2 that \(Q_{A^{(i)}}\) is the set of variables set by \(A^{(i)}\). For each assignment \(s \in \{0,1\}^{Q_{A^{(i)}}}\) to the variables fixed by \(A^{(i)}\) and each subset \(U \subseteq A^{(i)}\), let U ⊕ s denote the shift of U by the vector s. Formally (‘⊕’ stands for bitwise exclusive or),

$$ \begin{array}{@{}rcl@{}} &&U\oplus s:=\{y \in \{0,1\}^{n}: \forall j \in Q_{A^{(i)}}, y_{j}=A^{(i)}_{j}\oplus s_{j} \text{ and }\\ &&{\kern98pt}\exists z \in U\text{ s.t. } \forall j \notin Q_{A^{(i)}}, y_{j}=z_{j}\}. \end{array} $$
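
To make the shift operation concrete, here is a minimal brute-force sketch in Python (the function name `shift` and the set-of-tuples representation are ours, not from the paper): a subcube is given by its fixed coordinates Q and assignment A, and inputs are bit-tuples.

```python
from itertools import product

def shift(U, Q, A, s, n):
    """U ⊕ s: all y in {0,1}^n that agree with A ⊕ s on the fixed
    coordinates Q and agree with some z in U outside Q."""
    out = set()
    for y in product((0, 1), repeat=n):
        if any(y[j] != A[j] ^ s[j] for j in Q):
            continue  # y must lie in the shifted subcube
        if any(all(y[j] == z[j] for j in range(n) if j not in Q)
               for z in U):
            out.add(y)
    return out

# n = 2, Q = {0}, A fixes x0 = 1; shifting by s = (1, 0) flips that bit,
# moving U into the subcube x0 = 0 while preserving the free coordinate.
assert shift({(1, 0)}, {0}, (1, 0), (1, 0), 2) == {(0, 0)}
```

In words, the shift re-fixes the certified coordinates to A ⊕ s and leaves the free coordinates of each element of U untouched.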

For i ≥ 2, define \(\mathcal L^{(i)}\) to be the set of variables queried in the first i − 1 iterations, and define \(\mathcal L^{(1)}:=\varnothing \). Note that \(\eta ^{(i)}=\mu \mid _{x_{\mathcal L^{(i)}}}\), and that \(\eta^{(i)}\) is a product distribution.

Define all the above random variables to be ⊥ if \(T^{(i)}\) does not take place. Now define

$$ X^{(i)}=\left\{\begin{array}{ll} 1 & \text{if } T^{(i)} \text{ occurs and } x \in \bigcup_{s \in \{0,1\}^{Q_{A^{(i)}}}} E^{(i)}\oplus s,\\ 0 & \text{otherwise.} \end{array}\right. $$

First we bound the number of queries made by \(\mathcal {B}\). Since \(\mathcal {B}\) terminates when either t_0 = 2bs(f) or t_1 = 2bs(f), it performs at most 4bs(f) − 1 iterations. On the other hand, since \(\eta^{(i)}\) is a product distribution for each i, we have \(|A^{(i)}| \leq \mathsf {corr}^{\times }_{\min , \epsilon }(f)\). Therefore, the algorithm makes \(O(\mathsf {corr}^{\times }_{\min , \epsilon }(f) \cdot \mathsf {bs}(f))\) queries.

Now we prove that it errs on at most a 4ϵ fraction of the inputs according to μ.

Proposition 2

For every i and \(s \in \{0,1\}^{Q_{A^{(i)}}}\), \(\text {Pr}[x \in E^{(i)}\oplus s \mid T^{(i)}, x \in A^{(i)}\oplus s] \leq \epsilon \).

Proof

Condition on the events \(T^{(i)}\) and \(x \in A^{(i)}\oplus s\). Furthermore, condition on \(x_{\mathcal L^{(i)}}\). Notice that under this conditioning, the distribution of the input x is \(\eta ^{(i)}=\mu \mid _{x_{\mathcal L^{(i)}}}\).

If \(T^{(i)}\) occurs, \(A^{(i)}\) is an ϵ-error \(b^{(i)}\)-certificate under \(\eta^{(i)}\). So \(\text {Pr}_{x \sim \eta ^{(i)}}[x \in E^{(i)} \mid T^{(i)}, x \in A^{(i)}] \leq \epsilon \). Since \(\eta^{(i)}\) is a product distribution, as observed before, we have that for each \(s \in \{0,1\}^{Q_{A^{(i)}}}\), \(\text {Pr}_{x \sim \eta ^{(i)}}[x \in E^{(i)}\oplus s \mid T^{(i)}, x \in A^{(i)}\oplus s]=\text {Pr}_{x \sim \eta ^{(i)}}[x \in E^{(i)} \mid T^{(i)}, x \in A^{(i)}]\leq \epsilon \). The proposition follows. □
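
The shift-invariance of conditional probabilities under a product distribution, which this proof relies on, is easy to check numerically. A toy check in Python (the distribution, subcube, and error set below are illustrative choices of ours, not objects from the paper):

```python
from itertools import product

# Product distribution on {0,1}^2: Pr[x0 = 1] = 0.3, Pr[x1 = 1] = 0.6.
p = {x: (0.3 if x[0] else 0.7) * (0.6 if x[1] else 0.4)
     for x in product((0, 1), repeat=2)}

def cond(event, given):
    """Conditional probability Pr[x in event | x in given] under p."""
    return sum(p[x] for x in event) / sum(p[x] for x in given)

A = {x for x in p if x[0] == 1}    # subcube fixing x0 = 1
E = {(1, 1)}                       # "error" subset of A
A_s = {x for x in p if x[0] == 0}  # A shifted on its fixed coordinate
E_s = {(0, 1)}                     # E under the same shift

# The conditional error probability is unchanged by the shift.
assert abs(cond(E, A) - cond(E_s, A_s)) < 1e-12
```

The equality holds because, under a product distribution, conditioning on a subcube fixes some coordinates without affecting the marginals of the free ones.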

In particular, Proposition 2 implies that for all i = 1,…, 4bs(f),

$$ \text{Pr}[X^{(i)}=1] \leq \epsilon. $$
(1)

Since \(\mathcal {B}\) runs for at most 4bs(f) − 1 < 4bs(f) steps, by (1), linearity of expectation and Markov’s inequality we have that

$$ \text{Pr}[|\{i \mid X^{(i)}=1\}| \geq \mathsf{bs}(f)] \leq 4\epsilon. $$
(2)

For i such that \(T^{(i)}\) occurs, define \(S^{(i)}:=\{j \in Q_{A^{(i)}} \mid x_{j} \neq A^{(i)}(j)\}\). The following proposition will play a central role in our analysis.

Proposition 3

Let \(i_{1} < i_{2}\), and for each \(i \in \{i_{1}, i_{2}\}\) suppose that \(T^{(i)}\) occurs and \(X^{(i)} = 0\). Then \(f(x^{S^{(i)}})=b^{(i)}\) and \(S^{(i_{1})} \cap S^{(i_{2})}=\varnothing \). In particular, if \(b^{(i_{1})}=b^{(i_{2})}\) and \(f(x)=1-b^{(i_{1})}\), then \(S^{(i_{1})}\) and \(S^{(i_{2})}\) are disjoint sensitive blocks for x.

Proof

Clearly, \(x^{S^{(i)}} \in A^{(i)}\). Also, since \(X^{(i)}=0\), \(x \notin E^{(i)}\oplus s\) for any s. Thus \(x^{S^{(i)}} \notin E^{(i)}\). Hence \(f(x^{S^{(i)}})=b^{(i)}\). To see that \(S^{(i_{1})} \cap S^{(i_{2})}=\varnothing \), let \(j \in S^{(i_{1})}\). It is easy to see that \(i_{2}>i_{1}\) implies that the distribution \(\eta ^{(i_{2})}\) at step \(i_{2}\) is supported only on inputs consistent with \(x_{Q_{A^{(i_{1})}}}\). Hence, if \(j \in Q_{A^{(i_{2})}}\), then \(x_{j}=A^{(i_{2})}(j)\), which implies that \(j \notin S^{(i_{2})}\). □

For the rest of the proof, condition on the event that \(\mathcal {B}\) terminates at iteration i. We will bound the probability that \(\mathcal {B}\) errs.

First, condition on the event that \(\mathcal {B}\) terminates in step 2d. Then the probability that it errs is \(\text {Pr}[x \in E^{(i)} \mid T^{(i)}, x \in A^{(i)}] \leq \epsilon \) (by Proposition 2 invoked with \(s=0^{Q_{A^{(i)}}}\)).

Next, condition on the event that \(\mathcal {B}\) terminates at step 2e with t_0 = 2bs(f) (the case t_1 = 2bs(f) is symmetric). By (2), \(|\{i \mid X^{(i)} = 1\}| \geq \mathsf{bs}(f)\) with probability at most 4ϵ. Condition on \(|\{i \mid X^{(i)} = 1\}| < \mathsf{bs}(f)\). Then \(\mathcal {B}\) outputs 0. We claim that then f(x) = 0 with probability 1. Towards a contradiction, assume that f(x) = 1. As t_0 = 2bs(f) and \(|\{i \mid X^{(i)} = 1\}| < \mathsf{bs}(f)\), in at least 2bs(f) − (bs(f) − 1) = bs(f) + 1 iterations j ≤ i we have \(b^{(j)} = 0\) and \(X^{(j)} = 0\). By Proposition 3, the blocks \(S^{(j)}\) for those iterations j are sensitive for x and pairwise disjoint. Since any input has at most bs(f) pairwise disjoint sensitive blocks, we have the desired contradiction.

Thus the probability that \(\mathcal {B}\) errs is at most \(\max \{\epsilon , 4\epsilon \}=4\epsilon \). □
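
The counting step above uses that bs(f) bounds the size of any family of pairwise disjoint sensitive blocks at a fixed input. A brute-force illustration in Python for the 3-bit OR at x = 0^3 (a toy example of ours, not from the paper):

```python
from itertools import combinations

def OR(x):
    return int(any(x))

def flip(x, B):
    """Return x with the bits in block B flipped."""
    return tuple(1 - b if i in B else b for i, b in enumerate(x))

n = 3
x = (0, 0, 0)

# All sensitive blocks of OR at x: flipping the block changes the value.
blocks = [set(B)
          for r in range(1, n + 1)
          for B in combinations(range(n), r)
          if OR(flip(x, B)) != OR(x)]

def max_disjoint(blocks):
    """Size of the largest family of pairwise disjoint blocks."""
    for r in range(len(blocks), 0, -1):
        for fam in combinations(blocks, r):
            if all(a.isdisjoint(b) for a, b in combinations(fam, 2)):
                return r
    return 0

# Every nonempty block is sensitive, yet at most 3 can be disjoint:
# bs(OR_3, 0^3) = 3, attained by the three singletons.
assert max_disjoint(blocks) == 3
```

So although the 0-input of OR has 7 sensitive blocks, any family of pairwise disjoint ones has size at most bs = 3, which is exactly the bound the contradiction argument exploits.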

Appendix C: Proof of Lemma 5

Proof

Let c = \(\mathsf{prt}_{\epsilon}(f)\). Abusing notation, let \(\{w_{z,A}\}_{z,A}\) be a primal feasible point which minimizes the objective. Thus \({\sum }_{z,A}w_{z,A}\cdot 2^{|A|}=2^{c}\). We immediately have that,

$$ 2^{c} \geq \sum\limits_{z, A: |A|>c + \log (1/\epsilon)}w_{z, A}\cdot 2^{|A|}\geq \frac{2^{c}}{\epsilon}\sum\limits_{z, A: |A|>c + \log (1/\epsilon)}w_{z, A}. $$

implying,

$$ \sum\limits_{z,A: |A|>c +\log (1/\epsilon)}w_{z,A} \leq \epsilon. $$
(3)
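
Inequality (3) is Markov’s inequality applied to the primal weights: each subcube with co-dimension larger than c + log(1/ϵ) contributes more than 2^c/ϵ per unit of weight to the objective, so the total weight on such subcubes is at most ϵ. A numeric sanity check in Python (the particular weights are arbitrary toy values of ours):

```python
import math

eps = 0.25

# Toy primal point: (weight w_{z,A}, co-dimension |A|) pairs.
w = [(0.5, 1), (0.3, 2), (0.1, 5), (0.05, 8)]

value = sum(wt * 2 ** k for wt, k in w)  # objective = 2^c
c = math.log2(value)

# Total weight on subcubes with |A| > c + log(1/eps).
heavy = sum(wt for wt, k in w if k > c + math.log2(1 / eps))

# Markov: the heavy subcubes carry weight at most eps.
assert heavy <= eps
```

Any choice of nonnegative weights satisfies the same assertion, since heavy · 2^{c + log(1/ϵ)} ≤ value = 2^c.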

Let μ be any product probability distribution on \(\{0,1\}^{n}\) (in fact, the proof works for any distribution μ). Without loss of generality, assume that \(\text {Pr}_{x \sim \mu }[f(x)=1] \geq \text {Pr}_{x \sim \mu }[f(x)=0]\). We shall show that \(\mathsf {corr}_{2\epsilon }^{1,\mu }(f) \leq c + \log (1/\epsilon )\). That will prove the lemma.

If \(\text {Pr}_{x \sim \mu }[f(x)=0]=0\), then \(\{0,1\}^{n}\) is a 0-error 1-certificate of co-dimension 0, and we are done. From now on, we will assume that \(\text {Pr}_{x \sim \mu }[f(x)=0]>0\).

Equation (3) and the two primal constraints imply that for each \(x \in \{0,1\}^{n}\),

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{A \ni x,|A| \leq c + \log (1/\epsilon)} w_{f(x),A} \geq 1-2\epsilon; \end{array} $$
(4)
$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{A \ni x, |A| \leq c + \log (1/\epsilon)} w_{1-f(x),A} \leq 2\epsilon. \end{array} $$
(5)

Multiplying (4) and (5) by μ(x), summing the former over \(f^{-1}(1)\) and the latter over \(f^{-1}(0)\), and re-arranging the order of summations, we have,

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{A: |A| \leq c +\log (1/\epsilon)} \sum\limits_{x \in A, f(x)=1} \mu(x) \cdot w_{1,A} \geq (1-2\epsilon) \cdot \sum\limits_{x \in f^{-1}(1)}\mu(x); \end{array} $$
(6)
$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{A : |A| \leq c +\log (1/\epsilon)} {\sum}_{x \in A, f(x)=0} \mu(x) \cdot w_{1,A} \leq 2\epsilon \cdot {\sum}_{x \in f^{-1}(0)}\mu(x). \end{array} $$
(7)

Dividing (6) by (7) (note that \({\sum }_{x \in f^{-1}(0)}\mu (x) \neq 0\) by our assumption about μ), we have that,

$$ \begin{array}{@{}rcl@{}} \frac{{\sum}_{A: |A| \leq c +\log (1/\epsilon)} w_{1,A}\cdot \left( {\sum}_{x \in A, f(x)=1} \mu(x)\right)}{{\sum}_{A: |A| \leq c + \log (1/\epsilon)} w_{1,A}\cdot \left( {\sum}_{x \in A, f(x)=0} \mu(x)\right)} \!\geq\! \frac{1-2\epsilon}{2\epsilon} \cdot \frac{{\sum}_{x \in f^{-1}(1)}\mu(x)}{{\sum}_{x \in f^{-1}(0)}\mu(x)} \!\geq\! \frac{1-2\epsilon}{2\epsilon}. \end{array} $$

The last inequality above holds because of our assumption about ΞΌ. This implies that there exists a subcube A with co-dimension \(|A| \leq c + \log (1/\epsilon )\) such that,

$$ \frac{{\sum}_{x \in A, f(x)=1} \mu(x)}{{\sum}_{x \in A, f(x)=0} \mu(x)} \geq \frac{1-2\epsilon}{2\epsilon}. $$

Thus,

$$ \underset{x \sim \mu}{\text{Pr}}[f(x)=1 \mid x \in A] \geq 1-2\epsilon. $$

In other words, A is a 2πœ–-error 1-certificate under ΞΌ. We have,

$$ \mathsf{corr}_{\min,2\epsilon}^{\mu}(f) \leq \mathsf{corr}_{2\epsilon}^{1,\mu}(f) \leq |A| \leq \mathsf{prt}_{\epsilon} (f) +\log(1/\epsilon). $$

□

Note that in (5), a better upper bound of simply ϵ follows from the first primal constraint of the partition bound LP. This results in a slightly better error \(\frac {\epsilon }{1-\epsilon } < 2\epsilon \) for ϵ < 1/2. We thank the anonymous reviewer for pointing this out.

Appendix D: Proof of Lemma 6

Proof

Let x be such that fbs(f, x) = fbs(f), and let b = f(x). We construct a distribution μ such that \(\mathsf {corr}_{\epsilon }^{b,\mu }(f) \geq \mathsf {fbs}(f)\).

Suppose that x has k sensitive blocks \(B_{1}, \ldots , B_{k}\). Let \(u_{1}, \ldots , u_{k}\) be the corresponding optimal solution to the fbs(f, x) linear program. Let c ∈ (0, 1 − ϵ) be a constant, and define μ(x) = c and \(\mu (x^{B_{i}}) = (1-c)\frac {u_{i}}{{\sum }_{i=1}^{k} u_{i}} = (1-c)\frac {u_{i}}{\mathsf {fbs}(f)}\). Clearly, μ is a probability distribution on \(\{0,1\}^{n}\).
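
For concreteness, here is the construction instantiated in Python for the 3-bit OR at x = 0^3 (a toy example of ours), where fbs(f, x) = 3 is attained by the singleton blocks with u_i = 1:

```python
def flip(x, B):
    """Return x with the bits in block B flipped."""
    return tuple(1 - b if i in B else b for i, b in enumerate(x))

n = 3
x = (0, 0, 0)
blocks = [{0}, {1}, {2}]  # sensitive blocks B_i of OR at 0^3
u = [1.0, 1.0, 1.0]       # optimal fractional solution; fbs = 3
fbs = sum(u)

c = 0.1                   # any constant in (0, 1 - eps)
mu = {x: c}
for B, ui in zip(blocks, u):
    mu[flip(x, B)] = (1 - c) * ui / fbs  # mass (1-c)/3 on each x^{B_i}

# mu is a probability distribution supported on x and the x^{B_i}.
assert abs(sum(mu.values()) - 1.0) < 1e-12
assert abs(mu[(1, 0, 0)] - (1 - c) / 3) < 1e-12
```

The mass c on x forces any low-error b-certificate to contain x, while the remaining mass, spread in proportion to the u_i, makes every certificate pay for the blocks it cuts.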

Let A be the shortest ϵ-error b-certificate according to μ, and recall that \(Q_{A}\) is the set of variables fixed by A. Any input \(x^{B_{i}}\) is inconsistent with A iff \(B_{i} \cap Q_{A} \neq \varnothing \), thus

$$\sum\limits_{i : B_{i} \cap Q_{A} \neq \varnothing} \mu(x^{B_{i}}) = \underset{y \sim \mu}{\text{Pr}}\left[ f(y) \neq b, y \notin A\right].$$

We also have

$$\frac{\text{Pr}_{y \sim \mu}\left[ f(y) = b, y \in A\right]}{\text{Pr}_{y \sim \mu}\left[ f(y) = b, y \in A\right]+\text{Pr}_{y \sim \mu}\left[ f(y) \neq b, y \in A\right]} \geq 1-\epsilon$$

by definition of A. Since \(\text {Pr}_{y \sim \mu }\left [ f(y) = b, y \in A\right ] = c\), this implies

$$\underset{y \sim \mu}{\text{Pr}}\left[ f(y) \neq b, y \in A\right] \leq c \cdot \frac{\epsilon}{1-\epsilon}.$$

Then we get

$$ \begin{array}{@{}rcl@{}} \underset{y \sim \mu}{\text{Pr}}\left[ f(y) \neq b, y \notin A \right] &= \underset{y \sim \mu}{\text{Pr}}\left[ f(y) \neq b \right] - \underset{y \sim \mu}{\text{Pr}}\left[ f(y) \neq b, y \in A \right] \\ &\geq (1-c) - c \cdot \frac{\epsilon}{1-\epsilon} = 1- c \cdot \frac{1}{1-\epsilon}. \end{array} $$

On the other hand, since \({\sum }_{i : j \in B_{i}} u_{i} \leq 1\) for each j ∈ [n], we have

$$ \sum\limits_{i : B_{i} \cap Q_{A} \neq \varnothing} \mu(x^{B_{i}}) \!\leq\! \sum\limits_{j \in Q_{A}} \sum\limits_{i : j \in B_{i}} \mu(x^{B_{i}}) = \sum\limits_{j \in Q_{A}} \sum\limits_{i : j \in B_{i}} (1 - c)\frac{u_{i}} {\mathsf{fbs}(f)} \!\leq\! (1 - c) \frac{|A|}{\mathsf{fbs}(f)}. $$

Therefore,

$$ \frac{\mathsf{corr}_{\epsilon}(f)}{\mathsf{fbs}(f)} \geq \frac{\mathsf{corr}^{b, \mu}_{\epsilon}(f)}{\mathsf{fbs}(f)} = \frac{|A|}{\mathsf{fbs}(f)} \geq \frac{1-\epsilon-c}{(1-\epsilon)(1-c)} = \frac{1-\epsilon-c}{1-\epsilon-c+\epsilon c}. $$

Since the above relation is true for every c, we have,

$$ \frac{\mathsf{corr}_{\epsilon}(f)}{\mathsf{fbs}(f)} \geq \lim\limits_{c \rightarrow 0}\frac{1-\epsilon-c}{1-\epsilon-c+\epsilon c}=1. $$

Thus we have \(\mathsf{corr}_{\epsilon}(f) \geq \mathsf{fbs}(f)\). □


Cite this article

Jain, R., Klauck, H., Kundu, S. et al. Quadratically Tight Relations for Randomized Query Complexity. Theory Comput Syst 64, 101–119 (2020). https://doi.org/10.1007/s00224-019-09935-x
