$\mathsf{CircuitSat} \in \mathrm{DTIME}(2^n)$ consequences

$\mathsf{CircuitSat} \in \mathrm{DTIME}(2^n)$ does not imply $\mathsf{NP} \subseteq \mathrm{DTIME}(2^n)$, since the reduction might increase the input's size. Suppose, for example, that $L \in \mathsf{NP}$ and the reduction from $L$ to $\mathsf{CircuitSat}$ maps strings of length $n$ to strings of length $n^c$ (for all $n \in \mathbb{N}$). Then applying the reduction and using a naive exponential-time algorithm for $\mathsf{CircuitSat}$ yields a $2^{n^c}$-time algorithm for $L$.

If, however, you can place an NP-complete problem in a subexponential time class, e.g. $\mathrm{DTIME}(n^{\log n})$, then $\mathsf{NP} \subsetneq \mathsf{EXP}$: since $n^{O(\log n)}$ is closed under polynomial transformations, i.e. $n \mapsto n^c$, every $\mathsf{NP}$ problem would then be solvable in time $n^{O(\log n)}$, and $\mathrm{DTIME}(n^{O(\log n)}) \subsetneq \mathsf{EXP}$ by the time hierarchy theorem.
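To see the closure under polynomial transformations concretely: suppose the NP-complete problem is solvable in time $m^{\log m}$ and the reduction maps inputs of length $n$ to instances of length $m = n^c$. Then (ignoring the polynomial cost of the reduction itself) the composed algorithm runs in time

```latex
\left(n^{c}\right)^{\log\left(n^{c}\right)}
  = n^{\,c \cdot c \log n}
  = n^{\,c^{2} \log n}
  = n^{O(\log n)},
```

which is still quasi-polynomial, not exponential — unlike the $2^n \mapsto 2^{n^c}$ blow-up in the first example.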

$\mathsf{P/poly} = \bigcup_{c \in \mathbb{N}} \mathrm{SIZE}(n^c)$ consequences

If $\mathsf{NEXPTIME} \subseteq \mathsf{P/poly}$ then $\mathsf{NEXPTIME} = \mathsf{EXPTIME}$.

Proof that $\mathsf{BPP} \subseteq \mathsf{P/poly}$

Let $L$ be a language in $\mathsf{BPP}$, and let $M(x,r)$ be a polynomial-time algorithm that decides $L$ with error at most $1/3$ (where $x$ is the input string and $r$ is a string of random bits).

Construct a new machine $M'(x,R)$, which runs $M$ $48n$ times and takes a majority vote of the results (where $n$ is the input length and $R$ is a sequence of $48n$ independent random strings $r$). Thus $M'$ is also polynomial-time, and has error probability at most $e^{-n}$ by the Chernoff bound. If we can fix $R$, then we obtain a deterministic algorithm.
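As a sanity check on the amplification step, the exact error of the majority vote can be computed from the binomial distribution. This is a toy calculation, not part of the proof; `majority_error` is an illustrative helper:

```python
from fractions import Fraction
from math import comb, exp

def majority_error(p_wrong, trials):
    """Exact probability that at least half of `trials` independent runs
    are wrong, each run being wrong with probability p_wrong.
    (Ties are counted as failures, which only overestimates the error.)"""
    return sum(Fraction(comb(trials, k)) * p_wrong**k * (1 - p_wrong)**(trials - k)
               for k in range((trials + 1) // 2, trials + 1))

n = 4                                    # toy input length
err = majority_error(Fraction(1, 3), 48 * n)
assert float(err) < exp(-n)              # error of M' is indeed below e^{-n}
```

With per-run error $1/3$ and $48n$ runs, the exact error already sits well below $e^{-n}$, matching the Chernoff-bound claim.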

If $\mathrm{Bad}(x)$ is defined as $\{R : M'(x,R) \text{ is incorrect}\}$, we have:

$$\forall x \quad \Pr_R[R \in \mathrm{Bad}(x)] \le \frac{1}{e^n}.$$

The input size is $n$, so there are $2^n$ possible inputs. Thus, by the union bound, the probability that a random $R$ is bad for at least one input $x$ is

$$\Pr_R[\exists x \; R \in \mathrm{Bad}(x)] \le \frac{2^n}{e^n} < 1.$$

In words, the probability that $R$ is bad for some $x$ is less than 1; therefore there must be an $R$ that is good for all $x$. Take such an $R$ to be the advice string in our $\mathsf{P/poly}$ algorithm.

Prove that PP is closed under complement

Let $L \in \mathsf{PP}$. Then there is a polynomial-time probabilistic algorithm $A$ with the property that $x \in L \Rightarrow \Pr[A(x)=1] > \frac{1}{2}$ and $x \notin L \Rightarrow \Pr[A(x)=1] \le \frac{1}{2}$.

> Let $f(|x|)$ be the polynomial upper bound on the running time of $A$ on input $x$. Thus $A$ makes at most $f(|x|)$ random coin flips during its execution. In particular, $x \in L \Rightarrow \Pr[A(x)=1] \ge \frac{1}{2} + \frac{1}{2^{f(|x|)}}$.

I understand that:

- By definition of $\mathsf{PP}$, $x \in L \Rightarrow \Pr[A(x)=1] > \frac{1}{2}$. This means that if indeed $x \in L$, more than half of the possible coin-flip combinations cause the algorithm to accept.
- The algorithm's running time is bounded by $f(|x|)$, so it flips at most $f(|x|)$ coins, and each specific combination of coin tosses (accepting or rejecting, we do not know) occurs with probability at least $\left(\frac{1}{2}\right)^{f(|x|)} = \frac{1}{2^{f(|x|)}}$.

But why is it so obvious that $x \in L \Rightarrow \Pr[A(x)=1] \ge \frac{1}{2} + \frac{1}{2^{f(|x|)}}$? Because $A$ makes at most $f(|x|)$ coin flips, $\Pr[A(x)=1]$ is an integer multiple of $\frac{1}{2^{f(|x|)}}$; the smallest such multiple that is strictly greater than $\frac{1}{2}$ is exactly $\frac{1}{2} + \frac{1}{2^{f(|x|)}}$.
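Since $A$ flips at most $f(|x|)$ coins, every acceptance probability it can realize is a multiple of $2^{-f(|x|)}$. A quick rational-arithmetic check that the smallest such multiple above $\frac{1}{2}$ is indeed $\frac{1}{2} + 2^{-f(|x|)}$ (toy value of $f$, purely illustrative):

```python
from fractions import Fraction

f = 6                                 # toy bound on the number of coin flips
half = Fraction(1, 2)
# All possible acceptance probabilities of A are k / 2^f for integer k.
probs = [Fraction(k, 2**f) for k in range(2**f + 1)]
smallest_above_half = min(p for p in probs if p > half)
assert smallest_above_half == half + Fraction(1, 2**f)   # = 33/64 here
```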

Now define $A'$, which with probability $\frac{1}{2^{f(|x|)+1}}$ rejects outright and otherwise runs $A$; this scales the acceptance probability of $A$ by $\left(1 - \frac{1}{2^{f(|x|)+1}}\right)$. Then

$$x \notin L \Rightarrow \Pr[A' \text{ accepts } x] \le \frac{1}{2}\left(1 - \frac{1}{2^{f(|x|)+1}}\right) < \frac{1}{2}$$

$$\text{and} \quad x \in L \Rightarrow \Pr[A' \text{ accepts } x] \ge \left(\frac{1}{2} + \frac{1}{2^{f(|x|)}}\right)\left(1 - \frac{1}{2^{f(|x|)+1}}\right) > \frac{1}{2}.$$
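The two inequalities can be verified exactly with rational arithmetic for small values of $f(|x|)$ (an illustrative check, not part of the proof):

```python
from fractions import Fraction

def scaled(p, f):
    """Acceptance probability after the modified algorithm first rejects
    outright with probability 2^-(f+1) and otherwise runs A (which
    accepts with probability p)."""
    return p * (1 - Fraction(1, 2**(f + 1)))

half = Fraction(1, 2)
for f in range(1, 30):
    assert scaled(half, f) < half                         # x not in L
    assert scaled(half + Fraction(1, 2**f), f) > half     # x in L
```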

This justifies the assumption that the acceptance probability can be pushed strictly below $\frac{1}{2}$ for $x \notin L$ (and $A'$ is still a polynomial-time probabilistic algorithm), so negating the output of $A'$ yields a $\mathsf{PP}$ algorithm for $\overline{L}$, which completes the proof.

Theorem

(Extended Rice) Let $C \subseteq \mathrm{RE}$ be such that $\Sigma^* \notin C$ and $C \neq \emptyset$. Then

$$L_C = \{\langle M \rangle : M \text{ is a TM and } L(M) \in C\} \notin \mathrm{RE}.$$
Proof

We show $\overline{\mathrm{ACC}} \le_m L_C$. Let $M_A$ be a TM that accepts some $A \in C$ (such a machine exists since $\emptyset \neq C \subseteq \mathrm{RE}$). Define the reduction $f(M,w) = B_{M,w}$, where
$B_{M,w}$, on input $x$: emulate $M_A(x)$ and $M(w)$ in parallel, and accept if either one accepts.

$$(M,w) \notin \mathrm{ACC} \Rightarrow L(B_{M,w}) = A \in C, \qquad (M,w) \in \mathrm{ACC} \Rightarrow L(B_{M,w}) = \Sigma^* \notin C.$$

Hence $B_{M,w} \in L_C \iff (M,w) \in \overline{\mathrm{ACC}}$, and since $\overline{\mathrm{ACC}} \notin \mathrm{RE}$, also $L_C \notin \mathrm{RE}$.
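The "emulate in parallel" step is dovetailing: advance each simulated machine one step per round, so a non-halting branch cannot block an accepting one. A toy sketch, with machines modeled as Python generators that yield once per step and return their verdict (all names here are illustrative, not a real TM simulator):

```python
def run_any_accepts(*machines):
    """Dovetail the given machines; return True as soon as any of them
    halts and accepts, False if all halt without accepting. (If some
    machine runs forever and none accepts, this loops, exactly like
    the real B_{M,w}.)"""
    running = list(machines)
    while running:
        still_running = []
        for m in running:
            try:
                next(m)                      # one simulation step
                still_running.append(m)
            except StopIteration as halted:
                if halted.value:             # this machine halted and accepted
                    return True
        running = still_running
    return False

def accepts_after(k):                        # toy machine: accepts after k steps
    for _ in range(k):
        yield
    return True

def loops_forever():                         # toy machine: never halts
    while True:
        yield
```

In this model, $B_{M,w}$ on input $x$ would be `run_any_accepts(simulate(M_A, x), simulate(M, w))` for a suitable step-by-step simulator; note the looping branch never prevents the accepting one from being noticed.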

Extended Rice (not the version we saw)

Theorem: Given a property $S \subseteq \mathrm{RE}$, the language

$$L_S = \{\langle M \rangle \mid L(M) \in S\}$$

is recursively enumerable ($L_S \in \mathrm{RE}$) if and only if all the following three statements jointly hold:
1. For any two $L_1, L_2 \in \mathrm{RE}$, if $L_1 \in S$ and also $L_1 \subseteq L_2$, then also $L_2 \in S$.
2. If $L_1 \in S$ then there exists a finite subset $L_2 \subseteq L_1$ so that $L_2 \in S$.
3. The set of all *finite* languages in $S$ is enumerable (in other words: there is a TM that enumerates all the finite languages $L \in S$).

Proof

If (1,2) hold, but (3) does not, then $L_S \notin \mathrm{RE}$. Let's assume that $L_S \in \mathrm{RE}$, and we'll see that we then have a way to accept exactly the finite languages in $S$ (and thus the set of all these languages is enumerable), so condition (3) holds and we reach a contradiction. How do we recognize that a finite $L$ belongs to $S$? Easily: we use the description of $L$ to construct a machine $M_L$ that accepts only the words in $L$, and now we run the machine of $L_S$ on $M_L$ (remember, we assumed $L_S \in \mathrm{RE}$, so there is a machine that accepts $L_S$!). If $L \in S$ then $M_L \in L_S$, and since $L_S \in \mathrm{RE}$, its machine will say yes on the input $M_L$, and we are done.
If (2,3) hold, but (1) does not, then $L_S \notin \mathrm{RE}$. We assume that $L_S \in \mathrm{RE}$ and we'll show that we then have a way to decide $\mathrm{HP}$, leading to a contradiction.

Because condition (1) doesn't hold, there is a language $L_1 \in S$ and a superset of it, $L_2 \supseteq L_1$, so that $L_2 \notin S$. Now we are going to repeat the argument used in Section 4 to decide $\mathrm{HP}$: given an input $(M,x)$ for $\mathrm{HP}$, we construct a machine $M'$ whose language is $L_1$ if $(M,x) \notin \mathrm{HP}$, or otherwise its language is $L_2$. Then we can decide $\mathrm{HP}$: either $M$ halts on $x$, or the RE-machine for $L_S$ accepts $M'$; we can run both in parallel and are guaranteed that at least one will halt.

Let's give the details of constructing $M'$ (on input $x'$):

1. Do the following in parallel:
   1.1. Run $M$ on $x$.
   1.2. Run the machine of $L_1$ on $x'$.
2. If 1.2 halts and accepts, accept.
3. If 1.1 halts: run the machine of $L_2$ on $x'$.

Why does this work? If $(M,x) \notin \mathrm{HP}$ then 1.1 never halts, and $M'$ accepts exactly the inputs that are being accepted at step 1.2, so $L(M') = L_1$. On the other hand, if $(M,x) \in \mathrm{HP}$ then, at some point, step 1.1 halts and $M'$ accepts exactly $L_2$. It may happen that 1.2 accepts beforehand, but since $L_1 \subseteq L_2$, this doesn't change the language of $M'$ in this case.
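In a toy model where languages are membership predicates and the (undecidable) bit "does $M$ halt on $x$?" is supplied explicitly, the resulting language of $M'$ can be checked for both cases. All names are illustrative, and the non-halting branch is flattened to a plain `False`:

```python
def make_M_prime(m_halts_on_x, in_L1, in_L2):
    """Model of M' from the proof: its language is L1 when M never
    halts on x, and L2 (a superset of L1) when M does halt on x."""
    def M_prime(w):
        if in_L1(w):           # step 1.2 / 2: the L1-machine accepts w
            return True
        if m_halts_on_x:       # step 1.1 eventually halts...
            return in_L2(w)    # ...so step 3 runs the L2-machine on w
        return False           # in the real M', this branch loops forever
    return M_prime

in_L1 = lambda w: len(w) % 2 == 0   # toy L1: even-length strings
in_L2 = lambda w: True              # toy L2 = the whole of Sigma^*
```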
If (1,3) hold, but (2) does not, then $L_S \notin \mathrm{RE}$. Again, we will assume $L_S \in \mathrm{RE}$ and show that $\mathrm{HP}$ becomes decidable, which is a contradiction.

If condition (2) doesn't hold, then there is some $L_1 \in S$ all of whose finite subsets $L_2 \subseteq L_1$ satisfy $L_2 \notin S$ (note that this $L_1$ must be infinite, since $L_1 \subseteq L_1$ and $L_1 \in S$). As in the above, in order to decide $\mathrm{HP}$ for a given input $(M,x)$, we construct a machine $M'$ whose language is $L_1$ if $(M,x) \notin \mathrm{HP}$ and some finite $L_2 \subseteq L_1$ otherwise. The contradiction follows in a similar way as above.

The construction of this machine is quite similar to the previous $M'$ we constructed. The machine $M'$ (on input $x'$) does:

1. Run $M$ on $x$ for $|x'|$ steps.
2. If $M$ halts during step 1, reject.
3. Otherwise, run the machine of $L_1$ on $x'$.

It holds that, if $(M,x) \in \mathrm{HP}$, then at some point, say after 1000 steps, $M$ halts on $x$. Therefore, step 1 will halt on (and reject) any input $x'$ of length at least 1000. Therefore, in this case, $L(M')$ is **finite**. Also note that $L(M') \subseteq L_1$, and in particular, by our assumption on the invalidity of condition (2), we have that $L(M') \notin S$.

On the other hand, if $(M,x) \notin \mathrm{HP}$, then step 1 never halts, and we never reject at step 2. In this case it is easy to see that $L(M') = L_1$, and in particular, $L(M') \in S$.
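The same toy model illustrates the step-bounded construction; `halts_within(t)` stands in for "does $M$ halt on $x$ within $t$ steps?" (monotone in $t$), and everything here is illustrative:

```python
def make_M_prime(halts_within, in_L1):
    """Model of the second M': reject any input long enough for the
    |x'|-step simulation of M on x to observe M halting; otherwise
    defer to the L1-machine."""
    def M_prime(w):
        if halts_within(len(w)):   # steps 1-2: M halted in time -> reject
            return False
        return in_L1(w)            # step 3 (in reality this may loop)
    return M_prime

# If M halts on x after 5 steps, L(M') keeps only the (finitely many)
# words of L1 shorter than 5; if M never halts on x, L(M') = L1.
finite_case   = make_M_prime(lambda t: t >= 5, lambda w: True)
infinite_case = make_M_prime(lambda t: False,  lambda w: True)
```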

We are left to show the other direction of the extended theorem. That is, we need to show that if all the conditions (1,2,3) hold, then we have a TM that accepts $L_S$, that is, $L_S \in \mathrm{RE}$. In other words, we need to show a machine $M_S$ so that, for any input $M$ for which $L(M) \in S$, the machine accepts this input: $M_S(M) \to \text{accept}$.

Here is how the machine $M_S$ behaves (on input $M$):

1. Let $M_{\mathrm{enum}\,S}$ be the machine that enumerates all the finite languages in $S$, guaranteed by condition (3).
2. Run the following in parallel for $i = 1, 2, \ldots$:
   2.1. Run $M_{\mathrm{enum}\,S}$ until it outputs the language $L_i$.
   2.2. Check whether $M$ accepts all the words of $L_i$ (run $M$ on these words, again in parallel).
   2.3. If for some $i$, $M$ accepts all the words of $L_i$, accept.

Why does it work? If $L(M) \in S$ then, by condition (2), it has a finite subset $L_j \in S$, and once $M_{\mathrm{enum}\,S}$ outputs that subset, steps 2.2/2.3 will find that $M$ accepts all the words in that language, and accept.

On the other hand, if $L(M) \notin S$, then $M$ cannot accept all the words of $L_i$ for any $i = 1, 2, \ldots$. Indeed, by condition (1), any $L \supseteq L_i$ is also in $S$, so if $M$ accepted all the words in $L_i$ for some $i$, then $L(M) \supseteq L_i$ and thus $L(M) \in S$, in contradiction.
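A toy sketch of $M_S$ under these conditions: `accepts` models $M$ (assumed total here, so the parallel runs of step 2.2 collapse to direct calls), and the enumerator of condition (3) is modeled as an iterable of finite sets. The `rounds` cap only makes the sketch terminate; the real $M_S$ searches forever. All names are illustrative:

```python
from itertools import islice

def M_S(accepts, enum_finite_in_S, rounds=100):
    """Accept iff some enumerated finite language of S is contained in
    L(M), i.e. in the set of words on which `accepts` returns True."""
    for L_i in islice(enum_finite_in_S, rounds):
        if all(accepts(w) for w in L_i):
            return True          # found L_i in S with L_i a subset of L(M)
    return False                 # the real M_S would keep searching

# Toy S: "languages containing the word 'a'" (upward closed, so it
# satisfies conditions (1)-(3)); a few of its finite members:
finite_members = [{"a"}, {"a", "b"}, {"a", "c"}]
```

For instance, a machine accepting `{"a", "b"}` is accepted via the finite subset `{"a"}`, while a machine accepting only `{"b", "c"}` matches no enumerated language.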

> Given a non-trivial property $S \subseteq \mathrm{RE}$, so that $\Sigma^* \notin S$, the language
> $$L_S = \{\langle M \rangle \mid L(M) \in S\}$$
> is not recursively enumerable, that is, $L_S \notin \mathrm{RE}$.