Randomized Algorithms for 3-SAT
Thomas Hofmeister
Informatik 2, Universität Dortmund, 44221 Dortmund, Germany

Uwe Schöning, Rainer Schuler
Abt. Theoretische Informatik, Universität Ulm, Oberer Eselsberg, 89069 Ulm, Germany

Osamu Watanabe
Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo, Japan
A preliminary version of this paper was reported at the 19th Sympos. on Theoretical Aspects of
Computer Science (STACS02).
Procedure RW(a_0);
    a := a_0;
    for 3n times do {
        if a satisfies F then output a and stop;
        Select any clause C of F that is not satisfied by a;
        Select a literal of C uniformly at random and flip the assignment to this literal;
        Call the new assignment (still) a;
    }
    if a satisfies F then output a; otherwise report failure;

Figure 1: RW: The basic random walk procedure
Theorem 1. Let F be a satisfiable 3-CNF formula over n variables, and let a* be any satisfying assignment of F. Then for every initial assignment a_0,

    Pr{ RW(a_0) succeeds } ≥ (1/2)^{d(a_0, a*)} / p(n),

for some polynomial p, where d(·,·) denotes the Hamming distance.

Remark. In the proof, we show that the bound holds with p(n) = √(6n).

Proof. For any assignment a, define the random variable X(a) := d(a, a*) · (1 − F(a)). That is, if a is a satisfying assignment of F, then X(a) = 0; otherwise, X(a) takes the value d(a, a*). Let A_t denote the (random) assignment held by RW(a_0) after the t-th iteration, and let j_0 = d(a_0, a*).
Then it is easy to see that

    Pr{ RW(a_0) succeeds } = Pr{ ∃t ≤ 3n [ X(A_t) = 0 ] }.

Thus, below we investigate how X(A_t) decreases during the execution of RW(a_0). Recall that X(A_0) = X(a_0) = d(a_0, a*) = j_0.
The following point is a key for our analysis:

    After the t-th iteration (if A_t is not a satisfying assignment), the algorithm flips the value of one variable that is randomly selected from the three variables of an unsatisfied clause. The assignment A_{t+1} obtained by this flip has a Hamming distance d(A_{t+1}, a*) smaller by one with probability q ≥ 1/3. It has a Hamming distance larger by one with probability 1 − q ≤ 2/3.

This is because any unsatisfied clause has at least one variable whose current assignment differs from the one in a*.
For a precise argument, let us introduce a Markov chain Y_0, Y_1, . . . that is defined as follows (below let j ≥ 1 be any integer).

    Pr{ Y_{t+1} = j − 1 | Y_t = j } = 1/3,  and  Pr{ Y_{t+1} = j + 1 | Y_t = j } = 2/3.

As a special case, we define Y_0 = j_0 and Pr{ Y_{t+1} = 0 | Y_t = 0 } = 1. Then the following relation is clear from the above observation.
Claim 1. For any k and j, j ≥ k ≥ 0, and for any t ≥ 0, we have

    Pr{ X(A_{t+1}) ≤ k | X(A_t) ≤ j } ≥ Pr{ Y_{t+1} ≤ k | Y_t ≤ j }.

Thus, by induction we have the following bound for any j ≤ j_0 and any t ≥ 0.

    Pr{ X(A_t) ≤ j } ≥ Pr{ Y_t ≤ j }.

In particular, we have Pr{ X(A_t) = 0 } ≥ Pr{ Y_t = 0 }. Hence,

    Pr{ RW(a_0) succeeds } ≥ Pr{ ∃t ≤ 3n [ Y_t = 0 ] }.
Now consider the righthand side probability. For any s ≥ 0, let E_s be the event that Y_{s+1} = Y_s − 1 if Y_s ≠ 0, and some appropriate event with Pr{ E_s } = 1/3 if Y_s = 0. Then for any i ≥ 0, starting from Y_0 = j_0, Y_t becomes 0 for some t ≤ j_0 + 2i if the number of s's, 0 ≤ s < j_0 + 2i, for which E_s occurs is at least j_0 + i. Since there are \binom{j_0+2i}{i} ways to choose such s's, we have
    Pr{ ∃t ≤ 3n [ Y_t = 0 ] } ≥ \binom{j_0+2i}{i} (1/3)^{j_0+i} (2/3)^i

for any i ≥ 0 such that j_0 + 2i ≤ 3n. Choose i = j_0, which is possible since j_0 ≤ n. Then we have the desired bound as follows.
    Pr{ ∃t ≤ 3n [ Y_t = 0 ] } ≥ \binom{3j_0}{j_0} · 2^{j_0} / 3^{3j_0}
                              ≥ (1/√(6j_0)) · 2^{3j_0 · H(1/3)} · 2^{j_0} / 3^{3j_0}
                              = (1/√(6j_0)) · 2^{−j_0}
                              ≥ (1/√(6n)) · 2^{−j_0}.

Here we used the entropy function H(α) := −α log α − (1 − α) log(1 − α) and the following well-known bound (see, e.g., [A65]):

    \binom{n}{αn} ≥ (1/√(8nα(1 − α))) · 2^{n·H(α)} ≥ (1/√(2n)) · 2^{n·H(α)}.
Choosing p(n) = √(6n) completes the proof of Theorem 1.
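The chain of inequalities above is easy to spot-check numerically. The following sketch (an illustration with arbitrary n and j_0, not part of the analysis) computes the exact probability that the chain Y_t, started at j_0, reaches 0 within 3n steps, and confirms that it dominates both lower bounds:

from math import comb, sqrt

def hit_prob(j0, steps):
    """Exact Pr{ Y_t = 0 for some t <= steps }, by forward dynamic programming."""
    dist, hit = {j0: 1.0}, 0.0
    for _ in range(steps):
        nxt = {}
        for j, p in dist.items():
            nxt[j - 1] = nxt.get(j - 1, 0.0) + p / 3       # down-step (prob 1/3)
            nxt[j + 1] = nxt.get(j + 1, 0.0) + 2 * p / 3   # up-step (prob 2/3)
        hit += nxt.pop(0, 0.0)     # mass reaching 0 is absorbed
        dist = nxt
    return hit

n, j0 = 20, 8
exact = hit_prob(j0, 3 * n)
binom_bound = comb(3 * j0, j0) * 2 ** j0 / 3 ** (3 * j0)
print(exact >= binom_bound >= 2 ** (-j0) / sqrt(6 * n))    # True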
Theorem 2. For any satisfiable formula F on n variables, the probability that RW succeeds on an initial assignment chosen uniformly at random is at least (3/4)^n / p(n).

Proof. Since F is satisfiable, we can fix one of its satisfying assignments; let a* = (a*_1, . . . , a*_n) denote it. Let a = (a_1, . . . , a_n) be an initial assignment that is chosen randomly; that is, each a_i is a random variable taking 0 or 1 with probability 1/2. Define X_i = d(a_i, a*_i), a random variable that is 0 if a_i = a*_i and 1 otherwise. Clearly, X_i is 0 or 1, each with probability 1/2. We use E[Y] to denote the expected value of a random variable Y. Then we have
    Pr{ success } ≥ Σ_{a ∈ {0,1}^n} Pr{ a is the initial assignment } · (1/2)^{d(a, a*)} / p(n)
                 = (1/p(n)) · E[ (1/2)^{d(a, a*)} ]
                 = (1/p(n)) · E[ (1/2)^{X_1 + ··· + X_n} ]
                 = (1/p(n)) · E[ (1/2)^{X_1} ] ··· E[ (1/2)^{X_n} ].
Here, we have exploited that E[Y·Z] = E[Y]·E[Z] for independent random variables Y and Z. Since X_i takes 0 or 1 with probability 1/2, we have
    E[ (1/2)^{X_i} ] = (1/2) · (1/2)^0 + (1/2) · (1/2)^1 = 3/4.

Hence, Pr{ success } ≥ (3/4)^n / p(n), which proves Theorem 2.
Call two clauses independent if they share no variables. For example, the clauses x_1 ∨ x_2 ∨ x_3 and x_1 ∨ x_5 ∨ x_6 are not independent. For a formula F, a maximal independent clause set C is a subset of the clauses of F such that all clauses in C are (mutually) independent and no clause of F can be added to C without destroying this property. For a given formula, one of its maximal independent clause sets can be found in polynomial time by a greedy algorithm that selects independent clauses until no more independent clauses can be added.
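A sketch of this greedy construction (same clause encoding as in the RW sketch above); the returned set is maximal by construction, since every rejected clause shares a variable with a selected one, which is exactly the property stated as Lemma 3 below:

def greedy_independent_clauses(formula):
    """Scan the clauses, keeping each one that shares no variable with
    those already kept; the result is a maximal independent clause set."""
    chosen, used = [], set()
    for clause in formula:
        vs = {abs(l) for l in clause}
        if vs.isdisjoint(used):
            chosen.append(clause)
            used |= vs
    return chosen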
Maximal independent clause sets have the following simple property:
Lemma 3. Let C be a maximal independent clause set for a formula F. Then every clause of F contains at least one variable that occurs in (some clause of) C.
The proof is easy: if the property did not hold for C, then C would not be a maximal independent clause set. Though simple, this property has the following important consequence: If we assign constants to all the variables contained in the independent clauses, then, after the usual simplifications consisting of removing constants, we obtain a formula F̂ which is a 2-CNF formula, since every clause in F̂ has at most two literals. It is well-known that there is a polynomial-time algorithm for checking the satisfiability of a 2-CNF formula [APT79] and finding a satisfying assignment if one exists. Thus, given a satisfiable formula F and a maximal independent clause set {C_1, . . . , C_m̂}, we can take the following approach to find a satisfying assignment. For each of the m̂ independent clauses, assign its three variables to one of the seven assignments that satisfy the clause. Then the satisfiability of the remaining 2-CNF formula is checked by the polynomial-time algorithm. This is tested for all possible assignments to the independent clauses. It is easy to see that this whole test can be done in time poly(n) · 7^{m̂}. This is the idea of our algorithm IndR. The algorithm is not so strong by itself; from the simple bound m̂ ≤ n/3 it only follows that its running time is bounded by O(7^{n/3}) = O(1.913^n). It is, however, useful if m̂ is small.
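The 2-CNF test can be realized with the implication-graph criterion behind [APT79]: a 2-CNF formula is satisfiable if and only if no variable lies in the same strongly connected component of the implication graph as its negation. A compact, unoptimized sketch using Kosaraju's SCC algorithm (our code; clauses have one or two literals):

def two_sat(clauses, n_vars):
    """Satisfiable iff no variable x_v is in the same SCC as its negation."""
    idx = lambda l: 2 * abs(l) - 2 + (l < 0)       # node index of a literal
    N = 2 * n_vars
    g, gr = [[] for _ in range(N)], [[] for _ in range(N)]
    for c in clauses:
        a, b = c if len(c) == 2 else (c[0], c[0])  # unit clause (l) ~ (l or l)
        for u, v in ((-a, b), (-b, a)):            # (a or b) gives -a -> b, -b -> a
            g[idx(u)].append(idx(v))
            gr[idx(v)].append(idx(u))
    seen, order = [False] * N, []                  # pass 1: record finish order
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(g[v]):
                stack.append((v, i + 1))
                w = g[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    comp, c = [-1] * N, 0                          # pass 2: label SCCs on reverse graph
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return all(comp[idx(v)] != comp[idx(-v)] for v in range(1, n_vars + 1))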
Now we state the algorithm IndR precisely in Figure 2. Some explanation may be necessary here. For using IndR as a complementary algorithm for (a modified) RW, we
design it as a randomized procedure, which chooses assignments to the independent clauses under a certain distribution that is specified by the parameters q_1, q_2, q_3. (We assume that a maximal independent clause set has been obtained by the greedy method before executing this procedure.) Suppose, for example, that we run this procedure on a satisfiable formula with q_1 = q_2 = q_3 = 1/7. Then each of the 7^{m̂} satisfying assignments to the independent clauses is chosen with the same probability; hence, the success probability (of one execution of IndR) is at least 7^{−m̂}. Therefore, if we execute this procedure until some satisfying assignment is found, then the expected number of executions is at most 1/7^{−m̂} = 7^{m̂}.

Procedure IndR(q_1, q_2, q_3);   % q_1, q_2, q_3 ≥ 0 and 3q_1 + 3q_2 + q_3 = 1.
    % Let F be an input formula, and let C = {C_1, . . . , C_m̂} be the maximal
    % independent clause set obtained by the greedy method. We may assume,
    % by renaming if necessary, that each clause consists of positive literals.
    for each C ∈ C do {   % Let C = x_i ∨ x_j ∨ x_k.
        Set the variables x = (x_i, x_j, x_k) randomly so that
            Pr{ x = (0,0,1) } = Pr{ x = (0,1,0) } = Pr{ x = (1,0,0) } = q_1,
            Pr{ x = (0,1,1) } = Pr{ x = (1,0,1) } = Pr{ x = (1,1,0) } = q_2, and
            Pr{ x = (1,1,1) } = q_3; }
    Simplify the resulting formula to a 2-CNF formula F̂;
    Determine whether F̂ is satisfiable using the polynomial-time 2-CNF algorithm;

Figure 2: IndR: A randomized algorithm based on Lemma 3
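A single execution of IndR can then be sketched as follows, reusing greedy_independent_clauses and two_sat from the sketches above (as in Figure 2, the independent clauses are assumed to consist of positive literals, so the reduced clauses have at most two literals by Lemma 3):

import random

SAT_PATTERNS = [(0, 0, 1), (0, 1, 0), (1, 0, 0),   # one 1: probability q1 each
                (0, 1, 1), (1, 0, 1), (1, 1, 0),   # two 1s: probability q2 each
                (1, 1, 1)]                         # three 1s: probability q3

def indr_once(formula, q1, q2, q3):
    """One run of IndR(q1, q2, q3): sample the independent clauses,
    simplify, and test the remaining 2-CNF part."""
    partial = {}
    for clause in greedy_independent_clauses(formula):
        bits = random.choices(SAT_PATTERNS, weights=[q1]*3 + [q2]*3 + [q3])[0]
        for lit, bit in zip(clause, bits):
            partial[abs(lit)] = bit
    reduced = []
    for c in formula:                          # plug the sample into F
        if any(abs(l) in partial and (l > 0) == bool(partial[abs(l)]) for l in c):
            continue                           # clause already satisfied
        rest = [l for l in c if abs(l) not in partial]
        if not rest:
            return False                       # clause falsified: this run fails
        reduced.append(rest)                   # at most 2 literals by Lemma 3
    n_vars = max(abs(l) for c in formula for l in c)
    return two_sat(reduced, n_vars)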
The parameters q_1, q_2, q_3 are chosen depending on the structure of the independent clauses w.r.t. a satisfying assignment. For a given satisfiable formula F, we arbitrarily fix one satisfying assignment a*. Since a* satisfies every clause of C, it assigns 1 to either one, two, or three variables of each clause in C; for k = 1, 2, 3, let m_k denote the number of clauses C ∈ C such that a* has exactly k 1s in C, and let α_k = m_k/m̂. Consider any C ∈ C and suppose that a* has k 1s in C. Then the probability that the random assignment agrees with a* on C is exactly q_k. Hence, the probability that the random assignment agrees with a* on all clauses in C is q_1^{m_1} q_2^{m_2} q_3^{m_3}. This gives the following bound.
Theorem 4. For any satisfiable formula F on n variables, let m̂, m_1, m_2, m_3, and α_1, α_2, α_3 be defined as above. Then the success probability of IndR(α_1/3, α_2/3, α_3) is at least

    (α_1/3)^{m_1} · (α_2/3)^{m_2} · α_3^{m_3} = ( (α_1/3)^{α_1} · (α_2/3)^{α_2} · α_3^{α_3} )^{m̂}.
4 An Algorithm using Better Initialization

Now we have a procedure, namely IndR, that has a reasonable performance when a given formula F has a maximal independent clause set C of small size m̂. (More precisely, when such a C is obtained by the greedy method.) Thus, we modify our first algorithm for the other case, i.e., the case where m̂ is not so small. Our modification is given in Figure 3. In fact, we simply add a part that chooses an initial assignment, which is almost the same as the one in IndR and is controlled by the parameters p_1, p_2, and p_3.
Procedure RW-RS(p_1, p_2, p_3);   % p_1, p_2, p_3 ≥ 0 and 3p_1 + 3p_2 + p_3 = 1.
    % Let F be an input formula, and let C = {C_1, . . . , C_m̂} be the maximal
    % independent clause set obtained by the greedy method. We may assume,
    % by renaming if necessary, that each clause consists of positive literals.
    Let a be an assignment to x_1, . . . , x_n, initially all undefined;
    for each C ∈ C do {   % Let C = x_i ∨ x_j ∨ x_k.
        Define a random assignment a = (a_i, a_j, a_k) to (x_i, x_j, x_k) in such a way that
            Pr{ a = (0,0,1) } = Pr{ a = (0,1,0) } = Pr{ a = (1,0,0) } = p_1,
            Pr{ a = (0,1,1) } = Pr{ a = (1,0,1) } = Pr{ a = (1,1,0) } = p_2, and
            Pr{ a = (1,1,1) } = p_3; }
    Define a on the remaining variables x_i so that Pr{ x_i = 0 } = Pr{ x_i = 1 } = 1/2;
    Execute RW(a);

Figure 3: RW-RS: A random local search procedure with a random initial assignment
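In code, one run of RW-RS differs from IndR only in what happens after the biased sampling: the remaining variables are set uniformly at random and the random walk is started. A sketch, reusing rw, greedy_independent_clauses and SAT_PATTERNS from the sketches above:

import random

def rw_rs_once(formula, p1, p2, p3):
    """One run of RW-RS(p1, p2, p3) as in Figure 3."""
    a = {}
    for clause in greedy_independent_clauses(formula):
        bits = random.choices(SAT_PATTERNS, weights=[p1]*3 + [p2]*3 + [p3])[0]
        for lit, bit in zip(clause, bits):
            a[abs(lit)] = bit                   # biased part of the initialization
    n_vars = max(abs(l) for c in formula for l in c)
    for v in range(1, n_vars + 1):
        a.setdefault(v, random.randrange(2))    # uniform on the remaining variables
    return rw(formula, a)                       # then run the 3n-step walk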
The success probability of RW-RS is analyzed as follows.
Theorem 5. For any satisfiable formula F on n variables, let m̂ and m_1, m_2, m_3 be defined as above. Then the success probability of RW-RS(p_1, p_2, p_3) is at least

    (3/4)^{n−3m̂} · (3p_1/2 + 9p_2/8 + p_3/4)^{m_1} · (9p_1/8 + 3p_2/2 + p_3/2)^{m_2}
                 · (3p_1/4 + 3p_2/2 + p_3)^{m_3} · (1/p(n)).
Proof. For a given formula F, let C = {C_1, . . . , C_m̂} be the maximal independent clause set obtained by the greedy method. For simplicity of notation, let us assume that the first clause C_1 is x_1 ∨ x_2 ∨ x_3, the second one C_2 is x_4 ∨ x_5 ∨ x_6, and so on. Consider any satisfying assignment a*. As in the proof of Theorem 2, we have

    Pr{ success } ≥ (1/p(n)) · E[ (1/2)^{d(a, a*)} ],

where Pr{ success } is the probability that RW-RS(p_1, p_2, p_3) succeeds and a denotes the random initial assignment.
This time, it does not hold that all variables are fixed independently of each other. But the only dependence is between variables that are in the same clause C_i, 1 ≤ i ≤ m̂. Define X_{1,2,3} to be the random variable that is the Hamming distance on the first independent clause C_1, i.e., the distance between (a_1, a_2, a_3) and (a*_1, a*_2, a*_3). Define X_{4,5,6} etc. similarly, until X_{3m̂−2,3m̂−1,3m̂}. On the other hand, for any i ≥ 3m̂ + 1, define X_i = d(a_i, a*_i). Then we have

    d(a, a*) = X_{1,2,3} + X_{4,5,6} + ··· + X_{3m̂−2,3m̂−1,3m̂} + X_{3m̂+1} + X_{3m̂+2} + ··· + X_n.
Hence,

    Pr{ success } ≥ (1/p(n)) · E[ (1/2)^{X_{1,2,3} + X_{4,5,6} + ··· + X_{3m̂−2,3m̂−1,3m̂} + X_{3m̂+1} + X_{3m̂+2} + ··· + X_n} ]
                 = (1/p(n)) · E[ (1/2)^{X_{1,2,3}} ] ··· E[ (1/2)^{X_{3m̂−2,3m̂−1,3m̂}} ] · Π_{i=3m̂+1}^{n} E[ (1/2)^{X_i} ].
As in the proof of Theorem 2, we have E[ (1/2)^{X_i} ] = 3/4 for i ≥ 3m̂ + 1; hence,

    Π_{i=3m̂+1}^{n} E[ (1/2)^{X_i} ] = (3/4)^{n−3m̂}.
We show how to analyze E[ (1/2)^{X_{1,2,3}} ]; the other terms E[ (1/2)^{X_{4,5,6}} ] etc. are analyzed in exactly the same way. It turns out that E[ (1/2)^{X_{1,2,3}} ] depends on how many ones (a*_1, a*_2, a*_3) contains. We have to analyze the following three possible cases.
Case that a*_1 + a*_2 + a*_3 = 3: This case occurs if and only if (a*_1, a*_2, a*_3) = (1, 1, 1). The algorithm chooses (a_1, a_2, a_3) = (1, 1, 1) with probability p_3, in which case the Hamming distance from (a*_1, a*_2, a*_3) is zero. Likewise, with probability 3p_2, the algorithm sets (a_1, a_2, a_3) to one of (0, 1, 1), (1, 0, 1), and (1, 1, 0), which leads to Hamming distance 1 on this part. Finally, with probability 3p_1, the algorithm sets (a_1, a_2, a_3) to a value that leads to Hamming distance 2 on this part. Thus, by the definition of the expected value, we obtain

    E[ (1/2)^{X_{1,2,3}} ] = (1/2)^0 · p_3 + (1/2)^1 · 3p_2 + (1/2)^2 · 3p_1 = p_3 + 3p_2/2 + 3p_1/4.
The other two cases can be analyzed in an analogous way, and one obtains:

Case that a*_1 + a*_2 + a*_3 = 1:

    E[ (1/2)^{X_{1,2,3}} ] = (1/2)^0 · p_1 + (1/2)^1 · 2p_2 + (1/2)^2 · (2p_1 + p_3) + (1/2)^3 · p_2 = p_3/4 + 9p_2/8 + 3p_1/2.
Case that a*_1 + a*_2 + a*_3 = 2:

    E[ (1/2)^{X_{1,2,3}} ] = (1/2)^0 · p_2 + (1/2)^1 · (p_3 + 2p_1) + (1/2)^2 · 2p_2 + (1/2)^3 · p_1 = p_3/2 + 3p_2/2 + 9p_1/8.
The values m_1, m_2, m_3 count, over the m̂ independent clauses, how often each of the three cases occurs. Therefore, we obtain the bound on Pr{ success } stated in the theorem.
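The three case expectations, and the common value 3/7 they take for the parameters used in the next paragraphs, can be re-derived mechanically; a sketch using exact rational arithmetic:

from fractions import Fraction as Fr
from itertools import product

def expectation(astar, p1, p2, p3):
    """E[(1/2)^{X_{1,2,3}}] when a* restricted to the clause equals `astar`;
    the seven satisfying patterns carry weights p1/p2/p3 by their number of 1s."""
    weight = {1: p1, 2: p2, 3: p3}
    e = Fr(0)
    for bits in product((0, 1), repeat=3):
        if sum(bits) == 0:
            continue                                  # (0,0,0) is never sampled
        dist = sum(x != y for x, y in zip(bits, astar))
        e += weight[sum(bits)] * Fr(1, 2) ** dist
    return e

p = (Fr(4, 21), Fr(2, 21), Fr(3, 21))
print([expectation(a, *p) for a in ((1, 0, 0), (1, 1, 0), (1, 1, 1))])  # 3/7 each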
By combining the obtained procedures IndR and RW-RS with appropriate parameters, we can improve the poly(n) · (4/3)^n time bound of the first algorithm RW. More specifically, we execute the two procedures IndR(q_1, q_2, q_3) and RW-RS(p_1, p_2, p_3) in parallel with some appropriate values for q_1, q_2, q_3 and p_1, p_2, p_3. As before, we analyze the success probability of this execution and show that it is at least Ω(1.331^{−n}) (more precisely, Ω(1.330258^{−n})), improving on the Ω(1.334^{−n}) bound for RW.
For a given formula F, we compute, by the greedy method, the maximal independent clause set C. Thus, we know its size m̂; on the other hand, the values of m_1, m_2, m_3 (where m_1 + m_2 + m_3 = m̂), which are defined with respect to some satisfying assignment, are not known. In this situation, the simplest strategy for choosing the parameters is to choose them so that the success probabilities do not depend on m_1, m_2, m_3. This leads to q_1 = q_2 = q_3 = 1/7 for IndR(q_1, q_2, q_3), and p_1 = 4/21, p_2 = 2/21, p_3 = 3/21 for RW-RS(p_1, p_2, p_3). With these parameters, we have
    Pr{ IndR succeeds } ≥ (1/7)^{m̂} ≥ (1/p(n)) · (1/7)^{m̂},  and

    Pr{ RW-RS succeeds } ≥ (1/p(n)) · (3/4)^{n−3m̂} · (3/7)^{m̂} = (1/p(n)) · (3/4)^n · (64/63)^{m̂}.
Observe that the above bound for IndR decreases with m̂, while the one for RW-RS increases with m̂; hence the maximum of the two bounds becomes smallest when both are equal. Equating (1/7)^{m̂} with (3/4)^n · (64/63)^{m̂} gives (9/64)^{m̂} = (3/4)^n, i.e.,

    n/m̂ = log(9/64) / log(3/4) ≈ 6.8188417,

or equivalently, m̂ ≈ 0.1466525 · n. Estimating both bounds with this m̂, we can show that the success probability is at least Ω(1.330258^{−n}); as before, this yields a randomized algorithm for 3-SAT with expected running time O(1.330258^n).
5 Refinements
The algorithms IndR and RW-RS can be fine-tuned if the values m_1, m_2, m_3 (or, equivalently, α_1, α_2, α_3) are known in advance. For example, if we knew that (α_1, α_2, α_3) = (1, 0, 0), then we could run IndR(1/3, 0, 0), whose success probability is 3^{−m̂} >> 7^{−m̂}. Of course, the values m_1, m_2, m_3 are not known in advance. But since we know m̂ and since m_1 + m_2 + m_3 = m̂, there are only polynomially (in n) many possibilities for m_1, m_2, m_3.
We proceed as follows: For each of the possible values of α_1, α_2, α_3, we execute both procedures IndR and RW-RS with the parameters that would maximize their success probabilities if the values of α_1, α_2, α_3 were indeed those of the input formula. While the running time is still polynomial, the success probability of the whole is at least as large as the largest one over all executions.

This idea is simple, but the computations are a little bit tedious. In the following, we exhibit a parameter choice which achieves a success probability of at least Ω(1.330193^{−n}). (It is possible to improve this success probability by choosing the parameters more carefully; but since the improvement is minor, and since it is not enough to beat the better upper bounds obtained in [IT03, Rol03], we leave the search for the best parameters to the interested reader.)
In the following, let us assume that we know α_1, α_2, α_3 for a given satisfiable formula F. It is rather straightforward to see that one should call IndR(α_1/3, α_2/3, α_3) to maximize the success probability of IndR, which is bounded by Theorem 4. Define

    Z(α_1, α_2, α_3) := (α_1/3)^{α_1} · (α_2/3)^{α_2} · α_3^{α_3}.
Then by Theorem 4, we have

    Pr{ IndR(α_1/3, α_2/3, α_3) succeeds } ≥ Z(α_1, α_2, α_3)^{m̂} ≥ (1/p(n)) · Z(α_1, α_2, α_3)^{m̂}.

We denote the last bound by P_1. We have to compare it with the bound for RW-RS. P_1 is decreasing with m̂ since Z(α_1, α_2, α_3) ≤ 1.
Let P_2 be the bound for the success probability of RW-RS given by Theorem 5. Using that p_3 + 3p_2 + 3p_1 = 1, we rewrite P_2 as

    P_2 = (1/p(n)) · [ (3/4)^{n/m̂ − 3} · T_1(p_1, p_3)^{α_1} · T_2(p_1, p_3)^{α_2} · T_3(p_1, p_3)^{α_3} ]^{m̂},

where

    T_1(p_1, p_3) = 3/8 + 3p_1/8 − p_3/8,   T_2(p_1, p_3) = 1/2 − 3p_1/8,   T_3(p_1, p_3) = 1/2 − 3p_1/4 + p_3/2.
Since we select the parameters p_1, p_2, p_3 depending on α_1, α_2, α_3, we can consider the following function as a function depending only on α_1, α_2, α_3:

    T(α_1, α_2, α_3) := T_1(p_1, p_3)^{α_1} · T_2(p_1, p_3)^{α_2} · T_3(p_1, p_3)^{α_3}.
We thus have

    P_2 = (1/p(n)) · T(α_1, α_2, α_3)^{m̂} · (3/4)^{n − 3m̂}.
As we will see below, under our parameter choice we have

    log_{3/4} T(α_1, α_2, α_3) ≤ 3,   that is,   T(α_1, α_2, α_3) ≥ (3/4)^3.

This guarantees that P_2 is increasing with m̂.
As in the previous section, we have to analyze for which m̂ the two bounds on the success probabilities are equal. The equality of P_1 and P_2 is achieved when

    (3/4)^{n/m̂ − 3} = Z(α_1, α_2, α_3) / T(α_1, α_2, α_3),

that is,

    n/m̂ = 3 + log_{3/4} ( Z(α_1, α_2, α_3) / T(α_1, α_2, α_3) ).
When m̂ satisfies this, we have

    P_1 = (1/p(n)) · Z(α_1, α_2, α_3)^{ n / [ log_{3/4}( Z(α_1,α_2,α_3) / T(α_1,α_2,α_3) ) + 3 ] } = (1/p(n)) · (3/4)^{ G(α_1,α_2,α_3) · n },

where

    G(α_1, α_2, α_3) = 1 / ( 1 + ( 3 − log_{3/4} T(α_1, α_2, α_3) ) / log_{3/4} Z(α_1, α_2, α_3) ).
This lower bound for the success probability is increasing with T(α_1, α_2, α_3) and Z(α_1, α_2, α_3). Here we suggest the following choice of p_1, p_2, p_3, which depends only on α_1:
    p_3 = 0           if 0 ≤ α_1 ≤ 0.5,
    p_3 = 2α_1 − 1    if 0.5 ≤ α_1 ≤ 0.6,
    p_3 = 1/5         otherwise,
                                           and   p_1 = 4p_3/3.
This determines p_2 = (1 − 3p_1 − p_3)/3. Since 3p_1 + p_3 ≤ 1, this is a valid choice. Note that our choice is not optimal. First of all, it is possible to take α_2 and/or α_3 into account. Also, the choice of the intervals is rather arbitrary; we chose them so that the calculation becomes a bit simpler. One may consider better choices; but since the gain in the success probability is very minor, we leave such an analysis to the interested reader.
By setting the parameters as described above, we have T_2(p_1, p_3) = T_3(p_1, p_3), and T(α_1, α_2, α_3) is computed as follows.

                                     0 ≤ α_1 ≤ 0.5               0.5 ≤ α_1 ≤ 0.6                       0.6 ≤ α_1 ≤ 1
    T_1(p_1, p_3)                    3/8                         (3/4)·α_1                             9/20
    T_2(p_1, p_3) = T_3(p_1, p_3)    1/2                         1 − α_1                               2/5
    T(α_1, α_2, α_3)                 (3/8)^{α_1}·(1/2)^{1−α_1}   ((3/4)·α_1)^{α_1}·(1−α_1)^{1−α_1}     (9/20)^{α_1}·(2/5)^{1−α_1}
In order to make Z(α_1, α_2, α_3) large, we choose, for a given α_1, the parameters α_2 = (3/4)·(1 − α_1) and α_3 = (1 − α_1)/4. (This choice can also be shown to be optimal among all possible choices for α_2 and α_3 such that α_1 + α_2 + α_3 = 1.) We obtain

    Z(α_1, α_2, α_3) = (α_1/3)^{α_1} · ((1 − α_1)/4)^{1−α_1},

which achieves its minimum value 1/7 when α_1 = 3/7.
For the case 0 ≤ α_1 ≤ 0.5, we can now argue as follows: Since T(α_1, α_2, α_3) = (3/8)^{α_1}·(1/2)^{1−α_1} is decreasing with α_1, the minimum of T(α_1, α_2, α_3) on the interval [0, 0.5] is achieved for α_1 = 0.5; thus, the value of log_{3/4} T(α_1, α_2, α_3) on this interval is upper-bounded by 2.90943. Hence, for 0 ≤ α_1 ≤ 0.5, the value of G(α_1, α_2, α_3), which determines the expected running time of our algorithm, is bounded by 0.986788, implying P_1 ≥ (1/p(n)) · 1.3283^{−n}.
The case 0.6 ≤ α_1 ≤ 1 can be analyzed similarly. Since T(α_1, α_2, α_3) = (9/20)^{α_1}·(2/5)^{1−α_1} is increasing with α_1, the minimum of T(α_1, α_2, α_3) on [0.6, 1] is achieved for α_1 = 0.6, and it is 0.42929. Hence, we have G(α_1, α_2, α_3) ≤ 0.99085, implying P_1 ≥ (1/p(n)) · 1.32983^{−n}.
Only for the case 0.5 ≤ α_1 ≤ 0.6 do we have to use the exact formulas for T as well as for Z. By evaluating T and Z numerically on the interval 0.5 ≤ α_1 ≤ 0.6 (or by some tedious calculus), one can show that the minimum value of P_1 is attained for α_1 ≈ 0.5703, where P_1 ≥ (1/p(n)) · 1.330193^{−n}. Therefore, we have the following theorem.
Theorem 6. For any satisfiable formula F on n variables, the execution of IndR(q_1, q_2, q_3) and RW-RS(p_1, p_2, p_3) with the parameters suggested above has a success probability of at least Ω(1.330193^{−n}).
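The numerical claims of the preceding case analysis can be reproduced by a simple grid search instead of calculus; a sketch, with T and Z as defined above:

from math import log

def T(a1):          # T(alpha_1, alpha_2, alpha_3) under the above parameter choice
    if a1 <= 0.5:
        return (3 / 8) ** a1 * (1 / 2) ** (1 - a1)
    if a1 <= 0.6:
        return (3 / 4 * a1) ** a1 * (1 - a1) ** (1 - a1)
    return (9 / 20) ** a1 * (2 / 5) ** (1 - a1)

def Z(a1):          # Z with alpha_2 = 3(1 - alpha_1)/4 and alpha_3 = (1 - alpha_1)/4
    return (a1 / 3) ** a1 * ((1 - a1) / 4) ** (1 - a1)

def log34(x):
    return log(x) / log(3 / 4)

def G(a1):          # exponent in P_1 = (3/4)^{G n} / p(n)
    return 1 / (1 + (3 - log34(T(a1))) / log34(Z(a1)))

a1 = max((i / 10000 for i in range(1, 10000)), key=G)
print(a1, (4 / 3) ** G(a1))   # ~0.5703 and ~1.330193, the worst-case base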
We finally remark that in special cases the above analysis shows that the running time is even better. Assume, for example, that the input formula F has the property that there is an assignment a such that F(a) = 1 as well as F(ā) = 1, where ā denotes the complementary assignment. For one of these two satisfying assignments, the corresponding parameter α_1 is not larger than 0.5. Hence, our algorithm has a running time bounded by O(1.3284^n), as shown in the proof above.
References
[A65] R. Ash, Information Theory, Interscience Publishers, New York, 1965.

[APT79] B. Aspvall, M.F. Plass, and R.E. Tarjan, A linear-time algorithm for testing the truth of certain quantified Boolean formulas, Information Processing Letters, 8(3), 121–123, 1979.

[BS03] S. Baumer and R. Schuler, Improving a probabilistic 3-SAT algorithm by dynamic search and independent clause pairs, in Theory and Applications of Satisfiability Testing, 6th International Conference, SAT 2003, Selected Revised Papers, Lecture Notes in Comp. Sci. 2919, 150–161, 2004.

[DGHS00] E. Dantsin, A. Goerdt, E.A. Hirsch, and U. Schöning, Deterministic algorithms for k-SAT based on covering codes and local search, in Proc. of the 27th International Colloquium on Automata, Languages and Programming (ICALP00), Lecture Notes in Comp. Sci. 1853, 236–247, 2000.

[Hir00a] E.A. Hirsch, New worst-case upper bounds for SAT, Journal of Automated Reasoning, 24(4), 397–420, 2000.

[HSSW02] T. Hofmeister, U. Schöning, R. Schuler, and O. Watanabe, A probabilistic 3-SAT algorithm further improved, in Proc. of the 19th Sympos. on Theoretical Aspects of Computer Science (STACS02), Lecture Notes in Comp. Sci. 2285, 193–202, 2002.

[IT03] K. Iwama and S. Tamaki, Improved upper bounds for 3-SAT, Electronic Colloquium on Computational Complexity, Report No. 53, 2003.

[Kul99] O. Kullmann, New methods for 3-SAT decision and worst-case analysis, Theoretical Computer Science, 223(1/2), 1–72, 1999.

[MS85] B. Monien and E. Speckenmeyer, Solving satisfiability in less than 2^n steps, Discrete Applied Mathematics, 10, 287–295, 1985.

[Pap91] C.H. Papadimitriou, On selecting a satisfying truth assignment, in Proc. of the 32nd Ann. IEEE Sympos. on Foundations of Comp. Sci. (FOCS91), IEEE, 163–169, 1991.

[PPSZ98] R. Paturi, P. Pudlák, M.E. Saks, and F. Zane, An improved exponential-time algorithm for k-SAT, in Proc. of the 39th Ann. IEEE Sympos. on Foundations of Comp. Sci. (FOCS98), IEEE, 628–637, 1998.

[Rol03] D. Rolf, 3-SAT ∈ RTIME(O(1.32793^n)): Improving randomized local search by initializing strings of 3-clauses, Electronic Colloquium on Computational Complexity, Report No. 54, 2003.

[Sch99] U. Schöning, A probabilistic algorithm for k-SAT and constraint satisfaction problems, in Proc. of the 40th Ann. IEEE Sympos. on Foundations of Comp. Sci. (FOCS99), IEEE, 410–414, 1999.

[Sch02] U. Schöning, A probabilistic algorithm for k-SAT based on limited local search and restart, Algorithmica, 32(4), 615–623, 2002.