MODULE 9: Randomized Algorithms
Recitation 8
1 Review
  • A randomized algorithm is an algorithm that has access to a random number generator. In this class we will allow randomized algorithms to call RandInt(\(n\)) and Bernoulli(\(p\)).

  • Here are two interesting types of randomized algorithms:

    • An algorithm \(A\) is a \(T(n)\)-time Las Vegas algorithm if

      • for every input \(x \in \Sigma^*\), \(A(x)\) returns the right answer with probability \(1\), and

      • for every input \(x \in \Sigma^*\), \(\mathbb{E}[\text{number of steps $A(x)$ takes}] \le T(|x|)\).

    • An algorithm \(A\) is a \(T(n)\)-time Monte Carlo algorithm with error probability \(\varepsilon\) if

      • for every input \(x \in \Sigma^*\), \(A(x)\) gives the wrong answer with probability at most \(\varepsilon\), and

      • for every input \(x \in \Sigma^*\), \(A(x)\) has a worst-case running-time of at most \(T(|x|)\).

  • Note that above, we require the correctness and running-time guarantees to hold for every input \(x\); the probabilities are over the algorithm’s internal randomness only. A small Python sketch of the two random primitives follows.
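For concreteness, here is a minimal Python sketch of the two primitives. The handout does not pin down their exact semantics, so the sketch assumes the standard ones: RandInt(\(n\)) returns a uniformly random integer in \(\{1, \ldots, n\}\), and Bernoulli(\(p\)) returns \(1\) with probability \(p\) and \(0\) otherwise. Python’s `random` module stands in for the random number generator.

```python
import random

def RandInt(n):
    """Return a uniformly random integer in {1, 2, ..., n}."""
    return random.randint(1, n)

def Bernoulli(p):
    """Return 1 with probability p, and 0 with probability 1 - p."""
    return 1 if random.random() < p else 0
```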

2 What happens in Las Vegas doesn’t stay in Monte Carlo

The expected number of comparisons that the Quicksort algorithm makes is at most \(2n \ln n\) (which you can cite without proof — you might see a proof of this fact if you take 15-210). Describe how to convert this Las Vegas algorithm into a Monte Carlo algorithm whose worst-case number of comparisons is \(1000n \ln n\). Give an upper bound on the error probability of the Monte Carlo algorithm.

Solution

First attempt: Run the Las Vegas algorithm, but if it reaches \(1000 n\ln n\) comparisons, stop the algorithm and declare failure. Let \(\boldsymbol{X}\) be a random variable for the number of comparisons performed by a run of the Las Vegas algorithm. Our algorithm fails only if the Las Vegas algorithm tries to perform \(\ge 1000 n \ln n\) comparisons. By Markov’s inequality, this occurs with probability \[\Pr[\boldsymbol{X} \ge 1000 n \ln n] \le \frac{\mathbb{E}[\boldsymbol{X}]}{1000 n \ln n} \le \frac{2 n \ln n}{1000 n \ln n} = \frac{1}{500}.\] This is not a great bound, so let’s try something else.

Second attempt: Now we use boosting to achieve a much better bound on the error probability. Run the Las Vegas algorithm, but if it reaches \(4n \ln n\) comparisons, stop the algorithm and declare failure. By Markov’s inequality, a single run fails with probability at most \(\frac{2n \ln n}{4n \ln n} = 1/2\). We do this independently \(250\) times (so we perform at most \(1000n \ln n\) comparisons in total). If any repetition succeeds, we can give its output and be correct, so the overall algorithm fails only if all \(250\) repetitions fail. The events of failing in these repetitions are independent, so the overall error probability is \((1/2)^{250} < 1/10^{75}\).
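As an illustration, here is a hedged Python sketch of the second attempt. The comparison-counting quicksort and the `TooManyComparisons` exception are scaffolding invented for this sketch, and for simplicity it charges one comparison per element examined in each partition step:

```python
import math
import random

class TooManyComparisons(Exception):
    """Raised when a run exhausts its comparison budget."""
    pass

def budgeted_quicksort(arr, budget):
    """Randomized quicksort that aborts once `budget` comparisons are spent."""
    comparisons = [0]  # mutable counter shared by the recursive calls

    def sort(xs):
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs)
        smaller, equal, larger = [], [], []
        for x in xs:
            # Charge one comparison per element in this partition step.
            comparisons[0] += 1
            if comparisons[0] > budget:
                raise TooManyComparisons()
            if x < pivot:
                smaller.append(x)
            elif x > pivot:
                larger.append(x)
            else:
                equal.append(x)
        return sort(smaller) + equal + sort(larger)

    return sort(list(arr))

def monte_carlo_sort(arr, repetitions=250):
    """Run the budgeted Las Vegas sort up to `repetitions` times.

    Each run gets a budget of 4 n ln n comparisons, so the total is at
    most 1000 n ln n. Returns None (failure) only if every run exceeds
    its budget, which happens with probability at most (1/2)^250.
    """
    n = len(arr)
    budget = math.ceil(4 * n * math.log(n)) if n > 1 else 1
    for _ in range(repetitions):
        try:
            return budgeted_quicksort(arr, budget)
        except TooManyComparisons:
            continue
    return None  # declare failure
```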

3 Atlantic City to Monte Carlo

Suppose you are given a randomized algorithm that solves \(f : \Sigma^* \to \Sigma^*\) in expected time \(T(n)\) and with \(\varepsilon\) probability of error (i.e., the algorithm gambles both with correctness and running time). Show that for any constant \(\varepsilon' > 0\), there is a Monte Carlo algorithm computing \(f\) with running time \(O(T(n))\) and error probability \(\varepsilon + \varepsilon'\).

Solution

Let \(A\) be the algorithm given to us. We construct a Monte Carlo algorithm \(A'\) as follows: run \(A\) for \(T(n)/\varepsilon'\) steps. If it halts, give its answer. If not, declare “failure”. Since \(\varepsilon'\) is a constant, the running time \(T(n)/\varepsilon'\) is \(O(T(n))\). We can fail in two ways: either the simulation of \(A\) halts and gives the wrong answer, or the simulation of \(A\) does not halt in \(T(n)/\varepsilon'\) steps. The first happens with probability at most \(\varepsilon\). The second happens with probability at most \(\varepsilon'\): the expected number of steps is at most \(T(n)\), so by Markov’s inequality, \(\Pr[\text{number of steps} \ge T(n)/\varepsilon'] \le \frac{T(n)}{T(n)/\varepsilon'} = \varepsilon'\). Using the union bound, the error probability is at most \(\varepsilon + \varepsilon'\).

Here is another way of thinking about the analysis. For any fixed input, the original algorithm induces a probability tree representing the computation. At most an \(\varepsilon'\) fraction of the leaves (weighted by their probabilities) are too deep (i.e., the running time exceeds \(T(n)/\varepsilon'\)), and at most an \(\varepsilon\) fraction of the leaves give the wrong answer. By the union bound, at most an \(\varepsilon+\varepsilon'\) fraction of the leaves either give the wrong answer or are too deep. If the set of wrong leaves overlaps the set of deep leaves, so much the better; in the worst case, the two sets are disjoint.

Technicality Alert: There is a small technical issue here. Algorithm \(A'\) needs to be able to compute \(T(|x|)\) from \(x\) in \(O(T(|x|))\) time (such functions \(T(\cdot)\) are called time-constructible). This is indeed the case for most \(T(\cdot)\) that we care about.
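To make the construction concrete, here is a minimal Python sketch. It assumes the given algorithm \(A\) is available as a step-by-step simulation; the generator interface below (`simulator(x)` yields once per step and returns the answer when \(A\) halts) is a hypothetical stand-in for that assumption:

```python
import math

FAILURE = "failure"

def atlantic_city_to_monte_carlo(simulator, x, T, eps_prime):
    """Simulate A on input x for at most T(|x|)/eps' steps.

    simulator(x) is assumed to be a generator that yields None on each
    computation step of A and returns A's answer when A halts.
    """
    budget = math.ceil(T(len(x)) / eps_prime)
    run = simulator(x)
    for _ in range(budget):
        try:
            next(run)
        except StopIteration as halt:
            return halt.value  # A halted within the budget; report its answer
    return FAILURE  # A exceeded the step budget; declare failure
```

As in the analysis above, the `FAILURE` branch is taken with probability at most \(\varepsilon'\) by Markov’s inequality, and a returned answer is wrong with probability at most \(\varepsilon\).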

4 Randomization Meets Approximation

Consider the MAX-3SAT problem where, given a CNF formula in which every clause has exactly 3 literals (with distinct variables), we want to find a truth assignment to the variables in the formula so that we maximize the number of clauses that evaluate to True.

Describe a polynomial-time randomized algorithm with the property that, given a 3CNF formula with \(m\) clauses, it outputs a truth assignment to the variables such that the expected number of clauses that evaluate to True is \(\frac{7}{8}m\).

Solution

The solution here is simply to pick the assignment to the variables uniformly at random: for each variable independently, flip a fair coin. If it is heads, assign the variable True. If it is tails, assign it False.

Suppose we have \(m\) clauses. Now let \(\boldsymbol{X}\) be the number of clauses satisfied. We are interested in computing \(\mathbb{E}[\boldsymbol{X}]\). Define the indicator random variable \(\boldsymbol{X}_i\) to be 1 if the \(i\)’th clause is satisfied, and 0 otherwise. Then \(\boldsymbol{X} = \sum_{i=1}^{m} \boldsymbol{X}_i\) and so \(\mathbb{E}[\boldsymbol{X}] = \mathbb{E}[\sum_{i=1}^{m} \boldsymbol{X}_i] = \sum_{i=1}^{m} \mathbb{E}[\boldsymbol{X}_i]\) where in the last equality, we used linearity of expectation. Note that for all \(i\), \(\mathbb{E}[\boldsymbol{X}_i] = \Pr[\boldsymbol{X}_i = 1] = 7/8\) (the only way a clause is not satisfied is when all the literals are False, which happens with probability \((1/2)^3 = 1/8\)). So \(\mathbb{E}[\boldsymbol{X}] = \frac{7}{8}m\).

Here it is important to note that the \(\boldsymbol{X}_i\)’s need not be independent! In fact, they can be perfectly correlated: consider a case where all the \(m\) clauses are identical to each other. Then if you know that one clause is satisfied, you automatically know that all of the clauses must be satisfied. Yet, this does not change the expected number of satisfied clauses at all. And as you can see in the calculation above, linearity of expectation allows us to treat each clause independently, without worrying about potential dependencies among clauses. We know that \(\mathbb{E}[\boldsymbol{X}_i] = \Pr[\boldsymbol{X}_i = 1] = 7/8\) no matter what is going on with other clauses.
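Here is a short Python sketch of the random-assignment algorithm, together with an empirical check of the \(7/8\) guarantee. The clause encoding (each clause is a tuple of nonzero integers, where \(-v\) denotes the negation of variable \(v\)) is an assumption made for this sketch:

```python
import random

def random_assignment(num_vars):
    """Assign each variable True or False by an independent fair coin flip."""
    return {v: random.random() < 0.5 for v in range(1, num_vars + 1)}

def count_satisfied(clauses, assignment):
    """Count the clauses with at least one true literal.

    Each clause is a tuple of nonzero ints: literal v means variable v,
    and literal -v means the negation of variable v.
    """
    return sum(
        1 for clause in clauses
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause)
    )

# Empirical check: the average over many trials approaches (7/8) * m.
clauses = [(1, 2, 3), (-1, 2, -4), (1, -3, 4)]
trials = 10_000
total = sum(count_satisfied(clauses, random_assignment(4)) for _ in range(trials))
print(total / trials)  # close to (7/8) * 3 = 2.625
```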

5 (Extra) (Brain Teaser) Passive-Aggressive Passengers

Consider a plane with \(n\) seats \(s_1, s_2, \ldots, s_n\). There are \(n\) passengers, \(p_1, p_2, \ldots, p_n\), and they are randomly assigned unique seat numbers. The passengers enter the plane one by one in the order \(p_1, p_2, \ldots, p_n\). The first passenger \(p_1\) does not look at their assigned seat and instead picks a uniformly random seat to sit in. All the other passengers, \(p_2, p_3, \ldots, p_n\), use the following strategy. If the seat assigned to them is available, they sit in that seat. Otherwise they pick a seat uniformly at random among the available seats, and they sit there. What is the probability that the last passenger, \(p_n\), will end up sitting in their assigned seat?

Solution

For ease of presentation, note that the random assignment of seats is a red herring — we can without loss of generality rearrange seat numbers so that \(p_i\) is assigned to \(s_i\).

We take a different perspective on the problem as follows: The passengers \(p_2, p_3, \ldots, p_{n-1}\) use the following strategy. Passenger \(p_i\) goes to their assigned seat \(s_i\). If it is occupied, they kick out the intruder (who will always be \(p_1\)). No matter what, \(p_i\) sits in \(s_i\). Then the intruder picks a uniformly random available seat to sit in. This is the same as the original problem, but the identities are changed around so that \(p_1\) is the only passenger going around picking uniformly random seats.

This perspective makes the situation a little easier to think about. Now we want to find the probability that \(s_n\) is unoccupied when \(p_n\) comes to take a seat.

Notice that it only matters whether \(p_1\) eventually sits in \(s_1\) or in \(s_n\). If \(p_1\) sits elsewhere, they will be kicked out and have to repick their seat until eventually they choose \(s_1\) or \(s_n\). Then \(p_1\) won’t move again until \(p_n\) comes, at which point \(s_n\) is unoccupied iff \(p_1\) sits in \(s_1\). Every time \(p_1\) picks a seat, they have an equal chance of going to \(s_1\) and of going to \(s_n\), so the answer is \(1/2\).
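The answer is counterintuitive enough that a simulation makes a nice sanity check. Here is a hedged Python sketch of the original boarding process, using the WLOG convention above that \(p_i\) is assigned seat \(s_i\):

```python
import random

def last_passenger_gets_seat(n):
    """Simulate one boarding; return True iff p_n ends up in their seat.

    Seats and passengers are 0-indexed, and passenger i is assigned
    seat i (WLOG, per the red-herring observation above).
    """
    occupied = [False] * n
    occupied[random.randrange(n)] = True  # p_1 sits in a uniformly random seat
    for i in range(1, n - 1):  # passengers p_2, ..., p_{n-1}
        if not occupied[i]:
            occupied[i] = True  # their assigned seat is free
        else:
            free = [s for s in range(n) if not occupied[s]]
            occupied[random.choice(free)] = True  # pick a random free seat
    return not occupied[n - 1]  # p_n succeeds iff s_n is still free

trials = 100_000
hits = sum(last_passenger_gets_seat(100) for _ in range(trials))
print(hits / trials)  # should be close to 1/2
```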