
May 7, 2010, 4:44:29 AM

to CS 510 Randomized Algorithms

Hi class,

Sorry for the delay. Here is the full list of homework questions. These are challenging, so I encourage you to work together, and consult me if you get stuck. You can also post discussion in this Google group, which I will be checking regularly. I need your solution sets by Thursday midnight at the latest.

-Tom

There are 3 problems (in random order):

1) As described in class, do the exploratory assignment in Section 5.8 of the textbook. Moreover, although the book doesn't say to do so, I want you to try to prove the tightest upper AND lower bounds you can for each of the 3 expectations.

2) Recall from class the Metropolis Glauber dynamics on independent sets:

Let lambda < 1 be given, and a graph G = (V,E). Given a set X_{t-1}, choose a random vertex v_t, and

(a) if v_t is in X_{t-1}, then X_t := X_{t-1} minus {v_t};

(b) if v_t is not in X_{t-1} and no neighbor of v_t is in X_{t-1}, then with probability lambda, X_t := X_{t-1} union {v_t};

(c) otherwise, X_t := X_{t-1}.

We proved in class that, when lambda < 1/Delta, where Delta is the maximum degree of G, the mixing time is O(n log(n/epsilon)).

Now, suppose we modify this Markov chain by adding an additional "slide" rule, thus:

Let lambda < 1 be given, and a graph G = (V,E). Given a set X_{t-1}, choose a random vertex v_t, and

(a) if v_t is in X_{t-1}, then X_t := X_{t-1} minus {v_t};

(b) if v_t is not in X_{t-1} and no neighbor of v_t is in X_{t-1}, then with probability lambda, X_t := X_{t-1} union {v_t}; otherwise, X_t := X_{t-1};

(c) if v_t is not in X_{t-1} and exactly one neighbor, w, of v_t is in X_{t-1}, then with probability alpha, let X_t := X_{t-1} union {v_t} minus {w}. Informally, we "slide" one element of the set along the edge {w, v_t};

(d) otherwise, X_t := X_{t-1}.

Using path coupling, analyze the worst-case expected change in Hamming distance after one step of this Markov chain. Set alpha to get the best upper bound on mixing time. Show that, for this chain, the mixing time is O(n log(n)) for twice the range of lambda values.
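The modified dynamics are straightforward to simulate, which can be a useful sanity check before doing the path-coupling analysis. Below is a minimal Python sketch; the function name, the 5-cycle example graph, and the parameter values lam=0.3, alpha=0.5 are all illustrative assumptions, not part of the assignment.

```python
import random

def slide_glauber_step(X, adj, lam, alpha):
    """One step of the modified chain: pick a uniform vertex v, then apply
    rules (a)-(d) from the problem statement."""
    v = random.randrange(len(adj))
    if v in X:                               # (a) always remove v
        return X - {v}
    occupied = [w for w in adj[v] if w in X]
    if not occupied:                         # (b) add v with probability lambda
        return X | {v} if random.random() < lam else X
    if len(occupied) == 1:                   # (c) slide along {w, v} w.p. alpha
        return (X - {occupied[0]}) | {v} if random.random() < alpha else X
    return X                                 # (d) 2+ occupied neighbors: stay

# toy run on a 5-cycle; X remains an independent set at every step
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
X = set()
for _ in range(1000):
    X = slide_glauber_step(X, adj, lam=0.3, alpha=0.5)
```

Note that rule (c) preserves independence: the slid-into vertex v had exactly one occupied neighbor, and that neighbor is removed in the same move.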

3) Let G be a connected regular graph on n vertices, where the degree of every vertex equals d. Prove that the mixing time for lazy random walk on G is, in general, at most polynomial in n and d. Try to pin down the best such general polynomial with the best upper and lower bounds you can.
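To build intuition for problem 3, the spectral gap of the lazy walk can be computed numerically with numpy. The sketch below is only illustrative: the helper name is made up, and the n-cycle (whose lazy-walk gap is known to be Theta(1/n^2)) is just a convenient test graph.

```python
import numpy as np

def lazy_walk_gap(A, d):
    """Spectral gap 1 - lambda_2 of the lazy walk P = (I + A/d)/2 on a
    d-regular graph with symmetric adjacency matrix A."""
    n = A.shape[0]
    P = (np.eye(n) + A / d) / 2.0
    eig = np.sort(np.linalg.eigvalsh(P))  # real spectrum: P is symmetric
    return 1.0 - eig[-2]

# example: the n-cycle (d = 2); here the gap is (1 - cos(2*pi/n))/2
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
gap = lazy_walk_gap(A, d=2)
```

Comparing such numerical gaps across families of regular graphs can suggest what the "best general polynomial" in n and d should look like.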


May 8, 2010, 12:50:57 AM

to cs510-s...@googlegroups.com

Tom,

Vamsi and I are working on the first question. Our conjecture is that you need to sample at least a number of nodes which is of the form 1/2(a - sqrt(a)), and (as in part 3 of the question) if you don't pick any marked nodes, you will not need more than a/2 samples.

So, we think the upper and lower bounds for the three cases can be worked out using this. In the first case, do we apply coupon collector?

Amitabh

--

“I don't know the key to success, but the key to failure is trying to please everybody.”

“Is the glass half full, or half empty? It depends on whether you're pouring, or drinking.”


May 8, 2010, 1:19:57 PM

to CS 510 Randomized Algorithms

Hi,

Well, every positive number is of the form 1/2(a - sqrt(a)) for some a, so your conjecture is going to be true in that sense. But you would need to say something about what a is.

Yes, the first case can be reduced to coupon collector (for both the upper and lower bounds).

Keep at it!

Tom
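Since the first case reduces to coupon collector, it is easy to check the Theta(n log n) answer empirically: the exact mean is n * H_n, where H_n is the n-th harmonic number (about n ln n + 0.577 n). The sizes and seed below are arbitrary illustrative choices.

```python
import random

def coupon_collector_time(n):
    """Number of uniform draws from {0, ..., n-1} until every value appears."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        t += 1
    return t

# Monte Carlo estimate of E[T]; for n = 50, n * H_50 is about 225
random.seed(1)
n, trials = 50, 2000
avg = sum(coupon_collector_time(n) for _ in range(trials)) / trials
```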


May 8, 2010, 1:42:11 PM

to cs510-s...@googlegroups.com

Yep, of course - I guess we were trying to reveal the least information ;) But yes, a is a linear function of n (almost n, actually). If a were n, then wouldn't this be like mean minus standard deviation (sqrt(n))? We have some more explicit numbers - we were wondering if we should put them on the list. In asymptotic terms, we think:

Case i: lower and upper: O(n log n)

Case ii: lower and upper: O(n)

Case iii: lower and upper: O(n)

Any hints on how to approach the expected values?

Amitabh

May 9, 2010, 12:31:06 AM

to CS 510 Randomized Algorithms

Hi again,

Your answers should be more precise than that. Try for the following:

Case 1: Theta(n log n).

Case 2: n - Theta(sqrt(n)).

Case 3: I don't want to spoil this yet, but doing some computer simulation should put you on the right track.

Tom


May 11, 2010, 7:01:30 PM

to cs510-s...@googlegroups.com

Hello Tom,

I think we got the following:

     Lower bound                      Upper bound
1.   Theta(n log n)                   Theta(n log n)
2.   (n+1)/2 - floor(sqrt(n+1)/2)     n - 2
3.   (n+1)/2 - floor(sqrt(n+1)/2)     (n+1)/2

where floor() denotes the floor function.

Let me know if they are right.

Thanks,

~Vamsi.


May 11, 2010, 8:27:50 PM

to cs510-s...@googlegroups.com

I believe that case (3) is very, very different from the other two, and its lower bound is very, very, ..., very close to the upper bound :-)

May 12, 2010, 1:37:43 PM

to cs510-s...@googlegroups.com

Hello Tom,

For the third problem, is the mixing time equivalent to calculating the second eigenvalue of the transition matrix?

Thanks,

~Vamsi.


May 12, 2010, 2:43:14 PM

to cs510-s...@googlegroups.com

Hi Vamsi,

Not quite equivalent, but if we call "gap" the difference between the first and second eigenvalues of the transition matrix, and you can prove (1/gap) <= poly(n,d), then that will imply that the mixing time is also poly(n,d). So, basically, yes.

See, for instance, Theorem 12.3 on p. 155 of this book:

(This is an excellent book, by the way, although it can be a bit dry.)

Tom

May 13, 2010, 5:30:39 PM

to cs510-s...@googlegroups.com

Dear Professor Hayes,

For the 3rd question: is it enough to show that t = log(n/eps)/delta, where delta = 1 - lambda_1 (and depends on n and d) and eps is the given distance?

Thanks,

V.

May 13, 2010, 10:20:23 PM

to CS 510 Randomized Algorithms

It is enough to prove that what you are calling delta is at least 1/poly(n,d). Probably the easiest way to do this is by proving a lower bound on the conductance, which you can read about in the same book I gave the link to earlier.

A more direct approach would be to show that, if mu_t is the distribution at time t, then the 1- (or 2-)norm of (mu_t - pi) is at most (1 - 1/poly(n,d)) times that of (mu_{t-1} - pi). Then you can induct on t to bound (mu_t - pi) explicitly. For this, you would of course make use of the fact that, since the graph is d-regular, pi is uniform.

Hope these suggestions help!

Tom
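This geometric-decay picture is easy to verify numerically. A minimal sketch, tracking ||mu_t - pi||_1 for the lazy walk started at one vertex; the 12-cycle example and the 200-step horizon are arbitrary illustrative assumptions:

```python
import numpy as np

# Lazy random walk on a d-regular graph; watch ||mu_t - pi||_1 shrink.
# Example graph: the 12-cycle (d = 2), chosen only for illustration.
n, d = 12, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
P = (np.eye(n) + A / d) / 2.0
pi = np.full(n, 1.0 / n)   # d-regular graph => uniform stationary distribution
mu = np.zeros(n)
mu[0] = 1.0                # start concentrated at a single vertex
dists = []
for t in range(200):
    dists.append(np.abs(mu - pi).sum())
    mu = mu @ P
# dists is non-increasing, and its tail decays geometrically at rate 1 - gap
```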

