Remarks on Lecture 1

Today we started studying the spectrum of the combinatorial Laplacian on a graph G=(V,E). This is the operator L_G : \mathbb R^V \to \mathbb R^V given by

{\displaystyle L_G f(x) = \mathrm{deg}(x) f(x) - \sum_{y \sim x} f(y)\,.}

We saw that L_G has a spectrum of the form 0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n, and that the number of connected components in G is precisely the multiplicity of \lambda_1.
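As a quick sanity check of that last fact (this is my own NumPy sketch, not something from the lecture; all helper names are mine), one can build L_G = D - A for a graph with two connected components and verify that the eigenvalue 0 has multiplicity 2:

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A for a graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1   # degree contribution
        L[v, v] += 1
        L[u, v] -= 1   # adjacency contribution
        L[v, u] -= 1
    return L

# Two disjoint triangles: components {0,1,2} and {3,4,5}.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
L = laplacian(6, edges)
eigvals = np.linalg.eigvalsh(L)
# The multiplicity of the eigenvalue 0 equals the number of components.
print(np.sum(np.isclose(eigvals, 0)))  # 2
```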

I encourage you to look at the first four lectures of Spielman’s course for a refresher.


The best way to learn a subject is to get your hands dirty. Here are some optional exercises to help build your intuition.

Recall that for a function f : V \to \mathbb R , we define its Rayleigh quotient by

\displaystyle \mathcal R_G(f) = \frac{\sum_{x \sim y} (f(x)-f(y))^2}{\sum_{x \in V} f(x)^2}.

If P_n is the path graph on n vertices, then as we argued today, we have \lambda_k \asymp \left(\frac{k}{n}\right)^2 (for k \geq 2, since \lambda_1 = 0).
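You can also see this numerically. A standard computation (not carried out in the lecture, but easy to verify) gives the closed form \lambda_k = 4\sin^2\left(\frac{(k-1)\pi}{2n}\right) for the path; the sketch below checks it against the numerical spectrum and confirms the two-sided comparison with (k/n)^2:

```python
import numpy as np

n = 50
# Combinatorial Laplacian of the path P_n: degree 1 at endpoints, 2 inside.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
eigvals = np.sort(np.linalg.eigvalsh(L))

# Closed form for the path: lambda_k = 4 sin^2((k-1) pi / (2n)), k = 1..n.
ks = np.arange(1, n + 1)
closed = 4 * np.sin((ks - 1) * np.pi / (2 * n)) ** 2
print(np.allclose(eigvals, closed, atol=1e-8))  # True

# The ratios lambda_k / (k/n)^2 stay bounded above and below for k >= 2.
ratios = eigvals[1:] / (ks[1:] / n) ** 2
print(ratios.min(), ratios.max())
```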

  1. Prove that \lambda_2 \asymp \frac{1}{n^2} by showing that (a) there exists a map f : V \to \mathbb R with f \perp \mathbf{1} and \mathcal R_G(f) \lesssim \frac{1}{n^2}, and (b) for any nonzero map f with f \perp \mathbf{1}, we have \mathcal R_G(f) \gtrsim \frac{1}{n^2}. We did this in class, but it will help to reconstruct the proof on your own.

  2. Try to prove that \lambda_k \lesssim \left(\frac{k}{n}\right)^2 by exhibiting an explicit subspace of test functions
    that achieves this bound. It may help to use the Courant–Fischer min-max principle,
    which says that

    \displaystyle \lambda_k = \min_{S \subseteq \mathbb R^V} \max_{0 \neq f \in S} \mathcal R_G(f),

    where the minimum is over all k-dimensional subspaces S .

    [Hint: Have your subspace be the span of k functions with disjoint supports, where the support of a function f : V \to \mathbb R  is \mathrm{supp}(f) = \{ x \in V : f(x) \neq 0 \} .]
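To make the hint concrete, here is a sketch of the disjoint-support construction on P_n (my own code and helper names, not from the lecture). The tents are separated by a gap vertex, so for any linear combination both the numerator and denominator of the Rayleigh quotient split across the tents, and the quotient of the combination is at most the maximum over the individual tents:

```python
import numpy as np

def rayleigh(f):
    """Rayleigh quotient of f on the path: edge energy over squared norm."""
    return np.sum((f[1:] - f[:-1]) ** 2) / np.sum(f ** 2)

n, k = 120, 6
block = n // k  # each tent lives on block-1 vertices, with one gap vertex
fs = []
for j in range(k):
    f = np.zeros(n)
    m = block - 1
    i = np.arange(m)
    # Tent function supported on the j-th block of consecutive vertices.
    f[j * block : j * block + m] = np.minimum(i + 1, m - i)
    fs.append(f)

# By Courant-Fischer applied to span(f_1, ..., f_k):
#   lambda_k <= max_j R(f_j), and each R(f_j) is on the order of (k/n)^2.
quotients = [rayleigh(f) for f in fs]
print(max(quotients) * (n / k) ** 2)  # bounded by an absolute constant
```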


2 thoughts on “Remarks on Lecture 1”

  1. For (1), is there a high-level way to see the lower bound for \mathcal R(f)? I follow the proof given in class, but somehow it’s a hard proof to remember because it seems very arithmetic.

    Specifically, why would you think to sum over choices for a fixed vertex, and why does it make sense to employ Cauchy–Schwarz?

  2. Let’s normalize so that f : V \to \mathbb R satisfies \frac{1}{n} \sum_{v \in V} f(v)^2 = 1. We also know that \sum_{v \in V} f(v) = 0.

    These two facts together suggest that f has to have a large variance in values. For instance, it is possible to prove from these facts that \frac{1}{n^2} \sum_{u,v \in V} (f(u)-f(v))^2 = 2. Instead of using this sum over pairs, you can just fix a vertex. This is what we did in class, and it captures the same effect.

    Now you want to say that any function which varies a lot (in this sense) must also vary somewhat along edges. Consider the endpoints of the path v_1 and v_n. We know that the edges in between must “pay” for the stretch (f(v_1)-f(v_n))^2. The cheapest way to do this (because we’re looking at a sum of squares) is for every edge difference to have magnitude |f(v_1)-f(v_n)|/(n-1). Cauchy–Schwarz is the formal way to prove that this is “cheapest.”
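That Cauchy–Schwarz step is easy to check numerically (a quick sketch of my own, with made-up names): the endpoint stretch is at most (n-1) times the edge energy, with equality exactly when the function is linear along the path, i.e., when every edge difference is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
f = rng.standard_normal(n)

stretch = (f[0] - f[-1]) ** 2
edge_energy = np.sum((f[1:] - f[:-1]) ** 2)
# Cauchy-Schwarz over the n-1 edges of the path:
#   (f(v_1)-f(v_n))^2 = (sum of edge differences)^2 <= (n-1) * sum of squares.
print(stretch <= (n - 1) * edge_energy)  # True

# Equality exactly when every edge difference is the same (linear f).
g = np.linspace(0.0, 1.0, n)
print(np.isclose((g[0] - g[-1]) ** 2,
                 (n - 1) * np.sum((g[1:] - g[:-1]) ** 2)))  # True
```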
