
Section 9.2 Erdős–Lovász–Spencer Dimension Theorem

Many of the key results in extremal graph theory can be phrased in terms of inequalities in homomorphism densities. For example, the Erdős–Stone Theorem is essentially equivalent to the statement that if \(t(H,G)=0\text{,}\) then \(t(K_2,G)\leq\frac{\chi(H)-2}{\chi(H)-1}\text{.}\) More generally, it is an interesting problem to try to determine the structure of the set of all possible “achievable” vectors of homomorphism densities. That is, for graphs \(H_1,\dots,H_k\text{,}\) what is the “shape” of the set
\begin{equation*} \{(t_1,\dots,t_k): \exists G\text{ such that }(t(H_1,G),\dots,t(H_k,G))= (t_1,\dots,t_k)\}. \end{equation*}
It is quite ambitious to determine the full structure of this set. In fact, very few non-trivial cases are known, and the resolution of the seemingly simple case \(H_1=K_2\) and \(H_2=K_3\) was one of the great triumphs of modern extremal graph theory, known as “Razborov’s Triangle Density Theorem.” A more modest goal is to determine the “dimension” of this set. That is, to what extent can we say that the functions \(t(H_1,G),\dots,t(H_k,G)\) are “independent” of one another? Linear independence is a natural form of “independence” that we will address in this section, but our ultimate goal is to prove the following theorem of Erdős, Lovász and Spencer, which establishes a stronger form of independence for pairwise non-isomorphic connected graphs with more than one vertex.
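The numbered statement of Theorem 9.2.1 does not appear in this excerpt. Based on how it is invoked below, it can be stated along the following lines (a reconstruction, not a verbatim quote):

```latex
% Theorem 9.2.1 (Erdős–Lovász–Spencer; reconstructed statement).
% Let H_1, ..., H_k be pairwise non-isomorphic connected graphs, each with
% at least two vertices. Then there is a point p in [0,1]^k and eps > 0 so
% that every point of the eps-ball around p is a limit of density vectors:
B(p,\varepsilon)\subseteq
\overline{\left\{(t(H_1,G),\dots,t(H_k,G)) : G\text{ a graph}\right\}}.
```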
Note that, since homomorphism densities of graphs are always rational, it is necessary for us to “take limits” in Theorem 9.2.1 in order to achieve every point in the ball. Let us now address linear independence, not only because it is interesting in its own right, but because it is also used in the proof of Theorem 9.2.1.
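The statement of Lemma 9.2.2, whose proof follows, is likewise not reproduced in this excerpt; from the way the proof begins and how the lemma is applied later, it reads along these lines (a reconstruction):

```latex
% Lemma 9.2.2 (reconstructed statement).
% Let F_1, ..., F_m be pairwise non-isomorphic graphs, none of which has an
% isolated vertex. Then there exist graphs G_1, ..., G_m such that
\det\left([t(F_i,G_j)]_{1\leq i,j\leq m}\right)\neq 0 .
```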

Proof.

Let \(n=\max\{|V(F_i)|:1\leq i\leq m\}\) and, for each \(i\text{,}\) let \(F_i'\) be obtained from \(F_i\) by adding \(n-|V(F_i)|\) isolated vertices. Since the graphs \(F_1,\dots,F_m\) are pairwise non-isomorphic and do not have isolated vertices, the graphs \(F_1',\dots,F_m'\) are pairwise non-isomorphic. For any graph \(G\text{,}\) we have
\begin{equation*} t(F_i,G)=t(F_i',G)=\frac{\hom(F_i',G)}{|V(G)|^n}. \end{equation*}
So, for any graphs \(G_1,\dots,G_m\text{,}\) the matrix \([t(F_i,G_j)]_{1\leq i,j\leq m}\) can be obtained from \([\hom(F_i',G_j)]_{1\leq i,j\leq m}\) by scaling the \(j\)th column by \(\frac{1}{|V(G_j)|^n}\text{;}\) hence, one of these matrices is non-singular if and only if the other is. Thus, it suffices to work with the latter matrix.
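As a concrete sanity check (an illustrative computation, not from the text), homomorphism counts and densities for small graphs can be computed by brute force; the helper names below are our own. This verifies that adding an isolated vertex changes \(\hom\) but not \(t\text{:}\)

```python
from itertools import product

def hom(F_edges, nF, G_edges, nG):
    """Count homomorphisms F -> G by trying every map V(F) -> V(G)."""
    adj = [[False] * nG for _ in range(nG)]
    for u, v in G_edges:
        adj[u][v] = adj[v][u] = True
    return sum(
        all(adj[phi[u]][phi[v]] for u, v in F_edges)
        for phi in product(range(nG), repeat=nF)
    )

def t(F_edges, nF, G_edges, nG):
    """Homomorphism density t(F,G) = hom(F,G) / |V(G)|^{|V(F)|}."""
    return hom(F_edges, nF, G_edges, nG) / nG ** nF

# F = K_2, and F' = K_2 plus one isolated vertex; G = the 4-cycle C_4.
K2, K2_plus_iso = ([(0, 1)], 2), ([(0, 1)], 3)
C4 = ([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

Here \(\hom(K_2,C_4)=8\) (one homomorphism per directed edge) while \(\hom(K_2\cup K_1,C_4)=8\cdot 4=32\text{,}\) yet both densities equal \(1/2\text{.}\)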
For \(1\leq i\leq m\) and positive integers \(x_1^i,\dots,x_n^i\text{,}\) we let \(G_i(x_1^i,\dots,x_n^i)\) be a graph obtained from \(F_i'\) by blowing up the \(k\)th vertex into a set of size \(x_k^i\) for all \(1\leq k\leq n\text{.}\) Then \(\det\left([\hom(F_i',G_j(x_1^j,\dots,x_n^j))]_{1\leq i,j\leq m}\right)\) is a polynomial in \(nm\) variables. If we can show that this polynomial is not equal to the zero polynomial, then there must exist a choice for the variables (in fact, infinitely many) such that it is non-zero, which will complete the proof.
So, it all boils down to showing that the determinant is not equal to the zero polynomial. Write \(\hom(F_i',G_j(x_1^j,\dots,x_n^j)) = p_{i,j}(x_1^j,\dots,x_n^j) + q_{i,j}(x_1^j,\dots,x_n^j)\) where \(p_{i,j}(x_1^j,\dots,x_n^j)\) counts the homomorphisms under which every pair of vertices of \(F_i'\) is mapped to vertices of \(G_j(x_1^j,\dots,x_n^j)\) corresponding to different vertices of \(F_j'\text{;}\) i.e. the composition of the homomorphism with the natural projection of \(G_j(x_1^j,\dots,x_n^j)\) onto \(F_j'\) is injective. Say that a monomial is “multilinear” if every variable within it has exponent 1 (e.g. \(x_1x_2\) is multilinear and \(x_1^2x_2\) is not). Then \(p_{i,j}(x_1^j,\dots,x_n^j)\) consists precisely of the multilinear terms of the polynomial \(\hom(F_i',G_j(x_1^j,\dots,x_n^j))\text{.}\) Also, since the variables appearing in different columns are disjoint, the multilinear part of the determinant of \([p_{i,j}(x_1^j,\dots,x_n^j) + q_{i,j}(x_1^j,\dots,x_n^j)]_{1\leq i,j\leq m}\) is precisely the determinant of \([p_{i,j}(x_1^j,\dots,x_n^j)]_{1\leq i,j\leq m}\text{.}\) So, if this latter determinant is non-zero, then our polynomial is not equal to the zero polynomial (since it contains a multilinear term with a non-zero coefficient).

Without loss of generality, we can assume that \(F_1',\dots,F_m'\) are listed so that \(|E(F_1')|\leq |E(F_2')|\leq \cdots\leq |E(F_m')|\text{.}\) Now, if \(i>j\) and \(p_{i,j}(x_1^j,\dots,x_n^j)\neq 0\text{,}\) then composing with the projection yields a homomorphism from \(F_i'\) to \(F_j'\) which is bijective on vertices; such a map sends the \(|E(F_i')|\geq|E(F_j')|\) edges of \(F_i'\) to distinct edges of \(F_j'\text{,}\) forcing it to be an isomorphism, which contradicts the fact that \(F_1',\dots,F_m'\) are pairwise non-isomorphic. So \(p_{i,j}(x_1^j,\dots,x_n^j)=0\) whenever \(i>j\text{.}\) Also, \(p_{i,i}(x_1^i,\dots,x_n^i)>0\) for every \(i\) by considering the identity map. So, \([p_{i,j}(x_1^j,\dots,x_n^j)]_{1\leq i,j\leq m}\) is an upper triangular matrix with no zeros on the diagonal and therefore it has non-zero determinant. This completes the proof.
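The mechanism of the proof can be seen on a tiny instance (an illustrative computation, not from the text): take \(F_1=K_2\) and \(F_2=P_3\text{,}\) pad both to three vertices, blow up, and check that the resulting matrix of homomorphism counts is non-singular. The helper names are our own.

```python
from itertools import product

def hom(F_edges, nF, G_edges, nG):
    """Count homomorphisms F -> G by brute force."""
    adj = [[False] * nG for _ in range(nG)]
    for u, v in G_edges:
        adj[u][v] = adj[v][u] = True
    return sum(
        all(adj[phi[u]][phi[v]] for u, v in F_edges)
        for phi in product(range(nG), repeat=nF)
    )

def blow_up(F_edges, sizes):
    """Replace vertex k of F by an independent set of sizes[k] vertices;
    copies of u and copies of v are adjacent iff uv is an edge of F."""
    offset = [0]
    for s in sizes:
        offset.append(offset[-1] + s)
    edges = [
        (a, b)
        for u, v in F_edges
        for a in range(offset[u], offset[u + 1])
        for b in range(offset[v], offset[v + 1])
    ]
    return edges, offset[-1]

# F_1' = K_2 plus an isolated vertex, F_2' = the path P_3 (both on 3 vertices).
F1p = ([(0, 1)], 3)
F2p = ([(0, 1), (1, 2)], 3)
# Blow-ups playing the role of G_1, G_2 (blow-up sizes chosen arbitrarily).
G1 = blow_up(F1p[0], (2, 1, 1))
G2 = blow_up(F2p[0], (1, 1, 1))
M = [[hom(*F, *G) for G in (G1, G2)] for F in (F1p, F2p)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

For this choice of sizes the matrix is \(\begin{bmatrix}16&12\\6&6\end{bmatrix}\) with determinant \(24\neq 0\text{,}\) as the proof guarantees for some (indeed, most) choices of blow-up sizes.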
To prove Theorem 9.2.1, we will combine Lemma 9.2.2 with the Implicit Function Theorem, stated below.

Definition 9.2.3.

Given a function \(F: \mathbb{R}^{k+m}\to \mathbb{R}^k\) and \(1\leq i\leq k\text{,}\) we let \(F_i:\mathbb{R}^{k+m}\to \mathbb{R}\) be the \(i\)th coordinate of \(F\text{.}\) That is, \(F_1,\dots,F_k\) are the functions such that \(F = (F_1,\dots,F_k)\text{.}\)

Definition 9.2.4.

Given a function \(F: \mathbb{R}^{k+m}\to \mathbb{R}^k\) and \(x_1,\dots,x_k,y_1,\dots,y_m\text{,}\) the Jacobian matrix of \(F\) at \(x_1,\dots,x_k,y_1,\dots,y_m\) is defined by
\begin{equation*} J_F(x_1,\dots,x_k,y_1,\dots,y_m) = \begin{bmatrix}\frac{\partial F_1}{\partial x_1}(x_1,\dots,x_k,y_1,\dots,y_m)&\cdots& \frac{\partial F_1}{\partial x_k}(x_1,\dots,x_k,y_1,\dots,y_m)\\\vdots & \ddots & \vdots\\ \frac{\partial F_k}{\partial x_1}(x_1,\dots,x_k,y_1,\dots,y_m)&\cdots& \frac{\partial F_k}{\partial x_k}(x_1,\dots,x_k,y_1,\dots,y_m)\end{bmatrix}. \end{equation*}
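The numbered statement of the Implicit Function Theorem does not appear in this excerpt. The version presumably intended, in the notation of the two definitions above, is roughly the following standard statement (reconstructed here):

```latex
% Implicit Function Theorem (standard form, reconstructed).
% Let F : R^{k+m} -> R^k be continuously differentiable, suppose that
% F(x^0, y^0) = 0, and suppose that the Jacobian matrix J_F(x^0, y^0) of
% Definition 9.2.4 (partial derivatives in the x-coordinates only) is
% non-singular. Then there exist eps > 0 and a continuously differentiable
% function g defined on B(y^0, eps) with g(y^0) = x^0 such that
F\big(g(y_1,\dots,y_m),\,y_1,\dots,y_m\big)=0
\quad\text{for all }(y_1,\dots,y_m)\in B(y^0,\varepsilon).
```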
We now have everything we need to prove Theorem 9.2.1.

Proof of Theorem 9.2.1.

Let \(H_1,\dots,H_k\) be pairwise non-isomorphic connected graphs, each with at least two vertices, and let \(G_1,\dots, G_k\) be graphs such that \([t(H_i,G_j)]_{1\leq i,j\leq k}\) is non-singular; such graphs are guaranteed to exist by Lemma 9.2.2, since a connected graph with at least two vertices has no isolated vertices. Let \(W\) be the adjacency matrix of \(G_1\sqcup G_2\sqcup\cdots\sqcup G_k\sqcup K_1\text{.}\)
Let \(\frac{1}{4k}\leq \alpha_1,\dots,\alpha_k\leq \frac{1}{2k}\) be variables, let \(\beta=1-\sum_{i=1}^k\alpha_i\) (so that \(\beta\geq 1/2\)) and take \(a=a(\alpha_1,\dots,\alpha_k)\) to be a vector of length \(1+\sum_{i=1}^k|V(G_i)|\) whose first \(|V(G_1)|\) entries are \(\alpha_1\text{,}\) whose next \(|V(G_2)|\) entries are \(\alpha_2\text{,}\) and so on, and whose last entry is \(\beta\text{.}\) Define \(H:[1/4k,1/2k]^k\to\mathbb{R}^k\) by
\begin{equation*} H(\alpha_1,\dots,\alpha_k) = (\hom(H_1,(a,W)),\dots,\hom(H_k,(a,W))). \end{equation*}
We claim that \(\det(J_H(\alpha_1,\dots,\alpha_k))\) is a polynomial in \(\alpha_1,\dots,\alpha_k\) which is not equal to the zero polynomial; this is where the connectivity of \(H_1,\dots,H_k\) is used. Since each \(H_i\) is connected and contains an edge, any homomorphism from \(H_i\) into \((a,W)\) which maps vertices into two different components of \(G_1\sqcup\cdots\sqcup G_k\sqcup K_1\) contributes zero weight, and the component \(K_1\) contributes nothing since it has no edges. Each vertex of \(H_i\) mapped into \(G_j\) contributes a factor of \(\alpha_j\text{,}\) and so
\begin{equation*} \hom(H_i,(a,W)) = \sum_{j=1}^k\alpha_j^{|V(H_i)|}\hom(H_i,G_j). \end{equation*}
Therefore, \(\frac{\partial}{\partial \alpha_j}\hom(H_i,(a,W)) = |V(H_i)|\alpha_j^{|V(H_i)|-1}\hom(H_i,G_j)\text{.}\) Evaluating at \(\alpha_1=\cdots=\alpha_k=\alpha\) and factoring \(|V(H_i)|\alpha^{|V(H_i)|-1}\) out of the \(i\)th row gives
\begin{equation*} \det(J_H(\alpha,\dots,\alpha)) = \left(\prod_{i=1}^k|V(H_i)|\alpha^{|V(H_i)|-1}\right)\det\left([\hom(H_i,G_j)]_{1\leq i,j\leq k}\right), \end{equation*}
which is non-zero for \(\alpha\neq 0\) provided that \([\hom(H_i,G_j)]_{1\leq i,j\leq k}\) is non-singular; the blow-up argument in the proof of Lemma 9.2.2 shows that the graphs \(G_1,\dots,G_k\) can be chosen so that this matrix of homomorphism counts is non-singular. So, there must be a choice of \(\alpha_1,\dots,\alpha_k\) at which the determinant of the Jacobian is non-zero. Fix this choice of \(\alpha_1,\dots,\alpha_k\text{,}\) the corresponding \(\beta\) and \(a=a(\alpha_1,\dots,\alpha_k)\) throughout the rest of the proof.
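The way connectivity enters can be sanity-checked by brute force on a small weighted graph (an illustrative computation; the graphs \(K_2\) and \(K_3\) below are our own arbitrary choices, and we assume the standard weighted homomorphism count \(\hom(H,(a,W))=\sum_\phi \prod_v a_{\phi(v)}\prod_{uv\in E(H)}W_{\phi(u)\phi(v)}\)): for a connected \(H\) with at least one edge, \(\hom(H,(a,W))=\sum_{j}\alpha_j^{|V(H)|}\hom(H,G_j)\text{.}\)

```python
from fractions import Fraction
from itertools import product

def hom_count(H_edges, nH, G_edges, nG):
    """Unweighted homomorphism count H -> G, by brute force."""
    adj = [[False] * nG for _ in range(nG)]
    for u, v in G_edges:
        adj[u][v] = adj[v][u] = True
    return sum(
        all(adj[phi[u]][phi[v]] for u, v in H_edges)
        for phi in product(range(nG), repeat=nH)
    )

def hom_weighted(H_edges, nH, a, W):
    """hom(H,(a,W)): sum over maps phi of prod a[phi(v)] * prod W[phi(u)][phi(v)]."""
    total = Fraction(0)
    for phi in product(range(len(a)), repeat=nH):
        w = Fraction(1)
        for v in range(nH):
            w *= a[phi[v]]
        for u, v in H_edges:
            w *= W[phi[u]][phi[v]]
        total += w
    return total

# G_1 = K_2 and G_2 = K_3; W = adjacency matrix of G_1 ⊔ G_2 ⊔ K_1.
G1 = ([(0, 1)], 2)
G2 = ([(0, 1), (1, 2), (0, 2)], 3)
a1, a2 = Fraction(1, 5), Fraction(1, 10)
a = [a1, a1, a2, a2, a2, 1 - a1 - a2]  # beta = 1 - a1 - a2 on the K_1 vertex
W = [[0] * 6 for _ in range(6)]
for u, v in G1[0]:
    W[u][v] = W[v][u] = 1
for u, v in G2[0]:
    W[u + 2][v + 2] = W[v + 2][u + 2] = 1

# Connected test graphs: H = K_2 and H = K_3.
H2, H3 = ([(0, 1)], 2), ([(0, 1), (1, 2), (0, 2)], 3)
lhs2 = hom_weighted(*H2, a, W)
rhs2 = a1 ** 2 * hom_count(*H2, *G1) + a2 ** 2 * hom_count(*H2, *G2)
lhs3 = hom_weighted(*H3, a, W)
rhs3 = a1 ** 3 * hom_count(*H3, *G1) + a2 ** 3 * hom_count(*H3, *G2)
```

Both sides agree exactly (using rational arithmetic to avoid floating-point error), reflecting that the \(K_1\) component and all cross-component maps contribute zero weight.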
Let \(x_1,\dots,x_k,y_1,\dots,y_k\) be variables and let \(F:\mathbb{R}^{2k}\to\mathbb{R}^k\) be defined by
\begin{equation*} F(x_1,\dots,x_k,y_1,\dots,y_k)=H(\alpha_1+x_1,\dots,\alpha_k+x_k) - (\hom(H_1,(a,W))-y_1,\dots,\hom(H_k,(a,W))-y_k). \end{equation*}
Since \(F(0,\dots,0)=0\) and \(\det(J_F(0,\dots,0))=\det(J_H(\alpha_1,\dots,\alpha_k))\neq 0\text{,}\) the Implicit Function Theorem yields \(\varepsilon>0\) and a continuously differentiable function \(g:(-\varepsilon,\varepsilon)^k\to \mathbb{R}^k\) such that \(g(0)=0\) and \(F(g(y_1,\dots,y_k),y_1,\dots,y_k)=0\) for all \(y_1,\dots,y_k\in(-\varepsilon,\varepsilon)\text{.}\) By continuity of \(g\text{,}\) taking \(\varepsilon\) even smaller if necessary, we can assume that \(\|g(y_1,\dots,y_k)\|_\infty \leq \min\{\alpha_1,\dots,\alpha_k\}\) for all \(y_1,\dots,y_k\in(-\varepsilon,\varepsilon)\text{,}\) so that the perturbed weights \(\alpha_i+g_i(y_1,\dots,y_k)\) remain non-negative. Unwinding the definition of \(F\text{,}\) this says that every point in an \(\varepsilon\)-ball around \((\hom(H_1,(a,W)),\dots,\hom(H_k,(a,W)))\) is achieved as a vector of homomorphism counting functions of weighted graphs. To finish the proof, we observe that all of these points can be achieved as limits of homomorphism densities in graphs by Proposition 8.1.6.