Here’s the idea. Take a set of \(n\) vertices and split them randomly into sets \(V_1,\dots,V_k\) by placing each vertex in \(V_i\) with probability \(a_i/A\text{,}\) independently of all other vertices. Then, between \(V_i\) and \(V_j\text{,}\) we put a random set of edges by including each edge with probability \(W(i,j)/c\text{,}\) independently of all other edges. Note that, since \(W\) is symmetric, this is the same as adding these edges with probability \(W(j,i)/c\text{.}\) Let \(H_n\) be the resulting graph.
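To make the construction concrete, here is a minimal simulation sketch in Python. The helper name sample_H and its interface are ours, and we assume (as in the setup above) that \(a_1,\dots,a_k\) are positive weights with \(A=a_1+\dots+a_k\) and that \(W\) is symmetric with \(0\leq W(i,j)\leq c\text{,}\) so that all of the probabilities below lie in \([0,1]\text{.}\)

```python
import random

def sample_H(n, a, W, c, seed=None):
    """Sample the random graph H_n described above (hypothetical helper).

    a : list of k positive vertex weights, with A = sum(a)
    W : symmetric k x k matrix (list of lists) of edge weights, 0 <= W[i][j] <= c
    Returns (parts, edges): parts[v] is the index i of the class V_i containing v,
    and edges is a set of frozensets {u, v}.
    """
    rng = random.Random(seed)
    k = len(a)
    # Place each vertex into V_i with probability a_i / A, independently.
    parts = [rng.choices(range(k), weights=a)[0] for _ in range(n)]
    # Include each edge with probability W(i, j) / c, independently,
    # where i and j are the classes of its endpoints.
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < W[parts[u]][parts[v]] / c:
                edges.add(frozenset((u, v)))
    return parts, edges
```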
Now, let \(G\) be an arbitrary graph. Our goal is to show that, as \(n\) tends to infinity, \(\frac{\hom(G,H_n)}{n^{|V(G)|}}\) tends to \(\frac{\hom(G,(a,W))}{A^{|V(G)|}c^{|E(G)|}}\) with probability one. Let us first compute the expected value of the left-hand side. Given a function \(f:V(G)\to V(H_n)\text{,}\) let \(1_f\) be the “indicator random variable” which is equal to \(1\) if \(f\) is a homomorphism and \(0\) otherwise. Then
\begin{equation*}
\hom(G,H_n)=\sum_{f:V(G)\to V(H_n)}1_f
\end{equation*}
and so, by linearity of expectation,
\begin{equation*}
\mathbb{E}(\hom(G,H_n))=\sum_{f:V(G)\to V(H_n)}\mathbb{E}(1_f) = \sum_{f:V(G)\to V(H_n)}\mathbb{P}(f\text{ is a homomorphism}).
\end{equation*}
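For small graphs, \(\hom(G,H_n)\) can be computed directly by running over every map \(f\) and summing the indicators \(1_f\text{,}\) exactly as in the formula above. Here is a brute-force sketch in Python (our own helper; it is only feasible when \(|V(H_n)|^{|V(G)|}\) is small):

```python
from itertools import product

def hom_count(G_vertices, G_edges, H_vertices, H_edges):
    """Compute hom(G, H) as the sum of the indicators 1_f over all maps f,
    where 1_f = 1 exactly when f sends every edge of G to an edge of H."""
    H_edge_set = {frozenset(e) for e in H_edges}
    count = 0
    for image in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if all(frozenset((f[u], f[v])) in H_edge_set for (u, v) in G_edges):
            count += 1  # this map is a homomorphism, so 1_f = 1
    return count
```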
If we let \(\varphi\) be a uniformly random function from \(V(G)\) to \(V(H_n)\) chosen independently of all of the random choices made so far, then, for any fixed \(f:V(G)\to V(H_n)\text{,}\) we have \(\mathbb{P}(\varphi=f)=\frac{1}{n^{|V(G)|}}\text{.}\) Therefore, by linearity of expectation,
\begin{equation*}
\mathbb{E}\left(\frac{\hom(G,H_n)}{n^{|V(G)|}}\right)= \sum_{f:V(G)\to V(H_n)}\frac{1}{n^{|V(G)|}}\mathbb{P}(f\text{ is a homomorphism})
\end{equation*}
\begin{equation*}
=\sum_{f:V(G)\to V(H_n)}\mathbb{P}(\varphi=f)\mathbb{P}(f\text{ is a homomorphism})=\mathbb{P}(\varphi\text{ is a homomorphism})
\end{equation*}
where the last equality uses the fact that \(\varphi\) is chosen independently of \(H_n\text{,}\) so that, for each \(f\text{,}\) the events \(\{\varphi=f\}\) and \(\{f\text{ is a homomorphism}\}\) are independent. Now, let’s make some observations about \(\mathbb{P}(\varphi\text{ is a homomorphism})\text{.}\) We can lower bound it as follows using standard properties of conditional probability:
\begin{equation*}
\mathbb{P}(\varphi\text{ is a homomorphism}) \geq \mathbb{P}(\varphi\text{ is an injective homomorphism}) = \mathbb{P}(\varphi\text{ is a homomorphism}\mid \varphi\text{ is injective})\mathbb{P}(\varphi\text{ is injective})
\end{equation*}
\begin{equation*}
= \mathbb{P}(\varphi\text{ is a homomorphism}\mid \varphi\text{ is injective})(1-\mathbb{P}(\varphi\text{ is non-injective})) \geq \mathbb{P}(\varphi\text{ is a homomorphism}\mid \varphi\text{ is injective}) - \mathbb{P}(\varphi\text{ is non-injective}).
\end{equation*}
Similarly, we get the following upper bound:
\begin{equation*}
\mathbb{P}(\varphi\text{ is a homomorphism})\leq \mathbb{P}(\varphi\text{ is an injective homomorphism})+\mathbb{P}(\varphi\text{ is not injective})\leq \mathbb{P}(\varphi\text{ is a homomorphism}\mid \varphi\text{ is injective}) + \mathbb{P}(\varphi\text{ is non-injective}).
\end{equation*}
The probability that \(\varphi\) is not injective is \(O(1/n)\text{;}\) do you see why? (Hint: for any two distinct vertices of \(G\text{,}\) the probability that \(\varphi\) maps them to the same vertex is exactly \(1/n\text{,}\) so a union bound over the \(\binom{|V(G)|}{2}\) pairs gives the claim.) So,
\begin{equation*}
\mathbb{E}\left(\frac{\hom(G,H_n)}{n^{|V(G)|}}\right) = \mathbb{P}(\varphi\text{ is a homomorphism}\mid\varphi\text{ is injective}) \pm O(1/n).
\end{equation*}
Now, \(\mathbb{P}(\varphi\text{ is a homomorphism}\mid\varphi\text{ is injective})\approx \frac{\hom(G,(a,W))}{A^{|V(G)|}c^{|E(G)|}}\text{.}\) Verifying this requires some thought about how the quantity \(\hom(G,(a,W))\) is defined. Putting this together, we should get that
\begin{equation*}
\mathbb{E}(\hom(G,H_n)) = \frac{n^{|V(G)|}\hom(G,(a,W))}{A^{|V(G)|}c^{|E(G)|}} \pm O(n^{|V(G)|-1}).
\end{equation*}
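As a quick numerical sanity check of this formula, one can compare the empirical density \(\hom(G,H_n)/n^{|V(G)|}\) on a sampled graph against the predicted limit, reusing sample_H and hom_count from the sketches above. In the sketch below we assume the standard weighted homomorphism count \(\hom(G,(a,W))=\sum_{f:V(G)\to\{1,\dots,k\}}\prod_{v\in V(G)}a_{f(v)}\prod_{uv\in E(G)}W(f(u),f(v))\text{;}\) if the definition used elsewhere differs, the helper should be adjusted to match.

```python
from itertools import product

def hom_weighted(G_vertices, G_edges, a, W):
    """hom(G, (a, W)) under the assumed definition: sum over all maps f into
    {0, ..., k-1} of prod_v a[f(v)] * prod_{uv in E(G)} W[f(u)][f(v)]."""
    k = len(a)
    total = 0.0
    for image in product(range(k), repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        weight = 1.0
        for v in G_vertices:
            weight *= a[f[v]]
        for (u, v) in G_edges:
            weight *= W[f[u]][f[v]]
        total += weight
    return total

# Hypothetical example: G a triangle, (a, W) a weighted graph on two vertices.
G_vertices, G_edges = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
a, W, c, n = [1.0, 2.0], [[1.0, 3.0], [3.0, 2.0]], 4.0, 60
A = sum(a)
predicted = hom_weighted(G_vertices, G_edges, a, W) / (A ** 3 * c ** 3)
parts, edges = sample_H(n, a, W, c, seed=0)
empirical = hom_count(G_vertices, G_edges, list(range(n)), edges) / n ** 3
print(predicted, empirical)  # should be in the same ballpark; closer as n grows
```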
After this, the last step is to use a “concentration inequality” to get that
\(\hom(G,H_n)\) is very close to its expected value with probability tending to 1 as
\(n\to\infty\text{.}\) This is a very standard fact for anyone who has seen many applications of the “probabilistic method” but, since not every reader will have this background, let us give the details.
Claim 8.1.7.
For each \(n\text{,}\)
\begin{equation*}
\mathbb{P}(|\hom(G,H_n) - \mathbb{E}(\hom(G,H_n))| > \log(n)\cdot n^{|V(G)|-1})\leq \frac{1}{n^2}.
\end{equation*}
Proof.
Let \(e_1,\dots,e_{\binom{n}{2}}\) be the edges of the complete graph with vertex set \(V(H_n)\) listed in an arbitrary order. Imagine we start with a graph with vertex set \(V(H_n)\) and no edges and, for each \(i\) between \(1\) and \(\binom{n}{2}\text{,}\) one by one, we “reveal” whether \(e_i\in E(H_n)\) or \(e_i\notin E(H_n)\text{.}\) We can think of this as revealing the value of the indicator random variables \(1_{e_1\in E(H_n)},1_{e_2\in E(H_n)},\dots\) one by one.
Now, define a sequence of random variables
\(X_0,\dots,X_{\binom{n}{2}}\) as follows. First, we define
\(X_0 = \mathbb{E}(\hom(G,H_n))\text{.}\) We think of
\(X_0\) as representing our “best guess” of the value of
\(\hom(G,H_n)\) before we have revealed any edges at all. Now, reveal the value of
\(1_{e_1\in E(H_n)}\) and let
\(X_1\) be the expected value of
\(\hom(G,H_n)\) given the value of
\(1_{e_1\in E(H_n)}\text{.}\) That is, \(X_1\) is the average of \(\hom(G,H_n)\) over all choices of the indicator variables of the edges other than \(e_1\text{,}\) with \(1_{e_1\in E(H_n)}\) fixed at its revealed value. We continue in this way; that is,
\(X_2\) is the expectation of
\(\hom(G,H_n)\) after revealing the first two edges, and so on. Then
\(X_{\binom{n}{2}}\) is simply equal to
\(\hom(G,H_n)\) (because all of the edges have been revealed and there is no more “randomness” remaining). The sequence
\(X_0,\dots,X_{\binom{n}{2}}\) forms what is known as a “martingale,” which just means that if we take
\(X_k\) and “forget” the outcome of the edge
\(e_k\) by averaging out over all possibilities of
\(1_{e_k\in E(H_n)}\text{,}\) then we just end up with the random variable
\(X_{k-1}\text{.}\)
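Here is a toy illustration of this edge-exposure martingale (our own sketch, with hypothetical names). For simplicity it treats the edge indicators as independent with given probabilities, which in our construction corresponds to conditioning on the classes \(V_1,\dots,V_k\text{;}\) it computes each \(X_k\) exactly by averaging over all completions of the unrevealed edges, so it is only feasible for very small \(n\text{.}\)

```python
from itertools import product

def doob_martingale(n, p, G_vertices, G_edges, outcome):
    """Edge-exposure martingale X_0, ..., X_{n choose 2} for hom(G, .).

    p[e]       : probability that edge e = (u, v) is present (independently)
    outcome[e] : the revealed value (0 or 1) of the indicator of e
    Feasible only for very small n, since it enumerates everything exactly.
    """
    all_edges = [(u, v) for u in range(n) for v in range(u + 1, n)]

    def hom(bits):
        # hom(G, H) where H has exactly the edges of all_edges whose bit is 1.
        edge_set = {frozenset(e) for e, b in zip(all_edges, bits) if b}
        count = 0
        for image in product(range(n), repeat=len(G_vertices)):
            f = dict(zip(G_vertices, image))
            if all(frozenset((f[u], f[v])) in edge_set for (u, v) in G_edges):
                count += 1
        return count

    X = []
    for k in range(len(all_edges) + 1):
        revealed = [outcome[e] for e in all_edges[:k]]
        value = 0.0
        # X_k: average hom(G, .) over all completions of the unrevealed edges,
        # weighted by their probabilities, with the first k edges fixed.
        for completion in product((0, 1), repeat=len(all_edges) - k):
            prob = 1.0
            for e, b in zip(all_edges[k:], completion):
                prob *= p[e] if b else 1 - p[e]
            value += prob * hom(revealed + list(completion))
        X.append(value)
    return X  # X[0] = E[hom(G, H)], X[-1] = hom(G, revealed graph)

# Toy usage: n = 4, G a triangle, every edge present with probability 1/2,
# and the revealed graph happens to be the complete graph K_4.
n, G_vertices, G_edges = 4, [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
all_edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
X = doob_martingale(n, {e: 0.5 for e in all_edges},
                    G_vertices, G_edges, {e: 1 for e in all_edges})
# X starts at E[hom] = 3, ends at hom(triangle, K_4) = 24, and consecutive
# values change by a controlled amount -- exactly what Azuma's Inequality exploits.
```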
Theorem 8.1.8. Azuma’s Inequality.
Suppose that \(m> 0\) and \(X_0,\dots,X_N\) is a martingale such that \(|X_k-X_{k-1}|\leq m\) with probability 1 for all \(1\leq k\leq N\text{.}\) Then, for all \(t>0\text{,}\)
\begin{equation*}
\mathbb{P}(|X_N-X_0|\geq t) \leq 2\exp\left(\frac{-t^2}{2Nm^2}\right).
\end{equation*}
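For reference, the right-hand side is easy to evaluate; here is a tiny helper (our own, hypothetical) together with the shape of the parameters we are about to plug in:

```python
import math

def azuma_bound(N, m, t):
    """Right-hand side of Azuma's Inequality: 2 * exp(-t^2 / (2 * N * m^2))."""
    return 2.0 * math.exp(-t ** 2 / (2.0 * N * m ** 2))

# In the application below, N = n*(n-1)/2, m is of order n^(v-2) and
# t = log(n) * n^(v-1), where v = |V(G)|; the exponent then has order
# -log(n)^2, so the bound decays faster than any fixed power of 1/n.
# A hypothetical instance with v = 3 and the constant in m taken to be 1:
n, v = 1000, 3
N, m, t = n * (n - 1) // 2, n ** (v - 2), math.log(n) * n ** (v - 1)
print(azuma_bound(N, m, t))  # about 3.6e-21, far smaller than 1/n^2 = 1e-6
```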
Note that, in the martingale that we have defined, we have \(|X_k-X_{k-1}|=O(n^{|V(G)|-2})\) for all \(1\leq k\leq \binom{n}{2}\text{.}\) This is because changing whether a single edge is present or absent can change the number of homomorphisms from \(G\) to \(H_n\) by at most \(O(n^{|V(G)|-2})\text{:}\) any homomorphism affected by that edge must map some edge of \(G\) onto it, which fixes the images of two of the \(|V(G)|\) vertices. Therefore, revealing an edge can change the conditional expectation by at most this amount as well. So, by Azuma’s Inequality,
\begin{equation*}
\mathbb{P}(|\hom(G,H_n) - \mathbb{E}(\hom(G,H_n))|>\log(n)\cdot n^{|V(G)|-1}) = \mathbb{P}(|X_{\binom{n}{2}}-X_0| >\log(n)\cdot n^{|V(G)|-1}) \leq 2\exp\left(-\Omega\left(\frac{\log^2(n)\cdot n^{2|V(G)|-2}}{2\binom{n}{2}n^{2|V(G)|-4}}\right)\right)
\end{equation*}
\begin{equation*}
\leq 2\exp(-\Omega(\log^2(n))) \leq \frac{1}{n^2}
\end{equation*}
where the last inequality holds for large enough \(n\text{.}\)
Now, to finish the proof, we note that
\(\sum_{n=1}^\infty \frac{1}{n^2}< \infty\text{.}\) A standard result in probability is the “Borel-Cantelli Lemma” which says that, if the sum of the probabilities of countably many events is finite, then the probability that infinitely many of the events hold is zero. So, combining this with the claim, we get that, with probability one, the event
\begin{equation*}
\{|\hom(G,H_n) - \mathbb{E}(\hom(G,H_n))| \leq \log(n)\cdot n^{|V(G)|-1}\}
\end{equation*}
holds for all but finitely many \(n\text{.}\) So, putting everything together, we get that
\begin{equation*}
\lim_{n\to\infty}\frac{\hom(G,H_n)}{n^{|V(G)|}}=\frac{\hom(G,(a,W))}{A^{|V(G)|}c^{|E(G)|}}
\end{equation*}
with probability one.