1. INTRODUCTION
The typical way in which this theorem is used to prove that a property Q has
a sharp threshold involves two steps:
The first step is usually easy. For a coarse property Q, the theorem
guarantees the existence of M. A possible explanation for this would be
that M itself has the property Q. In that case, since Q is a monotone
increasing property, it would no longer seem "magical" that adding a copy
of M induces property Q. In other words, if M ∈ Q then Assumption (iii)
in the theorem is a triviality and does not enable one to deduce anything.
Therefore, as a start, one has to rule out this possibility by showing that
a balanced graph with the prescribed density cannot have the property.
The second step is typically more involved. One chooses an appropriate
property, typical of G(n,p), and then shows that a graph G with this
property for which Pr[(G ∪ M*) ∈ Q] ≥ 2α is quite "saturated", i.e., is close
to having property Q. Therefore adding a random copy of G(n, εp) should
induce the property Q with probability much larger than α, contradicting
condition (4) of the theorem.
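Schematically, and with the exact quantifiers and constants left to the formal
statement of the theorem, the contradiction sought in the second step is that
saturation forces
\[
\Pr\bigl[\, G \cup G(n, \varepsilon p) \in Q \,\bigr] \;\gg\; \alpha,
\]
which is incompatible with the upper bound that condition (4) places on this
probability.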
In [8], [1], and [10] one can see variations of this scheme used to prove that
various graph properties have a sharp threshold. Although the basic technique is
similar, each property presents its own difficulties and requires a special approach.
The case of property 72., handled in this paper, has by far been the most difficult
and involved; a key technique in our approach turns out to be regularization.
1.4. Regularity. One of the fundamental tools in asymptotic graph theory is
the well-known regularity lemma of Szemerédi [35] (see also [34]). Indeed, since its
discovery in the 70s, this lemma has been instrumental in the study of the structure
of large graphs. The reader is referred to the excellent survey [24] for a thorough
introduction to the wide range of applications of this result.
In essence, the regularity lemma tells us that any large graph may be decom-
posed into a bounded number of quasi-random, induced bipartite graphs. Thus,
this lemma is a powerful tool for detecting and making transparent the random-like
behavior of large deterministic graphs. What makes the lemma such a powerful
tool is that it reveals a quasi-random structure that enables one to carry out a deep
quantitative analysis.
The precise formulation of the regularity lemma is somewhat technical (see
Section 4.1.1 for details). In this short section, we only discuss some points in
broad terms.
The quasi-random bipartite graphs that Szemerédi's lemma uses in its decom-
position are graphs in which the edges are uniformly distributed. The uniformity
is measured by the ratio of edges to potential edges (pairs), and so this concept
becomes trivial for graphs of vanishing density. To manage sparse graphs, one may
adjust the notion of quasi-randomness by a natural rescaling, and it is a routine
matter to check that the original proof extends to this notion, provided we restrict
ourselves to graphs of vanishing density that do not contain 'dense patches' (see
Section 4.1.2). However, the quasi-random structure that the lemma reveals in this
case is harder to exploit than in the dense case, and one needs to work harder when
applying the lemma to such 'sparse graphs'. Nevertheless, there have been some
successful applications of the lemma in this context (see [20, 23]).
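For orientation, here is the standard notation in sketch form; the precise
definitions used in this paper appear in Sections 4.1.1 and 4.1.2 and may differ
in minor details. For disjoint vertex sets A, B, the (dense) edge density and its
rescaled, sparse counterpart are
\[
d(A,B) \;=\; \frac{e(A,B)}{|A|\,|B|}, \qquad
d_p(A,B) \;=\; \frac{e(A,B)}{p\,|A|\,|B|},
\]
and a pair (A, B) is called ε-regular if
\[
|d(A',B') - d(A,B)| \;\le\; \varepsilon
\quad \text{for all } A' \subseteq A,\ B' \subseteq B \text{ with }
|A'| \ge \varepsilon|A|,\ |B'| \ge \varepsilon|B|.
\]
When p = o(1), the unrescaled density d(A,B) tends to zero for typical pairs,
which is why the rescaled quantity d_p is used in the sparse setting; the 'no
dense patches' requirement asks, roughly, that e(X,Y) = O(p|X||Y|) for all
sufficiently large disjoint sets X and Y.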