partitioner. Here crossover and mutation operators have been used to compute edge
biases, which yield hints for the underlying multilevel graph partitioner. Benlic et
al. [4] provided a multilevel memetic algorithm for balanced graph partitioning.
This approach is able to compute many entries in Walshaw's Benchmark Archive
[23] for the case ε = 0. Very recently an algorithm called PUNCH [8] has been
introduced. This approach is not based on the multilevel principle. However, it
creates a coarse version of the graph based on the notion of natural cuts. Natural
cuts are relatively sparse cuts close to denser areas. They are discovered by finding
minimum cuts between carefully chosen regions of the graph. The authors also
introduced an evolutionary algorithm similar to that of Soper et al. [23], i.e. using a combine
operator that computes edge biases yielding hints for the underlying graph parti-
tioner. Experiments indicate that the algorithm computes very good partitions for
road networks. For instances without a natural structure, however, natural cuts are not very helpful.
4. Karlsruhe Fast Flow Partitioner
The aim of this section is to give an overview of the techniques used in
KaFFPa, which is used by KaFFPaE as a base case partitioner. KaFFPa [22] is
a classical matching based multilevel graph partitioner. Recall that a multilevel
graph partitioner basically has three phases: coarsening, initial partitioning and
uncoarsening.
Coarsening. KaFFPa makes contraction more systematic by separating two
issues: A rating function indicates how much sense it makes to contract an edge
based on local information. A matching algorithm tries to maximize the sum of the
ratings of the contracted edges looking at the global structure of the graph. While
the rating function allows a flexible characterization of what a “good” contracted
graph is, the simple, standard definition of the matching problem makes it possible to reuse
previously developed algorithms for weighted matching. Matchings are contracted
until the graph is "small enough". In [13] we have observed that the rating function

expansion*2({u, v}) := ω({u, v})^2 / (c(u) c(v))

works best among other edge rating functions, so
that this rating function is also used in KaFFPa.
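To make the interplay of rating, matching, and contraction concrete, the following is a minimal sketch, not KaFFPa's actual code: it rates edges with expansion*2, uses a simple greedy matching as a stand-in for the real matching algorithm, and contracts the matched pairs. All function names and the graph representation (node weights c(v), weighted edge triples) are invented for illustration.

```python
# Sketch of one coarsening step: rate edges, match, contract (assumed API).

def expansion_star2(omega, cu, cv):
    # expansion*2({u,v}) = omega({u,v})^2 / (c(u) * c(v))
    return omega * omega / (cu * cv)

def contract_matching(node_weight, edges):
    # Greedy stand-in for the matching algorithm: scan edges by decreasing
    # rating and match each node at most once.
    rated = sorted(edges, key=lambda e: -expansion_star2(
        e[2], node_weight[e[0]], node_weight[e[1]]))
    matched, matching = set(), []
    for u, v, w in rated:
        if u not in matched and v not in matched:
            matched |= {u, v}
            matching.append((u, v))
    # Map every node to its coarse representative.
    rep = {v: v for v in node_weight}
    for u, v in matching:
        rep[v] = u
    # Coarse node weights are sums of the constituents; parallel coarse
    # edges are merged by summing weights, which preserves cut weights.
    coarse_c = {}
    for v, c in node_weight.items():
        coarse_c[rep[v]] = coarse_c.get(rep[v], 0) + c
    coarse_e = {}
    for u, v, w in edges:
        a, b = rep[u], rep[v]
        if a != b:
            key = (min(a, b), max(a, b))
            coarse_e[key] = coarse_e.get(key, 0) + w
    return coarse_c, [(a, b, w) for (a, b), w in coarse_e.items()]
```

On a path 1-2-3-4 with heavy edges (1,2) and (3,4) of weight 10 and a light edge (2,3) of weight 1, the heavy edges get rating 100 and are matched, so the coarse graph has two nodes of weight 2 joined by one edge of weight 1 — the light cut survives contraction, which is exactly the point of the rating.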
KaFFPa employs the Global Path Algorithm (GPA) as a matching algorithm.
It was proposed in [17] as a synthesis of the Greedy algorithm and the Path Grow-
ing Algorithm [10]. This algorithm achieves a half-approximation in the worst
case, but empirically, GPA gives considerably better results than Sorted Heavy
Edge Matching and Greedy (for more details see [13]). GPA scans the edges in or-
der of decreasing weight but rather than immediately building a matching, it first
constructs a collection of paths and even cycles. Afterwards, optimal solutions are
computed for each of these paths and cycles using dynamic programming.
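The dynamic program on a single path can be sketched as follows; this is an illustrative fragment, not GPA itself. A matching on a path may not contain two consecutive edges, so with path edge weights w_1, ..., w_m the optimum satisfies best[i] = max(best[i-1], best[i-2] + w_i). (A cycle can be reduced to two path instances, once excluding and once forcing a fixed edge.)

```python
# Maximum-weight matching on a path via the recurrence
# best[i] = max(best[i-1], best[i-2] + w[i]); O(m) time, O(1) space.
def max_matching_on_path(weights):
    best_prev, best = 0, 0  # best[i-2] and best[i-1]
    for w in weights:
        best_prev, best = best, max(best, best_prev + w)
    return best
```

For example, on a path with edge weights [5, 6, 5] the greedy choice of the middle edge (weight 6) is suboptimal; the dynamic program returns 10 by taking both outer edges.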
Initial Partitioning. The contraction is stopped when the number of remaining
nodes is below the threshold max(60k, n/(60k)). The graph is then small enough
to be partitioned by some initial partitioning algorithm. KaFFPa employs Scotch
as an initial partitioner since it empirically performs better than Metis.
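The stopping threshold is easy to misread, so here is the stated rule as a one-line computation (the function name is ours):

```python
# Coarsening stops once fewer than max(60k, n/(60k)) nodes remain,
# where n is the number of nodes of the input graph and k the number
# of blocks. Integer division is an assumption for illustration.
def coarsening_threshold(n, k):
    return max(60 * k, n // (60 * k))
```

For small graphs the first term dominates, guaranteeing the coarsest graph still has enough nodes per block; for large graphs the second term dominates, bounding the number of coarsening levels.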
Uncoarsening. Recall that the refinement phase iteratively uncontracts the
matchings contracted during the contraction phase. After a matching is uncontracted,
local search based refinement algorithms move nodes across block boundaries
in order to reduce the cut while maintaining the balance constraint. Local
improvement algorithms are usually variants of the FM-algorithm [12]. Our variant